Search Results

Search found 4304 results on 173 pages for 'bytes'.


  • HTML5 <audio> Safari live broadcast vs not

    - by Peter Parente
    I'm attempting to embed an HTML5 audio element pointing to MP3 or OGG data served by a PHP file . When I view the page in Safari, the controls appear, but the UI says "Live Broadcast." When I click play, the audio starts as expected. Once it ends, however, I can't start it playing again by clicking play. Even using the JS API on the audio element and setting currentTime to 0 fails with an index error exception. I suspected the headers from the PHP script were the problem, particularly missing a content length. But that's not the case. The response headers include a proper Content- Length to indicate the audio has finite size. Furthermore, everything works as expected in Firefox 3.5+. I can click play on the audio element multiple times to hear the sound replay. If I remove the PHP script from the equation and serve up a static copy of the MP3 file, everything works fine in Safari. Does this mean Safari is treating audio src URLs with query parameters differently than URLs that don't have them? Anyone have any luck getting this to work? My simple example page is: <!DOCTYPE html> <html> <head></head> <body> <audio controls autobuffer> <source src="say.php?text=this%20is%20a%20test&format=.ogg" /> <source src="say.php?text=this%20is%20a%20test&format=.mp3" /> </audio> </body> </html> HTTP Headers from PHP script: HTTP/1.x 200 OK Date: Sun, 03 Jan 2010 15:39:34 GMT Server: Apache X-Powered-By: PHP/5.2.10 Content-Length: 8993 Keep-Alive: timeout=2, max=98 Connection: Keep-Alive Content-Type: audio/mpeg HTTP Headers from direct file access: HTTP/1.x 200 OK Date: Sun, 03 Jan 2010 20:06:59 GMT Server: Apache Last-Modified: Sun, 03 Jan 2010 03:20:02 GMT Etag: "a404b-c3f-47c3a14937c80" Accept-Ranges: bytes Content-Length: 8993 Keep-Alive: timeout=2, max=100 Connection: Keep-Alive Content-Type: audio/mpeg I tried hard-coding the Accept-Ranges header into the script too, but no luck.

    Read the article

  • Help with strange memory behavior. Looking for leaks both in my brain and in my code.

    - by BastiBechtold
    I spent the last few days trying to find memory leaks in a program we are developing. First of all, I tried using some leak detectors. After fixing a few issues, they do not find any leaks any more. However, I am also monitoring my application using perfmon.exe. Performance Monitor reports that 'Private Bytes' and 'Working Set - Private' are steadily rising when the app is used. To me, this suggests that the program is using more and more memory the longer it runs. Internal resources seem to be stable however, so this sounds like leaking to me. The program is loading a DLL at runtime. I suspect that these leaks or whatever they are occur in that library and get purged when the library is unloaded, hence they won't get picked up by the leak detectors. I used both DevPartner BoundsChecker and Virtual Leak Detector to look for memory leaks. Both supposedly catch leaks in DLLs. Also, the memory consumption is increasing in steps and those steps roughly, but not exactly, coincide with certain GUI actions I perform in the application. If these were errors in our code, they should get triggered every single time the actions are performed and not just most of the time. Whenever I am confronted with so much strangeness, I begin to question my basic assumptions. So I turn to you, who know everything, for suggestions. Is there a flaw in my assumptions? Do you have an idea of how to go about troubleshooting a problem like this? Edit: I am currently using Microsoft Visual C++ (x86) on Windows 7 64. Edit2: I just used IBM Purify to hunt for leaks. First of all, it lists a full 30% of the program as leaked memory. This can not be true. I guess it is identifying the whole DLL as leaked or something like that. However, if I search for new leaks every few actions, it reports leaks that correspond with the size increase reported by Performance Monitor. This could be a lead to a leak. Sadly, I am only using the trial version of Purify, so it won't show me the actual location of those leaks. (These leaks only show up at runtime. When the program exits, there are no leaks whatsoever reported by any tool.)
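    One way to narrow this down from inside the process is to log the same counters Performance Monitor graphs. Below is a hedged C sketch (Windows only, link against psapi.lib); PrivateUsage corresponds roughly to "Private Bytes", and the function name log_memory and its tag argument are just placeholders. Calling it immediately before and after each suspect GUI action, and around the DLL load/unload, ties the growth to a specific code path instead of eyeballing the graph.

    ```c
    /* Hedged sketch: query the process memory counters that perfmon shows.
       Requires linking with psapi.lib. */
    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>

    void log_memory(const char *tag)
    {
        PROCESS_MEMORY_COUNTERS_EX pmc;
        if (GetProcessMemoryInfo(GetCurrentProcess(),
                                 (PROCESS_MEMORY_COUNTERS *)&pmc, sizeof pmc)) {
            printf("%-24s private=%lu KB  working set=%lu KB\n", tag,
                   (unsigned long)(pmc.PrivateUsage / 1024),
                   (unsigned long)(pmc.WorkingSetSize / 1024));
        }
    }
    ```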

    Read the article

  • PHP miniwebserver file download

    - by snikolov
    $httpsock = @socket_create_listen("9090"); if (!$httpsock) { print "Socket creation failed!\n"; exit; } while (1) { $client = socket_accept($httpsock); $input = trim(socket_read ($client, 4096)); $input = explode(" ", $input); $input = $input[1]; $fileinfo = pathinfo($input); switch ($fileinfo['extension']) { default: $mime = "text/html"; } if ($input == "/") { $input = "index.html"; } $input = ".$input"; if (file_exists($input) && is_readable($input)) { echo "Serving $input\n"; $contents = file_get_contents($input); $output = "HTTP/1.0 200 OK\r\nServer: APatchyServer\r\nConnection: close\r\nContent-Type: $mime\r\n\r\n$contents"; } else { //$contents = "The file you requested doesn't exist. Sorry!"; //$output = "HTTP/1.0 404 OBJECT NOT FOUND\r\nServer: BabyHTTP\r\nConnection: close\r\nContent-Type: text/html\r\n\r\n$contents"; function openfile() { $filename = "a.pl"; $file = fopen($filename, 'r'); $filesize = filesize($filename); $buffer = fread($file, $filesize); $array = array("Output"=>$buffer,"filesize"=>$filesize,"filename"=>$filename); return $array; } $send = openfile(); $file = $send['filename']; $filesize = $send['filesize']; $output = 'HTTP/1.0 200 OK\r\n'; $output .= "Content-type: application/octet-stream\r\n"; $output .= 'Content-Disposition: attachment; filename="'.$file.'"\r\n'; $output .= "Content-Length:$filesize\r\n"; $output .= "Accept-Ranges: bytes\r\n"; $output .= "Cache-Control: private\n\n"; $output .= $send['Output']; $output .= "Content-Transfer-Encoding: binary"; $output .= "Connection: Keep-Alive\r\n"; } socket_write($client, $output); socket_close ($client); } socket_close ($httpsock); Hello, I am snikolov. I am creating a mini webserver with PHP and I would like to know how I can send the client a file to download with a browser such as Firefox or Internet Explorer. I am sending the file to the user via sockets, but the client is not getting the filename or the data to download. Can you please help me here? If I declare the file function again I get this error on my server: Fatal error: Cannot redeclare openfile() (previously declared in C:\Users\fsfdsf\sfdsfsdf\httpd.php:31) in C:\Users\hfghfgh\hfghg\httpd.php on line 29. If possible, I would also like to know whether the webserver can show how much bandwidth the client uses via sockets. Perl has the same option as PHP, but it is more hardcore and I don't understand much about Perl; I have even seen a mini webserver that can show how much the client pulls from the server. Would it be possible for you to assist me with this code? I would much appreciate it. Thank you.

    Read the article

  • Problem writing HTML content to Word document in ASP.NET

    - by saran
    I am trying to export the HTML page contents to Word. My HTML display page shows: "What is your favourite color? NA" and "List the top three schools? one National two Devs three PS", plus a button for the click event. The button click event opens MS Word and pastes the page contents into Word. The Word document contains the table property of the HTML design page. This occurs only in Word 2003; in Word 2007 the document contains the text without the table property. How can I remove this table property in Word 2003? I am not able to add screenshots, so I will try to describe it clearly: I am designing the web page in aspx and exporting the web page content with the following code. protected void Button1_Click(object sender, EventArgs e) { Response.ContentEncoding = System.Text.Encoding.UTF7; System.Text.StringBuilder SB = new System.Text.StringBuilder(); System.IO.StringWriter SW = new System.IO.StringWriter(); System.Web.UI.HtmlTextWriter htmlTW = new System.Web.UI.HtmlTextWriter(SW); tbl.RenderControl(htmlTW); string strBody = "<html>" + "<body>" + "<div><b>" + htmlTW.InnerWriter.ToString() + "</b></div>" + "</body>" + "</html>"; Response.AppendHeader("Content-Type", "application/msword"); Response.AppendHeader("Content-disposition", "attachment; filename=" + fileName); Response.ContentEncoding = System.Text.Encoding.UTF7; string fileName1 = "C://Temp/Excel" + DateTime.Now.Millisecond.ToString(); BinaryWriter writer = new BinaryWriter(File.Open(fileName1, FileMode.Create)); writer.Write(strBody); writer.Close(); FileStream fs = new FileStream(fileName1, FileMode.Open, FileAccess.Read); byte[] renderedBytes; // Create a byte array of file stream length renderedBytes = new byte[fs.Length]; //Read block of bytes from stream into the byte array fs.Read(renderedBytes, 0, System.Convert.ToInt32(fs.Length)); //Close the File Stream fs.Close(); FileInfo TheFile = new FileInfo(fileName1); if (TheFile.Exists) { File.Delete(fileName1); } Response.BinaryWrite(renderedBytes); Response.Flush(); Response.End(); }

    Read the article

  • How to encrypt and save a binary stream after serialization and read it back?

    - by Anindya Chatterjee
    I am having some problems in using CryptoStream when I want to encrypt a binary stream after binary serialization and save it to a file. I am getting the following exception System.ArgumentException : Stream was not readable. Can anybody please show me how to encrypt a binary stream and save it to a file and deserialize it back correctly? The code is as follows: class Program { public static void Main(string[] args) { var b = new B {Name = "BB"}; WriteFile<B>(@"C:\test.bin", b, true); var bb = ReadFile<B>(@"C:\test.bin", true); Console.WriteLine(b.Name == bb.Name); Console.ReadLine(); } public static T ReadFile<T>(string file, bool decrypt) { T bObj = default(T); var _binaryFormatter = new BinaryFormatter(); Stream buffer = null; using (var stream = new FileStream(file, FileMode.OpenOrCreate)) { if(decrypt) { const string strEncrypt = "*#4$%^.++q~!cfr0(_!#$@$!&#&#*&@(7cy9rn8r265&$@&*E^184t44tq2cr9o3r6329"; byte[] dv = {0x12, 0x34, 0x56, 0x78, 0x90, 0xAB, 0xCD, 0xEF}; CryptoStream cs; DESCryptoServiceProvider des = null; var byKey = Encoding.UTF8.GetBytes(strEncrypt.Substring(0, 8)); using (des = new DESCryptoServiceProvider()) { cs = new CryptoStream(stream, des.CreateEncryptor(byKey, dv), CryptoStreamMode.Read); } buffer = cs; } else buffer = stream; try { bObj = (T) _binaryFormatter.Deserialize(buffer); } catch(SerializationException ex) { Console.WriteLine(ex.Message); } } return bObj; } public static void WriteFile<T>(string file, T bObj, bool encrypt) { var _binaryFormatter = new BinaryFormatter(); Stream buffer; using (var stream = new FileStream(file, FileMode.Create)) { try { if(encrypt) { const string strEncrypt = "*#4$%^.++q~!cfr0(_!#$@$!&#&#*&@(7cy9rn8r265&$@&*E^184t44tq2cr9o3r6329"; byte[] dv = {0x12, 0x34, 0x56, 0x78, 0x90, 0xAB, 0xCD, 0xEF}; CryptoStream cs; DESCryptoServiceProvider des = null; var byKey = Encoding.UTF8.GetBytes(strEncrypt.Substring(0, 8)); using (des = new DESCryptoServiceProvider()) { cs = new CryptoStream(stream, des.CreateEncryptor(byKey, dv), CryptoStreamMode.Write); buffer = cs; } } else buffer = stream; _binaryFormatter.Serialize(buffer, bObj); buffer.Flush(); } catch(SerializationException ex) { Console.WriteLine(ex.Message); } } } } [Serializable] public class B { public string Name {get; set;} } It throws the serialization exception as follows The input stream is not a valid binary format. The starting contents (in bytes) are: 3F-17-2E-20-80-56-A3-2A-46-63-22-C4-49-56-22-B4-DA ...

    Read the article

  • Why does C# exit when calling the Ada elaboration routine using debug?

    - by erict
    I have a DLL created in Ada using GPS. I am dynamically loading it and calling it successfully both from Ada and from C++. But when I try to call it from C#, the program exits on the call to Elaboration init. What am I missing? The exact same DLL is perfectly happy getting called from C++ and Ada. Edit: If I start the program without Debugging, it also works with C#. But if I run it with the Debugger, then it exits on the call to ElaborationInit. There are no indications in any of the Windows event logs. If the Ada DLL is Pure, and I skip the elaboration init call, the actual function DLL is called correctly, so it has something to do with the elaboration. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Runtime.InteropServices; namespace CallingDLLfromCS { class Program { [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)] public static extern IntPtr LoadLibrary(string dllToLoad); [DllImport("kernel32.dll", CharSet = CharSet.Ansi, SetLastError = true)] public static extern IntPtr GetProcAddress(IntPtr hModule, string procedureName); [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)] public static extern bool FreeLibrary(IntPtr hModule); [UnmanagedFunctionPointer(CallingConvention.StdCall)] delegate int AdaCallable2_dlgt(int val); static AdaCallable2_dlgt fnAdaCallable2 = null; [UnmanagedFunctionPointer(CallingConvention.StdCall)] delegate void ElaborationInit_dlgt(); static ElaborationInit_dlgt ElaborationInit = null; [UnmanagedFunctionPointer(CallingConvention.StdCall)] delegate void AdaFinal_dlgt(); static AdaFinal_dlgt AdaFinal = null; static void Main(string[] args) { int result; bool fail = false; // assume the best IntPtr pDll2 = LoadLibrary("libDllBuiltFromAda.dll"); if (pDll2 != IntPtr.Zero) { // Note the @4 is because 4 bytes are passed. This can be further reduced by the use of a DEF file in the DLL generation. IntPtr pAddressOfFunctionToCall = GetProcAddress(pDll2, "AdaCallable@4"); if (pAddressOfFunctionToCall != IntPtr.Zero) { fnAdaCallable2 = (AdaCallable2_dlgt)Marshal.GetDelegateForFunctionPointer(pAddressOfFunctionToCall, typeof(AdaCallable2_dlgt)); } else fail = true; pAddressOfFunctionToCall = GetProcAddress(pDll2, "DllBuiltFromAdainit"); if (pAddressOfFunctionToCall != IntPtr.Zero) { ElaborationInit = (ElaborationInit_dlgt)Marshal.GetDelegateForFunctionPointer(pAddressOfFunctionToCall, typeof(ElaborationInit_dlgt)); } else fail = true; pAddressOfFunctionToCall = GetProcAddress(pDll2, "DllBuiltFromAdafinal"); if (pAddressOfFunctionToCall != IntPtr.Zero) AdaFinal = (AdaFinal_dlgt)Marshal.GetDelegateForFunctionPointer(pAddressOfFunctionToCall, typeof(AdaFinal_dlgt)); else fail = true; if (!fail) { ElaborationInit.Invoke(); // ^^^^^^^^^^^^^^^^^^^^^^^^^ FAILS HERE result = fnAdaCallable2(50); Console.WriteLine("Return value is " + result.ToString()); AdaFinal(); } FreeLibrary(pDll2); } } } }

    Read the article

  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking on EXT4 performance on Compact Flash media. I have created an ext4 fs with block size of 65536. however I can not mount it on ubuntu-10.10-netbook-i386. (it is already mounting ext4 fs with 4096 bytes of block sizes) According to my readings on ext4 it should allow such big block sized fs. I want to hear your comments. root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3 Warning: blocksize 65536 not usable on most systems. mke2fs 1.41.12 (17-May-2010) mkfs.ext4: 65536-byte blocks too big for system (max 4096) Proceed anyway? (y,n) y Warning: 65536-byte blocks too big for system (max 4096), forced to continue Filesystem label= OS type: Linux Block size=65536 (log=6) Fragment size=65536 (log=6) Stride=0 blocks, Stripe width=0 blocks 19968 inodes, 19830 blocks 991 blocks (5.00%) reserved for the super user First data block=0 1 block group 65528 blocks per group, 65528 fragments per group 19968 inodes per group Writing inode tables: done Creating journal (1024 blocks): done Writing superblocks and filesystem accounting information: done This filesystem will be automatically checked every 37 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override. root@ubuntu:~# tune2fs -l /dev/sda3 tune2fs 1.41.12 (17-May-2010) Filesystem volume name: <none> Last mounted on: <not available> Filesystem UUID: 4cf3f507-e7b4-463c-be11-5b408097099b Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: (none) Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 19968 Block count: 19830 Reserved block count: 991 Free blocks: 18720 Free inodes: 19957 First block: 0 Block size: 65536 Fragment size: 65536 Blocks per group: 65528 Fragments per group: 65528 Inodes per group: 19968 Inode blocks per group: 78 Flex block group size: 16 Filesystem created: Sat Feb 5 14:39:55 2011 Last mount time: n/a Last write time: Sat Feb 5 14:40:02 2011 Mount count: 0 Maximum mount count: 37 Last checked: Sat Feb 5 14:39:55 2011 Check interval: 15552000 (6 months) Next check after: Thu Aug 4 14:39:55 2011 Lifetime writes: 70 MB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: afb5b570-9d47-4786-bad2-4aacb3b73516 Journal backup: inode blocks root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/ mount: wrong fs type, bad option, bad superblock on /dev/sda3, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so

    Read the article

  • Cakephp file upload problem.

    - by vatismarty
    I am using CakePHP as my framework. I have a problem uploading files through a form. I am using an Uploader plugin from THIS website. My php.ini file has these settings: upload_max_filesize = 10M post_max_size = 8M. This is from uploader.php -- the plugin file has: var $maxFileSize = '5M'; //default max file size In my controller.php file, I use this code to set the max file size to 1 MB at runtime: function beforeFilter() { parent::beforeFilter(); $this->Uploader->maxFileSize = '1M'; } In uploader.php, this is performed: ... if (empty($this->maxFileSize)) { $this->maxFileSize = ini_get('upload_max_filesize'); //landmark 1 } $byte = preg_replace('/[^0-9]/i', '', $this->maxFileSize); $last = $this->bytes($this->maxFileSize, 'byte'); if ($last == 'T' || $last == 'TB') { $multiplier = 1; $execTime = 20; } else if ($last == 'G' || $last == 'GB') { $multiplier = 3; $execTime = 10; } else if ($last == 'M' || $last == 'MB') { $multiplier = 5; $execTime = 5; } else { $multiplier = 10; $execTime = 3; } ini_set('memore_limit', (($byte * $multiplier) * $multiplier) . $last); ini_set('post_max_size', ($byte * $multiplier) . $last); //error suspected here ini_set('upload_tmp_dir', $this->tempDir); ini_set('upload_max_filesize', $this->maxFileSize); //landmark 2 EXPECTED RESULT: When I try uploading a file that is 2MB in size, the upload shouldn't take place, because maxFileSize is 1MB at run time. So the upload should fail. THE PROBLEM IS: it gets uploaded anyway. Landmark 1 does not get executed (see comments), and landmark 2 does not seem to work: upload_max_filesize does not get the value from maxFileSize. Please help me... thank you.

    Read the article

  • Using mcrypt to pass data across a webservice is failing

    - by adam
    Hi I'm writing an error handler script which encrypts the error data (file, line, error, message etc) and passes the serialized array as a POST variable (using curl) to a script which then logs the error in a central db. I've tested my encrypt/decrypt functions in a single file and the data is encrypted and decrypted fine: define('KEY', 'abc'); define('CYPHER', 'blowfish'); define('MODE', 'cfb'); function encrypt($data) { $td = mcrypt_module_open(CYPHER, '', MODE, ''); $iv = mcrypt_create_iv(mcrypt_enc_get_iv_size($td), MCRYPT_RAND); mcrypt_generic_init($td, KEY, $iv); $crypttext = mcrypt_generic($td, $data); mcrypt_generic_deinit($td); return $iv.$crypttext; } function decrypt($data) { $td = mcrypt_module_open(CYPHER, '', MODE, ''); $ivsize = mcrypt_enc_get_iv_size($td); $iv = substr($data, 0, $ivsize); $data = substr($data, $ivsize); if ($iv) { mcrypt_generic_init($td, KEY, $iv); $data = mdecrypt_generic($td, $data); } return $data; } echo "<pre>"; $data = md5(''); echo "Data: $data\n"; $e = encrypt($data); echo "Encrypted: $e\n"; $d = decrypt($e); echo "Decrypted: $d\n"; Output: Data: d41d8cd98f00b204e9800998ecf8427e Encrypted: ê÷#¯KžViiÖŠŒÆÜ,ÑFÕUW£´Œt?†÷>c×åóéè+„N Decrypted: d41d8cd98f00b204e9800998ecf8427e The problem is, when I put the encrypt function in my transmit file (tx.php) and the decrypt in my recieve file (rx.php), the data is not fully decrypted (both files have the same set of constants for key, cypher and mode). Data before passing: a:4:{s:3:"err";i:1024;s:3:"msg";s:4:"Oops";s:4:"file";s:46:"/Applications/MAMP/htdocs/projects/txrx/tx.php";s:4:"line";i:80;} Data decrypted: Mª4:{s:3:"err";i:1024@7OYªç`^;g";s:4:"Oops";s:4:"file";sôÔ8F•Ópplications/MAMP/htdocs/projects/txrx/tx.php";s:4:"line";i:80;} Note the random characters in the middle. My curl is fairly simple: $ch = curl_init($url); curl_setopt($ch, CURLOPT_POST, true); curl_setopt($ch, CURLOPT_POSTFIELDS, 'data=' . $data); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); $output = curl_exec($ch); Things I suspect could be causing this: Encoding of the curl request Something to do with mcrypt padding missing bytes I've been staring at it too long and have missed something really really obvious If I turn off the crypt functions (so the transfer tx-rx is unencrypted) the data is received fine. Any and all help much appreciated! Thanks, Adam

    Read the article

  • Memory corruption in System.Move due to changed 8087CW mode (png + stretchblt)

    - by André Mussche
    I have a strange memory corruption problem. After many hours of debugging and trying I think I found something. For example: I do a simple string assignment: sTest := 'SET LOCK_TIMEOUT '; However, the result sometimes becomes: sTest = 'SET LOCK'#0'TIMEOUT ' So the _ gets replaced by a 0 byte. I have seen this happen once (reproducing is tricky, dependent on timing) in the System.Move function, when it uses the FPU stack (fild, fistp) for fast memory copy (in the case of 9 to 32 bytes to move): ... @@SmallMove: {9..32 Byte Move} fild qword ptr [eax+ecx] {Load Last 8} fild qword ptr [eax] {Load First 8} cmp ecx, 8 jle @@Small16 fild qword ptr [eax+8] {Load Second 8} cmp ecx, 16 jle @@Small24 fild qword ptr [eax+16] {Load Third 8} fistp qword ptr [edx+16] {Save Third 8} ... Using the FPU view and two memory debug views (Delphi - View - Debug - CPU - Memory) I saw it going wrong... once... but could not reproduce it. This morning I read something about the 8087CW mode, and yes, if it is changed to $27F I get memory corruption! Normally it is $133F. The difference between $133F and $027F is that $027F sets up the FPU for less precise calculations (limiting to Double instead of Extended) and different infinity handling (which was used for older FPUs, but is not used any more). Okay, now I have found why, but not when! I changed my AsmProfiler to do a simple check (so all functions are checked at enter and leave): if Get8087CW = $27F then //normally $1372? if MainThreadID = GetCurrentThreadId then //only check mainthread DebugBreak; I "profiled" some units and DLLs and bingo (see stack): Windows.StretchBlt(3372289943,0,0,514,345,4211154027,0,0,514,345,13369376) pngimage.TPNGObject.DrawPartialTrans(4211154027,(0, 0, 514, 345, (0, 0), (514, 345))) pngimage.TPNGObject.Draw($7FF62450,(0, 0, 514, 345, (0, 0), (514, 345))) Graphics.TCanvas.StretchDraw((0, 0, 514, 345, (0, 0), (514, 345)),$7FECF3D0) ExtCtrls.TImage.Paint Controls.TGraphicControl.WMPaint((15, 4211154027, 0, 0)) So it is happening in StretchBlt... What to do now? Is it a fault of Windows, or a bug in PNG (included in D2007)? Or is the System.Move function not failsafe?
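    Not a Delphi fix, but a hedged C sketch of the usual containment strategy (the Delphi counterpart would be restoring the control word with Set8087CW after the draw call): snapshot the floating-point environment around the call suspected of changing the control word, and restore it afterwards, so later code - including an FPU-based memory copy - runs with the precision and exception masks it expects. draw_call() is only a placeholder for the StretchBlt/PNG draw.

    ```c
    /* Hedged sketch: guard a call that may alter the x87 control word.
       fegetenv/fesetenv are standard C99 (<fenv.h>). */
    #include <fenv.h>

    static void draw_call(void) { /* stands in for StretchBlt / the PNG draw */ }

    void call_with_fpu_guard(void)
    {
        fenv_t saved;
        fegetenv(&saved);     /* capture control word, status flags, etc. */
        draw_call();
        fesetenv(&saved);     /* undo whatever the callee changed */
    }
    ```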

    Read the article

  • Efficient list compacting

    - by Patrik
    Suppose you have a list of unsigned ints. Suppose some elements are equal to 0 and you want to push them back. Currently I use this code (list is a pointer to a list of unsigned ints of size n): for (i = 0; i < n; ++i) { if (list[i]) continue; int j; for (j = i + 1; j < n && !list[j]; ++j); int z; for (z = j + 1; z < n && list[z]; ++z); if (j == n) break; memmove(&(list[i]), &(list[j]), sizeof(unsigned int) * (z - j)); int s = z - j + i; for(j = s; j < z; ++j) list[j] = 0; i = s - 1; } Can you think of a more efficient way to perform this task? The snippet is purely theoretical; in the production code, each element of list is a 64-byte struct. EDIT: I'll post my solution. Many thanks to Jonathan Leffler. void RemoveDeadParticles(int * list, int * n) { int i, j = *n - 1; for (; j >= 0 && list[j] == 0; --j); for (i = 0; i < j; ++i) { if (list[i]) continue; memcpy(&(list[i]), &(list[j]), sizeof(int)); list[j] = 0; for (; j >= 0 && list[j] == 0; --j); if (i == j) break; } *n = i + 1; }
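    For comparison, here is a minimal sketch (plain unsigned ints, not the 64-byte production struct) of a stable single-pass compaction: one write cursor, each nonzero element copied forward at most once, and the tail zero-filled afterwards. It is O(n) with no nested scanning, and it preserves the relative order that the memmove version maintains; the names compact_zeros and the sample data are just illustrative.

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Stable compaction: move all nonzero values to the front, zero-fill
       the tail, preserve relative order. O(n) time, no extra memory. */
    static size_t compact_zeros(unsigned int *list, size_t n)
    {
        size_t write = 0;
        for (size_t read = 0; read < n; ++read) {
            if (list[read] != 0)
                list[write++] = list[read];
        }
        memset(&list[write], 0, (n - write) * sizeof *list);
        return write;                 /* number of live elements */
    }

    int main(void)
    {
        unsigned int a[] = { 3, 0, 0, 7, 1, 0, 9 };
        size_t live = compact_zeros(a, sizeof a / sizeof a[0]);
        for (size_t i = 0; i < sizeof a / sizeof a[0]; ++i)
            printf("%u ", a[i]);
        printf("\nlive = %zu\n", live);
        return 0;
    }
    ```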

    Read the article

  • PHP GD Allowed memory size exhausted

    - by gurun8
    I'm trying to process a directory of JPEG images (roughly 600+, ranging from 50k to 500k) using PHP GD to resize and save the images, but I've hit a bit of a snag quite early in the process. After correctly processing just 3 images (30K, 18K and 231K) I get an "Allowed memory size of 16777216 bytes exhausted" PHP fatal error. I'm cycling through the images and calling the code below: list($w, $h) = getimagesize($src); if ($w > $it->width) { $newwidth = $it->width; $newheight = round(($newwidth * $h) / $w); } elseif ($w > $it->height) { $newheight = $it->height; $newwidth = round(($newheight * $w) / $h); } else { $newwidth = $w; $newheight = $h; } // create resize image $img = imagecreatetruecolor($newwidth, $newheight); $org = imagecreatefromjpeg($src); // Resize imagecopyresized($img, $org, 0, 0, 0, 0, $newwidth, $newheight, $w, $h); imagedestroy($org); imagejpeg($img, $dest); // Free up memory imagedestroy($img); I've tried to free up memory with the imagedestroy function but it doesn't seem to have any effect. The script just keeps consistently choking at the imagecreatefromjpeg line of code. I checked the php.ini and the memory_limit = 16M setting seems like it's holding correctly. But I can't figure out why the memory is filling up. Shouldn't it be releasing the memory back to the garbage collector? I don't really want to increase the memory_limit setting. This seems like a bad workaround that could potentially lead to more issues in the future. FYI: I'm running my script from a command prompt. It shouldn't affect the functionality but might influence your response so I thought I should mention it. Can anyone see if I'm just missing something simple or if there's a design flaw here? You'd think that this would be a pretty straightforward task. Surely this has to be possible, right?

    Read the article

  • iphone problem receiving UDP packets

    - by SooDesuNe
    I'm using sendto() and recvfrom() to send some simple packets via UDP over WiFi. I've tried using two phones and a simulator; the results I'm getting are: Packets sent from phones - received by simulator. Packets sent from simulator - simulator recvfrom remains blocking. Packets sent from phones - other phone recvfrom remains blocking. I'm not sure how to start debugging this one, since the simulator/mac is able to receive the packets, but the phones don't appear to be getting the message. A slight aside, do I need to keep my packets below the MTU for my network? Or is fragmentation handled by the OS or some other lower level software? UPDATE: I forgot to include the packet size and structure. I'm transmitting: typedef struct PacketForTransmission { int32_t packetTypeIdentifier; char data[64]; // size to fit my biggest struct } PacketForTransmission; of which the char data[64] is: typedef struct PacketHeader{ uint32_t identifier; uint32_t datatype; } PacketHeader; typedef struct BasePacket{ PacketHeader header; int32_t cardValue; char sendingDeviceID[41]; //dont forget to save room for the NULL terminator! } BasePacket; typedef struct PositionPacket{ BasePacket basePacket; int32_t x; int32_t y; } PositionPacket; sending a packet is like: PositionPacket packet; bzero(&packet, sizeof(packet)); //fill packet with it's associated data PacketForTransmission transmissionPacket; transmissionPacket.packetTypeIdentifier = kPositionPacketType; memcpy(&transmissionPacket.data, (void*)&packet, sizeof(packet)); //put the PositionPacket into data[64] size_t sendResult = sendto(_socket, &transmissionPacket, sizeof(transmissionPacket), 0, [address bytes], [address length]); NSLog(@"packet sent of size: %i", sendResult); and receiving packets is like: while(1){ char dataBuffer[8192]; struct sockaddr addr; socklen_t socklen = sizeof(addr); ssize_t len = recvfrom(_socket, dataBuffer, sizeof(dataBuffer), 0, &addr, &socklen); //continues blocking here NSLog(@"packet recieved of length: %i", len); //do some more stuff }
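    A hedged plain-C receiver sketch, in case the asymmetry comes from the receiving socket's setup rather than from the send side (the socket creation and bind are not shown in the question): bind explicitly to INADDR_ANY on an agreed port - 9999 here is an arbitrary assumption - so the packet is deliverable to the phone's Wi-Fi address. On the MTU question: the IP layer fragments datagrams larger than the MTU automatically, but fragments are easy to lose on Wi-Fi, so keeping a datagram under roughly 1400 bytes is the safer habit.

    ```c
    /* Minimal UDP receiver sketch: fixed port, all interfaces. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in local;
        memset(&local, 0, sizeof local);
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);  /* every interface, incl. Wi-Fi */
        local.sin_port = htons(9999);               /* arbitrary example port */
        if (bind(s, (struct sockaddr *)&local, sizeof local) < 0) {
            perror("bind");
            return 1;
        }

        char buf[1400];                             /* stays under a typical Wi-Fi MTU */
        struct sockaddr_in peer;
        socklen_t plen = sizeof peer;
        ssize_t n = recvfrom(s, buf, sizeof buf, 0, (struct sockaddr *)&peer, &plen);
        printf("got %zd bytes from %s\n", n, inet_ntoa(peer.sin_addr));
        close(s);
        return 0;
    }
    ```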

    Read the article

  • C program using inotify to monitor multiple directories along with sub-directories?

    - by lakshmipathi
    I have a program which monitors a directory (/test) and notifies me. I want to improve this to monitor another directory (say /opt) as well, and also to monitor its subdirectories. Currently I get notified if any changes are made to files under /test, but I'm not getting any notification if changes are made in a subdirectory of /test, that is, touch /test/sub-dir/files.txt. Here is my current code - hope this will help: /* Simple example for inotify in Linux. inotify has 3 main functions: inotify_init1 to initialize, inotify_add_watch to add a monitor, then inotify_??_watch to remove a monitor. You know what to replace ?? with - yes, the third one is inotify_rm_watch(). */ #include <sys/inotify.h> #include <stdio.h> #include <unistd.h> int main(){ int fd,wd,wd1,i=0,len=0; char pathname[100],buf[1024]; struct inotify_event *event; fd=inotify_init1(IN_NONBLOCK); /* watch /test directory for any activity and report it back to me */ wd=inotify_add_watch(fd,"/test",IN_ALL_EVENTS); while(1){ //read 1024 bytes of events from fd into buf i=0; len=read(fd,buf,1024); while(i<len){ event=(struct inotify_event *) &buf[i]; /* check for changes */ if(event->mask & IN_OPEN) printf("%s :was opened\n",event->name); if(event->mask & IN_MODIFY) printf("%s : modified\n",event->name); if(event->mask & IN_ATTRIB) printf("%s :meta data changed\n",event->name); if(event->mask & IN_ACCESS) printf("%s :was read\n",event->name); if(event->mask & IN_CLOSE_WRITE) printf("%s :file opened for writing was closed\n",event->name); if(event->mask & IN_CLOSE_NOWRITE) printf("%s :file opened not for writing was closed\n",event->name); if(event->mask & IN_DELETE_SELF) printf("%s :deleted\n",event->name); if(event->mask & IN_DELETE) printf("%s :deleted\n",event->name); /* update index to start of next event */ i+=sizeof(struct inotify_event)+event->len; } } }
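    A hedged sketch of the usual approach: inotify watches are per-directory and not recursive, so walk each root once with nftw() and add a watch for every directory found (here /test and /opt), then keep serving events from the same read loop. Picking up directories created after startup would need a wd-to-path map plus an extra inotify_add_watch() on IN_CREATE|IN_ISDIR events, which is only noted in a comment below; function and variable names are illustrative.

    ```c
    /* Hedged sketch: one watch per directory under each root. */
    #define _XOPEN_SOURCE 500
    #include <ftw.h>
    #include <stdio.h>
    #include <sys/inotify.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static int ifd;                               /* shared inotify instance */

    static int add_watch_cb(const char *path, const struct stat *sb,
                            int typeflag, struct FTW *ftwbuf)
    {
        (void)sb; (void)ftwbuf;
        if (typeflag == FTW_D &&                  /* directories only */
            inotify_add_watch(ifd, path, IN_ALL_EVENTS) >= 0)
            printf("watching %s\n", path);
        return 0;                                 /* keep walking */
    }

    int main(void)
    {
        const char *roots[] = { "/test", "/opt" };
        char buf[4096];

        ifd = inotify_init();
        for (unsigned r = 0; r < sizeof roots / sizeof roots[0]; r++)
            nftw(roots[r], add_watch_cb, 16, FTW_PHYS);

        for (;;) {                                /* same event loop idea as above */
            ssize_t len = read(ifd, buf, sizeof buf);
            for (ssize_t i = 0; i < len; ) {
                struct inotify_event *ev = (struct inotify_event *)&buf[i];
                if (ev->len)
                    printf("event 0x%08x on %s\n", ev->mask, ev->name);
                /* on IN_CREATE|IN_ISDIR: resolve wd -> path and add a watch
                   for the new subdirectory here (not shown) */
                i += sizeof(struct inotify_event) + ev->len;
            }
        }
    }
    ```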

    Read the article

  • Why does my Perl regular expression only find the last occurrence?

    - by scharan
    I have the following input to a Perl script and I wish to get the first occurrence of NAME="..." strings in each of the <table>...</table> structures. The entire file is read into a single string and the regex acts on that input. However, the regex always returns the last occurrence of NAME="..." strings. Can anyone explain what is going on and how this can be fixed? Input file: ADSDF <TABLE> NAME="ORDERSAA" line1 line2 NAME="ORDERSA" line3 NAME="ORDERSAB" </TABLE> <TABLE> line1 line2 NAME="ORDERSB" line3 </TABLE> <TABLE> line1 line2 NAME="ORDERSC" line3 </TABLE> <TABLE> line1 line2 NAME="ORDERSD" line3 line3 line3 </TABLE> <TABLE> line1 line2 NAME="QUOTES2" line3 NAME="QUOTES3" NAME="QUOTES4" line3 NAME="QUOTES5" line3 </TABLE> <TABLE> line1 line2 NAME="QUOTES6" NAME="QUOTES7" NAME="QUOTES8" NAME="QUOTES9" line3 line3 </TABLE> <TABLE> NAME="MyName IsKhan" </TABLE> Perl Code starts here: use warnings; use strict; my $nameRegExp = '(<table>((NAME="(.+)")|(.*|\n))*</table>)'; sub extractNames($$){ my ($ifh, $ofh) = @_; my $fullFile; read ($ifh, $fullFile, 1024);#Hardcoded to read just 1024 bytes. while( $fullFile =~ m#$nameRegExp#gi){ print "found: ".$4."\n"; } } sub main(){ if( ($#ARGV + 1 )!= 1){ die("Usage: extractNames infile\n"); } my $infileName = $ARGV[0]; my $outfileName = $ARGV[1]; open my $inFile, "<$infileName" or die("Could not open log file $infileName"); my $outFile; #open my $outFile, ">$outfileName" or die("Could not open log file $outfileName"); extractNames( $inFile, $outFile ); close( $inFile ); #close( $outFile ); } #call main();

    Read the article

  • How to define and work with an array of bits in C?

    - by Eddy
    I want to create a very large array on which I write '0's and '1's. I'm trying to simulate a physical process called random sequential adsorption, where units of length 2, dimers, are deposited onto an n-dimensional lattice at a random location, without overlapping each other. The process stops when there is no more room left on the lattice for depositing more dimers (lattice is jammed). Initially I start with a lattice of zeroes, and the dimers are represented by a pair of '1's. As each dimer is deposited, the site on the left of the dimer is blocked, due to the fact that the dimers cannot overlap. So I simulate this process by depositing a triple of '1's on the lattice. I need to repeat the entire simulation a large number of times and then work out the average coverage %. I've already done this using an array of chars for 1D and 2D lattices. At the moment I'm trying to make the code as efficient as possible, before working on the 3D problem and more complicated generalisations. This is basically what the code looks like in 1D, simplified: int main() { /* Define lattice */ array = (char*)malloc(N * sizeof(char)); total_c = 0; /* Carry out RSA multiple times */ for (i = 0; i < 1000; i++) rand_seq_ads(); /* Calculate average coverage efficiency at jamming */ printf("coverage efficiency = %lf", total_c/1000); return 0; } void rand_seq_ads() { /* Initialise array, initial conditions */ memset(a, 0, N * sizeof(char)); available_sites = N; count = 0; /* While the lattice still has enough room... */ while(available_sites != 0) { /* Generate random site location */ x = rand(); /* Deposit dimer (if site is available) */ if(array[x] == 0) { array[x] = 1; array[x+1] = 1; count += 1; available_sites += -2; } /* Mark site left of dimer as unavailable (if its empty) */ if(array[x-1] == 0) { array[x-1] = 1; available_sites += -1; } } /* Calculate coverage %, and add to total */ c = count/N total_c += c; } For the actual project I'm doing, it involves not just dimers but trimers, quadrimers, and all sorts of shapes and sizes (for 2D and 3D). I was hoping that I would be able to work with individual bits instead of bytes, but I've been reading around and as far as I can tell you can only change 1 byte at a time, so either I need to do some complicated indexing or there is a simpler way to do it? Thanks for your answers
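    Since the question is exactly about addressing single bits: below is a minimal, hedged sketch of a packed bit array, one bit per lattice site instead of one char. Memory can indeed only be read and written in byte or word units, but masking with | and & lets you flip an individual bit inside a word without touching its neighbours; the helper names (bit_set, bit_get) and the lattice size are just illustrative.

    ```c
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Minimal bit-array helpers: one bit per lattice site. */
    #define WORD_BITS (CHAR_BIT * sizeof(unsigned long))
    #define BIT_WORD(i)  ((i) / WORD_BITS)
    #define BIT_MASK(i)  (1UL << ((i) % WORD_BITS))

    static void bit_set(unsigned long *a, size_t i)   { a[BIT_WORD(i)] |=  BIT_MASK(i); }
    static void bit_clear(unsigned long *a, size_t i) { a[BIT_WORD(i)] &= ~BIT_MASK(i); }
    static int  bit_get(const unsigned long *a, size_t i) { return (a[BIT_WORD(i)] & BIT_MASK(i)) != 0; }

    int main(void)
    {
        size_t n = 1000;                                /* lattice sites */
        size_t words = (n + WORD_BITS - 1) / WORD_BITS;
        unsigned long *lattice = calloc(words, sizeof *lattice);

        bit_set(lattice, 41);                           /* deposit at sites 41..42 */
        bit_set(lattice, 42);
        printf("site 41: %d, site 43: %d\n", bit_get(lattice, 41), bit_get(lattice, 43));

        bit_clear(lattice, 41);
        memset(lattice, 0, words * sizeof *lattice);    /* reset between runs */
        free(lattice);
        return 0;
    }
    ```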

    Read the article

  • Why is FLD1 loading NaN instead?

    - by Bernd Jendrissek
    I have a one-liner C function that is just return value * pow(1.+rate, -delay); - it discounts a future value to a present value. The interesting part of the disassembly is 0x080555b9 : neg %eax 0x080555bb : push %eax 0x080555bc : fildl (%esp) 0x080555bf : lea 0x4(%esp),%esp 0x080555c3 : fldl 0xfffffff0(%ebp) 0x080555c6 : fld1 0x080555c8 : faddp %st,%st(1) 0x080555ca : fxch %st(1) 0x080555cc : fstpl 0x8(%esp) 0x080555d0 : fstpl (%esp) 0x080555d3 : call 0x8051ce0 0x080555d8 : fmull 0xfffffff8(%ebp) While single-stepping through this function, gdb says (rate is 0.02, delay is 2; you can see them on the stack): (gdb) si 0x080555c6 30 return value * pow(1.+rate, -delay); (gdb) info float R7: Valid 0x4004a6c28f5c28f5c000 +41.68999999999999773 R6: Valid 0x4004e15c28f5c28f6000 +56.34000000000000341 R5: Valid 0x4004dceb851eb851e800 +55.22999999999999687 R4: Valid 0xc0008000000000000000 -2 =R3: Valid 0x3ff9a3d70a3d70a3d800 +0.02000000000000000042 R2: Valid 0x4004ff147ae147ae1800 +63.77000000000000313 R1: Valid 0x4004e17ae147ae147800 +56.36999999999999744 R0: Valid 0x4004efb851eb851eb800 +59.92999999999999972 Status Word: 0x1861 IE PE SF TOP: 3 Control Word: 0x037f IM DM ZM OM UM PM PC: Extended Precision (64-bits) RC: Round to nearest Tag Word: 0x0000 Instruction Pointer: 0x73:0x080555c3 Operand Pointer: 0x7b:0xbff41d78 Opcode: 0xdd45 And after the fld1: (gdb) si 0x080555c8 30 return value * pow(1.+rate, -delay); (gdb) info float R7: Valid 0x4004a6c28f5c28f5c000 +41.68999999999999773 R6: Valid 0x4004e15c28f5c28f6000 +56.34000000000000341 R5: Valid 0x4004dceb851eb851e800 +55.22999999999999687 R4: Valid 0xc0008000000000000000 -2 R3: Valid 0x3ff9a3d70a3d70a3d800 +0.02000000000000000042 =R2: Special 0xffffc000000000000000 Real Indefinite (QNaN) R1: Valid 0x4004e17ae147ae147800 +56.36999999999999744 R0: Valid 0x4004efb851eb851eb800 +59.92999999999999972 Status Word: 0x1261 IE PE SF C1 TOP: 2 Control Word: 0x037f IM DM ZM OM UM PM PC: Extended Precision (64-bits) RC: Round to nearest Tag Word: 0x0020 Instruction Pointer: 0x73:0x080555c6 Operand Pointer: 0x7b:0xbff41d78 Opcode: 0xd9e8 After this, everything goes to hell. Things get grossly over or undervalued, so even if there were no other bugs in my freeciv AI attempt, it would choose all the wrong strategies. Like sending the whole army to the arctic. (Sigh, if only I were getting that far.) I must be missing something obvious, or getting blinded by something, because I can't believe that fld1 should ever possibly fail. Even less that it should fail only after a handful of passes through this function. On earlier passes the FPU correctly loads 1 into ST(0). The bytes at 0x080555c6 definitely encode fld1 - checked with x/... on the running process. What gives?
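    FLD1 itself cannot produce a wrong value, but it does push, and the dump above shows all eight registers R0..R7 as Valid before the instruction and SF plus IE set afterwards - the signature of an x87 register-stack overflow, where the push has nowhere to go and Real Indefinite is delivered instead of 1.0. The usual cause is earlier code leaving values on the FPU stack, for example an unpopped floating-point return value from a mis-prototyped function. A hedged GCC-on-x86 sketch that provokes and detects the same condition (inline asm, demo only):

    ```c
    /* Hedged diagnostic sketch: overflow the x87 register stack on purpose
       and read the status word. Requires GCC-style inline asm on x86. */
    #include <stdint.h>
    #include <stdio.h>

    static uint16_t x87_status(void)
    {
        uint16_t sw;
        __asm__ volatile ("fnstsw %0" : "=m" (sw));
        return sw;
    }

    int main(void)
    {
        /* Nine pushes, no pops: the ninth FLD1 hits a full stack. */
        for (int i = 0; i < 9; ++i)
            __asm__ volatile ("fld1");

        uint16_t sw = x87_status();
        printf("status word = 0x%04x%s\n", sw,
               (sw & 0x0040) ? "  (SF: stack fault)" : "");

        __asm__ volatile ("emms");   /* mark the registers empty again before returning */
        return 0;
    }
    ```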

    Read the article

  • Running out of memory.. How?

    - by maxdj
    I'm attempting to write a solver for a particular puzzle. It tries to find a solution by trying every possible move one at a time until it finds a solution. The first version tried to solve it depth-first by continually trying moves until it failed, then backtracking, but this turned out to be too slow. I have rewritten it to be breadth-first using a queue structure, but I'm having problems with memory management. Here are the relevant parts: int main(int argc, char *argv[]) { ... int solved = 0; do { solved = solver(queue); } while (!solved && !pblListIsEmpty(queue)); ... } int solver(PblList *queue) { state_t *state = (state_t *) pblListPoll(queue); if (is_solution(state->pucks)) { print_solution(state); return 1; } state_t *state_cp; puck new_location; for (int p = 0; p < puck_count; p++) { for (dir i = NORTH; i <= WEST; i++) { if (!rules(state->pucks, p, i)) continue; new_location = in_dir(state->pucks, p, i); if (new_location.x != -1) { state_cp = (state_t *) malloc(sizeof(state_t)); state_cp->move.from = state->pucks[p]; state_cp->move.direction = i; state_cp->prev = state; state_cp->pucks = (puck *) malloc (puck_count * sizeof(puck)); memcpy(state_cp->pucks, state->pucks, puck_count * sizeof(puck)); /*CRASH*/ state_cp->pucks[p] = new_location; pblListPush(queue, state_cp); } } } return 0; } When I run it I get the error: ice(90175) malloc: *** mmap(size=2097152) failed (error code=12) *** error: can't allocate region *** set a breakpoint in malloc_error_break to debug Bus error The error happens around iteration 93,000. From what I can tell, the error message is from malloc failing, and the bus error is from the memcpy after it. I have a hard time believing that I'm running out of memory, since each game state is only ~400 bytes. Yet that does seem to be what's happening, seeing as the activity monitor reports that it is using 3.99GB before it crashes. I'm using http://www.mission-base.com/peter/source/ for the queue structure (it's a linked list). Clearly I'm doing something dumb. Any suggestions?
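    Breadth-first puzzle search usually dies from duplicates rather than from the size of one state: at roughly 400 bytes per state, a few million re-generated positions is already a few gigabytes. Below is a hedged sketch of duplicate detection; the real state_t/puck layout is not shown in the question, so the state is treated here as an opaque byte blob, the table size is arbitrary, and hash collisions are simply accepted for brevity.

    ```c
    /* Hedged sketch: remember a 64-bit hash of every state seen so far
       (fixed-size open-addressing table, no growth - a sketch, not production). */
    #include <stdint.h>
    #include <stddef.h>

    #define SEEN_BUCKETS (1u << 20)
    static uint64_t seen[SEEN_BUCKETS];          /* 0 = empty slot */

    static uint64_t fnv1a(const void *p, size_t len)
    {
        const unsigned char *b = p;
        uint64_t h = 1469598103934665603ull;
        while (len--) { h ^= *b++; h *= 1099511628211ull; }
        return h ? h : 1;                        /* keep 0 free as the empty marker */
    }

    /* Returns 1 the first time this exact state is offered, 0 on repeats. */
    static int first_visit(const void *state, size_t len)
    {
        uint64_t h = fnv1a(state, len);
        size_t i = h & (SEEN_BUCKETS - 1);
        while (seen[i] != 0) {
            if (seen[i] == h)                    /* already queued */
                return 0;
            i = (i + 1) & (SEEN_BUCKETS - 1);
        }
        seen[i] = h;
        return 1;
    }
    ```

    The idea would be to call something like first_visit(state_cp->pucks, puck_count * sizeof(puck)) before pblListPush(), and to free states that have already been seen instead of queueing them, so the queue only grows with genuinely new positions.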

    Read the article

  • Debugging a basic OpenGL texture fail? (iphone)

    - by Ben
    Hey all, I have a very basic texture map problem in GL on iPhone, and I'm wondering what strategies there are for debugging this kind of thing. (Frankly, just staring at state machine calls and wondering if any of them is wrong or misordered is no way to live-- are there tools for this?) I have a 512x512 PNG file that I'm loading up from disk (not specially packed), creating a CGBitmapContext, then calling CGContextDrawImage to get bytes out of it. (This code is essentially stolen from an Apple sample.) I'm trying to map the texture to a "quad", with code that looks essentially like this-- all flat 2D stuff, nothing fancy: glEnable(GL_TEXTURE_2D); glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glEnableClientState(GL_TEXTURE_COORD_ARRAY); GLfloat vertices[8] = { viewRect.origin.x, viewRect.size.height, viewRect.origin.x, viewRect.origin.y, viewRect.size.width, viewRect.origin.y, viewRect.size.width, viewRect.size.height }; GLfloat texCoords[8] = { 0, 1.0, 0, 0, 1.0, 0, 1.0, 1.0 }; glBindTexture(GL_TEXTURE_2D, myTextureRef); // This was previously bound to glVertexPointer(2, GL_FLOAT , 0, vertices); glTexCoordPointer(2, GL_FLOAT, 0, texCoords); glDrawArrays(GL_TRIANGLE_FAN, 0, 4); glDisableClientState(GL_TEXTURE_COORD_ARRAY); glDisable(GL_TEXTURE_2D); My supposedly textured area comes out just black. I see no debug output from the CG calls to set up the texture. glGetError reports nothing. If I simplify this code block to just draw the verts, but set up a pure color, the quad area lights up exactly as expected. If I clear the whole context immediately beforehand to red, I don't see the red-- which means something is being rendered there, but not the contents of my PNG. What could I be doing wrong? And more importantly, what are the right tools and techniques for debugging this sort of thing, because running into this kind of problem and not being able to "step through it" in a debugger in any meaningful way is a bummer. Thanks!
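    The texture-upload step is not shown in the question, so here is a hedged sketch of what it typically looks like on OpenGL ES 1.1, assuming pixels is the 512x512 RGBA buffer produced by CGContextDrawImage and that an EAGL context is current; the function name upload_texture_512 is just a placeholder. Calling glGetError() immediately after each step is the cheapest way to narrow down which call is failing, since a bad upload silently renders as black.

    ```c
    #include <OpenGLES/ES1/gl.h>
    #include <stdio.h>

    /* Returns the new texture name, or 0 if the upload reported an error. */
    GLuint upload_texture_512(const void *pixels)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* no mipmaps are uploaded */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);           /* power-of-two sides required */
        GLenum err = glGetError();
        if (err != GL_NO_ERROR) {
            printf("texture upload failed: 0x%x\n", err);
            return 0;
        }
        return tex;
    }
    ```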

    Read the article

  • Load a 6 MB binary file in a SQL Server 2005 VARBINARY(MAX) column using ADO/VC++?

    - by Feroz Khan
    How to load a binary file(.bin) of size 6 MB in a varbinary(MAX) column of SQL Server 2005 database using ADO in a VC++ application. This is the code I am using to load the file which I used to load a .bmp file: BOOL CSaveView::PutECGInDB(CString strFilePath, FieldPtr pFileData) { //Open File CFile fileImage; CFileStatus fileStatus; fileImage.Open(strFilePath,CFile::modeRead); fileImage.GetStatus(fileStatus); //Alocating memory for data ULONG nBytes = (ULONG)fileStatus.m_size; HGLOBAL hGlobal = GlobalAlloc(GPTR,nBytes); LPVOID lpData = GlobalLock(hGlobal); //Putting data into file fileImage.Read(lpData,nBytes); HRESULT hr; _variant_t varChunk; long lngOffset = 0; UCHAR chData; SAFEARRAY FAR *psa = NULL; SAFEARRAYBOUND rgsabound[1]; try { //Create a safe array to store the BYTES rgsabound[0].lLbound = 0; rgsabound[0].cElements = nBytes; psa = SafeArrayCreate(VT_UI1,1,rgsabound); while(lngOffset<(long)nBytes) { chData = ((UCHAR*)lpData)[lngOffset]; hr = SafeArrayPutElement(psa,&lngOffset,&chData); if(hr != S_OK) { return false; } lngOffset++; } lngOffset = 0; //Assign the safe array to a varient varChunk.vt = VT_ARRAY|VT_UI1; varChunk.parray = psa; hr = pFileData->AppendChunk(varChunk); if(hr != S_OK) { return false; } } catch(_com_error &e) { //get info from _com_error _bstr_t bstrSource(e.Source()); _bstr_t bstrDescription(e.Description()); _bstr_t bstrErrorMessage(e.ErrorMessage()); _bstr_t bstrErrorCode(e.Error()); TRACE("Exception thrown for classes generated by #import"); TRACE("\tCode= %08lx\n",(LPCSTR)bstrErrorCode); TRACE("\tCode Meaning = %s\n",(LPCSTR)bstrErrorMessage); TRACE("\tSource = %s\n",(LPCSTR)bstrSource); TRACE("\tDescription = %s\n",(LPCSTR)bstrDescription); } catch(...) { TRACE("***Unhandle Exception***"); } //Free Memory GlobalUnlock(lpData); return true; } But when I read the same file using Getchunk function it gives me all 0s but the size of the file I get is same as the one uploaded. Your help will be highly appreciated.

    Read the article

  • Jumping into argv?

    - by jth
    Hi, I`am experimenting with shellcode and stumbled upon the nop-slide technique. I wrote a little tool that takes buffer-size as a parameter and constructs a buffer like this: [ NOP | SC | RET ], with NOP taking half of the buffer, followed by the shellcode and the rest filled with the (guessed) return address. Its very similar to the tool aleph1 described in his famous paper. My vulnerable test-app is the same as in his paper: int main(int argc, char **argv) { char little_array[512]; if(argc>1) strcpy(little_array,argv[1]); return 0; } I tested it and well, it works: jth@insecure:~/no_nx_no_aslr$ ./victim $(./exploit 604 0) $ exit But honestly, I have no idea why. Okay, the saved eip was overwritten as intended, but instead of jumping somewhere into the buffer, it jumped into argv, I think. gdb showed up the following addresses before strcpy() was called: (gdb) i f Stack level 0, frame at 0xbffff1f0: eip = 0x80483ed in main (victim.c:7); saved eip 0x154b56 source language c. Arglist at 0xbffff1e8, args: argc=2, argv=0xbffff294 Locals at 0xbffff1e8, Previous frame's sp is 0xbffff1f0 Saved registers: ebp at 0xbffff1e8, eip at 0xbffff1ec Address of little_array: (gdb) print &little_array[0] $1 = 0xbfffefe8 "\020" After strcpy(): (gdb) i f Stack level 0, frame at 0xbffff1f0: eip = 0x804840d in main (victim.c:10); saved eip 0xbffff458 source language c. Arglist at 0xbffff1e8, args: argc=-1073744808, argv=0xbffff458 Locals at 0xbffff1e8, Previous frame's sp is 0xbffff1f0 Saved registers: ebp at 0xbffff1e8, eip at 0xbffff1ec So, what happened here? I used a 604 byte buffer to overflow little_array, so he certainly overwrote saved ebp, saved eip and argc and also argv with the guessed address 0xbffff458. Then, after returning, EIP pointed at 0xbffff458. But little_buffer resides at 0xbfffefe8, that`s a difference of 1136 byte, so he certainly isn't executing little_array. I followed execution with the stepi command and well, at 0xbffff458 and onwards, he executes NOPs and reaches the shellcode. I'am not quite sure why this is happening. First of all, am I correct that he executes my shellcode in argv, not little_array? And where does the loader(?) place argv onto the stack? I thought it follows immediately after argc, but between argc and 0xbffff458, there is a gap of 620 bytes. How is it possible that he successfully "lands" in the NOP-Pad at Address 0xbffff458, which is way above the saved eip at 0xbffff1ec? Can someone clarify this? I have actually no idea why this is working. My test-machine is an Ubuntu 9.10 32-Bit Machine without ASLR. victim has an executable stack, set with execstack -s. Thanks in advance.
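    A hedged little experiment that makes the observed jump less mysterious: the argv[] pointer array and the argument strings it points to are placed by the kernel near the very top of the stack, well above main()'s frame and the saved EIP, so the exploit string exists twice in memory - once as the strcpy copy in little_array and once as the original argv[1] near the stack top. A guessed return address such as 0xbffff458 can therefore land in the argv copy of the NOP pad rather than in the local buffer. Compiling something like this on the same non-ASLR setup shows the layout (addresses will differ per system):

    ```c
    /* Tiny diagnostic sketch: where do the local buffer and argv actually live? */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char little_array[512];

        printf("little_array      : %p\n", (void *)little_array);
        printf("argv              : %p\n", (void *)argv);
        if (argc > 1)
            printf("argv[1] (payload) : %p\n", (void *)argv[1]);
        return 0;
    }
    ```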

    Read the article

  • how to deal with the position in a c# stream

    - by CapsicumDreams
    The (entire) documentation for the position property on a stream says: When overridden in a derived class, gets or sets the position within the current stream. The Position property does not keep track of the number of bytes from the stream that have been consumed, skipped, or both. That's it. OK, so we're fairly clear on what it doesn't tell us, but I'd really like to know what it in fact does stand for. What is 'the position' for? Why would we want to alter or read it? If we change it - what happens? In a pratical example, I have a a stream that periodically gets written to, and I have a thread that attempts to read from it (ideally ASAP). From reading many SO issues, I reset the position field to zero to start my reading. Once this is done: Does this affect where the writer to this stream is going to attempt to put the data? Do I need to keep track of the last write position myself? (ie if I set the position to zero to read, does the writer begin to overwrite everything from the first byte?) If so, do I need a semaphore/lock around this 'position' field (subclassing, perhaps?) due to my two threads accessing it? If I don't handle this property, does the writer just overflow the buffer? Perhaps I don't understand the Stream itself - I'm regarding it as a FIFO pipe: shove data in at one end, and suck it out at the other. If it's not like this, then do I have to keep copying the data past my last read (ie from position 0x84 on) back to the start of my buffer? I've seriously tried to research all of this for quite some time - but I'm new to .NET. Perhaps the Streams have a long, proud (undocumented) history that everyone else implicitly understands. But for a newcomer, it's like reading the manual to your car, and finding out: The accelerator pedal affects the volume of fuel and air sent to the fuel injectors. It does not affect the volume of the entertainment system, or the air pressure in any of the tires, if fitted. Technically true, but seriously, what we want to know is that if we mash it to the floor you go faster..

    Read the article

  • Changing RGB color image to Grayscale image using Objective C

    - by user567167
    I was developing an application that changes a color image to a grayscale image. However, somehow the picture comes out wrong. I don't know what is wrong with the code; maybe one of the parameters I pass in is wrong. Please help. UIImage *c = [UIImage imageNamed:@"downRed.png"]; CGImageRef cRef = CGImageRetain(c.CGImage); NSData* pixelData = (NSData*) CGDataProviderCopyData(CGImageGetDataProvider(cRef)); size_t w = CGImageGetWidth(cRef); size_t h = CGImageGetHeight(cRef); unsigned char* pixelBytes = (unsigned char *)[pixelData bytes]; unsigned char* greyPixelData = (unsigned char*) malloc(w*h); for (int y = 0; y < h; y++) { for(int x = 0; x < w; x++){ int iter = 4*(w*y+x); int red = pixelBytes[iter]; int green = pixelBytes[iter+1]; int blue = pixelBytes[iter+2]; greyPixelData[w*y+x] = (unsigned char)(red*0.3 + green*0.59+ blue*0.11); int value = greyPixelData[w*y+x]; } } CFDataRef imgData = CFDataCreate(NULL, greyPixelData, w*h); CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(imgData); size_t width = CGImageGetWidth(cRef); size_t height = CGImageGetHeight(cRef); size_t bitsPerComponent = 8; size_t bitsPerPixel = 8; size_t bytesPerRow = CGImageGetWidth(cRef); CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray(); CGBitmapInfo info = kCGImageAlphaNone; CGFloat *decode = NULL; BOOL shouldInteroplate = NO; CGColorRenderingIntent intent = kCGRenderingIntentDefault; CGDataProviderRelease(imgDataProvider); CGImageRef throughCGImage = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpace, info, imgDataProvider, decode, shouldInteroplate, intent); UIImage* newImage = [UIImage imageWithCGImage:throughCGImage]; CGImageRelease(throughCGImage); newImageView.image = newImage;

    Read the article

  • I need help converting a C# string from one character encoding to another?

    - by Handleman
    According to Spolsky I can't call myself a developer, so there is a lot of shame behind this question... Scenario: From a C# application, I would like to take a string value from a SQL db and use it as the name of a directory. I have a secure (SSL) FTP server on which I want to set the current directory using the string value from the DB. Problem: Everything is working fine until I hit a string value with a "special" character - I seem unable to encode the directory name correctly to satisfy the FTP server. The code example below uses "special" character é as an example uses WinSCP as an external application for the ftps comms does not show all the code required to setup the Process "_winscp". sends commands to the WinSCP exe by writing to the process standardinput for simplicity, does not get the info from the DB, but instead simply declares a string (but I did do a .Equals to confirm that the value from the DB is the same as the declared string) makes three attempts to set the current directory on the FTP server using different string encodings - all of which fail makes an attempt to set the directory using a string that was created from a hand-crafted byte array - which works Process _winscp = new Process(); byte[] buffer; string nameFromString = "Sinéad O'Connor"; _winscp.StandardInput.WriteLine("cd \"" + nameFromString + "\""); buffer = Encoding.UTF8.GetBytes(nameFromString); _winscp.StandardInput.WriteLine("cd \"" + Encoding.UTF8.GetString(buffer) + "\""); buffer = Encoding.ASCII.GetBytes(nameFromString); _winscp.StandardInput.WriteLine("cd \"" + Encoding.ASCII.GetString(buffer) + "\""); byte[] nameFromBytes = new byte[] { 83, 105, 110, 130, 97, 100, 32, 79, 39, 67, 111, 110, 110, 111, 114 }; _winscp.StandardInput.WriteLine("cd \"" + Encoding.Default.GetString(nameFromBytes) + "\""); The UTF8 encoding changes é to 101 (decimal) but the FTP server doesn't like it. The ASCII encoding changes é to 63 (decimal) but the FTP server doesn't like it. When I represent é as value 130 (decimal) the FTP server is happy, except I can't find a method that will do this for me (I had to manually contruct the string from explicit bytes). Anyone know what I should do to my string to encode the é as 130 and make the FTP server happy and finally elevate me to level 1 developer by explaining the only single thing a developer should understand?

    Read the article

  • Security review of an authenticated Diffie Hellman variant

    - by mtraut
    EDIT: I'm still hoping for some advice on this; I have tried to clarify my intentions... When I came upon device pairing in my mobile communication framework I studied a lot of papers on this topic and also got some input from previous questions here. But I didn't find a ready-to-implement protocol solution, so I invented a derivative, and as I'm no crypto geek I'm not sure about the security caveats of the final solution. The main questions are: Is SHA256 sufficient as a commit function? Is the addition of the shared secret as authentication info in the commit string safe? What is the overall security of the 1024 bit group DH? I assume at most a 2^-24 probability of a successful MITM attack (because of the 24 bit challenge); is this plausible? What may be the most promising attack (besides ripping the device out of my numb, cold hands)? This is the algorithm sketch. For first-time pairing, a solution proposed in "Key agreement in peer-to-peer wireless networks" (DH-SC) is implemented. I based it on a commitment derived from: a fixed "UUID" for the communicating entity/role (128 bit, sent at protocol start, before commitment), the public DH key (192 bit private key, based on the 1024 bit Oakley group), and a 24 bit random challenge. The commit is computed using SHA256: c = sha256( UUID || DH pub || Chall). Both parties exchange this commitment, then open and transfer the plain content of the above values. The 24 bit random value is displayed to the user for manual authentication. The DH session key (128 bytes, see above) is computed. When the user opts for persistent pairing, the session key is stored with the remote UUID as a shared secret. The next time the devices connect, the commit is computed by additionally hashing the previous DH session key before the random challenge; for sure it is not transferred when opening: c = sha256( UUID || DH pub || DH sess || Chall). Now the user is not bothered with authenticating when the local party can derive the same commitment using its own stored previous DH session key. After a successful connection the new DH session key becomes the new shared secret. As this does not exactly fit the protocols I have found so far (and as such their security proofs), I'd be very interested to get an opinion from some more crypto-enabled guys here. BTW, I did read about the "EKE" protocol, but I'm not sure what the extra security level is.
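    To make the message layout concrete, here is a hedged C sketch of the commitment computation only (not the full pairing protocol), using OpenSSL's one-shot SHA256(). The field widths follow the description above - 16-byte UUID, 128-byte DH public value, optional previous session key, 3-byte (24-bit) challenge - and the function name make_commit and the buffer sizing are assumptions for illustration. Whether mixing the previous session key into the commitment is safe is exactly the open question above; the sketch only pins down the byte string both sides would hash.

    ```c
    /* c = SHA256( UUID || DHpub [|| DHsess] || Chall ) */
    #include <openssl/sha.h>
    #include <string.h>

    void make_commit(unsigned char out[SHA256_DIGEST_LENGTH],
                     const unsigned char uuid[16],
                     const unsigned char *dh_pub, size_t dh_pub_len,
                     const unsigned char *prev_key, size_t prev_key_len, /* NULL on first pairing */
                     const unsigned char chall[3])
    {
        unsigned char buf[16 + 256 + 256 + 3];
        size_t pk = prev_key ? prev_key_len : 0;
        size_t off = 0;

        if (16 + dh_pub_len + pk + 3 > sizeof buf)
            return;                              /* sketch: caller respects stated sizes */

        memcpy(buf + off, uuid, 16);             off += 16;
        memcpy(buf + off, dh_pub, dh_pub_len);   off += dh_pub_len;
        if (prev_key) {                          /* repeat pairing: mix in shared secret */
            memcpy(buf + off, prev_key, pk);     off += pk;
        }
        memcpy(buf + off, chall, 3);             off += 3;

        SHA256(buf, off, out);
    }
    ```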

    Read the article
