Search Results

Search found 7107 results on 285 pages for 'processing efficiency'.

  • Rename image file on upload php

    - by blasteralfred
    Hi, I have a form which uploads and resizes an image. The HTML form submits its data to a PHP file. The script is as follows:

    Index.html

        <form action="resizer.php" method="post" enctype="multipart/form-data">
        Image: <input type="file" name="file" />
        <input type="submit" name="submit" value="upload" />
        </form>

    Resizer.php

        <?php
        require_once('imageresizer.class.php');
        $imagename = "myimagename";
        //Path To Upload Directory
        $dirpath = "uploaded/";
        //MAX WIDTH AND HEIGHT OF IMAGE
        $max_height = 100;
        $max_width = 100;
        //Create Image Control Object - Parameters(file name, file tmp name, file type, directory path)
        $resizer = new ImageResizer($_FILES['file']['name'], $_FILES['file']['tmp_name'], $dirpath);
        //RESIZE IMAGE - Parameters(max height, max width)
        $resizer->resizeImage($max_height, $max_width);
        //Display Image
        $resizer->showResizedImage();
        ?>

    imageresizer.class.php

        <?php
        class ImageResizer {
            public $file_name;
            public $tmp_name;
            public $dir_path;

            //Set variables
            public function __construct($file_name, $tmp_name, $dir_path) {
                $this->file_name = $file_name;
                $this->tmp_name = $tmp_name;
                $this->dir_path = $dir_path;
                $this->getImageInfo();
                $this->moveImage();
            }

            //Move the uploaded image to the new directory and rename
            public function moveImage() {
                if (!is_dir($this->dir_path)) {
                    mkdir($this->dir_path, 0777, true);
                }
                if (move_uploaded_file($this->tmp_name, $this->dir_path.'_'.$this->file_name)) {
                    $this->setFileName($this->dir_path.'_'.$this->file_name);
                }
            }

            //Define the new filename
            public function setFileName($file_name) {
                $this->file_name = $file_name;
                return $this->file_name;
            }

            //Resize the image with new max height and width
            public function resizeImage($max_height, $max_width) {
                $this->max_height = $max_height;
                $this->max_width = $max_width;
                if ($this->height > $this->width) {
                    $ratio = $this->height / $this->max_height;
                    $new_height = $this->max_height;
                    $new_width = ($this->width / $ratio);
                } elseif ($this->height < $this->width) {
                    $ratio = ($this->width / $this->max_width);
                    $new_width = $this->max_width;
                    $new_height = ($this->height / $ratio);
                } else {
                    $new_width = $this->max_width;
                    $new_height = $this->max_height;
                }
                $thumb = imagecreatetruecolor($new_width, $new_height);
                switch ($this->file_type) {
                    case 1: $image = imagecreatefromgif($this->file_name); break;
                    case 2: $image = imagecreatefromjpeg($this->file_name); break;
                    case 3: $image = imagecreatefrompng($this->file_name); break;
                    case 4: $image = imagecreatefromwbmp($this->file_name);
                }
                imagecopyresampled($thumb, $image, 0, 0, 0, 0, $new_width, $new_height, $this->width, $this->height);
                switch ($this->file_type) {
                    case 1: imagegif($thumb, $this->file_name); break;
                    case 2: imagejpeg($thumb, $this->file_name, 100); break;
                    case 3: imagepng($thumb, $this->file_name, 0); break;
                    case 4: imagewbmp($thumb, $this->file_name);
                }
                imagedestroy($image);
                imagedestroy($thumb);
            }

            public function getImageInfo() {
                list($width, $height, $type) = getimagesize($this->tmp_name);
                $this->width = $width;
                $this->height = $height;
                $this->file_type = $type;
            }

            public function showResizedImage() {
                echo "<img src='".$this->file_name."' />";
            }

            public function onSuccess() {
                header("location: index.php");
            }
        }
        ?>

    Everything is working well: the image is uploaded under its original filename and extension with a "_" prefix. But I want to rename the image to "myimagename" on upload, which is a variable in Resizer.php. How can I make this possible? Thanks in advance :) blasteralfred
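
    One way to get the fixed name, sketched against the Resizer.php shown above (the $extension and $newName helpers are illustrative additions, not part of the original code): build the target file name from $imagename plus the upload's original extension and pass that to the ImageResizer constructor instead of $_FILES['file']['name'].

        <?php
        // Sketch only: derive the extension from the uploaded file, then hand the
        // class a name based on $imagename instead of the original file name.
        $extension = strtolower(pathinfo($_FILES['file']['name'], PATHINFO_EXTENSION));
        $newName   = $imagename . '.' . $extension;   // e.g. "myimagename.jpg"

        $resizer = new ImageResizer($newName, $_FILES['file']['tmp_name'], $dirpath);
        $resizer->resizeImage($max_height, $max_width);
        $resizer->showResizedImage();
        ?>

    Note that the class's moveImage() still prepends "_" to whatever name it is given; drop that prefix in moveImage() if the file should be saved exactly as "myimagename.jpg".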

  • iPhone OS: Strategies for high density image work

    - by Jasconius
    I have a project coming around the bend this summer that is going to involve, potentially, an extremely high volume of image data for display. We are talking hundreds of 640x480-ish images in a given application session (scaled to a smaller resolution when displayed), and handfuls of very large (1280x1024 or higher) images at a time. I've already done some preliminary work and found that the typical 640x480-ish image is just a shade under 1 MB in memory when placed into a UIImageView and displayed... but the very large images can be a whopping 5+ MB in some cases. This project is actually targeted at the iPad, which, in my Instruments tests, seems to cap out at about 80-100 MB of addressable physical memory. Details aside, I need to start thinking about how to move huge volumes of image data between virtual and physical memory while preserving the fluidity and responsiveness of the application, which will be high visibility. I'm probably on the higher end of intermediate at Objective-C... so I am looking for some solid articles and advice on the following: 1) responsible management of UIImage and UIImageView in the name of conserving physical RAM, 2) merits of using CGImage over UIImage, particularly for the huge images, and whether there will be any performance gain, 3) anything dealing with memory paging, particularly as it pertains to images. I will epilogue by saying that the numbers I have above may be off by about 10 or 15%. Images may or may not end up being bundled into the actual app itself as opposed to being loaded in from an external server.

  • Can anyone give me a sample DSP script in C/C++

    - by Andrew
    I'm working on an (audio) DSP project and am just wondering if there are any sample (open source) DSP examples written in C or C++ for my MSP430 chip. I just want something as a guideline so I can write my own code using the ADC and DAC on my board for sampling. http://focus.ti.com/docs/toolsw/folders/print/msp-exp430f5438.html That's my board, the MSP430F5438 Experimenter Board; from what I heard it can run DSP code via the USB connection with the computer. I'm using CCS (Code Composer Studio, from TI) and Octave/Matlab. Any DSP example code or sites that will help me create my own would be appreciated. What I'm trying to do: partial audio (sampled) track -- Nyquist-rate sampling -- over- and undersampling -- reconstruction of the audio track.
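
    Since the question asks for a starting point rather than a finished project, here is a minimal, hardware-agnostic sketch in C of the kind of per-sample filter that is often a first DSP exercise; the ADC/DAC plumbing for the MSP430 is deliberately left out, and the function name is illustrative.

        #include <stdint.h>

        #define TAPS 4

        /* 4-tap moving-average filter: call once per ADC sample, send the
         * return value to the DAC. On the MSP430 this would typically live
         * inside the ADC interrupt service routine. */
        int16_t moving_average(int16_t new_sample)
        {
            static int16_t history[TAPS];
            static uint8_t index = 0;
            static int32_t sum = 0;

            sum -= history[index];        /* drop the oldest sample */
            history[index] = new_sample;  /* store the newest sample */
            sum += new_sample;
            index = (index + 1) % TAPS;

            return (int16_t)(sum / TAPS); /* filtered output */
        }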

  • Solid FFmpeg wrapper for C#/.NET

    - by Lillemanden
    I have been searching the web for some time for a solid FFmpeg wrapper for C#/.NET, but I have yet to come up with something useful. I have found the following three projects, but all of them appear to be dead in an early alpha stage:

    - FFmpeg.NET
    - ffmpeg-sharp
    - FFLIB.NET

    So my question is: does anyone know of a wrapper project that is more mature? I am not looking for a full transcoding engine with job queues and more, just a simple wrapper so I do not have to make a command-line call and then parse the console output, but can make method calls and use event listeners for progress. And please feel free to mention any active projects, even if they are still in the early stages.

  • php gdlib angle problem

    - by creativz
    I'm using PHP 5.2.13 with the bundled GD library and tried to make a picture with imagettftext ($image, $color and $font are defined, of course).

        imagettftext($image, 12, 90, 10, 20, $black, $font, "This.is_a test 123");
        //image, font size, angle, x value, y value, color, font, text

    As you can see, I want the angle to be 90°. The problem is that the text is not being rotated properly, e.g. the dots are at the top (and not at the bottom) of the text. I read that this is a common issue and has been fixed in PHP 5.3, but since I have 5.2.13 running on a web host (...), is there a solution to rotate the text properly while using 5.2.13? Thanks!
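
    One workaround that does not depend on the GD fix (a sketch only; spacing is rough and the scratch canvas uses a white background rather than a transparent one): draw the string horizontally on a scratch canvas, rotate that canvas with imagerotate(), then copy it onto the destination image.

        <?php
        $text = "This.is_a test 123";
        $box  = imagettfbbox(12, 0, $font, $text);        // bounding box at angle 0
        $w    = abs($box[4] - $box[0]) + 10;
        $h    = abs($box[5] - $box[1]) + 10;

        $scratch = imagecreatetruecolor($w, $h);
        $white   = imagecolorallocate($scratch, 255, 255, 255);
        imagefill($scratch, 0, 0, $white);
        imagettftext($scratch, 12, 0, 5, $h - 5, $black, $font, $text);

        $rotated = imagerotate($scratch, 90, $white);      // rotate the whole patch
        imagecopy($image, $rotated, 10, 20, 0, 0, imagesx($rotated), imagesy($rotated));
        ?>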

  • How to generate a lower frequency version of a signal in Matlab?

    - by estourodepilha.com
    With a sine input, I tried to modify its frequency by cutting some lower frequencies in the spectrum, shifting the main frequency towards zero. As the signal is not fftshifted, I tried to do that by eliminating some samples at the beginning and at the end of the FFT vector:

        interval = 1;
        samplingFrequency = 44100;
        signalFrequency = 440;
        sampleDuration = 1 / samplingFrequency;
        timespan = 1 : sampleDuration : (1 + interval);
        original = sin(2 * pi * signalFrequency * timespan);
        fourierTransform = fft(original);
        frequencyCut = 10; %% Hertz
        frequencyCut = floor(frequencyCut * (length(pattern) / samplingFrequency) / 4); %% Samples
        maxFrequency = length(fourierTransform) - (2 * frequencyCut);
        signal = ifft(fourierTransform(frequencyCut + 1:maxFrequency), 'symmetric');

    But it didn't work as expected. I also tried to remove the center part of the spectrum, but it yielded a higher-frequency sine wave too. How can I make it right?
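
    For comparison, a sketch of a different route that avoids deleting FFT bins (it assumes the Signal Processing Toolbox for hilbert()): take the analytic signal, whose spectrum is one-sided, modulate it with a complex exponential to slide the frequency down, and keep the real part.

        samplingFrequency = 44100;
        signalFrequency = 440;
        frequencyShift = 10;                              % Hz to shift downwards
        t = 0 : 1/samplingFrequency : 1;
        original = sin(2 * pi * signalFrequency * t);

        analytic = hilbert(original);                     % one-sided spectrum
        shifted = real(analytic .* exp(-1i * 2 * pi * frequencyShift * t));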

  • Matlab: Analysis of signal

    - by Mateusz
    Hi, I have a problem with this task: for an arbitrary waveform, perform frequency analysis and give the parameters of each signal component:

    - time of the beginning and ending of each component
    - beginning and ending frequency
    - amplitude (in the time domain) at the beginning and end of each component
    - level of noise in dB

    Assume that the parameters of each component, like amplitude and frequency, change linearly in time. The sampling frequency is 1000 Hz. For example, I have a signal like this:

        Nx=64;
        fs=1000;
        t=1/fs*(0:Nx-1);
        %==========================
        A1=1; A2=4;
        f1=500; f2=1000;
        x1=A1*cos(2*pi*f1*t);
        x2=A2*sin(2*pi*f2*t);
        %==========================
        x=x1+x2;
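
    A sketch of one common starting point (it assumes the Signal Processing Toolbox): a short-time Fourier transform shows when each component starts and stops and how its frequency and amplitude evolve over time, which covers most of what the task asks for.

        [S, F, T] = spectrogram(x, hamming(32), 24, 64, fs);
        imagesc(T, F, 20*log10(abs(S) + eps));            % magnitude in dB
        axis xy; xlabel('Time (s)'); ylabel('Frequency (Hz)');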

  • How do you composite an image onto another image with PIL in Python?

    - by Sebastian
    I need to take an image and place it onto a new, generated white background in order for it to be converted into a downloadable desktop wallpaper. So the process would go:

    1) Generate new, all white image with 1440x900 dimensions
    2) Place existing image on top, centered
    3) Save as single image

    In PIL, I see the ImageDraw object, but nothing indicates it can draw existing image data onto another image. Suggestions or links anyone can recommend?
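
    For reference, a minimal sketch of how this can be done with PIL's Image module rather than ImageDraw (file names are illustrative):

        from PIL import Image

        # 1) new all-white 1440x900 canvas
        background = Image.new("RGB", (1440, 900), "white")

        # 2) paste the existing image centered
        photo = Image.open("photo.png")
        w, h = photo.size
        offset = ((1440 - w) // 2, (900 - h) // 2)
        background.paste(photo, offset)   # pass photo as a third argument to respect its alpha

        # 3) save as a single image
        background.save("wallpaper.jpg", "JPEG")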

  • Simple but efficient way to store a series of small changes to an image?

    - by finnw
    I have a series of images. Each one is typically (but not always) similar to the previous one, with 3 or 4 small rectangular regions updated. I need to record these changes using a minimum of disk space. The source images are not compressed, but I would like the deltas to be compressed. I need to be able to recreate the images exactly as input (so a lossy video codec is not appropriate). I am thinking of something along the lines of:

    - Composite the new image with a negative of the old image
    - Save the composited image in any common format that can compress using RLE (probably PNG)
    - Recreate the second image by compositing the previous image with the delta

    Although the images have an alpha channel, I can ignore it for the purposes of this function. Is there an easy-to-implement algorithm or free Java library with this capability?
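
    A rough sketch of the idea in plain Java (class and file names are illustrative): XOR-ing the ARGB values of two same-sized frames is exactly invertible, unchanged regions become solid zero pixels that PNG compresses very well, and applying the same XOR to the previous frame and the delta reconstructs the next frame bit-for-bit.

        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;

        public class FrameDelta {
            static BufferedImage xor(BufferedImage a, BufferedImage b) {
                BufferedImage out =
                    new BufferedImage(a.getWidth(), a.getHeight(), BufferedImage.TYPE_INT_ARGB);
                for (int y = 0; y < a.getHeight(); y++)
                    for (int x = 0; x < a.getWidth(); x++)
                        out.setRGB(x, y, a.getRGB(x, y) ^ b.getRGB(x, y));
                return out;
            }

            public static void main(String[] args) throws Exception {
                BufferedImage prev = ImageIO.read(new File("frame1.png"));
                BufferedImage next = ImageIO.read(new File("frame2.png"));

                // delta = prev XOR next; xor(prev, delta) rebuilds next exactly
                ImageIO.write(xor(prev, next), "png", new File("delta.png"));
            }
        }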

  • GD! Converting a png image to jpeg and making the alpha by default white and not black.

    - by Shawn
    I tried something like this, but it just makes the background of the image white, not necessarily the alpha of the image. I want to just upload everything as JPGs, so if I could somehow "flatten" a PNG image with some transparency so that the transparent areas default to white, I could use it as a JPG instead. Appreciate any help. Thanks.

        $old = imagecreatefrompng($upload);
        $background = imagecolorallocate($old, 255, 255, 255);
        imagefill($old, 0, 0, $background);
        imagealphablending($old, false);
        imagesavealpha($old, true);
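
    A sketch of a different approach (the output path and quality value are illustrative): instead of filling the PNG itself, blend it onto a white canvas of the same size and save the canvas as a JPEG.

        $png = imagecreatefrompng($upload);
        $width  = imagesx($png);
        $height = imagesy($png);

        $flat  = imagecreatetruecolor($width, $height);
        $white = imagecolorallocate($flat, 255, 255, 255);
        imagefill($flat, 0, 0, $white);

        imagecopy($flat, $png, 0, 0, 0, 0, $width, $height);   // alpha blends onto white
        imagejpeg($flat, 'output.jpg', 90);

        imagedestroy($png);
        imagedestroy($flat);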

  • Image coding library

    - by Dmitry
    Is there any good library for lossless image encoding/decoding that has a compression rate more or less similar to PNG, but where decoding to raw RGB bitmap data would be much faster than PNG? Alpha transparency is also needed, but not essential, because the alpha channel could be taken from a separate image. The original problem lies in the slowness of reading and decoding PNG files on the iPhone using the standard libraries. The obvious and simplest solution would have been storing raw RGB bitmap data, but then the size of the unpacked ipa is too large - 4 times larger than the PNG files. So, I am trying to find some compromise solution.

  • Matlab - Propagate points orthogonally on to the edge of shape boundaries

    - by Graham
    Hi, I have a set of points which I want to propagate onto the edge of a shape boundary defined by a binary image. The shape boundary is defined by a 1px-wide white edge. I also have the coordinates of these points stored in a 2-row by n-column matrix. The shape forms a concave boundary with no holes within itself, made of around 2500 points. I want to cast a ray from each point in the set in an orthogonal direction and detect at which point it intersects the shape boundary. What would be the best method to do this? Are there some sort of ray-tracing algorithms that could be used? Or would it be a case of taking the orthogonal unit vector, multiplying it by a scalar, and testing after each multiplication whether the end point of the vector is outside the shape boundary - and when the end point is outside the shape, just finding the point of intersection? Thank you very much in advance for any help!
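
    A sketch of the "march along the normal" idea from the question (bw, points and normals are assumed to already exist: the binary edge image, a 2-by-n matrix of [x; y] coordinates, and a 2-by-n matrix of unit normal vectors):

        hits = nan(2, size(points, 2));
        maxSteps = 500;
        for i = 1:size(points, 2)
            for s = 1:maxSteps
                p = round(points(:, i) + s * normals(:, i));
                if p(1) < 1 || p(2) < 1 || p(2) > size(bw, 1) || p(1) > size(bw, 2)
                    break;                      % left the image without hitting the edge
                end
                if bw(p(2), p(1))               % row = y, column = x
                    hits(:, i) = p;
                    break;
                end
            end
        end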

  • Is there an optimal config/format for a TIFF when using Tesseract or other OCR?

    - by Zando
    I'm having a bizarre problem with Tesseract. I have a name, "Janice", in a 200x40 pixel TIFF that Tesseract interprets as blank. I'm running hundreds of names through Tesseract and they are processed fine. What I'm actually doing, though, is breaking up a larger TIFF into smaller TIFFs of one word each. In the larger TIFF, Tesseract recognizes "Janice". What could cause it to hiccup in a TIFF that solely contains that word (and there's enough space around the word to not truncate any of the pixels)? I'm using ImageMagick to split the big TIFF - are there options I should set when reconstituting the new TIFF files?

  • Good library to load images of different formats

    - by codymanix
    Hi, I am developing an image viewer application, just like IrfanView or ACDSee, which should be capable of viewing lots of different image file formats (not just the standard ones that can be handled with System.Drawing.Image). I am currently using ImageMagick, but it isn't very fast and seems to be unstable with some image files. Can anyone suggest a good imaging library, ideally with a .NET wrapper already present?

  • Generating a scalogram of a signal

    - by Goz
    Hi there, I'm trying to build a scalogram view for my app to see whether there is relevant information we can retrieve from a wavelet transform, as opposed to using a spectrogram to see what can be retrieved via an FFT. So far I can take a waveform and perform the forward wavelet transform on it. However, I am lost at the next step: how do I turn this information into power/energy information? I have a set of waveforms at different frequencies, but I have, as I say, no frequency information. Can anyone tell me what the next step is for turning this transformed data into a scalogram? Any help would be much appreciated because my Google skills are failing me!
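
    For what it's worth, the scalogram is usually just the energy of the wavelet coefficients, i.e. their squared magnitude, plotted over time and scale; a sketch (C, t and scales are assumed to come from your own forward transform, with C of size [number of scales x number of samples]):

        E = abs(C) .^ 2;              % energy at each scale / time position
        imagesc(t, scales, E);
        axis xy; xlabel('Time'); ylabel('Scale');
        colorbar;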

  • Steganography Experiment - Trouble hiding message bits in DCT coefficients

    - by JohnHankinson
    I have an application requiring me to be able to embed loss-less data into an image. As such I've been experimenting with steganography, specifically via modification of DCT coefficients as the method I select, apart from being loss-less must also be relatively resilient against format conversion, scaling/DSP etc. From the research I've done thus far this method seems to be the best candidate. I've seen a number of papers on the subject which all seem to neglect specific details (some neglect to mention modification of 0 coefficients, or modification of AC coefficient etc). After combining the findings and making a few modifications of my own which include: 1) Using a more quantized version of the DCT matrix to ensure we only modify coefficients that would still be present should the image be JPEG'ed further or processed (I'm using this in place of simply following a zig-zag pattern). 2) I'm modifying bit 4 instead of the LSB and then based on what the original bit value was adjusting the lower bits to minimize the difference. 3) I'm only modifying the blue channel as it should be the least visible. This process must modify the actual image and not the DCT values stored in file (like jsteg) as there is no guarantee the file will be a JPEG, it may also be opened and re-saved at a later stage in a different format. For added robustness I've included the message multiple times and use the bits that occur most often, I had considered using a QR code as the message data or simply applying the reed-solomon error correction, but for this simple application and given that the "message" in question is usually going to be between 10-32 bytes I have plenty of room to repeat it which should provide sufficient redundancy to recover the true bits. No matter what I do I don't seem to be able to recover the bits at the decode stage. I've tried including / excluding various checks (even if it degrades image quality for the time being). I've tried using fixed point vs. double arithmetic, moving the bit to encode, I suspect that the message bits are being lost during the IDCT back to image. Any thoughts or suggestions on how to get this working would be hugely appreciated. (PS I am aware that the actual DCT/IDCT could be optimized from it's naive On4 operation using row column algorithm, or an FDCT like AAN, but for now it just needs to work :) ) Reference Papers: http://www.lokminglui.com/dct.pdf http://arxiv.org/ftp/arxiv/papers/1006/1006.1186.pdf Code for the Encode/Decode process in C# below: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Drawing.Imaging; using System.Drawing; namespace ImageKey { public class Encoder { public const int HIDE_BIT_POS = 3; // use bit position 4 (1 << 3). public const int HIDE_COUNT = 16; // Number of times to repeat the message to avoid error. // JPEG Standard Quantization Matrix. // (to get higher quality multiply by (100-quality)/50 .. // for lower than 50 multiply by 50/quality. Then round to integers and clip to ensure only positive integers. public static double[] Q = {16,11,10,16,24,40,51,61, 12,12,14,19,26,58,60,55, 14,13,16,24,40,57,69,56, 14,17,22,29,51,87,80,62, 18,22,37,56,68,109,103,77, 24,35,55,64,81,104,113,92, 49,64,78,87,103,121,120,101, 72,92,95,98,112,100,103,99}; // Maximum qauality quantization matrix (if all 1's doesn't modify coefficients at all). 
public static double[] Q2 = {1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1}; public static Bitmap Encode(Bitmap b, string key) { Bitmap response = new Bitmap(b.Width, b.Height, PixelFormat.Format32bppArgb); uint imgWidth = ((uint)b.Width) & ~((uint)7); // Maximum usable X resolution (divisible by 8). uint imgHeight = ((uint)b.Height) & ~((uint)7); // Maximum usable Y resolution (divisible by 8). // Start be transferring the unmodified image portions. // As we'll be using slightly less width/height for the encoding process we'll need the edges to be populated. for (int y = 0; y < b.Height; y++) for (int x = 0; x < b.Width; x++) { if( (x >= imgWidth && x < b.Width) || (y>=imgHeight && y < b.Height)) response.SetPixel(x, y, b.GetPixel(x, y)); } // Setup the counters and byte data for the message to encode. StringBuilder sb = new StringBuilder(); for(int i=0;i<HIDE_COUNT;i++) sb.Append(key); byte[] codeBytes = System.Text.Encoding.ASCII.GetBytes(sb.ToString()); int bitofs = 0; // Current bit position we've encoded too. int totalBits = (codeBytes.Length * 8); // Total number of bits to encode. for (int y = 0; y < imgHeight; y += 8) { for (int x = 0; x < imgWidth; x += 8) { int[] redData = GetRedChannelData(b, x, y); int[] greenData = GetGreenChannelData(b, x, y); int[] blueData = GetBlueChannelData(b, x, y); int[] newRedData; int[] newGreenData; int[] newBlueData; if (bitofs < totalBits) { double[] redDCT = DCT(ref redData); double[] greenDCT = DCT(ref greenData); double[] blueDCT = DCT(ref blueData); int[] redDCTI = Quantize(ref redDCT, ref Q2); int[] greenDCTI = Quantize(ref greenDCT, ref Q2); int[] blueDCTI = Quantize(ref blueDCT, ref Q2); int[] blueDCTC = Quantize(ref blueDCT, ref Q); HideBits(ref blueDCTI, ref blueDCTC, ref bitofs, ref totalBits, ref codeBytes); double[] redDCT2 = DeQuantize(ref redDCTI, ref Q2); double[] greenDCT2 = DeQuantize(ref greenDCTI, ref Q2); double[] blueDCT2 = DeQuantize(ref blueDCTI, ref Q2); newRedData = IDCT(ref redDCT2); newGreenData = IDCT(ref greenDCT2); newBlueData = IDCT(ref blueDCT2); } else { newRedData = redData; newGreenData = greenData; newBlueData = blueData; } MapToRGBRange(ref newRedData); MapToRGBRange(ref newGreenData); MapToRGBRange(ref newBlueData); for(int dy=0;dy<8;dy++) { for(int dx=0;dx<8;dx++) { int col = (0xff<<24) + (newRedData[dx+(dy*8)]<<16) + (newGreenData[dx+(dy*8)]<<8) + (newBlueData[dx+(dy*8)]); response.SetPixel(x+dx,y+dy,Color.FromArgb(col)); } } } } if (bitofs < totalBits) throw new Exception("Failed to encode data - insufficient cover image coefficients"); return (response); } public static void HideBits(ref int[] DCTMatrix, ref int[] CMatrix, ref int bitofs, ref int totalBits, ref byte[] codeBytes) { int tempValue = 0; for (int u = 0; u < 8; u++) { for (int v = 0; v < 8; v++) { if ( (u != 0 || v != 0) && CMatrix[v+(u*8)] != 0 && DCTMatrix[v+(u*8)] != 0) { if (bitofs < totalBits) { tempValue = DCTMatrix[v + (u * 8)]; int bytePos = (bitofs) >> 3; int bitPos = (bitofs) % 8; byte mask = (byte)(1 << bitPos); byte value = (byte)((codeBytes[bytePos] & mask) >> bitPos); // 0 or 1. 
if (value == 0) { int a = DCTMatrix[v + (u * 8)] & (1 << HIDE_BIT_POS); if (a != 0) DCTMatrix[v + (u * 8)] |= (1 << HIDE_BIT_POS) - 1; DCTMatrix[v + (u * 8)] &= ~(1 << HIDE_BIT_POS); } else if (value == 1) { int a = DCTMatrix[v + (u * 8)] & (1 << HIDE_BIT_POS); if (a == 0) DCTMatrix[v + (u * 8)] &= ~((1 << HIDE_BIT_POS) - 1); DCTMatrix[v + (u * 8)] |= (1 << HIDE_BIT_POS); } if (DCTMatrix[v + (u * 8)] != 0) bitofs++; else DCTMatrix[v + (u * 8)] = tempValue; } } } } } public static void MapToRGBRange(ref int[] data) { for(int i=0;i<data.Length;i++) { data[i] += 128; if(data[i] < 0) data[i] = 0; else if(data[i] > 255) data[i] = 255; } } public static int[] GetRedChannelData(Bitmap b, int sx, int sy) { int[] data = new int[8 * 8]; for (int y = sy; y < (sy + 8); y++) { for (int x = sx; x < (sx + 8); x++) { uint col = (uint)b.GetPixel(x,y).ToArgb(); data[(x - sx) + ((y - sy) * 8)] = (int)((col >> 16) & 0xff) - 128; } } return (data); } public static int[] GetGreenChannelData(Bitmap b, int sx, int sy) { int[] data = new int[8 * 8]; for (int y = sy; y < (sy + 8); y++) { for (int x = sx; x < (sx + 8); x++) { uint col = (uint)b.GetPixel(x, y).ToArgb(); data[(x - sx) + ((y - sy) * 8)] = (int)((col >> 8) & 0xff) - 128; } } return (data); } public static int[] GetBlueChannelData(Bitmap b, int sx, int sy) { int[] data = new int[8 * 8]; for (int y = sy; y < (sy + 8); y++) { for (int x = sx; x < (sx + 8); x++) { uint col = (uint)b.GetPixel(x, y).ToArgb(); data[(x - sx) + ((y - sy) * 8)] = (int)((col >> 0) & 0xff) - 128; } } return (data); } public static int[] Quantize(ref double[] DCTMatrix, ref double[] Q) { int[] DCTMatrixOut = new int[8*8]; for (int u = 0; u < 8; u++) { for (int v = 0; v < 8; v++) { DCTMatrixOut[v + (u * 8)] = (int)Math.Round(DCTMatrix[v + (u * 8)] / Q[v + (u * 8)]); } } return(DCTMatrixOut); } public static double[] DeQuantize(ref int[] DCTMatrix, ref double[] Q) { double[] DCTMatrixOut = new double[8*8]; for (int u = 0; u < 8; u++) { for (int v = 0; v < 8; v++) { DCTMatrixOut[v + (u * 8)] = (double)DCTMatrix[v + (u * 8)] * Q[v + (u * 8)]; } } return(DCTMatrixOut); } public static double[] DCT(ref int[] data) { double[] DCTMatrix = new double[8 * 8]; for (int v = 0; v < 8; v++) { for (int u = 0; u < 8; u++) { double cu = 1; if (u == 0) cu = (1.0 / Math.Sqrt(2.0)); double cv = 1; if (v == 0) cv = (1.0 / Math.Sqrt(2.0)); double sum = 0.0; for (int y = 0; y < 8; y++) { for (int x = 0; x < 8; x++) { double s = data[x + (y * 8)]; double dctVal = Math.Cos((2 * y + 1) * v * Math.PI / 16) * Math.Cos((2 * x + 1) * u * Math.PI / 16); sum += s * dctVal; } } DCTMatrix[u + (v * 8)] = (0.25 * cu * cv * sum); } } return (DCTMatrix); } public static int[] IDCT(ref double[] DCTMatrix) { int[] Matrix = new int[8 * 8]; for (int y = 0; y < 8; y++) { for (int x = 0; x < 8; x++) { double sum = 0; for (int v = 0; v < 8; v++) { for (int u = 0; u < 8; u++) { double cu = 1; if (u == 0) cu = (1.0 / Math.Sqrt(2.0)); double cv = 1; if (v == 0) cv = (1.0 / Math.Sqrt(2.0)); double idctVal = (cu * cv) / 4.0 * Math.Cos((2 * y + 1) * v * Math.PI / 16) * Math.Cos((2 * x + 1) * u * Math.PI / 16); sum += (DCTMatrix[u + (v * 8)] * idctVal); } } Matrix[x + (y * 8)] = (int)Math.Round(sum); } } return (Matrix); } } public class Decoder { public static string Decode(Bitmap b, int expectedLength) { expectedLength *= Encoder.HIDE_COUNT; uint imgWidth = ((uint)b.Width) & ~((uint)7); // Maximum usable X resolution (divisible by 8). uint imgHeight = ((uint)b.Height) & ~((uint)7); // Maximum usable Y resolution (divisible by 8). 
// Setup the counters and byte data for the message to decode. byte[] codeBytes = new byte[expectedLength]; byte[] outBytes = new byte[expectedLength / Encoder.HIDE_COUNT]; int bitofs = 0; // Current bit position we've decoded too. int totalBits = (codeBytes.Length * 8); // Total number of bits to decode. for (int y = 0; y < imgHeight; y += 8) { for (int x = 0; x < imgWidth; x += 8) { int[] blueData = ImageKey.Encoder.GetBlueChannelData(b, x, y); double[] blueDCT = ImageKey.Encoder.DCT(ref blueData); int[] blueDCTI = ImageKey.Encoder.Quantize(ref blueDCT, ref Encoder.Q2); int[] blueDCTC = ImageKey.Encoder.Quantize(ref blueDCT, ref Encoder.Q); if (bitofs < totalBits) GetBits(ref blueDCTI, ref blueDCTC, ref bitofs, ref totalBits, ref codeBytes); } } bitofs = 0; for (int i = 0; i < (expectedLength / Encoder.HIDE_COUNT) * 8; i++) { int bytePos = (bitofs) >> 3; int bitPos = (bitofs) % 8; byte mask = (byte)(1 << bitPos); List<int> values = new List<int>(); int zeroCount = 0; int oneCount = 0; for (int j = 0; j < Encoder.HIDE_COUNT; j++) { int val = (codeBytes[bytePos + ((expectedLength / Encoder.HIDE_COUNT) * j)] & mask) >> bitPos; values.Add(val); if (val == 0) zeroCount++; else oneCount++; } if (oneCount >= zeroCount) outBytes[bytePos] |= mask; bitofs++; values.Clear(); } return (System.Text.Encoding.ASCII.GetString(outBytes)); } public static void GetBits(ref int[] DCTMatrix, ref int[] CMatrix, ref int bitofs, ref int totalBits, ref byte[] codeBytes) { for (int u = 0; u < 8; u++) { for (int v = 0; v < 8; v++) { if ((u != 0 || v != 0) && CMatrix[v + (u * 8)] != 0 && DCTMatrix[v + (u * 8)] != 0) { if (bitofs < totalBits) { int bytePos = (bitofs) >> 3; int bitPos = (bitofs) % 8; byte mask = (byte)(1 << bitPos); int value = DCTMatrix[v + (u * 8)] & (1 << Encoder.HIDE_BIT_POS); if (value != 0) codeBytes[bytePos] |= mask; bitofs++; } } } } } } } UPDATE: By switching to using a QR Code as the source message and swapping a pair of coefficients in each block instead of bit manipulation I've been able to get the message to survive the transform. However to get the message to come through without corruption I have to adjust both coefficients as well as swap them. For example swapping (3,4) and (4,3) in the DCT matrix and then respectively adding 8 and subtracting 8 as an arbitrary constant seems to work. This survives a re-JPEG'ing of 96 but any form of scaling/cropping destroys the message again. I was hoping that by operating on mid to low frequency values that the message would be preserved even under some light image manipulation.

  • How do I use CSS to add a non-rectangular border around an image?

    - by KPL
    Hello people, I have three images, and they are not square or rectangular in shape - they are just like someone's face. So, basically, my images are 196x196 in size or something like that: a complete square or rectangle with the face in the middle and a transparent background in the rest of the portion. Now, I want to remove the transparent background too and just keep the faces. I don't know if this is possible, and mind you, this isn't a programming question. EDIT (from comments): How do I put a border around the shape of the image, not a rectangular one around the boundary, using CSS?

  • How to display part of an image at a specific width and height?

    - by Brady Chu
    Recently I participated in a web project which has a huge number of images to handle and display on web pages. We know that the width and height of the images end users upload cannot be controlled easily, so they are hard to display. At first I attempted to zoom the images in/out to reach an appropriate presentation, and I managed it, but my boss is still not satisfied with my solution. The following is my way:

        var autoResizeImage = function(maxWidth, maxHeight, objImg) {
            var img = new Image();
            img.src = objImg.src;
            img.onload = function() {
                var hRatio;
                var wRatio;
                var Ratio = 1;
                var w = img.width;
                var h = img.height;
                wRatio = maxWidth / w;
                hRatio = maxHeight / h;
                if (maxWidth == 0 && maxHeight == 0) {
                    Ratio = 1;
                } else if (maxWidth == 0) {
                    if (hRatio < 1) { Ratio = hRatio; }
                } else if (maxHeight == 0) {
                    if (wRatio < 1) { Ratio = wRatio; }
                } else if (wRatio < 1 || hRatio < 1) {
                    Ratio = (wRatio <= hRatio ? wRatio : hRatio);
                }
                if (Ratio < 1) {
                    w = w * Ratio;
                    h = h * Ratio;
                }
                w = w <= 0 ? 250 : w;
                h = h <= 0 ? 370 : h;
                objImg.height = h;
                objImg.width = w;
            };
        };

    This approach only limits the maximum width and height of each image, so every image in the album still has a different width and height, which is still very ugly. And right at this minute, I know we can create a DIV and use the image as its background image, but that way seemed too complicated and indirect, so I haven't taken it. So I was wondering whether there is a better way to display images at a fixed width and height without distorting them? Thanks.
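
    For reference, the DIV-with-background route mentioned above is fairly short in practice; a sketch (the function name and sizes are illustrative): every thumbnail gets the same fixed box, and the browser scales and centre-crops the picture to fill it, so nothing is stretched.

        var showThumb = function(container, src, width, height) {
            container.style.width = width + 'px';
            container.style.height = height + 'px';
            container.style.backgroundImage = 'url(' + src + ')';
            container.style.backgroundPosition = 'center center';
            container.style.backgroundRepeat = 'no-repeat';
            container.style.backgroundSize = 'cover';   // CSS3: fill the box, crop the overflow
        };

        // usage: showThumb(document.getElementById('photo1'), 'uploads/a.jpg', 250, 370);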

  • Parallelism in Python

    - by fmark
    What are the options for achieving parallelism in Python? I want to perform a bunch of CPU-bound calculations over some very large rasters, and would like to parallelise them. Coming from a C background, I am familiar with three approaches to parallelism:

    - Message-passing processes, possibly distributed across a cluster, e.g. MPI.
    - Explicit shared-memory parallelism, either using pthreads or fork(), pipe(), et al.
    - Implicit shared-memory parallelism, using OpenMP.

    Deciding on an approach to use is an exercise in trade-offs. In Python, what approaches are available and what are their characteristics? Is there a clusterable MPI clone? What are the preferred ways of achieving shared-memory parallelism? I have heard reference to problems with the GIL, as well as references to tasklets. In short, what do I need to know about the different parallelization strategies in Python before choosing between them?
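
    As a point of reference, here is a minimal sketch of the most common CPython route for CPU-bound work: the multiprocessing module sidesteps the GIL by using separate worker processes (mpi4py is the usual MPI binding if a cluster is needed). Function and variable names are illustrative.

        from multiprocessing import Pool

        def process_tile(tile):
            # stand-in for the real per-raster-tile computation
            return sum(x * x for x in tile)

        if __name__ == "__main__":
            tiles = [range(100000)] * 16       # stand-in for raster chunks
            pool = Pool()                      # one worker process per CPU core by default
            results = pool.map(process_tile, tiles)
            pool.close()
            pool.join()
            print(len(results))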

  • Sending series of images to display like a movie on iPhone

    - by unknownthreat
    Allow me to elaborate. On the server, we will have a program that takes data from the iPhone, processes that data, and produces a series of images. Each time an image is generated, it will be sent back to be displayed on the iPhone. I have done all of the things above using UDP, OpenGL, and such. It works: the images are transferred to the iPhone and can be displayed, but it is slow. The images' resolution is around 320 x 420 and we send the image pixel by pixel. This naive implementation leads to a slow framerate - I see around 2-3 frames per second. There are also some UDP packets dropped, but this is expected. Is there any sort of compression method available for something like this? Are there any other methods that can make this better? NOTE: please don't just write "compression" as an answer, because we are aware that we will need to do it in some way.
