Search Results

Search found 21 results on 1 page for 'rescale'.

Page 1/1

  • Can pdflatex (or any TeX package) automatically rescale included images which have been reduced in size?

    - by drfrogsplat
    I'm writing my thesis in LaTeX, generating it with pdflatex. I have a large number of figures, many of which are bitmaps (as opposed to SVG) in PNG/JPEG format. I've generally created them to be fairly high resolution (say 1600x1200-ish) to ensure that whatever size they end up in the document, they'll be at least 300dpi when printed. As I'm writing/laying out the document, I'm including graphics (using \includegraphics from the graphicx package) and setting widths/heights as appropriate (e.g. subfigures are quite small). I don't need the images to be any more than about 300 dpi at best, so where I have shrunk a 1600x1200 image down to say 5cm, the image is now at 800 dpi. So despite including some very small (on the page) images, the PDF is becoming quite large. Is there a way to tell pdflatex or graphicx (or something else involved?) to convert all images to a maximum of 300 dpi, based on the dimensions I'm setting with say \includegraphics[width=2in]{filename}? i.e. so it scales the image to a max of 600x600 pixels as it includes it in the PDF (leaving the original file untouched). I know I can resize the original images with various command line applications, and include the pre-resized versions, but given the images vary in size considerably, it wouldn't be as simple as making sure they're all 300dpi for a constant printed size. It'd also be nice to be able to easily create different versions of PDFs (web vs final print) without resizing images manually, so that the 'web' PDF capped images at say 72-100 dpi while the final print one could cap at 600 (if at all).
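    pdflatex and graphicx embed PNG/JPEG bitmaps as-is and never resample them, so the capping has to happen in a pre-processing step outside LaTeX. A minimal sketch of such a step (in Java with javax.imageio; the printed width per figure is an input you would supply, since it varies from figure to figure):

        import java.awt.Graphics2D;
        import java.awt.RenderingHints;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;

        public class CapDpi {
            // Downscale src so that, at the given printed width, it is at most maxDpi.
            // Images already below the cap are copied through untouched.
            public static void cap(File src, File dst, double printedWidthInches, int maxDpi) throws Exception {
                BufferedImage in = ImageIO.read(src);
                int maxPixels = (int) Math.round(printedWidthInches * maxDpi);
                if (in.getWidth() <= maxPixels) {
                    ImageIO.write(in, "png", dst);
                    return;
                }
                int h = (int) ((long) in.getHeight() * maxPixels / in.getWidth());
                BufferedImage out = new BufferedImage(maxPixels, h, BufferedImage.TYPE_INT_ARGB);
                Graphics2D g = out.createGraphics();
                g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                                   RenderingHints.VALUE_INTERPOLATION_BILINEAR);
                g.drawImage(in, 0, 0, maxPixels, h, null);
                g.dispose();
                ImageIO.write(out, "png", dst);
            }
        }

    Running it with maxDpi = 300 for the print build and 100 for the web build would produce the two PDF variants the question asks about.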

    Read the article

  • ImageView doesn't rescale Image to selected size

    - by Buni
    I'm using an ImageView with a fixed size to add an icon to a menu. I use it many times in my application, but on this ImageView the layout params seem not to work. Unlike the other ImageViews, in this case I'm inflating a template directly, but I think that's not the problem.

        <?xml version="1.0" encoding="utf-8"?>
        <ImageView xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="1dp"
            android:layout_height="1dp"
            android:scaleType="fitCenter"
            android:src="@drawable/ic_menu_moreoverflow_normal_holo_dark"
            android:contentDescription="ICON" />

    It is used in code as follows:

        ImageView iview = (ImageView) View.inflate(context, R.layout.icon, null);

    Theoretically it should resize the image automatically; however, the image keeps its original size, even though the requested size was 1dp. Where is the problem? Thanks a lot!
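    A plausible cause, given that the snippet inflates with a null parent: Android discards the layout_width/layout_height of an inflated root view when no parent is supplied, because LayoutParams can only be generated against a parent ViewGroup. A sketch of the usual fix, where menuContainer stands in for whatever ViewGroup will actually hold the icon:

        // Inflate against the eventual parent so the root view's LayoutParams
        // (the 1dp x 1dp size) are actually created; attachToRoot = false
        // defers adding the view until we do it ourselves.
        LayoutInflater inflater = LayoutInflater.from(context);
        ImageView iview = (ImageView) inflater.inflate(R.layout.icon, menuContainer, false);
        menuContainer.addView(iview);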

    Read the article

  • Image rescale and write rescaled image file in blackberry

    - by Karthick
    I am using the following code to resize an image and save it to the BlackBerry device. After scaling the image I try to write the image file to the device, but it writes the same data (the height and width of the image are unchanged). I need to produce a rescaled image file. Can anyone help me?

        class ResizeImage extends MainScreen implements FieldChangeListener {
            private String path = "file:///SDCard/BlackBerry/pictures/test.jpg";
            private ButtonField btn;

            ResizeImage() {
                btn = new ButtonField("Write File");
                btn.setChangeListener(this);
                add(btn);
            }

            public void fieldChanged(Field field, int context) {
                if (field == btn) {
                    try {
                        InputStream inputStream = null;
                        // Get File Connection
                        FileConnection fileConnection = (FileConnection) Connector.open(path);
                        if (fileConnection.exists()) {
                            inputStream = fileConnection.openInputStream();
                            //byte data[]=inputStream.toString().getBytes();
                            ByteArrayOutputStream baos = new ByteArrayOutputStream();
                            int j = 0;
                            while ((j = inputStream.read()) != -1) {
                                baos.write(j);
                            }
                            byte data[] = baos.toByteArray();
                            inputStream.close();
                            fileConnection.close();
                            WriteFile("file:///SDCard/BlackBerry/pictures/org_Image.jpg", data);
                            EncodedImage eImage = EncodedImage.createEncodedImage(data, 0, data.length);
                            int scaleFactorX = Fixed32.div(Fixed32.toFP(eImage.getWidth()), Fixed32.toFP(80));
                            int scaleFactorY = Fixed32.div(Fixed32.toFP(eImage.getHeight()), Fixed32.toFP(80));
                            eImage = eImage.scaleImage32(scaleFactorX, scaleFactorY);
                            WriteFile("file:///SDCard/BlackBerry/pictures/resize.jpg", eImage.getData());
                            BitmapField bit = new BitmapField(eImage.getBitmap());
                            add(bit);
                        }
                    } catch (Exception e) {
                        System.out.println("Exception is ==> " + e.getMessage());
                    }
                }
            }

            void WriteFile(String fileName, byte[] data) {
                FileConnection fconn = null;
                try {
                    fconn = (FileConnection) Connector.open(fileName, Connector.READ_WRITE);
                } catch (IOException e) {
                    System.out.print("Error opening file");
                }
                if (fconn.exists()) {
                    try {
                        fconn.delete();
                    } catch (IOException e) {
                        System.out.print("Error deleting file");
                    }
                }
                try {
                    fconn.create();
                } catch (IOException e) {
                    System.out.print("Error creating file");
                }
                OutputStream out = null;
                try {
                    out = fconn.openOutputStream();
                } catch (IOException e) {
                    System.out.print("Error opening output stream");
                }
                try {
                    out.write(data);
                } catch (IOException e) {
                    System.out.print("Error writing to output stream");
                }
                try {
                    fconn.close();
                } catch (IOException e) {
                    System.out.print("Error closing file");
                }
            }
        }

    Read the article

  • Cocoa : Scale Image dragged into an ImageWell

    - by Holli
    I am working on an application to keep a comic book collection in order. The user should be able to drag an image of the cover artwork into the program via an ImageWell. Since it is not possible to drag the image out of the application again, I don't need to save the picture at its original size; an image the size of the ImageWell would be just fine. The question is: how do I rescale the image with my application? To make things even more complicated, the ImageWell is bound with Core Data, so I need to rescale the image before Core Data saves the picture at its original size.

    Read the article

  • .net Drawing.Graphics.FromImage() returns blank black image

    - by joox
    I'm trying to rescale an uploaded JPEG in ASP.NET, so I go:

        Image original = Image.FromStream(myPostedFile.InputStream);
        int w = original.Width, h = original.Height;
        using (Graphics g = Graphics.FromImage(original))
        {
            g.ScaleTransform(0.5f, 0.5f);
            ... // e.g.
            using (Bitmap done = new Bitmap(w, h, g))
            {
                // saves blank black, though with correct width and height
                done.Save(Server.MapPath(saveas), ImageFormat.Jpeg);
            }
        }

    This saves a virgin black JPEG whatever file I give it. However, if I pass the input image stream straight into the done bitmap, it recompresses and saves fine, like:

        Image original = Image.FromStream(myPostedFile.InputStream);
        using (Bitmap done = new Bitmap(original))
        {
            done.Save(Server.MapPath(saveas), ImageFormat.Jpeg);
        }

    Do I have to work some magic with g?

    Update: I tried:

        Image original = Image.FromStream(fstream);
        int w = original.Width, h = original.Height;
        using (Bitmap b = new Bitmap(original)) // also tried new Bitmap(w, h)
        using (Graphics g = Graphics.FromImage(b))
        {
            g.DrawImage(original, 0, 0, w, h); // also tried g.DrawImage(b, 0, 0, w, h)
            using (Bitmap done = new Bitmap(w, h, g))
            {
                done.Save(Server.MapPath(saveas), ImageFormat.Jpeg);
            }
        }

    Same story: pure black of the correct dimensions.

    Read the article

  • Sampler referencing in HLSL - Sampler parameter must come from a literal expression

    - by user1423893
    The following method works fine when referencing a sampler in HLSL:

        float3 P = lightScreenPos;
        sampler ShadowSampler = DPFrontShadowSampler;
        float depth;

        if (alpha >= 0.5f)
        {
            // Reference the correct sampler
            ShadowSampler = DPFrontShadowSampler;

            // Front hemisphere 'P0'
            P.z = P.z + 1.0;
            P.x = P.x / P.z;
            P.y = P.y / P.z;
            P.z = lightLength / LightAttenuation.z;

            // Rescale viewport to be [0, 1] (texture coordinate space)
            P.x = 0.5f * P.x + 0.5f;
            P.y = -0.5f * P.y + 0.5f;

            depth = tex2D(ShadowSampler, P.xy).x;
            depth = 1.0 - depth;
        }
        else
        {
            // Reference the correct sampler
            ShadowSampler = DPBackShadowSampler;

            // Back hemisphere 'P1'
            P.z = 1.0 - P.z;
            P.x = P.x / P.z;
            P.y = P.y / P.z;
            P.z = lightLength / LightAttenuation.z;

            // Rescale viewport to be [0, 1] (texture coordinate space)
            P.x = 0.5f * P.x + 0.5f;
            P.y = -0.5f * P.y + 0.5f;

            depth = tex2D(ShadowSampler, P.xy).x;
            depth = 1.0 - depth;
        }

        // [Standard Depth Calculation]
        float mydepth = P.z;
        shadow = depth + Bias.x < mydepth ? 0.0f : 1.0f;

    If I try to do anything with the sampler reference outside the if statement, I get the following error:

        Sampler parameter must come from a literal expression

    This code demonstrates the problem:

        float3 P = lightScreenPos;
        sampler ShadowSampler = DPFrontShadowSampler;

        if (alpha >= 0.5f)
        {
            // Reference the correct sampler
            ShadowSampler = DPFrontShadowSampler;

            // Front hemisphere 'P0'
            P.z = P.z + 1.0;
            P.x = P.x / P.z;
            P.y = P.y / P.z;
            P.z = lightLength / LightAttenuation.z;
        }
        else
        {
            // Reference the correct sampler
            ShadowSampler = DPBackShadowSampler;

            // Back hemisphere 'P1'
            P.z = 1.0 - P.z;
            P.x = P.x / P.z;
            P.y = P.y / P.z;
            P.z = lightLength / LightAttenuation.z;
        }

        // Rescale viewport to be [0, 1] (texture coordinate space)
        P.x = 0.5f * P.x + 0.5f;
        P.y = -0.5f * P.y + 0.5f;

        // [Standard Depth Calculation]
        float depth = tex2D(ShadowSampler, P.xy).x;
        depth = 1.0 - depth;
        float mydepth = P.z;
        shadow = depth + Bias.x < mydepth ? 0.0f : 1.0f;

    How can I reference the sampler in this manner without triggering the error?

    Read the article

  • synchronized in java - Proper use

    - by ZoharYosef
    I'm building a simple program to be used by multiple threads. My question is more about understanding: when do I have to use the reserved word synchronized? Do I need to use it on every method that touches the member variables? I know I can put it on any method that is not static, but I want to understand more. Thank you! Here is the code:

        public class Container {
            // *** data members ***
            public static final int INIT_SIZE = 10; // the first (init) size of the set.
            public static final int RESCALE = 10;   // the re-scale factor of this set.
            private int _sp = 0;
            public Object[] _data;

            /************ Constructors ************/
            public Container() {
                _sp = 0;
                _data = new Object[INIT_SIZE];
            }

            public Container(Container other) { // copy constructor
                this();
                for (int i = 0; i < other.size(); i++) this.add(other.at(i));
            }

            /** return true if this collection is empty, else return false. */
            public synchronized boolean isEmpty() { return _sp == 0; }

            /** add an Object to this set */
            public synchronized void add(Object p) {
                if (_sp == _data.length) rescale(RESCALE);
                _data[_sp] = p; // shallow copy semantics.
                _sp++;
            }

            /** returns the actual amount of Objects contained in this collection */
            public synchronized int size() { return _sp; }

            /** returns true if this container contains an element which is equal to ob */
            public synchronized boolean isMember(Object ob) {
                return get(ob) != -1;
            }

            /** return the index of the first object which equals ob; if none, returns -1 */
            public synchronized int get(Object ob) {
                int ans = -1;
                for (int i = 0; i < size(); i = i + 1)
                    if (at(i).equals(ob)) return i;
                return ans;
            }

            /** returns the element located at the ind place in this container (null if out of range) */
            public synchronized Object at(int p) {
                if (p >= 0 && p < size()) return _data[p];
                else return null;
            }
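    Not from the original post, but a minimal illustration of the rule of thumb the poster is after: synchronized is needed on every method that reads or writes shared mutable state, because unsynchronized read-modify-write sequences from two threads can interleave and lose updates.

        public class RaceDemo {
            private int count = 0;

            // Safe: only one thread at a time runs the read-increment-write.
            public synchronized void inc() { count++; }

            // Also synchronized, so a reading thread is guaranteed the latest value.
            public synchronized int get() { return count; }

            public static void main(String[] args) throws InterruptedException {
                RaceDemo d = new RaceDemo();
                Runnable r = () -> { for (int i = 0; i < 100_000; i++) d.inc(); };
                Thread t1 = new Thread(r), t2 = new Thread(r);
                t1.start(); t2.start();
                t1.join(); t2.join();
                System.out.println(d.get()); // 200000; drop 'synchronized' and it is usually less
            }
        }

    By that rule, Container is mostly right: every method touching _sp or _data is synchronized. The public _data field is the remaining hole, since outside code can bypass the locks entirely.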

    Read the article

  • matlab: putting a circled number onto a graph

    - by Jason S
    I want to put a circled number on a graph as a marker near (but not on) a point. Sounds easy, but I also want it to be invariant to zoom/aspect-ratio changes. Because of this invariance, I can't draw the circle as a line object (without redrawing it upon rescale); if I use a circle marker, I'd have to adjust its offset upon rescale. The simplest approach I can think of is to use the Unicode or Wingdings characters ① ② ③ etc. in a string for the text() function. But Unicode doesn't seem to work right, and the following sample only works with ① and not for the other numbers (which yield rectangular boxes):

        % works:
        clf; text(0.5,0.5,char(129),'FontName','WingDings')

        % doesn't work (should be a circled 2):
        clf; text(0.5,0.5,char(130),'FontName','WingDings')

    What gives, and can anyone suggest a workaround?

    Read the article

  • Deferred rendering with VSM - Scaling light depth loses moments

    - by user1423893
    I'm calculating my shadow term using a VSM method. This works correctly when using forward-rendered lights but fails with deferred lights.

        // Shadow term (1 = no shadow)
        float shadow = 1;

        // [Light Space -> Shadow Map Space]
        // Transform the surface into light space and project
        // NB: Could be done in the vertex shader, but doing it here keeps the
        // "light shader" abstraction and doesn't limit the number of shadowed lights
        float4x4 LightViewProjection = mul(LightView, LightProjection);
        float4 surf_tex = mul(position, LightViewProjection);

        // Re-homogenize
        // 'w' component is not used in later calculations so no need to homogenize
        // (it will equal '1' if homogenized)
        surf_tex.xyz /= surf_tex.w;

        // Rescale viewport to be [0,1] (texture coordinate system)
        float2 shadow_tex;
        shadow_tex.x = surf_tex.x * 0.5f + 0.5f;
        shadow_tex.y = -surf_tex.y * 0.5f + 0.5f;

        // Half texel offset
        //shadow_tex += (0.5 / 512);

        // Scaled distance to light (instead of 'surf_tex.z')
        float rescaled_dist_to_light = dist_to_light / LightAttenuation.y;
        //float rescaled_dist_to_light = surf_tex.z;

        // [Variance Shadow Map Depth Calculation]
        // No filtering
        float2 moments = tex2D(ShadowSampler, shadow_tex).xy;

        // Flip the moments values to bring them back to their original values
        moments.x = 1.0 - moments.x;
        moments.y = 1.0 - moments.y;

        // Compute variance
        float E_x2 = moments.y;
        float Ex_2 = moments.x * moments.x;
        float variance = E_x2 - Ex_2;
        variance = max(variance, Bias.y);

        // Surface is fully lit if the current pixel is before the light occluder (lit_factor == 1)
        // One-tailed inequality valid if
        float lit_factor = (rescaled_dist_to_light <= moments.x - Bias.x);

        // Compute probabilistic upper bound (mean distance)
        float m_d = moments.x - rescaled_dist_to_light;

        // Chebychev's inequality
        float p = variance / (variance + m_d * m_d);
        p = ReduceLightBleeding(p, Bias.z);

        // Adjust the light color based on the shadow attenuation
        shadow *= max(lit_factor, p);

    This is what I know for certain so far:

    - The lighting is correct if I do not try to calculate the shadow term (no shadows).
    - The shadow term is correct when calculated using forward-rendered lighting (VSM works with forward-rendered lights).

    With the current rescaled light distance (LightAttenuation.y is the far plane value):

        float rescaled_dist_to_light = dist_to_light / LightAttenuation.y;

    the light is correct but the shadow appears to be zoomed in and misses the blurring. When I do not rescale the light distance and use the homogenized 'surf_tex' instead:

        float rescaled_dist_to_light = surf_tex.z;

    the shadows are blurred correctly but the lighting is incorrect and the cube model is no longer lit. Why is scaling by the far plane value (LightAttenuation.y) zooming in too far? The only other factor involved is my world pixel position, which is calculated as follows:

        // [Position]
        float4 position;

        // [Screen Position]
        position.xy = input.PositionClone.xy; // 'x' and 'y' already homogenized for uv coordinates above
        position.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component
        position.z = 1.0 - position.z;
        position.w = 1.0; // 1.0 = position.w / position.w

        // [World Position]
        position = mul(position, CameraViewProjectionInverse);

        // Re-homogenize position (xyz AND w, otherwise shadows will bend when camera is close)
        position.xyz /= position.w;
        position.w = 1.0;

    Using the inverse of the camera's view x projection matrix does work for lighting, but maybe it is incorrect for the shadow calculation?

    EDIT: Light calculations for shadow, including 'dist_to_light':

        // Work out the light position and direction in world space
        float3 light_position = float3(LightViewInverse._41, LightViewInverse._42, LightViewInverse._43);

        // Direction might need to be negated
        float3 light_direction = float3(-LightViewInverse._31, -LightViewInverse._32, -LightViewInverse._33);

        // Unnormalized light vector
        float3 dir_to_light = light_position - position;

        // Direction from vertex
        float dist_to_light = length(dir_to_light);

        // Normalise 'toLight' vector for lighting calculations
        dir_to_light = normalize(dir_to_light);

    EDIT 2: These are the calculations for the moments (depth):

        //=============================================
        //---[Vertex Shaders]--------------------------
        //=============================================
        DepthVSOutput depth_VS(
            float4 Position : POSITION,
            uniform float4x4 shadow_view,
            uniform float4x4 shadow_view_projection)
        {
            DepthVSOutput output = (DepthVSOutput)0;

            // First transform position into world space
            float4 position_world = mul(Position, World);

            output.position_screen = mul(position_world, shadow_view_projection);
            output.light_vec = mul(position_world, shadow_view).xyz;

            return output;
        }

        //=============================================
        //---[Pixel Shaders]---------------------------
        //=============================================
        DepthPSOutput depth_PS(DepthVSOutput input)
        {
            DepthPSOutput output = (DepthPSOutput)0;

            // Work out the depth of this fragment from the light, normalized to [0, 1]
            float2 depth;
            depth.x = length(input.light_vec) / FarPlane;
            depth.y = depth.x * depth.x;

            // Flip depth values to avoid floating point inaccuracies
            depth.x = 1.0f - depth.x;
            depth.y = 1.0f - depth.y;

            output.depth = depth.xyxy;
            return output;
        }

    EDIT 3: I have tried the following:

        float4 pp;
        pp.xy = input.PositionClone.xy; // 'x' and 'y' already homogenized for uv coordinates above
        pp.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component
        pp.z = 1.0 - pp.z;
        pp.w = 1.0; // 1.0 = position.w / position.w

        // Determine the depth of the pixel with respect to the light
        float4x4 LightViewProjection = mul(LightView, LightProjection);
        float4x4 matViewToLightViewProj = mul(CameraViewProjectionInverse, LightViewProjection);

        float4 vPositionLightCS = mul(pp, matViewToLightViewProj);
        float fLightDepth = vPositionLightCS.z / vPositionLightCS.w;

        // Transform from light space to shadow map texture space.
        float2 vShadowTexCoord = 0.5 * vPositionLightCS.xy / vPositionLightCS.w + float2(0.5f, 0.5f);
        vShadowTexCoord.y = 1.0f - vShadowTexCoord.y;

        // Offset the coordinate by half a texel so we sample it correctly
        vShadowTexCoord += (0.5f / 512); //g_vShadowMapSize

    This suffers the same problem as the second picture. I have also tried storing the depth based on the view x projection matrix:

        output.position_screen = mul(position_world, shadow_view_projection);
        //output.light_vec = mul(position_world, shadow_view);
        output.light_vec = output.position_screen;

        depth.x = input.light_vec.z / input.light_vec.w;

    This gives a shadow that has lots of surface acne due to horrible floating-point precision errors. Everything is lit correctly, though.

    EDIT 4: Found an OpenGL based tutorial here. I have followed it to the letter and it would seem that the uv coordinates for looking up the shadow map are incorrect. The source uses a scale matrix to get the uv coordinates for the shadow map sampler:

        /// <summary>
        /// The scale matrix is used to push the projected vertex into the 0.0 - 1.0 region.
        /// Similar in role to a * 0.5 + 0.5, where -1.0 < a < 1.0.
        /// </summary>
        const float4x4 ScaleMatrix = float4x4
        (
            0.5,  0.0, 0.0, 0.0,
            0.0, -0.5, 0.0, 0.0,
            0.0,  0.0, 0.5, 0.0,
            0.5,  0.5, 0.5, 1.0
        );

    I had to negate the 0.5 for the y scaling (M22) in order for it to work, but the shadowing is still not correct. Is this really the correct way to scale?

        float2 shadow_tex;
        shadow_tex.x = surf_tex.x * 0.5f + 0.5f;
        shadow_tex.y = surf_tex.y * -0.5f + 0.5f;

    The depth calculations are exactly the same as the source code, yet they still do not work, which makes me believe something about the uv calculation above is incorrect.

    Read the article

  • Restricting joystick within a radius of center

    - by Phil
    I'm using Unity3d iOs and am using the example joysticks that came with one of the packages. It works fine but the area the joystick moves in is a rectangle which is unintuitive for my type of game. I can figure out how to see if the distance between the center and the current point is too far but I can't figure out how to constrain it to a certain distance without interrupting the finger tracking. Here's the relevant code: using UnityEngine; using System.Collections; public class Boundary { public Vector2 min = Vector2.zero; public Vector2 max = Vector2.zero; } public class Joystick : MonoBehaviour{ static private Joystick[] joysticks; // A static collection of all joysticks static private bool enumeratedJoysticks=false; static private float tapTimeDelta = 0.3f; // Time allowed between taps public bool touchPad; // Is this a TouchPad? public Rect touchZone; public Vector2 deadZone = Vector2.zero; // Control when position is output public bool normalize = false; // Normalize output after the dead-zone? public Vector2 position; // [-1, 1] in x,y public int tapCount; // Current tap count private int lastFingerId = -1; // Finger last used for this joystick private float tapTimeWindow; // How much time there is left for a tap to occur private Vector2 fingerDownPos; private float fingerDownTime; private float firstDeltaTime = 0.5f; private GUITexture gui; // Joystick graphic private Rect defaultRect; // Default position / extents of the joystick graphic private Boundary guiBoundary = new Boundary(); // Boundary for joystick graphic public Vector2 guiTouchOffset; // Offset to apply to touch input private Vector2 guiCenter; // Center of joystick private Vector3 tmpv3; private Rect tmprect; private Color tmpclr; public float allowedDistance; public enum JoystickType { movement, rotation } public JoystickType joystickType; public void Start() { // Cache this component at startup instead of looking up every frame gui = (GUITexture) GetComponent( typeof(GUITexture) ); // Store the default rect for the gui, so we can snap back to it defaultRect = gui.pixelInset; if ( touchPad ) { // If a texture has been assigned, then use the rect ferom the gui as our touchZone if ( gui.texture ) touchZone = gui.pixelInset; } else { // This is an offset for touch input to match with the top left // corner of the GUI guiTouchOffset.x = defaultRect.width * 0.5f; guiTouchOffset.y = defaultRect.height * 0.5f; // Cache the center of the GUI, since it doesn't change guiCenter.x = defaultRect.x + guiTouchOffset.x; guiCenter.y = defaultRect.y + guiTouchOffset.y; // Let's build the GUI boundary, so we can clamp joystick movement guiBoundary.min.x = defaultRect.x - guiTouchOffset.x; guiBoundary.max.x = defaultRect.x + guiTouchOffset.x; guiBoundary.min.y = defaultRect.y - guiTouchOffset.y; guiBoundary.max.y = defaultRect.y + guiTouchOffset.y; } } public void Disable() { gameObject.active = false; enumeratedJoysticks = false; } public void ResetJoystick() { if (joystickType != JoystickType.rotation) { //Don't do anything if turret mode // Release the finger control and set the joystick back to the default position gui.pixelInset = defaultRect; lastFingerId = -1; position = Vector2.zero; fingerDownPos = Vector2.zero; if ( touchPad ){ tmpclr = gui.color; tmpclr.a = 0.025f; gui.color = tmpclr; } } else { //gui.pixelInset = defaultRect; lastFingerId = -1; position = position; fingerDownPos = fingerDownPos; if ( touchPad ){ tmpclr = gui.color; tmpclr.a = 0.025f; gui.color = tmpclr; } } } public bool IsFingerDown() { return 
(lastFingerId != -1); } public void LatchedFinger( int fingerId ) { // If another joystick has latched this finger, then we must release it if ( lastFingerId == fingerId ) ResetJoystick(); } public void Update() { if ( !enumeratedJoysticks ) { // Collect all joysticks in the game, so we can relay finger latching messages joysticks = (Joystick[]) FindObjectsOfType( typeof(Joystick) ); enumeratedJoysticks = true; } //CHeck if distance is over the allowed amount //Get centerPosition //Get current position //Get distance //If over, don't allow int count = iPhoneInput.touchCount; // Adjust the tap time window while it still available if ( tapTimeWindow > 0 ) tapTimeWindow -= Time.deltaTime; else tapCount = 0; if ( count == 0 ) ResetJoystick(); else { for(int i = 0;i < count; i++) { iPhoneTouch touch = iPhoneInput.GetTouch(i); Vector2 guiTouchPos = touch.position - guiTouchOffset; bool shouldLatchFinger = false; if ( touchPad ) { if ( touchZone.Contains( touch.position ) ) shouldLatchFinger = true; } else if ( gui.HitTest( touch.position ) ) { shouldLatchFinger = true; } // Latch the finger if this is a new touch if ( shouldLatchFinger && ( lastFingerId == -1 || lastFingerId != touch.fingerId ) ) { if ( touchPad ) { tmpclr = gui.color; tmpclr.a = 0.15f; gui.color = tmpclr; lastFingerId = touch.fingerId; fingerDownPos = touch.position; fingerDownTime = Time.time; } lastFingerId = touch.fingerId; // Accumulate taps if it is within the time window if ( tapTimeWindow > 0 ) { tapCount++; print("tap" + tapCount.ToString()); } else { tapCount = 1; print("tap" + tapCount.ToString()); //Tell gameobject that player has tapped turret joystick if (joystickType == JoystickType.rotation) { //TODO: Call! } tapTimeWindow = tapTimeDelta; } // Tell other joysticks we've latched this finger foreach ( Joystick j in joysticks ) { if ( j != this ) j.LatchedFinger( touch.fingerId ); } } if ( lastFingerId == touch.fingerId ) { // Override the tap count with what the iPhone SDK reports if it is greater // This is a workaround, since the iPhone SDK does not currently track taps // for multiple touches if ( touch.tapCount > tapCount ) tapCount = touch.tapCount; if ( touchPad ) { // For a touchpad, let's just set the position directly based on distance from initial touchdown position.x = Mathf.Clamp( ( touch.position.x - fingerDownPos.x ) / ( touchZone.width / 2 ), -1, 1 ); position.y = Mathf.Clamp( ( touch.position.y - fingerDownPos.y ) / ( touchZone.height / 2 ), -1, 1 ); } else { // Change the location of the joystick graphic to match where the touch is tmprect = gui.pixelInset; tmprect.x = Mathf.Clamp( guiTouchPos.x, guiBoundary.min.x, guiBoundary.max.x ); tmprect.y = Mathf.Clamp( guiTouchPos.y, guiBoundary.min.y, guiBoundary.max.y ); //Check distance float distance = Vector2.Distance(new Vector2(defaultRect.x, defaultRect.y), new Vector2(tmprect.x, tmprect.y)); float angle = Vector2.Angle(new Vector2(defaultRect.x, defaultRect.y), new Vector2(tmprect.x, tmprect.y)); if (distance < allowedDistance) { //Ok gui.pixelInset = tmprect; } else { //This is where I don't know what to do... 
} } if ( touch.phase == iPhoneTouchPhase.Ended || touch.phase == iPhoneTouchPhase.Canceled ) ResetJoystick(); } } } if ( !touchPad ) { // Get a value between -1 and 1 based on the joystick graphic location position.x = ( gui.pixelInset.x + guiTouchOffset.x - guiCenter.x ) / guiTouchOffset.x; position.y = ( gui.pixelInset.y + guiTouchOffset.y - guiCenter.y ) / guiTouchOffset.y; } // Adjust for dead zone float absoluteX = Mathf.Abs( position.x ); float absoluteY = Mathf.Abs( position.y ); if ( absoluteX < deadZone.x ) { // Report the joystick as being at the center if it is within the dead zone position.x = 0; } else if ( normalize ) { // Rescale the output after taking the dead zone into account position.x = Mathf.Sign( position.x ) * ( absoluteX - deadZone.x ) / ( 1 - deadZone.x ); } if ( absoluteY < deadZone.y ) { // Report the joystick as being at the center if it is within the dead zone position.y = 0; } else if ( normalize ) { // Rescale the output after taking the dead zone into account position.y = Mathf.Sign( position.y ) * ( absoluteY - deadZone.y ) / ( 1 - deadZone.y ); } } } So the later portion of the code handles the updated position of the joystick thumb. This is where I'd like it to track the finger position in a direction it still is allowed to move (like if the finger is too far up and slightly to the +X I'd like to make sure the joystick is as close in X and Y as allowed within the radius) Thanks for reading!
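    The standard fix, sketched here in plain Java rather than Unity C# since only the vector math matters: when the touch is outside the allowed radius, scale its offset from the center back onto the circle. The thumb then keeps tracking the finger's direction without ever leaving the radius, which is exactly the "as close in X and Y as allowed" behaviour described above.

        /** Clamp (x, y) to lie within 'radius' of (cx, cy), preserving direction. */
        static float[] clampToRadius(float cx, float cy, float x, float y, float radius) {
            float dx = x - cx, dy = y - cy;
            float dist = (float) Math.hypot(dx, dy);
            if (dist > radius && dist > 0f) {
                float s = radius / dist; // shrink the offset onto the circle's edge
                dx *= s;
                dy *= s;
            }
            return new float[] { cx + dx, cy + dy };
        }

    In the joystick's else-branch this would replace the rectangular clamp: compute the clamped point from defaultRect's center and guiTouchPos, then assign it to tmprect.x and tmprect.y.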

    Read the article

  • What's viewport mean to the jQuery guys?

    - by justSteve
    I came here intent on asking a 'how do I' type question focused on a particular UI problem I have to solve, and I started using the term 'viewport'. It seems there are lots of contexts that tag evokes. Before I outline mine I should ask: is there a generally recognized consensus as to what a 'viewport' is? Is there any gallery or object browser that focuses on a UI principle where multiple frames/panes show feeds of different graphs, where each feed might be a complex grid/graph? I'm imagining something where my page may open with a half dozen different grids (tabular data) that stay updated via Ajax interactions with various servers. UI elements provide controls to scale a given grid up or down with a quasi accordion-slide kind of thingy. For example, one click on a scale up-or-down control would rescale the other viewports in the opposite direction; more clicks, more pronounced effect. So my 'how do I' question morphs into a 'how have others...' one. Thanks

    Read the article

  • Resizing screenshots/screen captures for inclusion in Beamer

    - by Stephen
    Sorry, this may or may not be a programming question directly, but I am trying to resize screenshots with ImageMagick and Gimp to include in a Beamer presentation, and the result comes out even blurrier than the resizing done by LaTeX. For instance, in Beamer I might have a command to rescale the image:

        \includegraphics[width=.5\textwidth]{fig.png}

    Using something like

        \begin{frame}
        \message{width = \the\textwidth}
        \message{height = \the\textheight}
        \end{frame}

    I have gotten the \textwidth and \textheight parameters in points (345.69548, 261.92444). So I have a script (in Python) that makes a system call to ImageMagick:

        'convert %s -resize %.6f@ resized_%s' % (f, a, f)

    where a is calculated as \textwidth*\textheight*0.5**2. When I then go back into my Beamer presentation and include the resized figure, \includegraphics{resized_fig.png}, the size looks approximately correct but it's super-blurry. I also tried resizing in Gimp (using the GUI) but no luck either... help? Thanks...
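    One plausible reading of the blur (an assumption based on the numbers above, not a confirmed diagnosis): the script treats TeX points as pixels. ImageMagick's area@ syntax caps the total pixel count, and 345.69548 * 261.92444 * 0.25 is only about 22,600 pixels, roughly 174x131 for a 4:3 screenshot, so Beamer scales the tiny image back up and it blurs. Converting the LaTeX length to inches (1 TeX point = 1/72.27 inch) and multiplying by a healthy output DPI gives a saner pixel target; a sketch:

        // Convert a LaTeX length in points to a pixel width for resizing.
        // The 72.27 pt/inch constant is TeX's; the DPI value is a free choice.
        static int targetPixels(double latexPoints, double dpi) {
            double inches = latexPoints / 72.27;
            return (int) Math.round(inches * dpi);
        }
        // e.g. targetPixels(0.5 * 345.69548, 220) is about 526 px for a half-\textwidth figure

    The convert call would then use an explicit width such as -resize 526x instead of the area@ form.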

    Read the article

  • Scaling an Image in GWT

    - by Daniel
    Changing the size of an Image widget in GWT changes the size of the image element, but does not rescale the image on the screen. Therefore, the following will not work:

        Image image = new Image(myImageResource);
        image.setHeight(newHeight);
        image.setWidth(newWidth);
        image.setPixelSize(newWidth, newHeight);

    This is because GWT implements its Image widget by setting the image as the CSS background-image of the HTML <img ... /> element. How does one get the actual image to resize?
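    One commonly suggested workaround (hedged: the API names below are from the GWT 2.4-era ImageResource interface, so verify against the version in use) is to build the Image from the resource's URL instead of the resource itself, so the element becomes a plain <img> whose size attributes rescale the actual bitmap:

        // Construct from the URL so GWT does not use the clipped/sprited form;
        // a plain <img> element scales its bitmap to the requested size.
        Image image = new Image(myImageResource.getSafeUri());
        image.setPixelSize(newWidth, newHeight);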

    Read the article

  • How to place text on top of image in ReportViewer

    - by Christoffer
    I feel very silly asking this, since it seems like it just should work, but I cannot make it work and cannot find anything in the documentation about this. The problem: I'm developing an application in Visual Studio 2010 that is utilizing ReportViewer, rendering the report locally. In the report designer, I place a textbox on top of an image. This looks fine in the designer, but when rendering the report, the textbox is rendered below the image. Now, before you suggest it, I have tried placing a textbox on top of a rectangle (or other control) with the BackgroundImage property set. This works. But: I cannot find a way to rescale the background image to fit the control. Setting a different dpi for the background image does nothing. Simply rescaling the image to a different resolution is not an option, since I eventually want to print the report. Does anybody have a solution to this?

    Read the article

  • iPhone animation with stretchable image

    - by ryyst
    I have a UIView subclass that draws a UIImage as its background. The image is created using the stretchableImageWithLeftCapWidth:topCapHeight: method; it has round edges. I also use an animation block inside of which I resize my view. I had hoped that during the animation, the drawRect: method would get called occasionally, which would result in the background image getting drawn correctly. Unfortunately, the animation seems to render the image once and then just rescale it during the animation, which obviously makes the formerly round edges get nastily stretched. The only workaround I can imagine is to put three separate UIImageViews (top cap, middle fill, bottom cap) above my original background image and then reposition the cap images and scale the fill image. However, this seems quite complicated... Is there any better way I can prevent this from happening?

    Read the article

  • How to get the coordinate of Gridlayout

    - by Jessy
    Hi, I set my JPanel to GridLayout(6,6), with dimension (600,600). Each cell of the grid will display one picture, and the pictures have different widths and heights. Each picture is first added to a JLabel, and the JLabel is then added to a cell. How can I retrieve the coordinates of the pictures in the cells, and not the coordinates of the cells? So far the output gives coordinates with equal widths and heights, even though on screen the pictures are shown in different sizes, e.g.:

        java.awt.Rectangle[x=100,y=100,width=100,height=100]
        java.awt.Rectangle[x=200,y=100,width=100,height=100]
        java.awt.Rectangle[x=300,y=100,width=100,height=100]

    The reason why I use GridLayout instead of GridBagLayout is that I want each picture to have a boundary. If I use GridBagLayout, the grid will expand according to the picture size, and I want the grid cells to stay a fixed size.

        JPanel pDraw = new JPanel(new GridLayout(6, 6));
        pDraw.setPreferredSize(new Dimension(600, 600));
        for (int i = 0; i < (6 * 6); i++) {
            // get random numbers for the height and width of the image
            int x = rand.nextInt(40) + 50;
            int y = rand.nextInt(40) + 50;
            ImageIcon icon = createImageIcon("bird.jpg");
            // rescale the image according to the size selected
            Image img = icon.getImage().getScaledInstance(x, y, Image.SCALE_SMOOTH);
            icon.setImage(img);
            JLabel label = new JLabel(icon);
            pDraw.add(label);
        }
        for (Component component : components) { // components = pDraw.getComponents()
            // retrieve the coordinate
            System.out.println(component.getBounds());
        }
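    To get the picture's rectangle rather than the cell's, one option (assuming the default centered placement of an icon-only JLabel) is to derive it from the label's bounds and the icon's own size. A sketch, with IconBounds being a hypothetical helper class:

        import java.awt.Rectangle;
        import javax.swing.Icon;
        import javax.swing.JLabel;

        class IconBounds {
            /** Bounds of a centered icon, in the panel's coordinate space. */
            static Rectangle of(JLabel label) {
                Icon icon = label.getIcon();
                Rectangle cell = label.getBounds();
                int x = cell.x + (cell.width - icon.getIconWidth()) / 2;
                int y = cell.y + (cell.height - icon.getIconHeight()) / 2;
                return new Rectangle(x, y, icon.getIconWidth(), icon.getIconHeight());
            }
        }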

    Read the article

  • Good way to identify similar images?

    - by Nick
    I've developed a simple and fast algorithm in PHP to compare images for similarity. It's fast to hash (~40 per second for 800x600 images), and an unoptimised search algorithm can go through 3,000 images in 22 minutes, comparing each one against the others (3/sec). The basic overview: you take an image, rescale it to 8x8, and then convert those pixels to HSV. The hue, saturation and value are then truncated to 4 bits each, and the result becomes one big hex string. Comparing two images basically walks along the two strings and adds up the differences it finds. If the total is below 64 then it's the same image; different images are usually around 600-800, and below 20 is extremely similar. Are there any improvements I can make to this model? I haven't looked at how relevant the different components (hue, saturation and value) are to the comparison. Hue is probably quite important, but the others? To speed up searches I could probably split the 4 bits from each part in half and put the most significant bits first, so that if they fail the check then the LSBs don't need to be checked at all. I don't know an efficient way to store bits like that which would still allow them to be searched and compared easily. I've been using a dataset of 3,000 photos (mostly unique) and there haven't been any false positives. It's completely immune to resizes and fairly resistant to brightness and contrast changes.
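    For reference, a rough port of the described hash (sketched in Java; assumptions not in the original post: drawImage for the 8x8 shrink, Color.RGBtoHSB for the HSV step, and the comparison read as a per-digit sum of absolute differences):

        import java.awt.Color;
        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;

        public class TinyHash {
            static String hash(BufferedImage src) {
                BufferedImage small = new BufferedImage(8, 8, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = small.createGraphics();
                g.drawImage(src, 0, 0, 8, 8, null); // shrink to 8x8
                g.dispose();
                StringBuilder sb = new StringBuilder(8 * 8 * 3);
                float[] hsv = new float[3];
                for (int y = 0; y < 8; y++) {
                    for (int x = 0; x < 8; x++) {
                        int rgb = small.getRGB(x, y);
                        Color.RGBtoHSB((rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF, hsv);
                        for (float c : hsv) {
                            sb.append(Integer.toHexString((int) (c * 15))); // truncate each channel to 4 bits
                        }
                    }
                }
                return sb.toString();
            }

            // Distance: walk both strings, summing per-digit differences.
            static int distance(String a, String b) {
                int d = 0;
                for (int i = 0; i < a.length(); i++) {
                    d += Math.abs(Character.digit(a.charAt(i), 16) - Character.digit(b.charAt(i), 16));
                }
                return d;
            }
        }

    One caveat relevant to the hue question: hue is circular, so 0x0 and 0xF are near-identical reds yet score the maximum per-digit difference here.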

    Read the article

  • Creating collaborative whiteboard drawing application

    - by Steven Sproat
    I have my own drawing program in place, with a variety of "drawing tools" such as Pen, Eraser, Rectangle, Circle, Select, Text etc. It's made with Python and wxPython. Each tool mentioned above is a class, and they all have polymorphic methods such as left_down(), mouse_motion(), hit_test() etc. The program manages a list of all drawn shapes; when a user has drawn a shape, it's added to the list. This is used to manage undo/redo operations too. So, I have a decent codebase that I can hook collaborative drawing into. Each shape could be changed to know its owner (the user who drew it) and to only allow delete/move/rescale operations on shapes owned by that person. I'm just wondering the best way to develop this. One person in the "session" will have to act as the server; I have no money to offer free central servers. Somehow users will need a way to connect to servers, meaning some kind of "discover servers" browser... or something. How do I broadcast changes made in the application? Drawing in realtime and broadcasting a message on each mouse-motion event would be costly in terms of performance, and things get worse the more users there are at a given time. Any ideas are welcome; I'm not too sure where to begin with developing this (or even how to test it).
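    A starting-point sketch, not the poster's design (Java here; the port number and the one-serialized-shape-per-line protocol are arbitrary choices): the hosting peer accepts connections and relays each completed-shape event to every other client. Broadcasting per finished shape rather than per mouse-motion event keeps traffic proportional to shapes drawn, which addresses the performance worry above.

        import java.io.*;
        import java.net.*;
        import java.util.List;
        import java.util.concurrent.*;

        public class WhiteboardHub {
            private static final List<PrintWriter> clients = new CopyOnWriteArrayList<>();

            public static void main(String[] args) throws IOException {
                ExecutorService pool = Executors.newCachedThreadPool();
                try (ServerSocket server = new ServerSocket(7777)) {
                    while (true) {
                        Socket sock = server.accept();
                        PrintWriter out = new PrintWriter(sock.getOutputStream(), true);
                        clients.add(out);
                        pool.submit(() -> relay(sock, out));
                    }
                }
            }

            // Read one serialized shape per line and echo it to everyone else.
            private static void relay(Socket sock, PrintWriter self) {
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(sock.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        for (PrintWriter w : clients) {
                            if (w != self) w.println(line);
                        }
                    }
                } catch (IOException ignored) {
                } finally {
                    clients.remove(self);
                }
            }
        }

    Server discovery could then be a UDP broadcast on the LAN, or simply typing the host's address, whichever is simpler to start with.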

    Read the article

  • Unusual RJS error

    - by rrb
    Hi, I am getting the following error in my RoR application:

        RJS error: TypeError: element is null
        Element.update("notice", "Comment Posted");
        Element.update("allcomments", "\n\n\n \n\n waht now?\n\n \n\n \n\n \n\n asdfasdfa\n \n\n \n\n asdfasdf\n \n\n\n\n\n");

    But when I hit the refresh button, I can see my partial updated. Here's my code:

    show_comments view:

        <table>
          <% comments.each do |my_comment| %>
          <tr>
            <td><%=h my_comment.comment %></td>
          </tr>
          <% end %>
        </table>

    show view:

        <div class="wrapper">
          <div class="rescale">
            <div class="img-main">
              <%= image_tag @deal.photo.url %>
            </div>
          </div>
          <div class="description">
            <p class="description_content">
              <%=h @deal.description %>
            </p>
          </div>
        </div>
        <p>
          <b>Category:</b>
          <%=h @deal.category %>
        </p>
        <p>
          <b>Base price:</b>
          <%=h @deal.base_price %>
        </p>
        <%#*<p>%>
        <%#*<b>Discount:</b>%>
        <%#=h @deal.discount %>
        <%#*</p>%>
        <%= link_to 'Edit', edit_deal_path(@deal) %> |
        <%= link_to 'Back', deals_path %>
        <p>
          <%= render :partial => 'deal_comments',
                     :locals => { :comments => Comment.new(:deal_id => @deal.id) } %>
        </p>
        <div id="allcomments">
          <%= render :partial => 'show_comments',
                     :locals => { :comments => Comment.find(@deal.comments) } %>
        </div>

    Controller:

        def create
          @comment = Comment.new(params[:comment])
          render :update do |page|
            if @comment.save
              page.replace_html 'notice', 'Comment Posted'
            else
              page.replace_html 'notice', 'Something went wrong'
            end
            page.replace_html 'allcomments', :partial => 'deals/show_comments',
                              :locals => { :comments => @comment.deal.comments }
          end
        end

        def show_comments
          @deal = Deal.find(params[:deal_id])
          render :partial => "deals/show_comments",
                 :locals => { :comments => @deal.comments }
        end
        end

    Read the article
