Search Results

Search found 5072 results on 203 pages for 'graph drawing'.

Page 168/203 | < Previous Page | 164 165 166 167 168 169 170 171 172 173 174 175  | Next Page >

  • HOW TO: Draggable legend in matplotlib

    - by Adam Fraser
    QUESTION: I'm drawing a legend on an axes object in matplotlib, but the default positioning, which claims to place it in a smart spot, doesn't seem to work. Ideally, I'd like the legend to be draggable by the user. How can this be done? SOLUTION: Well, I found bits and pieces of the solution scattered among mailing lists. I've come up with a nice modular chunk of code that you can drop in and use... here it is:

        class DraggableLegend:
            def __init__(self, legend):
                self.legend = legend
                self.gotLegend = False
                legend.figure.canvas.mpl_connect('motion_notify_event', self.on_motion)
                legend.figure.canvas.mpl_connect('pick_event', self.on_pick)
                legend.figure.canvas.mpl_connect('button_release_event', self.on_release)
                legend.set_picker(self.my_legend_picker)

            def on_motion(self, evt):
                if self.gotLegend:
                    dx = evt.x - self.mouse_x
                    dy = evt.y - self.mouse_y
                    loc_in_canvas = self.legend_x + dx, self.legend_y + dy
                    loc_in_norm_axes = self.legend.parent.transAxes.inverted().transform_point(loc_in_canvas)
                    self.legend._loc = tuple(loc_in_norm_axes)
                    self.legend.figure.canvas.draw()

            def my_legend_picker(self, legend, evt):
                return self.legend.legendPatch.contains(evt)

            def on_pick(self, evt):
                if evt.artist == self.legend:
                    bbox = self.legend.get_window_extent()
                    self.mouse_x = evt.mouseevent.x
                    self.mouse_y = evt.mouseevent.y
                    self.legend_x = bbox.xmin
                    self.legend_y = bbox.ymin
                    self.gotLegend = 1

            def on_release(self, event):
                if self.gotLegend:
                    self.gotLegend = False

    ...and in your code...

        def draw(self):
            ax = self.figure.add_subplot(111)
            scatter = ax.scatter(np.random.randn(100), np.random.randn(100))
            legend = DraggableLegend(ax.legend())

    I emailed the Matplotlib-users group and John Hunter was kind enough to add my solution to SVN HEAD. On Thu, Jan 28, 2010 at 3:02 PM, Adam Fraser wrote: I thought I'd share a solution to the draggable legend problem since it took me forever to assimilate all the scattered knowledge on the mailing lists... Cool -- nice example. I added the code to legend.py. Now you can do

        leg = ax.legend()
        leg.draggable()

    to enable draggable mode. You can repeatedly call this function to toggle the draggable state. I hope this is helpful to people working with matplotlib.

    Read the article

  • iPhone: how do i redraw subviews while pinch zooming a uiscrollview

    - by Mike
    I am developing an iPhone app that places multiple custom UIViews as subviews in a UIScrollView. The subviews are placed on top of each other as transparent views as each view has its own drawing routines that traces parts of the base view. The base view is a UIImageView that is typically a large image that I want the user to be able to pan and zoom in and out of. The problem I am having is that when I zoom in and out of my UIScrollView, the subviews do not redraw themselves while the user is zooming. I can reposition and scale the subviews properly once the zoom is completed, but the user experience is less than desirable. I have not been able to find a way to either hide or redraw the subviews as the zoom is taking place to scale the subviews along with the ImageView. Any ideas? thanks! Here is the code that I have implemented: - (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(float)scale { for (UIView *view in subViews) { [view updateView:scale]; } } - (UIView *)viewForZoomingInScrollView:(UIScrollView *) scrollView { return imageView; }

    Read the article

  • Cast Graphics to Image in C#

    - by WebDevHobo
    I have a pictureBox on a Windows Form. I do the following to load a PNG file into it:

        Bitmap bm = (Bitmap)Image.FromFile("Image.PNG", true);
        Bitmap tmp;

        public Form1()
        {
            InitializeComponent();
            this.tmp = new Bitmap(bm.Width, bm.Height);
        }

        private void pictureBox1_Paint(object sender, PaintEventArgs e)
        {
            e.Graphics.DrawImage(this.bm, new Rectangle(0, 0, tmp.Width, tmp.Height),
                0, 0, tmp.Width, tmp.Height, GraphicsUnit.Pixel);
        }

    However, I need to draw things on the image and then have the result displayed again. Drawing rectangles can only be done via the Graphics class. I'd need to draw the needed rectangles on the image, make it an instance of the Image class again, and save that to this.bm. I can add a button that executes this.pictureBox1.Refresh();, forcing the pictureBox to be painted again, but I can't cast Graphics to Image. Because of that, I can't save the edits to the this.bm bitmap. That's my problem, and I see no way out.
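    A common way around this is to draw onto the Bitmap itself rather than trying to turn a Graphics back into an Image: Graphics.FromImage(bm) gives you a Graphics that writes directly into the bitmap's pixels, so the edits persist and the existing Paint handler keeps working unchanged. A minimal sketch, assuming the bm field and pictureBox1 from the question (the button handler name is made up):

        // Hedged sketch: draws a rectangle directly into the existing bitmap,
        // then asks the PictureBox to repaint from it.
        private void drawButton_Click(object sender, EventArgs e)   // hypothetical button
        {
            using (Graphics g = Graphics.FromImage(this.bm))        // Graphics backed by the bitmap
            using (Pen pen = new Pen(Color.Red, 2))
            {
                g.DrawRectangle(pen, new Rectangle(10, 10, 50, 30));
            }
            // bm now contains the rectangle; repaint so pictureBox1_Paint runs again
            this.pictureBox1.Invalidate();
        }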

    Read the article

  • Draw Bitmap with alpha channel

    - by Paja
    I have a Format32bppArgb backbuffer, where I draw some lines:

        var g = Graphics.FromImage(bitmap);
        g.Clear(Color.FromArgb(0));
        var rnd = new Random();
        for (int i = 0; i < 5000; i++)
        {
            int x1 = rnd.Next(ClientRectangle.Left, ClientRectangle.Right);
            int y1 = rnd.Next(ClientRectangle.Top, ClientRectangle.Bottom);
            int x2 = rnd.Next(ClientRectangle.Left, ClientRectangle.Right);
            int y2 = rnd.Next(ClientRectangle.Top, ClientRectangle.Bottom);
            Color color = Color.FromArgb(rnd.Next(0, 255), rnd.Next(0, 255), rnd.Next(0, 255));
            g.DrawLine(new Pen(color), x1, y1, x2, y2);
        }

    Now I want to copy the bitmap in the Paint event. I do it like this:

        void Form1Paint(object sender, PaintEventArgs e)
        {
            e.Graphics.DrawImageUnscaled(bitmap, 0, 0);
        }

    However, DrawImageUnscaled copies pixels and applies the alpha channel, so pixels with alpha == 0 won't have any effect. But I need a raw byte copy, so that pixels with alpha == 0 are also copied. The result of these operations should be that e.Graphics contains an exact byte-copy of the bitmap. How do I do that? Summary: when drawing a bitmap, I don't want to apply the alpha channel, I merely want to copy the pixels.
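    One way to get a non-blended copy with GDI+ is to switch the destination Graphics into SourceCopy compositing mode before drawing, so source pixels overwrite the destination instead of being alpha-blended. A minimal sketch of that idea, assuming the bitmap field from the question:

        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.Windows.Forms;

        void Form1Paint(object sender, PaintEventArgs e)
        {
            // SourceCopy overwrites destination pixels instead of blending by alpha,
            // so fully transparent pixels are copied through as-is.
            e.Graphics.CompositingMode = CompositingMode.SourceCopy;
            e.Graphics.DrawImageUnscaled(bitmap, 0, 0);
        }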

    Read the article

  • How to update a QPixmap in a QGraphicsView with PyQt

    - by pops
    I am trying to paint on a QPixmap inside a QGraphicsView. The painting works fine, but the QGraphicsView doesn't update it. Here is some working code: #!/usr/bin/env python from PyQt4 import QtCore from PyQt4 import QtGui class Canvas(QtGui.QPixmap): """ Canvas for drawing""" def __init__(self, parent=None): QtGui.QPixmap.__init__(self, 64, 64) self.parent = parent self.imH = 64 self.imW = 64 self.fill(QtGui.QColor(0, 255, 255)) self.color = QtGui.QColor(0, 0, 0) def paintEvent(self, point=False): if point: p = QtGui.QPainter(self) p.setPen(QtGui.QPen(self.color, 1, QtCore.Qt.SolidLine)) p.drawPoints(point) def clic(self, mouseX, mouseY): self.paintEvent(QtCore.QPoint(mouseX, mouseY)) class GraphWidget(QtGui.QGraphicsView): """ Display, zoom, pan...""" def __init__(self): QtGui.QGraphicsView.__init__(self) self.im = Canvas(self) self.imH = self.im.height() self.imW = self.im.width() self.zoomN = 1 self.scene = QtGui.QGraphicsScene(self) self.scene.setItemIndexMethod(QtGui.QGraphicsScene.NoIndex) self.scene.setSceneRect(0, 0, self.imW, self.imH) self.scene.addPixmap(self.im) self.setScene(self.scene) self.setTransformationAnchor(QtGui.QGraphicsView.AnchorUnderMouse) self.setResizeAnchor(QtGui.QGraphicsView.AnchorViewCenter) self.setMinimumSize(400, 400) self.setWindowTitle("pix") def mousePressEvent(self, event): if event.buttons() == QtCore.Qt.LeftButton: pos = self.mapToScene(event.pos()) self.im.clic(pos.x(), pos.y()) #~ self.scene.update(0,0,64,64) #~ self.updateScene([QtCore.QRectF(0,0,64,64)]) self.scene.addPixmap(self.im) print('items') print(self.scene.items()) else: return QtGui.QGraphicsView.mousePressEvent(self, event) def wheelEvent(self, event): if event.delta() > 0: self.scaleView(2) elif event.delta() < 0: self.scaleView(0.5) def scaleView(self, factor): n = self.zoomN * factor if n < 1 or n > 16: return self.zoomN = n self.scale(factor, factor) if __name__ == '__main__': import sys app = QtGui.QApplication(sys.argv) widget = GraphWidget() widget.show() sys.exit(app.exec_()) The mousePressEvent does some painting on the QPixmap. But the only solution I have found to update the display is to make a new instance (which is not a good solution). How do I just update it?

    Read the article

  • Populating Models from other Models in Django?

    - by JT
    This is somewhat related to the question posed in this question but I'm trying to do this with an abstract base class. For the purposes of this example lets use these models: class Comic(models.Model): name = models.CharField(max_length=20) desc = models.CharField(max_length=100) volume = models.IntegerField() ... <50 other things that make up a Comic> class Meta: abstract = True class InkedComic(Comic): lines = models.IntegerField() class ColoredComic(Comic): colored = models.BooleanField(default=False) In the view lets say we get a reference to an InkedComic id since the tracer, err I mean, inker is done drawing the lines and it's time to add color. Once the view has added all the color we want to save a ColoredComic to the db. Obviously we could do inked = InkedComic.object.get(pk=ink_id) colored = ColoredComic() colored.name = inked.name etc, etc. But really it'd be nice to do: colored = ColoredComic(inked_comic=inked) colored.colored = True colored.save() I tried to do class ColoredComic(Comic): colored = models.BooleanField(default=False) def __init__(self, inked_comic = False, *args, **kwargs): super(ColoredComic, self).__init__(*args, **kwargs) if inked_comic: self.__dict__.update(inked_comic.__dict__) self.__dict__.update({'id': None}) # Remove pk field value but it turns out the ColoredComic.objects.get(pk=1) call sticks the pk into the inked_comic keyword, which is obviously not intended. (and actually results in a int does not have a dict exception) My brain is fried at this point, am I missing something obvious, or is there a better way to do this?

    Read the article

  • C# Web Reference and PHP

    - by Louis
    I am attempting to call a web service (created in PHP) from my C# application. I have successfully added the Web Reference in Visual Studio; however, I cannot figure out exactly how to invoke the call to the web service. I have been following this tutorial: http://sanity-free.org/article25.html but when I try to compile I get "The type or namespace name 'SimpleService' could not be found". My C# code is as follows:

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Drawing;
        using System.Text;
        using System.Windows.Forms;
        using System.IO;

        namespace inVision_OCR
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    InitializeComponent();
                }

                private void translateButton_Click(object sender, EventArgs e)
                {
                    // We need to snap the image here somehow . . .
                    // Open up the image and read its contents into a string
                    FileStream file = new FileStream("\\Hard Disk\\ocr_images\\test.jpg", FileMode.Open);
                    StreamReader sr = new StreamReader(file);
                    string s = sr.ReadToEnd();
                    sr.Close();

                    // Using SOAP, pass this message to our development server
                    SimpleService svc = new SimpleService();
                    string s1 = svc.getOCR("test");
                    MessageBox.Show(s);
                }
            }
        }

    Any help would be appreciated.
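    That compiler error usually means the generated proxy class isn't being referenced by its full name: a Web Reference puts the proxy in its own namespace (named after the reference) under the project's default namespace. A hedged sketch, where both the reference name "SimpleService" and the proxy class name are assumptions about how the reference was added:

        // Either add a using directive for the web-reference namespace...
        // using inVision_OCR.SimpleService;   // namespace name is an assumption

        // ...or fully qualify the proxy type when constructing it:
        var svc = new inVision_OCR.SimpleService.SimpleService();
        string s1 = svc.getOCR("test");

    Checking the generated Reference.cs under the web reference shows the exact namespace and class name to use.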

    Read the article

  • Android SDK fails to install

    - by Paul Breed
    When I try to install the Android SDK, it fails to install. My OS is Windows XP. I just downloaded and installed Java JDK 1.6. Java -version from the command line returns:

        java version "1.6.0_17"
        Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
        Java HotSpot(TM) Client VM (build 14.3-b01, mixed mode, sharing)

    My environment vars have: JAVA_HOME=c:\progra~1\java\jdk1.6.0_11. I downloaded android-sdk-r04-windows.zip and unzipped it in V:\AndroidInstall\. When I go to V:\androindinstall\android-sdk-windows and type "SDK Install.exe", nothing happens... When I do this from a graphical file viewer I get a quick flash that looks like a command-line window, and then nothing. When I try to run android list targets from the tool directory I get:

        Error: Error parsing the sdk.
        Error: V:\androindinstall\android-sdk-windows\platforms is missing.
        Error: Unable to parse SDK content.

    So the basic install setup is not happening. Additional clues: I have a G1, and Android 1.0 was running on this machine (almost a year ago). I've updated my G1 to 1.6, so I thought I'd update my SDK before starting new development. When I tried to upgrade, it tried and then died because "the directory was in use". So I cleaned out all the Android directories, rebooted, and redownloaded everything from scratch. Now it won't run at all. I've clearly got something in an unhappy state, but I've cleaned up all the directories and no remnants seem to be running. I've rebooted... I've missed something, I just can't figure out what. Paul

    Read the article

  • How to best transfer large payloads of data using wsHttp with WCF with message security

    - by jpierson
    I have a case where I need to transfer large serialized object graphs (via NetDataContractSerializer) over WCF using wsHttp. I'm using message security and would like to continue to do so. With this setup I need to transfer serialized object graphs which can sometimes approach 300MB or so, but when I try to do so I've started seeing an exception of type System.InsufficientMemoryException. After a little research, it appears that by default in WCF the result of a service call is contained within a single message, which contains the serialized data, and this data is buffered on the server until the whole message is completely written. Thus the memory exception is caused by the server running out of the memory resources it is allowed to allocate, because that buffer is full. The two main recommendations I've come across are to use streaming or chunking to solve this problem, but it is not clear to me what either involves and whether either solution is possible with my current setup (wsHttp/NetDataContractSerializer/message security). So far I understand that streaming with message security would not work, because message encryption and decryption need to operate on the whole set of data, not a partial message. Chunking, however, sounds like it might be possible, but it is not clear to me how it would be done within the other constraints I've listed. If anybody could offer some guidance on what solutions are available and how to go about implementing them, I would greatly appreciate it. Related resources: Chunking Channel; How to: Enable Streaming; Large attachments over WCF; Custom Message Encoder; Another spotting of InsufficientMemoryException. I'm also interested in any type of compression that could be done on this data, but it looks like I would probably be best off doing this at the transport level once I can transition to .NET 4.0, so that the client will automatically support the gzip headers, if I understand this properly.
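    Chunking, in its simplest form, just means the client slices the serialized graph into fixed-size byte arrays and uploads them through an ordinary buffered (and message-secured) operation, with the server reassembling and deserializing once the last chunk arrives, so each individual message stays small. A rough client-side sketch; the IChunkUpload contract and its operation names are hypothetical illustrations, not part of WCF:

        using System;
        using System.IO;
        using System.Runtime.Serialization;

        // Hypothetical contract implemented on the server; names are illustrative only.
        public interface IChunkUpload
        {
            string BeginUpload();                                  // returns an upload id
            void UploadChunk(string id, int index, byte[] data);   // small, buffered, message-secured call
            void CompleteUpload(string id);                        // server reassembles and deserializes
        }

        public static class ChunkedSender
        {
            public static void SendInChunks(IChunkUpload client, object graph, int chunkSize = 512 * 1024)
            {
                var ms = new MemoryStream();
                new NetDataContractSerializer().Serialize(ms, graph);   // same serializer as the normal path
                byte[] payload = ms.ToArray();

                string id = client.BeginUpload();
                for (int offset = 0, index = 0; offset < payload.Length; offset += chunkSize, index++)
                {
                    int length = Math.Min(chunkSize, payload.Length - offset);
                    byte[] chunk = new byte[length];
                    Buffer.BlockCopy(payload, offset, chunk, 0, length);
                    client.UploadChunk(id, index, chunk);               // each message is at most chunkSize bytes
                }
                client.CompleteUpload(id);
            }
        }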

    Read the article

  • Metaprogramming - self explanatory code - tutorials, articles, books

    - by elena
    Hello everybody, I am looking into improving my programming skills (actually, I try to do my best to suck less each year, as our Jeff Atwood put it), so I was thinking of reading up on metaprogramming and self-explanatory code. I am looking for something like an idiot's guide to this (free books for download, online resources). I also want more than your average wiki page, and something language-agnostic, or preferably with Java examples. Do you know of such resources that will allow me to efficiently put all of this into practice? (I know experience has a lot to say in all of this, but I kind of want to build experience while avoiding the usual flow of bad decisions -> experience -> good decisions.) EDIT: Something along the lines of this example from The Pragmatic Programmer: ...implement a mini-language to control a simple drawing package... The language consists of single-letter commands. Some commands are followed by a single number. For example, the following input would draw a rectangle:

        P 2   # select pen 2
        D     # pen down
        W 2   # draw west 2cm
        N 1   # then north 1
        E 2   # then east 2
        S 1   # then back south
        U     # pen up

    Thank you!

    Read the article

  • PrintableArea in C# - Bug?

    - by Brandi
    I am having an issue with PageSettings.PrintableArea's width and height values. The Width, Height, and Size properties claim to "get or set" the values. Also, the Inflate() function claims to change the size based on the values passed in. However, none of these attempts to change the value have worked. Inflate() is ignored (no error; it passes as if it worked, but the values remain unchanged). Attempting to set the height, width, or size gives a compiler error: "Cannot modify the return value of 'System.Drawing.Printing.PageSettings.PrintableArea' because it is not a variable". I get the feeling that this means the "or set" part of the description is a lie. Why I want to know this (someone always asks...): I have a printing application (C#, WinForms) that for most things is working rather well. I can set the printer settings and page settings objects to control what displays in the print dialog's printer properties. However, with Microsoft Office Document Image Writer, these settings are sometimes ignored, and the paper size returns as 0, 0 even when it displayed something else. All I really want is for it to be WYSIWYG as far as the displayed values go, so I change the paper size back to what it should be, but the printable area, if it is wrong, makes the resulting image wonky. The resulting image is the size of the printable area instead of the value in papersize. Just wondering if there is a reason for this or a way to get it not to do that. Thanks in advance. :)
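    The compiler error itself has a mundane explanation: PrintableArea returns a RectangleF, which is a value type, so the getter hands back a copy; assigning to that copy's Width, or calling Inflate() on it, modifies a temporary and never touches the PageSettings object. A small sketch of working with an explicit local copy instead (the margin value at the end is purely illustrative):

        using System.Drawing;
        using System.Drawing.Printing;

        static RectangleF GetAdjustedPrintableArea(PageSettings page)
        {
            // Copy the struct out of the property; this local *is* a variable,
            // so it can be modified freely.
            RectangleF area = page.PrintableArea;
            area.Inflate(-10f, -10f);   // e.g. shrink by a 10-unit margin (illustrative)
            return area;                // use this copy for layout calculations
        }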

    Read the article

  • CGContextSetShadow() - shadow direction reversed between iOS 3.0 and 4.0?

    - by Pascal
    I've been using CGContextSetShadowWithColor() in my Quartz drawing code on the iPhone to generate the "stomped in" look for text and other things (in drawRect: and drawLayer:inContext:). Worked perfectly, but when running the exact same code against iOS 3.2 and now iOS 4.0 I noticed that the shadows are all in the opposite direction. E.g. in the following code I set a black shadow to be 1 pixel above the text, which gave it a "pressed in" look, and now this shadow is 1px below the text, giving it a standard shadow. ... CGContextSetShadowWithColor(context, CGSizeMake(0.f, 1.f), 0.5f, shadowColor); CGContextShowGlyphsAtPoint(context, origin.x, origin.y, glyphs, length); ... Now I don't know whether I am (or have been) doing something wrong or whether there has been a change to the handling of this setting. I haven't applied any transformation that would explain this to me, at least not knowingly. I've flipped the text matrix in one instance, but not in others and this behavior is consistent. Plus I wasn't able to find anything about this in the SDK Release Notes, so it looks like it's probably me. What might be the issue?

    Read the article

  • JScript.NET private variables

    - by Paul Podlipensky
    I'm wondering about JScript.NET private variables. Please take a look on the following code: import System; import System.Windows.Forms; import System.Drawing; var jsPDF = function(){ var state = 0; var beginPage = function(){ state = 2; out('beginPage'); } var out = function(text){ if(state == 2){ var st = 3; } MessageBox.Show(text + ' ' + state); } var addHeader = function(){ out('header'); } return { endDocument: function(){ state = 1; addHeader(); out('endDocument'); }, beginDocument: function(){ beginPage(); } } } var j = new jsPDF(); j.beginDocument(); j.endDocument(); Output: beginPage 2 header 2 endDocument 2 if I run the same script in any browser, the output is: beginPage 2 header 1 endDocument 1 Why it is so?? Thanks, Paul.

    Read the article

  • Force-directed graphing

    - by David
    Hello, I'm trying to write a force-directed or force-atlas code base for a graphing application I'm building for myself. Here is an example of what I'm attempting: http://sawamuland.com/flash/graph.html I managed to find some pseudo code to accomplish what I'd like in the wiki force-atlas article. I've converted this into ActionScript 3.0 code since it's a Flash application. Here is my source:

        var timestep:int = 0;
        var damping:int = 0;
        var total_kinetic_engery:int = 0;

        for (var node in list)
        {
            var net_force:int = 0;
            for (var other_node in list)
            {
                net_force += coulombRepulsion(node, other_node, nodeList);
            }
            for (var spring in list[node].relations)
            {
                net_force += hookeAttraction(node, spring, nodeList);
            }
            list[node].velocity += (timestep * net_force) * damping;
            list[node].position += timestep * list[node].velocity;
            total_kinetic_engery += list[node].mass * (list[node].velocity) ^ 2;
        }

    The problem now is finding pseudo code or a function to perform the coulomb repulsion and hooke attraction. I'm not exactly sure how to accomplish this. Does anyone know of a good reference I can look at... understand and implement quickly? Best.
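    For the two missing functions, the standard force-directed formulation uses an inverse-square repulsion between every pair of nodes (Coulomb's law, F = k / d^2) and a linear spring force along each edge (Hooke's law, F = stiffness * (d - restLength)). The math is language-agnostic; the sketch below shows it in C# with a tiny made-up vector type and arbitrary tuning constants:

        using System;

        struct Vec2
        {
            public double X, Y;
            public Vec2(double x, double y) { X = x; Y = y; }
        }

        static class Forces
        {
            // Coulomb-style repulsion: magnitude k / d^2, pushing node a away from node b.
            public static Vec2 CoulombRepulsion(Vec2 a, Vec2 b, double k = 1000.0)
            {
                double dx = a.X - b.X, dy = a.Y - b.Y;
                double distSq = dx * dx + dy * dy + 1e-6;   // avoid division by zero
                double dist = Math.Sqrt(distSq);
                double magnitude = k / distSq;
                return new Vec2(magnitude * dx / dist, magnitude * dy / dist);
            }

            // Hooke-style spring force along an edge: magnitude stiffness * (d - restLength),
            // pulling node a toward node b when the spring is stretched.
            public static Vec2 HookeAttraction(Vec2 a, Vec2 b, double restLength = 50.0, double stiffness = 0.05)
            {
                double dx = b.X - a.X, dy = b.Y - a.Y;
                double dist = Math.Sqrt(dx * dx + dy * dy) + 1e-6;
                double magnitude = stiffness * (dist - restLength);
                return new Vec2(magnitude * dx / dist, magnitude * dy / dist);
            }
        }

    Summing these per node and then applying the velocity/position update from the pseudo code above (with the kinetic-energy total as a stopping criterion) is the whole loop; k, restLength and stiffness are tuning knobs, not canonical values.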

    Read the article

  • MS Chart in WPF, Setting the DataSource is not creating the Series

    - by Shaik Phakeer
    Hi All, Here I am trying to assign the data source (using the same code given in the sample application) and create a graph; the only difference is that I am doing it in a WPF WindowsFormsHost. For some reason the data source is not being assigned properly and I am not able to see the series ("Series 1") being created. The weird thing is that it works in the Windows Forms application but not in the WPF one. Am I missing something, and can somebody help me? Thanks

        <Window x:Class="SEDC.MDM.WinUI.WindowsFormsHostWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:wf="clr-namespace:System.Windows.Forms;assembly=System.Windows.Forms"
            xmlns:CHR="clr-namespace:System.Windows.Forms.DataVisualization.Charting;assembly=System.Windows.Forms.DataVisualization"
            Title="HostingWfInWpf" Height="230" Width="338">
            <Grid x:Name="grid1">
            </Grid>
        </Window>

        private void drawChartDataBinding()
        {
            System.Windows.Forms.Integration.WindowsFormsHost host =
                new System.Windows.Forms.Integration.WindowsFormsHost();

            string fileNameString = @"C:\Users\Shaik\MSChart\WinSamples\WinSamples\data\chartdata.mdb";

            // initialize a connection string
            string myConnectionString = "PROVIDER=Microsoft.Jet.OLEDB.4.0;Data Source=" + fileNameString;

            // define the database query
            string mySelectQuery = "SELECT * FROM REPS;";

            // create a database connection object using the connection string
            OleDbConnection myConnection = new OleDbConnection(myConnectionString);

            // create a database command on the connection using query
            OleDbCommand myCommand = new OleDbCommand(mySelectQuery, myConnection);

            Chart Chart1 = new Chart();

            // set chart data source
            Chart1.DataSource = myCommand;

            // set series members names for the X and Y values
            Chart1.Series["Series 1"].XValueMember = "Name";
            Chart1.Series["Series 1"].YValueMembers = "Sales";

            // data bind to the selected data source
            Chart1.DataBind();
            myCommand.Dispose();
            myConnection.Close();

            host.Child = Chart1;
            this.grid1.Children.Add(host);
        }

    Shaik
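    One thing worth noting is that the posted code constructs the Chart from scratch but never adds a "Series 1" series or a chart area to it, so the Series["Series 1"] lookup has nothing to bind to; in the Windows Forms sample those may already exist on the designer-created chart. A hedged sketch of creating them explicitly before binding (names and chart type are just illustrative):

        using System.Windows.Forms.DataVisualization.Charting;

        Chart Chart1 = new Chart();

        // A chart needs at least one ChartArea to render anything.
        Chart1.ChartAreas.Add(new ChartArea("Default"));

        // Create the series the rest of the code refers to by name.
        Series series = Chart1.Series.Add("Series 1");
        series.ChartType = SeriesChartType.Column;   // illustrative choice

        // Now the member bindings have a series to apply to.
        Chart1.Series["Series 1"].XValueMember = "Name";
        Chart1.Series["Series 1"].YValueMembers = "Sales";
        Chart1.DataSource = myCommand;               // or a filled DataTable
        Chart1.DataBind();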

    Read the article

  • Memory leak when using Workflow 4.0 SqlWorkflowInstanceStore and PersistableIdleAction.Unload

    - by Rohland
    Hi, This particular problem is driving me nuts. I wonder if anyone has experienced a similar problem. If I load up a workflow then unload it and perform a memory snapshot then the result is predictable - my workflow is no longer in memory. However, if I load up a workflow and set the PersistableIdle action to PersistableIdleAction.Unload and let the workflow idle the workflow remains in memory even though the Unload action fires. I used ANTS Memory Profiler to debug this issue. This is the object retention graph outputted showing that an internal object is hanging onto my workflow instance. Can anyone else verify this problem? My code amounts to the following: Create SqlWorkflowInstanceStore and setup lock owner handle -- At this point I take a memory snapshot Create an instance of Workflow1 Set the PersistableIdle action Apply the instancestore to Workflow1 Setup action event handlers for Idle, Unload, UnhandledException etc. Persist the workflow instance Run the workflow instance Wait for instance to idle (caused by Delay activity) Ensure the Unload action is fired -- At this point I take a second memory snapshot From the above image, it is clear that the only object referencing Workflow1 is some internal event handlers result which I have no ability to dispose of. Any clues?

    Read the article

  • Force a UIView to redraw immediately, instead of during next run loop

    - by Justin Kent
    I've created a UIImagePicker / camera view, with a toolbar and custom button for taking a snapshot. I can't really change to using the default way because of the custom button, and I'm drawing on top of the view. When you hit the button, I want to take a screenshot using UIGetScreenImage(); however, the toolbar is showing up in the image, even if I hide it first: //hide the toolbar self.toolbar.hidden = YES; // capture the screen pixels CGImageRef screenCap = UIGetScreenImage(); I'm pretty sure this is because even though the toolbar is hidden, it gets redrawn once the function returns and we enter the next run loop - after UIGetScreenImage is called. I tried making the following addition, but it didn't help: //hide the toolbar self.toolbar.hidden = YES; [self.toolbar drawRect:CGRectMake(0, 0, 320, 52)]; // capture the screen pixels CGImageRef screenCap = UIGetScreenImage(); I also tried using setNeedsDisplay, but that doesn't work either because once again the draw happens after the current function returns. Any suggestions? Thanks!

    Read the article

  • Pixel Perfect Collision Detection in HTML5 Canvas

    - by Armin Ronacher
    Hi, I want to check for a collision between two sprites in HTML5 canvas. For the sake of the discussion, let's assume that both sprites are IMG objects and that a collision means the alpha channel is not 0. Both of these sprites can have a rotation around the object's center but no other transformation, in case this makes it any easier. Now, the obvious solution I came up with would be this: calculate the transformation matrix for both; figure out a rough estimate of the area the code should test (like the offset of both plus calculated extra space for the rotation); then, for all the pixels in the intersecting rectangle, transform the coordinate and test the image at the calculated position (rounded to nearest neighbor) for the alpha channel, aborting on the first hit. The problems I see with that are that a) there are no matrix classes in JavaScript, which means I have to do the math in JavaScript, which could be quite slow, and b) I have to test for collisions every frame, which makes this pretty expensive. Furthermore, I have to replicate something I already have to do on drawing (or that canvas does for me, setting up the matrices). I wonder if I'm missing anything here and if there is an easier solution for collision detection.
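    The per-pixel test described above is language-agnostic; the sketch below illustrates the loop structure with System.Drawing types (inverse-rotate each candidate point into each sprite's local space, then sample the alpha channel), purely to show the algorithm rather than canvas-specific code:

        using System;
        using System.Drawing;

        static class PixelCollision
        {
            // True if any point in 'overlap' maps to a non-transparent pixel in both sprites.
            // Each sprite: its bitmap, its top-left offset in world space, and its rotation
            // (radians) about its own center.
            public static bool Test(Bitmap a, PointF posA, float angA,
                                    Bitmap b, PointF posB, float angB,
                                    Rectangle overlap)
            {
                for (int y = overlap.Top; y < overlap.Bottom; y++)
                    for (int x = overlap.Left; x < overlap.Right; x++)
                        if (Alpha(a, posA, angA, x, y) > 0 && Alpha(b, posB, angB, x, y) > 0)
                            return true;   // abort on first hit
                return false;
            }

            // Inverse-transform a world point into the sprite's unrotated image space and sample it.
            // (GetPixel is slow; a real implementation would lock bits, but that's beside the point here.)
            static int Alpha(Bitmap img, PointF pos, float angle, int wx, int wy)
            {
                double cx = pos.X + img.Width / 2.0, cy = pos.Y + img.Height / 2.0;
                double dx = wx - cx, dy = wy - cy;
                double cos = Math.Cos(-angle), sin = Math.Sin(-angle);   // rotate back
                int ix = (int)Math.Round(cos * dx - sin * dy + img.Width / 2.0);
                int iy = (int)Math.Round(sin * dx + cos * dy + img.Height / 2.0);
                if (ix < 0 || iy < 0 || ix >= img.Width || iy >= img.Height) return 0;
                return img.GetPixel(ix, iy).A;   // nearest-neighbor alpha sample
            }
        }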

    Read the article

  • How to add Custom View +Relative Layout into ViewGroup

    - by TimothyMiller
    Hi I am creating a View where you can draw on the screen, using a view, where I would like to have a button/titlebar drawn at the top of the screen. Here is my current code public class FingerPaint extends Activity implements ColorPickerDialog.OnColorChangedListener { private Paint mPaint; private MyView mView; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); LinearLayout main = new LinearLayout(this); mView = new MyView(this); main.addView(this.getLayoutInflater().inflate( R.layout.topbar, null )); main.addView(mView); main.bringChildToFront(mView); setContentView(main); // mView.addView(this.getLayoutInflater().inflate( R.layout.topbar, null )); mPaint = new Paint(); mPaint.setAntiAlias(true); mPaint.setDither(true); mPaint.setColor(0xFFFF0000); mPaint.setStyle(Paint.Style.STROKE); mPaint.setStrokeJoin(Paint.Join.ROUND); mPaint.setStrokeCap(Paint.Cap.ROUND); mPaint.setStrokeWidth(12); mBitmaps=new Bitmap[100]; location=0; actualSize=0; mEmboss = new EmbossMaskFilter(new float[] { 1, 1, 1 }, 0.4f, 6, 3.5f); mBlur = new BlurMaskFilter(8, BlurMaskFilter.Blur.NORMAL); setContentView(main); } public class MyView extends View{ ......... } But when run, only the topbar.xml view is shown. I want the status bar from topbar and the rest down to be from the myView (for drawing on the screen like paint). Am I using ViewGroup properly?

    Read the article

  • What is the target color profile in Image.FromFile?

    - by Jan Zich
    I am curious what the useEmbeddedColorManagement parameter in System.Drawing.Image.FromFile actually does. This parameter directly corresponds to a GDI+ parameter in the same method of the same class, so debugging the .NET source does not lead anywhere. If my understanding of color profiles is correct, a color profile is basically a mapping which describes how particular RGB triples (or CMYK or something else) map into the so-called Profile Connection Space (CIELAB or CIEXYZ). Now, if I open an image with an embedded color profile in .NET, setting useEmbeddedColorManagement to true, my experience is that I get an image whose RGB values are not exactly the same as the original values in the file, i.e. it is transformed. Since the original image was RGB and the new one is also RGB, there must have been a transformation from the embedded color profile to a Profile Connection Space and then back to RGB. The thing which I don't understand is: what is the target color system? Is it some default Windows color profile? Is it the current monitor profile? Is it sRGB?
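    A small sketch of the kind of experiment described above, loading the same file with and without embedded color management and comparing a pixel; the file name is an assumption and the image needs to actually carry an embedded ICC profile for the values to differ:

        using System;
        using System.Drawing;

        // Compare the same pixel with and without the embedded profile applied.
        using (Bitmap managed = (Bitmap)Image.FromFile("photo-with-profile.jpg", true))
        using (Bitmap raw = (Bitmap)Image.FromFile("photo-with-profile.jpg", false))
        {
            Color a = managed.GetPixel(0, 0);
            Color b = raw.GetPixel(0, 0);
            // If the values differ, the decode path converted through the embedded profile
            // into some target RGB space -- which is exactly the question being asked.
            Console.WriteLine("managed: " + a + "  raw: " + b);
        }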

    Read the article

  • C# Check if character exists in encoding

    - by Alvin Wong
    I am writing a program, part of which renders a bitmap font in CP437. In the function that renders the text, I want to be able to check whether a char is available in CP437 before the encoding conversion, like:

        public static void DrawCharacter(this Graphics g, char c)
        {
            if (char_exist_in_encoding(Encoding.GetEncoding(437), c))
            {
                byte[] src = Encoding.Unicode.GetBytes(c.ToString());
                byte[] dest = Encoding.Convert(Encoding.Unicode, Encoding.GetEncoding(437), src);
                DrawCharacter(g, dest[0]); // Call the void(this Graphics, byte) overload
            }
        }

    Without the check, any character outside CP437 will result in a '?' (63, 0x3F). I want to hide any invalid characters completely. Is there an implementation of char_exist_in_encoding other than the following stupid approach?

        public static bool char_exist_in_encoding(Encoding e, char c)
        {
            if (c == '?')
                return true;
            byte[] src = Encoding.Unicode.GetBytes(c.ToString());
            byte[] dest = Encoding.Convert(Encoding.Unicode, Encoding.GetEncoding(437), src);
            if (dest[0] == 0x3F)
                return false;
            return true;
        }

    Perhaps not very relevant, but the bitmap is created like this:

        Bitmap b = new Bitmap(256 * 8, 16);
        Graphics g = Graphics.FromImage(b);
        g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.SingleBitPerPixelGridFit;
        Font f = new Font("Whatever 8x16 bitmap font", 16, GraphicsUnit.Pixel);
        for (byte i = 0; i < 255; i++)
        {
            byte[] arr = Encoding.Convert(Encoding.GetEncoding(437), Encoding.Unicode, new byte[] { i });
            char c = Encoding.Unicode.GetChars(arr)[0];
            g.DrawString(c.ToString(), f, Brushes.Black, i * 8 - 3, 0); // Don't know why it needs a 3px offset
        }
        b.Save(@"D:\chars.png");
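    One alternative to the '?' sentinel check is to ask the encoder itself: Encoding.GetEncoding accepts fallback objects, and with an EncoderExceptionFallback any unmappable character throws instead of silently becoming 0x3F. A hedged sketch of that approach:

        using System.Text;

        static readonly Encoding Cp437Strict = Encoding.GetEncoding(
            437, new EncoderExceptionFallback(), new DecoderExceptionFallback());

        public static bool CharExistsInCp437(char c)
        {
            try
            {
                Cp437Strict.GetBytes(new[] { c });   // throws EncoderFallbackException if unmappable
                return true;
            }
            catch (EncoderFallbackException)
            {
                return false;
            }
        }

    This avoids special-casing '?' and distinguishes a genuine question mark from a replacement character.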

    Read the article

  • How do you use asynchronous ORMs without huge callback chains?

    - by hornairs
    I'm using the relatively immature Joose Javascript ORM plugin (project page) to persist objects in an Appcelerator Titanium (company page) mobile project. Since it's client side storage, the application has to check to see if the database is initialized before starting up the ORM since it inspects the DB tables to construct the classes. My problem is that this sequence of operations (and if this one is like this, other things down the road) takes a lot of callbacks to complete. I have a lot of jumping around in the code that isn't apparent to a maintainer and results in some complex call graphs and whatnot. So, I ask these questions: How would you asynchronously initialize a database and populate it with seed data using an ORM that needs the schema to be correct to function? Do you have any general strategies or links for async/event driven programming and keeping the call graph simple and understandable? Do you have any suggestions for Javascript ORMs/meta object systems that work with HTML 5 as a storage engine and are hopefully framework agnostic? Am I just a big newb and should be able to work this out with ease? Thanks folks!

    Read the article

  • App crashes only after second execution only in Release configuration

    - by denbec
    Hey all, i know this is probably not an easy question to answer, as it's hard to describe on my hand. I have an app that runs without problems on the device in Debug Configuration (also multiple times). Once I put it into Release Configuration (which I need before publishing?), the app starts without problems and I can proceed to the next page, where I show an core-plot graph. BUT only if I run it from xcode. As soon as I end the App and start it again, it opens without problems, but on the next page, it crashes. Now I don't have anything to debug other than the crash report: Exception Type: EXC_BAD_ACCESS (SIGSEGV) Exception Codes: KERN_INVALID_ADDRESS at 0xcf10000a Crashed Thread: 0 Thread 0 Crashed: 0 libobjc.A.dylib 0x000026f2 objc_msgSend + 14 1 StandbyCheck 0x0001fbea -[CPXYTheme newGraph] (CPXYTheme.m:36) 2 StandbyCheck 0x00007c06 -[SCGraphCell initWithStyle:reuseIdentifier:] (SCGraphCell.m:28) 3 StandbyCheck 0x00076b4a -[TTTableViewDataSource tableView:cellForRowAtIndexPath:] (TTTableViewDataSource.m:128) 4 UIKit 0x0007797a -[UITableView(UITableViewInternal) _createPreparedCellForGlobalRow:withIndexPath:] + 514 5 UIKit 0x000776b0 -[UITableView(UITableViewInternal) _createPreparedCellForGlobalRow:] + 28 6 UIKit 0x00037e78 -[UITableView(_UITableViewPrivate) _updateVisibleCellsNow] + 940 7 UIKit 0x000367d4 -[UITableView layoutSubviews] + 176 8 StandbyCheck 0x000734b8 -[TTTableView layoutSubviews] (TTTableView.m:226) [...] Now, can someone point in any direction? What are the differences in Debug/Release Modes? How could I possibly debug this failure? I've been searching for hours now, please help me :( Thanks, Dennis

    Read the article

  • Problem with my whiteboard application

    - by swift
    I have to develop a whiteboard application in which both the local user and the remote user should be able to draw simultaneously, is this possible? If possible then any logic? I have already developed a code but in which i am not able to do this, when the remote user starts drawing the shape which i am drawing is being replaced by his shape and co-ordinates. This problem is only when both draw simultaneously. any idea guys? Here is my code class Paper extends JPanel implements MouseListener,MouseMotionListener,ActionListener { static BufferedImage image; int bpressed; Color color; Point start; Point end; Point mp; Button elipse=new Button("elipse"); Button rectangle=new Button("rect"); Button line=new Button("line"); Button empty=new Button(""); JButton save=new JButton("Save"); JButton erase=new JButton("Erase"); String selected; int ex,ey;//eraser DatagramSocket dataSocket; JButton button = new JButton("test"); Client client; Point p=new Point(); int w,h; public Paper(DatagramSocket dataSocket) { this.dataSocket=dataSocket; client=new Client(dataSocket); System.out.println("paper"); setBackground(Color.white); addMouseListener(this); addMouseMotionListener(this); color = Color.black; setBorder(BorderFactory.createLineBorder(Color.black)); //save.setPreferredSize(new Dimension(100,20)); save.setMaximumSize(new Dimension(75,27)); erase.setMaximumSize(new Dimension(75,27)); } public void paintComponent(Graphics g) { try { g.drawImage(image, 0, 0, this); Graphics2D g2 = (Graphics2D)g; g2.setPaint(Color.black); if(selected==("elipse")) g2.drawOval(start.x, start.y,(end.x-start.x),(end.y-start.y)); else if(selected==("rect")) g2.drawRect(start.x, start.y, (end.x-start.x),(end.y-start.y)); else if(selected==("line")) g2.drawLine(start.x,start.y,end.x,end.y); } catch(Exception e) {} } //Function to draw the shape on image public void draw() { Graphics2D g2 = image.createGraphics(); g2.setPaint(color); if(selected=="line") g2.drawLine(start.x, start.y, end.x, end.y); if(selected=="elipse") g2.drawOval(start.x, start.y, (end.x-start.x),(end.y-start.y)); if(selected=="rect") g2.drawRect(start.x, start.y, (end.x-start.x),(end.y-start.y)); repaint(); g2.dispose(); start=null; } //To add the point to the board which is broadcasted by the server public synchronized void addPoint(Point ps,String varname,String shape,String event) { try { if(end==null) end = new Point(); if(start==null) start = new Point(); if(shape.equals("elipse")) selected="elipse"; else if(shape.equals("line")) selected="line"; else if(shape.equals("rect")) selected="rect"; else if(shape.equals("erase")) { selected="erase"; erase(); } if(end!=null && start!=null) { if(varname.equals("end")) end=ps; if(varname.equals("mp")) mp=ps; if(varname.equals("start")) start=ps; if(event.equals("drag")) repaint(); else if(event.equals("release")) draw(); } } catch(Exception e) { e.printStackTrace(); } } //To set the size of the image public void setWidth(int x,int y) { System.out.println("("+x+","+y+")"); w=x; h=y; image = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB); Graphics2D g2 = image.createGraphics(); g2.setPaint(Color.white); g2.fillRect(0,0,w,h); g2.dispose(); } //Function which provides the erase functionality public void erase() { Graphics2D pic=(Graphics2D) image.getGraphics(); pic.setPaint(Color.white); pic.fillRect(start.x, start.y, 10, 10); } //Function to add buttons into the panel, calling this function returns a panel public JPanel addButtons() { JPanel buttonpanel=new JPanel(); JPanel row1=new JPanel(); JPanel 
row2=new JPanel(); JPanel row3=new JPanel(); JPanel row4=new JPanel(); buttonpanel.setPreferredSize(new Dimension(80,80)); //buttonpanel.setMinimumSize(new Dimension(150,150)); row1.setLayout(new BoxLayout(row1,BoxLayout.X_AXIS)); row1.setPreferredSize(new Dimension(150,150)); row2.setLayout(new BoxLayout(row2,BoxLayout.X_AXIS)); row3.setLayout(new BoxLayout(row3,BoxLayout.X_AXIS)); row4.setLayout(new BoxLayout(row4,BoxLayout.X_AXIS)); buttonpanel.setLayout(new BoxLayout(buttonpanel,BoxLayout.Y_AXIS)); elipse.addActionListener(this); rectangle.addActionListener(this); line.addActionListener( this); save.addActionListener( this); erase.addActionListener( this); buttonpanel.add(Box.createRigidArea(new Dimension(10,10))); row1.add(elipse); row1.add(Box.createRigidArea(new Dimension(5,0))); row1.add(rectangle); buttonpanel.add(row1); buttonpanel.add(Box.createRigidArea(new Dimension(10,10))); row2.add(line); row2.add(Box.createRigidArea(new Dimension(5,0))); row2.add(empty); buttonpanel.add(row2); buttonpanel.add(Box.createRigidArea(new Dimension(10,10))); row3.add(save); buttonpanel.add(row3); buttonpanel.add(Box.createRigidArea(new Dimension(10,10))); row4.add(erase); buttonpanel.add(row4); return buttonpanel; } //To save the image drawn public void save() { try { ByteArrayOutputStream bos = new ByteArrayOutputStream(); JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder(bos); JFileChooser fc = new JFileChooser(); fc.showSaveDialog(this); encoder.encode(image); byte[] jpgData = bos.toByteArray(); FileOutputStream fos = new FileOutputStream(fc.getSelectedFile()+".jpeg"); fos.write(jpgData); fos.close(); //add replce confirmation here } catch (IOException e) { System.out.println(e); } } public void mouseClicked(MouseEvent arg0) { // TODO Auto-generated method stub } @Override public void mouseEntered(MouseEvent arg0) { } public void mouseExited(MouseEvent arg0) { // TODO Auto-generated method stub } public void mousePressed(MouseEvent e) { if(selected=="line"||selected=="erase") { start=e.getPoint(); client.broadcast(start,"start", selected,"press"); } else if(selected=="elipse"||selected=="rect") { mp = e.getPoint(); client.broadcast(mp,"mp", selected,"press"); } } public void mouseReleased(MouseEvent e) { if(start!=null) { if(selected=="line") { end=e.getPoint(); client.broadcast(end,"end", selected,"release"); } else if(selected=="elipse"||selected=="rect") { end.x = Math.max(mp.x,e.getX()); end.y = Math.max(mp.y,e.getY()); client.broadcast(end,"end", selected,"release"); } draw(); } //start=null; } public void mouseDragged(MouseEvent e) { if(end==null) end = new Point(); if(start==null) start = new Point(); if(selected=="line") { end=e.getPoint(); client.broadcast(end,"end", selected,"drag"); } else if(selected=="erase") { start=e.getPoint(); erase(); client.broadcast(start,"start", selected,"drag"); } else if(selected=="elipse"||selected=="rect") { start.x = Math.min(mp.x,e.getX()); start.y = Math.min(mp.y,e.getY()); end.x = Math.max(mp.x,e.getX()); end.y = Math.max(mp.y,e.getY()); client.broadcast(start,"start", selected,"drag"); client.broadcast(end,"end", selected,"drag"); } repaint(); } @Override public void mouseMoved(MouseEvent arg0) { // TODO Auto-generated method stub } public void actionPerformed(ActionEvent e) { if(e.getSource()==elipse) selected="elipse"; if(e.getSource()==line) selected="line"; if(e.getSource()==rectangle) selected="rect"; if(e.getSource()==save) save(); if(e.getSource()==erase) { selected="erase"; erase(); } } } class Button extends JButton { String name; 
public Button(String name) { this.name=name; Dimension buttonSize = new Dimension(35,35); setMaximumSize(buttonSize); } public void paintComponent(Graphics g) { super.paintComponent(g); Graphics2D g2 = (Graphics2D)g; g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON); //g2.setStroke(new BasicStroke(1.2f)); if (name == "line") g.drawLine(5,5,30,30); if (name == "elipse") g.drawOval(5,7,25,20); if (name== "rect") g.drawRect(5,5,25,23); } }

    Read the article

  • Problem with using APACHE-POI to convert PPT to Image

    - by SpawnCxy
    Hi all, I got a problem when I try to use Apache POI project to convert my PPT to Images.My code as follows: FileInputStream is = new FileInputStream("test.ppt"); SlideShow ppt = new SlideShow(is); is.close(); Dimension pgsize = ppt.getPageSize(); Slide[] slide = ppt.getSlides(); for (int i = 0; i < slide.length; i++) { BufferedImage img = new BufferedImage(pgsize.width, pgsize.height, BufferedImage.TYPE_INT_RGB); Graphics2D graphics = img.createGraphics(); //clear the drawing area graphics.setPaint(Color.white); graphics.fill(new Rectangle2D.Float(0, 0, pgsize.width, pgsize.height)); //render slide[i].draw(graphics); //save the output FileOutputStream out = new FileOutputStream("slide-" + (i+1) + ".png"); javax.imageio.ImageIO.write(img, "png", out); out.close(); It works fine except that all Chinese words are converted to some squares.The png image I got is like following image: Then how can I fix this?Thanks in advance!

    Read the article
