Search Results

Search found 5521 results on 221 pages for 'replacement parts'.


  • FFMPEG dropping frames while encoding JPEG sequence at color change

    - by Matt
    I'm trying to put together a slide show using imagemagick and FFMPEG. I use imagemagick to expand a single photo into 30fps video (imagemagick also handles things like putting some text captions on the frames along the way). When I go to let ffmpeg digest it into a video it clips along nicely on the color parts of the video, but when it gets to a black and white section it reports "frame= 2030 fps=102 q=32766.0 Lsize= 5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703" and drops every frame of video until it hits something with color. As you can imagine this results in entire photos being removed from the slideshow. Here is my latest dump... ffmpeg -y -r 30 -i "teststream/%06d.jpg" -c:v libx264 -r 30 newffmpeg.mp4 ffmpeg version git-2012-12-10-c3bb333 Copyright (c) 2000-2012 the FFmpeg developers built on Dec 10 2012 22:02:04 with gcc 4.6.1 (Ubuntu/Linaro 4.6.1-9ubuntu3) configuration: --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libx264 --enable-nonfree --enable-version3 libavutil 52. 12.100 / 52. 12.100 libavcodec 54. 79.101 / 54. 79.101 libavformat 54. 49.100 / 54. 49.100 libavdevice 54. 3.102 / 54. 3.102 libavfilter 3. 26.101 / 3. 26.101 libswscale 2. 1.103 / 2. 1.103 libswresample 0. 17.102 / 0. 17.102 libpostproc 52. 2.100 / 52. 2.100 Input #0, image2, from 'teststream/%06d.jpg': Duration: 00:12:02.80, start: 0.000000, bitrate: N/A Stream #0:0: Video: mjpeg, yuvj444p, 720x480 [SAR 72:72 DAR 3:2], 25 fps, 25 tbr, 25 tbn, 25 tbc [libx264 @ 0x3450140] using SAR=1/1 [libx264 @ 0x3450140] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 [libx264 @ 0x3450140] profile High, level 3.0 [libx264 @ 0x3450140] 264 - core 129 r2 1cffe9f - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, mp4, to 'newffmpeg.mp4': Metadata: encoder : Lavf54.49.100 Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuvj420p, 720x480 [SAR 1:1 DAR 3:2], q=-1--1, 15360 tbn, 30 tbc Stream mapping: Stream #0:0 - #0:0 (mjpeg - libx264) Press [q] to stop, [?] 
for help Input stream #0:0 frame changed from size:720x480 fmt:yuvj444p to size:720x480 fmt:yuvj422p Input stream #0:0 frame changed from size:720x480 fmt:yuvj422p to size:720x480 fmt:yuvj444pp=584 frame= 2030 fps=102 q=32766.0 Lsize= 5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703 video:5179kB audio:0kB subtitle:0 global headers:0kB muxing overhead 0.472425% [libx264 @ 0x3450140] frame I:9 Avg QP:20.10 size: 33933 [libx264 @ 0x3450140] frame P:636 Avg QP:24.12 size: 6737 [libx264 @ 0x3450140] frame B:1385 Avg QP:27.04 size: 514 [libx264 @ 0x3450140] consecutive B-frames: 2.5% 15.2% 13.2% 69.2% [libx264 @ 0x3450140] mb I I16..4: 8.3% 80.3% 11.5% [libx264 @ 0x3450140] mb P I16..4: 1.5% 2.5% 0.2% P16..4: 41.7% 18.0% 10.3% 0.0% 0.0% skip:25.9% [libx264 @ 0x3450140] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 26.6% 0.6% 0.1% direct: 0.2% skip:72.3% L0:35.0% L1:60.3% BI: 4.7% [libx264 @ 0x3450140] 8x8 transform intra:64.1% inter:75.1% [libx264 @ 0x3450140] coded y,uvDC,uvAC intra: 51.6% 78.0% 43.7% inter: 10.6% 14.9% 2.1% [libx264 @ 0x3450140] i16 v,h,dc,p: 29% 19% 6% 46% [libx264 @ 0x3450140] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 15% 17% 5% 9% 10% 7% 8% 6% [libx264 @ 0x3450140] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 18% 11% 5% 9% 10% 6% 6% 4% [libx264 @ 0x3450140] i8c dc,h,v,p: 46% 18% 24% 12% [libx264 @ 0x3450140] Weighted P-Frames: Y:20.1% UV:18.7% [libx264 @ 0x3450140] ref P L0: 59.2% 23.2% 13.1% 4.3% 0.2% [libx264 @ 0x3450140] ref B L0: 88.7% 8.3% 3.0% [libx264 @ 0x3450140] ref B L1: 95.0% 5.0% [libx264 @ 0x3450140] kb/s:626.88 Received signal 2: terminating. One last note: If I remove the -r 30 from the input and output it works flawlessly. I have no idea why the -r 30 is causing it to freak out.

    Read the article

  • Windows 7 disk errors after a few hours of runtime

    - by GFK
    I'm having trouble understanding what is going on with my work PC. Whenever I boot it, it runs fine for a while, then starts to randomly show disk errors. The displayed error often contains the message "not enough storage is available to process this command", although depending on the application that fails it can be different. This has happened for weeks now and is getting worse. This is what troubles me: It never seems to impact critical parts of the system (no BSOD, no freeze). Only some applications seem impacted, refusing to function correctly after a while: Outlook 2010 cannot download RSS feeds anymore, Firefox 6 or IE9 cannot download anything bigger than 3MB without failing, Windows Update fails, all msi installers fail, Visual Studio 2010 starts failing in weird manners... It only happens after a while using it (typically 3 hours, but it seems that installing a program or compiling several times makes it shorter) Rebooting solves it (temporarily). The system: The OS is Windows 7 Pro Spanish SP1, 32 bits The system is an HP Compaq 6000 Pro with 4 GB memory (only 3.4GB usable since the system is 32bit), one 500GB hard drive. Installed applications include: Visual Studio 2010, SQL Server 2008 R2, VMWare Workstation 7, Microsoft Security Essentials, Office 2010. Shutting down all related services and processes doesn't seem to change anything. The diagnostics I've run so far: Hard drive : 465GB, 165GB free Process Explorer : physical and virtual memory seem ok (pagefile is 5.3GB, physical memory usage 70%, system commit 39%) Windows Memory diagnostic tool: OK CHKDSK returned: 488282111 KB total disk space. 281668248 KB in 265779 files. 150188 KB in 62949 indexes. 0 KB in bad sectors. 571755 KB in use by the system. The log file has occupied 65536 kilobytes. 205891920 KB available on disk. For non-spanish speakers, that means all ok. SMART diagnostic tools (DiskCheckup) report all values normal. temperatures are in the normal range (HWinfo). The event viewer doesn't seem to contain any significant message. ran CCleaner 3, without any noticeable effect. I was thinking about some file number limit (between Visual Studio projects and other applications, there are around 300.000 files on the hard drive), but I couldn't find any. It's possible there is something related with the use of the temporary folders (it's the only explanation I have for why applications fail but Windows doesn't), but I cannot confirm that. Only thing I cannot find out is if chkdsk reporting 65MB for the log is normal. It seems since Vista it always reports this. Any other cleaning/diagnostic tool you might know of? Edit: I ran several other tools since I first published the question: Seagate SeaTools (the HD manufacturer's analysis tool): complete test run OK. Intel Rapid 10.1 (the HD controller manufacturer's troubleshooting tool): the HD's ok. Microsoft Desktop Heap Monitor: Desktop Heap Information Monitor Tool (Version 8.1.2925.0) Copyright (c) Microsoft Corporation. All rights reserved. 
Session ID: 1 Total Desktop: ( 46464 KB - 11 desktops) WinStation\Desktop Heap Size(KB) Used Rate(%) WinSta0\Winlogon (s1) 128 3.6 WinSta0\Disconnect (s1) 64 3.8 WinSta0\Default (s1) 20480 3.0 msswindowstation\mssrestricteddesk (s0) 1024 0.2 __X78B95_89_IW__A8D9S1_42_ID (s0) 1024 0.2 Service-0x0-3e5$\Default (s0) 1024 0.6 Service-0x0-3e4$\Default (s0) 1024 0.3 Service-0x0-3e7$\Default (s0) 1024 2.1 WinSta0\Winlogon (s0) 128 1.9 WinSta0\Disconnect (s0) 64 3.8 WinSta0\Default (s0) 20480 0.0 All ok, desktop heap usage < 5% Edit 2: I tried totally resetting my account by creating a new one, logging under this new one and delete the first one (local rights and files), then logging back with this deleted account (it is a domain account). No luck. Also, I found out often the error is "not enough storage is available to process this command". Searching on the internet, I found an old troubleshooting tip (setting a registry key to raise the IRP stack limit, whatever it is) which did not change anything.

    Read the article

  • Hard drive write speed - finding a lighter antivirus?

    - by Shingetsu
    I recently have been getting a lot of system lag here (for example, the mouse and the display in general take about 15 seconds to react in the worst cases). After a lot of monitoring the resources, I found that the problem mainly happens when too much Disk I/O is being done. Three culprits have been identified: My browser had the highest write I/O with 35,000,000 I/O Write Bytes. Steam had the highest read I/O (when IDLE!!!) with 106,000,000 I/O Read Bytes. My antivirus (in both cases I will soon mention) was the runner up in both cases with: 30,000,000ish write and 80,000,000ish read. The first AV I had was Avast! which I had liked on my previous system. After noticing it taking so much I/O I switched to Panda (supposing it wouldn't use TOO much during idle phase). However it only used a bit less I/O. Just a lot less memory and cpu and somewhat more network. My browser at the moment is Maxthon 3 (which I like a lot). Before this I was running chrome which had similar data and much higher cpu when running in the background was enabled. I'm not going to be running steam all the time and there aren't many alternatives to it. I like my browser very much, but I AM willing to switch if there's an obvious problem (I'm in programming, however I'm not a very good sysadmin, especially not when it comes to windows). Finally, my system almost stops lagging when I turn off the antivirus (and preferably steam) (some remains but once in every 5-6 hours for a few seconds so it isn't a big problem). My question (has a few parts): Is it possible to configure steam to lower it's I/O usage? (and maybe network while we're at it?) Which antivirus (very preferably free) uses lowest I/O while idle (I leave PC alone during active scans so that isn't a problem). Is there an obvious problem with my current browser and, if so, is there a way to fix it or should I switch and, if so, to what? (P.S. I've been on FFox for some time too). Info on system: Windows 7 (32 bit T_T, I am getting a new one in a few months but I want to keep using system during that time though). Hard Drive (main) is a Raid0. (Also have an external 1TB one which contains steam (and steam alone). As such it doesn't get used by much anything other than steam and isn't a very large problem. However steam still uses some I/O of registry) CPU: Intel(R) Core(TM)2 CPU [email protected] RAM: 6GB (3.25GB usable) (this and CPU have little effect as shown in next section) Additional info: Memory usage during problematic times: 44% CPU usage during problematic times: 35% Page File: main drive: system managed. 1TB drive: none. The current system I'm using is about 6 years old and is mainly a place holder while I await the new one in a few months. Final words: this is my 1st post on Super User (this question wouldn't feel right on Stack Overflow where I usually stay). If it doesn't have it's place here please tell me. If anything is wrong with it, same. Edit Technically I'm looking for a live thread detection program with minimal IO usage. I already have good active scan capability: Kaspersky (the free scanner uses the paid database) and MalwareBytes. Edit 2 Noticed another one, it seems that windows media player has been using stuff even when off! Turning it off and restarting now. If the problem is fixed I'll tell you guys. The reason I didn't notice it before was because I didn't have resource manager in front of me at the MOMENT of the problem. Now I did and it was at the very top of the list!

    Read the article

  • WPF - ListView within ListView scrollbar problem

    - by Josh
    So I currently have a ListView "List A" who's items each have an expander control which contains another ListView "List B". The problem I am having is that sometimes List B grows so big that it extends beyond the range of List A's view area. List A's scroll bars do not pick up the fact that parts of List B are not being displayed. Is there a way to setup my xaml so that the scroll bars for List A will detect when the list inside the expander is longer then the viewing area of List A. Here is the section of code I need to modify: <TabItem Header="Finished"> <TabItem.Resources> <ResourceDictionary> <DataTemplate x:Key="EpisodeItem"> <DockPanel Margin="30,3"> <TextBlock Text="{Binding Title}" DockPanel.Dock="Left" /> <WrapPanel Margin="10,0" DockPanel.Dock="Right"> <TextBlock Text="Finished at: " /> <TextBlock Text="{Binding TimeAdded}" /> </WrapPanel> </DockPanel> </DataTemplate> <DataTemplate x:Key="AnimeItem"> <DockPanel Margin="5,10"> <Image Height="75" Width="Auto" Source="{Binding ImagePath}" DockPanel.Dock="Left" VerticalAlignment="Top"/> <Expander Template="{StaticResource AnimeExpanderControlTemplate}" > <Expander.Header> <TextBlock FontWeight="Bold" Text="{Binding AnimeTitle}" /> </Expander.Header> <ListView ItemsSource="{Binding Episodes}" ItemTemplate="{StaticResource EpisodeItem}" BorderThickness="0,0,0,0" /> </Expander> </DockPanel> </DataTemplate> </ResourceDictionary> </TabItem.Resources> <ListView Name="finishedView" ItemsSource="{Binding UploadedAnime, diagnostics:PresentationTraceSources.TraceLevel=High}" ItemTemplate="{StaticResource AnimeItem}" /> </TabItem> List A is the ListView with name "finishedView" and List B is the ListView with ItemSource "Episodes"

    Read the article

  • Troubleshooting .NET "Fatal Execution Engine Error"

    - by JYelton
    Summary: I periodically get a .NET Fatal Execution Engine Error on an application which I cannot seem to debug. The dialog that comes up only offers to close the program or send information about the error to Microsoft. I've tried looking at the more detailed information but I don't know how to make use of it. Error: The error is visible in Event Viewer under Applications and is as follows: .NET Runtime version 2.0.50727.3607 - Fatal Execution Engine Error (7A09795E) (80131506) The computer running it is Windows XP Professional SP 3. (Intel Core2Quad Q6600 2.4GHz w/ 2.0 GB of RAM) Other .NET-based projects that lack multi-threaded downloading (see below) seem to run just fine. Application: The application is written in C#/.NET 3.5 using VS2008, and installed via a setup project. The app is multi-threaded and downloads data from multiple web servers using System.Net.HttpWebRequest and its methods. I've determined that the .NET error has something to do with either threading or HttpWebRequest but I haven't been able to get any closer as this particular error seems impossible to debug. I've tried handling errors on many levels, including the following in Program.cs: // handle UI thread exceptions Application.ThreadException += new ThreadExceptionEventHandler(Application_ThreadException); // handle non-UI thread exceptions AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException); Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); // force all windows forms errors to go through our handler Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException); More Notes and What I've Tried... Installed Visual Studio 2008 on the target machine and tried running in debug mode, but the error still occurs, with no hint as to where in source code it occurred. When running the program from its installed version (Release) the error occurs more frequently, usually within minutes of launching the application. When running the program in debug mode inside of VS2008, it can run for hours or days before generating the error. Reinstalled .NET 3.5 and made sure all updates are applied. Broke random cubicle objects in frustration. Rewritten parts of code that deal with threading and downloading in attempts to catch and log exceptions, though logging seemed to aggravate the problem (and never provided any data). Question: What steps can I take to troubleshoot or debug this kind of error? Memory dumps and the like seem to be the next step, but I'm not experienced at interpreting them. Perhaps there's something more I can do in the code to try and catch errors... It would be nice if the "Fatal Execution Engine Error" was more informative, but internet searches have only told me that it's a common error for a lot of .NET-related items.
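    A minimal sketch of what the two handlers wired up above could do - writing the failure to a plain text log before the process dies. The handler names come from the snippet; the log path and message format are assumptions. Worth noting: a Fatal Execution Engine Error is raised by the runtime itself and often takes the process down before managed handlers run, so a native dump (adplus/WinDbg) is frequently still needed.

        using System;
        using System.IO;
        using System.Threading;

        static class CrashLogging
        {
            // Handler for exceptions thrown on the UI thread.
            static void Application_ThreadException(object sender, ThreadExceptionEventArgs e)
            {
                LogFailure("UI thread", e.Exception);
            }

            // Handler for exceptions thrown on any other thread.
            static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
            {
                LogFailure("worker thread", e.ExceptionObject as Exception);
            }

            static void LogFailure(string source, Exception ex)
            {
                try
                {
                    File.AppendAllText(@"C:\temp\app-crash.log",   // hypothetical path
                        DateTime.Now + " [" + source + "] " +
                        (ex != null ? ex.ToString() : "(no managed exception object)") +
                        Environment.NewLine);
                }
                catch { /* never throw from a crash handler */ }
            }
        }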

    Read the article

  • Silverlight and Unexpected Font Sizes

    - by Eric J.
    Someone please teach me to fish here... I'm just learning Silverlight and have ran into a few situations where the font size actually used is drastically different than I would expect. There's probably something conceptual that I'm missing. Case A In one instance, I have defined a user control that presents a Label to show text. If one clicks on the label, the label (that is in a stack panel, in the user control) is replaced with a TextBox. When used at the top of a page (as in the example below with lblName) the label text is very small (around 8 points). When clicked on, the text box that replaces the label uses the specified fonts size. That same user control, used in different parts of the app, uses the same font for Label and TextBox. <Grid x:Name="LayoutRoot" Background="White"> <Grid.RowDefinitions> <RowDefinition Height="33" /> <RowDefinition Height="267*" /> </Grid.RowDefinitions> <StackPanel Height="Auto" HorizontalAlignment="Left" Name="stackPanel" VerticalAlignment="Top" Width="Auto" Grid.Row="1" /> <my:EditLabel Height="33" HorizontalAlignment="Left" x:Name="lblName" VerticalAlignment="Top" Width="Auto" FlexText="{Binding Name, Mode=TwoWay}" FontSize="20" MinHeight="24" /> </Grid> Case B I'm using the LiquidMenu.Menu control to pop up a menu when a button is pressed. The font looks huge compared to the rest of my page (maybe 36 points?). I tried forcing it to a very small by explicitly setting it to 8pt, but that had no effect. <Grid x:Name="LayoutRoot" Background="{x:Null}"> <StackPanel x:Name="labelStackPanel" Orientation="Horizontal"> <TextBlock Height="24" HorizontalAlignment="Left" Name="labelText" VerticalAlignment="Top" Width="200" Text="(Value Goes Here)" /> </StackPanel> <liquidMenu:Menu x:Name="popupMenu" Canvas.Left="40" Canvas.Top="40" ItemSelected="MenuList_ItemSelected" Visibility="Collapsed" Height="Auto" FontSize="8"> <liquidMenu:MenuItem ID="delete" Icon="Images/Delete10.png" Text="Delete" Shortcut="Del" /> <liquidMenu:MenuItem ID="exclusive" Icon="" Text="Exclusive" Shortcut="Ctrl+E" /> <liquidMenu:MenuItem ID="properties" Icon="" Text="Properties" Shortcut="Ctrl+P" /> </liquidMenu:Menu> </Grid> Answers to these specific issues are great, a new way to think about this type of issue so that I understand how to control font size is better.

    Read the article

  • Exception seen in an Eclipse GMF application when a file is not present in the workspace

    - by rush00121
    I am developing a GMF application. In this situation, I start my workspace and load a file into the workspace. Then I close the workspace and delete the file from the workspace. After that, I try to restart my application and restore the workspace. Of course, since the file no longer exists, there are going to be exceptions. I have tried a lot to handle these exceptions, but if I get rid of one exception, another pops up. Does anyone know which methods I have to override in order to handle this situation correctly? I am attaching the stack trace that I get for a better picture.
    org.eclipse.core.runtime.CoreException: ERROR
        at org.eclipse.gmf.runtime.diagram.ui.resources.editor.parts.DiagramDocumentEditor.createPartControl(DiagramDocumentEditor.java:1509)
        at com.fnfr.itest.topology.tbml.diagram.custom.part.SingleFileTbmlDiagramEditor.createPartControl(SingleFileTbmlDiagramEditor.java:120)
        at org.eclipse.ui.internal.EditorReference.createPartHelper(EditorReference.java:662)
        at org.eclipse.ui.internal.EditorReference.createPart(EditorReference.java:462)
        at org.eclipse.ui.internal.WorkbenchPartReference.getPart(WorkbenchPartReference.java:595)
        at org.eclipse.ui.internal.EditorAreaHelper.setVisibleEditor(EditorAreaHelper.java:271)
        at org.eclipse.ui.internal.EditorManager.setVisibleEditor(EditorManager.java:1417)
        at org.eclipse.ui.internal.EditorManager$5.runWithException(EditorManager.java:942)
        at org.eclipse.ui.internal.StartupThreading$StartupRunnable.run(StartupThreading.java:31)
        at org.eclipse.swt.widgets.RunnableLock.run(RunnableLock.java:35)
        at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:134)
        at org.eclipse.swt.widgets.Display.runAsyncMessages(Display.java:3855)
        at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3476)
        at org.eclipse.ui.application.WorkbenchAdvisor.openWindows(WorkbenchAdvisor.java:803)
        at org.eclipse.ui.internal.Workbench$28.runWithException(Workbench.java:1384)
        at org.eclipse.ui.internal.StartupThreading$StartupRunnable.run(StartupThreading.java:31)
        at org.eclipse.swt.widgets.RunnableLock.run(RunnableLock.java:35)
        at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:134)
        at org.eclipse.swt.widgets.Display.runAsyncMessages(Display.java:3855)
        at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3476)
        at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2316)
        at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2221)
        at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:500)
        at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
        at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:493)
        at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149)
        at com.fnfr.svt.rcp.Application.runWorkbench(Application.java:205)
        at com.fnfr.svt.rcp.Application.start(Application.java:190)
        at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:194)
        at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
        at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
        at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:368)
        at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559)
        at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514)
        at org.eclipse.equinox.launcher.Main.run(Main.java:1311)

    Read the article

  • Drawing a TextBox in an extended Glass Frame (C# w/o WPF)

    - by Lazlo
    I am trying to draw a TextBox on the extended glass frame of my form. I won't describe this technique, it's well-known. Here's an example for those who haven't heard of it: http://www.danielmoth.com/Blog/Vista-Glass-In-C.aspx The thing is, it is complex to draw over this glass frame. Since black is considered to be the 0-alpha color, anything black disappears. There are apparently ways of countering this problem: drawing complex GDI+ shapes are not affected by this alpha-ness. For example, this code can be used to draw a Label on glass (note: GraphicsPath is used instead of DrawString in order to get around the horrible ClearType problem): public class GlassLabel : Control { public GlassLabel() { this.BackColor = Color.Black; } protected override void OnPaint(PaintEventArgs e) { GraphicsPath font = new GraphicsPath(); font.AddString( this.Text, this.Font.FontFamily, (int)this.Font.Style, this.Font.Size, Point.Empty, StringFormat.GenericDefault); e.Graphics.SmoothingMode = SmoothingMode.HighQuality; e.Graphics.FillPath(new SolidBrush(this.ForeColor), font); } } Similarly, such an approach can be used to create a container on the glass area. Note the use of the polygons instead of the rectangle - when using the rectangle, its black parts are considered as alpha. public class GlassPanel : Panel { public GlassPanel() { this.BackColor = Color.Black; } protected override void OnPaint(PaintEventArgs e) { Point[] area = new Point[] { new Point(0, 1), new Point(1, 0), new Point(this.Width - 2, 0), new Point(this.Width - 1, 1), new Point(this.Width -1, this.Height - 2), new Point(this.Width -2, this.Height-1), new Point(1, this.Height -1), new Point(0, this.Height - 2) }; Point[] inArea = new Point[] { new Point(1, 1), new Point(this.Width - 1, 1), new Point(this.Width - 1, this.Height - 1), new Point(this.Width - 1, this.Height - 1), new Point(1, this.Height - 1) }; e.Graphics.FillPolygon(new SolidBrush(Color.FromArgb(240, 240, 240)), inArea); e.Graphics.DrawPolygon(new Pen(Color.FromArgb(55, 0, 0, 0)), area); base.OnPaint(e); } } Now my problem is: How can I draw a TextBox? After lots of Googling, I came up with the following solutions: Subclassing the TextBox's OnPaint method. This is possible, although I could not get it to work properly. It should involve painting some magic things I don't know how to do yet. Making my own custom TextBox, perhaps on a TextBoxBase. If anyone has good, valid and working examples, and thinks this could be a good overall solution, please tell me. Using BufferedPaintSetAlpha. (http://msdn.microsoft.com/en-us/library/ms649805.aspx). The downsides of this method may be that the corners of the textbox might look odd, but I can live with that. If anyone knows how to implement that method properly from a Graphics object, please tell me. I personally don't, but this seems the best solution so far. Thanks!
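    For context, the glass area those controls sit on is created with the DWM call described in the linked article. A rough sketch of the form-side setup is below; the 80-pixel margin is an arbitrary choice, and a production version should first confirm composition is on via DwmIsCompositionEnabled.

        using System;
        using System.Drawing;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        public class GlassForm : Form
        {
            [StructLayout(LayoutKind.Sequential)]
            struct MARGINS { public int Left, Right, Top, Bottom; }

            [DllImport("dwmapi.dll")]
            static extern int DwmExtendFrameIntoClientArea(IntPtr hWnd, ref MARGINS margins);

            protected override void OnLoad(EventArgs e)
            {
                base.OnLoad(e);
                // Extend the frame 80px into the top of the client area. Anything
                // painted pure black in that band is treated as glass, which is why
                // the GlassLabel/GlassPanel above avoid flat black fills.
                var margins = new MARGINS { Top = 80 };
                DwmExtendFrameIntoClientArea(this.Handle, ref margins);
            }

            protected override void OnPaint(PaintEventArgs e)
            {
                // The glass band must be cleared to black for the alpha trick to work.
                e.Graphics.FillRectangle(Brushes.Black, 0, 0, this.Width, 80);
            }
        }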

    Read the article

  • JBoss 6 deployment of message-driven bean error

    - by AntonioP
    Hello, I have a Java EE application with one message-driven bean. It runs fine on JBoss 4, but when I configure the project for JBoss 6 and deploy on it, I get this error:

        WARN [org.jboss.ejb.deployers.EjbDeployer.verifier] EJB spec violation: ... The message driven bean must declare one onMessage() method. ...
        org.jboss.deployers.spi.DeploymentException: Verification of Enterprise Beans failed, see above for error messages.

    But my bean HAS the onMessage method! Otherwise it would not have worked on JBoss 4 either. Why do I get this error!?
    Edit: The class in question looks like this:

        package ...
        imports ...
        public class MyMDB implements MessageDrivenBean, MessageListener {
            AnotherSessionBean a;
            OneMoreSessionBean b;

            public MyMDB() {}

            public void onMessage(Message message) {
                if (message instanceof TextMessage) {
                    try {
                        // Lookup sessionBeans by jndi, create them
                        lookupABean();
                        // check message-type, then invoke
                        a.handle(message);
                        // else
                        b.handle(message);
                    } catch (SomeException e) {
                        // handling it
                    }
                }
            }

            public void lookupABean() {
                try {
                    // code to lookup session beans and create.
                } catch (CreateException e) {
                    // handling it and catching NamingException too
                }
            }
        }

    Edit 2: And these are the relevant parts of jboss.xml:

        <message-driven>
            <ejb-name>MyMDB</ejb-name>
            <destination-jndi-name>topic/A_Topic</destination-jndi-name>
            <local-jndi-name>A_Topic</local-jndi-name>
            <mdb-user>user</mdb-user>
            <mdb-passwd>pass</mdb-passwd>
            <mdb-client-id>MyMessageBean</mdb-client-id>
            <mdb-subscription-id>subid</mdb-subscription-id>
            <resource-ref>
                <res-ref-name>jms/TopicFactory</res-ref-name>
                <jndi-name>jms/TopicFactory</jndi-name>
            </resource-ref>
        </message-driven>

    Read the article

  • FILESTREAM/FILETABLE Clarifications for Implementation

    - by user1209734
    Recently our team was looking at FILESTREAM to expand the capabilities of our proprietary application. The main purpose of this app is managing the various PDFS, Images and documents to all of the parts we manufacture. Our ASP application uses a few third party tools to allow viewing of these files. We currently have 980GB of data on the Fileserver. We have around 200GB of Binary data in SQL Server that we would like to extract since it is not performing well hence FILESTREAM seems to be a good compromise to the two major data storage/access issues. A few things are not exactly clear to us: FILESTREAM Can or Cannot store its data on a drive that is not locally attached. We already have a File Server with a RAID 10 (1.5TB drives). This server stores all of the documents right now, would we have to move these drives to the SQL Server for FILESTREAM? That would be a tough bullet to bite since the server also is doubling as the Application Server (Two VMs on one physical server). FILETABLE stores the common metadata about the files but where is the Full Text part of it stored to allow searching of files like doc/docx? Is this separate? Are you able to freely add criteria to this to search by? If so any links to clarify would be appreciated. Can FILETABLE be referenced in another table with a foreign key? Thank you in advance EDIT: For those having these questions this web video covered everything and more in terms of explaining filestream from 2008 to 2012 and the cavets to consider (I would seriously rep him if I could): http://channel9.msdn.com/Events/TechDays/Techdays-2012-the-Netherlands/2270 In conclusion we will not be using FILESTREAM as it would be way to huge of an upsurge to accommodate for investment. EDIT 2: Update to #1 - After carefully assessing FileTable in addition to FILESTREAM we got a winning combination. We did have to move the files over to the new server (wasn't to painful since they were on the same VM).It honestly took more time to write an extraction tool to dump the binary data within SQL to the File System. Update to #2 - This was seperate but again Bob had an excellent webinar explaining this: http://channel9.msdn.com/Events/TechEd/Europe/2012/DBI411 Update to #3 - Using TFT inheritance we recycled the Docs table we had (minus the huge binary blobs) which required very little changes in our legacy apps. This was a huge upshot for the developer team.
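    As a rough illustration of the access pattern being weighed here, this is roughly how a FILESTREAM column is streamed from .NET with SqlFileStream instead of pulling the whole blob over the query channel. The table, column, and parameter names are made up for the sketch, not taken from the post.

        using System.Data.SqlClient;
        using System.Data.SqlTypes;
        using System.IO;

        public static class DocumentStore
        {
            // Hypothetical Documents table with a FILESTREAM column named FileData.
            public static void CopyDocumentToDisk(string connStr, int documentId, string targetPath)
            {
                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();
                    using (var tx = conn.BeginTransaction())
                    {
                        var cmd = new SqlCommand(
                            "SELECT FileData.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() " +
                            "FROM Documents WHERE DocumentId = @id", conn, tx);
                        cmd.Parameters.AddWithValue("@id", documentId);

                        using (var reader = cmd.ExecuteReader())
                        {
                            if (reader.Read())
                            {
                                string logicalPath = reader.GetString(0);
                                byte[] txContext = (byte[])reader[1];

                                // Stream through the NTFS share rather than over TDS.
                                using (var source = new SqlFileStream(logicalPath, txContext, FileAccess.Read))
                                using (var dest = File.Create(targetPath))
                                {
                                    source.CopyTo(dest);
                                }
                            }
                        }
                        tx.Commit();
                    }
                }
            }
        }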

    Read the article

  • How can I make a WPF TreeView data binding lazy and asynchronous?

    - by pauldoo
    I am learning how to use data binding in WPF for a TreeView. I am procedurally creating the Binding object, setting Source, Path, and Converter properties to point to my own classes. I can even go as far as setting IsAsync and I can see the GUI update asynchronously when I explore the tree. So far so good! My problem is that WPF eagerly evaluates parts of the tree prior to them being expanded in the GUI. If left long enough this would result in the entire tree being evaluated (well actually in this example my tree is infinite, but you get the idea). I would like the tree only be evaluated on demand as the user expands the nodes. Is this possible using the existing asynchronous data binding stuff in the WPF? As an aside I have not figured out how ObjectDataProvider relates to this task. My XAML code contains only a single TreeView object, and my C# code is: public partial class Window1 : Window { public Window1() { InitializeComponent(); treeView.Items.Add( CreateItem(2) ); } static TreeViewItem CreateItem(int number) { TreeViewItem item = new TreeViewItem(); item.Header = number; Binding b = new Binding(); b.Converter = new MyConverter(); b.Source = new MyDataProvider(number); b.Path = new PropertyPath("Value"); b.IsAsync = true; item.SetBinding(TreeView.ItemsSourceProperty, b); return item; } class MyDataProvider { readonly int m_value; public MyDataProvider(int value) { m_value = value; } public int[] Value { get { // Sleep to mimick a costly operation that should not hang the UI System.Threading.Thread.Sleep(2000); System.Diagnostics.Debug.Write(string.Format("Evaluated for {0}\n", m_value)); return new int[] { m_value * 2, m_value + 1, }; } } } class MyConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { // Convert the double to an int. int[] values = (int[])value; IList<TreeViewItem> result = new List<TreeViewItem>(); foreach (int i in values) { result.Add(CreateItem(i)); } return result; } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new InvalidOperationException("Not implemented."); } } } Note: I have previously managed to do lazy evaluation of the tree nodes by adding WPF event handlers and directly adding items when the event handlers are triggered. I'm trying to move away from that and use data binding instead (which I understand is more in spirit with "the WPF way").
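    Below is a sketch of the other common approach hinted at in the last paragraph, reworked as a bindable view model rather than raw event handlers: each node exposes a single placeholder child so the expander arrow appears, and the real children are fetched on a worker thread the first time the TreeView sets IsExpanded (bound through an ItemContainerStyle, with Children bound through a HierarchicalDataTemplate). All type names here are invented for illustration.

        using System;
        using System.Collections.ObjectModel;
        using System.ComponentModel;
        using System.Threading;
        using System.Windows;

        public class LazyNode : INotifyPropertyChanged
        {
            readonly int _value;
            readonly bool _isPlaceholder;
            bool _isExpanded;

            LazyNode() { _isPlaceholder = true; Children = new ObservableCollection<LazyNode>(); }

            public LazyNode(int value)
            {
                _value = value;
                // Single dummy entry so WPF shows the expander arrow before any real load.
                Children = new ObservableCollection<LazyNode> { new LazyNode() };
            }

            public int Header { get { return _value; } }
            public ObservableCollection<LazyNode> Children { get; private set; }

            public bool IsExpanded
            {
                get { return _isExpanded; }
                set
                {
                    _isExpanded = value;
                    if (value && Children.Count == 1 && Children[0]._isPlaceholder)
                        ThreadPool.QueueUserWorkItem(_ => LoadChildren());
                    OnPropertyChanged("IsExpanded");
                }
            }

            void LoadChildren()
            {
                Thread.Sleep(2000); // stands in for the costly fetch in MyDataProvider
                var kids = new[] { new LazyNode(_value * 2), new LazyNode(_value + 1) };
                Application.Current.Dispatcher.BeginInvoke((Action)(() =>
                {
                    Children.Clear();                  // collection changes on the UI thread
                    foreach (var k in kids) Children.Add(k);
                }));
            }

            public event PropertyChangedEventHandler PropertyChanged;
            void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }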

    Read the article

  • How to avoid open-redirect vulnerability and safely redirect on successful login (HINT: ASP.NET MVC

    - by Brad B.
    Normally, when a site requires that you are logged in before you can access a certain page, you are taken to the login screen, and after successfully authenticating yourself you are redirected back to the originally requested page. This is great for usability - but without careful scrutiny, this feature can easily become an open-redirect vulnerability. Sadly, for an example of this vulnerability, look no further than the default LogOn action provided by ASP.NET MVC 2:

        [HttpPost]
        public ActionResult LogOn(LogOnModel model, string returnUrl)
        {
            if (ModelState.IsValid)
            {
                if (MembershipService.ValidateUser(model.UserName, model.Password))
                {
                    FormsService.SignIn(model.UserName, model.RememberMe);
                    if (!String.IsNullOrEmpty(returnUrl))
                    {
                        return Redirect(returnUrl); // open redirect vulnerability HERE
                    }
                    else
                    {
                        return RedirectToAction("Index", "Home");
                    }
                }
                else
                {
                    ModelState.AddModelError("", "User name or password incorrect...");
                }
            }
            return View(model);
        }

    If a user is successfully authenticated, they are redirected to "returnUrl" (if it was provided via the login form submission). Here is a simple example attack (one of many, actually) that exploits this vulnerability:
    1. The attacker, pretending to be the victim's bank, sends the victim an email containing a link like this: http://www.mybank.com/logon?returnUrl=http://www.badsite.com
    2. Having been taught to verify the ENTIRE domain name (e.g., google.com = GOOD, google.com.as31x.example.com = BAD), the victim knows the link is OK - there isn't any tricky sub-domain phishing going on.
    3. The victim clicks the link, sees their actual, familiar banking website, and is asked to log on.
    4. The victim logs on and is subsequently redirected to http://www.badsite.com, which is made to look exactly like the victim's bank's website, so the victim doesn't know he is now on a different site.
    5. http://www.badsite.com says something like "We need to update our records - please type in some extremely personal information below: [ssn], [address], [phone number], etc."
    6. The victim, still thinking he is on his banking website, falls for the ploy and provides the attacker with the information.
    Any ideas on how to maintain this redirect-on-successful-login functionality yet avoid the open-redirect vulnerability? I'm leaning toward splitting the "returnUrl" parameter into controller/action parts and using "RedirectToRouteResult" instead of simply "Redirect". Does this approach open any new vulnerabilities? Side note: I know this open redirect may not seem like a big deal compared to the likes of XSS and CSRF, but we developers are the only thing protecting our customers from the bad guys - anything we can do to make the bad guys' job harder is a win in my book. Thanks, Brad
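    One way to keep the redirect-on-login feature while closing the hole is to follow returnUrl only when it is a site-local path. MVC 3 later shipped Url.IsLocalUrl for exactly this check; on MVC 2 a manual test like the sketch below is a common substitute.

        // Accept only app-relative paths such as "/Account/ChangePassword"; reject
        // absolute and protocol-relative URLs that could send the user off-site.
        private static bool IsLocalUrl(string url)
        {
            return !String.IsNullOrEmpty(url)
                && url.StartsWith("/")       // must be relative to this site
                && !url.StartsWith("//")     // not protocol-relative (//badsite.com)
                && !url.StartsWith("/\\");   // not the backslash variant
        }

        // ...and in LogOn, instead of redirecting unconditionally:
        //     if (IsLocalUrl(returnUrl)) return Redirect(returnUrl);
        //     return RedirectToAction("Index", "Home");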

    Read the article

  • ASP.NET MVC & Silverlight development - on Ubuntu

    - by queen3
    I recently moved my working environment from Windows 7 to Ubuntu, and enjoy this every minute of my working day. My work is currently to develop ASP.NET MVC and Silverlight applications. Thus, important Windows stuff is being still run in VirtualBox (such as IIS, MSSQL, Silverlight 3, and legacy COM stuff). For now, I use Visual Studio under VirtualBox as editor/IDE/debugger. But since I prefer Ubuntu fonts and UI much more I'd like to move at least editor (and better IDE) to native Ubuntu. Things I have already done: I store project files in my home folder and run Visual Studio from \vboxsvr share. With few tricks for ASP.NET it works. I use svn on Ubuntu. I test my ASP.NET MVC site using FireFox/FireBug on Ubuntu. What I need on Ubuntu: SQL client to manage MSSQL. I mostly need querying, but I would miss SQL intellisense support. I enjoy command-line svn a lot, but there're times when it's not enough (e.g. view files / check diffs / selectively commit at the sames time) so I wonder if there're any addons - I don't mean replacement for svn, just addons for rare cases like above. I wonder if there're editors that can provide some C# intellisense. Yes I know about MonoDevelop, but will it provide intellisense without compiling (since I'm going to compile remotely in Win box)? And pretty big topic, what's the best way (editor/IDE) to do "folder-based" development? What I mean: The project is /trunk. Everything is there. I don't want to manually add files to project or like that. The project is the folder and files down there. Main task is to edit files of course. I need a quick way to open / search for files in the project. Like in Resharper, I can click Ctrl-Shift-T (IIRC) and just type file name, a list of matching files in the project folder and below is shown. For example, gedit has file tree browser, but I can't quickly type XYZ to find all XYZ files there; moreover it doesn't automatically switch focus to/from editor; so it's more mouse-oriented; I need 100% keyboard way. I need syntax highlighting for C#/HTML/JS. Most importantly, I need HTML tags autocompletion. I can live without it, but I'll be sorry. I need to run compilation remotely (via ssh I think, invoking NAnt script which does MSBuild) and grab results such as errors and warnings, and I'd prefer to quickly go to error line/file. In short, I need to edit/search/open/svn/compile/run files in some folder. Looks like a case for command-line, but imaging I'm in /trunk and want to open file.cs inside /trunk/foo/bar/boo/far, I wouldn't want to type all this path even with bash autocompletion help. I'd prefer to enter :open file.cs, and maybe then select from list of file.cs and file1.cs. Well, maybe I'll add more soon. Actually, I don't have exact requirements; for example, I don't even know if I need to ask for IDE or editor; or should it have svn support integrated or I need to use it from console; do people work with files from console (search/commit/delete/etc) and open editor from there, or they work from editor/IDE and manage files (search/commit/delete) from there? What's better? I have a feeling that vim might have everything I need. I'd like to confirm that, before I spend a lot of time learning it. No I don't want Emacs. Any other IDE? I like Eclipse, but is it good for such stuff? And after all, do you feel like it's a good way to go? 
I enjoy Ubuntu, enjoy learning new stuff, and Ubuntu + Windows in VirtualBox actually runs faster than Windows 7 on my machine, but maybe I need to keep development (editor/files management/etc) in Windows/VirtualBox only, leaving other stuff for Ubuntu?

    Read the article

  • IIS7 MVC deploy - 404 not found on actions that accept "id" parameter.

    - by majkinetor
    Hello. Once deployed parts of my web-application stop working. Index-es on each controller do work, and one form posting via Ajax, Login works too. Other then that yields 404. I understand that nothing particular should be done in integrated mode. I don't know how to proceed with troubleshooting. Some info: App is using default app pool set to integrated mode. WebApp is done in net framework 3.5. I use default routing model. Along web.config in root there is web.config in /View folder referencing HttpNotFoundHandler. OS is Windows Server 2008. Admins issued aspnet_regiis.exe -i IIS 7 Any help is appreciated. Thx. EDIT: I determined that only actions that accept ID parameter don't work. On the contrary, when I add dummy id method in Home controller of default MVC app it works. My Views/Web.config <?xml version="1.0"?> <configuration> <system.web> <httpHandlers> <add path="*" verb="*" type="System.Web.HttpNotFoundHandler"/> </httpHandlers> <!-- Enabling request validation in view pages would cause validation to occur after the input has already been processed by the controller. By default MVC performs request validation before a controller processes the input. To change this behavior apply the ValidateInputAttribute to a controller or action. --> <pages validateRequest="false" pageParserFilterType="System.Web.Mvc.ViewTypeParserFilter, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" pageBaseType="System.Web.Mvc.ViewPage, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" userControlBaseType="System.Web.Mvc.ViewUserControl, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <controls> <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" namespace="System.Web.Mvc" tagPrefix="mvc" /> </controls> </pages> </system.web> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <handlers> <remove name="BlockViewHandler"/> <add name="BlockViewHandler" path="*" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler"/> </handlers> </system.webServer> </configuration>
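    For reference when comparing against "the default routing model" mentioned above, this is the stock MVC route registration whose optional {id} segment those failing actions rely on; if /Controller/Action works but /Controller/Action/5 returns 404, the route table and the exact request URLs are the first things to line up. This is the framework default, not code from the post.

        using System.Web.Mvc;
        using System.Web.Routing;

        public class MvcApplication : System.Web.HttpApplication
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                // An action with an "id" parameter is normally reached through the
                // optional {id} segment, e.g. /Orders/Details/5.
                routes.MapRoute(
                    "Default",
                    "{controller}/{action}/{id}",
                    new { controller = "Home", action = "Index", id = UrlParameter.Optional }
                );
            }

            protected void Application_Start()
            {
                AreaRegistration.RegisterAllAreas();
                RegisterRoutes(RouteTable.Routes);
            }
        }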

    Read the article

  • Application Architecture using WCF and System.AddIn

    - by Silverhalide
    A little background -- we're designing an application that uses a client/server architecture consisting of: A server which loads server-side modules, potentially developed by other teams. A client which loads corresponding client-side modules (also potentially developed by those other teams; each client module corresponds with a server module). The client side communicates with the server side for general coordination, and as well as module specific tasks. (At this point, I think that means client talks to server, client modules talk to server modules.) Environment is .NET 3.5, and client side is WPF. The deployment scenario introduces the potential to upgrade the server, any server-side module, the client, and any client-side module independently. However, being able to "work" using mismatched versions is required. I'm therefore concerned about versioning issues. My thinking so far: A Windows Service for the server. Using System.AddIn for the server to load and communicate with the server modules will give us the greatest flexibility in terms of version compatability between server and server modules. The server and each server module vend WCF services for communication to the client side; communication between the server and a server module, or between two server modules use the AddIn contracts. (One advantage of this is that a module can expose a different interface within the server and outside it.) Similarly, the client uses System.AddIn to find, load, and communicate with the client modules. Client communications with client modules is via the AddIn interface; communications from the client and from client modules to the server side are via WCF. For maximum resilience, each module will run in a separate app-domain. In general, the system has modest performance requirements, so marshalling and crossing process boundaries is not expected to be a performance concern. (Performance requirement is basically summed up by: don't get in the way of the other parts of the system not described here.) My questions are around the idea of having two different communication and versioning models to work with which will be an added burden on our developers. System.AddIn seems quite powerful, but also a little unwieldly. (I'm also unsure of Microsoft's commitment to it in the future.) On the other hand, I'm not thrilled with WCF's versioning capabilities. I have a feeling that it would be possible to implement the System.AddIn view/adapter/contract system within WCF, but being fairly new to both technologies, I would have no idea of where to start. So... Am I on the right track here? Am I doing this the hard way? Are there gotchas I need to be aware of on this road? Thanks.
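    As a rough sketch of the System.AddIn half of this design, discovery and activation of each server module in its own AppDomain looks roughly like the code below. The host view type is invented for illustration, and the contract and adapter assemblies that make up a full pipeline are omitted.

        using System;
        using System.AddIn.Hosting;

        // Host-side view of a server module; the real contract/adapters live in
        // their own pipeline assemblies and are not shown here.
        public abstract class ServerModuleHostView
        {
            public abstract void Start();
        }

        public static class ModuleLoader
        {
            public static void LoadAll(string pipelineRoot)
            {
                // Rebuild the pipeline cache, then find add-ins matching the host view.
                string[] warnings = AddInStore.Update(pipelineRoot);
                foreach (string warning in warnings)
                    Console.WriteLine(warning);

                foreach (AddInToken token in AddInStore.FindAddIns(typeof(ServerModuleHostView), pipelineRoot))
                {
                    // Activating with a security level gives each module its own
                    // AppDomain, matching the isolation goal described above.
                    ServerModuleHostView module = token.Activate<ServerModuleHostView>(AddInSecurityLevel.FullTrust);
                    module.Start();
                }
            }
        }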

    Read the article

  • Java ReentrantReadWriteLocks - how to safely acquire write lock?

    - by Andrzej Doyle
    I am using in my code at the moment a ReentrantReadWriteLock to synchronize access over a tree-like structure. This structure is large, and read by many threads at once with occasional modifications to small parts of it - so it seems to fit the read-write idiom well. I understand that with this particular class, one cannot elevate a read lock to a write lock, so as per the Javadocs one must release the read lock before obtaining the write lock. I've used this pattern successfully in non-reentrant contexts before. What I'm finding however is that I cannot reliably acquire the write lock without blocking forever. Since the read lock is reentrant and I am actually using it as such, the simple code lock.getReadLock().unlock(); lock.getWriteLock().lock() can block if I have acquired the readlock reentrantly. Each call to unlock just reduces the hold count, and the lock is only actually released when the hold count hits zero. EDIT to clarify this, as I don't think I explained it too well initially - I am aware that there is no built-in lock escalation in this class, and that I have to simply release the read lock and obtain the write lock. My problem is/was that regardless of what other threads are doing, calling getReadLock().unlock() may not actually release this thread's hold on the lock if it acquired it reentrantly, in which case the call to getWriteLock().lock() will block forever as this thread still has a hold on the read lock and thus blocks itself. For example, this code snippet will never reach the println statement, even when run singlethreaded with no other threads accessing the lock: final ReadWriteLock lock = new ReentrantReadWriteLock(); lock.getReadLock().lock(); // In real code we would go call other methods that end up calling back and // thus locking again lock.getReadLock().lock(); // Now we do some stuff and realise we need to write so try to escalate the // lock as per the Javadocs and the above description lock.getReadLock().unlock(); // Does not actually release the lock lock.getWriteLock().lock(); // Blocks as some thread (this one!) holds read lock System.out.println("Will never get here"); So I ask, is there a nice idiom to handle this situation? Specifically, when a thread that holds a read lock (possibly reentrantly) discovers that it needs to do some writing, and thus wants to "suspend" its own read lock in order to pick up the write lock (blocking as required on other threads to release their holds on the read lock), and then "pick up" its hold on the read lock in the same state afterwards? Since this ReadWriteLock implementation was specifically designed to be reentrant, surely there is some sensible way to elevate a read lock to a write lock when the locks may be acquired reentrantly? This is the critical part that means the naive approach does not work.
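    For contrast with the Java API above: .NET's ReaderWriterLockSlim models the "I may need to write" case explicitly with an upgradeable read lock, so a thread never has to drop its read access before asking for the write lock. The sketch below only illustrates that alternative lock design; it is not a fix for the Java code.

        using System;
        using System.Threading;

        class TreeCache
        {
            static readonly ReaderWriterLockSlim Lock = new ReaderWriterLockSlim();

            public static void ReadAndMaybeWrite()
            {
                Lock.EnterUpgradeableReadLock();   // only one thread may hold this at a time
                try
                {
                    if (Inspect())
                    {
                        Lock.EnterWriteLock();     // blocks until plain readers drain
                        try { Mutate(); }
                        finally { Lock.ExitWriteLock(); }
                    }
                }
                finally
                {
                    Lock.ExitUpgradeableReadLock();
                }
            }

            static bool Inspect() { return true; } // stand-in for examining the tree
            static void Mutate() { }               // stand-in for the small modification
        }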

    Read the article

  • General advice and guidelines on how to properly override object.GetHashCode()

    - by Svish
    According to MSDN, a hash function must have the following properties: If two objects compare as equal, the GetHashCode method for each object must return the same value. However, if two objects do not compare as equal, the GetHashCode methods for the two object do not have to return different values. The GetHashCode method for an object must consistently return the same hash code as long as there is no modification to the object state that determines the return value of the object's Equals method. Note that this is true only for the current execution of an application, and that a different hash code can be returned if the application is run again. For the best performance, a hash function must generate a random distribution for all input. I keep finding myself in the following scenario: I have created a class, implemented IEquatable<T> and overridden object.Equals(object). MSDN states that: Types that override Equals must also override GetHashCode ; otherwise, Hashtable might not work correctly. And then it usually stops up a bit for me. Because, how do you properly override object.GetHashCode()? Never really know where to start, and it seems to be a lot of pitfalls. Here at StackOverflow, there are quite a few questions related to GetHashCode overriding, but most of them seems to be on quite particular cases and specific issues. So, therefore I would like to get a good compilation here. An overview with general advice and guidelines. What to do, what not to do, common pitfalls, where to start, etc. I would like it to be especially directed at C#, but I would think it will work kind of the same way for other .NET languages as well(?). I think maybe the best way is to create one answer per topic with a quick and short answer first (close to one-liner if at all possible), then maybe some more information and end with related questions, discussions, blog posts, etc., if there are any. I can then create one post as the accepted answer (to get it on top) with just a "table of contents". Try to keep it short and concise. And don't just link to other questions and blog posts. Try to take the essence of them and then rather link to source (especially since the source could disappear. Also, please try to edit and improve answers instead of created lots of very similar ones. I am not a very good technical writer, but I will at least try to format answers so they look alike, create the table of contents, etc. I will also try to search up some of the related questions here at SO that answers parts of these and maybe pull out the essence of the ones I can manage. But since I am not very stable on this topic, I will try to stay away for the most part :p
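    A commonly used pattern (one of many) is sketched below: combine the same fields that take part in Equals with prime multipliers inside an unchecked block so overflow simply wraps around. The Point2D type is only an illustration, not a recommendation for any particular class.

        using System;

        public sealed class Point2D : IEquatable<Point2D>
        {
            public int X { get; private set; }
            public int Y { get; private set; }

            public Point2D(int x, int y) { X = x; Y = y; }

            public bool Equals(Point2D other)
            {
                return other != null && other.X == X && other.Y == Y;
            }

            public override bool Equals(object obj)
            {
                return Equals(obj as Point2D);
            }

            public override int GetHashCode()
            {
                unchecked
                {
                    int hash = 17;
                    hash = hash * 23 + X;   // same fields as Equals,
                    hash = hash * 23 + Y;   // and only fields that never change
                    return hash;
                }
            }
        }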

    Read the article

  • Flash AS3 blur or liquify part of an image with mouse

    - by hamlet
    Hi, I am very beginner in flash. I want to load an image, show a cursor over the image and on mousedown I want to blur that actual part of the image. (e.g you can blur your face on the image and then save the new image). I can delete parts of the image with white line, but I would like to blur it instead // LIVE JPEG ENCODER 0.3 // from bytearray.org import asfiles.encoding.JPEGEncoder; import flash.external.ExternalInterface; ExternalInterface.addCallback("flash_saveImage", inflash_saveImage); var loader:Loader = new Loader(); loader.contentLoaderInfo.addEventListener(Event.COMPLETE, handleComplete); loader.load(new URLRequest(loaderInfo.parameters._filename)); //loader.load(new URLRequest("b.jpg")); var container_mc:MovieClip = new MovieClip;//create movieclip function handleComplete(e:Event):void { addChild(container_mc); var bitmapData:BitmapData = Bitmap(e.target.content).bitmapData; var matrix:Matrix = new Matrix(); container_mc.graphics.clear(); container_mc.graphics.beginBitmapFill(bitmapData, matrix, false); //container_mc.graphics.beginFill(0xFFFFFF,0); container_mc.graphics.drawRect(0, 0, bitmapData.width, bitmapData.height); container_mc.graphics.endFill(); swapChildren(container_mc, pencil); container_mc.addEventListener(MouseEvent.MOUSE_DOWN, startDrawing); container_mc.addEventListener(MouseEvent.MOUSE_UP, stopDrawing); container_mc.addEventListener(MouseEvent.MOUSE_MOVE, makeLine); } stage.addEventListener(MouseEvent.MOUSE_MOVE, moveCursor); Mouse.hide(); function moveCursor(event:MouseEvent):void { pencil.x = event.stageX; pencil.y = event.stageY; } function startDrawing(event:MouseEvent):void{ container_mc.graphics.lineStyle(20, 0xFFFFFF, 1); container_mc.graphics.moveTo(mouseX, mouseY); container_mc.addEventListener(MouseEvent.MOUSE_MOVE, makeLine); } function stopDrawing(event:MouseEvent):void{ container_mc.removeEventListener(MouseEvent.MOUSE_MOVE, makeLine); } function makeLine(event:MouseEvent):void{ container_mc.graphics.lineTo(mouseX, mouseY); } function inflash_saveImage ( ):void { var myURLLoader:URLLoader = new URLLoader(); var myBitmapSource:BitmapData = new BitmapData ( container_mc.width, container_mc.height ); // render the player as a bitmapdata myBitmapSource.draw ( container_mc ); // create the encoder with the appropriate quality var myEncoder:JPEGEncoder = new JPEGEncoder( 80 ); // generate a JPG binary stream to have a preview var myCapStream:ByteArray = myEncoder.encode ( myBitmapSource ); var header:URLRequestHeader = new URLRequestHeader ("Content-type", "application/octet-stream"); var myRequest:URLRequest = new URLRequest ( "save.php" ); myRequest.requestHeaders.push (header); myRequest.method = URLRequestMethod.POST; myRequest.data = myCapStream; myURLLoader.load ( myRequest ); } Thanks, hamlet

    Read the article

  • Bazaar newbie question about repository structures

    - by esc1729
    I want to use Bazaar on Windows XP for web-development and related tasks. Most of the files are edited locally and then transferred via FTP to the server. Just now the repository sits on my local workstation. Later on it should be shared locally with some co-workers. Perhaps we will use a local Linux server as a centralized repository, but this structure is not decided for now. But first I need to understand the impacts of the different repository setups, which I do not at all. Using Bazaar-Explorer on Windows XP I’ve created a ‘shared tree repository’ from the option list of the init-dialogue in some location dev-filter/. Bazaar Explorer tells me: Created repository with treeless branches at F:/bzr.local/dev-filter Created branch at F:/bzr.local/dev-filter/trunk Created working tree at F:/bzr.local/dev-filter/work OK so far. Now I move a bunch of files into the work directory and add and commit them as Rev 1 ‘Start Revision’. Then I work on some of these files and commit them again as Rev 2. Here my confusion starts. Shouldn’t both revisions go into the trunk? The trunk is still empty, beside the .bzr directory which only holds some management information. If I delete my working directory, which I have tried during these first experiments, everything is gone. There’s obviously no hidden storage of those files. OK. Perhaps I need to push it into the trunk? This does not work either. Entering the work/ directory and initializing the ‘push’ to the trunk, Bazaar-Explorer tells me No new revisions to push. So what? This looks like a severe conceptual misunderstanding about what should happen on my side. Edit, 2010-02-03: Some conclusions What I learned meanwhile is this: I think I should switch to the command line until I really understand what’s going on, at least for creating the repositories and branches. Bazaar Explorer introduces a new level of abstraction which I only can handle if I understand the level beneath One of the secrets of working with Bazaar at least for me is to understand those .bzr directories, their particular properties and states when created with ‘bzr init’, ‘bzr init-repository’, ‘bzr branch’ etc. in all their variants and how they are plumped together. While there’s a whole chapter of ‘Organizing your workspace’ in the Bazaar User Guide, it’s more or less workflow oriented. The manual contains a lot of directory structures for the given examples. What I would prefer beside this and have not (or only rudimentary) found so far is some graphical representation of those ‘Lego like’ .bzr building blocks which create the linking of all the parts. So I started to invent some simple notation while working through the examples and looking into the .bzr directories to document what information is stored there, where does it come from, how and to what is it linked, is it complete or shared, etc. Erich Schreiber

    Read the article

  • Entity Attribute Value Database vs. strict Relational Model Ecommerce question

    - by Dr. Zim
    It is safe to say that the EAV/CR database model is bad. That said, the question: what database model, technique, or pattern should be used to deal with "classes" of attributes describing e-commerce products, which can be changed at run time?

    In a good e-commerce database, you will store classes of options (like TV resolution, then have a resolution for each TV; but the next product may not be a TV and not have "TV resolution"). How do you store them, search them efficiently, and allow your users to set up product types with variable fields describing their products? If the search engine finds that customers typically search for TVs based on console depth, you could add console depth to your fields, then add a single depth for each TV product type at run time.

    There is a nice common feature among good e-commerce apps: they show a set of products, then have "drill down" side menus where you can see "TV Resolution" as a header and the top five most common TV resolutions for the found set. You click one and it shows only TVs of that resolution, allowing you to drill down further by selecting other categories on the side menu. These options would be the dynamic product attributes added at run time. A sketch of the idea follows below.

    Further discussion: Long story short, are there any links out on the Internet or model descriptions that could "academically" fix the following setup? I thank Noel Kennedy for suggesting a category table, but the need may be greater than that. I describe it a different way below, trying to highlight the significance. I may need a viewpoint correction to solve the problem, or I may need to go deeper into the EAV/CR.

    Love the positive response to the EAV/CR model. My fellow developers all say what Jeffrey Kemp touched on below: "new entities must be modeled and designed by a professional" (taken out of context; read his response below). The problem is:

        - entities add and remove attributes weekly (search keywords dictate future attributes)
        - new entities arrive weekly (products are assembled from parts)
        - old entities go away weekly (archived, less popular, seasonal)

    The customer wants to add attributes to the products for two reasons:

        - department / keyword search / comparison charts between like products
        - consumer product configuration before checkout

    The attributes must have significance, not just be keyword search terms. If they want to compare all cakes that have a "whipped cream frosting", they can click cakes, click birthday theme, click whipped cream frosting, then check all the cakes that look interesting, knowing they all have whipped cream frosting. This is not specific to cakes, just an example.
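    For what it's worth, here is a minimal sketch of the "classes of options" idea described above, written as plain C# objects with hypothetical names (AttributeClass, ProductType, Product) rather than any particular database schema; it only illustrates how an attribute class can be attached to a product type at run time.

        using System;
        using System.Collections.Generic;

        // Hypothetical names; a sketch of "classes of attributes" added at run time.
        class AttributeClass              // e.g. "TV Resolution", "Console Depth"
        {
            public string Name { get; set; }
        }

        class ProductType                 // e.g. "TV", "Cake"
        {
            public string Name { get; set; }
            public List<AttributeClass> Attributes { get; } = new List<AttributeClass>();
        }

        class Product
        {
            public string Name { get; set; }
            public ProductType Type { get; set; }
            // One value per attribute class of the product's type.
            public Dictionary<AttributeClass, string> Values { get; } = new Dictionary<AttributeClass, string>();
        }

        class Demo
        {
            static void Main()
            {
                var resolution = new AttributeClass { Name = "TV Resolution" };
                var tv = new ProductType { Name = "TV" };
                tv.Attributes.Add(resolution);

                // "Console Depth" is added later, once searches show demand for it.
                var depth = new AttributeClass { Name = "Console Depth" };
                tv.Attributes.Add(depth);

                var product = new Product { Name = "42in LCD", Type = tv };
                product.Values[resolution] = "1080p";
                product.Values[depth] = "30cm";

                Console.WriteLine(product.Name + ": " + product.Values[resolution]);
            }
        }

    However the tables end up being modeled, the drill-down menus described above are just counts of products grouped by attribute class and value within the current result set.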

    Read the article

  • iPhone SDK can't get UIScrollView to work with a UIWebView

    - by Maxwell Segal
    I am building an app that has two views, one portrait and one landscape, and I want to offer scrolling and pinch & zoom in the landscape orientation. I've built the two views, and the simple UIImages in the landscape view all work OK, but it does not seem to insert my UIWebView into the UIScrollView. I've done this before with a UIImage and all was OK, so I must be missing something here; can anybody help with some advice? I'm sure it's something very simple, perhaps to do with the adding of the subview. Thanks in advance for your help. I've shown the relevant parts of my code below:

        @interface MonthViewController : UIViewController <UIScrollViewDelegate> {
            MonthViewController *monthController;
            IBOutlet UIView *portraitView;
            IBOutlet UIView *landscapeView;
            IBOutlet UIWebView *monthKWHChartPortrait;
            UIWebView *monthKWHChartLandscape;
            IBOutlet UIScrollView *scrollChartLandscape;
            IBOutlet UIImageView *eLogoPortrait;
            IBOutlet UIImageView *eLogoLandscape;
            IBOutlet UIImageView *clientLogoPortrait;
            IBOutlet UIImageView *clientLogoLandscape;
            IBOutlet UIActivityIndicatorView *loadIndicatorPortrait;
            IBOutlet UIActivityIndicatorView *loadIndicatorLandscape;
            NSTimer *timer;
            IBOutlet UILabel *test;
        }

        -(UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollChartLandscape {
            return monthKWHChartLandscape;
        }

        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            return (interfaceOrientation == UIInterfaceOrientationPortrait ||
                    interfaceOrientation == UIInterfaceOrientationLandscapeLeft ||
                    interfaceOrientation == UIInterfaceOrientationLandscapeRight);
        }

        -(void)willRotateToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration {
            [super willRotateToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:duration];
            if (toInterfaceOrientation == UIInterfaceOrientationLandscapeRight) {
                NSLog(@"right");
                test.text = @"right";
                NSString *urlAddressLandscape = @"http://PRIVATEURLHERE.aspx?WIDTH=440";
                NSURL *urlLandscape = [NSURL URLWithString:urlAddressLandscape];
                NSURLRequest *requestObjLandscape = [NSURLRequest requestWithURL:urlLandscape];
                [self.monthKWHChartLandscape loadRequest:requestObjLandscape];
                scrollChartLandscape.contentSize = CGSizeMake(monthKWHChartLandscape.frame.size.width, monthKWHChartLandscape.frame.size.height);
                scrollChartLandscape.maximumZoomScale = 2.5;
                scrollChartLandscape.minimumZoomScale = 0.6;
                scrollChartLandscape.showsHorizontalScrollIndicator = YES;
                scrollChartLandscape.showsVerticalScrollIndicator = YES;
                scrollChartLandscape.clipsToBounds = YES;
                scrollChartLandscape.delegate = self;
                [self.scrollChartLandscape addSubview:monthKWHChartLandscape];
                self.view = landscapeView;
                //self.view.transform=CGAffineTransformMakeRotation(deg2rad*(90));
                //self.view.bounds=CGRectMake(0.0, 0.0, 480.0, 320.0);
            } else

    Read the article

  • XML Schema to Java Classes with XJC

    - by nevets1219
    I am using xjc to generate Java classes from an XML schema, and the following is an excerpt of the XSD:

        <xs:element name="NameInfo">
          <xs:complexType>
            <xs:sequence>
              <xs:choice>
                <xs:element ref="UnstructuredName"/>              <!-- This line -->
                <xs:sequence>
                  <xs:element ref="StructuredName"/>
                  <xs:element ref="UnstructuredName" minOccurs="0"/>  <!-- and this line! -->
                </xs:sequence>
              </xs:choice>
              <xs:element ref="SomethingElse" minOccurs="0"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>

    For the most part the generated classes are fine, but for the above block I get something like:

        public List<Object> getContent() {
            if (content == null) {
                content = new ArrayList<Object>();
            }
            return this.content;
        }

    with the following comment above it:

        * You are getting this "catch-all" property because of the following reason:
        * The field name "UnstructuredName" is used by two different parts of a schema. See:
        * line XXXX of file:FILE.xsd
        * line XXXX of file:FILE.xsd
        * To get rid of this property, apply a property customization to one
        * of both of the following declarations to change their names:
        * Gets the value of the content property.

    I have placed a comment at the end of the two lines in question. At the moment, I don't think it will be easy to change the schema, since it was agreed between vendors, and I would not want to go that route (if possible) as it would slow down progress quite a bit. I searched and found this page; is the external customization what I want to do? I have mostly been working with the generated classes, so I'm not entirely familiar with the process that generates them. A simple example of the "property customization" would be great! An alternative method of generating the Java classes would also be fine, as long as the schema can still be used.

    Read the article

  • Using commit monitors as a form of code review

    - by Jeff Dege
    I'm working in a small company: four developers working on a variety of projects. We've been looking at cost-effective methods of process improvement, and an idea came up. Given what we do, we often have single developers working on parts of a system independently of the other developers. This can have a number of negative effects:

        - A developer might not be fully aware of the context in which a change is being implemented, and make the change in a way that meets the current customer's needs but breaks functionality that other customers depend on.
        - A developer might make a change that breaks the current architectural design, introducing a dependency that will cause problems in future development.
        - Other developers might not be aware of how the system has changed in areas they have not worked on.

    We've talked about doing code reviews as a way of dealing with these issues, but we've not had much success when we tried. It takes a lot of time to prepare a change for a code review, it takes everybody out of production while the review is being performed, and the benefits of any review we've tried have been minimal.

    We're using Subversion (with TortoiseSVN) as our VCS. I've been looking at the CommitMonitor tool for Subversion and wondering whether it might work as a sort of poor man's code review. It lists every commit made to the repository, allowing someone to see the changes that have been made, the log message for each change, the files that were included in the change, and the specific lines in each file that were changed.

    Rather than scheduling a meeting and trying to get everybody together to review every change, we could just have every developer review every other developer's commits at whatever time is convenient. This would keep every developer abreast of what changes were being made elsewhere in the system, and would have every change reviewed for customer conflicts and design consistency, at a fairly low cost. If someone saw a problem with the code being checked in, he could discuss it with the developer who made the commit, or more likely schedule a meeting to discuss how the new feature could be implemented in a way that would not impact other users or screw up the architecture.

    Is anyone else doing anything like this, using commit monitors for such a purpose?

    Read the article

  • ELMAH not logging in ASP.NET MVC 2

    - by PsychoCoder
    I cannot figure out what I'm doing wrong here. I'm trying to use ELMAH in my MVC 2 application and it doesn't log anything, ever. Here's what I have in my web.config (relevant parts):

        <sectionGroup name="elmah">
          <section name="security" requirePermission="false" type="Elmah.SecuritySectionHandler, Elmah" />
          <section name="errorLog" requirePermission="false" type="Elmah.ErrorLogSectionHandler, Elmah" />
          <section name="errorMail" requirePermission="false" type="Elmah.ErrorMailSectionHandler, Elmah" />
          <section name="errorFilter" requirePermission="false" type="Elmah.ErrorFilterSectionHandler, Elmah" />
        </sectionGroup>

        <elmah>
          <security allowRemoteAccess="0" />
          <errorLog type="Elmah.SqlErrorLog, Elmah" connectionStringName="ELMAH.SqlServer" />
          <!--
          <errorMail from="[email protected]" to="[email protected]" cc="" subject="Elmah Error"
                     async="true" smtpPort="25" smtpServer="[EmailServerName]" userName="" password="" />
          <errorLog type="Elmah.XmlFileErrorLog, Elmah" logPath="~/App_Data" />
          -->
        </elmah>

        <connectionStrings>
          ...
          <add name="ELMAH.SqlServer"
               connectionString="data source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\ELMAH_Logging.mdf;Integrated Security=SSPI;Connect Timeout=30;User Instance=True;"
               providerName="System.Data.SqlClient"/>
        </connectionStrings>

        <system.web>
          <httpHandlers>
            <add verb="POST,GET,HEAD" path="elmah.axd" type="Elmah.ErrorLogPageFactory, Elmah" />
            ...
          </httpHandlers>
          <httpModules>
            <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" />
            <add name="ErrorFilter" type="Elmah.ErrorFilterModule, Elmah" />
            <add name="ErrorMail" type="Elmah.ErrorMailModule, Elmah" />
            ...
          </httpModules>
        </system.web>

        <system.webServer>
          <validation validateIntegratedModeConfiguration="false" />
          <modules runAllManagedModulesForAllRequests="true">
            <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" />
            <add name="ErrorFilter" type="Elmah.ErrorFilterModule, Elmah" />
            <add name="ErrorMail" type="Elmah.ErrorMailModule, Elmah" />
          </modules>
          <handlers>
            <add name="Elmah" verb="POST,GET,HEAD" path="elmah.axd" type="Elmah.ErrorLogPageFactory, Elmah"/>
            ...
          </handlers>
        </system.webServer>

    I'm using the code from DotNetDarren.com, but no matter what I do, no exceptions are ever logged.
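    One thing worth checking with a setup like the one above: in ASP.NET MVC, HandleErrorAttribute can catch exceptions before ELMAH's HttpModule ever sees them, so nothing is logged even when the configuration itself is correct. A minimal C# sketch (the ElmahTestController below is hypothetical, not from the post) that raises an error through ELMAH directly; if this entry shows up in elmah.axd, the logging side is wired correctly and the problem is likely exceptions being swallowed upstream:

        using System;
        using System.Web.Mvc;
        using Elmah;

        // Hypothetical smoke-test controller: browse to /ElmahTest/Throw,
        // then open elmah.axd and look for the "ELMAH smoke test" entry.
        public class ElmahTestController : Controller
        {
            public ActionResult Throw()
            {
                try
                {
                    throw new InvalidOperationException("ELMAH smoke test");
                }
                catch (Exception ex)
                {
                    // Pushes the error through ELMAH's modules (logging, filtering, mail).
                    ErrorSignal.FromCurrentContext().Raise(ex);
                }
                return Content("Check elmah.axd for the test entry.");
            }
        }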

    Read the article

  • Asp.net MVC VirtualPathProvider views parse error

    - by madcapnmckay
    Hi, I am working on a plugin system for ASP.NET MVC 2. I have a DLL containing controllers and views as embedded resources. I scan the plugin DLLs for controllers using StructureMap, and I can then pull them out and instantiate them when requested. This works fine. I then have a VirtualPathProvider which I adapted from this post:

        public class AssemblyResourceProvider : VirtualPathProvider
        {
            protected virtual string WidgetDirectory
            {
                get { return "~/bin"; }
            }

            private bool IsAppResourcePath(string virtualPath)
            {
                var checkPath = VirtualPathUtility.ToAppRelative(virtualPath);
                return checkPath.StartsWith(WidgetDirectory, StringComparison.InvariantCultureIgnoreCase);
            }

            public override bool FileExists(string virtualPath)
            {
                return (IsAppResourcePath(virtualPath) || base.FileExists(virtualPath));
            }

            public override VirtualFile GetFile(string virtualPath)
            {
                return IsAppResourcePath(virtualPath)
                    ? new AssemblyResourceVirtualFile(virtualPath)
                    : base.GetFile(virtualPath);
            }

            public override CacheDependency GetCacheDependency(string virtualPath, IEnumerable virtualPathDependencies, DateTime utcStart)
            {
                return IsAppResourcePath(virtualPath)
                    ? null
                    : base.GetCacheDependency(virtualPath, virtualPathDependencies, utcStart);
            }
        }

        internal class AssemblyResourceVirtualFile : VirtualFile
        {
            private readonly string path;

            public AssemblyResourceVirtualFile(string virtualPath) : base(virtualPath)
            {
                path = VirtualPathUtility.ToAppRelative(virtualPath);
            }

            public override Stream Open()
            {
                var parts = path.Split('/');
                var resourceName = Path.GetFileName(path);
                var apath = HttpContext.Current.Server.MapPath(Path.GetDirectoryName(path));
                var assembly = Assembly.LoadFile(apath);
                return assembly != null
                    ? assembly.GetManifestResourceStream(assembly.GetManifestResourceNames().SingleOrDefault(s => string.Compare(s, resourceName, true) == 0))
                    : null;
            }
        }

    The VPP seems to be working fine also. The view is found and is pulled out into a stream. I then receive a parse error, "Could not load type 'System.Web.Mvc.ViewUserControl<dynamic>'", which I can't find mentioned in any previous example of pluggable views. Why would my view not compile at this stage? Thanks for any help, Ian

    EDIT: Getting closer to an answer, but it's not quite clear why things aren't compiling. Based on the comments I checked the versions, and everything is MVC 2; I believe dynamic was brought in at V2, so this is fine. I don't even have V3 installed, so it can't be that. I have, however, got the view to render if I remove the <dynamic> altogether. So a VPP works, but only if the view is not strongly typed or dynamic. This makes sense for the strongly typed scenario, as the type lives in the dynamically loaded DLL, so the view engine will not be aware of it even though the DLL is in the bin. Is there a way to load types at app start? I am considering having a go with MEF instead of my bespoke StructureMap solution. What do you think?
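    For context on how a provider like the one above is normally hooked into the pipeline, here is a minimal C# sketch, assuming the AssemblyResourceProvider class from the question and an otherwise standard MVC 2 Global.asax; it does not address the ViewUserControl<dynamic> parse error itself. Registration has to happen before the first embedded view is requested, so Application_Start is the usual place.

        using System.Web.Hosting;
        using System.Web.Mvc;
        using System.Web.Routing;

        // Sketch only: assumes AssemblyResourceProvider as defined in the question.
        public class MvcApplication : System.Web.HttpApplication
        {
            protected void Application_Start()
            {
                // Make the embedded-resource views visible to the view engines.
                HostingEnvironment.RegisterVirtualPathProvider(new AssemblyResourceProvider());

                AreaRegistration.RegisterAllAreas();
                RegisterRoutes(RouteTable.Routes);
            }

            private static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
                routes.MapRoute(
                    "Default",                                    // route name
                    "{controller}/{action}/{id}",                 // URL pattern
                    new { controller = "Home", action = "Index", id = UrlParameter.Optional });
            }
        }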

    Read the article
