Search Results

Search found 6492 results on 260 pages for 'trigger io'.


  • Is there a way to edit a control's template based on the default template?

    - by Adam S
    I'm trying to make just a few minor tweaks to the default template of a checkbox. I understand how to make a new template from scratch, but not how to modify the default one. I did manage to (I think?) extract the default template via the method here. And it spat out:

        <ControlTemplate TargetType="CheckBox"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:s="clr-namespace:System;assembly=mscorlib"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:mwt="clr-namespace:Microsoft.Windows.Themes;assembly=PresentationFramework.Luna">
          <BulletDecorator Background="#00FFFFFF" SnapsToDevicePixels="True">
            <BulletDecorator.Bullet>
              <mwt:BulletChrome Background="{TemplateBinding Panel.Background}"
                  BorderBrush="{TemplateBinding Border.BorderBrush}"
                  BorderThickness="{TemplateBinding Border.BorderThickness}"
                  RenderMouseOver="{TemplateBinding UIElement.IsMouseOver}"
                  RenderPressed="{TemplateBinding ButtonBase.IsPressed}"
                  IsChecked="{TemplateBinding ToggleButton.IsChecked}" />
            </BulletDecorator.Bullet>
            <ContentPresenter RecognizesAccessKey="True"
                Content="{TemplateBinding ContentControl.Content}"
                ContentTemplate="{TemplateBinding ContentControl.ContentTemplate}"
                ContentStringFormat="{TemplateBinding ContentControl.ContentStringFormat}"
                Margin="{TemplateBinding Control.Padding}"
                HorizontalAlignment="{TemplateBinding Control.HorizontalContentAlignment}"
                VerticalAlignment="{TemplateBinding Control.VerticalContentAlignment}"
                SnapsToDevicePixels="{TemplateBinding UIElement.SnapsToDevicePixels}" />
          </BulletDecorator>
          <ControlTemplate.Triggers>
            <Trigger Property="ContentControl.HasContent">
              <Setter Property="FrameworkElement.FocusVisualStyle">
                <Setter.Value>
                  <Style TargetType="IFrameworkInputElement">
                    <Style.Resources>
                      <ResourceDictionary />
                    </Style.Resources>
                    <Setter Property="Control.Template">
                      <Setter.Value>
                        <ControlTemplate>
                          <Rectangle Stroke="#FF000000" StrokeThickness="1"
                              StrokeDashArray="1 2" Margin="14,0,0,0"
                              SnapsToDevicePixels="True" />
                        </ControlTemplate>
                      </Setter.Value>
                    </Setter>
                  </Style>
                </Setter.Value>
              </Setter>
              <Setter Property="Control.Padding">
                <Setter.Value>
                  <Thickness>2,0,0,0</Thickness>
                </Setter.Value>
              </Setter>
              <Trigger.Value>
                <s:Boolean>True</s:Boolean>
              </Trigger.Value>
            </Trigger>
            <Trigger Property="UIElement.IsEnabled">
              <Setter Property="TextElement.Foreground">
                <Setter.Value>
                  <DynamicResource ResourceKey="{x:Static SystemColors.GrayTextBrushKey}" />
                </Setter.Value>
              </Setter>
              <Trigger.Value>
                <s:Boolean>False</s:Boolean>
              </Trigger.Value>
            </Trigger>
          </ControlTemplate.Triggers>
        </ControlTemplate>

    Okay, sure, looks just fine I guess. I don't have enough experience to know whether that's right or not. Now, I get two errors:

        Assembly 'PresentationFramework.Luna' was not found. Verify that you are not
        missing an assembly reference. Also, verify that your project and all
        referenced assemblies have been built.

        The type 'mwt:BulletChrome' was not found. Verify that you are not missing
        an assembly reference and that all referenced assemblies have been built.

    Now I'm wondering: how can I resolve these errors so that I can actually start working on the template? Is there a better way of going about this? My boss wants a three-state checkbox with a red square instead of green, and he won't take no for an answer.
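    For reference, the usual way to dump a control's default template (quite possibly the "method" linked above) is to serialize it with XamlWriter. This is a hedged sketch under that assumption, not necessarily the exact code used here:

        // C# sketch: serialize a CheckBox's theme template to indented XAML.
        using System.Text;
        using System.Windows.Controls;
        using System.Windows.Markup;
        using System.Xml;

        static string DumpDefaultTemplate()
        {
            var checkBox = new CheckBox();
            // The control may need to be part of a loaded window for the theme
            // template to resolve; ApplyTemplate() forces application if it can.
            checkBox.ApplyTemplate();

            var sb = new StringBuilder();
            var settings = new XmlWriterSettings { Indent = true };
            using (var writer = XmlWriter.Create(sb, settings))
                XamlWriter.Save(checkBox.Template, writer);
            return sb.ToString();
        }

    The two errors come from the pasted xmlns:mwt mapping: the extracted XAML references the theme-specific assembly (PresentationFramework.Luna here), which projects do not reference by default, so adding a project reference to that assembly (or re-extracting the template under the theme you actually target, e.g. PresentationFramework.Aero) is typically needed before the template compiles.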


  • Need help on Coda slider tabs to move inside an overflow:hidden div

    - by Reden
    I'm not too good at JavaScript and hope I can get a bit of help. I'm using Coda Slider 2.0, and have designed it so that the tabs are another slider to the right of the main slider, basically like this mootools plugin: http://landofcoder.com/demo/mootool/lofslidernews/index2.1.html The problem is that the items will not scroll. How do I make the items (the tabs on the right) scroll down as the slider rotates? As it is, the slider will show the 4th slide but not scroll to the 4th item on the right. Thanks everyone. Here is the Coda Slider plugin:

        // when the DOM is ready...
        $(document).ready(function () {
            var $panels = $('#slider .scrollContainer > div');
            var $container = $('#slider .scrollContainer');

            // if false, we'll float all the panels left and fix the width
            // of the container
            var horizontal = true;

            // float the panels left if we're going horizontal
            if (horizontal) {
                $panels.css({
                    'float' : 'left',
                    'position' : 'relative' // IE fix to ensure overflow is hidden
                });

                // calculate a new width for the container (so it holds all panels)
                $container.css('width', $panels[0].offsetWidth * $panels.length);
            }

            // collect the scroll object, at the same time apply the hidden overflow
            // to remove the default scrollbars that will appear
            var $scroll = $('#slider .scroll').css('overflow', 'hidden');

            // apply our left + right buttons
            $scroll
                .before('<img class="scrollButtons left" src="images/scroll_left.png" />')
                .after('<img class="scrollButtons right" src="images/scroll_right.png" />');

            // handle nav selection
            function selectNav() {
                $(this)
                    .parents('ul:first')
                        .find('a')
                            .removeClass('selected')
                        .end()
                    .end()
                    .addClass('selected');
            }
            $('#slider .navigation').find('a').click(selectNav);

            // go find the navigation link that has this target and select the nav
            function trigger(data) {
                var el = $('#slider .navigation').find('a[href$="' + data.id + '"]').get(0);
                selectNav.call(el);
            }

            if (window.location.hash) {
                trigger({ id : window.location.hash.substr(1) });
            } else {
                $('ul.navigation a:first').click();
            }

            // offset is used to move to *exactly* the right place; since I'm using
            // padding on my example, I need to subtract the amount of padding from
            // the offset. Try removing this to get a good idea of the effect
            var offset = parseInt((horizontal ?
                $container.css('paddingTop') :
                $container.css('paddingLeft')) || 0) * -1;

            var scrollOptions = {
                target: $scroll, // the element that has the overflow
                // can be a selector which will be relative to the target
                items: $panels,
                navigation: '.navigation a',
                // selectors are NOT relative to document, i.e. make sure they're unique
                prev: 'img.left',
                next: 'img.right',
                // allow the scroll effect to run both directions
                axis: 'xy',
                onAfter: trigger, // our final callback
                offset: offset,
                // duration of the sliding effect
                duration: 500,
                // easing - can be used with the easing plugin:
                // http://gsgd.co.uk/sandbox/jquery/easing/
                easing: 'swing'
            };

            // apply serialScroll to the slider - we chose this plugin because it
            // supports the indexed next and previous scroll along with hooking
            // in to our navigation.
            $('#slider').serialScroll(scrollOptions);

            // now apply localScroll to hook any other arbitrary links to trigger
            // the effect
            $.localScroll(scrollOptions);

            // finally, if the URL has a hash, move the slider in to position,
            // setting the duration to 1 because I don't want it to scroll on the
            // very first page load. We don't always need this, but it ensures
            // the positioning is absolutely spot on when the page loads.
            scrollOptions.duration = 1;
            $.localScroll.hash(scrollOptions);

            ///////////////////////////////////////////////
            // autoscroll
            ///////////////////////////////////////////////

            // start to automatically cycle the tabs
            cycleTimer = setInterval(function () {
                $scroll.trigger('next');
            }, 2000); // how many milliseconds; change this to whatever you like

            // select some trigger elements to stop the auto-cycle
            var $stopTriggers = $('#slider .navigation').find('a') // tab headers
                .add('.scroll')        // panel itself
                .add('.stopscroll')    // links to the stop class div
                .add('.navigation')    // links to navigation id for tabs
                .add("a[href^='#']");  // links to a tab

            // this is the function that will stop the auto-cycle
            function stopCycle() {
                clearInterval(cycleTimer); // stop the auto-cycle itself
                $buttons.show();           // show the navigation buttons
                document.getElementById('stopscroll').style.display = 'none';   // hide the stop div
                document.getElementById('startscroll').style.display = 'block'; // show the start div
            }

            // bind stop cycle function to the click event using namespaces
            $stopTriggers.bind('click.cycle', stopCycle);

            ///////////////////////////////////////////////
            // edit to start again
            ///////////////////////////////////////////////

            // select some trigger elements to restart the auto-cycle
            var $startTriggers_start = $('#slider .navigation').find('a') // tab headers
                .add('.startscroll'); // links to the start class div

            // this is the function that will restart the auto-cycle
            function startCycle() {
                $buttons.hide();          // hide the navigation buttons
                $scroll.trigger('next');  // go directly to the next panel first
                cycleTimer = setInterval(function () { // now set the timer again
                    $scroll.trigger('next');
                }, 5000); // how many milliseconds; change this to whatever you like
                document.getElementById('stopscroll').style.display = 'block'; // show the stop div
                document.getElementById('startscroll').style.display = 'none'; // hide the start div
            }

            // bind start cycle function to the click event using namespaces
            $startTriggers_start.bind('click.cycle', startCycle);
        });
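    One hedged way to get the behavior being asked for is to scroll the tab column from inside the trigger() callback that serialScroll already invokes via onAfter. The selector '.tabScroll' and the animation duration below are assumptions for illustration, not part of Coda Slider:

        // Extend the existing trigger() so the right-hand tab list follows the slider.
        function trigger(data) {
            var el = $('#slider .navigation').find('a[href$="' + data.id + '"]').get(0);
            selectNav.call(el);

            // Assumed: '.tabScroll' is the overflow:hidden div wrapping the tab items.
            var $tabPane = $('.tabScroll');
            var $active = $(el);

            // Animate the wrapper so the active tab is brought into view.
            $tabPane.animate({
                scrollTop: $tabPane.scrollTop() + $active.position().top
            }, 400);
        }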


  • GZip/Deflate Compression in ASP.NET MVC

    - by Rick Strahl
    A long while back I wrote about GZip compression in ASP.NET. In that article I describe two generic helper methods that I've used in all sorts of ASP.NET applications, from WebForms apps to HttpModules and HttpHandlers that require gzip or deflate compression. The same static methods also work in ASP.NET MVC. Here are the two routines:

        /// <summary>
        /// Determines if GZip is supported
        /// </summary>
        /// <returns></returns>
        public static bool IsGZipSupported()
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (!string.IsNullOrEmpty(AcceptEncoding) &&
                (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                return true;
            return false;
        }

        /// <summary>
        /// Sets up the current page or handler to use GZip through a Response.Filter
        /// IMPORTANT: You have to call this method before any output is generated!
        /// </summary>
        public static void GZipEncodePage()
        {
            HttpResponse Response = HttpContext.Current.Response;
            if (IsGZipSupported())
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                if (AcceptEncoding.Contains("gzip"))
                {
                    Response.Filter = new System.IO.Compression.GZipStream(
                        Response.Filter, System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "gzip");
                }
                else
                {
                    Response.Filter = new System.IO.Compression.DeflateStream(
                        Response.Filter, System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "deflate");
                }
            }

            // Allow proxy servers to cache encoded and unencoded versions separately
            Response.AppendHeader("Vary", "Content-Encoding");
        }

    The first method checks whether the client sending the request includes the Accept-Encoding header for either gzip or deflate, and if it does it returns true. The second method uses IsGZipSupported() to decide whether it should encode content, and uses a Response Filter to do its job. Basically, response filters look at the Response output stream as it's written and convert the data flowing through it. Filters are a bit tricky to work with, but the two .NET filter streams for GZip and Deflate compression make this a snap to implement. In my old code, and even now in MVC, I can always do:

        public ActionResult List(string keyword = null, int category = 0)
        {
            WebUtils.GZipEncodePage();
            ...
        }

    to encode my content. And that works just fine.

    The proper way: create an ActionFilterAttribute

    However, in MVC this sort of thing is typically better handled by an ActionFilter, which can be applied with an attribute. So, to be all prim and proper, I created a CompressContentAttribute ActionFilter that incorporates those two helper methods and looks like this:

        /// <summary>
        /// Attribute that can be added to controller methods to force content
        /// to be GZip encoded if the client supports it
        /// </summary>
        public class CompressContentAttribute : ActionFilterAttribute
        {
            /// <summary>
            /// Override to compress the content that is generated by
            /// an action method.
            /// </summary>
            /// <param name="filterContext"></param>
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                GZipEncodePage();
            }

            // IsGZipSupported() and GZipEncodePage() are the same two static
            // methods shown above, moved into the attribute class.
        }

    It's basically the same code wrapped into an ActionFilter attribute, which intercepts MVC requests to controller methods and lets you hook up logic before and after the methods have executed. Here I want to override OnActionExecuting(), which fires before the controller action is fired. With the CompressContentAttribute created, it can now be applied either to the controller as a whole:

        [CompressContent]
        public class ClassifiedsController : ClassifiedsBaseController
        {
            ...
        }

    or to one of the action methods:

        [CompressContent]
        public ActionResult List(string keyword = null, int category = 0)
        {
            ...
        }

    The former applies compression to every action method, while the latter is selective and only applies it to the individual action method.

    Is the attribute better than the static utility function? Not really, but it is the standard MVC way to hook up 'filter' content, and that's where others are likely to expect options like this to be set. In fact, you have a bit more control with the utility function, because you can conditionally apply it in code, but this is actually much less likely in MVC applications than in old WebForms apps, since controller methods tend to be more focused.

    Compression Caveats

    Http compression is very cool and pretty easy to implement in ASP.NET, but you have to be careful with it, especially if your content might get transformed or redirected inside of ASP.NET. A good example is if an error occurs while a compression filter is applied: ASP.NET errors don't clear the filter, but do clear the Response headers, which results in some nasty garbage because the compressed content no longer matches the headers. Another issue is caching, which has to account for all the possible combinations of compressed and uncompressed content being served. Basically, compressed content and caching don't mix well. I wrote about several of these issues in an old blog post, and I recommend you take a quick peek before diving into making every bit of output GZip encoded.
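    As one concrete illustration of the error caveat, a common mitigation (a hedged sketch, not code from the original article) is to strip the compression filter when an unhandled error occurs, for example in Global.asax:

        protected void Application_Error(object sender, EventArgs e)
        {
            // The error page is written with fresh headers, so make sure it is
            // not routed through the now-mismatched compression stream.
            var response = HttpContext.Current.Response;
            response.Filter = null;
            response.Headers.Remove("Content-Encoding");
        }

    With the filter gone, the error page goes out uncompressed and matches its headers.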
    None of these are show stoppers, but you have to be aware of the issues.

    Related Posts: GZip Compression with ASP.NET Content; ASP.NET GZip Encoding Caveats

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, MVC.


  • usb mouse/keyboard doesn't work with 3.11.0-12-generic kernel

    - by x-yuri
    I can't use my USB keyboard/mouse after an upgrade from raring to saucy. The keyboard works in the GRUB menu and if I boot with the previous kernel version (3.8.0-31-generic). My new kernel version is 3.11.0-12-generic. I've got a Mad Catz R.A.T.7 wired USB mouse, a Canyon CNL-MBMSO02 wired USB mouse and a Logitech diNovo Edge wireless keyboard, connected to the computer through a Logitech Unifying Receiver. Using a PS/2 keyboard I've managed to get some information. dmesg says:

        [    0.166273] ACPI: bus type USB registered
        [    0.166273] usbcore: registered new interface driver usbfs
        [    0.166273] usbcore: registered new interface driver hub
        [    0.166273] usbcore: registered new device driver usb
        ...
        [    3.534226] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
        [    3.534228] ehci-pci: EHCI PCI platform driver
        [    3.534291] ehci-pci 0000:00:1a.7: setting latency timer to 64
        [    3.534299] ehci-pci 0000:00:1a.7: EHCI Host Controller
        [    3.534304] ehci-pci 0000:00:1a.7: new USB bus registered, assigned bus number 1
        [    3.534315] ehci-pci 0000:00:1a.7: debug port 1
        [    3.538218] ehci-pci 0000:00:1a.7: cache line size of 64 is not supported
        [    3.538231] ehci-pci 0000:00:1a.7: irq 18, io mem 0xd3325400
        [    3.548017] ehci-pci 0000:00:1a.7: USB 2.0 started, EHCI 1.00
        [    3.548042] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
        [    3.548045] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
        [    3.548048] usb usb1: Product: EHCI Host Controller
        [    3.548050] usb usb1: Manufacturer: Linux 3.11.0-12-generic ehci_hcd
        [    3.548053] usb usb1: SerialNumber: 0000:00:1a.7
        [    3.548155] hub 1-0:1.0: USB hub found
        [    3.548159] hub 1-0:1.0: 6 ports detected
        [    3.548311] ehci-pci 0000:00:1d.7: setting latency timer to 64
        [    3.548319] ehci-pci 0000:00:1d.7: EHCI Host Controller
        [    3.548323] ehci-pci 0000:00:1d.7: new USB bus registered, assigned bus number 2
        [    3.548333] ehci-pci 0000:00:1d.7: debug port 1
        [    3.552228] ehci-pci 0000:00:1d.7: cache line size of 64 is not supported
        [    3.552239] ehci-pci 0000:00:1d.7: irq 23, io mem 0xd3325000
        [    3.564014] ehci-pci 0000:00:1d.7: USB 2.0 started, EHCI 1.00
        [    3.564044] usb usb2: New USB device found, idVendor=1d6b, idProduct=0002
        [    3.564047] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
        [    3.564050] usb usb2: Product: EHCI Host Controller
        [    3.564052] usb usb2: Manufacturer: Linux 3.11.0-12-generic ehci_hcd
        [    3.564056] usb usb2: SerialNumber: 0000:00:1d.7
        [    3.564163] hub 2-0:1.0: USB hub found
        [    3.564167] hub 2-0:1.0: 6 ports detected
        [    3.564274] ehci-platform: EHCI generic platform driver
        [    3.564280] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
        [    3.564281] ohci-platform: OHCI generic platform driver
        [    3.564287] uhci_hcd: USB Universal Host Controller Interface driver
        [    3.564345] uhci_hcd 0000:00:1a.0: setting latency timer to 64
        [    3.564347] uhci_hcd 0000:00:1a.0: UHCI Host Controller
        [    3.564352] uhci_hcd 0000:00:1a.0: new USB bus registered, assigned bus number 3
        [    3.564378] uhci_hcd 0000:00:1a.0: irq 16, io base 0x0000f0c0
        [    3.564402] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001
        [    3.564404] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
        [    3.564406] usb usb3: Product: UHCI Host Controller
        [    3.564408] usb usb3: Manufacturer: Linux 3.11.0-12-generic uhci_hcd
        [    3.564410] usb usb3: SerialNumber: 0000:00:1a.0
        [    3.564478] hub 3-0:1.0: USB hub found
        [    3.564482] hub 3-0:1.0: 2 ports detected
        [    3.564589] uhci_hcd 0000:00:1a.1: setting latency timer to 64
        [    3.564592] uhci_hcd 0000:00:1a.1: UHCI Host Controller
        [    3.564597] uhci_hcd 0000:00:1a.1: new USB bus registered, assigned bus number 4
        [    3.564623] uhci_hcd 0000:00:1a.1: irq 21, io base 0x0000f0a0
        [    3.564647] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001
        [    3.564649] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
        [    3.564651] usb usb4: Product: UHCI Host Controller
        [    3.564653] usb usb4: Manufacturer: Linux 3.11.0-12-generic uhci_hcd
        [    3.564654] usb usb4: SerialNumber: 0000:00:1a.1
        [    3.564727] hub 4-0:1.0: USB hub found
        [    3.564730] hub 4-0:1.0: 2 ports detected
        [    3.564834] uhci_hcd 0000:00:1a.2: setting latency timer to 64
        [    3.564837] uhci_hcd 0000:00:1a.2: UHCI Host Controller
        [    3.564843] uhci_hcd 0000:00:1a.2: new USB bus registered, assigned bus number 5
        [    3.564863] uhci_hcd 0000:00:1a.2: irq 18, io base 0x0000f080
        [    3.564885] usb usb5: New USB device found, idVendor=1d6b, idProduct=0001
        [    3.564887] usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1
        [    3.564889] usb usb5: Product: UHCI Host Controller
        [    3.564891] usb usb5: Manufacturer: Linux 3.11.0-12-generic uhci_hcd
        [    3.564892] usb usb5: SerialNumber: 0000:00:1a.2
        [    3.564962] hub 5-0:1.0: USB hub found
        [    3.564966] hub 5-0:1.0: 2 ports detected
        [    3.565073] uhci_hcd 0000:00:1d.0: setting latency timer to 64
        [    3.565076] uhci_hcd 0000:00:1d.0: UHCI Host Controller
        [    3.565081] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 6
        [    3.565101] uhci_hcd 0000:00:1d.0: irq 23, io base 0x0000f060
        [    3.565124] usb usb6: New USB device found, idVendor=1d6b, idProduct=0001
        [    3.565127] usb usb6: New USB device strings: Mfr=3, Product=2, SerialNumber=1
        [    3.565128] usb usb6: Product: UHCI Host Controller
        [    3.565130] usb usb6: Manufacturer: Linux 3.11.0-12-generic uhci_hcd
        [    3.565132] usb usb6: SerialNumber: 0000:00:1d.0
        [    3.565195] hub 6-0:1.0: USB hub found
        [    3.565198] hub 6-0:1.0: 2 ports detected
        [    3.565303] uhci_hcd 0000:00:1d.1: setting latency timer to 64
        [    3.565306] uhci_hcd 0000:00:1d.1: UHCI Host Controller
        [    3.565310] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 7
        [    3.565329] uhci_hcd 0000:00:1d.1: irq 19, io base 0x0000f040
        [    3.565352] usb usb7: New USB device found, idVendor=1d6b, idProduct=0001
        [    3.565354] usb usb7: New USB device strings: Mfr=3, Product=2, SerialNumber=1
        [    3.565356] usb usb7: Product: UHCI Host Controller
        [    3.565358] usb usb7: Manufacturer: Linux 3.11.0-12-generic uhci_hcd
        [    3.565359] usb usb7: SerialNumber: 0000:00:1d.1
        [    3.565424] hub 7-0:1.0: USB hub found
        [    3.565427] hub 7-0:1.0: 2 ports detected
        [    3.565534] uhci_hcd 0000:00:1d.2: setting latency timer to 64
        [    3.565537] uhci_hcd 0000:00:1d.2: UHCI Host Controller
        [    3.565541] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 8
        [    3.565560] uhci_hcd 0000:00:1d.2: irq 18, io base 0x0000f020
        [    3.565584] usb usb8: New USB device found, idVendor=1d6b, idProduct=0001
        [    3.565587] usb usb8: New USB device strings: Mfr=3, Product=2, SerialNumber=1
        [    3.565588] usb usb8: Product: UHCI Host Controller
        [    3.565590] usb usb8: Manufacturer: Linux 3.11.0-12-generic uhci_hcd
        [    3.565592] usb usb8: SerialNumber: 0000:00:1d.2
        [    3.565658] hub 8-0:1.0: USB hub found
        [    3.565661] hub 8-0:1.0: 2 ports detected
        ...
        [    4.120014] usb 2-3: new high-speed USB device number 2 using ehci-pci
        ...
        [    4.468908] usb 2-3: New USB device found, idVendor=046d, idProduct=0825
        [    4.468912] usb 2-3: New USB device strings: Mfr=0, Product=0, SerialNumber=2
        [    4.468914] usb 2-3: SerialNumber: AF582E10
        ...
        [    5.284019] usb 5-2: new full-speed USB device number 2 using uhci_hcd
        [    5.465903] usb 5-2: New USB device found, idVendor=046d, idProduct=0b04
        [    5.465908] usb 5-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
        [    5.465911] usb 5-2: Product: Logitech BT Mini-Receiver
        [    5.465914] usb 5-2: Manufacturer: Logitech
        [    5.468948] hub 5-2:1.0: USB hub found
        [    5.470898] hub 5-2:1.0: 3 ports detected
        [    5.476096] Switched to clocksource tsc
        [    5.712099] usb 7-2: new full-speed USB device number 2 using uhci_hcd
        [    5.896366] usb 7-2: New USB device found, idVendor=046d, idProduct=c52b
        [    5.896370] usb 7-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
        [    5.896372] usb 7-2: Product: USB Receiver
        [    5.896374] usb 7-2: Manufacturer: Logitech
        [    6.140016] usb 8-1: new full-speed USB device number 2 using uhci_hcd
        [    6.324597] usb 8-1: New USB device found, idVendor=0738, idProduct=1708
        [    6.324603] usb 8-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
        [    6.324605] usb 8-1: Product: Mad Catz R.A.T.7 Mouse
        [    6.324608] usb 8-1: Manufacturer: Mad Catz
        [    6.564012] usb 8-2: new low-speed USB device number 3 using uhci_hcd
        [    6.746602] usb 8-2: New USB device found, idVendor=1d57, idProduct=0010
        [    6.746608] usb 8-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
        [    6.746610] usb 8-2: Product: usb mouse with wheel
        [    6.746613] usb 8-2: Manufacturer: HID-Compliant Mouse
        [    7.337898] usb 5-2.2: new full-speed USB device number 3 using uhci_hcd
        [    7.490902] usb 5-2.2: New USB device found, idVendor=046d, idProduct=c713
        [    7.490907] usb 5-2.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        [    7.490910] usb 5-2.2: Product: Logitech BT Mini-Receiver
        [    7.490913] usb 5-2.2: Manufacturer: Logitech
        [    7.490915] usb 5-2.2: SerialNumber: 001F203BD6A7
        [    7.569898] usb 5-2.3: new full-speed USB device number 4 using uhci_hcd
        [    7.722901] usb 5-2.3: New USB device found, idVendor=046d, idProduct=c714
        [    7.722906] usb 5-2.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        [    7.722909] usb 5-2.3: Product: Logitech BT Mini-Receiver
        [    7.722911] usb 5-2.3: Manufacturer: Logitech
        [    7.722913] usb 5-2.3: SerialNumber: 001F203BD6A7

    lsusb (more output):

        Bus 002 Device 002: ID 046d:0825 Logitech, Inc. Webcam C270
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 008 Device 003: ID 1d57:0010 Xenta
        Bus 008 Device 002: ID 0738:1708 Mad Catz, Inc.
        Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 002: ID 046d:c52b Logitech, Inc. Unifying Receiver
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 005 Device 004: ID 046d:c714 Logitech, Inc. diNovo Edge Keyboard
        Bus 005 Device 003: ID 046d:c713 Logitech, Inc.
        Bus 005 Device 002: ID 046d:0b04 Logitech, Inc.
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    Some more background: before this, I had a problem logging in to GNOME, during which I upgraded all the packages at one point (apt-get upgrade), after which it stopped booting at all (it didn't get to the login screen). Then I fixed a PATH issue, and now I've got this usb-not-working issue. I tried reinstalling the kernel, to no effect. Is there anything else I can do to fix or diagnose the problem?
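    A hedged diagnostic sketch (an editorial suggestion, not from the original post): on Ubuntu, USB input drivers such as usbhid ship in the linux-image-extra package for each kernel version, so it is worth checking that the modules actually exist for the new kernel and reinstalling the extras package if they don't:

        # Are the HID modules loaded / present for the running kernel?
        lsmod | grep -i hid
        find /lib/modules/$(uname -r) -name 'usbhid*'

        # If nothing is found, the extra-modules package may be missing:
        sudo apt-get install linux-image-extra-$(uname -r)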


  • Writing the tests for FluentPath

    - by Bertrand Le Roy
    Writing the tests for FluentPath is a challenge. The library is a wrapper around a legacy API (System.IO) that wasn't designed to be easily testable. If it were more testable, the sensible testing methodology would be to tell System.IO to act against a mock file system, which would enable me to verify that my code is doing the expected file system operations without having to manipulate the actual, physical file system: what we are testing here is FluentPath, not System.IO. Unfortunately, that is not an option, as nothing in System.IO enables us to plug a mock file system in. As a consequence, we are left with few options. A few people have suggested that I abstract my calls to System.IO away so that I could tell FluentPath, not System.IO, to use a mock instead of the real thing. That in turn is getting a little silly: FluentPath already is a thin abstraction around System.IO, so layering another abstraction between them would double the test surface while bringing little or no value. I would have to test that new abstraction layer, and that would bring us back to square one. Unless I'm missing something, the only option I have here is to bite the bullet and test against the real file system. Of course, the tests that do that can hardly be called unit tests. They are more integration tests, as they don't only test bits of my code. They really test the successful integration of my code with the underlying System.IO.

    In order to write such tests, the techniques of BDD work particularly well, as they enable you to express scenarios in natural language, from which test code is generated. Integration tests are better expressed as scenarios orchestrating a few basic behaviors, so this is a nice fit. The Orchard team has been successfully using SpecFlow for integration tests for a while, and I thought it was pretty cool, so that's what I decided to use. Consider for example the following scenario:

        Scenario: Change extension
            Given a clean test directory
            When I change the extension of bar\notes.txt to foo
            Then bar\notes.txt should not exist
            And bar\notes.foo should exist

    This is human readable and tells you everything you need to know about what you're testing, but it is also executable code. What happens when SpecFlow compiles this scenario is that it executes a bunch of regular expressions that identify the known Given (set-up phases), When (actions) and Then (result assertions) to identify the code to run, which is then translated into calls into the appropriate methods. Nothing magical.

    Here is the code generated by SpecFlow:

        [NUnit.Framework.TestAttribute()]
        [NUnit.Framework.DescriptionAttribute("Change extension")]
        public virtual void ChangeExtension()
        {
            TechTalk.SpecFlow.ScenarioInfo scenarioInfo =
                new TechTalk.SpecFlow.ScenarioInfo("Change extension", ((string[])(null)));
        #line 6
            this.ScenarioSetup(scenarioInfo);
        #line 7
            testRunner.Given("a clean test directory");
        #line 8
            testRunner.When("I change the extension of " + "bar\\notes.txt to foo");
        #line 9
            testRunner.Then("bar\\notes.txt should not exist");
        #line 10
            testRunner.And("bar\\notes.foo should exist");
        #line hidden
            testRunner.CollectScenarioErrors();
        }

    The #line directives are there to give clues to the debugger, because yes, you can put breakpoints into a scenario.

    The way you usually write tests with SpecFlow is that you write the scenario first and let it fail; then you write the translation of your Given, When and Then into code if they don't already exist, which results in running but failing tests; and then you write the code to make your tests pass (you implement the scenario). In the case of FluentPath, I built a simple Given method that builds a simple file hierarchy in a temporary directory that all scenarios are going to work with:

        [Given("a clean test directory")]
        public void GivenACleanDirectory()
        {
            _path = new Path(SystemIO.Path.GetTempPath())
                .CreateSubDirectory("FluentPathSpecs")
                .MakeCurrent();
            _path.GetFileSystemEntries()
                 .Delete(true);
            _path.CreateFile("foo.txt", "This is a text file named foo.");
            var bar = _path.CreateSubDirectory("bar");
            bar.CreateFile("baz.txt", "bar baz")
               .SetLastWriteTime(DateTime.Now.AddSeconds(-2));
            bar.CreateFile("notes.txt", "This is a text file containing notes.");
            var barbar = bar.CreateSubDirectory("bar");
            barbar.CreateFile("deep.txt", "Deep thoughts");
            var sub = _path.CreateSubDirectory("sub");
            sub.CreateSubDirectory("subsub");
            sub.CreateFile("baz.txt", "sub baz")
               .SetLastWriteTime(DateTime.Now);
            sub.CreateFile("binary.bin",
                new byte[] {0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0xFF});
        }

    Then, to implement the scenario that you can read above, I had to write the following When:

        [When("I change the extension of (.*) to (.*)")]
        public void WhenIChangeTheExtension(string path, string newExtension)
        {
            var oldPath = Path.Current.Combine(path.Split('\\'));
            oldPath.Move(p => p.ChangeExtension(newExtension));
        }

    As you can see, the When attribute specifies the regular expression that will enable the SpecFlow engine to recognize which When method to call, and also how to map its parameters. For our scenario, "bar\notes.txt" will get mapped to the path parameter, and "foo" to the newExtension parameter. And of course, here is the code that verifies the assumptions of the scenario:

        [Then("(.*) should exist")]
        public void ThenEntryShouldExist(string path)
        {
            Assert.IsTrue(_path.Combine(path.Split('\\')).Exists);
        }

        [Then("(.*) should not exist")]
        public void ThenEntryShouldNotExist(string path)
        {
            Assert.IsFalse(_path.Combine(path.Split('\\')).Exists);
        }

    These steps should be written with reusability in mind. They are building blocks for your scenarios, not implementations of a specific scenario. Think small and fine-grained. In the case of the above steps, I could reuse each of them in other scenarios (one such reuse is sketched just below). Those tests are easy to write and easier to read, which means that they also constitute a form of documentation. Oh, and SpecFlow is just one way to do this.
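    To make the reusability point concrete, here is a hypothetical second scenario (an editorial example, not one from the original post) built entirely from the same three steps and the file hierarchy created by the Given:

        Scenario: Change extension of a root-level file
            Given a clean test directory
            When I change the extension of foo.txt to bak
            Then foo.txt should not exist
            And foo.bak should exist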
Rob wrote a long time ago about this sort of thing (but using a different framework) and I highly recommend this post if I somehow managed to pique your interest: http://blog.wekeroad.com/blog/make-bdd-your-bff-2/ And this screencast (Rob always makes excellent screencasts): http://blog.wekeroad.com/mvc-storefront/kona-3/ (click the “Download it here” link)


  • Deploying Django on EC2 using Bitnami Djangostack: WSGI script cannot be loaded

    - by Arman
    I've been struggling to deploy a Django application on Amazon EC2 using the Bitnami DjangoStack for the last couple of days. When I go to http://dewey.io I see the default Bitnami page (/opt/bitnami/apache2/htdocs/index.html); however, when I open http://dewey.io/portnoy, I get 'Internal Server Error'. But it's known that if mod_wsgi is set up correctly, the DocumentRoot value from httpd.conf is ignored; thus, I should see my Django application when accessing http://dewey.io. Essentially, the main error is this: 'Target WSGI script cannot be loaded as Python module'.

    Two questions:

    1) Any ideas how to fix these mod_wsgi errors (the Apache logs are below)?
    2) How do I disable the default /opt/bitnami/apache2/htdocs/index.html page and show the homepage from my Django application when accessing http://dewey.io?

    Thank you in advance!

    The details: on my EC2 instance I'm running 64-bit Ubuntu 12.04 with DjangoStack 1.4-1. My Django project is located here: /opt/bitnami/apps/django/django_projects/portnoy.

        root@dewey:/opt/bitnami/apps/django/django_projects/portnoy# ls
        manage.py  README.md  settings.py  site_media  users  Procfile  sandbox
        static  test.py  topics  urls.py  views.py  __init__.pyc  templates  testviews.py

    Apache error logs (/opt/bitnami/apache2/logs/error_log):

        [Wed Jul 04 02:29:00 2012] [error] [client 140.180.6.212] File does not exist: /opt/bitnami/apache2/htdocs/favicon.ico
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] mod_wsgi (pid=3990): Target WSGI script '/opt/bitnami/apps/django/scripts/django.wsgi' cannot be loaded as Python module.
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] mod_wsgi (pid=3990): Exception occurred processing WSGI script '/opt/bitnami/apps/django/scripts/django.wsgi'.
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] Traceback (most recent call last):
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]   File "/opt/bitnami/apps/django/scripts/django.wsgi", line 8, in <module>
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]     import django.core.handlers.wsgi
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]   File "/opt/bitnami/apps/django/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 8, in <module>
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]     from django import http
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]   File "/opt/bitnami/apps/django/lib/python2.7/site-packages/django/http/__init__.py", line 119, in <module>
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]     from django.http.multipartparser import MultiPartParser
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]   File "/opt/bitnami/apps/django/lib/python2.7/site-packages/django/http/multipartparser.py", line 13, in <module>
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]     from django.utils.text import unescape_entities
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]   File "/opt/bitnami/apps/django/lib/python2.7/site-packages/django/utils/text.py", line 4, in <module>
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]     from gzip import GzipFile
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]   File "/opt/bitnami/python/lib/python2.7/gzip.py", line 10, in <module>
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]     import io
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]   File "/opt/bitnami/python/lib/python2.7/io.py", line 60, in <module>
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]     import _io
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] ImportError: /opt/bitnami/python/lib/python2.7/lib-dynload/_io.so: undefined symbol: PyUnicodeUCS2_AsEncodedString
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] File does not exist: /opt/bitnami/apache2/htdocs/favicon.ico
        [Wed Jul 04 02:44:00 2012] [error] [client 140.180.6.212] File does not exist: /opt/bitnami/apache2/htdocs/favicon.ico

    Let me quickly introduce the contents of the files to make the case more concrete. This is my /etc/apache2/sites-available/default file:

        <VirtualHost *:80>
            ServerAdmin root@dewey.io
            ServerName dewey.io

            Alias /site_media/ /opt/bitnami/apps/django/django_projects/portnoy/site_media/
            Alias /static/ /opt/bitnami/apps/django/lib/python2.7/site-packages/django/contrib/admin/static/
            Alias /robots.txt /opt/bitnami/apps/django/django_projects/portnoy/site_media/robots.txt
            Alias /favicon.ico /opt/bitnami/apps/django/django_projects/portnoy/site_media/favicon.ico

            CustomLog "|/usr/sbin/rotatelogs /opt/bitnami/apps/django/django_projects/logs/access.log.%Y%m%d-%H%M%S 5M" combined
            ErrorLog "|/usr/sbin/rotatelogs /opt/bitnami/apps/django/django_projects/logs/error.log.%Y%m%d-%H%M%S 5M"
            LogLevel warn

            WSGIProcessGroup dewey.io
            WSGIScriptAlias / /opt/bitnami/apps/django/scripts/django.wsgi

            <Directory /opt/bitnami/apps/django/django_projects/portnoy/site_media>
                Order deny,allow
                Allow from all
                Options -Indexes FollowSymLinks
            </Directory>

            <Directory /opt/bitnami/apps/django/django_projects/portnoy/conf/apache>
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>

    This is my /opt/bitnami/apps/django/scripts/django.wsgi file:

        import os, sys
        sys.path.append('/opt/bitnami/apps/django/lib/python2.7/site-packages/')
        sys.path.append('/opt/bitnami/apps/django/django_projects')
        sys.path.append('/opt/bitnami/apps/django/django_projects/portnoy')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'portnoy.settings'

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    Here is the relevant portion of the /opt/bitnami/apache2/conf/httpd.conf file:

        ServerRoot "/opt/bitnami/apache2"
        Listen 80
        ServerName dewey.io
        DocumentRoot "/opt/bitnami/apache2/htdocs"
        LoadModule wsgi_module modules/mod_wsgi.so
        WSGIPythonHome /opt/bitnami/python
        Include "/opt/bitnami/apache2/conf/ssi.conf"
        Include "/opt/bitnami/apps/django/conf/django.conf"
        Include "/opt/bitnami/apache2/conf/bitnami/httpd.conf"
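    The ImportError at the heart of the trace (undefined symbol: PyUnicodeUCS2_AsEncodedString) usually means a module built for a narrow-Unicode (UCS2) Python is being loaded by a wide-Unicode (UCS4) build, or vice versa; with mod_wsgi this typically happens when the module was compiled against a different Python than the one WSGIPythonHome points at. A hedged way to compare the two builds (an editorial suggestion, not from the original question):

        # 65535 indicates a narrow (UCS2) build; 1114111 indicates a wide (UCS4) build.
        /opt/bitnami/python/bin/python -c "import sys; print(sys.maxunicode)"
        /usr/bin/python -c "import sys; print(sys.maxunicode)"

        # If they differ, rebuilding mod_wsgi against the Bitnami Python is the
        # usual remedy (paths assumed):
        # ./configure --with-python=/opt/bitnami/python/bin/python && make && sudo make install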


  • SQL SERVER – PAGEIOLATCH_DT, PAGEIOLATCH_EX, PAGEIOLATCH_KP, PAGEIOLATCH_SH, PAGEIOLATCH_UP – Wait Type – Day 9 of 28

    - by pinaldave
    It is very easy to say that you should replace your hardware when it is not up to the mark. In reality, it is very difficult to implement. It is really hard to convince an infrastructure team to change any hardware on the grounds that it is not performing at its best. I once had a nightmare related to this issue while dealing with an infrastructure team, when I suggested that they replace their faulty hardware; initially they would not accept that the fault lay with the hardware at all. It is really easy to say "Trust me, I am correct", but it is equally important that you put some logical reasoning behind that statement. PAGEIOLATCH_XX is the kind of wait stat that we would directly like to blame on the underlying subsystem. Of course, most of the time that is correct: the underlying subsystem usually is the problem.

    From Book On-Line:

    PAGEIOLATCH_DT: Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Destroy mode. Long waits may indicate problems with the disk subsystem.
    PAGEIOLATCH_EX: Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Exclusive mode. Long waits may indicate problems with the disk subsystem.
    PAGEIOLATCH_KP: Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Keep mode. Long waits may indicate problems with the disk subsystem.
    PAGEIOLATCH_SH: Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Shared mode. Long waits may indicate problems with the disk subsystem.
    PAGEIOLATCH_UP: Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Update mode. Long waits may indicate problems with the disk subsystem.

    PAGEIOLATCH_XX explanation: simply put, this particular wait type occurs when a task is waiting for data from the disk to move into the buffer cache.

    Reducing PAGEIOLATCH_XX waits: just like any other wait type, this is a very challenging and interesting subject to resolve. Here are a few things you can experiment with (a generic query for inspecting these waits appears below):

    Improve your IO subsystem speed (read the first paragraph of this article if you have not; I repeat that a step like this is much easier to say than to actually implement).

    This type of wait stat can also happen due to memory pressure or other memory issues. Putting aside the issue of a faulty IO subsystem, this wait type warrants proper analysis of the memory counters: if for any reason the memory is not optimal and is unable to receive the IO data, that situation can produce this kind of wait type.

    Proper placement of files is very important. We should check the file system for proper placement of files: LDF and MDF on separate drives, TempDB on a separate drive, hot-spot tables on a separate filegroup (and on a separate disk), etc.

    Check the file statistics and see if there are high IO Read and IO Write stalls: SQL SERVER – Get File Statistics Using fn_virtualfilestats.

    It is very possible that there are no proper indexes on the system and there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can significantly reduce CPU, memory and IO load (considering the covering index has far fewer columns than the clustered table, and all the other "it depends" conditions).
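    As a hedged companion to this checklist (a generic query, not one from the original post), the cumulative PAGEIOLATCH waits on an instance can be inspected through the sys.dm_os_wait_stats DMV:

        -- PAGEIOLATCH waits accumulated since the last restart (or manual clear)
        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms,
               max_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type LIKE 'PAGEIOLATCH%'
        ORDER BY wait_time_ms DESC;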
    You can refer to the two articles below, previously written by me, that talk about how to optimize indexes:

    Create Missing Indexes
    Drop Unused Indexes

    Updating statistics can help the Query Optimizer render an optimal plan, whether directly or indirectly. I have seen that updating statistics with a full scan (again, if your database is huge and you cannot do this, never mind!) can provide optimal information to the SQL Server optimizer, leading to an efficient plan.

    Checking memory-related Perfmon counters:

    SQLServer: Memory Manager\Memory Grants Pending (consistently higher than 0-2 is a concern)
    SQLServer: Memory Manager\Memory Grants Outstanding (consistently high; benchmark)
    SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a usually smooth-running system)
    SQLServer: Buffer Manager\Page Life Expectancy (consistently lower than 300 seconds is a concern)
    Memory: Available Mbytes (information only)
    Memory: Page Faults/sec (benchmark only)
    Memory: Pages/sec (benchmark only)

    Checking disk-related Perfmon counters:

    Average Disk sec/Read (consistently higher than 4-8 milliseconds is not good)
    Average Disk sec/Write (consistently higher than 4-8 milliseconds is not good)
    Average Disk Read/Write Queue Length (consistently higher than the benchmark is not good)

    Note: The information presented here is from my experience, and I make no claim that it is accurate in every case. I suggest reading Book OnLine for further clarification. All of the discussion of wait stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology


  • Timeout event in netty 4

    - by user1819425
    Hi, I would like to receive an event when messageReceived does not get called within an expected time. I tried ReadTimeoutHandler, which generates an exception that I can handle in exceptionCaught(), where I would do some work and return without closing the context. But right after that I got a bunch of exceptions:

        Nov 18, 2012 8:56:34 AM io.netty.channel.ChannelInitializer
        WARNING: Failed to initialize a channel. Closing: [id: 0xa81de260, /127.0.0.1:59763 => /127.0.0.1:59724]
        io.netty.channel.ChannelHandlerLifeCycleException: io.netty.handler.timeout.ReadTimeoutHandler is not a @Sharable handler, so can't be added or removed multiple times.
            at io.netty.channel.DefaultChannelPipeline.callBeforeAdd(DefaultChannelPipeline.java:629)
            at io.netty.channel.DefaultChannelPipeline.addLast0(DefaultChannelPipeline.java:173)

    Am I doing this correctly? Thanks
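    The exception text itself points at the likely fix: ReadTimeoutHandler keeps per-channel state and is not @Sharable, so a fresh instance must be added for every channel rather than one instance being reused across channels. A hedged sketch of a ChannelInitializer that does this (MyBusinessHandler is a placeholder for your own handler):

        import io.netty.channel.ChannelInitializer;
        import io.netty.channel.socket.SocketChannel;
        import io.netty.handler.timeout.ReadTimeoutHandler;

        public class TimeoutInitializer extends ChannelInitializer<SocketChannel> {
            @Override
            protected void initChannel(SocketChannel ch) {
                // A new (non-shared) timeout handler per channel; 30 second read timeout.
                ch.pipeline().addLast("readTimeout", new ReadTimeoutHandler(30));
                ch.pipeline().addLast("handler", new MyBusinessHandler());
            }
        }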


  • NoSuchPortException using RXTX Java library on Windows?

    - by Steve
    I have followed the instructions to set up RXTX on Windows from http://www.jcontrol.org/download/readme_rxtx_en.html. What I did, exactly, was copy rxtxSerial.dll to "C:\Program Files\Java\jdk1.6.0_07\jre\bin" and copy RXTXcomm.jar to "C:\Program Files\Java\jdk1.6.0_07\jre\lib\ext" (my JAVA_HOME variable is set to C:\Program Files\Java\jdk1.6.0_07\jre). I also added RXTXcomm.jar to my Eclipse project. But when I run it, it still says "NoSuchPortException":

        Devel Library
        =========================================
        Native lib Version = RXTX-2.0-7pre1
        Java lib Version = RXTX-2.0-7pre1
        java.lang.ClassCastException: gnu.io.RXTXCommDriver cannot be cast to gnu.io.CommDriver thrown while loading gnu.io.RXTXCommDriver
        gnu.io.NoSuchPortException
            at gnu.io.CommPortIdentifier.getPortIdentifier(CommPortIdentifier.java:218)
            at TwoWaySerialComm.connect(TwoWaySerialComm.java:20)
            at TwoWaySerialComm.main(TwoWaySerialComm.java:107)

    In my Java file, I tell it:

        try {
            (new TwoWaySerialComm()).connect("COM4");
        }

    and I've also tried the Java Comm API. Both cannot recognize my serial port, but I am sure I followed the instructions correctly. The files are there. Does anybody have any idea what it could be?
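    Independent of the missing port, the ClassCastException in that output suggests the RXTX classes are being loaded twice, once from jre\lib\ext and once from the project classpath, so removing one of the two copies of RXTXcomm.jar is worth trying. A hedged first diagnostic (not from the original question) is to enumerate the ports RXTX actually sees; if COM4 is not printed, the native library simply does not detect it:

        import gnu.io.CommPortIdentifier;
        import java.util.Enumeration;

        public class ListPorts {
            public static void main(String[] args) {
                Enumeration<?> ports = CommPortIdentifier.getPortIdentifiers();
                while (ports.hasMoreElements()) {
                    CommPortIdentifier id = (CommPortIdentifier) ports.nextElement();
                    // Prints every port RXTX can see, e.g. COM1, COM4.
                    System.out.println(id.getName() + " (type " + id.getPortType() + ")");
                }
            }
        }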


  • LinqToSQL _conn ? LinqToSQLConnection ?

    - by nCdy
    Here is the code:

        using System;
        using Nemerle.Collections;
        using Nemerle.Text;
        //using Nemerle.Utility;
        using System.Linq;
        using Nemerle.Data.Linq;
        using NUnit.Framework;
        using System.Data.Linq;

        namespace LinqTestes
        {
          [TestFixture]
          public class Linq2SqlTests
          {
            static ReadConnectionString() : string
            {
              def currAssm = Uri(typeof(Linq2SqlTests).Assembly.CodeBase).LocalPath;
              def path = IO.Path.GetDirectoryName(currAssm);
              def connStrPath = IO.Path.Combine(path, "connectionString.txt");
              def connStr =
                try { IO.File.ReadAllText(connStrPath, Text.Encoding.UTF8) }
                catch
                {
                  | e is IO.FileNotFoundException =>
                    throw IO.FileNotFoundException(
                      $"You should define connection string to NorthWind DB in: '$connStrPath'",
                      e.FileName, e)
                };
              connStr
            }

            _conn : LinqDataConnection = LinqDataConnection(ReadConnectionString());

    I'm writing the same thing, but what is the LinqDataConnection type, and where does it come from?


  • ubuntu: sem_timedwait not waking (C)

    - by gillez
    I have 3 processes which need to be synchronized. Process one does something, then wakes process two and sleeps; process two does something, then wakes process three and sleeps; process three does something, then wakes process one and sleeps. The whole loop is timed to run at around 25 Hz (caused by an external sync into process one before it triggers process two in my "real" application). I use sem_post to trigger (wake) each process, and sem_timedwait() to wait for the trigger. This all works successfully for several hours. However, at some random time (usually somewhere between two and four hours in), one of the processes starts timing out in sem_timedwait(), even though I am sure the semaphore is being triggered with sem_post(). To prove this I even use sem_getvalue() immediately after the timeout, and the value is 1, so the timedwait should have been triggered. Please see the following code:

        #include <stdio.h>
        #include <time.h>
        #include <string.h>
        #include <errno.h>
        #include <semaphore.h>
        /* also needed for the calls below */
        #include <pthread.h>
        #include <sys/time.h>
        #include <unistd.h>

        sem_t trigger_sem1, trigger_sem2, trigger_sem3;

        // The main thread process. Called three times with a different num arg - 1, 2 or 3.
        void *thread(void *arg)
        {
            int num = (int) arg;
            sem_t *wait, *trigger;
            int val, retval;
            struct timespec ts;
            struct timeval tv;

            switch (num) {
            case 1:
                wait = &trigger_sem1;
                trigger = &trigger_sem2;
                break;
            case 2:
                wait = &trigger_sem2;
                trigger = &trigger_sem3;
                break;
            case 3:
                wait = &trigger_sem3;
                trigger = &trigger_sem1;
                break;
            }

            while (1) {
                // The first thread delays by 40ms to time the whole loop.
                // This is an external sync in the real app.
                if (num == 1)
                    usleep(40000);

                // print sem value before we wait. If this is 1, sem_timedwait() will
                // return immediately, otherwise it will block until sem_post() is
                // called on this sem.
                sem_getvalue(wait, &val);
                printf("sem%d wait sync sem%d. val before %d\n", num, num, val);

                // get current time and add half a second for the timeout.
                gettimeofday(&tv, NULL);
                ts.tv_sec = tv.tv_sec;
                ts.tv_nsec = (tv.tv_usec + 500000); // add half a second
                if (ts.tv_nsec > 1000000) {
                    ts.tv_sec++;
                    ts.tv_nsec -= 1000000;
                }
                ts.tv_nsec *= 1000; /* convert to nanosecs */

                retval = sem_timedwait(wait, &ts);
                if (retval == -1) {
                    // timed out. Print value of sem now. This should be 0, otherwise
                    // sem_timedwait would have woken before the timeout (unless the
                    // sem_post happened between the timeout and this call to
                    // sem_getvalue).
                    sem_getvalue(wait, &val);
                    printf("!!!!!! sem%d sem_timedwait failed: %s, val now %d\n",
                           num, strerror(errno), val);
                } else
                    printf("sem%d wakeup.\n", num);

                // get value of semaphore to trigger. If it's 1, don't post, as it has
                // already been triggered and sem_timedwait on that sem *should* not block.
                sem_getvalue(trigger, &val);
                if (val <= 0) {
                    printf("sem%d send sync sem%d. val before %d\n",
                           num, (num == 3 ? 1 : num + 1), val);
                    sem_post(trigger);
                } else
                    printf("!! sem%d not sending sync, val %d\n", num, val);
            }
        }

        int main(int argc, char *argv[])
        {
            pthread_t t1, t2, t3;

            // create semaphores. val of sem1 is 1 to trigger straight away and start
            // the whole ball rolling.
            if (sem_init(&trigger_sem1, 0, 1) == -1)
                perror("Error creating trigger_listman semaphore");
            if (sem_init(&trigger_sem2, 0, 0) == -1)
                perror("Error creating trigger_comms semaphore");
            if (sem_init(&trigger_sem3, 0, 0) == -1)
                perror("Error creating trigger_vws semaphore");

            pthread_create(&t1, NULL, thread, (void *) 1);
            pthread_create(&t2, NULL, thread, (void *) 2);
            pthread_create(&t3, NULL, thread, (void *) 3);

            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            pthread_join(t3, NULL);
        }

    The following output is printed when the program is running correctly (at the start, and for a random but long time afterwards). The value of sem1 is always 1 before thread 1 waits, as it sleeps for 40ms, by which time sem3 has triggered it, so it wakes straight away. The other two threads wait until the semaphore is received from the previous thread.

        [...]
        sem1 wait sync sem1. val before 1
        sem1 wakeup.
        sem1 send sync sem2. val before 0
        sem2 wakeup.
        sem2 send sync sem3. val before 0
        sem2 wait sync sem2. val before 0
        sem3 wakeup.
        sem3 send sync sem1. val before 0
        sem3 wait sync sem3. val before 0
        sem1 wait sync sem1. val before 1
        sem1 wakeup.
        sem1 send sync sem2. val before 0
        [...]

    However, after a few hours, one of the threads begins to time out. I can see from the output that the semaphore is being triggered, and when I print the value after the timeout it is 1, so sem_timedwait should have woken up well before the timeout. I would never expect the value of the semaphore to be 1 after the timeout, save for the very rare occasion (almost certainly never, but it's possible) when the trigger happens after the timeout but before I call sem_getvalue. Also, once it begins to fail, every subsequent sem_timedwait() on that semaphore fails in the same way. See the following output, which I've line-numbered:

        01 sem3 wait sync sem3. val before 0
        02 sem1 wakeup.
        03 sem1 send sync sem2. val before 0
        04 sem2 wakeup.
        05 sem2 send sync sem3. val before 0
        06 sem2 wait sync sem2. val before 0
        07 sem1 wait sync sem1. val before 0
        08 !!!!!! sem3 sem_timedwait failed: Connection timed out, val now 1
        09 sem3 send sync sem1. val before 0
        10 sem3 wait sync sem3. val before 1
        11 sem3 wakeup.
        12 !! sem3 not sending sync, val 1
        13 sem3 wait sync sem3. val before 0
        14 sem1 wakeup.
        [...]

    On line 01, thread 3 (which I have confusingly called sem3 in the printf) waits for sem3 to be triggered. On line 05, sem2 calls sem_post for sem3. However, line 08 shows sem3 timing out, even though the value of the semaphore is 1. Thread 3 then triggers sem1 and waits again (line 10). However, because the value is already 1, it wakes straight away (line 11). It doesn't send sem1 again, as this has all happened before control is given to thread 1; it then waits again (val is now 0) and sem1 wakes up. This now repeats forever, sem3 always timing out and showing that the value is 1.

    So, my question is: why does sem3 time out, even though the semaphore has been triggered and the value is clearly 1? I would never expect to see line 08 in the output. If it timed out (because, say, thread 2 had crashed or was taking too long), the value should be 0. And why does it work fine for 3 or 4 hours first before getting into this state? This is using Ubuntu 9.04 with kernel 2.6.28. The same procedure has been working properly on Red Hat and Fedora, but I'm now trying to port to Ubuntu! Thanks for any advice, Giles
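    One thing worth double-checking in the code above is the absolute-timeout arithmetic: sem_timedwait() expects an absolute CLOCK_REALTIME timespec, and mixing gettimeofday() microseconds with nanoseconds is easy to get subtly wrong (for instance, the overflow test above runs before the microseconds-to-nanoseconds conversion and misses the exact-equality case, which yields an invalid tv_nsec of 1000000000). A hedged sketch of a more conventional construction, offered as a diagnostic aid rather than a confirmed fix for the symptom described:

        #include <time.h>

        /* Build an absolute deadline `ms` milliseconds from now for sem_timedwait(). */
        static void deadline_from_now(struct timespec *ts, long ms)
        {
            clock_gettime(CLOCK_REALTIME, ts);     /* sem_timedwait uses CLOCK_REALTIME */
            ts->tv_sec  += ms / 1000;
            ts->tv_nsec += (ms % 1000) * 1000000L; /* milliseconds -> nanoseconds */
            if (ts->tv_nsec >= 1000000000L) {      /* normalize the carry */
                ts->tv_sec++;
                ts->tv_nsec -= 1000000000L;
            }
        }

        /* usage: struct timespec ts; deadline_from_now(&ts, 500); sem_timedwait(&sem, &ts); */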


  • Sharepoint Variations Error

    - by marcocampos
    I'm getting some crazy errors when trying to create variations in SharePoint. Has anybody seen this error?

        PublishingPage::AttemptPairUpWithPage() Ends. this: http://wseasp05/PT/Paginas/Destaque1.aspx, destPageUrl: /ES/Paginas/Destaque1.aspx
        Begin DeploymentWrapper::SynchronizePeerPages(), sourcePage = Paginas/Destaque1.aspx
        DeploymentWrapper::SynchronizePeerPages(), synchronizeDestUrl = /ES/Paginas/Destaque1.aspx
        Access to the path 'C:\Windows\TEMP\11c7c12e-030d-4860-a942-f5ab71f0930d\ExportSettings.xml' is denied.
            at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
            at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy)
            at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options, String msgPath, Boolean bFromProxy)
            at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options)
            at System.IO.FileInfo.Create()
            at Microsoft.SharePoint.Deployment.ExportDataFileManager.Initialize()
            at Microsoft.SharePoint.Deployment.SPExport.InitializeExport()
            at Microsoft.SharePoint.Deployment.SPExport.Run()
        Export Completed.
        DeploymentWrapper.SynchronizePeerPages() catches UnauthorizedAccessException.
        Spawn failed for /ES/Paginas/Destaque1.aspx
        End of DeploymentWrapper.SynchronizePeerPages()

    Thanks in advance.
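    The UnauthorizedAccessException on C:\Windows\TEMP suggests a permissions problem rather than a variations bug: the identity performing the export (the application pool or timer service account) needs write access to that temp folder. A hedged example of granting it (the account name is an assumption; substitute the real service account):

        :: Grant the SharePoint service account modify rights on the temp folder
        icacls C:\Windows\TEMP /grant "DOMAIN\SPServiceAccount":(OI)(CI)M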


  • How do I organize a GUI application for passing around events and for setting up reads from a shared resource

    - by Savanni D'Gerinel
    My tools involved here are GTK and Haskell. My questions are probably pretty trivial for anyone who has done significant GUI work, but I've been off in the equivalent of CGI applications for my whole career. I'm building an application that displays tabular data, displays the same data in graph form, and has an edit field both for entering new data and for editing existing data. After asking about sharing resources, I decided that all of the data involved will be stored in an MVar so that every component can just read the current state from the MVar. All of that works, but now it is time to rearrange the application so that it can be interactive.

    With that in mind, I have three widgets: a TextView (for editing), a TreeView (for displaying the data), and a DrawingArea (for displaying the data as a graph). I THINK I need to do two things, and the core of my question is: are these the right things, or is there a better way?

    Thing the first: all event handlers, meaning the functions that will be called any time a redisplay is needed, need to be written at a high level and then passed into the function that actually constructs the widget to begin with. For instance:

    drawStatData :: DrawingArea -> MVar Core.ST -> (Core.ST -> SetRepWorkout.WorkoutStore) -> IO ()
    createStatView :: (DrawingArea -> IO ()) -> IO VBox
    createUI :: MVar Core.ST -> (Core.ST -> SetRepWorkout.WorkoutStore) -> IO HBox

    createUI storeMVar field = do
        graphs <- createStatView (\area -> drawStatData area storeMVar field)
        hbox <- hBoxNew False 10
        boxPackStart hbox graphs PackNatural 0
        return hbox

    In this case, createStatView builds up a VBox that contains a DrawingArea to graph the data, and potentially other widgets. It attaches drawStatData to the realize and exposeEvent events for the DrawingArea. I would do something similar for the TreeView, but I am not completely sure what, since I have not yet done it, and what I am thinking of would involve replacing the TreeModel every time the TreeView needs to be updated.

    My alternative to the above would be...

    drawStatData :: DrawingArea -> MVar Core.ST -> (Core.ST -> SetRepWorkout.WorkoutStore) -> IO ()
    createStatView :: IO (VBox, DrawingArea)

    ... but in this case, I would arrange createUI like so:

    createUI :: MVar Core.ST -> (Core.ST -> SetRepWorkout.WorkoutStore) -> IO HBox
    createUI storeMVar field = do
        (graphbox, graph) <- createStatView
        hbox <- hBoxNew False 10
        boxPackStart hbox graphbox PackNatural 0
        on graph realize (drawStatData graph storeMVar field)
        on graph exposeEvent (do
            liftIO $ drawStatData graph storeMVar field
            return ())
        return hbox

    I'm not sure which is better, but that does lead me to...

    Thing the second: it will be necessary for me to rig up an event system so that various events can send signals all the way to my widgets. I'm going to need a mediator of some kind to pass events around and to translate application-semantic events to the actual events that my widgets respond to. Is it better for me to pass my addressable widgets up the call stack to the level where the mediator lives, or to pass the mediator down the call stack and have the widgets register directly with it?

    So, in summary, my questions:
    1) Pass widgets up the call stack to a global mediator, or pass the global mediator down and have the widgets register themselves with it?
    2) Pass my redraw functions to the builders and have the builders attach them to the constructed widgets, or pass the constructed widgets back and have a higher level attach the redraw functions (and potentially link some widgets together)?
    Okay, and...
    3) Books or wikis about GUI application architecture, preferably coherent architectures where people aren't arguing about minute details?

    The application in its current form (displays data but does not write data or allow for much interaction) is available at https://bitbucket.org/savannidgerinel/fitness . You can run the application by going to the root directory and typing:

    runhaskell -isrc src/Main.hs data/

    or...

    cabal build
    dist/build/fitness/fitness data/

    You may need to install libraries, but cabal should tell you which ones.

    Read the article

  • Should I make sure arguments aren't null before using them in a function?

    - by Nathan W
    The title may not really explain what I'm trying to get at; I couldn't really think of a way to describe what I mean. I was wondering whether it is good practice to check the arguments that a function accepts for null or empty before using them. I have this function, which just wraps some hash creation, like so:

    Public Shared Function GenerateHash(ByVal FilePath As IO.FileInfo) As String
        If (FilePath Is Nothing) Then
            Throw New ArgumentNullException("FilePath")
        End If
        Dim _sha As New Security.Cryptography.MD5CryptoServiceProvider
        Dim _Hash = Convert.ToBase64String(_sha.ComputeHash(New IO.FileStream(FilePath.FullName, IO.FileMode.Open, IO.FileAccess.Read)))
        Return _Hash
    End Function

    As you can see, it just takes an IO.FileInfo as an argument, and at the start of the function I check to make sure that it is not Nothing. Is this good practice, or should I just let the argument reach the actual hasher and let that throw the exception because it is null? Thanks.
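
    For comparison, here is the same guard-clause pattern sketched in C#. This is an illustrative rewrite rather than the poster's code, and it also disposes the stream and the hasher, which the VB version above never does:

    using System;
    using System.IO;
    using System.Security.Cryptography;

    public static class Hasher
    {
        public static string GenerateHash(FileInfo filePath)
        {
            // Fail fast with a clear parameter name instead of a
            // NullReferenceException thrown somewhere deep inside ComputeHash.
            if (filePath == null)
                throw new ArgumentNullException(nameof(filePath));

            using (var md5 = MD5.Create())
            using (var stream = filePath.OpenRead())
            {
                return Convert.ToBase64String(md5.ComputeHash(stream));
            }
        }
    }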

    Read the article

  • Using FileSystemWatcher to detect an XML file and LINQ to read it, but it sometimes throws "Root element is missing"

    - by GrayFullBuster
    My application is already working: it detects the XML file and displays its content. But sometimes it throws "Root element is missing", even though when I open the XML file afterwards it has content in it. How do I solve this issue? Here is the code:

    private void fileSystemWatcher_Created(object sender, System.IO.FileSystemEventArgs e)
    {
        string invoice = "";
        using (var stream = System.IO.File.Open(e.FullPath, System.IO.FileMode.Open, System.IO.FileAccess.Read, System.IO.FileShare.ReadWrite))
        {
            var doc = System.Xml.Linq.XDocument.Load(stream);
            var transac = from r in doc.Descendants("Transaction")
                          select new { InvoiceNumber = r.Element("InvoiceNumber").Value };
            foreach (var i in transac)
            {
                invoice = i.InvoiceNumber;
            }
        }
        MessageBox.Show(invoice);
        fileSystemWatcher.EnableRaisingEvents = false;
    }

    The error occurs on this line: var doc = System.Xml.Linq.XDocument.Load(stream);
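
    The Created event frequently fires while the writing program still has the file open and half-written, so the first read can see an empty document, and XDocument.Load then reports "Root element is missing". A common workaround is to retry until the file parses. A minimal sketch, where the 200 ms delay and the attempt limit are arbitrary choices of mine, not values from the question:

    using System;
    using System.IO;
    using System.Threading;
    using System.Xml;
    using System.Xml.Linq;

    static class XmlFileReader
    {
        public static XDocument LoadWhenReady(string path, int maxAttempts = 10)
        {
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    using (var stream = File.Open(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
                    {
                        // Throws XmlException ("Root element is missing") while the writer is mid-write.
                        return XDocument.Load(stream);
                    }
                }
                catch (Exception ex) when ((ex is IOException || ex is XmlException) && attempt < maxAttempts)
                {
                    Thread.Sleep(200); // give the writing program time to finish, then try again
                }
            }
        }
    }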

    Read the article

  • Class.Class vs Namespace.Class for top-level, general-use class libraries?

    - by Joan Venge
    Which one is more acceptable (best practice)?

    Option 1:

    namespace NP
        public static class IO
        public static class Xml
        ... // extension methods

    using NP;
    IO.GetAvailableResources ();

    vs. Option 2:

    public static class NP
        public static class IO
        public static class Xml
        ... // extension methods

    NP.IO.GetAvailableResources ();

    Also, for #2 the code size is managed by having partial classes, so each nested class can be in a separate file; the same goes for the extension methods (except that there is no nested class for them). I prefer #2 for a couple of reasons, like being able to use type names that are already commonly used, such as IO, without replacing or colliding with them. Which one do you prefer? Any pros and cons for each? What's the best practice for this case? EDIT: Also, would there be a performance difference between the two?
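
    As a concrete sketch of what #2 looks like split across files with partial classes (the file names and the method body are hypothetical):

    // File NP.IO.cs
    public static partial class NP
    {
        public static partial class IO
        {
            public static string[] GetAvailableResources() => new string[0]; // placeholder body
        }
    }

    // File NP.Xml.cs
    public static partial class NP
    {
        public static partial class Xml
        {
            // Xml helpers go here; each nested class lives in its own file.
        }
    }

    // Call site: var res = NP.IO.GetAvailableResources();
    // No using directive can shorten this, which is the flip side of never
    // colliding with System.IO.

    On the EDIT about performance: namespaces exist only as part of type names at run time, and a nested type is just a type with a dotted name, so the two forms should perform identically.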

    Read the article

  • Monitoring Events in your BPEL Runtime - RSS Feeds?

    - by Ramkumar Menon
    @10g - It had been a while since I'd tried something different, so here's what I did this week! Whenever our developers deployed processes to the BPEL runtime, or perhaps when a process got turned off due to connectivity issues, or maybe someone retired a process, I needed to know. So here's what I did.

    Step 1: Downloaded the Quartz libraries and went through the documentation to understand what it takes to schedule a recurring job.

    Step 2: Cranked out two components using Oracle JDeveloper, within a new Web Project:

    a) A simple Java class named FeedUpdater that extends org.quartz.Job. All this class does is connect to your BPEL runtime (via opmn:ormi) and fetch all events that occurred in the last "n" minutes. Events? If that doesn't ring a bell, they're right there on the BPEL Console: if you click on "Administration > Process Log", what you see are events. The API to retrieve the events is:

    // get the locator reference for the domain you are interested in.
    Locator l = ....

    // Predicate to retrieve events for last "n" minutes
    WhereCondition wc = new WhereCondition(...)

    // get all those events you needed.
    BPELProcessEvent[] events = l.listProcessEvents(wc);

    After you get all these events, write them out into an RSS feed XML structure and stream it into a file that resides either in your Apache htdocs or wherever else it can be accessed via HTTP. You can read all about RSS 2.0 here. At a high level, here is how it looks:

    <?xml version = '1.0' encoding = 'UTF-8'?>
    <rss version="2.0">
      <channel>
        <title>Live Updates from the Development Environment</title>
        <link>http://soadev.myserver.com/feeds/</link>
        <description>Live Updates from the Development Environment</description>
        <lastBuildDate>Fri, 19 Nov 2010 01:03:00 PST</lastBuildDate>
        <language>en-us</language>
        <ttl>1</ttl>
        <item>
          <guid>1290213724692</guid>
          <title>Process compiled</title>
          <link>http://soadev.myserver.com/BPELConsole/mdm_product/administration.jsp?mode=processLog&amp;processName=&amp;dn=all&amp;eventType=all&amp;eventDate=600&amp;Filter=++Filter++</link>
          <pubDate>Fri Nov 19 00:00:37 PST 2010</pubDate>
          <description>SendPurchaseOrderRequestService: 3.0 Time : Fri Nov 19 00:00:37 PST 2010</description>
        </item>
        ......
      </channel>
    </rss>

    For writing out XML content, read through the Oracle XML Parser APIs (search around for oracle.xml.parser.v2).

    b) Now that my "Job" was done, my job was half done. Next, I wrote up a simple scheduler servlet that schedules the above "Job" class to be executed every "n" minutes. It is very straightforward. Here is the primary section of the code:

    try {
        Scheduler sched = StdSchedulerFactory.getDefaultScheduler();

        // get n and make a trigger that executes every "n" seconds
        Trigger trigger = TriggerUtils.makeSecondlyTrigger(n);
        trigger.setName("feedTrigger" + System.currentTimeMillis());
        trigger.setGroup("feedGroup");

        JobDetail job = new JobDetail("SOA_Feed" + System.currentTimeMillis(), "feedGroup", FeedUpdater.class);
        sched.scheduleJob(job, trigger);
    } catch(Exception ex) {
        ex.printStackTrace();
        throw new ServletException(ex.getMessage());
    }

    Look up the Quartz API and documentation; it will make this look much simpler.

    Now that both components were ready, I packaged the application into a war file and deployed it onto my application server. When the servlet initialized, the "n"-second schedule was set.

    From then on, the servlet kept populating the RSS feed file. I just ensured that my "Job" code keeps only the 30 latest events, so that the feed file stays small and under control (a few KBs).

    Next I opened up the feed XML in my browser. It requested a subscription, and here I was, watching new deployments and life cycle events popping up on my browser toolbar every 5 (actually n) minutes!

    Well, you could do it in a browser or reader of your choice, or perhaps read them like you read an email in your Thunderbird!
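
    The post builds the RSS XML by hand with Oracle's parser APIs. Purely as an illustration of the same feed structure, here is a short C# sketch that assembles an equivalent RSS 2.0 document with LINQ to XML; the event fields are placeholders, not data from a real BPEL runtime:

    using System;
    using System.Xml.Linq;

    var channel = new XElement("channel",
        new XElement("title", "Live Updates from the Development Environment"),
        new XElement("link", "http://soadev.myserver.com/feeds/"),
        new XElement("description", "Live Updates from the Development Environment"),
        new XElement("ttl", "1"));

    // One <item> per event; guid, title, and pubDate would come from the event object.
    channel.Add(new XElement("item",
        new XElement("guid", DateTimeOffset.Now.ToUnixTimeMilliseconds()),
        new XElement("title", "Process compiled"),
        new XElement("pubDate", DateTime.Now.ToString("r")), // RFC 1123, close to RSS's RFC 822 date format
        new XElement("description", "SendPurchaseOrderRequestService: 3.0")));

    var rss = new XDocument(new XElement("rss", new XAttribute("version", "2.0"), channel));
    rss.Save("feed.xml"); // drop it somewhere the web server can serve it over HTTP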

    Read the article

  • Using BizTalk to bridge SQL Job and Human Intervention (Requesting Permission)

    - by Kevin Shyr
    I start off the process with either a BizTalk Scheduler (http://biztalkscheduledtask.codeplex.com/releases/view/50363) or a manual file drop of the XML message. The manual file drop allows the SQL Job to call a "File Copy" SSIS step to copy the trigger file for the next process, and allows the SQL Job to be linked back into BizTalk processing.

    The process trigger XML looks like the following. It is basically the configuration hub of the business process:

    <ns0:MsgSchedulerTriggerSQLJobReceive xmlns:ns0="urn:com:something something">
      <ns0:IsProcessAsync>YES</ns0:IsProcessAsync>
      <ns0:IsPermissionRequired>YES</ns0:IsPermissionRequired>
      <ns0:BusinessProcessName>Data Push</ns0:BusinessProcessName>
      <ns0:EmailFrom>[email protected]</ns0:EmailFrom>
      <ns0:EmailRecipientToList>[email protected]</ns0:EmailRecipientToList>
      <ns0:EmailRecipientCCList>[email protected]</ns0:EmailRecipientCCList>
      <ns0:EmailMessageBodyForPermissionRequest>This message was sent to request permission to start the Data Push process. The SQL Job to be run is WeeklyProcessing_DataPush</ns0:EmailMessageBodyForPermissionRequest>
      <ns0:SQLJobName>WeeklyProcessing_DataPush</ns0:SQLJobName>
      <ns0:SQLJobStepName>Push_To_Production</ns0:SQLJobStepName>
      <ns0:SQLJobMinToWait>1</ns0:SQLJobMinToWait>
      <ns0:PermissionRequestTriggerPath>\\server\ETL-BizTalk\Automation\TriggerCreatedByBizTalk\</ns0:PermissionRequestTriggerPath>
      <ns0:PermissionRequestApprovedPath>\\server\ETL-BizTalk\Automation\Approved\</ns0:PermissionRequestApprovedPath>
      <ns0:PermissionRequestNotApprovedPath>\\server\ETL-BizTalk\Automation\NotApproved\</ns0:PermissionRequestNotApprovedPath>
    </ns0:MsgSchedulerTriggerSQLJobReceive>

    Every node of this schema was promoted to a distinguished field so that the values can be used for decision making in the orchestration. The first decision is made on the "IsPermissionRequired" field.

    If permission is required (IsPermissionRequired == "YES"), BizTalk uses the configuration info in the XML trigger to format the email message. Here is the snippet of how the email message is constructed:

    SQLJobEmailMessage.EmailBody = new Eai.OrchestrationHelpers.XlangCustomFormatters.RawString(
        MsgSchedulerTriggerSQLJobReceive.EmailMessageBodyForPermissionRequest +
        "<br><br>" +
        "By moving the file, you are either giving permission to the process, or disapprove of the process." +
        "<br>" +
        "This is the file to move: \"" + PermissionTriggerToBeGenereatedHere +
        "\"<br>" +
        "(You may find it easier to open the destination folder first, then navigate to the sibling folder to get to this file)" +
        "<br><br>" +
        "To approve, move(NOT copy) the file here: " + MsgSchedulerTriggerSQLJobReceive.PermissionRequestApprovedPath +
        "<br><br>" +
        "To disapprove, move(NOT copy) the file here: " + MsgSchedulerTriggerSQLJobReceive.PermissionRequestNotApprovedPath +
        "<br><br>" +
        "The file will be IMMEDIATELY picked up by the automated process. This is normal. You should receive a message soon that the file is processed." +
        "<br>" +
        "Thank you!"
    );
    SQLJobSendNotification(Microsoft.XLANGs.BaseTypes.Address) = "mailto:" + MsgSchedulerTriggerSQLJobReceive.EmailRecipientToList;
    SQLJobEmailMessage.EmailBody(Microsoft.XLANGs.BaseTypes.ContentType) = "text/html";
    SQLJobEmailMessage(SMTP.Subject) = "Requesting Permission to Start the " + MsgSchedulerTriggerSQLJobReceive.BusinessProcessName;
    SQLJobEmailMessage(SMTP.From) = MsgSchedulerTriggerSQLJobReceive.EmailFrom;
    SQLJobEmailMessage(SMTP.CC) = MsgSchedulerTriggerSQLJobReceive.EmailRecipientCCList;
    SQLJobEmailMessage(SMTP.EmailBodyFileCharset) = "UTF-8";
    SQLJobEmailMessage(SMTP.SMTPHost) = "localhost";
    SQLJobEmailMessage(SMTP.MessagePartsAttachments) = 2;

    After the permission request email is sent, the next step is to generate the actual permission trigger file. A correlation set is used here on SQLJobName and a newly generated GUID field:

    <?xml version="1.0" encoding="utf-8"?>
    <ns0:SQLJobAuthorizationTrigger xmlns:ns0="somethingsomething">
      <SQLJobName>Data Push</SQLJobName>
      <CorrelationGuid>9f7c6b46-0e62-46a7-b3a0-b5327ab03753</CorrelationGuid>
    </ns0:SQLJobAuthorizationTrigger>

    The end user (the human intervention piece) either grants permission for this process or denies it, by moving the permission trigger file to either the "Approved" folder or the "NotApproved" folder. A parallel Listen shape waits for either response.

    The next set of steps decides how the SQL Job is to be called, or whether it is called at all. If permission is denied, the orchestration simply sends out a notification. If permission is granted, then the flag (IsProcessAsync) in the original process trigger is used. The synchronous part is not really synchronous, but a loop timer that checks the status within the calling stored procedure (for more information, check out my previous post: http://geekswithblogs.net/LifeLongTechie/archive/2010/11/01/execute-sql-job-synchronously-for-biztalk-via-a-stored-procedure.aspx). If it's async, then the stored procedure starts the job and BizTalk sends out an email. And of course, there is some error notification as well.

    Footnote: the next version of this orchestration will have an additional parallel branch near the Listen shape, with a Delay built in and a loop to send out a daily reminder if no response has been received from the end user. The synchronous part is used to gather results and execute a data clean-up process so that the SQL Job can be retried. There are many possibilities here.

    Read the article

  • Periodic updates of an object in Unity

    - by Blue
    I'm trying to make a collider appear every 1 second, but I can't get the code right. I tried enabling the collider in the Update function and putting a yield in to make it update every second, but it's not working; it gives me the error "Update() cannot be a coroutine." How would I fix this? Would I need a timer system to toggle the collider?

    var waitTime : float = 1;
    var trigger : boolean = false;

    function Update () {
        if (!trigger) {
            collider.enabled = false;
            yield WaitForSeconds(waitTime);
        }
        if (trigger) {
            collider.enabled = true;
            yield WaitForSeconds(waitTime);
        }
    }
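
    Update() can't be a coroutine in Unity, but Start() can. A minimal C# sketch of the same toggle as a looping coroutine; the class name and field are mine, not from the question:

    using System.Collections;
    using UnityEngine;

    public class ColliderToggle : MonoBehaviour
    {
        public float waitTime = 1f;

        // Unlike Update(), Start() may be declared as a coroutine.
        IEnumerator Start()
        {
            var col = GetComponent<Collider>();
            while (true)
            {
                col.enabled = !col.enabled;                // flip the collider on or off
                yield return new WaitForSeconds(waitTime); // then sleep for waitTime seconds
            }
        }
    }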

    Read the article

  • Enabling and Disabling Colliders Unity

    - by Blue
    I'm trying to make the collider appear every 1 second, but I can't get the code right. I tried enabling the collider based on a boolean and putting a yield in to make it toggle every second or so, but it's not working; it gives me the error "Update() can not be a coroutine." How would I fix this? Would I need a timer system that enables the collider every 'x' seconds and disables it every 'y' seconds?

    var waitTime : float = 1;
    var trigger : boolean = false;

    function Update () {
        if (!trigger) {
            collider.enabled = false;
            yield WaitForSeconds(waitTime);
        }
        if (trigger) {
            collider.enabled = true;
            yield WaitForSeconds(waitTime);
        }
    }
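
    An alternative that avoids coroutines entirely is InvokeRepeating, which calls a named method on a fixed schedule. A short C# sketch (class and method names are illustrative):

    using UnityEngine;

    public class ColliderBlink : MonoBehaviour
    {
        public float waitTime = 1f;

        void Start()
        {
            // Call Toggle() every waitTime seconds, starting immediately.
            InvokeRepeating(nameof(Toggle), 0f, waitTime);
        }

        void Toggle()
        {
            var col = GetComponent<Collider>();
            col.enabled = !col.enabled;
        }
    }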

    Read the article

  • Segmentation fault on the server, but not local machine

    - by menachem-almog
    As stated in the title, the program is working on my local machine (Ubuntu 9.10) but not on the server (Linux). It's a GoDaddy grid hosting package. Please help. Here is the code:

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        long offset;
        FILE *io;
        unsigned char found;
        unsigned long loc;

        if (argc != 2) {
            printf("syntax: find 0000000\n");
            return 255;
        }

        offset = atol(argv[1]) * (sizeof(unsigned char) + sizeof(unsigned long));

        io = fopen("index.dat", "rb");
        if (io == NULL) {
            /* fopen returns NULL if index.dat cannot be opened (e.g. the file is
               missing or the working directory differs on the server); without
               this check, the fseek below dereferences NULL and segfaults */
            perror("fopen index.dat");
            return 254;
        }

        fseek(io, offset, SEEK_SET);
        fread(&found, sizeof(unsigned char), 1, io);
        fread(&loc, sizeof(unsigned long), 1, io);

        if (found == 1)
            printf("%lu\n", loc);  /* %lu matches unsigned long */
        else
            printf("-1\n");

        fclose(io);
        return 0;
    }

    EDIT: It's not my program. I wish I knew enough C to fix it, but I'm on a deadline. This program is meant to find the first occurrence of a 7-digit number in the digits of pi; index.dat contains a huge array of number = position pairs. http://jclement.ca/fun/pi/search.cgi

    Read the article

  • Weblogic EJB calls start to fail under moderate load with OptionalDataException

    - by MarkoU
    Our system setup consists of two Weblogic 10.3 servers: one hosts the presentation layer and the other hosts the EJBs. The system runs fine under moderate load for some time (one to several days), after which EJB method calls from the presentation server to the EJB server start to fail with the following error:

    java.rmi.RemoteException: java.rmi.UnmarshalException: error unmarshalling arguments; nested exception is:
        java.io.OptionalDataException

    Stack trace:

    java.io.OptionalDataException
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1349)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:351)
        at weblogic.utils.io.ChunkedObjectInputStream.readObject(ChunkedObjectInputStream.java:197)
        at weblogic.rjvm.MsgAbbrevInputStream.readObject(MsgAbbrevInputStream.java:564)
        at weblogic.utils.io.ChunkedObjectInputStream.readObject(ChunkedObjectInputStream.java:193)
        at weblogic.jndi.internal.RootNamingNode_WLSkel.invoke(Unknown Source)
        at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:589)
        at weblogic.rmi.cluster.ClusterableServerRef.invoke(ClusterableServerRef.java:230)
        at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:477)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
        at weblogic.security.service.SecurityManager.runAs(Unknown Source)
        at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:473)
        at weblogic.rmi.internal.wls.WLSExecuteRequest.run(WLSExecuteRequest.java:118)

    Once the first OptionalDataException is encountered, all subsequent calls fail with the same result. Some sources suggest that this might be related to the cluster multicast port being misconfigured; however, these servers do not belong to a cluster. Booting the EJB server always temporarily resolves the issue, but the issue seems to occur again after some time. Any ideas?

    Read the article

  • Put message after long idle time does not work

    - by Sydney
    I wrote a simple Java client using the MQ v7 libraries (no JMS). I try to put a message on a queue using the following pattern: put a message, wait for x minutes, put a message again. It works, but if the idle time is too long (between 5 and 7 minutes), I get the following error:

    MQJE001: An MQException occurred: Completion Code 2, Reason 2195
    MQJE007: IO error reading message data
    Error occured during API call - reason code0
    MQJE001: Completion Code 2, Reason 2009
    MQJE001: An MQException occurred: Completion Code 2, Reason 2009
    MQJE003: IO error transmitting message buffer
    MQJE001: Completion Code 2, Reason 2009
    MQJE001: An MQException occurred: Completion Code 2, Reason 2009
    MQJE003: IO error transmitting message buffer
    MQJE001: Completion Code 2, Reason 2009
    MQJE001: An MQException occurred: Completion Code 2, Reason 2009
    MQJE003: IO error transmitting message buffer
    MQJE001: Completion Code 2, Reason 2009
    MQJE001: An MQException occurred: Completion Code 2, Reason 2009
    MQJE003: IO error transmitting message buffer
    MQJE001: An MQException occurred: Completion Code 2, Reason 2009
    MQJE003: IO error transmitting message buffer
    MQJE001: Completion Code 2, Reason 2009
    An MQSeries error occurred : Completion code 2 Reason code 2009
    com.ibm.mq.MQException: MQJE001: Completion Code 2, Reason 2009
        at com.ibm.mq.MQQueue.put(MQQueue.java:1511)

    After reading several threads on the subject, this error usually creates FDC dumps, but I have nothing in the system and queue manager logs. The channel is a SVRCONN channel.
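
    Reason 2009 is MQRC_CONNECTION_BROKEN, which often means something between client and queue manager (a firewall, or a channel keepalive/disconnect interval) severed the idle connection. Whatever the client library, one common defence is to treat it as a signal to rebuild the connection and retry the put once. A library-agnostic C# sketch of that pattern; all names here are hypothetical, not from any MQ API:

    using System;

    public static class ReconnectingPut
    {
        // connect: builds a fresh connection/session; put: performs the MQPUT equivalent;
        // isConnectionBroken: inspects the thrown exception (e.g. checks for reason code 2009).
        public static void PutWithRetry<TConn>(
            Func<TConn> connect,
            Action<TConn> put,
            Func<Exception, bool> isConnectionBroken) where TConn : IDisposable
        {
            var conn = connect();
            try
            {
                put(conn);
            }
            catch (Exception ex) when (isConnectionBroken(ex))
            {
                // The channel was likely cut during the long idle period;
                // reconnect once and retry the put.
                conn.Dispose();
                conn = connect();
                put(conn);
            }
            finally
            {
                conn.Dispose();
            }
        }
    }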

    Read the article

  • File locked by service (after the service reads the text file)

    - by rvpals
    I have a Windows service written in C# .NET. The service runs on an internal timer; every time the interval elapses, it tries to read a log file into a string. My issue is that every time the log file is read, the service seems to lock it, and the lock persists until I stop the service. At the same time, the same log file needs to be continuously updated by another program; while the lock is held, that program cannot update it. Here is the code I use to read the text log file:

    private string ReadtextFile(string filename)
    {
        string res = "";
        try
        {
            System.IO.FileStream fs = new System.IO.FileStream(filename, System.IO.FileMode.Open, System.IO.FileAccess.Read);
            System.IO.StreamReader sr = new System.IO.StreamReader(fs);
            res = sr.ReadToEnd();
            sr.Close();
            fs.Close();
        }
        catch (System.Exception ex)
        {
            HandleEx(ex);
        }
        return res;
    }

    Thank you.
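
    Two things stand out in that method: the stream is opened without a FileShare value (so writers are shut out while it is open), and if ReadToEnd throws, the Close calls are skipped and the handle leaks until the service stops. A sketch of one way to rewrite it, using using blocks and FileShare.ReadWrite; HandleEx is assumed to be the poster's existing handler:

    private string ReadTextFile(string filename)
    {
        try
        {
            // FileShare.ReadWrite lets the other program keep writing while we read;
            // the using blocks guarantee the handle is released even if the read throws.
            using (var fs = new System.IO.FileStream(filename,
                       System.IO.FileMode.Open, System.IO.FileAccess.Read, System.IO.FileShare.ReadWrite))
            using (var sr = new System.IO.StreamReader(fs))
            {
                return sr.ReadToEnd();
            }
        }
        catch (System.Exception ex)
        {
            HandleEx(ex);
            return "";
        }
    }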

    Read the article

< Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >