Search Results

Search found 6052 results on 243 pages for 'manage'.


  • Is there a way to edit a control's template based on the default template?

    - by Adam S
    I'm trying to make just a few minor tweaks to the default template of a checkbox. I understand how to make a new template from scratch, but not how to start from the default one. I did manage to (I think?) extract the default template via the method here. It spat out:

        <ControlTemplate TargetType="CheckBox"
                         xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                         xmlns:s="clr-namespace:System;assembly=mscorlib"
                         xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                         xmlns:mwt="clr-namespace:Microsoft.Windows.Themes;assembly=PresentationFramework.Luna">
          <BulletDecorator Background="#00FFFFFF" SnapsToDevicePixels="True">
            <BulletDecorator.Bullet>
              <mwt:BulletChrome Background="{TemplateBinding Panel.Background}"
                                BorderBrush="{TemplateBinding Border.BorderBrush}"
                                BorderThickness="{TemplateBinding Border.BorderThickness}"
                                RenderMouseOver="{TemplateBinding UIElement.IsMouseOver}"
                                RenderPressed="{TemplateBinding ButtonBase.IsPressed}"
                                IsChecked="{TemplateBinding ToggleButton.IsChecked}" />
            </BulletDecorator.Bullet>
            <ContentPresenter RecognizesAccessKey="True"
                              Content="{TemplateBinding ContentControl.Content}"
                              ContentTemplate="{TemplateBinding ContentControl.ContentTemplate}"
                              ContentStringFormat="{TemplateBinding ContentControl.ContentStringFormat}"
                              Margin="{TemplateBinding Control.Padding}"
                              HorizontalAlignment="{TemplateBinding Control.HorizontalContentAlignment}"
                              VerticalAlignment="{TemplateBinding Control.VerticalContentAlignment}"
                              SnapsToDevicePixels="{TemplateBinding UIElement.SnapsToDevicePixels}" />
          </BulletDecorator>
          <ControlTemplate.Triggers>
            <Trigger Property="ContentControl.HasContent">
              <Setter Property="FrameworkElement.FocusVisualStyle">
                <Setter.Value>
                  <Style TargetType="IFrameworkInputElement">
                    <Style.Resources>
                      <ResourceDictionary />
                    </Style.Resources>
                    <Setter Property="Control.Template">
                      <Setter.Value>
                        <ControlTemplate>
                          <Rectangle Stroke="#FF000000" StrokeThickness="1" StrokeDashArray="1 2" Margin="14,0,0,0" SnapsToDevicePixels="True" />
                        </ControlTemplate>
                      </Setter.Value>
                    </Setter>
                  </Style>
                </Setter.Value>
              </Setter>
              <Setter Property="Control.Padding">
                <Setter.Value>
                  <Thickness>2,0,0,0</Thickness>
                </Setter.Value>
              </Setter>
              <Trigger.Value>
                <s:Boolean>True</s:Boolean>
              </Trigger.Value>
            </Trigger>
            <Trigger Property="UIElement.IsEnabled">
              <Setter Property="TextElement.Foreground">
                <Setter.Value>
                  <DynamicResource ResourceKey="{x:Static SystemColors.GrayTextBrushKey}" />
                </Setter.Value>
              </Setter>
              <Trigger.Value>
                <s:Boolean>False</s:Boolean>
              </Trigger.Value>
            </Trigger>
          </ControlTemplate.Triggers>
        </ControlTemplate>

    Okay, sure, looks just fine, I guess; I don't have enough experience to know whether that's right or not. But now I get two errors:

        Assembly 'PresentationFramework.Luna' was not found. Verify that you are not missing an assembly reference. Also, verify that your project and all referenced assemblies have been built.

    and

        The type 'mwt:BulletChrome' was not found. Verify that you are not missing an assembly reference and that all referenced assemblies have been built.

    How can I resolve these errors so that I can actually start working on the template? Is there a better way of going about this? My boss wants a three-state checkbox with a red square instead of green, and he won't take no for an answer.
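    (Note on the errors: the dumped template references the Luna theme assembly, PresentationFramework.Luna, and its internal BulletChrome type, so the project needs an explicit reference to that assembly for the dump to compile. A template that skips the theme internals entirely may be simpler. Below is a minimal sketch of a self-contained three-state template with a red square; the element names, sizes and brushes are my own and not from the extracted dump, and IsThreeState="True" still has to be set on the CheckBox itself:)

        <ControlTemplate TargetType="CheckBox"
                         xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                         xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
          <BulletDecorator>
            <BulletDecorator.Bullet>
              <!-- plain Border instead of mwt:BulletChrome, so no theme assembly is needed -->
              <Border Width="13" Height="13" Background="White" BorderBrush="Black" BorderThickness="1">
                <Rectangle x:Name="CheckMark" Margin="2" Fill="Red" Visibility="Collapsed" />
              </Border>
            </BulletDecorator.Bullet>
            <ContentPresenter Margin="4,0,0,0" VerticalAlignment="Center" RecognizesAccessKey="True" />
          </BulletDecorator>
          <ControlTemplate.Triggers>
            <Trigger Property="IsChecked" Value="True">
              <Setter TargetName="CheckMark" Property="Visibility" Value="Visible" />
            </Trigger>
            <!-- the third (indeterminate) state: IsChecked is null -->
            <Trigger Property="IsChecked" Value="{x:Null}">
              <Setter TargetName="CheckMark" Property="Fill" Value="DarkRed" />
              <Setter TargetName="CheckMark" Property="Visibility" Value="Visible" />
            </Trigger>
          </ControlTemplate.Triggers>
        </ControlTemplate>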


  • Creating Persistent Drive Labels With UDEV Using /dev/disk/by-path

    - by Matt
    I have a new BackBlaze Pod (BackBlaze Pod 2.0). It has 45 3TB drives, and when I first set it up they were labeled /dev/sda through /dev/sdz and /dev/sdaa through /dev/sdas. I used mdadm to set up three really big 15-drive RAID6 arrays. However, since the first setup a few weeks ago I have had a couple of the hard drives fail on me. I've replaced them, but now the arrays are complaining because they can't find the missing drives. When I list the disks...

        ls -l /dev/sd*

    I see that /dev/sda /dev/sdf /dev/sdk /dev/sdp no longer appear, and now there are 4 new ones...

        /dev/sdau /dev/sdav /dev/sdaw /dev/sdax

    I also just found that I can do this...

        ls -l /dev/disk/by-path/
        total 0
        lrwxrwxrwx 1 root root 10 Sep 19 18:08 pci-0000:02:04.0-scsi-0:0:0:0 -> ../../sdau
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-0:1:0:0 -> ../../sdb
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-0:2:0:0 -> ../../sdc
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-0:3:0:0 -> ../../sdd
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-0:4:0:0 -> ../../sde
        lrwxrwxrwx 1 root root 10 Sep 19 18:08 pci-0000:02:04.0-scsi-2:0:0:0 -> ../../sdae
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-2:1:0:0 -> ../../sdg
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-2:2:0:0 -> ../../sdh
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-2:3:0:0 -> ../../sdi
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-2:4:0:0 -> ../../sdj
        lrwxrwxrwx 1 root root 10 Sep 19 18:08 pci-0000:02:04.0-scsi-3:0:0:0 -> ../../sdav
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-3:1:0:0 -> ../../sdl
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-3:2:0:0 -> ../../sdm
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-3:3:0:0 -> ../../sdn
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-3:4:0:0 -> ../../sdo
        lrwxrwxrwx 1 root root 10 Sep 19 18:08 pci-0000:04:04.0-scsi-0:0:0:0 -> ../../sdax
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-0:1:0:0 -> ../../sdq
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-0:2:0:0 -> ../../sdr
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-0:3:0:0 -> ../../sds
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-0:4:0:0 -> ../../sdt
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:0:0:0 -> ../../sdu
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:1:0:0 -> ../../sdv
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:2:0:0 -> ../../sdw
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:3:0:0 -> ../../sdx
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:4:0:0 -> ../../sdy
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-3:0:0:0 -> ../../sdz

    I didn't list them all... you can see the problem above. They're sorted by SCSI id here, but sda is missing... replaced by sdau... etc. So obviously the arrays are complaining. Is it possible to get Linux to reread the drive labels in the correct order, or am I screwed? My initial design with 15-drive arrays is not ideal. With 3TB drives the rebuild times were taking 3 or 4 days... maybe more. I'm scrapping the whole design and I think I am going to go with 6 x 7-disk RAID5 arrays and 3 hot spares, to make the arrays a bit easier to manage and to shorten the rebuild times. But I'd like to clean up the drive labels so they aren't out of order. I haven't figured out how to do this yet. Does anyone know how to get this straightened out? Thanks, Matt
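    (A sketch of one way to get persistent names, since /dev/disk/by-path is already stable across reboots: one udev rule per physical slot, keyed on ID_PATH. The rule file name and slot names below are made up, and each slot's ID_PATH has to be filled in from the listing above:)

        # /etc/udev/rules.d/59-backblaze-slots.rules
        # one line per backplane slot; ID_PATH values taken from /dev/disk/by-path
        KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_PATH}=="pci-0000:02:04.0-scsi-0:0:0:0", SYMLINK+="pod/slot01"
        KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_PATH}=="pci-0000:02:04.0-scsi-0:1:0:0", SYMLINK+="pod/slot02"
        # ... and so on for all 45 slots

    After udevadm trigger, the disks appear as /dev/pod/slotNN no matter which sdX letter the kernel hands out, and the new arrays can be built against those names. Note, too, that mdadm identifies array members by superblock UUID rather than device name, so mdadm --assemble --scan should reassemble an array even after the sdX letters shuffle.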


  • Problem trying to achieve a join using the `comments` contrib in Django

    - by NiKo
    Hi, Django rookie here. I have this model; comments are managed with the django_comments contrib:

        class Fortune(models.Model):
            author = models.CharField(max_length=45, blank=False)
            title = models.CharField(max_length=200, blank=False)
            slug = models.SlugField(_('slug'), db_index=True, max_length=255, unique_for_date='pub_date')
            content = models.TextField(blank=False)
            pub_date = models.DateTimeField(_('published date'), db_index=True, default=datetime.now)
            votes = models.IntegerField(default=0)
            comments = generic.GenericRelation(Comment,
                content_type_field='content_type',
                object_id_field='object_pk')

    I want to retrieve Fortune objects with a supplementary nb_comments value for each, counting their respective number of comments; I try this query:

        >>> Fortune.objects.annotate(nb_comments=models.Count('comments'))

    From the shell:

        >>> from django_fortunes.models import Fortune
        >>> from django.db.models import Count
        >>> Fortune.objects.annotate(nb_comments=Count('comments'))
        [<Fortune: My first fortune, from NiKo>, <Fortune: Another One, from Dude>, <Fortune: A funny one, from NiKo>]
        >>> from django.db import connection
        >>> connection.queries.pop()
        {'time': '0.000', 'sql': u'SELECT "django_fortunes_fortune"."id", "django_fortunes_fortune"."author", "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug", "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date", "django_fortunes_fortune"."votes", COUNT("django_comments"."id") AS "nb_comments" FROM "django_fortunes_fortune" LEFT OUTER JOIN "django_comments" ON ("django_fortunes_fortune"."id" = "django_comments"."object_pk") GROUP BY "django_fortunes_fortune"."id", "django_fortunes_fortune"."author", "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug", "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date", "django_fortunes_fortune"."votes" LIMIT 21'}

    Below is the properly formatted SQL query:

        SELECT "django_fortunes_fortune"."id",
               "django_fortunes_fortune"."author",
               "django_fortunes_fortune"."title",
               "django_fortunes_fortune"."slug",
               "django_fortunes_fortune"."content",
               "django_fortunes_fortune"."pub_date",
               "django_fortunes_fortune"."votes",
               COUNT("django_comments"."id") AS "nb_comments"
        FROM "django_fortunes_fortune"
        LEFT OUTER JOIN "django_comments"
            ON ("django_fortunes_fortune"."id" = "django_comments"."object_pk")
        GROUP BY "django_fortunes_fortune"."id",
                 "django_fortunes_fortune"."author",
                 "django_fortunes_fortune"."title",
                 "django_fortunes_fortune"."slug",
                 "django_fortunes_fortune"."content",
                 "django_fortunes_fortune"."pub_date",
                 "django_fortunes_fortune"."votes"
        LIMIT 21

    Can you spot the problem? Django won't LEFT JOIN the django_comments table with a condition on the content type (which contains the reference to the fortune one).
    This is the kind of query I'd like to be able to generate using the ORM:

        SELECT "django_fortunes_fortune"."id",
               "django_fortunes_fortune"."author",
               "django_fortunes_fortune"."title",
               COUNT("django_comments"."id") AS "nb_comments"
        FROM "django_fortunes_fortune"
        LEFT OUTER JOIN "django_comments"
            ON ("django_fortunes_fortune"."id" = "django_comments"."object_pk")
        LEFT OUTER JOIN "django_content_type"
            ON ("django_comments"."content_type_id" = "django_content_type"."id")
        GROUP BY "django_fortunes_fortune"."id",
                 "django_fortunes_fortune"."author",
                 "django_fortunes_fortune"."title",
                 "django_fortunes_fortune"."slug",
                 "django_fortunes_fortune"."content",
                 "django_fortunes_fortune"."pub_date",
                 "django_fortunes_fortune"."votes"
        LIMIT 21

    But I can't manage to do it, so help from Django veterans would be much appreciated :) Hint: I'm using Django 1.2-DEV. Thanks in advance for your help.
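    (A workaround sketch, in case it helps anyone: since the missing piece is the content-type condition, the aggregation can be done from the Comment side, where that filter is easy to express, and joined back in Python. Untested against 1.2, and the import paths are assumptions from the comments contrib as I remember it:)

        from django.contrib.contenttypes.models import ContentType
        from django.contrib.comments.models import Comment
        from django.db.models import Count

        fortune_type = ContentType.objects.get_for_model(Fortune)
        # {object_pk: comment count}, restricted to comments that really target Fortune
        counts = dict(Comment.objects.filter(content_type=fortune_type)
                                     .values_list('object_pk')
                                     .annotate(nb=Count('id')))
        fortunes = list(Fortune.objects.all())
        for fortune in fortunes:
            fortune.nb_comments = counts.get(str(fortune.pk), 0)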


  • Can overlay work with a portion of a page that has code-behind functionality?

    - by rolando
    I have a complete DIV in which I have a gridview and a multiview with code-behind actions:

        <cc3:CRDataSource EnableViewState="true" ID="DsOpciones" runat="server"
            SQLSelect="CartelElectronico,OpcionSeleccion_Todos">
            <Parameters>
                <asp:QueryStringParameter Name="@FactorEvaluacionId" QueryStringField="FactorEvaluacionId" Type="String" />
            </Parameters>
        </cc3:CRDataSource>
        <cc3:CRGridView ID="gvTipoFactorSeleccion" runat="server" DataKeyNames="OpcionSeleccionID"
            AllowPaging="false" AllowSorting="False" AutoGenerateColumns="False"
            Titulo="Opciones de Factor de Evaluacion"
            NoRowsMsg="No se encontraron Opciones para el factor de evaluación"
            CssClass="Table" AllowExport="False" AllowFilter="False" AllowCollapse="True"
            EnableViewState="False" PageSize="1000000000" DataSourceID="DsOpciones"
            OnRowUpdating="gvTipoFactorSeleccion_RowUpdating" OnRowDeleting="gvTipoFactorSeleccion_RowDeleting">
            <Columns>
                <asp:BoundField DataField="Nombre" HeaderText="Nombre" SortExpression="Nombre" ItemStyle-HorizontalAlign="Left" />
                <asp:BoundField DataField="Puntos" HeaderText="Puntos" ItemStyle-HorizontalAlign="Center" />
                <asp:CommandField ButtonType="Link" ShowDeleteButton="true" DeleteText="Eliminar"
                    ShowEditButton="true" EditText="Modificar" UpdateText="Actualizar" CancelText="Cancelar" />
                <asp:TemplateField Visible="false">
                    <ItemTemplate>
                    </ItemTemplate>
                </asp:TemplateField>
            </Columns>
        </cc3:CRGridView>
        <br />
        <table>
            <tr valign="baseline">
                <td class="contratacionTablaSubrayadoTitulosLineas">
                    <label class="contratacionEtiquetas">Opción&nbsp;(*) :</label>
                </td>
                <td class="contratacionTablaSubrayadoContenidoLineas">
                    <asp:TextBox ID="tbOpcion" runat="server" ToolTip="Nombre del Factor de la metodología de evaluación" TabIndex="1" MaxLength="100"></asp:TextBox>
                    <asp:RequiredFieldValidator ID="RequiredFieldValidator13" runat="server" ControlToValidate="tbOpcion"
                        ErrorMessage="&nbsp;Debe digitar un nombre para la Opción" Display="Dynamic" CssClass="inlineError"
                        ValidationGroup="InsertarTipoFactorSeleccion" SetFocusOnError="false"></asp:RequiredFieldValidator>
                </td>
            </tr>
            <tr valign="baseline">
                <td colspan="2" align="right">
                    <asp:LinkButton ID="LinkButton1" runat="server" CssClass="contratacionVinculos"
                        ValidationGroup="InsertarTipoFactorSeleccion" OnClick="lbInsertarTipoFactorSeleccion_Click">Insertar</asp:LinkButton>&nbsp;|&nbsp;
                    <asp:LinkButton ID="LinkButton2" runat="server" OnClick="lbCancelarInsertarTipoFactorSeleccion_Click"
                        CssClass="contratacionVinculos" CausesValidation="false">Cancelar</asp:LinkButton>
        ....

    That's just a portion of the div. My question is: how can I show that DIV in a jQuery overlay without losing its functionality? I'm asking because I managed to get it working, but when I do a "that-div-postback" the screen loses the DIV and keeps only the background of the overlay. A couple more pieces of information:

        <button class="modalInput button" rel="#prompt"> Buscar</button>
        <script type="text/javascript">
            function pageLoad() {
                var triggers = $("button.modalInput").overlay({
                    // some expose tweaks suitable for modal dialogs
                    expose: {
                        color: '#333',
                        loadSpeed: 200,
                        opacity: 0.3,
                        zIndex: 99
                    },
                    top: '25%',
                    closeOnClick: true
                });
            }
        </script>

    Thanks in advance ;)
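    (A sketch of one way to keep the DIV alive, assuming it can be wrapped in an asp:UpdatePanel so its postbacks become partial; the IDs below are made up. With partial postbacks the page's JavaScript context survives and pageLoad() re-runs after every update, so it can re-bind the overlay to the freshly rendered markup and re-open it if it was open before the postback:)

        <asp:UpdatePanel ID="upPrompt" runat="server">
            <ContentTemplate>
                <div id="prompt"> <!-- the gridview/multiview markup from above --> </div>
            </ContentTemplate>
        </asp:UpdatePanel>

        <script type="text/javascript">
            var overlayWasOpen = false;
            function pageLoad() { // called again after each partial postback
                var triggers = $("button.modalInput").overlay({
                    expose: { color: '#333', loadSpeed: 200, opacity: 0.3, zIndex: 99 },
                    top: '25%',
                    closeOnClick: true,
                    onLoad: function () { overlayWasOpen = true; },
                    onClose: function () { overlayWasOpen = false; }
                });
                if (overlayWasOpen) triggers.eq(0).data("overlay").load(); // restore it
            }
        </script>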


  • Can't launch Oneiric x64 instance on Eucalyptus

    - by Bruno Reis
    EDIT: after many hours, I've found out that the problem has nothing to do with Eucalyptus. It looks like the image is buggy. Very, very buggy. More details at the end. I didn't manage to fix it, and I will file a bug. EDIT 2: I managed to fix it, and it apparently works.

    I have a 4-machine cluster running Ubuntu Server Natty (11.04) x64. I've installed "Ubuntu Enterprise Cloud" from the installation CD (then updated it) on each of these machines. The cloud seems to work fine; I have lots of virtual machines running Natty servers on them. Now I'd like to run Oneiric in a virtual machine, but somehow I can't. I downloaded Oneiric's (x64) image from http://cloud-images.ubuntu.com/oneiric/current/, published it (uec-publish-tarball oneiric-server-cloudimg-amd64.tar.gz oneiric-server-cloudimg-amd64) exactly as I did with Natty, then tried to launch an instance (euca-run-instances -n 1 -k my-key -t m1.small -z my-cloud emi-XXXXXXXX) using Oneiric's image, but the instance is not able to boot. With euca-get-console-output I get the following:

        [ 0.461269] VFS: Cannot open root device "sda1" or unknown-block(0,0)
        [ 0.462388] Please append a correct "root=" boot option; here are the available partitions:
        [ 0.463855] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
        [ 0.465331] Pid: 1, comm: swapper Not tainted 3.0.0-13-generic #22-Ubuntu
        [ 0.466526] Call Trace:
        [ 0.466989] [<ffffffff815d3ee5>] panic+0x91/0x194
        [ 0.467860] [<ffffffff81ad1031>] mount_block_root+0xdc/0x18e
        [ 0.468891] [<ffffffff81ad126a>] mount_root+0x54/0x59
        [ 0.469829] [<ffffffff81ad13dc>] prepare_namespace+0x16d/0x1a7
        [ 0.470883] [<ffffffff81ad0d76>] kernel_init+0x140/0x145
        [ 0.471837] [<ffffffff815f38e4>] kernel_thread_helper+0x4/0x10
        [ 0.472889] [<ffffffff81ad0c36>] ? start_kernel+0x3df/0x3df
        [ 0.473884] [<ffffffff815f38e0>] ? gs_change+0x13/0x13

    The filesystem is labeled "cloudimg-rootfs"; inside the image both /etc/fstab and /boot/grub/grub.cfg refer to the image by that label. Everything seems to be correct, yet the kernel says it can't find the root file system. I've spent many hours googling, but nothing came of it. I've asked on #ubuntu-server, but nobody knew what to do. I've asked on #eucalyptus but got no answer at all. Any ideas on why this is happening and how to solve it? Thanks

    EDIT: after many hours, I've found out that the problem has nothing to do with Eucalyptus. It looks like the image is buggy. Very, very buggy. The first problem is that the kernel in the image is a -generic kernel, while I suppose it should be a -virtual one. I chrooted into the image, removed the -generic packages, and replaced them with the -virtual ones. Then I extracted the new kernel (and replaced the original -generic one that came with the tarball) because I need it when I publish and launch an image with Eucalyptus. The problem described above was solved. But then the console started showing this:

        mount: mount point ext4 does not exist

    If you check the /etc/fstab file in the image, it says:

        LABEL=cloudimg-rootfs ext4 defaults 0 1

    Damn, where's my mount point? Note that it is missing /proc as well. Well, when you think it is over, you will notice that your instance has no network connectivity. Let's check /etc/network/interfaces:

        # interfaces(5) file used by ifup(8) and ifdown(8)
        auto lo
        iface lo inet loopback

    Oh my! It is missing eth0... and here I stopped. I can't take any more. I give up. It looks like Canonical just forgot to properly set up this image.
    At first I thought, "have I downloaded a server image by mistake?", but no, I double-checked. It really is the cloud image; it even has "cloud-init" installed (which is not, by default, on server images). They just forgot to prepare it. I will file a bug (and reference it here once this is done), and hope they fix it soon! EDIT 2: it looks like the network configuration was the last thing missing. I decided to test it with the fixes above, and it booted properly! However, I haven't got the slightest idea whether the image is now good to go...
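    (For anyone hitting the same image, a sketch of the repair sequence described above; the package names and file contents are from memory, so treat them as assumptions:)

        # loop-mount the cloud image and chroot into it
        mkdir /mnt/oneiric
        mount -o loop oneiric-server-cloudimg-amd64.img /mnt/oneiric
        chroot /mnt/oneiric

        # 1) swap the -generic kernel for a -virtual one
        apt-get update
        apt-get install linux-virtual
        apt-get remove linux-image-generic linux-generic

        # 2) give fstab its mount points back
        cat > /etc/fstab <<'EOF'
        LABEL=cloudimg-rootfs /     ext4 defaults             0 1
        proc                  /proc proc nodev,noexec,nosuid 0 0
        EOF

        # 3) restore eth0
        cat >> /etc/network/interfaces <<'EOF'

        auto eth0
        iface eth0 inet dhcp
        EOF

        exit
        # then publish the new /boot/vmlinuz-*-virtual kernel along with the image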


  • jQuery image crossfader problem

    - by Dr Casper Black
    Hey! I have an image switcher (fade in/out; it crossfades images automatically, 1 through x) and a pager, but I can't manage to make the image rotation listen to the click action on the pager: when the pager is clicked, the image will NOT jump to the specific image. The problem is in the rotate function: triggerID will hold the "rel" number of the current pager element, which is the equivalent of the image's "list" number, so when the pager is clicked, triggerID will hold the rel number that was clicked... can I use that to display the image? Here is the code for jQuery:

        $(".paging a:first").addClass("active");

        //Rotation
        rotate = function(){
            var triggerID = $active.attr("rel"); //Get number of times to images
            $(".paging a").removeClass('active'); //Remove all active class
            $active.addClass('active'); //Add active class (the $active is declared in the rotateSwitch function)

            //CrossFade Animation
            var $activeImg = $('.image_reel img.active');
            if ( $activeImg.length == 0 ) $activeImg = $('.image_reel img:last');
            var $next = $activeImg.next().length ? $activeImg.next() : $('.image_reel img:first');
            $activeImg.addClass('last-active');
            $next.css({opacity: 0.0})
                .addClass('active')
                .animate({opacity: 1.0}, 500, function() {
                    $activeImg.removeClass('active last-active');
                });
        };

        //Rotation and Timing Event
        rotateSwitch = function(){
            play = setInterval(function(){ //Set timer - this will repeat itself every 3 seconds
                $active = $('.paging a.active').next(); //Move to the next paging
                if ( $active.length === 0) { //If paging reaches the end...
                    $active = $('.paging a:first'); //go back to first
                }
                rotate(); //Trigger the paging and slider function
            }, 3000); //Timer speed in milliseconds (3 seconds)
        };
        rotateSwitch(); //Run function on launch

        //On Click
        $(".paging a").click(function() {
            $active = $(this); //Activate the clicked paging
            //Reset Timer
            clearInterval(play); //Stop the rotation
            rotate(); //Trigger rotation immediately
            rotateSwitch(); // Resume rotation timer
            return false; //Prevent browser jump to link anchor
        });

    The HTML code:

        <div class="image_reel">
            <img src="images/slideshow/img1.jpg" alt="image 1" class="active">
            <img src="images/slideshow/img2.jpg" alt="image 2">
            <img src="images/slideshow/img3.jpg" alt="image 3">
            <img src="images/slideshow/img4.jpg" alt="image 4">
        </div>
        <div class="paging">
            <a href="#" rel="1" title="image 1">&nbsp;</a>
            <a href="#" rel="2" title="image 2">&nbsp;</a>
            <a href="#" rel="3" title="image 3">&nbsp;</a>
            <a href="#" rel="4" title="image 4">&nbsp;</a>
        </div>

    Please help.
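    (Yes, rel can drive the jump; a sketch of rotate() that picks the target image from triggerID instead of always advancing from the current one. The timed rotation keeps working, because rotateSwitch() still moves $active through the pager links before calling rotate():)

        rotate = function(){
            var triggerID = parseInt($active.attr("rel"), 10); // 1-based pager index
            $(".paging a").removeClass('active');
            $active.addClass('active');

            var $activeImg = $('.image_reel img.active');
            if ( $activeImg.length == 0 ) $activeImg = $('.image_reel img:last');
            var $next = $('.image_reel img').eq(triggerID - 1); // jump straight to the clicked image
            if ( $next.get(0) === $activeImg.get(0) ) return;   // already showing it

            $activeImg.addClass('last-active');
            $next.css({opacity: 0.0})
                .addClass('active')
                .animate({opacity: 1.0}, 500, function() {
                    $activeImg.removeClass('active last-active');
                });
        };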


  • Issue with SPI (Serial Peripheral Interface), stuck on ioctl()

    - by stef
    I'm trying to access an SPI sensor using the SPIDEV driver, but my code gets stuck on ioctl(). I'm running embedded Linux on the SAM9X5EK (mounting an AT91SAM9G25). The device is connected to SPI0. I enabled CONFIG_SPI_SPIDEV and CONFIG_SPI_ATMEL in menuconfig and added the proper code to the BSP file:

        static struct spi_board_info spidev_board_info[] = {
            {
                .modalias     = "spidev",
                .max_speed_hz = 1000000,
                .bus_num      = 0,
                .chip_select  = 0,
                .mode         = SPI_MODE_3,
            },
            ...
        };
        spi_register_board_info(spidev_board_info, ARRAY_SIZE(spidev_board_info));

    1MHz is the maximum accepted by the sensor. I tried 500kHz, but I get an error during Linux boot (too slow, apparently). .bus_num and .chip_select should be correct (I also tried all other combinations). SPI_MODE_3: I checked the datasheet for it. I get no error while booting, and the devices appear correctly as /dev/spidevX.X. I manage to open the file and obtain a valid file descriptor. I'm now trying to access the device with the following code (inspired by examples I found online).

        #define MY_SPIDEV_DELAY_USECS 100
        // #define MY_SPIDEV_SPEED_HZ 1000000
        #define MY_SPIDEV_BITS_PER_WORD 8

        int spidevReadRegister(int fd,
                               unsigned int num_out_bytes, unsigned char *out_buffer,
                               unsigned int num_in_bytes, unsigned char *in_buffer)
        {
            struct spi_ioc_transfer mesg[2] = { {0}, };
            uint8_t num_tr = 0;
            int ret;

            // Write data
            mesg[0].tx_buf = (unsigned long)out_buffer;
            mesg[0].rx_buf = (unsigned long)NULL;
            mesg[0].len = num_out_bytes;
            // mesg[0].delay_usecs = MY_SPIDEV_DELAY_USECS;
            // mesg[0].speed_hz = MY_SPIDEV_SPEED_HZ;
            mesg[0].bits_per_word = MY_SPIDEV_BITS_PER_WORD;
            mesg[0].cs_change = 0;
            num_tr++;

            // Read data
            mesg[1].tx_buf = (unsigned long)NULL;
            mesg[1].rx_buf = (unsigned long)in_buffer;
            mesg[1].len = num_in_bytes;
            // mesg[1].delay_usecs = MY_SPIDEV_DELAY_USECS;
            // mesg[1].speed_hz = MY_SPIDEV_SPEED_HZ;
            mesg[1].bits_per_word = MY_SPIDEV_BITS_PER_WORD;
            mesg[1].cs_change = 1;
            num_tr++;

            // Do the actual transmission
            if(num_tr > 0) {
                ret = ioctl(fd, SPI_IOC_MESSAGE(num_tr), mesg);
                if(ret == -1) {
                    printf("Error: %d\n", errno);
                    return -1;
                }
            }
            return 0;
        }

    Then I'm using this function:

        #define OPTICAL_SENSOR_ADDR "/dev/spidev0.0"
        ...
        int fd;
        fd = open(OPTICAL_SENSOR_ADDR, O_RDWR);
        if (fd <= 0) {
            printf("Device not found\n");
            exit(1);
        }
        uint8_t buffer1[1] = {0x3a};
        uint8_t buffer2[1] = {0};
        spidevReadRegister(fd, 1, buffer1, 1, buffer2);

    When I run it, the code gets stuck on ioctl()! I did it this way because, in order to read a register on the sensor, I need to send a byte with its address in it and then get the answer back without changing CS (however, when I tried using the write() and read() functions while learning, I got the same result: stuck on them). I'm aware that specifying .speed_hz causes an ENOPROTOOPT error on Atmel (I checked spidev.c), so I commented that part out. Why does it get stuck? I thought it could be that the device is created but doesn't actually "feel" any hardware. As I wasn't sure whether hardware SPI0 corresponded to bus_num 0 or 1, I tried both, but still no success (by the way, which one is it?). UPDATE: I managed to get the SPI working! Half of it... MOSI is transmitting the right data, but CLK doesn't start... any idea?
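    (One more thing worth trying while debugging; a sketch that keeps the register address 0x3a from above. Many sensors expect the address byte and the reply inside a single chip-select assertion, which one full-duplex transfer guarantees without relying on cs_change semantics:)

        /* one full-duplex transfer: tx[0] carries the register address,
           the reply is clocked into rx[1] while tx[1] sends a dummy byte */
        uint8_t tx[2] = { 0x3a, 0x00 };
        uint8_t rx[2] = { 0 };
        struct spi_ioc_transfer tr;
        memset(&tr, 0, sizeof(tr));
        tr.tx_buf = (unsigned long)tx;
        tr.rx_buf = (unsigned long)rx;
        tr.len = 2;
        tr.bits_per_word = 8;
        if (ioctl(fd, SPI_IOC_MESSAGE(1), &tr) < 0)
            perror("SPI_IOC_MESSAGE");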


  • How to structure a Kohana MVC application with dynamically added fields and provide validation and f

    - by Matt H
    I've got a bit of a problem. I have a Kohana application that has dynamically added fields. The fields that are added are called DISA numbers. In the model I look these up, and the result is returned as an array. I encode the array into a JSON string and use jQuery to populate them. The view knows the length of the array and so creates as many DISA elements as required before display. See the code below for a summary of how that works.

    What I'm finding is that this is starting to get difficult to manage. The code is becoming messy. Error handling of this type of dynamic content is ending up being spread all over the place. Not only that, it doesn't work how I want. What you see here is just a small snippet of code.

    For error handling I am using the validation library. I started by using add_rules on all the fields that come back in the post. As they are always phone numbers, I set a required rule (when it's there) and a digit rule on the validation->as_array() keys. That works. The difficulty is actually giving it back to the view: the dynamically added JavaScript fields submit back to the form, I save the contents into a session, the view has to load up fields from the database plus those from the previous post, and it has to signal the fields that have problems. It's all quite messy, and I'm getting this code spread through the view, the controller and the model. So my question is: have you done this before in Kohana, and how have you handled it? There must be an easier way, right?

    Code snippet:

        -- edit.php --
        public function phone($id){
            ...
            $this->template->content->disa_numbers = $phones->fetch_disa_numbers($this->account, $id);
            ...
        }

        -- phones.php --
        public function fetch_disa_numbers($account, $id)
        {
            $query = $this->db->query("SELECT id, cid_in FROM disa WHERE owner_ext=?", array($id));
            if (!$query){
                return '';
            }
            return $query;
        }

        -- edit_phones.php ---
        <script type="text/javascript">
            var disaId = 1;
            function delDisaNumber(element){
                /* Put 'X_' on the front of the element name to mark this for deletion */
                $(element).prev().attr('name', 'X_'+$(element).prev().attr('name'));
                $(element).parent().hide();
            }
            function addDisaNumber(){
                /* input name is prepended with 'N_' which means new */
                $("#disa_numbers").append("<li><input name='N_disa"+disaId+"' id='disa'"+
                    "type='text'/><a class='hide' onClick='delDisaNumber(this)'></a></li>");
                disaId++;
            }
        </script>
        ...
        <?php
        echo form::open("edit/saveDisaNumbers/".$phone, array("class"=>"section", "id"=>"disa_form"));
        echo form::open_fieldset(array("class"=>"balanced-grid"));
        ?>
        <ul class="fields" id="disa_numbers">
        <?php
        $disaId = 1;
        foreach ( $disa_numbers as $disa_number ){
            echo '<li>';
            echo form::input('disa'.$disaId, $disa_number->cid_in);
            echo '<a class="hide" onclick="delDisaNumber(this)"></a>';
            echo "</li>";
            $disaId++;
        }
        ?>
        </ul>
        <button type="button" onclick="addDisaNumber()"><a class="add"></a>Add number</button>
        <?php
        echo form::submit('submit', 'Save');
        echo form::close();
        ?>
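    (One way to stop the handling from spreading; a sketch against Kohana 2.x's validation library, where the rule names are the stock ones and the N_/X_ prefix convention is the one from the snippet above. The controller attaches rules in a single loop, so it neither knows nor cares how many fields the view added:)

        // in the controller action that receives the post
        $post = Validation::factory($_POST)->pre_filter('trim');
        foreach ($_POST as $field => $value) {
            // matches disa1, N_disa2 (new) and X_disa3 (marked for deletion)
            if (preg_match('/^(?:N_|X_)?disa\d+$/', $field)) {
                $post->add_rules($field, 'required', 'digit');
            }
        }
        if ($post->validate()) {
            // save, using the N_/X_ prefixes to decide insert vs delete
        } else {
            // stash the values and errors, then redraw the view from them
            $this->session->set('disa_values', $post->as_array());
            $this->session->set('disa_errors', $post->errors());
            url::redirect('edit/phone/'.$id);
        }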


  • How can I get my Web API app to run again after upgrading to MVC 5 and Web API 2?

    - by Clay Shannon
    I upgraded my Web API app to the brand-new versions using this guidance: http://www.asp.net/mvc/tutorials/mvc-5/how-to-upgrade-an-aspnet-mvc-4-and-web-api-project-to-aspnet-mvc-5-and-web-api-2

    However, after going through the steps (it seems all this should be automated, anyway), I tried to run it and got, "A project with an Output Type of Class Library cannot be started directly". What in Sam Hill's Brothers Coffee is going on here? Who said this was a class library? So I opened Project Properties and changed it (it was marked as "Class Library" for some reason - it either wasn't yesterday, or was and worked fine) to an Output Type of "Windows Application" ("Console Application" and "Class Library" being the only other options). Now it won't compile, complaining:

        Program 'c:\Platypus_Server_WebAPI\PlatypusServerWebAPI\PlatypusServerWebAPI\obj\Debug\PlatypusServerWebAPI.exe' does not contain a static 'Main' method suitable for an entry point...

    How can I get my Web API app back up and running in view of this quandary?

    UPDATE: Looking in packages.config, two entries seem chin-scratch-worthy:

        <package id="Microsoft.AspNet.Providers" version="1.2" targetFramework="net40" />
        <package id="Microsoft.Web.Infrastructure" version="1.0.0.0" targetFramework="net40" />

    All the others target net451. Could this be the problem? Should I remove these packages?

    UPDATE 2: I tried to uninstall the Microsoft.Web.Infrastructure package (its description leads me to believe I don't need it; also, it has no dependencies) via the NuGet package manager, but it tells me, "NuGet failed to install or uninstall the selected package in the following project(s). [mine]"

    UPDATE 3: I went through the steps again, and found that I had missed one. I had to change this entry in the application web.config file:

        <dependentAssembly>
            <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" />
            <bindingRedirect oldVersion="1.0.0.0-5.0.0.0" newVersion="5.0.0.0" />
        </dependentAssembly>

    (from "4.0.0.0" to "5.0.0.0") ...but I still get the same result: it rebuilds/compiles, but won't run, with "A project with an Output Type of Class Library cannot be started directly".

    UPDATE 4: Thinking about the error message, that it can't directly open a class library, I thought, "Sure you can't/won't -- this is a web app, not a project." So I followed a hunch, closed the project, and reopened it as a website (instead of reopening it as a project). That got me further, at least; now I see a YSOD:

        Could not load file or assembly 'System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

    UPDATE 5: Note: the project is now (after being opened as a web site) named "localhost_48614". And... there is no longer a "References" folder...?!?!? As to that YSOD I'm getting, the official instructions (the URL above) said to do this, and I quote: "Update all elements that contain 'System.Web.WebPages.Razor' from version '2.0.0.0' to version '3.0.0.0'."

    UPDATE 6: When I select Tools > Library Package Manager > Manage NuGet Packages for Solution now, I get, "Operation failed. Unable to locate the solution directory. Please ensure that the solution has been saved."
    So I save it, and it saves it with this funky new name (C:\Users\clay\Documents\Visual Studio 2013\Projects\localhost_48614\localhost_48614.sln). I get the Yellow Strip of Enlightenment across the top of the NuGet Package Manager telling me, "Some NuGet packages are missing from this solution. Click to restore from your online package sources." I do (click the "Restore" button, that is), and it downloads the missing packages... I end up with the 30 packages. I try to run the app/site again, and... the erstwhile YSOD becomes a compilation error:

        The pre-application start initialization method Start on type System.Web.Mvc.PreApplicationStartCode threw an exception with the following error message: Could not load file or assembly 'System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

    Argghhhh!!! (and it's not even talk-like-a-pirate day).
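    (A sketch of the usual fix for that last YSOD, for anyone stuck at the same spot: the runtime is asking for System.Web.WebPages.Razor 3.0.0.0, so web.config needs a binding redirect for it alongside the System.Web.Mvc one, and the Microsoft.AspNet.WebPages 3.x package has to actually be restored. The version range below is my assumption, mirroring the MVC redirect:)

        <dependentAssembly>
            <assemblyIdentity name="System.Web.WebPages.Razor" publicKeyToken="31bf3856ad364e35" />
            <bindingRedirect oldVersion="0.0.0.0-3.0.0.0" newVersion="3.0.0.0" />
        </dependentAssembly>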


  • Getting the constructor of an Interface Type through reflection, is there a better approach than loo

    - by Will Marcouiller
    I have written a generic type, IDirectorySource<T> where T : IDirectoryEntry, which I'm using to manage Active Directory entries through my interface objects: IGroup, IOrganizationalUnit, IUser. So I can write the following:

        IDirectorySource<IGroup> groups = new DirectorySource<IGroup>(); // Where IGroup implements IDirectoryEntry, of course.

        foreach (IGroup g in groups.ToList()) {
            listView1.Items.Add(g.Name).SubItems.Add(g.Description);
        }

    From the IDirectorySource<T>.ToList() method, I use reflection to find the appropriate constructor for the type parameter T. However, since T is given an interface type, it cannot find any constructor at all! Of course, I have an internal class Group : IGroup which implements the IGroup interface. No matter how hard I have tried, I can't figure out how to get the constructor out of my interface through my implementing class.

        [DirectorySchemaAttribute("group")]
        public interface IGroup {
        }

        internal class Group : IGroup {
            internal Group(DirectoryEntry entry) {
                NativeEntry = entry;
                Domain = NativeEntry.Path;
            }
            // Implementing IGroup interface...
        }

    Within the ToList() method of my IDirectorySource<T> implementation, I look for the constructor of T as follows:

        internal class DirectorySource<T> : IDirectorySource<T> {
            // Implementing properties...
            // Methods implementations...

            public IList<T> ToList() {
                Type t = typeof(T);
                // Let's assume we're always working with the IGroup interface as T here to keep it simple.
                // So, my `DirectorySchema` property is already set to "group".
                // My `DirectorySearcher` is already instantiated here, as I do it within the DirectorySource<T> constructor.
                Searcher.Filter = string.Format("(&(objectClass={0}))", DirectorySchema);

                ConstructorInfo ctor = null;
                ParameterInfo[] ctorParams = null;

                // This is where I get stuck for now... Please see the helper method.
                GetConstructor(out ctor, out ctorParams, new Type[] { typeof(DirectoryEntry) });

                var entities = new List<T>();
                SearchResultCollection results = null;
                try {
                    results = Searcher.FindAll();
                } catch (DirectoryServicesCOMException ex) {
                    // Handling exception here...
                }

                foreach (SearchResult entry in results)
                    entities.Add((T)ctor.Invoke(new object[] { entry.GetDirectoryEntry() }));

                return entities;
            }

            private void GetConstructor(out ConstructorInfo constructor, out ParameterInfo[] parameters, Type[] paramsTypes) {
                Type t = typeof(T);
                ConstructorInfo[] ctors = t.GetConstructors(BindingFlags.CreateInstance | BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.InvokeMethod);

                constructor = null;
                parameters = null;

                foreach (ConstructorInfo c in ctors) {
                    ParameterInfo[] candidate = c.GetParameters();
                    if (candidate.GetLength(0) == paramsTypes.GetLength(0)) {
                        bool found = true;
                        for (int index = 0; index < candidate.GetLength(0); ++index) {
                            if (candidate[index].ParameterType != paramsTypes[index])
                                found = false;
                        }
                        if (found) {
                            constructor = c;
                            parameters = candidate;
                            return;
                        }
                    }
                }
                // Processing constructor not found message here...
            }
        }

    My problem is that T will always be an interface, so it never finds a constructor. Is there a better way than looping through all of my assembly types for implementations of my interface? I don't care about rewriting a piece of my code; I want to do it right in the first place so that I won't need to come back again and again and again.

    EDIT #1: Following Sam's advice, I will for now go with the IName and Name convention. However, is it just me, or is there some way to improve my code? Thanks! =)
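    (For the record, the alternative I'd sketch instead of scanning the whole assembly: register each interface-to-implementation pair once, then resolve the constructor from the concrete type. The dictionary and method names below are invented; GetConstructor here is the standard System.Reflection overload:)

        // interface -> concrete type, declared once next to the implementations
        private static readonly Dictionary<Type, Type> Implementations = new Dictionary<Type, Type> {
            { typeof(IGroup), typeof(Group) },
            { typeof(IUser), typeof(User) },
            { typeof(IOrganizationalUnit), typeof(OrganizationalUnit) },
        };

        private static T CreateEntity<T>(DirectoryEntry entry) {
            Type impl = Implementations[typeof(T)]; // concrete type behind the interface
            ConstructorInfo ctor = impl.GetConstructor(
                BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,
                null, new Type[] { typeof(DirectoryEntry) }, null);
            return (T)ctor.Invoke(new object[] { entry });
        }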


  • Architecture for Qt SIGNAL with subclass-specific, templated argument type

    - by Barry Wark
    I am developing a scientific data acquisition application using Qt. Since I'm not a deep expert in Qt, I'd like some architecture advice from the community on the following problem: the application supports several hardware acquisition interfaces, but I would like to provide a common API on top of those interfaces. Each interface has a sample data type and a unit for its data, so I'm representing a vector of samples from each device as a std::vector of Boost.Units quantities (i.e. std::vector<boost::units::quantity<unit,sample_type> >). I'd like to use a multi-cast style architecture, where each data source broadcasts newly received data to 1 or more interested parties. Qt's signal/slot mechanism is an obvious fit for this style. So I'd like each data source to emit a signal like

        typedef std::vector<boost::units::quantity<unit,sample_type> > SampleVector;

        signals:
            void samplesAcquired(SampleVector sampleVector);

    for the unit and sample_type appropriate for that device. Since templated QObject subclasses aren't supported by the meta-object compiler, there doesn't seem to be a way to have a (templated) base class for all data sources which defines the samplesAcquired signal. In other words, the following won't work:

        template<typename T, typename U> //sample type and units
        class DataSource : public QObject {
            Q_OBJECT
            ...
        public:
            typedef std::vector<boost::units::quantity<U,T> > SampleVector;
        signals:
            void samplesAcquired(SampleVector sampleVector);
        };

    The best option I've been able to come up with is a two-layered approach:

        template<typename T, typename U> //sample type and units
        class IAcquiredSamples {
        public:
            typedef std::vector<boost::units::quantity<U,T> > SampleVector;
            virtual shared_ptr<SampleVector> acquiredData(TimeStamp ts, unsigned long nsamples);
        };

        class DataSource : public QObject {
            ...
        signals:
            void samplesAcquired(TimeStamp ts, unsigned long nsamples);
        };

    The samplesAcquired signal now gives a timestamp and number of samples for the acquisition, and clients must use the IAcquiredSamples API to retrieve those samples. Obviously data sources must subclass both DataSource and IAcquiredSamples. The disadvantage of this approach appears to be a loss of simplicity in the API... it would be much nicer if clients could get the acquired samples in the connected slot. Being able to use Qt's queued connections would also make threading issues easier, instead of having to manage them in the acquiredData method within each subclass.

    One other possibility is to use a QVariant argument. This necessarily puts the onus on subclasses to register their particular sample vector type with Q_REGISTER_METATYPE/qRegisterMetaType. Not really a big deal. Clients of the base class, however, will have no way of knowing what the QVariant's value type is unless a tag struct is also passed with the signal. I consider this solution at least as convoluted as the one above, as it forces clients of the abstract base class API to deal with some of the gnarlier aspects of the type system.

    So, is there a way to achieve the templated signal parameter? Is there a better architecture than the one I've proposed?
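    (A sketch of the QVariant route, for concreteness; the class names are mine. The trick is that a template class may inherit a QObject-derived base as long as the template itself declares no new signals or slots, so the one type-erased signal lives in the base and the typed layer only packs and unpacks:)

        // non-templated base: one type-erased signal, usable with queued connections
        class DataSource : public QObject {
            Q_OBJECT
        signals:
            void samplesAcquired(QVariant samples); // holds the concrete SampleVector
        };

        // typed layer: no Q_OBJECT, no new signals
        template<typename T, typename U>
        class TypedDataSource : public DataSource {
        public:
            typedef std::vector<boost::units::quantity<U, T> > SampleVector;
            void emitSamples(const SampleVector &v) {
                // requires Q_DECLARE_METATYPE(SampleVector) and, for queued
                // connections, qRegisterMetaType<SampleVector>() in the subclass
                emit samplesAcquired(QVariant::fromValue(v));
            }
        };

        // receiving slot side: QVariant::value<V>() recovers the typed vector, e.g.
        // void MySink::onSamples(QVariant v) { SampleVector s = v.value<SampleVector>(); }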


  • FullCalendar displaying last week's events below this week's events

    - by Brian
    I am developing an application to manage an on-call calendar, using FullCalendar to render it. Most events are 1 week long, starting at 8:00 AM Tuesday and ending the following Tuesday at 8:00 AM. Another event, presumably with a different person on call, will follow that event. During a hallway usability test, someone commented that the month calendar view was difficult to read because the previous week's event is not at the top of the stack, instead rendering below the event that starts during that week. When viewing it, the eye assumes it should drop down one line to follow the remaining timeline, because the event from last week is there, instead of moving down to the following week.

    I investigated what I believe to be the problem:

        function segCmp(a, b) {
            return (b.msLength - a.msLength) * 100 + (a.event.start - b.event.start);
        }

    sorts the events for a row, but uses the length of the event in the calculation. Since the current week's event will have a longer duration, it always gets sorted to the top. To test, I changed the start dates to Wednesday so the durations are closer. This caused the events to render how I would expect, with last week's events at the top and this week's at the bottom. I thought that if one of the events in the comparison doesn't start that week, then they should only be compared based on start times. I modified the function to be:

        function segCmp(a, b) {
            if (a.isStart == false || b.isStart == false) {
                return (a.event.start - b.event.start);
            }
            return (b.msLength - a.msLength) * 100 + (a.event.start - b.event.start);
        }

    This solved my problem, and the rendering now looks good - and passes the hallway test. What I don't know is whether this would have an impact in other areas. I have taken a look at the other views (month, week, day) and they all seem to be rendering properly as well. I am just not familiar enough with FullCalendar to file a bug or feature request on this, or to know whether this would even be considered a bug. I am wondering if what I modified is correct, or if not, what a better modification would be to fix this issue. Thanks!
    Below are the JSON results for what should be displayed:

        [{"title":"Person 1 - OnCall (OSS On Call)","id":12,"allDay":false,"start":"2010-11-30T15:00:00.0000000Z","end":"2010-12-07T15:00:00.0000000Z","editable":false,"className":"fc-event-title-calendar","url":"/TimeManagement/Edit/12"},
         {"title":"Person 2 - OnCall (OSS On Call)","id":13,"allDay":false,"start":"2010-12-07T15:00:00.0000000Z","end":"2010-12-14T15:00:00.0000000Z","editable":false,"className":"fc-event-title-calendar","url":"/TimeManagement/Edit/13"},
         {"title":"Person 3 - OnCall (OSS On Call)","id":14,"allDay":false,"start":"2010-12-14T15:00:00.0000000Z","end":"2010-12-21T15:00:00.0000000Z","editable":false,"className":"fc-event-title-calendar","url":"/TimeManagement/Edit/14"},
         {"title":"Person 4 - OnCall (OSS On Call)","id":15,"allDay":false,"start":"2010-12-21T15:00:00.0000000Z","end":"2010-12-28T15:00:00.0000000Z","editable":false,"className":"fc-event-title-calendar","url":"/TimeManagement/Edit/15"},
         {"title":"Person 5 - OnCall (OSS On Call)","id":16,"allDay":false,"start":"2010-12-28T15:00:00.0000000Z","end":"2011-01-04T15:00:00.0000000Z","editable":false,"className":"fc-event-title-calendar","url":"/TimeManagement/Edit/16"},
         {"title":"Person 6 - OnCall (OSS On Call)","id":17,"allDay":false,"start":"2011-01-04T15:00:00.0000000Z","end":"2011-01-11T15:00:00.0000000Z","editable":false,"className":"fc-event-title-calendar","url":"/TimeManagement/Edit/17"},
         {"title":"Christmas","id":2,"allDay":true,"start":"2010-12-25T07:00:00.0000000Z","end":null,"editable":false,"className":"fc-event-title-calendar","url":"/TimeManagement/Edit/2"},
         {"title":"New Years Eve","id":3,"allDay":true,"start":"2010-12-31T07:00:00.0000000Z","end":null,"editable":false,"className":"fc-event-title-calendar","url":"/TimeManagement/Edit/3"},
         {"title":"New Years Day","id":4,"allDay":true,"start":"2011-01-01T07:00:00.0000000Z","end":null,"editable":false,"className":"fc-event-title-calendar","url":"/TimeManagement/Edit/4"}]


  • Is it possible to embed Cockburn style textual UML Use Case content in the code base to improve code

    - by fooledbyprimes
    Experimenting with Cockburn use cases in code: I was writing some complicated UI code, and I decided to employ Cockburn use cases with fish, kite, and sea levels (discussed by Martin Fowler in his book 'UML Distilled'). I wrapped the Cockburn use cases in static C# objects so that I could test logical conditions against static constants which represented steps in a UI workflow. The idea was that you could read the code and know what it was doing, because the wrapped objects and their public constants gave you ENGLISH use cases via namespaces. Also, I was going to use reflection to pump out error messages that included the described use cases, so that the stack trace could include some UI use-case steps IN ENGLISH... It turned out to be a fun way to achieve a mini, pseudo, lightweight domain language without having to write a DSL compiler. So my question is whether this is a good way to do this. Has anyone out there ever done something similar?

    C# example snippets follow. Assume we have some aspx page which has 3 user controls (with lots of clickable stuff). The user must click on stuff in one particular user control (possibly making some kind of selection), and then the UI must visually cue the user that the selection was successful. Now, while that item is selected, the user must browse through a gridview to find an item within one of the other user controls and then select something. This sounds like an easy thing to manage, but the code can get ugly. In my case, the user controls all sent event messages which were captured by the main page. This way, the page acted like a central processor of UI events and could keep track of what happens when the user is clicking around. So, in the main aspx page, we capture the first user control's event:

        using MyCompany.MyApp.Web.UseCases;

        protected void MyFirstUserControl_SomeUIWorkflowRequestCommingIn(object sender, EventArgs e)
        {
            // some code here to respond and make "state" changes or whatever
            //
            // blah blah blah
            // finally we have this (how did we know to call the fish-level method?
            // because we knew when we wrote the code to send the event in the user control)
            UpdateUserInterfaceOnFishLevelGoalSuccess(FishLevel.SomeNamedUIWorkflow.SelectedItemForPurchase);
        }

        protected void UpdateUserInterfaceOnFishLevelGoalSuccess(FishLevel.SomeNamedUIWorkflow goal)
        {
            switch (goal)
            {
                case FishLevel.SomeNamedUIWorkflow.NewMasterItemSelected:
                    //call some UI related methods here including methods for the other user controls if necessary....
                    break;
                case FishLevel.SomeNamedUIWorkflow.DrillDownOnDetails:
                    //call some UI related methods here including methods for the other user controls if necessary....
                    break;
                case FishLevel.SomeNamedUIWorkflow.CancelMultiSelect:
                    //call some UI related methods here including methods for the other user controls if necessary....
                    break;
                // more cases...
            }
        }

        //also we have
        protected void UpdateUserInterfaceOnSeaLevelGoalSuccess(SeaLevel.CheckOutWorkflow goal)
        {
            switch (goal)
            {
                case SeaLevel.CheckOutWorkflow.ChangedCreditCard:
                    // do stuff
                    break;
                // more cases...
            }
        }

    So, in the MyCompany.MyApp.Web.UseCases namespace we might have code like this:

        class SeaLevel...
        class FishLevel...
        class KiteLevel...

    The workflow use cases embedded in the classes could be inner classes, static methods, enumerations, or whatever gives you the cleanest namespace. I can't remember what I did originally, but you get the picture.
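    (To make that last sketch concrete; this is my own guess at the shape, not the original code. The levels can be static classes whose nested enums name the workflow steps, so call sites read as English via the namespace chain:)

        namespace MyCompany.MyApp.Web.UseCases
        {
            // fish level: subfunction goals, e.g. an individual UI selection
            public static class FishLevel
            {
                public enum SomeNamedUIWorkflow
                {
                    SelectedItemForPurchase,
                    NewMasterItemSelected,
                    DrillDownOnDetails,
                    CancelMultiSelect,
                }
            }

            // sea level: user goals, e.g. a completed checkout
            public static class SeaLevel
            {
                public enum CheckOutWorkflow
                {
                    ChangedCreditCard,
                    SubmittedOrder,
                }
            }
        }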


  • HTML overflow:hidden doesn't format text correctly

    - by Rens Groenveld
    I'm working on a website for an American football team. They have these news items on their front page which they can manage through a CMS system. I have a problem with aligning the text inside those news items. Two of the news items look like this: as you can see, the right news item's text is displayed nicely, but the left one cuts it off really badly; you can only see the top half of the text on the last line. I use overflow: hidden; to make sure the text doesn't make the div or news item bigger. Does anyone have any idea how to solve this through HTML and CSS, or should I cut it off server-side with PHP? Here's my code (HTML):

        <div class="newsitem">
            <div class="titlemessagewrapper">
                <h2 class="titel" align="center"><?php echo $row['homepagetitel']; ?></h2>
                <div class="newsbericht">
                    <?php echo $row['homepagebericht']; ?>
                </div>
            </div>
            <div class="newsfooter">
                <span class="footer_author"><a href=""><?php echo get_gebruikersnaam_by_id($row['poster_id']); ?></a></span>
                <span class="footer_comment"><a href="">Comments <span>todo</span></a></span>
                <a href="" class="footer_leesmeer">Lees meer</a>
            </div>
        </div>

    And here is the CSS:

        .newsitem{
            float: left;
            height: 375px;
            width: 296px;
            margin: 20px 20px 0px 20px;
            background-color: #F5F5F5;
        }
        .newsitem .titel{
            color:#132055;
            font-size:1.2em;
            line-height:1.3em;
            font-weight:bold;
            margin:10px 5px 5px 5px;
            padding:0 0 6px 0;
            border-bottom:1px dashed #9c0001;
        }
        .titlemessagewrapper{
            height: 335px;
            overflow: hidden;
        }
        .newsitem .newsbericht{
            padding:5px 5px 5px 5px;
            font-size: 0.8em;
            margin-bottom: 5px;
        }
        .newsitem .newsfooter{
            width: 100%;
            height: 25px;
            background-color: #132055;
            margin: 0px auto;
            font-size: 0.8em;
            padding-top: 5px;
            margin-top: 10px;
            border: 1px solid #9c0001;
        }
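    (A CSS-only sketch of one fix: move the clipping onto the text block itself and give it a max-height that is an exact multiple of its line-height, so the cut always lands on a line boundary instead of through a line. The numbers below are illustrative and would need tuning against the real title height:)

        .newsitem .newsbericht{
            padding: 5px;
            font-size: 0.8em;
            line-height: 1.5em;   /* fixed, known line height */
            max-height: 18em;     /* 12 lines x 1.5em: a whole number of lines */
            overflow: hidden;     /* clip here, on a line boundary */
        }

    (Server-side truncation in PHP remains the fallback if the markup inside homepagebericht varies too much for a fixed line height.)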


  • Nginx no longer serves uwsgi application behind HAProxy - Looks for static file instead

    - by Ralph
    We implemented our web application using web2py. It consists of several modules offering a REST API at various resources (e.g. /dids, /replicas, ...). The API is used by clients implemented with requests.py. My problem is that our web app works fine if it's behind HAProxy and hosted by Apache using mod_wsgi. It also works fine if the clients interact with nginx directly. It doesn't work, though, when using HAProxy in front of nginx. My guess is that HAProxy somehow modifies the request and thus nginx behaves differently, i.e. it looks for a static file instead of calling the WSGI container. Unfortunately I can't figure out what exactly is going (wr)on(g). Here are the relevant config sections of these three components' config files. At least I guess they are interesting. If you miss anything, please let me know.

    1) haproxy.conf

        frontend app-lb
            bind loadbalancer:443 ssl crt /etc/grid-security/hostcertkey.pem
            default_backend nginx-servers
            mode http

        backend nginx-servers
            balance leastconn
            option forwardfor
            server nginx-01 nginx-server-int-01.domain.com:80 check

    2) nginx.conf:

        sendfile off;
        #tcp_nopush on;
        keepalive_timeout 65;
        include /etc/nginx/conf.d/*.conf;

        server {
            server_name nginx-server-int-01.domain.com;
            root /path/to/app/;
            location / {
                uwsgi_pass unix:///tmp/app.sock;
                include uwsgi_params;
                uwsgi_read_timeout 600; # Requests can run for a serious long time
            }
        }

    3) uwsgi.ini

        [uwsgi]
        chdir = /path/to/app/
        chmod-socket = 777
        no-default-app = True
        socket = /tmp/app.sock
        manage-script-name = True
        mount = /dids=did.py
        mount = /replicas=replica.py
        callable = application

    Now, when I let my clients go against nginx-server-int-01.domain.com, everything is fine. In the access.log of nginx, lines like these appear:

        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/user.ogueta/cnt_mc12_8TeV.16304.stream_name_too_long.other.notype.004202218365415e990b9997ea859f20.user/dids HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5282 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5094 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 528 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "GET /dids/mc13_14TeV/dids/search?project=mc13_14TeV&stream_name=%2Adummy&type=dataset&datatype=NTUP_SMDYMUMU HTTP/1.1" 401 73 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /replicas/list HTTP/1.1" 200 713 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"

    But when I switch the clients to go against HAProxy (loadbalancer.domain.com:443), the error.log of nginx shows lines like these:

        2014/08/23 01:26:01 [error] 1705#0: *21231 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21232 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21233 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21234 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21235 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer"
        2014/08/23 01:26:02 [error] 1705#0: *21238 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21239 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21242 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21244 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"

    As you can see, the request looks the same; only the client IP changed, from the client's host to the one from loadbalancer.domain.com. But for whatever reason nginx seems to assume that it is a static file to be served, which eventually results in the file-not-found message. I searched the web for multiple hours already, but without much luck so far. Any help is very much appreciated. Cheers, Ralph
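    (One detail in those error lines that may give the game away: they say server: localhost, i.e. the request was handled by nginx's default server, which serves static files out of /usr/share/nginx/html, not by the vhost above. Requests arriving via HAProxy carry Host: loadbalancer.domain.com, which doesn't match server_name nginx-server-int-01.domain.com, while direct clients send the matching name. A sketch of the fix; either the added alias or default_server alone should be enough:)

        server {
            listen 80 default_server;   # catch the request whatever the Host header says
            server_name nginx-server-int-01.domain.com loadbalancer.domain.com;
            root /path/to/app/;
            location / {
                uwsgi_pass unix:///tmp/app.sock;
                include uwsgi_params;
                uwsgi_read_timeout 600;
            }
        }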


  • Hardware/Software inventory open source projects

    - by Dick dastardly
    Dear Stackoverflowers, I would like to develop a network inventory application that works on any operating system, reports on every possible resource attached to a network, and reports all pertinent details of hardware and software. That's (and I hate to use the phrase) my "end game". However, I am running before I can crawl here. I have no experience of this type of development, e.g. discovering a computer's hardware and software settings. I've spent almost two weeks googling and come up short! :-(. So I am turning to you to ask these questions:

    My first step is to find an existing open source project I can incorporate into my own code that extracts the fine-grained details I am after, e.g. EVERYTHING there is to know about the hardware and software on a single machine. Does this project exist? Or do I have to develop that first? Have I got to write all this in C? I am guessing that getting this information about a computer is going to be easier than for printers, scanners, routers, etc... e.g. everything else you would find attached to a network.

    Once I have access to a single computer's details, I then need to investigate how I can traverse an entire network of printers, scanners, routers, load balancers, switches, firewalls, workstations, servers, storage devices, laptops, monitors; the list goes on and on. One problem I have is that I don't have a 1000-machine network to play on! Is there any such resource available on the internet? (Is that a silly question?) Anyhow, if you don't ask, you won't find out! One aspect I am really looking forward to finding out about is how to traverse the entire network: should I be using TCP/IP for this? What's a good site, blog, user group or book for TCP/IP development? How do I go about getting through firewalls? How many questions can I ask in one go? :-)

    My previous question on this topic ended up with PYTHON being championed as the language/script to develop this application in. Having looked at a few PYTHON examples, they all seemed to be related to WINDOWS networks and interrogating Windows Management Instrumentation (WMI). I had the feeling you can't rely on what's in WMI, and even if you can, that's no good for UNIX networks. Surely there exists common code for extracting hardware and software details from a computer? Why can't I find it on the internet? Please help! There are no prizes, though :-(

    Thanks in advance. I would like to apologise if I have broken forum rules or not tried hard enough on my own before asking for assistance. I just would like to start moving forward with this, as it's one of the best projects I have been involved with. I am inspired by the number of different challenges involved, and I hope that if I manage to produce a useful application at the end of it, it will be extremely helpful to many people. That's it. Thanks in advance, DD

    Read the article

  • Connecting Android device to multiple Bluetooth serial embedded peers

    - by TacB0sS
    I'm trying to find a solution for this setup: I have a single Android device which I would like to connect to multiple serial embedded peers... And here is the thing: retrieving the Bluetooth socket the "normal" way doesn't work on all devices, but where it does, I can connect to multiple devices and send and receive data to and from all of them.

        public final synchronized void connect() throws ConnectionException {
            if (socket != null)
                throw new IllegalStateException("Error socket is not null!!");

            connecting = true;
            lastException = null;
            lastPacket = null;
            lastHeartBeatReceivedAt = 0;
            log.setLength(0);
            try {
                socket = fetchBT_Socket_Normal();
                connectToSocket(socket);
                listenForIncomingSPP_Packets();
                connecting = false;
                return;
            } catch (Exception e) {
                socket = null;
                logError(e);
            }

            try {
                socket = fetchBT_Socket_Workaround();
                connectToSocket(socket);
                listenForIncomingSPP_Packets();
                connecting = false;
                return;
            } catch (Exception e) {
                socket = null;
                logError(e);
            }

            connecting = false;
            if (socket == null)
                throw new ConnectionException("Error creating RFcomm socket for" + this);
        }

        private BluetoothSocket fetchBT_Socket_Normal() throws Exception {
            /* getType() is a hex 0xXXXX value agreed between peers --- this is the key
               (in my case) to multiple connections in the "normal" way */
            String uuid = getType() + "1101-0000-1000-8000-00805F9B34FB";
            try {
                logDebug("Fetching BT RFcomm Socket standard for UUID: " + uuid + "...");
                socket = btDevice.createRfcommSocketToServiceRecord(UUID.fromString(uuid));
                return socket;
            } catch (Exception e) {
                logError(e);
                throw e;
            }
        }

        private BluetoothSocket fetchBT_Socket_Workaround() throws Exception {
            Method m;
            int connectionIndex = 1;
            try {
                logDebug("Fetching BT RFcomm Socket workaround index " + connectionIndex + "...");
                m = btDevice.getClass().getMethod("createRfcommSocket", new Class[]{int.class});
                socket = (BluetoothSocket) m.invoke(btDevice, connectionIndex);
                return socket;
            } catch (Exception e1) {
                logError(e1);
                throw e1;
            }
        }

        private void connectToSocket(BluetoothSocket socket) throws ConnectionException {
            try {
                socket.connect();
            } catch (IOException e) {
                try {
                    socket.close();
                } catch (IOException e1) {
                    logError("Error while closing socket", e1);
                } finally {
                    socket = null;
                }
                throw new ConnectionException("Error connecting to socket with" + this, e);
            }
        }

    And here is the thing: on phones where the "normal" way doesn't work, the "workaround" way provides a solution for a single connection only. I've searched far and wide, but came up with zip. The problem with the workaround is mentioned in the last link: both connections use the same port, which in my case causes a block, where both embedded devices can actually send data that is not processed on the Android side, while both embedded devices can receive data sent from the Android. Has anyone handled this before?
    There is a bit more reference here.

    UPDATE: Following this (that I posted earlier), I wanted to give mPort a chance, and perhaps to see other port indices and how other devices manage them, and I found out that the fields in the BluetoothSocket object are different, while it is the same class FQN in both cases.

    Details from an HTC Vivid 2.3.4, which uses the "workaround" technique. The socket class type is: [android.bluetooth.BluetoothSocket]

        mSocket BluetoothSocket (id=830008629928)
            EADDRINUSE 98
            EBADFD 77
            MAX_RFCOMM_CHANNEL 30
            TAG "BluetoothSocket" (id=830002722432)
            TYPE_L2CAP 3
            TYPE_RFCOMM 1
            TYPE_SCO 2
            mAddress "64:9C:8E:DC:56:9A" (id=830008516328)
            mAuth true
            mClosed false
            mClosing AtomicBoolean (id=830007851600)
            mDevice BluetoothDevice (id=830007854256)
            mEncrypt true
            mInputStream BluetoothInputStream (id=830008688856)
            mLock ReentrantReadWriteLock (id=830008629992)
            mOutputStream BluetoothOutputStream (id=830008430536)
            **mPort 1**
            mSdp null
            mSocketData 3923880
            mType 1

    Details from an LG-P925 2.2.2, which uses the "normal" technique. The socket class type is: [android.bluetooth.BluetoothSocket]

        mSocket BluetoothSocket (id=830105532880)
            EADDRINUSE 98
            EBADFD 77
            MAX_RFCOMM_CHANNEL 30
            TAG "BluetoothSocket" (id=830002668088)
            TYPE_L2CAP 3
            TYPE_RFCOMM 1
            TYPE_SCO 2
            mAccepted false
            mAddress "64:9C:8E:B9:3F:77" (id=830105544600)
            mAuth true
            mClosed false
            mConnected ConditionVariable (id=830105533144)
            mDevice BluetoothDevice (id=830105349488)
            mEncrypt true
            mInputStream BluetoothInputStream (id=830105532952)
            mLock ReentrantReadWriteLock (id=830105532984)
            mOutputStream BluetoothOutputStream (id=830105532968)
            mPortName "" (id=830002606256)
            mSocketData 0
            mSppPort BluetoothSppPort (id=830105533160)
            mType 1
            mUuid ParcelUuid (id=830105714176)

    Does anyone have some insight...
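
    One direction worth sketching, as an assumption rather than something established in the question: the reflection workaround hard-codes channel index 1, so every peer contends for the same RFCOMM port. A variation that hands each peer its own channel is below; the channel numbering is a guess, and a proper implementation would take the channel from an SDP lookup of each peer's SPP service.

        import java.lang.reflect.Method;
        import java.util.HashMap;
        import java.util.Map;

        import android.bluetooth.BluetoothDevice;
        import android.bluetooth.BluetoothSocket;

        // Sketch: one RFCOMM channel per peer instead of the fixed index 1.
        // The channel numbers are assumptions; ideally each would come from
        // an SDP lookup of the peer's SPP service, not a hard-coded counter.
        public class MultiPeerSockets {

            private final Map<BluetoothDevice, Integer> channelByDevice = new HashMap<BluetoothDevice, Integer>();
            private int nextChannel = 1;

            public BluetoothSocket openWorkaroundSocket(BluetoothDevice device) throws Exception {
                Integer channel = channelByDevice.get(device);
                if (channel == null) {
                    channel = nextChannel++; // e.g. peer A gets 1, peer B gets 2
                    channelByDevice.put(device, channel);
                }
                Method m = device.getClass().getMethod("createRfcommSocket", int.class);
                return (BluetoothSocket) m.invoke(device, channel);
            }
        }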

    Read the article

  • Ado.net Fill method not throwing error on running a Stored Procedure that does not exist.

    - by Mike
    I am using a combination of the Enterprise Library and the original Fill method of ADO.NET. This is because I need to open and close the command connection myself, as I am capturing the InfoMessage event. Here is my code so far:

        // Set up command
        SqlDatabase db = new SqlDatabase(ConfigurationManager.ConnectionStrings[ConnectionName].ConnectionString);
        SqlCommand command = db.GetStoredProcCommand(StoredProcName) as SqlCommand;
        command.Connection = db.CreateConnection() as SqlConnection;

        // Set up events for logging
        command.StatementCompleted += new StatementCompletedEventHandler(command_StatementCompleted);
        command.Connection.FireInfoMessageEventOnUserErrors = true;
        command.Connection.InfoMessage += new SqlInfoMessageEventHandler(Connection_InfoMessage);

        // Add parameters
        foreach (Parameter parameter in Parameters)
        {
            db.AddInParameter(command, parameter.Name,
                (System.Data.DbType)Enum.Parse(typeof(System.Data.DbType), parameter.Type),
                parameter.Value);
        }

        // Use the old-style Fill to keep the connection open throughout the population
        // and manage the StatementCompleted and InfoMessage events
        SqlDataAdapter da = new SqlDataAdapter(command);
        DataSet ds = new DataSet();

        // Open connection
        command.Connection.Open();

        // Populate
        da.Fill(ds);

        // Dispose of the adapter
        if (da != null)
        {
            da.Dispose();
        }

        // If you do not explicitly close the connection here, it will leak!
        if (command.Connection.State == ConnectionState.Open)
        {
            command.Connection.Close();
        }

    Now, if I pass in StoredProcName = "ThisProcDoesNotExists" and run this piece of code, neither the command creation nor da.Fill throws an error. Why is this? The only way I can tell it did not run is that it returns a DataSet with 0 tables in it. But when investigating the error, it is not apparent that the procedure does not exist.

    EDIT: Upon further investigation, command.Connection.FireInfoMessageEventOnUserErrors = true; is causing the error to be suppressed into the InfoMessage event. From BOL: "When you set FireInfoMessageEventOnUserErrors to true, errors that were previously treated as exceptions are now handled as InfoMessage events. All events fire immediately and are handled by the event handler. If FireInfoMessageEventOnUserErrors is set to false, then InfoMessage events are handled at the end of the procedure." What I want is for each print statement from SQL to create a new log record. Setting this property to false combines them into one big string. So if I leave the property set to true, the question now is: can I discern a print message from an error?

    ANOTHER EDIT: So now I have the code set up so that the flag is true, and I check the error number in the method:

        void Connection_InfoMessage(object sender, SqlInfoMessageEventArgs e)
        {
            // These are not really errors unless Number > 0;
            // if Number == 0, it is a print message
            foreach (SqlError sql in e.Errors)
            {
                if (sql.Number == 0)
                {
                    Logger.WriteInfo("Sql Message", sql.Message);
                }
                else
                {
                    // Whatever this was, it was an error
                    throw new DataException(String.Format("Message={0},Line={1},Number={2},State{3}",
                        sql.Message, sql.LineNumber, sql.Number, sql.State));
                }
            }
        }

    The issue now is that when I throw the error, it does not bubble up to the statement that made the call, or even to the error handler above that. It just bombs out on that line. The populate looks like:

        // Populate
        try
        {
            da.Fill(ds);
        }
        catch (Exception e)
        {
            throw new Exception(e.Message, e);
        }

    Now, even though I see the calling code and methods still in the call stack, this exception does not seem to bubble up.
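
    A pattern that may help here, offered as a sketch rather than a confirmed fix: because the DataException is raised inside an event handler that fires while SqlDataAdapter.Fill is still pumping results, it can be lost instead of propagating to the caller. Collecting the errors during the event and throwing after Fill returns avoids that. Logger.WriteInfo is the question's own helper; everything else is standard ADO.NET:

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;

        // Sketch: gather SQL errors during InfoMessage, rethrow once Fill has returned.
        private readonly List<SqlError> sqlErrors = new List<SqlError>();

        void Connection_InfoMessage(object sender, SqlInfoMessageEventArgs e)
        {
            foreach (SqlError sql in e.Errors)
            {
                if (sql.Number == 0)
                    Logger.WriteInfo("Sql Message", sql.Message); // plain PRINT output
                else
                    sqlErrors.Add(sql); // remember real errors instead of throwing here
            }
        }

        // After da.Fill(ds):
        if (sqlErrors.Count > 0)
        {
            SqlError first = sqlErrors[0];
            throw new System.Data.DataException(
                String.Format("Message={0}, Line={1}, Number={2}, State={3}",
                    first.Message, first.LineNumber, first.Number, first.State));
        }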

    Read the article

  • Cisco PIX 515 doesn't seem to be passing traffic through according to static route

    - by Liquidkristal
    Ok, so I am having a spot of bother with a Cisco PIX 515. I have posted the current running config below. Now, I am no Cisco expert by any means, although I can do basic stuff with them, and I am having trouble with traffic sent from the outside to address 10.75.32.25: it just doesn't appear to be going anywhere. This firewall is deep inside a private network, with an upstream firewall that we don't manage. I have spoken to the people that look after that firewall, and they say they have traffic routing to 10.75.32.21 and 10.75.32.25, and that's it (although there is a website that runs from the server 172.16.102.5 which, if my understanding is correct, gets traffic via 10.75.32.23). Any ideas would be greatly appreciated, as to me it should all just work, but it's not. (Obviously, if the config is all correct, then there could be a problem with the web server that we are trying to access on 10.75.32.25, although the users say that they can get to it internally on 172.16.102.8, which is even more confusing.)

        PIX Version 6.3(3)
        interface ethernet0 auto
        interface ethernet1 auto
        interface ethernet2 auto
        nameif ethernet0 outside security0
        nameif ethernet1 inside security100
        nameif ethernet2 academic security50
        fixup protocol dns maximum-length 512
        fixup protocol ftp 21
        fixup protocol h323 h225 1720
        fixup protocol h323 ras 1718-1719
        fixup protocol http 80
        fixup protocol rsh 514
        fixup protocol rtsp 554
        fixup protocol sip 5060
        fixup protocol sip udp 5060
        fixup protocol skinny 2000
        fixup protocol smtp 25
        fixup protocol sqlnet 1521
        fixup protocol tftp 69
        names
        name 195.157.180.168 outsideNET
        name 195.157.180.170 globalNAT
        name 195.157.180.174 gateway
        name 195.157.180.173 Mail-Global
        name 172.30.31.240 Mail-Local
        name 10.75.32.20 outsideIF
        name 82.219.210.17 frogman1
        name 212.69.230.79 frogman2
        name 78.105.118.9 frogman3
        name 172.16.0.0 acadNET
        name 172.16.100.254 acadIF
        access-list acl_outside permit icmp any any echo-reply
        access-list acl_outside permit icmp any any unreachable
        access-list acl_outside permit icmp any any time-exceeded
        access-list acl_outside permit tcp any host 10.75.32.22 eq smtp
        access-list acl_outside permit tcp any host 10.75.32.22 eq 8383
        access-list acl_outside permit tcp any host 10.75.32.22 eq 8385
        access-list acl_outside permit tcp any host 10.75.32.22 eq 8484
        access-list acl_outside permit tcp any host 10.75.32.22 eq 8485
        access-list acl_outside permit ip any host 10.75.32.30
        access-list acl_outside permit tcp any host 10.75.32.25 eq https
        access-list acl_outside permit tcp any host 10.75.32.25 eq www
        access-list acl_outside permit tcp any host 10.75.32.23 eq www
        access-list acl_outside permit tcp any host 10.75.32.23 eq https
        access-list acl_outside permit tcp host frogman1 host 10.75.32.23 eq ssh
        access-list acl_outside permit tcp host frogman2 host 10.75.32.23 eq ssh
        access-list acl_outside permit tcp host frogman3 host 10.75.32.23 eq ssh
        access-list acl_outside permit tcp any host 10.75.32.23 eq 2001
        access-list acl_outside permit tcp host frogman1 host 10.75.32.24 eq 8441
        access-list acl_outside permit tcp host frogman2 host 10.75.32.24 eq 8441
        access-list acl_outside permit tcp host frogman3 host 10.75.32.24 eq 8441
        access-list acl_outside permit tcp host frogman1 host 10.75.32.24 eq 8442
        access-list acl_outside permit tcp host frogman2 host 10.75.32.24 eq 8442
        access-list acl_outside permit tcp host frogman3 host 10.75.32.24 eq 8442
        access-list acl_outside permit tcp host frogman1 host 10.75.32.24 eq 8443
        access-list acl_outside permit tcp host frogman2 host 10.75.32.24 eq 8443
        access-list acl_outside permit tcp host frogman3 host 10.75.32.24 eq 8443
        access-list acl_outside permit tcp any host 10.75.32.23 eq smtp
        access-list acl_outside permit tcp any host 10.75.32.23 eq ssh
        access-list acl_outside permit tcp any host 10.75.32.24 eq ssh
        access-list acl_acad permit icmp any any echo-reply
        access-list acl_acad permit icmp any any unreachable
        access-list acl_acad permit icmp any any time-exceeded
        access-list acl_acad permit tcp any 10.0.0.0 255.0.0.0 eq www
        access-list acl_acad deny tcp any any eq www
        access-list acl_acad permit tcp any 10.0.0.0 255.0.0.0 eq https
        access-list acl_acad permit tcp any 10.0.0.0 255.0.0.0 eq 8080
        access-list acl_acad permit tcp host 172.16.102.5 host 10.64.1.115 eq smtp
        pager lines 24
        logging console debugging
        mtu outside 1500
        mtu inside 1500
        mtu academic 1500
        ip address outside outsideIF 255.255.252.0
        no ip address inside
        ip address academic acadIF 255.255.0.0
        ip audit info action alarm
        ip audit attack action alarm
        pdm history enable
        arp timeout 14400
        global (outside) 1 10.75.32.21
        nat (academic) 1 acadNET 255.255.0.0 0 0
        static (academic,outside) 10.75.32.22 Mail-Local netmask 255.255.255.255 0 0
        static (academic,outside) 10.75.32.30 172.30.30.36 netmask 255.255.255.255 0 0
        static (academic,outside) 10.75.32.23 172.16.102.5 netmask 255.255.255.255 0 0
        static (academic,outside) 10.75.32.24 172.16.102.6 netmask 255.255.255.255 0 0
        static (academic,outside) 10.75.32.25 172.16.102.8 netmask 255.255.255.255 0 0
        access-group acl_outside in interface outside
        access-group acl_acad in interface academic
        route outside 0.0.0.0 0.0.0.0 10.75.32.1 1
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 rpc 0:10:00 h225 1:00:00
        timeout h323 0:05:00 mgcp 0:05:00 sip 0:30:00 sip_media 0:02:00
        timeout uauth 0:05:00 absolute
        aaa-server TACACS+ protocol tacacs+
        aaa-server RADIUS protocol radius
        aaa-server LOCAL protocol local
        snmp-server host outside 172.31.10.153
        snmp-server host outside 172.31.10.154
        snmp-server host outside 172.31.10.155
        no snmp-server location
        no snmp-server contact
        snmp-server community CPQ_HHS
        no snmp-server enable traps
        floodguard enable
        telnet 172.30.31.0 255.255.255.0 academic
        telnet timeout 5
        ssh timeout 5
        console timeout 0
        terminal width 120
        Cryptochecksum:hi2u
        : end
        PIX515#
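
    A few standard PIX 6.3 checks would narrow this down. These commands are generic troubleshooting steps, not something taken from the poster's environment beyond the addresses already in the config; the capture shows whether packets for 10.75.32.25 ever arrive on the outside interface at all:

        ! Sketch: verify the static xlate exists and check ACL hit counts
        show xlate
        show access-list

        ! Capture traffic hitting the outside interface for the troublesome host
        access-list cap_www permit tcp any host 10.75.32.25
        capture capwww access-list cap_www interface outside
        show capture capwww

    If the capture stays empty, the upstream firewall is not actually delivering the traffic; if packets appear but never reach 172.16.102.8, the problem is on the PIX or the server side.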

    Read the article

  • MVC SiteMap - when different nodes point to same action SiteMap.CurrentNode does not map to the correct route

    - by awrigley
    Setup: I am using ASP.NET MVC 4, with MvcSiteMapProvider to manage my menus. I have a custom menu builder that evaluates whether a node is on the current branch (i.e., whether the node either is the SiteMap.CurrentNode or has the CurrentNode nested under it). The code is included below, but essentially it checks the url of each node and compares it with the url of the CurrentNode, up through the CurrentNode's "family tree". The CurrentBranch is used by my custom menu builder to add a class that highlights menu items on the CurrentBranch.

    The Problem: My custom menu works fine, but I have found that MvcSiteMapProvider does not seem to evaluate the url of the CurrentNode in a consistent manner: when two nodes point to the same action and are distinguished only by a parameter of the action, SiteMap.CurrentNode does not seem to use the correct route (it ignores the distinguishing parameter and defaults to the first route that maps to the action defined in the node).

    Example of the Problem: In an app I have Members. A Member has a MemberStatus field that can be "Unprocessed", "Active" or "Inactive". To change the MemberStatus, I have a ProcessMembersController in an Area called Admin. The processing is done using the Process action. My mvcSiteMap has two nodes that BOTH map to the Process action. The only difference between them is the alternate parameter (such are my client's domain semantics), which in one case has a value of "Processed" and in the other "Unprocessed".

    Nodes:

        <mvcSiteMapNode title="Process" area="Admin" controller="ProcessMembers" action="Process" alternate="Unprocessed" />
        <mvcSiteMapNode title="Change Status" area="Admin" controller="ProcessMembers" action="Process" alternate="Processed" />

    Routes: The corresponding routes to these two nodes are (again, the only thing that distinguishes them is the value of the alternate parameter):

        context.MapRoute(
            "Process_New_Members",
            "Admin/Unprocessed/Process/{MemberId}",
            new { controller = "ProcessMembers", action = "Process", alternate = "Unprocessed", MemberId = UrlParameter.Optional }
        );

        context.MapRoute(
            "Change_Status_Old_Members",
            "Admin/Members/Status/Change/{MemberId}",
            new { controller = "ProcessMembers", action = "Process", alternate = "Processed", MemberId = UrlParameter.Optional }
        );

    What works: The Html.ActionLink helper uses the routes and produces the urls I expect:

        @Html.ActionLink("Process", MVC.Admin.ProcessMembers.Process(item.MemberId, "Unprocessed"))
        // Output (alternate="Unprocessed" and item.MemberId = 12): Admin/Unprocessed/Process/12

        @Html.ActionLink("Status", MVC.Admin.ProcessMembers.Process(item.MemberId, "Processed"))
        // Output (alternate="Processed" and item.MemberId = 23): Admin/Members/Status/Change/23

    In both cases the output is correct and as I expect.

    What doesn't work: Let's say my request involves the second option, i.e. /Admin/Members/Status/Change/47, corresponding to alternate = "Processed" and a MemberId of 47. Debugging my static CurrentBranch property (see below), I find that SiteMap.CurrentNode shows:

        PreviousSibling: null
        Provider: {MvcSiteMapProvider.DefaultSiteMapProvider}
        ReadOnly: false
        ResourceKey: ""
        Roles: Count = 0
        RootNode: {Home}
        Title: "Process"
        Url: "/Admin/Unprocessed/Process/47"

    I.e., for a request url of /Admin/Members/Status/Change/47, SiteMap.CurrentNode.Url evaluates to /Admin/Unprocessed/Process/47. It is ignoring the alternate parameter and using the wrong route.

    CurrentBranch Static Property:

        /// <summary>
        /// ReadOnly. Gets the branch of the site map that holds the SiteMap.CurrentNode
        /// </summary>
        public static List<SiteMapNode> CurrentBranch
        {
            get
            {
                SiteMapNode cn = SiteMap.CurrentNode;
                SiteMapNode n = cn;
                List<SiteMapNode> ln = new List<SiteMapNode>();
                if (cn != null)
                {
                    while (n != null && n.Url != SiteMap.RootNode.Url)
                    {
                        // I don't need to check for n.ParentNode == null
                        // because cn != null && n != SiteMap.RootNode
                        ln.Add(n);
                        n = n.ParentNode;
                    }
                    // The while loop excludes the root node, so add it here.
                    // I could add n, which should now equal SiteMap.RootNode, but this is clearer.
                    ln.Add(SiteMap.RootNode);
                    // The nodes were added in reverse order, from the CurrentNode up, so reverse them.
                    ln.Reverse();
                }
                return ln;
            }
        }

    The Question: What am I doing wrong? The routes are interpreted by Html.ActionLink as I expect, but are not evaluated by SiteMap.CurrentNode as I expect. In other words, in evaluating my routes, SiteMap.CurrentNode ignores the distinguishing alternate parameter.
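
    As a hedged workaround sketch, not MvcSiteMapProvider API: since URL generation (as Html.ActionLink shows) is correct and only request-to-node resolution goes wrong, one blunt fallback is to match nodes against the literal request path. RawUrlNodeResolver and FindByRawUrl below are hypothetical names introduced for illustration:

        using System;
        using System.Web;

        // Sketch: resolve the current node from the raw request path instead
        // of trusting SiteMap.CurrentNode's route-based resolution.
        public static class RawUrlNodeResolver
        {
            public static SiteMapNode FindByRawUrl()
            {
                string path = HttpContext.Current.Request.Path;
                return FindNode(SiteMap.RootNode, path);
            }

            private static SiteMapNode FindNode(SiteMapNode node, string path)
            {
                // Prefer the deepest match so the root ("/") doesn't swallow everything.
                foreach (SiteMapNode child in node.ChildNodes)
                {
                    SiteMapNode match = FindNode(child, path);
                    if (match != null)
                        return match;
                }
                // StartsWith rather than Equals because {MemberId} is appended to the
                // node's generated URL (e.g. /Admin/Members/Status/Change/47).
                return path.StartsWith(node.Url, StringComparison.OrdinalIgnoreCase) ? node : null;
            }
        }

    CurrentBranch could then walk upward from FindByRawUrl() rather than from SiteMap.CurrentNode.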

    Read the article

  • Webdriver PageObject Implementation using PageFactory in Java

    - by kamal
    Here is what I have so far: a working WebDriver-based Java class, which logs in to the application and goes to a home page:

        import java.io.File;
        import java.io.IOException;
        import java.util.concurrent.TimeUnit;

        import org.apache.commons.io.FileUtils;
        import org.openqa.selenium.By;
        import org.openqa.selenium.OutputType;
        import org.openqa.selenium.TakesScreenshot;
        import org.openqa.selenium.WebDriver;
        import org.openqa.selenium.firefox.FirefoxDriver;
        import org.openqa.selenium.firefox.FirefoxProfile;
        import org.testng.AssertJUnit;
        import org.testng.annotations.AfterMethod;
        import org.testng.annotations.BeforeMethod;
        import org.testng.annotations.Test;

        public class MLoginFFTest {
            private WebDriver driver;
            private String baseUrl;
            private String fileName = "screenshot.png";

            @BeforeMethod
            public void setUp() throws Exception {
                FirefoxProfile profile = new FirefoxProfile();
                profile.setPreference("network.http.phishy-userpass-length", 255);
                profile.setAssumeUntrustedCertificateIssuer(false);
                driver = new FirefoxDriver(profile);
                baseUrl = "https://a.b.c.d/";
                driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
            }

            @Test
            public void testAccountLogin() throws Exception {
                driver.get(baseUrl + "web/certLogon.jsp");
                driver.findElement(By.name("logonName")).clear();
                AssertJUnit.assertEquals(driver.findElement(By.name("logonName")).getTagName(), "input");
                AssertJUnit.assertEquals(driver.getTitle(), "DA Logon");
                driver.findElement(By.name("logonName")).sendKeys("username");
                driver.findElement(By.name("password")).clear();
                driver.findElement(By.name("password")).sendKeys("password");
                driver.findElement(By.name("submit")).click();
                driver.findElement(By.linkText("Account")).click();
                AssertJUnit.assertEquals(driver.getTitle(), "View Account");
            }

            @AfterMethod
            public void tearDown() throws Exception {
                File screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
                try {
                    FileUtils.copyFile(screenshot, new File(fileName));
                } catch (IOException e) {
                    e.printStackTrace();
                }
                driver.quit();
            }
        }

    Now, as we see, there are two pages: 1. the login page, where I have to enter username and password, and 2. the home page, where I will be taken once authentication succeeds. I want to implement these as PageObjects using PageFactory, so I have:

        package com.example.pageobjects;

        import static com.example.setup.SeleniumDriver.getDriver;

        import java.util.concurrent.TimeUnit;

        import org.openqa.selenium.support.PageFactory;
        import org.openqa.selenium.support.ui.ExpectedCondition;
        import org.openqa.selenium.support.ui.FluentWait;
        import org.openqa.selenium.support.ui.Wait;

        public abstract class MPage<T> {
            private static final String BASE_URL = "https://a.b.c.d/";
            private static final int LOAD_TIMEOUT = 30;
            private static final int REFRESH_RATE = 2;

            public T openPage(Class<T> clazz) {
                T page = PageFactory.initElements(getDriver(), clazz);
                getDriver().get(BASE_URL + getPageUrl());
                ExpectedCondition pageLoadCondition = ((MPage) page).getPageLoadCondition();
                waitForPageToLoad(pageLoadCondition);
                return page;
            }

            private void waitForPageToLoad(ExpectedCondition pageLoadCondition) {
                Wait wait = new FluentWait(getDriver())
                        .withTimeout(LOAD_TIMEOUT, TimeUnit.SECONDS)
                        .pollingEvery(REFRESH_RATE, TimeUnit.SECONDS);
                wait.until(pageLoadCondition);
            }

            /**
             * Provides the condition under which the page can be considered fully loaded.
             */
            protected abstract ExpectedCondition getPageLoadCondition();

            /**
             * Provides the page-relative URL.
             */
            public abstract String getPageUrl();
        }

    And for the login page, I am not sure how I would implement that, nor the test that would call these pages.
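
    A minimal sketch of what the login page could look like in this scheme. The element names come from the test above; the page URL and the title-based load condition are assumptions carried over from that same test:

        package com.example.pageobjects;

        import org.openqa.selenium.WebElement;
        import org.openqa.selenium.support.FindBy;
        import org.openqa.selenium.support.ui.ExpectedCondition;
        import org.openqa.selenium.support.ui.ExpectedConditions;

        // Sketch: a PageFactory-backed login page built on the abstract MPage above.
        public class LoginPage extends MPage<LoginPage> {

            @FindBy(name = "logonName")
            private WebElement logonName;

            @FindBy(name = "password")
            private WebElement password;

            @FindBy(name = "submit")
            private WebElement submit;

            @Override
            protected ExpectedCondition getPageLoadCondition() {
                // Assumption: the title asserted in the original test marks the page as loaded.
                return ExpectedConditions.titleIs("DA Logon");
            }

            @Override
            public String getPageUrl() {
                return "web/certLogon.jsp";
            }

            public void loginAs(String user, String pass) {
                logonName.clear();
                logonName.sendKeys(user);
                password.clear();
                password.sendKeys(pass);
                submit.click();
            }
        }

    A test could then open and drive it with: LoginPage login = new LoginPage().openPage(LoginPage.class); login.loginAs("username", "password"); and a HomePage object built the same way would carry the post-login assertions.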

    Read the article

  • ScriptManager emits ScriptReferences after ClientScriptIncludes. Why?

    - by Chris F
    I have an application that uses a lot of JavaScript in a lot of different .js files. Each page can have any number and combination of different controls, and each control can use a different set of .js files. Instead of including all possible .js files as <script src="xxx.js"> in the head of my master page, I thought I would reduce the number of included files by getting each control to call ScriptManager.Scripts.Add(new ScriptReference("xxx.js")), so that only the scripts actually needed by the page are included. Well, that bit works fine. But... the controls also use ScriptManager.RegisterClientScriptBlock fairly extensively (there are scenarios where the controls are inside update panels). I was disappointed to see that the ScriptManager emits the client script includes before the ScriptReferences. For example, see the test file and output at the end (this results in a JS error because $ is not defined). Am I being dumb, or is this to be expected? I would have thought that the sensible thing for the ScriptManager to do would be to emit the ScriptReferences first and then the other stuff. Short of rolling my own ScriptManager-like object to manage static JS file references, does anyone have any suggestions as to how to get the behaviour that I want from ScriptManager? Thanks in advance.

    Example File:

        <%@ Page Language="C#" AutoEventWireup="true" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" >
        <head runat="server">
            <script runat="server">
                protected void Page_Load(object sender, EventArgs e)
                {
                    ScriptManager.RegisterClientScriptBlock(Page, Page.GetType(), "Test",
                        "$(function() {alert('hello');});", true);
                }
            </script>
        </head>
        <body>
            <form id="form1" runat="server">
                <asp:ScriptManager runat="server" >
                    <Scripts>
                        <asp:ScriptReference Path="~/jquery/jquery.js" />
                    </Scripts>
                </asp:ScriptManager>
            </form>
        </body>
        </html>

    Output:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" >
        ... boring stuff removed ....
        <script src="/js/WebResource.axd?d=-nOHbla bla " type="text/javascript"></script>
        <script type="text/javascript">
        //<![CDATA[
        $(function() {alert('hello');});//]]>
        </script>
        ... boring stuff removed ....
        <script src="/js/ScriptResource.axd?d=qgCbla bla bla" type="text/javascript"></script>
        <script src="jquery/jquery.js" type="text/javascript"></script>
        ... boring stuff removed ....
        </form>
        </body>
        </html>
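
    One suggestion, not drawn from the question itself: ScriptManager.RegisterStartupScript renders its block near the end of the form, after the script references, so a block that depends on jQuery survives the ordering. A sketch of the same Page_Load using it:

        // Sketch: RegisterStartupScript renders at the bottom of the form,
        // after the <script src> references, so $ is defined by the time it runs.
        protected void Page_Load(object sender, EventArgs e)
        {
            ScriptManager.RegisterStartupScript(Page, Page.GetType(), "Test",
                "$(function() { alert('hello'); });", true);
        }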

    Read the article

  • Counting leaf nodes in hierarchical tree

    - by timn
    This code fills a tree with values based on their depths. But when traversing the tree, I cannot manage to determine the actual number of children: node->cnt is always 0. I've already tried node->parent->cnt, but that gives me lots of warnings in Valgrind. Anyway, is the tree type I've chosen even appropriate for my purpose?

        #include <string.h>
        #include <stdio.h>
        #include <stdlib.h>

        #ifndef NULL
        #define NULL ((void *) 0)
        #endif

        // ----

        typedef struct _Tree_Node {
            // data ptr
            void *p;
            // number of nodes
            int cnt;
            struct _Tree_Node *nodes;
            // parent node
            struct _Tree_Node *parent;
        } Tree_Node;

        typedef struct {
            Tree_Node root;
        } Tree;

        void Tree_Init(Tree *this) {
            this->root.p = NULL;
            this->root.cnt = 0;
            this->root.nodes = NULL;
            this->root.parent = NULL;
        }

        Tree_Node* Tree_AddNode(Tree_Node *node) {
            if (node->cnt == 0) {
                node->nodes = malloc(sizeof(Tree_Node));
            } else {
                node->nodes = realloc(
                    node->nodes,
                    (node->cnt + 1) * sizeof(Tree_Node)
                );
            }

            Tree_Node *res = &node->nodes[node->cnt];
            res->p = NULL;
            res->cnt = 0;
            res->nodes = NULL;
            res->parent = node;
            node->cnt++;
            return res;
        }

        // ----

        void handleNode(Tree_Node *node, int depth) {
            int j = depth;
            printf("\n");
            while (j--) {
                printf(" ");
            }
            printf("depth=%d ", depth);
            if (node->p == NULL) {
                goto out;
            }
            printf("value=%s cnt=%d", node->p, node->cnt);
        out:
            for (int i = 0; i < node->cnt; i++) {
                handleNode(&node->nodes[i], depth + 1);
            }
        }

        Tree tree;
        int curdepth;
        Tree_Node *curnode;

        void add(int depth, char *s) {
            printf("%s: depth (%d) > curdepth (%d): %d\n", s, depth, curdepth, depth > curdepth);
            if (depth > curdepth) {
                curnode = Tree_AddNode(curnode);
                Tree_Node *node = Tree_AddNode(curnode);
                node->p = malloc(strlen(s));
                memcpy(node->p, s, strlen(s));
                curdepth++;
            } else {
                while (curdepth - depth > 0) {
                    if (curnode->parent == NULL) {
                        printf("Illegal nesting\n");
                        return;
                    }
                    curnode = curnode->parent;
                    curdepth--;
                }
                Tree_Node *node = Tree_AddNode(curnode);
                node->p = malloc(strlen(s));
                memcpy(node->p, s, strlen(s));
            }
        }

        void main(void) {
            Tree_Init(&tree);
            curnode = &tree.root;
            curdepth = 0;

            add(0, "1");
            add(1, "1.1");
            add(2, "1.1.1");
            add(3, "1.1.1.1");
            add(4, "1.1.1.1.1");
            add(2, "1.1.2");
            add(0, "2");

            handleNode(&tree.root, 0);
        }
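
    A hedged diagnosis, going beyond what the question states: Tree_AddNode grows node->nodes with realloc, which may move the whole child array. Any previously saved pointers into it, which includes curnode and every child's parent, then dangle; that fits both the Valgrind warnings and the stale cnt values. One sketch of a fix is to store an array of pointers so the nodes themselves never move (the string helper below also reserves room for the terminating NUL, which the original strlen-sized malloc does not):

        #include <stdlib.h>
        #include <string.h>

        // Sketch: children are individually malloc'd, so realloc of the
        // pointer array never invalidates node addresses held elsewhere.
        typedef struct Tree_Node {
            char *p;
            int cnt;
            struct Tree_Node **nodes;  // array of pointers to children
            struct Tree_Node *parent;
        } Tree_Node;

        Tree_Node *Tree_AddNode(Tree_Node *node) {
            Tree_Node **grown = realloc(node->nodes, (node->cnt + 1) * sizeof *grown);
            if (grown == NULL)
                return NULL;
            node->nodes = grown;

            Tree_Node *child = calloc(1, sizeof *child);  // zeroes p, cnt, nodes
            if (child == NULL)
                return NULL;
            child->parent = node;
            node->nodes[node->cnt++] = child;
            return child;
        }

        // Copy a label with its terminating NUL (malloc(strlen(s)) drops it).
        void Tree_SetValue(Tree_Node *node, const char *s) {
            node->p = malloc(strlen(s) + 1);
            if (node->p != NULL)
                strcpy(node->p, s);
        }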

    Read the article

  • local user cannot access vsftpd server

    - by Zloy Smiertniy
    I'm currently running a vsftpd server, and I added the necessary configuration in vsftpd.conf so that local users can use clients like FileZilla to manage their homes on the server. I found out that only users in the sudoers list can access without a problem, except that they can't download files; users that are not sudoers cannot even access their homes from a client, although they can connect via a web browser using the FTP protocol, and then they can only access their home directories (as intended). I'm running Fedora 14 on my server, and my vsftpd.conf looks like this:

        # Example config file /etc/vsftpd/vsftpd.conf
        #
        # The default compiled in settings are fairly paranoid. This sample file
        # loosens things up a bit, to make the ftp daemon more usable.
        # Please see vsftpd.conf.5 for all compiled in defaults.
        #
        # READ THIS: This example file is NOT an exhaustive list of vsftpd options.
        # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
        # capabilities.
        #
        # Allow anonymous FTP? (Beware - allowed by default if you comment this out).
        anonymous_enable=NO
        #
        # Uncomment this to allow local users to log in.
        local_enable=YES
        #
        # Uncomment this to enable any form of FTP write command.
        write_enable=YES
        #
        # Default umask for local users is 077. You may wish to change this to 022,
        # if your users expect that (022 is used by most other ftpd's)
        local_umask=022
        #
        # Uncomment this to allow the anonymous FTP user to upload files. This only
        # has an effect if the above global write enable is activated. Also, you will
        # obviously need to create a directory writable by the FTP user.
        #anon_upload_enable=YES
        #
        # Uncomment this if you want the anonymous FTP user to be able to create
        # new directories.
        #anon_mkdir_write_enable=YES
        #
        # Activate directory messages - messages given to remote users when they
        # go into a certain directory.
        dirmessage_enable=YES
        #
        # The target log file can be vsftpd_log_file or xferlog_file.
        # This depends on setting xferlog_std_format parameter
        xferlog_enable=YES
        #
        # Make sure PORT transfer connections originate from port 20 (ftp-data).
        connect_from_port_20=YES
        #
        # If you want, you can arrange for uploaded anonymous files to be owned by
        # a different user. Note! Using "root" for uploaded files is not
        # recommended!
        #chown_uploads=YES
        #chown_username=whoever
        #
        # The name of log file when xferlog_enable=YES and xferlog_std_format=YES
        # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log
        #xferlog_file=/var/log/xferlog
        #
        # Switches between logging into vsftpd_log_file and xferlog_file files.
        # NO writes to vsftpd_log_file, YES to xferlog_file
        xferlog_std_format=YES
        #
        # You may change the default value for timing out an idle session.
        #idle_session_timeout=600
        #
        # You may change the default value for timing out a data connection.
        #data_connection_timeout=120
        #
        # It is recommended that you define on your system a unique user which the
        # ftp server can use as a totally isolated and unprivileged user.
        #nopriv_user=ftpsecure
        #
        # Enable this and the server will recognise asynchronous ABOR requests. Not
        # recommended for security (the code is non-trivial). Not enabling it,
        # however, may confuse older FTP clients.
        #async_abor_enable=YES
        #
        # By default the server will pretend to allow ASCII mode but in fact ignore
        # the request. Turn on the below options to have the server actually do ASCII
        # mangling on files when in ASCII mode.
        # Beware that on some FTP servers, ASCII support allows a denial of service
        # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
        # predicted this attack and has always been safe, reporting the size of the
        # raw file.
        # ASCII mangling is a horrible feature of the protocol.
        ascii_upload_enable=YES
        ascii_download_enable=YES
        #
        # You may fully customise the login banner string:
        ftpd_banner=Welcome to GAMBITA FTP service
        #
        # You may specify a file of disallowed anonymous e-mail addresses. Apparently
        # useful for combatting certain DoS attacks.
        #deny_email_enable=YES
        # (default follows)
        #banned_email_file=/etc/vsftpd/banned_emails
        #
        # You may specify an explicit list of local users to chroot() to their home
        # directory. If chroot_local_user is YES, then this list becomes a list of
        # users to NOT chroot().
        chroot_local_user=YES
        chroot_list_enable=YES
        # (default follows)
        chroot_list_file=/etc/vsftpd/chroot_list
        #
        # You may activate the "-R" option to the builtin ls. This is disabled by
        # default to avoid remote users being able to cause excessive I/O on large
        # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
        # the presence of the "-R" option, so there is a strong case for enabling it.
        ls_recurse_enable=YES
        #
        # When "listen" directive is enabled, vsftpd runs in standalone mode and
        # listens on IPv4 sockets. This directive cannot be used in conjunction
        # with the listen_ipv6 directive.
        listen=YES
        #
        # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
        # sockets, you must run two copies of vsftpd with two configuration files.
        # Make sure, that one of the listen options is commented !!
        #listen_ipv6=YES

        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
        use_localtime=YES

    Does anyone have an idea of what might be happening? Nothing concerning vsftpd is written in any log.
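
    Two hedged guesses, neither established by the question: downloads that fail for users who can otherwise log in often point at the passive-mode data channel being blocked by the firewall, and with userlist_enable=YES the file /etc/vsftpd/user_list acts as a deny list by default, so it is worth checking whether the affected users appear in it. A sketch of pinning the passive port range so it can be opened in the Fedora firewall (the range itself is an assumption):

        # Sketch: fix the passive-mode data ports so the firewall can allow them.
        # The 40000-40100 range is an example, not taken from the question.
        pasv_enable=YES
        pasv_min_port=40000
        pasv_max_port=40100

    The matching firewall change would then allow TCP 40000-40100 inbound alongside port 21.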

    Read the article

  • KB2667402 update fails to install with 0x80004005. Yet it's also showing as installed

    - by growse
    I've got 15 Windows 2008 R2 x64 servers that I manage with SCCM 2012. I've noticed in my Windows updates reporting that two boxes are showing as 'error' for total updates installed. Digging around, it looks like the update that is failing to install is KB2667402. The Software Centre on the server itself shows the following: "The software change returned error code 0x80004005 (-2147467259)." So SCCM thinks it hasn't installed the update. However, if I go to the Programs and Features application and select 'Windows Updates', I can see an entry for KB2667402. If I try to uninstall this, I get an error: "An error occurred. Not all of the updates were successfully uninstalled." If I try to download the patch from Microsoft directly, I get the same error installing as displayed in Software Centre. The only odd thing about this setup that I can think would affect this is that I run the RDP service on a non-standard port. However, I do this across all the servers, so it seems odd that it would fail on just 2 out of 15. The tail of the WindowsUpdate.log file is below:

        2012-06-26 15:33:53:184 3924 1608 COMAPI -------------
        2012-06-26 15:33:53:190 3924 1608 COMAPI -- START -- COMAPI: Install [ClientId = CcmExec]
        2012-06-26 15:33:53:190 3924 1608 COMAPI ---------
        2012-06-26 15:33:53:190 3924 1608 COMAPI - Allow source prompts: No; Forced: No; Force quiet: Yes
        2012-06-26 15:33:53:190 3924 1608 COMAPI - Updates in request: 1
        2012-06-26 15:33:53:190 3924 1608 COMAPI - ServiceID = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7} Managed
        2012-06-26 15:33:53:199 860 1198 Agent *************
        2012-06-26 15:33:53:199 860 1198 Agent ** START ** Agent: Installing updates [CallerId = CcmExec]
        2012-06-26 15:33:53:199 860 1198 Agent *********
        2012-06-26 15:33:53:199 860 1198 Agent * Updates to install = 1
        2012-06-26 15:33:53:201 860 1198 Agent * Title = Security Update for Windows Server 2008 R2 x64 Edition (KB2667402)
        2012-06-26 15:33:53:201 860 1198 Agent * UpdateId = {48859BE4-1331-4CD2-8E70-3B537180A0D0}.103
        2012-06-26 15:33:53:201 860 1198 Agent * Bundles 1 updates:
        2012-06-26 15:33:53:201 860 1198 Agent * {D854ECF1-99A7-4D67-B435-2D041BF79565}.103
        2012-06-26 15:33:53:204 3924 1608 COMAPI - Updates to install = 1
        2012-06-26 15:33:53:204 3924 1608 COMAPI <<-- SUBMITTED -- COMAPI: Install [ClientId = CcmExec]
        2012-06-26 15:33:53:221 860 1198 Agent WARNING: failed to calculate prior restore point time with error 0x80070002; setting restore point
        2012-06-26 15:33:53:222 860 1198 Agent WARNING: LoadLibrary failed for srclient.dll with hr:8007007e
        2012-06-26 15:33:53:322 860 1198 DnldMgr Preparing update for install, updateId = {D854ECF1-99A7-4D67-B435-2D041BF79565}.103.
        2012-06-26 15:33:53:325 5700 117c Misc =========== Logging initialized (build: 7.5.7601.17514, tz: +0100) ===========
        2012-06-26 15:33:53:325 5700 117c Misc = Process: C:\Windows\system32\wuauclt.exe
        2012-06-26 15:33:53:325 5700 117c Misc = Module: C:\Windows\system32\wuaueng.dll
        2012-06-26 15:33:53:324 5700 117c Handler :::::::::::::
        2012-06-26 15:33:53:325 5700 117c Handler :: START :: Handler: CBS Install
        2012-06-26 15:33:53:325 5700 117c Handler :::::::::
        2012-06-26 15:33:53:330 5700 117c Handler Starting install of CBS update D854ECF1-99A7-4D67-B435-2D041BF79565
        2012-06-26 15:33:53:342 5700 117c Handler CBS package identity: Package_for_KB2667402~31bf3856ad364e35~amd64~~6.1.2.0
        2012-06-26 15:33:53:366 5700 117c Handler Installing self-contained with source=C:\Windows\SoftwareDistribution\Download\44059e0415033d6f699a50ef69dd5ff2\windows6.1-kb2667402-v2-x64.cab, workingdir=C:\Windows\SoftwareDistribution\Download\44059e0415033d6f699a50ef69dd5ff2\inst
        2012-06-26 15:33:56:270 5700 3b8 Handler FATAL: CBS called Error with 0x80004005,
        2012-06-26 15:33:56:402 5700 117c Handler FATAL: Completed install of CBS update with type=0, requiresReboot=0, installerError=1, hr=0x80004005
        2012-06-26 15:33:56:405 5700 117c Handler :::::::::
        2012-06-26 15:33:56:406 5700 117c Handler :: END :: Handler: CBS Install
        2012-06-26 15:33:56:406 5700 117c Handler :::::::::::::
        2012-06-26 15:33:56:433 860 1198 Agent *********
        2012-06-26 15:33:56:433 860 1198 Agent ** END ** Agent: Installing updates [CallerId = CcmExec]
        2012-06-26 15:33:56:433 860 1198 Agent *************
        2012-06-26 15:33:56:433 860 d14 AU Can not perform non-interactive scan if AU is interactive-only
        2012-06-26 15:33:56:450 3924 e40 COMAPI >>-- RESUMED -- COMAPI: Install [ClientId = CcmExec]
        2012-06-26 15:33:56:450 3924 e40 COMAPI - Install call complete (succeeded = 0, succeeded with errors = 0, failed = 1, unaccounted = 0)
        2012-06-26 15:33:56:450 3924 e40 COMAPI - Reboot required = No
        2012-06-26 15:33:56:450 3924 e40 COMAPI - WARNING: Exit code = 0x00000000; Call error code = 0x80240022
        2012-06-26 15:33:56:451 3924 e40 COMAPI ---------
        2012-06-26 15:33:56:451 3924 e40 COMAPI -- END -- COMAPI: Install [ClientId = CcmExec]
        2012-06-26 15:33:56:451 3924 e40 COMAPI -------------
        2012-06-26 15:33:56:536 860 13a4 AU Triggering Offline detection (non-interactive)
        2012-06-26 15:33:56:536 860 d14 AU #############
        2012-06-26 15:33:56:536 860 d14 AU ## START ## AU: Search for updates
        2012-06-26 15:33:56:536 860 d14 AU #########
        2012-06-26 15:33:56:539 860 d14 AU <<## SUBMITTED ## AU: Search for updates [CallId = {2DBB046C-2265-421B-A37B-93BDECC6C261}]
        2012-06-26 15:33:56:539 860 1788 Agent *************
        2012-06-26 15:33:56:539 860 1788 Agent ** START ** Agent: Finding updates [CallerId = AutomaticUpdates]
        2012-06-26 15:33:56:539 860 1788 Agent *********
        2012-06-26 15:33:56:539 860 1788 Agent * Online = No; Ignore download priority = No
        2012-06-26 15:33:56:539 860 1788 Agent * Criteria = "IsInstalled=0 and DeploymentAction='Installation' or IsPresent=1 and DeploymentAction='Uninstallation' or IsInstalled=1 and DeploymentAction='Installation' and RebootRequired=1 or IsInstalled=0 and DeploymentAction='Uninstallation' and RebootRequired=1"
        2012-06-26 15:33:56:539 860 1788 Agent * ServiceID = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7} Managed
        2012-06-26 15:33:56:539 860 1788 Agent * Search Scope = {Machine}
        2012-06-26 15:33:58:562 860 1788 Agent * Found 0 updates and 70 categories in search; evaluated appl. rules of 180 out of 1072 deployed entities
        2012-06-26 15:33:58:565 860 1788 Agent *********
        2012-06-26 15:33:58:565 860 1788 Agent ** END ** Agent: Finding updates [CallerId = AutomaticUpdates]
        2012-06-26 15:33:58:565 860 1788 Agent *************
        2012-06-26 15:33:58:650 860 f2c AU >>## RESUMED ## AU: Search for updates [CallId = {2DBB046C-2265-421B-A37B-93BDECC6C261}]
        2012-06-26 15:33:58:650 860 f2c AU # 0 updates detected
        2012-06-26 15:33:58:650 860 f2c AU #########
        2012-06-26 15:33:58:650 860 f2c AU ## END ## AU: Search for updates [CallId = {2DBB046C-2265-421B-A37B-93BDECC6C261}]
        2012-06-26 15:33:58:650 860 f2c AU #############
        2012-06-26 15:33:58:650 860 f2c AU Featured notifications is disabled.
        2012-06-26 15:33:58:651 860 f2c AU Successfully wrote event for AU health state:0
        2012-06-26 15:33:58:652 860 f2c AU Successfully wrote event for AU health state:0
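
    Not from the thread, but when CBS reports 0x80004005 the usual next step is to check component-store health. A sketch of the standard servicing checks on 2008 R2; the System Update Readiness Tool is KB947821 and writes its findings to CheckSUR.log:

        rem Sketch: standard servicing checks; run from an elevated prompt.
        sfc /scannow
        findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log > "%userprofile%\Desktop\sfcdetails.txt"
        rem After installing the System Update Readiness Tool (KB947821):
        type %windir%\Logs\CBS\CheckSUR.log

    Corruption reported in either log would explain why the same package fails both through SCCM and when installed manually.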

    Read the article

< Previous Page | 223 224 225 226 227 228 229 230 231 232 233 234  | Next Page >