Search Results

Search found 1456 results on 59 pages for 'brian carroll'.


  • Ajax/PHP - should I use one long running script or polling?

    - by Brian
    Hello, I have a PHP script that is kicked off via ajax. This PHP script uses exec() to run a separate PHP script via the shell. The script that is called via exec() may take 30 seconds or so to complete. I need to update the UI once it is finished. Which of these options is preferred? a) Leave the HTTP connection open for the 30 seconds and wait for it to finish. b) Have exec() run the PHP script in the background and then use ajax polling to check for completion (every 5 seconds or so). c) Something else that I haven't thought of. Thank you, Brian
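
    A minimal sketch of option (b), assuming the background script can drop a flag file when it finishes; the script and file names here are illustrative, not from the original post.

      <?php
      // Kick off the long job detached from the request, then return immediately.
      // long_job.php should touch /tmp/long_job.done as its last step.
      exec('php long_job.php > /dev/null 2>&1 &');

      // check_status.php - polled by the browser until the flag file appears.
      header('Content-Type: application/json');
      echo json_encode(array('done' => file_exists('/tmp/long_job.done')));

      // Client side: poll every 5 seconds, stop once the job reports completion.
      var poll = setInterval(function () {
          $.getJSON('check_status.php', function (status) {
              if (status.done) {
                  clearInterval(poll);
                  // update the UI here
              }
          });
      }, 5000);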

    Read the article

  • jQuery/javascript events - prototype event handler

    - by Brian M. Hunt
    The following code doesn't work as I intuitively expect it to: function MyObject(input) { input.change(this._foo); this.X = undefined; } MyObject.prototype._foo = function() { alert("This code is never called"); // but if it did this.X = true; } var test_input = $("input#xyz"); // a random, existing input var m = MyObject(test_input); // attach handler (or try to) test_input.change(); // trigger event alert(m.X); // undefined I'd expect that _foo() would be called (and, if that ever happens, that the this variable in _foo() would be an instantiation of MyObject. Does anyone know why this doesn't work, and of any alternative pattern for passing an object to an event handler? Thank you for reading. Brian
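
    Two things are going on in the snippet: MyObject is called without new (so m is undefined), and jQuery invokes the handler with this bound to the DOM element rather than the object. A sketch of one common fix, capturing the instance in a closure (jQuery 1.4's $.proxy(this, '_foo') does the same job):

      function MyObject(input) {
          var self = this;                  // keep a reference to the instance
          input.change(function () {        // the closure restores the intended `this`
              self._foo();
          });
          this.X = undefined;
      }
      MyObject.prototype._foo = function () {
          this.X = true;
      };

      var m = new MyObject($("input#xyz")); // `new` matters: without it, m is undefined
      $("input#xyz").change();              // triggers _foo via the closure
      alert(m.X);                           // true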

    Read the article

  • No device file for partition on logical volume (Linux LVM)

    - by Brian
    I created a logical volume (scandata) containing a single ext3 partition. It is the only logical volume in its volume group (case4t). Said volume group is comprised by 3 physical volumes, which are three primary partitions on a single block device (/dev/sdb). When I created it, I could mount the partition via the block device /dev/mapper/case4t-scandatap1. Since last reboot the aforementioned block device file has disappeared. It may be of note -- I'm not sure -- that my superior (a college professor) had prompted this reboot by running sudo chmod -R [his name] /usr/bin, which obliterated all suid in its path, preventing the both of us from sudo-ing. That issue has been (temporarily) rectified via this operation. Now I'll cut the chatter and get started with the terminal dumps: $ sudo pvs; sudo vgs; sudo lvs Logging initialised at Sat Jan 8 11:42:34 2011 Set umask to 0077 Scanning for physical volume names PV VG Fmt Attr PSize PFree /dev/sdb1 case4t lvm2 a- 819.32G 0 /dev/sdb2 case4t lvm2 a- 866.40G 0 /dev/sdb3 case4t lvm2 a- 47.09G 0 Wiping internal VG cache Logging initialised at Sat Jan 8 11:42:34 2011 Set umask to 0077 Finding all volume groups Finding volume group "case4t" VG #PV #LV #SN Attr VSize VFree case4t 3 1 0 wz--n- 1.69T 0 Wiping internal VG cache Logging initialised at Sat Jan 8 11:42:34 2011 Set umask to 0077 Finding all logical volumes LV VG Attr LSize Origin Snap% Move Log Copy% Convert scandata case4t -wi-a- 1.69T Wiping internal VG cache $ sudo vgchange -a y Logging initialised at Sat Jan 8 11:43:14 2011 Set umask to 0077 Finding all volume groups Finding volume group "case4t" 1 logical volume(s) in volume group "case4t" already active 1 existing logical volume(s) in volume group "case4t" monitored Found volume group "case4t" Activated logical volumes in volume group "case4t" 1 logical volume(s) in volume group "case4t" now active Wiping internal VG cache $ ls /dev | grep case4t case4t $ ls /dev/mapper case4t-scandata control $ sudo fdisk -l /dev/case4t/scandata Disk /dev/case4t/scandata: 1860.5 GB, 1860584865792 bytes 255 heads, 63 sectors/track, 226203 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x00049bf5 Device Boot Start End Blocks Id System /dev/case4t/scandata1 1 226203 1816975566 83 Linux $ sudo parted /dev/case4t/scandata print Model: Linux device-mapper (linear) (dm) Disk /dev/mapper/case4t-scandata: 1861GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 32.3kB 1861GB 1861GB primary ext3 $ sudo fdisk -l /dev/sdb Disk /dev/sdb: 1860.5 GB, 1860593254400 bytes 255 heads, 63 sectors/track, 226204 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x00000081 Device Boot Start End Blocks Id System /dev/sdb1 1 106955 859116006 83 Linux /dev/sdb2 113103 226204 908491815 83 Linux /dev/sdb3 106956 113102 49375777+ 83 Linux Partition table entries are not in disk order $ sudo parted /dev/sdb print Model: DELL PERC 6/i (scsi) Disk /dev/sdb: 1861GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 32.3kB 880GB 880GB primary reiserfs 3 880GB 930GB 50.6GB primary 2 930GB 1861GB 930GB primary I find it a bit strange that partition one above is said to be reiserfs, or if it matters -- it was previously reiserfs, but LVM recognizes it as a PV. To reiterate, neither /dev/mapper/case4t-scandatap1 (which I had used previously) nor /dev/case4t/scandata1 (as printed by fdisk) exists. 
And /dev/case4t/scandata (no partition number) cannot be mounted: $sudo mount -t ext3 /dev/case4t/scandata /mnt/new mount: wrong fs type, bad option, bad superblock on /dev/mapper/case4t-scandata, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so All I get on syslog is: [170059.538137] VFS: Can't find ext3 filesystem on dev dm-0. Thanks in advance for any help you can offer, Brian P.S. I am on Ubuntu GNU/Linux 2.6.28-11-server (Jaunty) (out of date, I know -- that's on the laundry list).
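
    One thing worth checking, assuming the kpartx tool is available (it ships separately from the core LVM utilities): the missing case4t-scandatap1 node is the device-mapper mapping for the partition table inside the LV, and kpartx can recreate it without touching the data.

      # Recreate the partition mapping(s) found inside the logical volume
      sudo kpartx -a /dev/mapper/case4t-scandata
      ls /dev/mapper       # case4t-scandatap1 (or case4t-scandata1, depending on version) should be back
      sudo mount -t ext3 /dev/mapper/case4t-scandatap1 /mnt/new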

    Read the article

  • First Test Crashes using MSTEST with ASP.NET MVC 1

    - by Trey Carroll
    I'm trying to start using Unit Testing and I want to test the following Controller: public class AjaxController : Controller { ... public JsonResult RateVideo( int userRating, long videoId ) { string userName = User.Identity.Name; ... } } I have created a TestClass with the following method: [TestMethod] public void TestRateVideo() { //Arrange AjaxController c = new AjaxController(); //Act JsonResult jr = c.RateVideo(1, 1); //Assert //Not implemented yet } I select debug and run the test. When the code reaches the 1st statement: string username = User.Identity.Name; Debugging stops and I am presented with a message that says that the test failed. Any guidance you can offer would be appreciated.
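
    When the controller runs outside ASP.NET, HttpContext (and therefore User) is null, which is consistent with the crash on that line. A sketch of one way to supply a fake user, assuming the Moq library (any mocking framework, or a hand-rolled HttpContextBase fake, would do):

      using System.Security.Principal;
      using System.Web;
      using System.Web.Mvc;
      using System.Web.Routing;
      using Moq;

      // Arrange: build a controller whose context carries an authenticated user.
      var httpContext = new Mock<HttpContextBase>();
      httpContext.Setup(c => c.User)
                 .Returns(new GenericPrincipal(new GenericIdentity("testuser"), new string[0]));

      var controller = new AjaxController();
      controller.ControllerContext =
          new ControllerContext(httpContext.Object, new RouteData(), controller);

      // Act
      JsonResult jr = controller.RateVideo(1, 1);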

    Read the article

  • Thinking Sphinx and acts_as_taggable_on plugin

    - by Brian Roisentul
    Hi, I installed Sphinx and Thinking Sphinx for ruby on rails 2.3.2. When I search without conditions search works ok. Now, what I'd like to do is filter by tags, so, as I'm using the acts_as_taggable_on plugin, my Announcement model looks like this: class Announcement < ActiveRecord::Base acts_as_taggable_on :tags,:category define_index do indexes title, :as => :title, :sortable => true indexes description, :as => :description, :sortable => true indexes tags.name, :as => :tags indexes category.name, :as => :category has category(:id), :as => :category_ids has tags(:id), :as => :tag_ids end For some reason, when I run the following command, it will bring just one announcement, that has nothing to do with what I expect. I've got many announcements, so I expected a lot of results instead. Announcement.search params[:announcement][:search].to_s, :with => {:tag_ids => 1}, :page => params[:page], :per_page => 10 I guess something is wrong, and it's not searching correctly. Can anyone give my a clue of what's going on? Thanks, Brian

    Read the article

  • Trouble getting NSString from NSDictionary key into UILabel

    - by Brian
    I'm attempting to put the value associated with the key called "duration" into a UILabel but I'm getting a blank or "(null)" result showing up in the UILabel. My NSDictionary object with its keys seems to be logging as being full of the data and keys I think I want, as such: the content of thisRecordingsStats is { "12:48:25 AM, April 25" = { FILEPATH = "/Users/brian/Library/Application Support/iPhone Simulator/3.1.3/Applications/97256A91-FC47-4353-AD01-15CD494060DD/Documents/12:48:25 AM, April 25.aif"; duration = "00:04"; applesCountString = 0; ...and so on. Here's the code where I'm trying to put the NSString into the UILabel: cell.durationLabel.text = [NSString stringWithFormat:@"%@",[thisRecordingsStats objectForKey:@"duration"]]; I've also tried these other permutations: cell.durationLabel.text = [thisRecordingsStats objectForKey:@"duration"]; and I've also tried this tag-based approach: label = (UILabel *)[cell viewWithTag:8]; label.text = [[thisRecordingsStats objectForKey:@"duration"] objectAtIndex:1]; and: UILabel *label; label = (UILabel *)[cell viewWithTag:8]; label.text = [NSString stringWithFormat:@"%@",[[thisRecordingsStats objectForKey:@"duration"] objectAtIndex:1]]; I've also tried creating a string from the key's paired value and see a "(null)" value or blankness using that too. What am I missing? I assume it's something with the formatting of the string. Thanks for looking!!
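
    Going by the logged structure, thisRecordingsStats is keyed by the recording's timestamp, and "duration" lives one level down inside the nested dictionary, so the lookup has to go through that inner dictionary first. A sketch (the literal key is illustrative; in a table view it would come from the row's data):

      // Fetch the per-recording dictionary first, then its "duration" entry.
      NSString *recordingKey = @"12:48:25 AM, April 25";
      NSDictionary *stats = [thisRecordingsStats objectForKey:recordingKey];
      cell.durationLabel.text = [stats objectForKey:@"duration"];   // "00:04"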

    Read the article

  • WCF ReliableMessaging method called twice

    - by Brian
    Using Fiddler, we see 3 HTTP requests (and matching responses) for each call when: WS-ReliableMessaging is enabled, and, the method returns a large amount of data (17MB) The first HTTP request is a SOAP message with the action "CreateSequence" (presumable to establish the reliable session). The second and third HTTP requests are identical SOAP messages invoking our webservice method. Why are there two identical messages? Here is our config: <system.serviceModel> <client> <endpoint address="http://server/vdir/AccountingService.svc" binding="wsHttpBinding" bindingConfiguration="customWsHttpBinding" behaviorConfiguration="LargeServiceBehavior" contract="MyProject.Accounting.IAccountingService" name="BasicHttpBinding_IAccountingService" /> </client> <bindings> <wsHttpBinding> <binding name="customWsHttpBinding" maxReceivedMessageSize="90000000"> <reliableSession enabled="true"/> <security mode="None" /> </binding> </wsHttpBinding> </bindings> <behaviors> <endpointBehaviors> <behavior name="LargeServiceBehavior"> <dataContractSerializer maxItemsInObjectGraph="2147483647"/> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> Thanks, Brian

    Read the article

  • How to improve the use of Delphi Frames

    - by Brian Frost
    I've used frames in Delphi for years, and they are one of the most powerful features of the VCL, but standard use of them seems to have some risk such as: It's easy to accidentally move or edit the frame sub-components on a frame's host form without realising that you are 'tweaking' with the frame - I know this does not affect the original frame code, but it's generally not what you would want. When working with the frame you are still exposed to its sub-components for visual editing, even when that frame is years old and should not be touched. So I got to thinking.... Is there a way of 'grouping' components such that their positions are 'locked'? This would be useful for finished forms as well as frames. Often other developers return code to me where only the form bounds have changed and even they did not intend any change. Is there any way of turning a frame and its components into a single Delphi component? If so, the frame internals would be completely hidden and its useability would increase further. I'm interested in any thoughts... Brian.
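
    One partial answer to the second question: a frame can be registered in a design-time package, after which it is dropped as a single palette component and its internals are no longer casually editable on the host form. A sketch, with illustrative unit and class names:

      unit MyFrameReg;

      interface

      procedure Register;

      implementation

      uses
        Classes, MyFrameUnit;   // MyFrameUnit declares TMyFrame

      procedure Register;
      begin
        // Install this unit in a design-time package; TMyFrame then appears
        // on the component palette like any other control.
        RegisterComponents('My Frames', [TMyFrame]);
      end;

      end.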

    Read the article

  • Create Jinja2 macros that put content in separate places

    - by Brian M. Hunt
    I want to create a table of contents and endnotes in a Jinja2 template. How can one accomplish these tasks? For example, I want to have a template as follows: {% block toc %} {# ... the ToC goes here ... #} {% endblock %} {% include "some other file with content.jnj" %} {% block endnotes %} {# ... the endnotes go here ... #} {% endblock %} Where the some other file with content.jnj has content like this: {% section "One" %} Title information for Section One (may be quite long); goes in Table of Contents ... Content of section One {% section "Two" %} Title information of Section Two (also may be quite long) <a href="#" id="en1">EndNote 1</a> <script type="text/javsacript">...(may be reasonably long) </script> {# ... Everything up to here is included in the EndNote #} Where I say "may be quite/reasonably long" I mean to say that it can't reasonably be put into quotes as an argument to a macro or global function. I'm wondering if there's a pattern for this that may accommodate this, within the framework of Jinja2. My initial thought is to create an extension, so that one can have a block for sections and end-notes, like-so: {% section "One" %} Title information goes here. {% endsection %} {% endnote "one" %} <a href="#">...</a> <script> ... </script> {% endendnote %} Then have global functions (that pass in the Jinja2 Environment): {{ table_of_contents() }} {% include ... %} {{ endnotes() }} However, while this will work for endnotes, I'd presume it requires a second pass by something for the table of contents. Thank you for reading. I'd be much obliged for your thoughts and input. Brian

    Read the article

  • jQuery AJAX call not working in Webkit

    - by Brian
    I've run into a strange issue with Webkit based browsers (both Safari and Chrome - I'm testing on a Mac) and I am not sure what is causing it. Here's a small script I've created that replicates the issue: <html> <head> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script> <script type="text/javascript" language="javascript"> function doRequest() { document.test.submit(); $.ajax({ type: "GET", cache: false, url: 'ajax.php?tmp=1', success: doSuccess }); } function doSuccess(t_data,t_status,req) { alert('Data is: '+ t_data +', XMLHTTPRequest status is: '+ req.status); } </script> </head> <body> <form name="test" method="post" action="ajax.html" enctype="multipart/form-data"> <input type="file" name="file_1"> <br><input type="button" value="upload" onclick="doRequest();"> </form> </body> </html> ajax.php is: <?php echo $_REQUEST['tmp']; ?> This works as is on Firefox, but the XMLHTTPRequest status is always "0" on both Safari and Chrome. If I remove this line: document.test.submit(); then it works, but of course the form is not submitted. I've tried changing the form submit button from "button" to "submit", but that also prevents it from working on Safari or Chrome. What I am trying to accomplish is: submit the form call another script to get status on the file being uploaded via the form (it's for a small upload progress meter). Any help is really appreciated - I'm hopeful it is just a quirk I'm not familiar with. Thanks! Brian
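
    The submit() call starts a normal page navigation, and WebKit aborts the in-flight XMLHttpRequest when that happens, which is typically what a status of 0 means here. A sketch of one common workaround: point the form at a hidden iframe so the page itself stays put while the progress polling runs.

      <iframe name="upload_target" style="display: none;"></iframe>

      <form name="test" method="post" action="ajax.html" target="upload_target"
            enctype="multipart/form-data">
        <input type="file" name="file_1">
        <br><input type="button" value="upload" onclick="doRequest();">
      </form>

    doRequest() itself can stay as written.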

    Read the article

  • create app that has plugin which contains PyQt widget

    - by brian
    I'm writing an application that will use plugins. In the plugin I want to include a widget that allows the options for that plugin to be setup. The plugin will also include methods to operate on the data. What is the best way to include a widget in a plugin? Below is pseudo code for what I've tried to do. My original plan was to make the options widget: class myOptionsWidget(QWidget): “”” create widget for plug in options “”” …. Next I planned on including the widget in my plugin: class myPlugin def __init__(self): self.optionWidget = myOptionsWidget() self.pluginNum = 1 …. def getOptionWidget(self): return(self.optionWidget) Then at the top level I'd do something like a = myPlugin() form = createForm(option=a.getOptionWidget()) … where createForm would create the form and include my plugin options widget. But when I try "a = myPlugin()" I get the error "QWidget: Must construct a QApplication before a QpaintDevice" so this method won't work. I know I would store the widget as a string and call eval on it but I'd rather not do that in case later on I want to convert the program to C++. What is the best way to write a plugin that includes a widget that has the options? Brian
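
    The error comes from constructing a QWidget before any QApplication exists, which is what happens when the plugin's __init__ builds the options widget eagerly. A sketch of one way around it, creating the widget lazily so the host application controls the order (PyQt4 and the names below are assumptions for illustration):

      import sys
      from PyQt4.QtGui import QApplication, QWidget

      class MyOptionsWidget(QWidget):
          """Options editor supplied by the plugin."""
          pass  # build the option controls here

      class MyPlugin(object):
          def __init__(self):
              self.plugin_num = 1
              self._options = None          # not built yet - no QApplication needed

          def get_option_widget(self, parent=None):
              # Construct on first request, by which time QApplication exists.
              if self._options is None:
                  self._options = MyOptionsWidget(parent)
              return self._options

      if __name__ == '__main__':
          app = QApplication(sys.argv)      # must come before any QWidget
          plugin = MyPlugin()
          plugin.get_option_widget().show()
          sys.exit(app.exec_())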

    Read the article

  • Moose read-only Attribute Traits and how to set them?

    - by Evan Carroll
    How do I set a Moose read only attribute trait? package AttrTrait; use Moose::Role; has 'ext' => ( isa => 'Str', is => 'ro' ); package Class; has 'foo' => ( isa => 'Str', is => 'ro', traits => [qw/AttrTrait/] ); package main; my $c = Class->new( foo => 'ok' ); $c->meta->get_attribute('foo')->ext('die') # ro attr trait What is the purpose of Read Only attribute traits if you can't set it in the constructor or in runtime? Is there something I'm missing in Moose::Meta::Attribute? Is there a way to set it using meta? $c->meta->get_attr('ext')->set_value('foo') # doesn't work either (attribute trait provided not class provided method)
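
    As I read it, a read-only trait attribute is meant to be populated when the attribute itself is declared: options passed to has that the meta-attribute's trait knows about are handed to its constructor. A sketch of that, hedged as my understanding of how Moose routes the extra option:

      package Class;
      use Moose;

      has 'foo' => (
          isa    => 'Str',
          is     => 'ro',
          traits => ['AttrTrait'],
          ext    => 'value set at declaration',   # consumed by the trait's ro attribute
      );

      package main;
      my $c = Class->new( foo => 'ok' );
      print Class->meta->get_attribute('foo')->ext, "\n";   # value set at declaration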

    Read the article

  • Ldap query returns null result when deployed.

    - by Trey Carroll
    I'm using a very simple Ldap query in my asp.net mvc 2.0 site: String ldapPath = ConfigReader.LdapPath; String emailAddress = null; try { DirectorySearcher search = new DirectorySearcher(ConfigReader.LdapPath); search.Filter = String.Format("(&(objectClass=user)(objectCategory=person)(objectSid={0})) ", securityIdentifierValue); // add the mail property to the list of props to retrieve search.PropertiesToLoad.Add("mail"); var result = search.FindOne(); if (result == null) { throw new Exception("Ldap Query with filter:" + search.Filter.ToString() + " returned a null value (no match found)"); } else { emailAddress = result.Properties["mail"][0].ToString(); } } catch (ArgumentOutOfRangeException aoorEx) { throw new Exception( "The query could not find an email for this user."); } catch (Exception ex) { //_log.Error(string.Format("======!!!!!! ERROR ERROR ERROR !!!!! in LdapLookupUtil.cs getEmailFromLdap Exception: {0}", ex)); throw ex; } return emailAddress; It works fine on my localhost machine. It works fine when I run it in VS2010 on the server. It always returns a null result when deployed. Here is my web.config: (only the stock configuration comments survived in this excerpt; the actual authentication and customErrors elements are not shown). I'm running it under the default app pool. Does anybody see the problem? This is driving me crazy!
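
    A frequent cause of exactly this symptom is the identity difference: on the developer machine the query runs as the logged-in domain user, while the deployed site runs as the application pool account, which may not be allowed to query the directory. A sketch of binding with an explicit service account (the account name and password are placeholders):

      // using System.DirectoryServices;
      // Bind with explicit credentials instead of the worker process identity.
      DirectoryEntry root = new DirectoryEntry(
          ConfigReader.LdapPath, @"MYDOMAIN\ldapServiceAccount", "password");
      DirectorySearcher search = new DirectorySearcher(root);
      search.Filter = String.Format(
          "(&(objectClass=user)(objectCategory=person)(objectSid={0}))",
          securityIdentifierValue);
      search.PropertiesToLoad.Add("mail");
      SearchResult result = search.FindOne();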

    Read the article

  • PerlIO in Windows PowerShell and CMD.exe

    - by Evan Carroll
    Apparently, a Perl script I have results in two different output files depending on if I run it under Windows PowerShell, or cmd.exe. The script can be found at the bottom of this question. The file handle is opened with IO::File, I believe that PerlIO is doing some screwy stuff. It seems as if under cmd.exe the encoding chosen is much more compact encoding (4.09 KB), as compared to PowerShell which generates a file nearly twice the size (8.19 KB). This script takes a shell script and generates a Windows batch file. It seems like the one generated under cmd.exe is just regular ASCII (1 byte character), while the other one appears to be UTF-16 (first two bytes FF FE) Can someone verify and explain why PerlIO works differently under Windows Powershell than cmd.exe? Also, how do I explicitly get an ASCII-magic PerlIO filehandle using IO::File? Currently, only the file generated with cmd.exe is executable. The UTF-16 .bat (I think that's the encoding) is not executable by either PowerShell or cmd.exe. BTW, we're using Perl 5.12.1 for MSWin32 #!/usr/bin/env perl use strict; use warnings; use File::Spec; use IO::File; use IO::Dir; use feature ':5.10'; my $bash_ftp_script = File::Spec->catfile( 'bin', 'dm-ftp-push' ); my $fh = IO::File->new( $bash_ftp_script, 'r' ) or die $!; my @lines = grep $_ !~ /^#.*/, <$fh>; my $file = join '', @lines; $file =~ s/ \\\n/ /gm; $file =~ tr/'\t/"/d; $file =~ s/ +/ /g; $file =~ s/\b"|"\b/"/g; my @singleLnFile = grep /ncftp|echo/, split $/, $file; s/\$PWD\///g for @singleLnFile; my $dh = IO::Dir->new( '.' ); my @files = grep /\.pl$/, $dh->read; say 'echo off'; say "perl $_" for @files; say for @singleLnFile; 1;
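
    If the .bat file is produced by redirecting the script's output (the say calls suggest it is), the likely culprit is the shell rather than PerlIO: PowerShell's > redirection writes UTF-16LE by default, while cmd.exe passes the bytes through untouched. A sketch of two ways to pin the encoding down; the output file name is an assumption:

      # Option 1: write the batch file from Perl, so the shell's redirection
      # (and its default encoding) is no longer involved.
      my $out = IO::File->new( 'dm-ftp-push.bat', 'w' ) or die $!;
      $out->print( "echo off\n" );
      $out->print( "perl $_\n" ) for @files;
      $out->print( "$_\n" )      for @singleLnFile;
      $out->close;

      # Option 2: keep printing to STDOUT, but let PowerShell write ASCII:
      #   perl make-bat.pl | Out-File dm-ftp-push.bat -Encoding ascii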

    Read the article

  • kick off a map reduce job from my java/mysql webapp

    - by Brian
    Hi guys, I need a bit of architecture advice. I have a java based webapp, with a JPA based ORM backed onto a mysql relational database. Now, as part of the application I have a batch job that compares thousands of database records with each other. This job has become too time consuming and needs to be parallelized. I'm looking at using mapreduce and hadoop in order to do this. However, I'm not too sure about how to integrate this into my current architecture. I think the easiest initial solution is to find a way to push data from mysql into hadoop jobs. I have done some initial research on this and found the following relevant information and possibilities: 1) https://issues.apache.org/jira/browse/HADOOP-2536 this gives an interesting overview of some inbuilt JDBC support 2) This article http://architects.dzone.com/articles/tools-moving-sql-database describes some third party tools to move data from mysql to hadoop. To be honest I'm just starting out with learning about hbase and hadoop but I really don't know how to integrate this into my webapp. Any advice is greatly appreciated. cheers, Brian
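
    For the "push data from mysql into hadoop jobs" step, the built-in JDBC support mentioned in that JIRA issue surfaced as DBInputFormat. A rough sketch of wiring it up; the table, column and class names are illustrative, and MyRecord would need to implement Writable and DBWritable:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.mapreduce.Job;
      import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
      import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;

      Configuration conf = new Configuration();
      DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
              "jdbc:mysql://dbhost/mydb", "dbuser", "dbpassword");

      Job job = new Job(conf, "record-comparison");
      job.setInputFormatClass(DBInputFormat.class);
      // Feed rows of the "records" table to the mappers, ordered by id.
      DBInputFormat.setInput(job, MyRecord.class,
              "records", null, "id", "id", "field_a", "field_b");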

    Read the article

  • jQuery capture all changes to named input on a form

    - by Brian M. Hunt
    I'm trying to determine when any of a set of named input/select/radio/checked/hidden fields in a form change. In particular, I'd like to capture when any changes are made to fields matching jQuery's selector $("form :input"), and where that input is has a name attribute. However, the form isn't static i.e. some of the fields are dynamically added later. My initial thought is to keep track of when new named elements matching :input are added, and then add an event handler, like this: function on_change() { alert("The form element with name " + $(this).attr("name") + " has changed"); } function reg_new_e_handler(input_element) { input_element.change(on_change); } However, I'm quite hopeful I can avoid this with some jQuery magic. In particular, is there a way to register an event handler in jQuery that would handle input elements that match the following: $("form :input").filter( function () { $(this).attr("name") } ).change(on_change); However, have this event set update whenever new input elements are added. I've thought that it may be possible to capture keyup event on the form node with $("form").keyup(on_change), but I'm not so sure how one could capture change events. I'd also like this to capture keyup events. Thank you for reading. Brian
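
    Event delegation avoids re-registering handlers as inputs appear: one handler bound high in the document fires for anything matching the selector now or later. A sketch using the delegation available in the jQuery 1.4 line (live here, from 1.4.1; delegate from 1.4.2 works the same way scoped to the form):

      // Fires for named inputs that exist now or are added later.
      $("form :input[name]").live("change keyup", function () {
          alert("The form element with name " + $(this).attr("name") + " has changed");
      });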

    Read the article

  • Asp.net mvc - trying to display images pulled from db

    - by Trey Carroll
    //Inherits="System.Web.Mvc.ViewPage<FilmFestWeb.Models.ListVideosViewModel>" <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2>ListVideos</h2> <% foreach(BusinessObjects.Video vid in Model.VideoList){%> <div class="videoBox"> <%= Html.Encode(vid.Name) %> <img src="<%= vid.ThumbnailImage %>" /> </div> <%} %> </asp:Content> //ListVideosViewModel public class ListVideosViewModel { public IList<Video> VideoList { get; set; } } //Video public class Video { public long VideoId { get; set; } public long TeamId { get; set; } public string Name { get; set; } public string Tags { get; set; } public string TeamMembers { get; set; } public string TranscriptFileName { get; set; } public string VideoFileName { get; set; } public int TotalNumRatings { get; set; } public int CumulativeTotalScore { get; set; } public string VideoUri { get; set; } public Image ThumbnailImage { get; set; } } I am getting the "red x" that I usually associate with image file not found. I have verified that my database table shows after the stored proc that uploads the image executes. Any insight or advice would be greatly appreciated.

    Read the article

  • How do I use text in one cell to trigger row to be copied on another sheet in Excel?

    - by Brian Eby
    I provide all of the cut lists for our cabinet manufacturing in Excel. I tally all parts for the entire job on the first worksheet in an Excel file, and then filter the rows based on the "Material" column, and manually copy/paste each row in to its own material-specific worksheet (example: I filter "Materials" column for "Maple Ply", and then copy all "Maple Ply" rows to the "Maple Ply" worksheet). Then the material specific worksheets are sent to the shop floor for cutting. This is time consuming, and if I need to change any data in the first page, I have to go and manually update the copied row in its material-specific page. Is there any way to make each material page "look" for its material, and automatically populate itself with any row that has the appropriate material in the material column (example: any time I enter "Maple Ply" in the material column of sheet one, that row is automatically copied to the "Maple Ply" worksheet)? If so, could this link be dynamic, rather than just a copy, so that if I change a cell in a particular row on sheet one, that data is also updated on the material-specific worksheet copy? Thank you, Brian

    Read the article

  • Add multiple entities to Javascript namespace from different files

    - by Brian M. Hunt
    Given a namespaces ns used in two different files: abc.js ns = ns || (function () { foo = function() { ... }; return { abc : foo }; }()); def.js // is this correct? ns = ns || {} ns.def = ns.def || (function () { defoo = function () { ... }; return { deFoo: defoo }; }()); Is this the proper way to add def to the ns to a namespace? In other words, how does one merge two contributions to a namespace in javascript? If abc.js comes before def.js I'd expect this to work. If def.js comes before abc.js I'd expect ns.abc to not exist because ns is defined at the time. It seems there ought to be a design pattern to eliminate any uncertainty of doing inclusions with the javascript namespace pattern. I'd appreciate thoughts and input on how best to go about this sort of 'inclusion'. Thanks for reading. Brian
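
    The usual way to make load order irrelevant is to augment the namespace rather than define it wholesale: every file ensures the root object exists, then attaches only its own member. A sketch:

      // abc.js
      var ns = ns || {};
      ns.abc = ns.abc || (function () {
          function foo() { /* ... */ }
          return { abc: foo };
      }());

      // def.js
      var ns = ns || {};
      ns.def = ns.def || (function () {
          function deFoo() { /* ... */ }
          return { deFoo: deFoo };
      }());

    Because each file only touches its own property, abc.js and def.js can load in either order.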

    Read the article

  • Is Perl's flip-flop operator bugged? It has global state, how can I reset it?

    - by Evan Carroll
    I'm dismayed. Ok, so this was probably the most fun perl bug I've ever found. Even today I'm learning new stuff about perl. Essentially, the flip-flop operator .. which returns false until the left-hand-side returns true, and then true until the right-hand-side returns false keep global state (or that is what I assume.) My question is can I reset it, (perhaps this would be a good addition to perl4-esque hardly ever used reset())? Or, is there no way to use this operator safely? I also don't see this (the global context bit) documented anywhere in perldoc perlop is this a mistake? Code use feature ':5.10'; use strict; use warnings; sub search { my $arr = shift; grep { !( /start/ .. /never_exist/ ) } @$arr; } my @foo = qw/foo bar start baz end quz quz/; my @bar = qw/foo bar start baz end quz quz/; say 'first shot - foo'; say for search \@foo; say 'second shot - bar'; say for search \@bar; Spoiler $ perl test.pl first shot foo bar second shot
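
    The range operator's state is tied to that particular textual instance of .. and persists across calls to the enclosing sub, which is why the second call starts out "inside" the range. A sketch of one way to sidestep it, keeping the state in a lexical that resets on every call:

      sub search {
          my $arr  = shift;
          my $seen = 0;                 # fresh state on every call
          grep {
              $seen = 1 if /start/;     # left-hand condition of the old flip-flop
              !$seen;
          } @$arr;
      }

    With that change both calls print foo and bar.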

    Read the article

  • Reading a large file into Perl array of arrays and manipulating the output for different purposes

    - by Brian D.
    Hello, I am relatively new to Perl and have only used it for converting small files into different formats and feeding data between programs. Now, I need to step it up a little. I have a file of DNA data that is 5,905 lines long, with 32 fields per line. The fields are not delimited by anything and vary in length within the line, but each field is the same size on all 5905 lines. I need each line fed into a separate array from the file, and each field within the line stored as its own variable. I am having no problems storing one line, but I am having difficulties storing each line successively through the entire file. This is how I separate the first line of the full array into individual variables: my $SampleID = substr("@HorseArray", 0, 7); my $PopulationID = substr("@HorseArray", 9, 4); my $Allele1A = substr("@HorseArray", 14, 3); my $Allele1B = substr("@HorseArray", 17, 3); my $Allele2A = substr("@HorseArray", 21, 3); my $Allele2B = substr("@HorseArray", 24, 3); ...etc. My issues are: 1) I need to store each of the 5905 lines as a separate array. 2) I need to be able to reference each line based on the sample ID, or a group of lines based on population ID and sort them. I can sort and manipulate the data fine once it is defined in variables, I am just having trouble constructing a multidimensional array with each of these fields so I can reference each line at will. Any help or direction is much appreciated. I've poured over the Q&A sections on here, but have not found the answer to my questions yet. Thanks!! -Brian
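
    For fixed-width records, unpack with a template tends to read better than a pile of substr calls, and a hash keyed by sample ID (plus a second hash grouping sample IDs by population) covers both lookups described. A sketch covering the first few fields; the file name is a placeholder, and the widths mirror the offsets in the question, so the template would need extending to all 32 fields:

      my %by_sample;       # sample ID   => hash ref of that line's fields
      my %by_population;   # population  => array ref of sample IDs

      open my $fh, '<', 'horses.txt' or die $!;
      while ( my $line = <$fh> ) {
          chomp $line;
          my ( $sample, $population, $allele1a, $allele1b, $allele2a, $allele2b )
              = unpack 'A7 x2 A4 x A3 A3 x A3 A3', $line;
          $by_sample{$sample} = {
              population => $population,
              allele1a   => $allele1a,
              allele1b   => $allele1b,
              allele2a   => $allele2a,
              allele2b   => $allele2b,
          };
          push @{ $by_population{$population} }, $sample;
      }
      close $fh;

      # Every sample in population 1001, sorted by sample ID:
      my @ids = sort @{ $by_population{'1001'} || [] };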

    Read the article

  • Add centered text to the middle of a <hr/>-like line

    - by Brian M. Hunt
    I'm wondering what options one has in xhtml 1.0 strict to create a line on both sides of text like-so: Section one ----------------------- Next section ----------------------- Section two I've thought of doing some fancy things like this: <div style="float:left; width: 44%;"><hr/></div> <div style="float:right; width: 44%;"><hr/></div> Next section Or alternatively, because the above has problems with alignment (both vertical and horizontal): <table><tr> <td style="width:47%"><hr/></td> <td style="vertical-align:middle; text-align: center">Next section</td> <td style="width:47%"><hr/></td> </tr></table> However both options feel 'fudgy', and I'd be much obliged if you happened to have seen this before and know of an elegant solution. Thank you for reading. Brian
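
    A sketch of a CSS-only approach that validates as XHTML 1.0 Strict: the wrapper carries the rule as a bottom border pulled up to mid-text with a tiny line-height, and the span's background (assumed white here to match the page) masks the line behind the words.

      <style type="text/css">
        .divider {
          border-bottom: 1px solid #999;
          line-height: 0.1em;
          margin: 10px 0 20px;
          text-align: center;
        }
        .divider span { background: #fff; padding: 0 10px; }
      </style>

      <div class="divider"><span>Next section</span></div>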

    Read the article

  • Using ARIMA to model and forecast stock prices using user-friendly stats program

    - by Brian
    Hi people, Can anyone please offer some insight into this for me? I'm coming from a functional magnetic resonance imaging research background where I analyzed a lot of time series data, and I'd like to analyze the time series of stock prices (or returns) by: 1) modeling a successful stock in a particular market sector and then cross-correlating the time series of this historically successful stock with that of other newer stocks to look for significant relationships; 2) model a stock's price time series and use forecasting (e.g., exponential smoothing) to predict future values of it. I'd like to use non-linear modeling methods (ARIMA and ARCH) to do this. Several questions: How often do ARIMA and ARCH modeling methods (given that the individual who implements them does so accurately) actually fit the stock time series data they target, and what is the optimal fit I can expect? Is the extent to which this model fits the data commensurate with the extent to which it predicts this stock time series' future values? Rather than randomly selecting stocks to compare or model, if profit is my goal, what is an efficient approach, if any, to selecting the stocks I'm going to analyze? Which stats program is the most user-friendly for this? Any thoughts on this would be great and would go a long way for me. Thanks, Brian

    Read the article

  • Gridview delete/edit not working when using select parameter

    - by Brian Carroll
    new to ASP.NET. I created a sqldatasource and set up basic select query (SELECT * FROM Accounts) using the wizard. I then had the sqldatasource wizard create the INSERT, EDIT and DELETE queries. Connected this datasource to a gridview with EDITING and DELETING enabled. Everything works fine. The SELECT query returns all records and I can edit/delete them. Now I need to send a parameter to the SELECT command to filter the records to those with the user's id (pulled from Membership.GetUser). When I add this parameter, the SELECT command works fine, but the EDIT/DELETE buttons in the gridview no longer work. No error is generated. The page refreshes but the records were not updated in the database. I don't understand what is wrong. CODE: <% Dim u As MembershipUser Dim userid As String u = Membership.GetUser(User.Identity.Name) userid = u.ProviderUserKey.ToString SqlDataSource1.SelectParameters("UserId").DefaultValue = userid %> <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataKeyNames="ID" DataSourceID="SqlDataSource1"> <Columns> <asp:CommandField ShowDeleteButton="True" ShowEditButton="True" /> <asp:BoundField DataField="ID" HeaderText="ID" InsertVisible="False" ReadOnly="True" SortExpression="ID" /> <asp:BoundField DataField="UserId" HeaderText="UserId" SortExpression="UserId" /> <asp:BoundField DataField="AccountName" HeaderText="AccountName" SortExpression="AccountName" /> <asp:BoundField DataField="DateAdded" HeaderText="DateAdded" SortExpression="DateAdded" /> <asp:BoundField DataField="LastModified" HeaderText="LastModified" SortExpression="LastModified" /> </Columns> </asp:GridView> <asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:CheckingConnectionString %>" DeleteCommand="DELETE FROM [Accounts] WHERE [ID] = @ID" InsertCommand="INSERT INTO [Accounts] ([UserId], [AccountName], [DateAdded], [LastModified]) VALUES (@UserId, @AccountName, @DateAdded, @LastModified)" SelectCommand="SELECT * FROM [Accounts] WHERE [UserId] = @UserId" UpdateCommand="UPDATE [Accounts] SET [UserId] = @UserId, [AccountName] = @AccountName, [DateAdded] = @DateAdded, [LastModified] = @LastModified WHERE [ID] = @ID"> <DeleteParameters> <asp:Parameter Name="ID" Type="Int32" /> </DeleteParameters> <InsertParameters> <asp:Parameter Name="UserId" Type="String" /> <asp:Parameter Name="AccountName" Type="String" /> <asp:Parameter Name="DateAdded" Type="DateTime" /> <asp:Parameter Name="LastModified" Type="DateTime" /> </InsertParameters> <UpdateParameters> <asp:Parameter Name="UserId" Type="String" /> <asp:Parameter Name="AccountName" Type="String" /> <asp:Parameter Name="DateAdded" Type="DateTime" /> <asp:Parameter Name="LastModified" Type="DateTime" /> <asp:Parameter Name="ID" Type="Int32" /> </UpdateParameters> <SelectParameters> <asp:Parameter Name="UserId"/> </SelectParameters> </asp:SqlDataSource>

    Read the article

  • How do I join two git repos without a common root, where all modified files are the same?

    - by Evan Carroll
    I have a git-cpan-init of a repo which yielded a different root node from another already established git repo I found on github C:A:S:DBI. I've developed quite a bit on my repo, and I'd like to merge or replay my edits on a fork of the more authoritative repository. Does anyone know how to do this? I think it is safe to assume none of the file-contents of the modified files are different -- the code base hasn't been since Nov 08'. For clarity the git hub repo is the authoritative one. My local repo is the one I want to go up to git hub shown as a real git fork.
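
    A sketch of one way to do it, assuming the GitHub copy is added as a remote and LOCAL_IMPORT stands for the local commit whose tree matches the authoritative tip; both the URL and that commit are placeholders to fill in:

      # Bring the authoritative history in alongside the local one
      git remote add upstream git://github.com/someuser/the-authoritative-repo.git
      git fetch upstream

      # Find LOCAL_IMPORT by comparing trees, e.g.:
      #   git diff LOCAL_IMPORT upstream/master      # should be empty
      # then replay everything written since that import onto the upstream branch:
      git checkout -b replayed master
      git rebase --onto upstream/master LOCAL_IMPORT replayed
      # push 'replayed' to the GitHub fork once it looks right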

    Read the article
