Search Results

Search found 59 results on 3 pages for 'a moose'.

Page 2 of 3

  • How do I call a function name that is stored in a hash in Perl?

    - by Ether
    I'm sure this is covered in the documentation somewhere but I have been unable to find it... I'm looking for the syntactic sugar that will make it possible to call a method on a class whose name is stored in a hash (as opposed to a simple scalar): use strict; use warnings; package Foo; sub foo { print "in foo()\n" } package main; my %hash = (func => 'foo'); Foo->$hash{func}; If I copy $hash{func} into a scalar variable first, then I can call Foo->$func just fine... but what is missing to enable Foo->$hash{func} to work? (EDIT: I don't mean to do anything special by calling a method on class Foo -- this could just as easily be a blessed object (and in my actual code it is); it was just easier to write up a self-contained example using a class method.) EDIT 2: Just for completeness re the comments below, this is what I'm actually doing (this is in a library of Moose attribute sugar, created with Moose::Exporter): # adds an accessor to a sibling module sub foreignTable { my ($meta, $table, %args) = @_; my $class = 'MyApp::Dir1::Dir2::' . $table; my $dbAccessor = lcfirst $table; eval "require $class" or do { die "Can't load $class: $@" }; $meta->add_attribute( $table, is => 'ro', isa => $class, init_arg => undef, # don't allow in constructor lazy => 1, predicate => 'has_' . $table, default => sub { my $this = shift; $this->debug("in builder for $class"); ### here's the line that uses a hash value as the method name my @args = ($args{primaryKey} => $this->${\$args{primaryKey}}); push @args, ( _dbObject => $this->_dbObject->$dbAccessor ) if $args{fkRelationshipExists}; $this->debug("passing these values to $class -> new: @args"); $class->new(@args); }, ); } I've replaced the marked line above with this: my $pk_accessor = $this->meta->find_attribute_by_name($args{primaryKey})->get_read_method_ref; my @args = ($args{primaryKey} => $this->$pk_accessor); PS. I've just noticed that this same technique (using the Moose meta class to look up the coderef rather than assuming its naming convention) cannot also be used for predicates, as Class::MOP::Attribute does not have a similar get_predicate_method_ref accessor. :(
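
    Perl's arrow syntax only accepts a bareword, a plain scalar variable, or a block/scalar-reference dereference as the method name, which is why Foo->$hash{func} needs an extra step. A minimal sketch of the two usual spellings, reusing the post's own Foo and %hash:

      # 1) copy the hash value into a plain scalar first
      my $method = $hash{func};
      Foo->$method;

      # 2) or dereference a scalar reference to it inline
      Foo->${ \$hash{func} }();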

    Read the article

  • How can I install a 32-bit Python on 64-bit Ubuntu

    - by moose
    I am using Ubuntu 10.10 (Linux pc07 2.6.35-27-generic #48-Ubuntu SMP Tue Feb 22 20:25:46 UTC 2011 x86_64 GNU/Linux) and the default python package (Python 2.6.6). I would like to install python-psyco to improve the performance of one of my scripts, but only python-psyco-doc is available for 64 bit. I tried a virtual machine, but the performance boost is much less on the virtual machine than on a "real" installed 32-bit Ubuntu. So my question is: How can I install a 32-bit Python with psyco on my 64-bit Ubuntu machine? edit: I've found this article and did this: Download "Python 2.7.1 bzipped source tarball" from http://python.org/download/ Go into the directory where you decompressed "Python 2.7.1" $ OPT=-m32 LDFLAGS=-m32 ./configure --prefix=/opt/pym32 $ make But I got this error: gcc -pthread -m32 -Xlinker -export-dynamic -o python \ Modules/python.o \ libpython2.7.a -lpthread -ldl -lutil -lm libpython2.7.a(posixmodule.o): In function `posix_tmpnam': /home/moose/Downloads/Python-2.7.1/./Modules/posixmodule.c:7346: warning: the use of `tmpnam_r' is dangerous, better use `mkstemp' libpython2.7.a(posixmodule.o): In function `posix_tempnam': /home/moose/Downloads/Python-2.7.1/./Modules/posixmodule.c:7301: warning: the use of `tempnam' is dangerous, better use `mkstemp' Segmentation fault make: *** [sharedmods] Fehler 139 edit2: Now I've found http://indefinitestudies.org/2010/02/08/how-to-build-32-bit-python-on-ubuntu-9-10-x86_64/ and it seems like this worked: $ cd Python-2.7.1 $ CC="gcc -m32" LDFLAGS="-L/lib32 -L/usr/lib32 \ -Lpwd/lib32 -Wl,-rpath,/lib32 -Wl,-rpath,/usr/lib32" \ ./configure --prefix=/opt/pym32 $ make $ sudo make install But installing psyco didn't work: Download the latest snapshot: http://psyco.sourceforge.net/download.html Extract it and go into the folder $ python setup.py install This error appeared: PROCESSOR = 'ivm' running install running build running build_py running build_ext building 'psyco._psyco' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DALL_STATIC=1 -Ic/ivm -I/usr/include/python2.6 -c c/psyco.c -o build/temp.linux-x86_64-2.6/c/psyco.o In file included from c/psyco.c:1: c/psyco.h:9: fatal error: Python.h: Datei oder Verzeichnis nicht gefunden compilation terminated. error: command 'gcc' failed with exit status 1

    Read the article

  • Does "diff" exist for images?

    - by moose
    You can compare two text files very easily with diff and even better with meld: If you use diff for images, you get output like this: $ diff zivi-besch.tif zivildienst.tif Binary files zivi-besch.tif and zivildienst.tif differ Here is an example: Original from http://commons.wikimedia.org/wiki/File:Tux.svg Edited: I've added a white background to both images and applied GIMP's "Difference" filter to get this: It is a very simple method of how a diff could work, but I can imagine much better (and more complicated) ones. Do you know a program which works for images like meld does for text? (If a program existed that could give a percentage (0% the same image - 100% the same image) I would also be interested in it, but I am looking for one that gives me visual hints where differences are.)
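
    For the scripted version of the same idea, a minimal sketch using the Python Imaging Library (Pillow is an assumed dependency, not something from the post) that produces a per-pixel difference image much like GIMP's "Difference" mode:

      from PIL import Image, ImageChops

      a = Image.open("zivi-besch.tif").convert("RGB")
      b = Image.open("zivildienst.tif").convert("RGB")

      diff = ImageChops.difference(a, b)   # assumes both images have the same size
      diff.save("diff.png")

      # getbbox() returns None when the difference image is entirely black,
      # i.e. the two inputs are pixel-identical
      print("identical" if diff.getbbox() is None else "images differ")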

    Read the article

  • Design review for application facing memory issues

    - by Mr Moose
    I apologise in advance for the length of this post, but I want to paint an accurate picture of the problems my app is facing and then pose some questions below. I am trying to address some self-inflicted design pain that is now leading to my application crashing due to out-of-memory errors. An abridged description of the problem domain is as follows: The application takes in a “dataset” that consists of numerous text files containing related data. An individual text file within the dataset usually contains approx 20 “headers” that contain metadata about the data it contains. It also contains a large tab-delimited section containing data that is related to data in one of the other text files contained within the dataset. The number of columns per file is highly variable, from 2 to 256+ columns. The original application was written to allow users to load a dataset, map certain columns of each of the files, which basically indicates key information on the files to show how they are related, as well as identify a few expected column names. Once this is done, a validation process takes place to enforce various rules and ensure that all the relationships between the files are valid. Once that is done, the data is imported into a SQL Server database. The database design is an EAV (Entity-Attribute-Value) model used to cater for the variable columns per file. I know EAV has its detractors, but in this case, I feel it was a reasonable choice given the disparate data and variable number of columns submitted in each dataset. The memory problem: Given the fact that the combined size of all text files was at most about 5 megs, and in an effort to reduce the database transaction time, it was decided to read ALL the data from the files into memory and then perform the following: perform all the validation whilst the data was in memory; relate it using an object model; start a DB transaction and write the key columns row by row, noting the Id of the written row (all tables in the database utilise identity columns), then the Id of the newly written row is applied to all related data. Once all related data had been updated with the key information to which it relates, these records are written using SqlBulkCopy. Due to our EAV model, we essentially have x columns by y rows to write, where x can be 256+ and rows are often into the tens of thousands. Once all the data is written without error (which can take several minutes for large datasets), commit the transaction. The problem now comes from the fact we are now receiving individual files containing over 30 megs of data. In a dataset, we can receive any number of files. We’ve started seeing datasets of around 100 megs coming in and I expect it is only going to get bigger from here on in. With files of this size, data can’t even be read into memory without the app falling over, let alone be validated and imported. I anticipate having to modify large chunks of the code to allow validation to occur by parsing files line by line, and am not exactly decided on how to handle the import and transactions. Potential improvements: I’ve wondered about using GUIDs to relate the data rather than relying on identity fields. This would allow data to be related prior to writing to the database. This would certainly increase the storage required though, especially in an EAV design. Would you think this is a reasonable thing to try, or do I simply persist with identity fields (natural keys can’t be trusted to be unique across all submitters)?
    Use of staging tables to get data into the database and only performing the transaction to copy data from the staging area to the actual destination tables. Questions: For systems like this that import large quantities of data, how do you go about keeping transactions small? I’ve kept them as small as possible in the current design, but they are still active for several minutes and write hundreds of thousands of records in one transaction. Is there a better solution? The tab-delimited data section is read into a DataTable to be viewed in a grid. I don’t need the full functionality of a DataTable, so I suspect it is overkill. Is there any way to turn off various features of DataTables to make them more lightweight? Are there any other obvious things you would do in this situation to minimise the memory footprint of the application described above? Thanks for your kind attention.
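
    On the line-by-line parsing point, a minimal C# sketch of the direction described (names are illustrative, not taken from the actual application): stream the tab-delimited rows so that validation and SqlBulkCopy batching never need the whole file in memory.

      using System.Collections.Generic;
      using System.IO;

      static class RowStreamer
      {
          // Yield one parsed row at a time; the caller validates each row and
          // flushes fixed-size batches (e.g. 10,000 rows) to SqlBulkCopy.
          public static IEnumerable<string[]> ReadRows(string path)
          {
              using (var reader = new StreamReader(path))
              {
                  string line;
                  while ((line = reader.ReadLine()) != null)
                  {
                      yield return line.Split('\t');
                  }
              }
          }
      }

    Streaming like this bounds memory by the batch size rather than the file size, and it pairs naturally with the staging-table idea above, where the copy from staging to the destination tables is the only part that still needs a transaction.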

    Read the article

  • How can I create a zip archive of a whole directory via terminal without hidden files?

    - by moose
    I have a project with lots of hidden folders / files in it. I want to create a zip archive of it, but the archive shouldn't contain any hidden folders / files. If files in a hidden folder are not hidden, they should also not be included. I know that I can create a zip archive of a directory like this: zip -r zipfile.zip directory I also know that I can exclude files with the -x option, so I thought this might work: zip -r zipfile.zip directory -x .* It didn't work. All hidden directories were still in the zip file.
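
    For reference, a sketch of the usual fix: the pattern has to be quoted so the shell does not expand it, and it needs to match hidden entries below the top-level directory, not just at the start of the path.

      zip -r zipfile.zip directory -x "*/.*"

    Since zip's * wildcard spans directory separators by default, the quoted "*/.*" pattern also covers files inside hidden sub-directories.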

    Read the article

  • Is a function plotter a legitimate use of eval() in JavaScript?

    - by moose
    From PHP development I know that eval is evil and I've recently read What constitutes “Proper use” of the javascript Eval feature? and Don't be eval. The only proper use of eval I've read is Ajax. I'm currently developing a visualization tool that lets users see how polynomials can interpolate functions: Example Code on GitHub I use eval for evaluation of arbitrary functions. Is this a legitimate use of eval? How could I get rid of eval? I want the user to be able to execute any function of the following forms: a x^i with a,i in R sin, cos, tan b^x with b in R any combination that you can get by adding (e.g. x^2 + x^3 + sin(x)), multiplying (e.g. sin(x)*x^2) or inserting (e.g. sin(x^2))
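
    One middle ground (a sketch, not something from the linked code) is to whitelist the tokens the plotter actually supports and compile the formula once with the Function constructor, so arbitrary statements never reach eval and the compiled function can be called for every sample point:

      // reject anything that is not a digit, x, operator, bracket or known name
      function compileFormula(expr) {
        var names = /\b(sin|cos|tan|sqrt|pow)\b/g;
        if (!/^[0-9x+\-*\/().,\s]*$/.test(expr.replace(names, ""))) {
          throw new Error("unsupported characters in formula");
        }
        return new Function("x", "return (" + expr.replace(names, "Math.$1") + ");");
      }

      var f = compileFormula("sin(x*x) + x*x*x");
      console.log(f(0.5));   // ~0.372

    A real expression parser (or a library such as math.js) is the stricter answer, but for a client-side plotter where the user can only affect their own session, this kind of constrained evaluation is generally considered acceptable.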

    Read the article

  • How to install port versions of perl modules for perl5.14 in freebsd 9.0

    - by jm666
    Trying to use perl5.14 on FreeBSD with port-based p5-modules. uname -impr 9.0-RELEASE amd64 amd64 ALTQ delete all installed ports, start with a clean system # pkg_delete -a # rm -rf /var/db/pkg /var/db/ports /usr/local installing portmaster, checking /etc/make.conf (here is only WITHOUT_X11=YES). Now installing perl. # portmaster -g --force-config lang/perl5.14 # perl -v This is perl 5, version 14, subversion 2 (v5.14.2) built for amd64-freebsd-multi Now perl modules from the ports, # portmaster -g devel/p5-Moose #install Moose and its deps check with pkg_info and got a zillion errors like: # pkg_info pkg_info: corrupted record (pkgdep line without argument), ignoring dpendecy check with portmaster - showing dependencies on perl5.12 #portmaster --check-depends Checking p5-Class-C3-0.24 ===>>> lang/perl5.12 is listed as a dependency ===>>> but there is no installed version ===>>> Delete this dependency data? y/n [n] When I tried # perl-after-upgrade -f I got: Fixed 0 packages (0 files moved, 0 files modified) In short: I got Moose installed into /usr/local/lib/perl5/site_perl/5.14.2/ but all its dependencies into /usr/local/lib/perl5/site_perl/5.12.4/ Yes, it is possible to fix this with: # portmaster p5- which reinstalls all installed p5-packages once again, now correctly for 5.14, but it is terrible to install them twice... Questions: What is the correct way to install p5- modules from ports with perl5.14 installed on a clean system? How do I fix the wrong dependency data on perl5.12 without the need to install and reinstall them again? What am I doing wrong? PS: I know about perlbrew and/or local::lib - but for this case I want port versions.
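
    A quick way to confirm which perl tree a module actually resolves to (not a fix in itself, but useful before and after reinstalling):

      # prints the file Moose is loaded from, e.g. .../site_perl/5.12.4/... vs .../5.14.2/...
      perl -MMoose -le 'print $INC{"Moose.pm"}'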

    Read the article

  • Linksys WRT54G2 not showing connected PCs in DHCP table

    - by Moose
    For some reason, all of a sudden my Linksys WRT54G2 no longer shows PCs connected in the DHCP table. It was being touchy, then all of a sudden they all just disappeared. It showed 2 of the 4 connected, then it showed the other 2, then it only showed 1, and then it was showing none. I also seem to be having a connection reset every 30-60 minutes; not sure if that is related to the router going bad or if it is just Comcast being its typical crappy self. I have tried to reset my router and my cable modem and both problems still seem to keep happening (no PCs in table and connection reset every 30-60 mins). Now I know the router doesn't store clients after a router reboot, but they have normally shown up within a matter of minutes in the past. Before anyone asks, yes, all devices on the network are using DHCP and not static addresses. Could this be a sign of the router starting to go bad?

    Read the article

  • Spellchecking po files

    - by moose
    Hi, I am translating some po-files and I would like to run a spell checker over them. I have Ubuntu 10.10 and use gtranslator. As far as I know, gtranslator can't spellcheck the whole file. I tried ispell: $ ispell lordsawar-0.2.0-pre4.de.po - this doesn't work, as English and German strings, as well as some programming-relevant comments appear in the .po-file. Do you know any program running on Ubuntu which can spell check po-files?
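
    Lacking a dedicated tool, a minimal Perl sketch of a workaround (aspell with a German dictionary is assumed to be installed): extract only the msgstr strings from the .po file and feed those to aspell, so the English msgid lines and the comments are never checked.

      #!/usr/bin/perl
      use strict;
      use warnings;

      my $po = shift or die "usage: $0 file.po\n";
      open my $in, '<', $po or die "open $po: $!";
      # aspell's "list" mode reads text on stdin and prints the misspelled words
      open my $spell, '|-', 'aspell', 'list', '--lang=de' or die "run aspell: $!";
      while (<$in>) {
          # single-line msgstr entries only; "..." continuation lines are ignored here
          print {$spell} "$1\n" if /^msgstr\s+"(.*)"/;
      }
      close $spell;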

    Read the article

  • Open mail-links with gmail in Chrome 9

    - by moose
    It happens quite often that I click a link over a person's name, just to see that it launches the default mail client of my system. I thought it would be a normal link, but it was a "mailto:" link. I would like Chrome to start Gmail, not my default email client. For Firefox, I only had to paste this into the URL bar: javascript:window.navigator.registerProtocolHandler("mailto","https://mail.google.com/mail/?extsrc=mailto&url=%s","GMail") Unfortunately, it doesn't work in Chrome 9. I have found this tutorial for Ubuntu, but I would like a solution in Chrome. If it is only in Chrome, I can sync the settings. (So How do I make mailto: links open gmail in Ubuntu? doesn't fit) http://www.google.com/support/forum/p/Chrome/thread?tid=2aad08042607a4eb&hl=en could be related to my problem, but there is no answer. Gnome: $ gnome-default-applications-properties set Email-Client to gnome-open https://mail.google.com/mail/?extsrc=mailto&url=%s

    Read the article

  • ASP.NET Charting Control no longer working with .NET 4

    - by Moose Factory
    I've just upgraded to .NET 4 and my ASP.NET Chart Control no longer displays. For .NET 3.5, the HTML produced by the control used to look like this: <img id="20_Chart" src="/ChartImg.axd?i=chart_5f6a8fd179a246a5a0f4f44fcd7d5e03_0.png&amp;g=16eb7881335e47dcba16fdfd8339ba1a" alt="" style="height:300px;width:300px;border-width:0px;" /> and now, for .NET 4, it looks like this (note the change in the source path): <img id="20_Chart" src="/Statistics/Summary/ChartImg.axd?i=chart_5f6a8fd179a246a5a0f4f44fcd7d5e03_0.png&amp;g=16eb7881335e47dcba16fdfd8339ba1a" alt="" style="height:300px;width:300px;border-width:0px;" /> The chart is in an MVC partial view that is in an MVC Area folder called "Statistics" and an MVC Views folder called "Summary" (i.e. "/Areas/Statistics/Views/Summary"), so this is obviously where the change of path is coming from. All I've done is to switch the System.Web.DataVisualization assembly from 3.5 to 4.0. Any help greatly appreciated.
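
    One workaround that is often suggested when the chart sits inside an MVC view (a sketch; it assumes the ChartImg.axd handler registration from the 3.5 setup is still present in web.config) is to tell routing to ignore the chart image handler at any path depth, so the area/controller prefix in the generated URL stops mattering:

      // Global.asax.cs, at the top of RegisterRoutes, before any MapRoute calls
      routes.IgnoreRoute("{*chartimg}", new { chartimg = @"(.*/)?ChartImg\.axd(/.*)?" });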

    Read the article

  • XSLT disable output escaping when disable-output-escaping is not supported

    - by Moose Morals
    Hi folks, Since disable-output-escaping doesn't work on Firefox (and isn't going to), what's the next best way of including raw markup in the output of an XSLT transform? (Background: I've got raw HTML in a database that I want to wrap in XML to send to a browser to render. I've got control of both the XML and the stylesheet, but no control of the HTML, which may be badly formed (even for HTML!)) Thanks

    Read the article

  • Determine the colours used by the ASP.NET Chart control

    - by Moose Factory
    I'd like to find out which colours are used for a particular palette in the ASP.NET Chart control. I already know there is an enum on the Chart class to set the palette, e.g. myChart.Palette = ChartColorPalette.Berry; But I'd like to know which colours belong to the palette. Before anyone asks - as I know you will - the reason I need to know the colours is because I want to create my own legend outside of the chart image. I also know that I can set my own colours on the DataPoints for the chart, but I'd rather not have to implement my own palette.
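
    For what it's worth, the Chart control can be asked to assign its palette colours up front so they can be read back for an external legend; a sketch using the post's own myChart (ApplyPaletteColors is part of the charting API, the legend-building itself is left out):

      myChart.Palette = ChartColorPalette.Berry;
      myChart.ApplyPaletteColors();   // forces palette colours onto every series/point now

      foreach (Series s in myChart.Series)
      {
          System.Drawing.Color c = s.Color;   // populated from the Berry palette
          // build the external legend entry from s.Name and c
      }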

    Read the article

  • Implementing parts of rfc4226 (HOTP) in mysql

    - by Moose Morals
    Like the title says, I'm trying to implement the programmatic parts of RFC 4226 "HOTP: An HMAC-Based One-Time Password Algorithm" in SQL. I think I've got a version that works (in that for a small test sample, it produces the same result as the Java version in the code), but it contains a nested pair of hex(unhex()) calls, which I feel can be done better. I am constrained by a) needing to do this algorithm, and b) needing to do it in MySQL, otherwise I'm happy to look at other ways of doing this. What I've got so far: -- From the inside out... -- Concatenate the user's secret and the number of times it's been used -- find the SHA1 hash of that string -- Turn a 40-byte hex encoding into a 20-byte binary string -- keep the first 4 bytes -- turn those back into a hex representation -- convert that into an integer -- Throw away the most-significant bit (solves signed/unsigned problems) -- Truncate to 6 digits -- store into otp -- from the otpsecrets table select (conv(hex(substr(unhex(sha1(concat(secret, uses))), 1, 4)), 16, 10) & 0x7fffffff) % 1000000 into otp from otpsecrets; Is there a better (more efficient) way of doing this?
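
    Since SHA1() in MySQL already returns the digest as a hex string, the first 4 bytes of the digest are simply its first 8 hex characters, so the hex(unhex()) round trip can be dropped; a sketch of the same expression:

      select (conv(substr(sha1(concat(secret, uses)), 1, 8), 16, 10) & 0x7fffffff) % 1000000
        into otp
        from otpsecrets;

    One caveat worth noting: RFC 4226 proper uses HMAC-SHA-1 and a dynamic truncation offset rather than a plain hash and the first four bytes, so values produced this way will only match clients that use the same simplified scheme.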

    Read the article

  • jQuery Autocomplete losing text on AutoPostBack

    - by Moose
    I have a jQuery Autocomplete field on an ASP.Net Webform and everything has been working great until now. I also have a DropDownList that I have a need to fire onSelectedIndexChanged with AutoPostBack. When I changed my code to do the AutoPostBack, the text field that has the jQuery AutoComplete on it comes back blank. However, if I look at the source of the page, the text is in the text field. If I now post the form, the page will send back a blank field. My Google-Fu is weak on this one, as I could not come up with any workaround for it. Has anyone had any issues like this with the Autocomplete field getting blanked out on an AutoPostBack, and how did you get around it? I can post code if it's really necessary, but I'd need to sanitize a lot of it before I could due to company policy.

    Read the article

  • ASP.NET MVC Default URL View

    - by Moose Factory
    I'm trying to set the Default URL of my MVC application to a view within an area of my application. The area is called "Common", the controller "Home" and the view "Index". I've tried setting the defaultUrl in the forms section of web.config to "~/Common/Home/Index" with no success. I've also tried mapping a new route in global.asax, thus: routes.MapRoute( "Area", "{area}/{controller}/{action}/{id}", new { area = "Common", controller = "Home", action = "Index", id = "" } ); Again, to no avail.
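
    A sketch of one route mapping that tends to work for this (the controller namespace below is a guess at the project layout, not taken from the post): give the default route the area's controller namespace and area data token, so that "/" resolves to Common/Home/Index.

      var route = routes.MapRoute(
          "Default",
          "{controller}/{action}/{id}",
          new { controller = "Home", action = "Index", id = UrlParameter.Optional },
          new[] { "MyApp.Areas.Common.Controllers" }   // assumed namespace
      );
      route.DataTokens["area"] = "Common";

    The other usual culprit is a missing AreaRegistration.RegisterAllAreas() call in Application_Start, without which the Common area's own routes are never registered.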

    Read the article

  • getElementById not a function (greasemonkey)

    - by Moose
    I'm running GM_xmlhttpRequest and storing the responseText into a newly created HTML element: var responseHTML = document.createElement('HTML'); onload: function() { responseHTML.innerHTML = response.responseText; } And then I am trying to find an element in responseHTML. console.log(responseHTML.getElementsByTagName('div')); console.log(responseHTML.getElementById('result_0')); The first works fine, but not the second. Any ideas?
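
    The usual explanation is that getElementById exists only on document nodes, not on a detached element, so the call is simply not available on responseHTML; two sketches of the common workarounds:

      // querySelector is available on elements, so it works on the detached node
      var result = responseHTML.querySelector("#result_0");

      // or parse the response into a real (detached) document and query that
      // (DOMParser support for "text/html" is assumed here)
      var doc = new DOMParser().parseFromString(response.responseText, "text/html");
      var result2 = doc.getElementById("result_0");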

    Read the article

  • Is there a shorthand way to denullify a string in C#?

    - by Moose Factory
    Is there a shorthand way to denullify a string in C#? It would be the equivalent of (if 'x' is a string): string y = x == null ? "" : x; I guess I'm hoping there's some operator that would work something like: string y = #x; Wishful thinking, huh? The closest I've got so far is an extension method on the string class: public static string ToNotNull(this string value) { return value == null ? "" : value; } which allows me to do: string y = x.ToNotNull(); Any improvements on that, anyone?
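
    The null-coalescing operator is effectively the shorthand being asked for:

      string y = x ?? "";             // "" when x is null, otherwise x
      string z = x ?? string.Empty;   // same thing, spelled out

    The extension-method approach in the post also works, since C# happily invokes extension methods on null references, but ?? keeps the intent visible at the call site.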

    Read the article

  • Strings, regexp and files

    - by A Moose
    <?php $iprange = array( "^12\.34\.", "^12\.35\.", ); foreach($iprange as $var) { if (preg_match($var, $_SERVER['REMOTE_ADDR'])) { I'm looking to have a list that will supply each of the values inside the array. Let's call it iprange.txt, from which I would extract the variable $iprange. I would also be updating the file with new ranges, but I also want to convert those strings to regexp if that's something that's needed in PHP, as it is in the above example. If you could help me with the following two issues: I understand that somehow I would be using an array include, but I'm not sure how to implement it. I would like to run a cron that would update the text file and turn it into a regexp acceptable for use in the above example, if you think regexp is a good idea and there isn't another option. I know how to apply a cron in the DirectAdmin GUI, but I don't know what the cronned file would look like.
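
    A minimal sketch of the file-driven version (iprange.txt holding one plain prefix per line, such as 12.34., is an assumption about the file format; note also that preg_match() patterns need delimiters, which the snippet above omits):

      <?php
      // read one IP prefix per line, escape it, and anchor it as a pattern
      $lines = file('iprange.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

      $allowed = false;
      foreach ($lines as $line) {
          $pattern = '/^' . preg_quote(trim($line), '/') . '/';
          if (preg_match($pattern, $_SERVER['REMOTE_ADDR'])) {
              $allowed = true;
              break;
          }
      }

    A cron job then only needs to rewrite iprange.txt; the script picks the new ranges up on the next request without any further conversion step.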

    Read the article

  • configuring local W3C validator on xampp - windows xp sp3

    - by Gabriel
    Hi, I have no experience with Perl. I am trying to configure the W3C validator on my localhost (win32). I have already followed all the instructions given by the W3C at http://validator.w3.org/docs/install_win.html, but I am getting the following error: Can't locate loadable object for module Encode::HanExtra in @INC (@INC contains: C:/xampp/perl/lib C:/xampp/perl/site/lib .) at C:/xampp/validator-0.8.6/httpd/cgi-bin/check line 49 Compilation failed in require at C:/xampp/validator-0.8.6/httpd/cgi-bin/check line 49. BEGIN failed--compilation aborted at C:/xampp/validator-0.8.6/httpd/cgi-bin/check line 49. I'm running perl 5.10.1 on XAMPP and the following HanExtra module: http://cpansearch.perl.org/src/AUDREYT/Encode-HanExtra-0.23/lib/Encode/HanExtra.pm This is C:/xampp/validator-0.8.6/httpd/cgi-bin/check line 49: use Encode::HanExtra qw(); # for some chinese character encodings If I comment out that line using # I get a similar message related to another object: Can't locate loadable object for module Sub::Name in @INC (@INC contains: C:/xampp/perl/lib C:/xampp/perl/site/lib .) at C:/xampp/perl/site/lib/Moose.pm line 12 This is line 12 of Moose.pm: use Sub::Name 'subname'; I don't know how to proceed. I will appreciate any advice. Thanks
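
    "Can't locate loadable object" usually means the compiled (XS) part of the module was never built or installed for the perl in use, so the likely fix is installing Encode::HanExtra and Sub::Name for the XAMPP perl; a sketch of the check and the install from a command prompt (paths follow the XAMPP layout above, and building XS modules on Windows needs a working compiler or pre-built packages):

      C:\xampp\perl\bin\perl.exe -MEncode::HanExtra -e "print $Encode::HanExtra::VERSION"
      C:\xampp\perl\bin\perl.exe -MCPAN -e "install('Encode::HanExtra'); install('Sub::Name')"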

    Read the article

  • How do I find the module dependencies of my Perl script?

    - by zoul
    I want another developer to run a Perl script I have written. The script uses many CPAN modules that have to be installed before the script can be run. Is it possible to make the script (or the perl binary) dump a list of all the missing modules? Perl prints out the missing modules’ names when I attempt to run the script, but this is verbose and does not list all the missing modules at once. I’d like to do something like: $ cpan -i `said-script --list-deps` Or even: $ list-deps said-script > required-modules # on my machine $ cpan -i `cat required-modules` # on his machine Is there a simple way to do it? This is not a show stopper, but I would like to make the other developer’s life easier. (The required modules are sprinkled across several files, so that it’s not easy for me to make the list by hand without missing anything. I know about PAR, but it seems a bit too complicated for what I want.) Update: Thanks, Manni, that will do. I did not know about %INC, I only knew about @INC. I settled on something like this: print join("\n", map { s|/|::|g; s|\.pm$||; $_ } keys %INC); Which prints out: Moose::Meta::TypeConstraint::Registry Moose::Meta::Role::Application::ToClass Class::C3 List::Util Imager::Color … Looks like this will work.
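
    A small extension of that idea (a sketch; Module::CoreList is available from CPAN and bundled with recent perls) drops the modules that ship with perl itself, which leaves something much closer to a list that can be fed straight to cpan -i:

      use Module::CoreList;

      for my $file (sort keys %INC) {
          (my $mod = $file) =~ s{/}{::}g;
          $mod =~ s{\.pm\z}{};
          # first_release() is undef for modules that have never been in core
          print "$mod\n" unless defined Module::CoreList->first_release($mod);
      }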

    Read the article
