Search Results

Search found 10355 results on 415 pages for 'shell extension'.


  • Why is F# member not found when used in subclass

    - by James Black
    I have a base type that I want all my DAO objects to inherit from, but this member triggers the "not defined" error further down:

        type BaseDAO() =
            member v.ExecNonQuery2 (conn) (sqlStr) =
                let comm = new MySqlCommand(sqlStr, conn, CommandTimeout = 10)
                comm.ExecuteNonQuery |> ignore
                comm.Dispose |> ignore

    I inherit in this type:

        type CreateDatabase() =
            inherit BaseDAO()
            member private self.createDatabase(conn) =
                self.ExecNonQuery2 conn "DROP DATABASE IF EXISTS restaurant"

    This is what I see when my script runs in the interactive shell:

        --> Referenced 'C:\Program Files\MySQL\MySQL Connector Net 6.2.3\Assemblies\MySql.Data.dll'
        [Loading C:\Users\jblack\Documents\Visual Studio 2010\Projects\RestaurantService\RestaurantDAO\BaseDAO.fs]
        namespace FSI_0106.RestaurantServiceDAO
          type BaseDAO =
            class
              new : unit -> BaseDAO
              member ExecNonQuery2 : conn:MySql.Data.MySqlClient.MySqlConnection -> sqlStr:string -> unit
              member execNonQuery : sqlStr:string -> unit
              member execQuery : sqlStr:string * selectFunc:(MySql.Data.MySqlClient.MySqlDataReader -> 'a list) -> 'a list
              member f : x:obj -> string
              member Conn : MySql.Data.MySqlClient.MySqlConnection
            end
        [Loading C:\Users\jblack\Documents\Visual Studio 2010\Projects\RestaurantService\RestaurantDAO\CreateDatabase.fs]
        C:\Users\jblack\Documents\Visual Studio 2010\Projects\RestaurantService\RestaurantDAO\CreateDatabase.fs(56,14): error FS0039: The field, constructor or member 'ExecNonQuery2' is not defined

    I am curious what I am doing wrong. I have tried not inheriting and just instantiating the BaseDAO type in the function, but I get the same error. I started down this path because a property gave the same error, so it seems there may be a problem with how I am defining my BaseDAO type; yet it compiles with no error, which further confuses me about this problem.

  • Running bcdedit from Python in Windows 2008 R2

    - by Lee-Man
    I do not know Windows well, so that may explain my dilemma... I am trying to run bcdedit in Windows 2008 R2 from Python 2.6. My Python routine to run a command looks like this:

        def run_program(cmd_str):
            """Run the specified command, returning its output as an array of lines"""
            dprint("run_program(%s): entering" % cmd_str)
            cmd_args = cmd_str.split()
            subproc = subprocess.Popen(cmd_args, stdout=subprocess.PIPE,
                                       stderr=subprocess.PIPE, shell=True)
            (outf, errf) = (subproc.stdout, subproc.stderr)
            olines = outf.readlines()
            elines = errf.readlines()
            if Options.debug:
                if elines:
                    dprint('Error output:')
                    for line in elines:
                        dprint(line.rstrip())
                if olines:
                    dprint('Normal output:')
                    for line in olines:
                        dprint(line.rstrip())
            errf.close()
            outf.close()
            res = subproc.wait()
            dprint('wait result=', res)
            return (res, olines)

    I call this function thusly:

        (res, o) = run_program('bcdedit /set {current} MSI forcedisable')

    This command works when I type it in a cmd window, and it works when I put it in a batch file and run it from a command window (as Administrator, of course). But when I run it from Python (as Administrator), Python claims it can't find the command, returning:

        bcdedit is not recognized as an internal or external command, operable program or batch file

    Also, if I try running my batch file from Python (which works from the command line), it also fails. I've also tried it with the full path to bcdedit, with the same results. What is it about calling bcdedit from Python that makes it not found? Note that I can call other EXE files from Python, so I have some level of confidence that my Python code is sane... but who knows. Any help would be most appreciated.
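
    One plausible cause, offered as a guess rather than a confirmed diagnosis: 32-bit Python on 64-bit Windows has its view of C:\Windows\System32 silently redirected to SysWOW64, which ships no bcdedit.exe, so neither the bare name nor the full System32 path resolves. A minimal sketch that routes around the redirection through the virtual Sysnative directory (the run_bcdedit helper is hypothetical, not part of the original script):

        import os
        import subprocess

        def run_bcdedit(arg_list):
            # From a 32-bit process, the 'Sysnative' alias reaches the real
            # 64-bit System32 that WOW64 redirection otherwise hides.
            root = os.environ['SystemRoot']
            sysdir = os.path.join(root, 'Sysnative')
            if not os.path.isdir(sysdir):      # already a 64-bit process
                sysdir = os.path.join(root, 'System32')
            cmd = [os.path.join(sysdir, 'bcdedit.exe')] + arg_list
            proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            out, err = proc.communicate()
            return proc.returncode, out.splitlines(), err.splitlines()

        # res, olines, elines = run_bcdedit(['/set', '{current}', 'MSI', 'forcedisable'])

    Dropping shell=True also removes a second layer of path lookup through cmd.exe, which keeps the failure mode easier to reason about.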

  • Magic function `bash` not found

    - by inspectorG4dget
    I have a bunch of simulations that I want to run on a high-performance cluster, on which I have to make reservations to get computing time. Since the reservations are bounded by time, I am developing an automation script that I can scp to the cluster and run. This script will then download the relevant simulation files, run them, and upload the results. Part of this automation script is in bash (cp, scp, etc.) and the rest is in Python. In order to develop this automation, I am using an IPython notebook. So far, I've coded all the Python automation in my IPython notebook and am now trying to write the bash part. However, it seems that the magic %%bash doesn't work in my IPython notebook. I get the following error when I have this code in my cell:

        Cell:
            %%bash
            echo hi

        Error:
            File "<ipython-input-22-62ec98e35224>", line 3
                echo hi
                      ^
            SyntaxError: invalid syntax

    On a whim, I tried this:

        Cell:
            %%bash
            print "hi"

        Error:
            hi
            ERROR: Magic function `bash` not found.

    So I tried this with %%system, %%! and %%shell, but none of those work; they all give me the same error. Why is this happening? How can I fix this?

    Metadata: IPython 0.13.dev, Python 2.7.1, Mac OS X Lion
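
    Worth noting, as a hedged observation: cell magics such as %%bash only shipped with the released IPython 0.13, so a 0.13.dev pre-release may simply predate them, and upgrading to a released build is the obvious first try. Until then, the long-standing single-line ! escape or plain subprocess calls cover the same ground; a minimal sketch using only the standard library:

        # Inside a notebook cell, the ! escape runs one shell command:
        #     !echo hi
        # From plain Python 2.x, subprocess does the same job:
        import subprocess

        output = subprocess.Popen(['echo', 'hi'],
                                  stdout=subprocess.PIPE).communicate()[0]
        print output   # -> hi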

  • Java: extending Object class

    - by Fabio F.
    Hello, I'm writing (well, completing) an "extension" of Java which will help role programming. I translate my code to Java code with javacc. My compiler adds some code to every declared class. Here's an example to be clearer:

        MyClass extends String implements ObjectWithRoles {  // implements... is added
            /* Added by me */
            public setRole(...){...}
            public ...
            /* End of stuff added */

            ... // MyClass stuff
        }

    It adds the implements clause and the necessary methods to EVERY SINGLE CLASS you declare. Quite rough, isn't it? It would be better if I wrote my methods in one class and had every class extend that... but what if a class already extends another class (just like the example)? I don't want to create a sort of wrapper that manages roles, because I don't want the programmer to have to know much more than Java, just a few new reserved words and their use. My idea was to extend java.lang.Object... but you can't. (Right?) Other ideas? I'm new here, but I follow this site, so thank you for reading and for all the answers you give! (I apologize for my English, I'm Italian.)

  • Syncing Magento database from development to production

    - by ringerce
    I use git for version control. I have development, staging and production environments. When I finish in development, I push to staging for review by the client. When approved, I push changes from staging to production. That works fine as long as there are no database changes. What happens if I install modules via Magento Connect on local development and they make database modifications? How would I push those changes up to the production server, since the production server is always changing?

    Edit: I wrote two shell scripts. One pulls the production database down to my development server, replaces the base URL with the development URL and updates my development DB accordingly. It also leaves the production SQL dump behind to be added to my git repo. I'm not really sure it's beneficial to keep the raw dumps in source control, but I'm going to try it out. The second script moves the development database up to staging and essentially performs the same operations as the first. Now when it comes time to move to production, I pull the updated production repo into the production server and allow Magento to do its thing. I also started using SQLyog recently; it has a database comparison wizard which gives me the differences between my development and production databases and lets me merge the changes in selectively. It always creates a migration script, which I added to source control as well. If anything goes wrong I can run the comparison to see if anything was missed. Does this sound like a decent workflow to you guys?
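
    For concreteness, a minimal Python sketch of the pull-down step described above. The host, database names and URLs are hypothetical placeholders, and the plain string replace is a simplification: Magento also stores serialized values, which a naive replace can corrupt, so treat this as the shape of the script rather than a finished tool.

        import subprocess

        # Dump the production database over SSH (placeholder host/db names).
        dump = subprocess.Popen(
            ['ssh', 'user@production.example.com', 'mysqldump', 'magento'],
            stdout=subprocess.PIPE).communicate()[0]

        # Keep the raw dump behind so it can be committed to the git repo.
        open('production.sql', 'w').write(dump)

        # Swap the base URL, then load into the development database.
        dump = dump.replace('http://www.example.com/', 'http://dev.example.local/')
        loader = subprocess.Popen(['mysql', 'magento_dev'], stdin=subprocess.PIPE)
        loader.communicate(dump)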

  • Vim hanging after parsing .vimrc file (even a blank one) on Solaris 10

    - by Seamus
    Hello all, I am having a problem with vim 7.2 hanging (for about 10 seconds) after it parses the .vimrc file. I had a similar issue in the past with tcsh on Linux, but it was resolved by setting TERM to xterm-color. The same does not resolve the issue here. Any idea what may be causing this?

        $ env
        USER=redacted
        LOGNAME=redacted
        HOME=/home/redacted
        PATH=redacted
        MAIL=/var/spool/mail/redacted
        SHELL=/bin/tcsh
        TZ=redacted
        LC_COLLATE=C
        SSH_CLIENT=redacted
        SSH_CONNECTION=redacted
        SSH_TTY=/dev/pts/11
        TERM=dtterm
        HOSTTYPE=sun4
        VENDOR=sun
        OSTYPE=solaris
        MACHTYPE=sparc
        SHLVL=1
        PWD=/home/redacted
        GROUP=redacted
        HOST=redacted
        REMOTEHOST=redacted
        QUOTA_CHECKED=1
        WHOAMI=redacted
        HOSTNAME=redacted
        EDITOR=vim
        PRINTER=redacted
        INFOPATH=/software/gnu/gcc/2.8.1/sun4os5.10/info:/software/gnu/sun4os5/info:/software/gnu/emacs/20.3.1/sun4os5/info:/software/gnuish/sun4os5/info:/usr/local/gnu/info
        MANPATH=/software/gnu/gcc/2.8.1/sun4os5.10/man:/software/gnu/sun4os5/man:/software/gnu/emacs/20.3.1/sun4os5/man:/opt/rational/clearcase/doc/man:/usr/openwin/man:/usr/share/man:/usr/local/man:/usr/dt/man:/software/gnuish/sun4os5/man
        H_ARCH=sun4
        H_ARCHOS=sun4os5
        H_ARCHOS_SUB=sun4os5.10
        H_OSTYPE=SUNOS
        H_OSREV=51000
        T_ARCH=sun4
        T_ARCHOS=sun4os5
        T_ARCHOS_SUB=sun4os5.10
        T_OSTYPE=SUNOS
        T_OSREV=51000
        X11HOME=/usr/local/x11/sun4os5
        OPENWINHOME=/usr/openwin
        LD_LIBRARY_PATH=/usr/dt/lib:/usr/openwin/lib
        MOTIFHOME=/usr/dt
        XINITRC=/usr/openwin/lib/Xinitrc
        GCC_REV=281

  • Ignore folders with certain filetypes

    - by gavin19
    I'm trying in vain to rewrite my old PowerShell script found here - "$_.extension -eq" not working as intended? - for Python. I have no Python experience or knowledge, and my 'script' is a mess, but it mostly works. The only thing missing is that I would like to be able to ignore folders which don't contain mp3s, or whichever filetype I specify. Here is what I have so far:

        import os, os.path, fnmatch

        path = raw_input("Path : ")
        for filename in os.listdir(path):
            if os.path.isdir(filename):
                os.chdir(filename)
                j = os.path.abspath(os.getcwd())
                mp3s = fnmatch.filter(os.listdir(j), '*.txt')
                if mp3s:
                    target = open("pls.m3u", 'w')
                    for filename in mp3s:
                        target.write(filename)
                        target.write("\n")
                os.chdir(path)

    All I would like (if possible) is that when the script loops through the folders, it ignores those which do NOT contain mp3s, and removes the 'pls.m3u'. I could only get the script to work properly if I created the 'pls.m3u' by default. The problem is that that creates a lot of empty 'pls.m3u' files in folders which contain only '.jpg' files, for example. You get the idea. I'm sure this script is blasphemous to Python users, but any help would be greatly appreciated.
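
    A hedged rewrite of the script above, using os.walk so there is no chdir bookkeeping: a pls.m3u is written only where matching files exist, and a leftover playlist is removed from folders that no longer match.

        import fnmatch
        import os

        def write_playlists(root, pattern='*.mp3'):
            # Walk every folder under root, including nested ones.
            for dirpath, dirnames, filenames in os.walk(root):
                tracks = fnmatch.filter(filenames, pattern)
                playlist = os.path.join(dirpath, 'pls.m3u')
                if tracks:
                    target = open(playlist, 'w')
                    try:
                        for name in tracks:
                            target.write(name + '\n')
                    finally:
                        target.close()
                elif os.path.exists(playlist):
                    os.remove(playlist)   # drop playlists left in non-matching folders

        write_playlists(raw_input("Path : "))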

  • Top navigation link in Magento that I can't get rid of

    - by Chris Baily
    I'm working on a new theme for an existing Magento installation, and I've got a rogue link. The last guy apparently decided to hard-code a link to the AW Blog extension he was using into the top navigation. See here: derm2go.com - the link is "articles". I'm getting rid of AW Blog in favor of an integrated WordPress install, but when I uninstall AW Blog, the site breaks (everything after the nav disappears) and I get this error in my logs:

        2011-11-19T08:56:19+00:00 ERR (3): Warning: include() [function.include]: Failed opening 'Mage/Blog/Helper/Data.php' for inclusion (include_path='/chroot/home/dermtwog/derm2go.com/html/app/code/local:/chroot/home/dermtwog/derm2go.com/html/app/code/community:/chroot/home/dermtwog/derm2go.com/html/app/code/core:/chroot/home/dermtwog/derm2go.com/html/lib:.:/usr/share/pear') in /chroot/home/dermtwog/derm2go.com/html/lib/Varien/Autoload.php on line 93

    I've searched everywhere I can think of that might affect the nav menu, and I don't know where the link is coming from - it's not in the CMS/static blocks, it's not in any of the default template files on the server (I deleted and reinstalled all of them), and it's showing up even when I change templates, so it's probably not in the sub-themes. Does anyone out there know of other files it could be hiding in? I'm assuming the last guy did a quick and dirty hack somehow - and maybe messed with core files? I would really rather not have to do a full reinstall.

  • How to (unit-)test a data-intensive PL/SQL application

    - by doom2.wad
    Our team wants to unit-test new code written under a running project extending an existing huge Oracle system. The system is written solely in PL/SQL and consists of thousands of tables and hundreds of stored procedure packages, mostly getting data from tables and/or inserting/updating other data. Our extension is no exception: most functions return data from a quite complex SELECT statement over many mutually bound tables (with a little added logic before returning it) or transform one complicated data structure into another (complicated in a different way). What is the best approach to unit-testing such code?

    There are no unit tests for the existing code base. To make things worse, only packages, triggers and views are source-controlled; table structures (including "alter table" statements and the necessary data transformations) are deployed via a channel other than version control. There is no way to change this within our project's scope. Maintaining a testing data set seems impossible, since new code is deployed to the production environment on a weekly basis, usually without prior notice, often changing the data structure (add a column here, remove one there). I'd be glad for any suggestion or reference to help us. Some team members are tired of even figuring out where to start, for our experience with unit-testing does not cover PL/SQL data-intensive legacy systems (only those "from-the-book" greenfield Java projects).
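
    One commonly suggested pattern for this situation, sketched here under stated assumptions (Python with the cx_Oracle driver; the order_pkg package, orders table and credentials are hypothetical): each test creates the rows it needs inside its own transaction, calls the procedure, asserts on the outcome, and rolls back. That way the suite never depends on a maintained fixture set that weekly production deploys could silently break.

        import unittest
        import cx_Oracle

        class OrderPackageTest(unittest.TestCase):
            def setUp(self):
                self.conn = cx_Oracle.connect('scott/tiger@testdb')
                self.cur = self.conn.cursor()
                # Create just the rows this test needs, inside the transaction.
                self.cur.execute(
                    "INSERT INTO orders (id, status) VALUES (:1, :2)", (1, 'NEW'))

            def test_close_order(self):
                self.cur.callproc('order_pkg.close_order', [1])
                self.cur.execute("SELECT status FROM orders WHERE id = :1", (1,))
                self.assertEqual(self.cur.fetchone()[0], 'CLOSED')

            def tearDown(self):
                self.conn.rollback()   # leave the database exactly as found
                self.conn.close()

        if __name__ == '__main__':
            unittest.main()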

  • GitHub solution for personal repo

    - by Luke Maurer
    So I've got my private SVN repo on my home server, and it has maybe 30 different modules thrown together in it, ranging from abortive throw-away larks to a few endeavors that might actually go somewhere someday. But a recent filesystem failure (BTW, never ever EVER use XFS without a battery-backed hardware RAID) has me spooked and thinking of using a DVCS for all that. I've also just had quite the swig of the Git koolaid, and I've been working with GitHub of late, so that's where I'm looking right now. Of course, it would be silly to shell out major cash for a separate private Git repo for every little project, and I don't want to have to be selective about what I throw up there (I love all my children :-D ), so I'll have to be somewhat creative about this. I can happily use SSH to my home box to use Git the way I've been using SVN, and I'm thinking from there I could amalgamate everything into, say, a big project with 30 submodules, which I then push to GitHub. What'd be a sane way to set this up? Does using submodules sound feasible? How do I sync it all to my private GitHub repo? Cron job? Git hook? I'd love to hear it if anyone's done something similar. I'm not really married to Git or GitHub, so a sufficiently compelling feature of another solution might sway me. But if your answer does involve a different system (especially a different VCS), be advised it'll be a tougher sell :-)
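
    One concrete shape for the cron-job idea, offered as a sketch under assumptions (a flat directory of sub-repos registered as submodules of an umbrella repo, and a GitHub remote named 'github' in each):

        import os
        import subprocess

        UMBRELLA = os.path.expanduser('~/projects')   # hypothetical umbrella repo

        # Push every sub-repo first, so the umbrella's submodule pointers
        # never reference commits GitHub does not have yet.
        for name in sorted(os.listdir(UMBRELLA)):
            path = os.path.join(UMBRELLA, name)
            if os.path.isdir(os.path.join(path, '.git')):
                subprocess.check_call(['git', 'push', 'github', 'master'], cwd=path)

        subprocess.check_call(['git', 'add', '-A'], cwd=UMBRELLA)
        # Plain call(): committing exits non-zero when nothing changed, and
        # that should not abort the sync.
        subprocess.call(['git', 'commit', '-m', 'sync submodules'], cwd=UMBRELLA)
        subprocess.check_call(['git', 'push', 'github', 'master'], cwd=UMBRELLA)

    A post-commit hook on the home server would achieve the same with less latency; the cron variant is just simpler to reason about when 30 repos are involved.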

  • Mercurial: applying changes one by one to resolve merging issues

    - by Webinator
    I recently tried to merge a series of changesets and encountered a huge number of merging issues. Hence I'd like to try to apply each changeset, in order, one by one, to make the merging issues easier to manage. I'll give an example with 4 problematic changesets (514, 515, 516 and 517) [in my real case, I've got a few more than that]:

        o  changeset: 517
        |
        o  changeset: 516
        |
        o  changeset: 515
        |
        o  changeset: 514
        |
        | @  changeset: 513
        | |
        | o  changeset: 512
        | |
        | o
        | |
        | o
        | |
        | o
        |/
        o  changeset: 508

    Note that I've got clones of my repos from before pulling the problematic changesets. When I pull the 4 changesets and try a merge, things are too complicated to resolve. So I want to pull only changeset 514, then merge. Then, once I solve the merging issues, pull only changeset 515 and apply it, etc. (I know the numbering will change; this is not my problem here.) How am I supposed to do that, preferably without using any extension? (Because I'd like to understand Mercurial and what I'm doing better.) Is the way to go to generate a patch between 508 and 514 and apply that patch? (If so, how would I generate that patch?) Answers including concrete command-line example(s) most welcome :)
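
    For what it's worth, hg pull accepts -r to fetch a single changeset (plus its ancestors) at a time, which is exactly the incremental scheme described. A minimal driver, sketched in Python with a hypothetical source URL; note that revision numbers are local to a repository, so changeset hashes from hg log on the source are the safer identifiers, and hg merge exits non-zero while conflicts remain unresolved, which stops the loop and hands control back for manual fixing:

        import subprocess

        SOURCE = 'http://example.com/repo'   # hypothetical source repository

        for rev in ('514', '515', '516', '517'):
            subprocess.check_call(['hg', 'pull', '-r', rev, SOURCE])
            subprocess.check_call(['hg', 'merge'])
            subprocess.check_call(['hg', 'commit', '-m', 'merged changeset %s' % rev])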

  • Error installing FeedZirra

    - by Gautam
    Hi, I am new to Ruby on Rails. I am excited about feed parsing, but when I install FeedZirra I get this error. I use Windows 7 and Ruby 1.8.7. Please help. Thanks in advance.

        C:\Ruby187>gem sources -a http://gems.github.com
        http://gems.github.com added to sources

        C:\Ruby187>gem install pauldix-feedzirra
        Building native extensions.  This could take a while...
        ERROR:  Error installing pauldix-feedzirra:
                ERROR: Failed to build gem native extension.

        C:/Ruby187/bin/ruby.exe extconf.rb
        checking for curl-config... no
        checking for main() in -lcurl... no
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers.  Check the mkmf.log file for more
        details.  You may need configuration options.

        Provided configuration options:
                --with-opt-dir
                --without-opt-dir
                --with-opt-include
                --without-opt-include=${opt-dir}/include
                --with-opt-lib
                --without-opt-lib=${opt-dir}/lib
                --with-make-prog
                --without-make-prog
                --srcdir=.
                --curdir
                --ruby=C:/Ruby187/bin/ruby
                --with-curl-dir
                --without-curl-dir
                --with-curl-include
                --without-curl-include=${curl-dir}/include
                --with-curl-lib
                --without-curl-lib=${curl-dir}/lib
                --with-curllib
                --without-curllib
        extconf.rb:12: Can't find libcurl or curl/curl.h (RuntimeError)
        Try passing --with-curl-dir or --with-curl-lib and --with-curl-include
        options to extconf.

        Gem files will remain installed in C:/Ruby187/lib/ruby/gems/1.8/gems/taf2-curb-0.5.4.0 for inspection.
        Results logged to C:/Ruby187/lib/ruby/gems/1.8/gems/taf2-curb-0.5.4.0/ext/gem_make.out

  • How to efficiently compare the sign of two floating-point values while handling negative zeros

    - by François Beaune
    Given two floating-point numbers, I'm looking for an efficient way to check if they have the same sign, given that if either of the two values is zero (+0.0 or -0.0), they should be considered to have the same sign. For instance:

        SameSign(1.0, 2.0)   should return true
        SameSign(-1.0, -2.0) should return true
        SameSign(-1.0, 2.0)  should return false
        SameSign(0.0, 1.0)   should return true
        SameSign(0.0, -1.0)  should return true
        SameSign(-0.0, 1.0)  should return true
        SameSign(-0.0, -1.0) should return true

    A naive but correct implementation of SameSign in C++ would be:

        bool SameSign(float a, float b)
        {
            if (fabs(a) == 0.0f || fabs(b) == 0.0f)
                return true;
            return (a >= 0.0f) == (b >= 0.0f);
        }

    Assuming the IEEE floating-point model, here's a variant of SameSign that compiles to branchless code (at least with Visual C++ 2008):

        bool SameSign(float a, float b)
        {
            int ia = binary_cast<int>(a);
            int ib = binary_cast<int>(b);
            int az = (ia & 0x7FFFFFFF) == 0;
            int bz = (ib & 0x7FFFFFFF) == 0;
            int ab = (ia ^ ib) >= 0;
            return (az | bz | ab) != 0;
        }

    with binary_cast defined as follows:

        template <typename Target, typename Source>
        inline Target binary_cast(Source s)
        {
            union
            {
                Source m_source;
                Target m_target;
            } u;
            u.m_source = s;
            return u.m_target;
        }

    I'm looking for two things: a faster, more efficient implementation of SameSign, using bit tricks, FPU tricks or even SSE intrinsics; and an efficient extension of SameSign to three values.
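
    For readers who want to sanity-check the bit-level logic without a C++ toolchain, here is a direct Python port of the branchless variant, intended purely as a verification aid rather than a performance suggestion:

        import struct

        def same_sign(a, b):
            # Reinterpret each float's bits as a signed 32-bit integer.
            ia = struct.unpack('<i', struct.pack('<f', a))[0]
            ib = struct.unpack('<i', struct.pack('<f', b))[0]
            az = (ia & 0x7FFFFFFF) == 0   # a is +0.0 or -0.0
            bz = (ib & 0x7FFFFFFF) == 0   # b is +0.0 or -0.0
            ab = (ia ^ ib) >= 0           # sign bits agree
            return az or bz or ab

        # The seven cases listed above:
        assert same_sign(1.0, 2.0) and same_sign(-1.0, -2.0)
        assert not same_sign(-1.0, 2.0)
        assert same_sign(0.0, 1.0) and same_sign(0.0, -1.0)
        assert same_sign(-0.0, 1.0) and same_sign(-0.0, -1.0)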

  • Overriding content_type for Rails Paperclip plugin

    - by Fotios
    I think I have a bit of a chicken-and-egg problem. I would like to set the content_type of a file uploaded via Paperclip. The problem is that the default content_type is only based on the extension, but I'd like to base it on another module. I seem to be able to set the content_type with before_post_process:

        class Upload < ActiveRecord::Base
          has_attached_file :upload
          before_post_process :foo

          def foo
            logger.debug "Changing content_type"
            # This works
            self.upload.instance_write(:content_type, "foobar")
            # This fails because the file does not actually exist yet
            self.upload.instance_write(:content_type, file_type(self.upload.path))
          end

          # Returns the filetype based on the file command (assume it works)
          def file_type(path)
            return `file -ib '#{path}'`.split(/;/)[0]
          end
        end

    But... I cannot base the content type on the file, because Paperclip doesn't write the file until after_create. And I cannot seem to set the content_type after it has been saved, or with the after_create callback (even back in the controller). So I would like to know if I can somehow get access to the actual file object (assume there are no processors doing anything to the original file) before it is saved, so that I can run the file_type command on that. Or is there a way to modify the content_type after the objects have been created?

  • Selenium RC: How to assert an element contains text and any number of other elements?

    - by Andrew
    I am using Selenium RC and the PHPUnit Selenium extension. I am trying to assert that a table row exists, with the content that I expect, while keeping the check flexible enough to be reused. Thanks to this article, I figured out a way to select the table row, asserting that it contains all the text that I expect:

        $this->assertElementPresent("css=tr:contains(\"$text1\"):contains(\"$text2\")");

    But now I would like to assert that a specific radio button also appears in the table row. Here's the element that I would like to assert is within the table row. (I am currently asserting that it exists using XPath. I'm sure I could do the same using CSS.)

        $this->assertElementPresent("//input[@type='radio'][@name='Contact_ID'][@value='$contactId']");

    Currently I have a function that can assert that a table row exists which contains any number of texts, but I would like to add the ability to specify any number of elements and have it assert that the table row contains them. How can I achieve this?

        /**
         * Provides the ability to assert that all of the text appears in the same table row.
         * @param array $texts
         */
        public function assertTextPresentInTableRow(array $texts)
        {
            $locator = 'css=tr';
            foreach ($texts as $text) {
                $locator .= ":contains(\"$text\")";
            }
            $this->assertElementPresent($locator);
        }

  • DRYing out implementation of ICloneable in several classes

    - by Sarah Vessels
    I have several different classes that I want to be cloneable: GenericRow, GenericRows, ParticularRow, and ParticularRows. The class hierarchy is as follows: GenericRow is the parent of ParticularRow, and GenericRows is the parent of ParticularRows. Each class implements ICloneable because I want to be able to create deep copies of instances of each. I find myself writing the exact same code for Clone() in each class:

        object ICloneable.Clone()
        {
            object clone;
            using (var stream = new MemoryStream())
            {
                var formatter = new BinaryFormatter();
                // Serialize this object
                formatter.Serialize(stream, this);
                stream.Position = 0;
                // Deserialize to another object
                clone = formatter.Deserialize(stream);
            }
            return clone;
        }

    I then provide a convenience wrapper method, for example in GenericRows:

        public GenericRows Clone()
        {
            return (GenericRows)((ICloneable)this).Clone();
        }

    I am fine with the convenience wrapper methods looking about the same in each class, because it's very little code and it does differ from class to class by return type, cast, etc. However, ICloneable.Clone() is identical in all four classes. Can I abstract this somehow so it is only defined in one place? My concern is that if I made it some utility class/object extension method, it would not correctly make a deep copy of the particular instance I want copied. Is this a good idea anyway?

  • iOS: Assignment to iVar in Block (ARC)

    - by manmal
    I have a readonly property isFinished in my interface file:

        typedef void (^MyFinishedBlock)(BOOL success, NSError *e);

        @interface TMSyncBase : NSObject {
            BOOL isFinished_;
        }

        @property (nonatomic, readonly) BOOL isFinished;

    and I want to set it to YES in a block at some point later, without creating a retain cycle to self:

        - (void)doSomethingWithFinishedBlock:(MyFinishedBlock)theFinishedBlock {

            __weak MyClass *weakSelf = self;

            MyFinishedBlock finishedBlockWrapper = ^(BOOL success, NSError *e) {
                [weakSelf willChangeValueForKey:@"isFinished"];
                weakSelf -> isFinished_ = YES;
                [weakSelf didChangeValueForKey:@"isFinished"];
                theFinishedBlock(success, e);
            };

            self.finishedBlock = finishedBlockWrapper; // finishedBlock is a class ext. property
        }

    I'm unsure that this is the right way to do it (I hope I'm not embarrassing myself here ^^). Will this code leak, or break, or is it fine? Perhaps there is an easier way I have overlooked?

    SOLUTION: Thanks to the answers below (especially Krzysztof Zablocki), I was shown the way to go here: define isFinished as a readwrite property in the class extension (somehow I missed that one) so no direct ivar assignment is needed, and change the code to:

        - (void)doSomethingWithFinishedBlock:(MyFinishedBlock)theFinishedBlock {

            __weak MyClass *weakSelf = self;

            MyFinishedBlock finishedBlockWrapper = ^(BOOL success, NSError *e) {
                MyClass *strongSelf = weakSelf;
                strongSelf.isFinished = YES;
                theFinishedBlock(success, e);
            };

            self.finishedBlock = finishedBlockWrapper; // finishedBlock is a class ext. property
        }

  • MVC view model IntelliSense / compile error

    - by Marty Trenouth
    I have one library with my ORM and am working with an MVC application. I have a problem where the pages won't compile because the views can't see the model's properties (which are inherited from lower-level base classes). The system throws a compile error saying that

        'object' does not contain a definition for 'ID' and no extension method 'ID' accepting a first argument of type 'object' could be found (are you missing a using directive or an assembly reference?)

    implying that the view is not seeing the model. In the controller I have full access to the model, and I have checked the Inherits portion of the view to validate that the correct type is being passed.

    Controller:

        return View(new TeraViral_Blog());

    View:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<com.models.TeraViral_Blog>" %>

        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            Index2
        </asp:Content>

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <h2>Index2</h2>
            <fieldset>
                <legend>Fields</legend>
                <p>
                    ID: <%= Html.Encode(Model.ID) %>
                </p>
            </fieldset>
        </asp:Content>

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive to tar a lot (up to 60 TB) of big files (usually 30 to 40 GB each). I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, twice for tar'ing) is more or less a necessity to achieve a very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I need some way to read a file, feeding a checksumming tool on one side and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (this sample shell code does just that) but a checksum for each individual file in the archive. I've studied the GNU tar, Pax and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc. simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning?
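
    Setting aside the author's (fair) performance caveat about scripting languages, the single-read structure itself is easy to show: wrap each file in a reader that updates a digest as tar consumes it, so every byte is read from disk exactly once. A minimal Python sketch, assuming regular files only; a tape target would open the archive in streaming mode ('w|') instead:

        import hashlib
        import tarfile

        class HashingReader(object):
            """File-like wrapper that feeds a digest as tar reads the file."""
            def __init__(self, path, digest):
                self.fileobj = open(path, 'rb')
                self.digest = digest
            def read(self, size=-1):
                data = self.fileobj.read(size)
                self.digest.update(data)
                return data
            def close(self):
                self.fileobj.close()

        def tar_with_checksums(paths, archive_name, sums_name):
            tar = tarfile.open(archive_name, 'w')
            sums = open(sums_name, 'w')
            for path in paths:
                reader = HashingReader(path, hashlib.md5())
                tar.addfile(tar.gettarinfo(path), reader)   # one pass per file
                reader.close()
                sums.write('%s  %s\n' % (reader.digest.hexdigest(), path))
            sums.close()
            tar.close()

        # tar_with_checksums(['big1.bin', 'big2.bin'], 'backup.tar', 'backup.md5')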

  • Error in pom file in Maven project, after importing into Eclipse

    - by dipti
    I am actually new to the Maven framework. I already have a Maven project. I installed the Maven plugin etc. into my Eclipse IDE from http://m2eclipse.sonatype.org/sites/m2e, then I imported my project and enabled dependencies. But the project is showing too many errors; the pom.xml itself is showing errors. The errors are "Project Build Error: unknown packaging: apk", "Project Build Error: unresolvable build extension: plugin" etc.

    My error area is:

        <project xmlns="http://maven.apache.org/POM/4.0.0"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
            <modelVersion>4.0.0</modelVersion>

            <groupId>com.nbc.cnbc.android</groupId>
            <artifactId>android.domain.tests</artifactId>
            <version>1.0.0</version>
            <packaging>apk</packaging>

            <parent>
                <groupId>com.nbc.cnbc.android</groupId>
                <artifactId>android.domain.parent</artifactId>
                <version>1.0.0</version>
                <relativePath>../android.domain.parent/pom.xml</relativePath>
            </parent>

            <name>android.domain.tests</name>
            <url>http://maven.apache.org</url>

    Could it be because the URL specified in the last line is different? Any ideas why this could be happening? Any reply is highly appreciated. Thanks a lot in advance!

    Regards,
    Dipti

  • Update SQL Server 2000 to SQL Server 2008: Benefits please?

    - by Ciaran Archer
    Hi there. I'm looking for the benefits of upgrading from SQL Server 2000 to 2008. I was wondering:

    - What database features can we leverage with 2008 that we can't now?
    - What new T-SQL features can we look forward to using?
    - What performance benefits can we expect to see?
    - What else will make management go for it?

    And the converse:

    - What problems can we expect to encounter?
    - What other problems have people found when migrating?
    - Why fix something that isn't (technically) broken?

    We work in a Java shop, so any .NET / CLR stuff won't rock our world. We also use Eclipse as our main development environment, so any integration with Visual Studio won't be a plus. We do use SQL Server Management Studio, however.

    Some background: our main database machine is a 32-bit Dell with Intel Xeon MP CPUs at 2.0 GHz and 40 GB of RAM with Physical Address Extension, running Windows Server 2003 Enterprise Edition. We will not be changing our hardware. Our databases in total are under a TB, with some having more than 200 tables. But they are busy, and during busy times we see 60-80% CPU utilisation. Apart from the fact that SQL Server 2000 is coming close to end of life, why should we upgrade? Any and all contributions are appreciated!

  • Why can't I simply copy installed Perl modules to other machines?

    - by pistacchio
    Being very new to Perl but not to dynamic languages, I'm a bit surprised at how far from straightforward the management of modules is. Sure, cpan X does theoretically work, but I'm working on the same project from three different machines and OSs (at work, at home, and testing in an external environment):

    - At work (Windows 7), I have problems using cpan because our firewall makes FTP unusable.
    - At home (Mac OS X), it does work.
    - In the external environment (Linux CentOS), it worked after hours of effort, because I don't have root access and I had to configure cpan to operate as a non-root user.
    - I've also tried on another server where I have access. Where the previous external environment is a VPS with shell access, this one is a cheap shared hosting where I have no way to install modules other than the ones pre-installed.

    At the moment I still can't install Template under Windows. I've seen that as an alternative I could compile it, and I've also tried ActiveState's PPM, but the module doesn't exist there. Now, my perplexity is about Perl being a dynamic language. I've had all these kinds of problems while working, for example, with C, where I had to compile all the libraries for each platform, but I thought that with Perl the approach would be very similar to Python's or PHP's, where in 90% of cases copying the module into a directory and importing it simply works.

    So, my questions: if Perl's modules are written in Perl, why will the copy/paste approach not work? If some modules (or some parts of them) have to be compiled, how do I see on CPAN whether a module is Perl-only or relies on compiled libraries? Isn't there a way to download the module (tar, zip...) and use cpan to deploy it? This would solve my problem under Windows.

  • How to specialize a type-parameterized argument to multiple different types in Scala?

    - by jmount
    I need a back-check (please). In an article ( http://www.win-vector.com/blog/2010/06/automatic-differentiation-with-scala/ ) I just wrote, I stated my belief that in Scala you cannot specify a function that takes as an argument a function with an unbound type parameter. What I mean is that you can write:

        def g(f:Array[Double]=>Double,Array[Double]):Double

    but you cannot write something like:

        def g(f[Y]:Array[Y]=>Double,Array[Double]):Double

    because Y is not known. The intended use is that inside g() I will specialize f[Y] to multiple different types at different times. You can write:

        def g[Y](f:Array[Y]=>Double,Array[Double]):Double

    but then f() is of a single type per call to g() (which is exactly what we do not want). However, you can get all of the equivalent functionality by using a trait extension instead of insisting on passing around a function. What I advocated in my article was:

    1) Creating a trait that imitates the structure of Scala's Function1 trait, something like:

        abstract trait VectorFN {
            def apply[Y](x:Array[Y]):Y
        }

    2) Declaring def g(f:VectorFN,Double):Double (using the trait as the type).

    This works (people here on StackOverflow helped me find it, and I am happy with it), but am I misrepresenting Scala by missing an even better solution?

  • How do you move an admin menu item in Magento

    - by Josh Pennington
    I currently have an extension that I created, and it currently sits inside its own top-level menu. I would like to move it so that the item appears inside the Customers menu. Does anyone know how to do this? It looks like this is handled inside the extension's config.xml file. The code that I have for it right now is as follows:

        <menu>
            <testimonials module="testimonials">
                <title>Testimonials</title>
                <sort_order>71</sort_order>
                <children>
                    <items module="testimonials">
                        <title>Manage Items</title>
                        <sort_order>0</sort_order>
                        <action>testimonials/adminhtml_testimonials</action>
                    </items>
                </children>
            </testimonials>
        </menu>

    I tried changing the title element to Customers, and it just created a duplicate Customers menu. Does anyone have any ideas?

    Thanks,
    Josh Pennington

  • How to do buffered intersection checks on an IPoint?

    - by Quigrim
    How would I buffer an IPoint to do an intersection check using IRelationalOperator? I have, for argument's sake:

        IPoint p1 = xxx;
        IPoint p2 = yyy;
        IRelationalOperator rel1 = (IRelationalOperator)p1;
        if (rel1.Intersects(p2))
            // Do something

    But now I want to add a tolerance to my check, so I assume the right way to do that is by buffering either p1 or p2. Right? How do I add such a buffer?

    Note: the Intersects method I am using is an extension method I wrote to simplify my code. Here it is:

        /// <summary>
        /// Returns true if the IGeometry is intersected.
        /// This method negates the Disjoint method.
        /// </summary>
        /// <param name="relOp">The rel op.</param>
        /// <param name="other">The other.</param>
        /// <returns></returns>
        public static bool Intersects(
            this IRelationalOperator relOp,
            IGeometry other)
        {
            return (!relOp.Disjoint(other));
        }
