Search Results

Search found 27581 results on 1104 pages for 'execute command'.

  • Maven: properties not being substituted

    - by jobrahms
    I'm using a Maven plugin for install4j in my project, located here. That plugin lets you pass variables to install4j using the <compilerVariables> section. Here's the relevant section of my pom:

        <plugin>
            <groupId>com.google.code.maven-install4j</groupId>
            <artifactId>maven-install4j-plugin</artifactId>
            <version>0.1.1</version>
            <configuration>
                <executable>${devenv.install4jc}</executable>
                <configFile>${basedir}/newinstaller/ehd.install4j</configFile>
                <releaseId>${project.version}</releaseId>
                <attach>false</attach>
                <skipOnMissingExecutable>false</skipOnMissingExecutable>
                <compilerVariables>
                    <property>
                        <name>m2_home</name>
                        <value>${settings.localRepository}</value>
                    </property>
                </compilerVariables>
            </configuration>
        </plugin>

    The problem is that ${settings.localRepository} is not being substituted with the actual directory when I run the plugin. Here's the command line script that install4j is generating:

        [INFO] Running the following command for install4j compile:
        /bin/sh -c /home/zach/install4j/bin/install4jc --release=9.1-SNAPSHOT --destination="/home/zach/projects/java/ehdtrunk/target/install4j" -D m2_home=${settings.localRepository} /home/zach/projects/java/ehdtrunk/newinstaller/ehd.install4j

    Is this the fault of the plugin? If so, what needs to change to allow the substitution to happen?
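
    As a hedged aside (not part of the original question): one quick way to check whether the expression itself resolves is the maven-help-plugin; if the command below prints a real path, the substitution problem is specific to how this plugin reads its <compilerVariables> values.

        mvn help:evaluate -Dexpression=settings.localRepository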

    Read the article

  • How to convert a table column to another data type

    - by holden
    I have a column of type varchar in my Postgres database which I meant to be integer... and now I want to change it. Unfortunately this doesn't seem to work in my Rails migration:

        change_column :table1, :columnB, :integer

    So I tried doing this:

        execute 'ALTER TABLE "table1" ALTER COLUMN "columnB" TYPE integer USING CAST(columnB AS INTEGER)'

    but the cast doesn't work in this instance because some of the column values are null... any ideas?
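
    A minimal sketch of one possible fix (not from the original question), assuming the failing values are actually empty strings rather than true NULLs - NULL itself casts to integer without error - and that the case-sensitive column name needs quoting inside the USING clause as well:

        -- Treat empty strings as NULL, then cast the rest.
        ALTER TABLE "table1"
            ALTER COLUMN "columnB" TYPE integer
            USING CAST(NULLIF("columnB", '') AS integer);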

    Read the article

  • Question about DBD::CSV Statement Functions

    - by sid_com
    From the SQL::Statement::Functions documentation:

        Function syntax
        When using SQL::Statement/SQL::Parser directly to parse SQL, functions (either built-in or user-defined) may occur anywhere in a SQL statement that values, column names, table names, or predicates may occur. When using the modules through a DBD or in any other context in which the SQL is both parsed and executed, functions can occur in the same places except that they can not occur in the column selection clause of a SELECT statement that contains a FROM clause.

        # valid for both parsing and executing
        SELECT MyFunc(args);
        SELECT * FROM MyFunc(args);
        SELECT * FROM x WHERE MyFuncs(args);
        SELECT * FROM x WHERE y < MyFuncs(args);

        # valid only for parsing (won't work from a DBD)
        SELECT MyFunc(args) FROM x WHERE y;

    Reading this I would expect that the first SELECT statement of my example shouldn't work and the second should, but it is quite the contrary.

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use 5.010;
        use DBI;

        open my $fh, '>', 'test.csv' or die $!;
        say $fh "id,name";
        say $fh "1,Brown";
        say $fh "2,Smith";
        say $fh "7,Smith";
        say $fh "8,Green";
        close $fh;

        my $dbh = DBI->connect ( 'dbi:CSV:', undef, undef, { RaiseError => 1, f_ext => '.csv', } );

        my $table = 'test';

        say "\nSELECT 1";
        my $sth = $dbh->prepare ( "SELECT MAX( id ) FROM $table WHERE name LIKE 'Smith'" );
        $sth->execute ();
        $sth->dump_results();

        say "\nSELECT 2";
        $sth = $dbh->prepare ( "SELECT * FROM $table WHERE id = MAX( id )" );
        $sth->execute ();
        $sth->dump_results();

    outputs:

        SELECT 1
        '7'
        1 rows

        SELECT 2
        Unknown function 'MAX' at /usr/lib/perl5/site_perl/5.10.0/SQL/Parser.pm line 2893.
        DBD::CSV::db prepare failed: Unknown function 'MAX' at /usr/lib/perl5/site_perl/5.10.0/SQL/Parser.pm line 2894. [for Statement "SELECT * FROM test WHERE id = MAX( id )"] at ./so_3.pl line 30.
        DBD::CSV::db prepare failed: Unknown function 'MAX' at /usr/lib/perl5/site_perl/5.10.0/SQL/Parser.pm line 2894. [for Statement "SELECT * FROM test WHERE id = MAX( id )"] at ./so_3.pl line 30.

    Could someone explain this behavior to me?
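
    A hedged workaround sketch (an assumption, not from the original post): if SQL::Statement simply won't accept an aggregate inside a WHERE clause through a DBD, the aggregate can be fetched first and fed to a second statement as a bind value. Table and column names follow the example above.

        # Two statements instead of MAX() inside WHERE.
        my ($max_id) = $dbh->selectrow_array("SELECT MAX( id ) FROM $table");
        my $rows = $dbh->selectall_arrayref(
            "SELECT * FROM $table WHERE id = ?",
            undef,
            $max_id,    # bind the previously fetched maximum
        );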

    Read the article

  • cos result incorrect

    - by suk
    I import "math.h". I can use the cos function, but when I execute cos(0.321139585333178) the result is 0.948876. If I use the Calculator on my Mac or a normal calculator, the result is 0.999984292347418. Can anyone help me solve this problem?
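
    Most likely this is not a bug but a units mismatch: C's cos() expects radians, while a calculator in degree mode treats 0.321139585333178 as degrees. A small sketch under that assumption:

        #include <math.h>
        #include <stdio.h>

        int main(void) {
            double x = 0.321139585333178;

            /* Treat x as radians: matches the 0.948876 the code prints. */
            printf("cos(%f rad) = %f\n", x, cos(x));

            /* Treat x as degrees: convert first; matches the calculator's 0.999984... */
            printf("cos(%f deg) = %f\n", x, cos(x * M_PI / 180.0));
            return 0;
        }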

    Read the article

  • Executing Password Change over Ruby Net-SSH

    - by tesmar
    Hi all, I am looking to execute a password change over Net::SSH and this code seems to hang:

        Net::SSH.start(server_ip, "user", :verbose => :debug) do |session|
          session.process.popen3("ls") do |input, output, error|
            ["old_pass", "test", "test"].each do |x|
              input.puts x
            end
          end
        end

    I know the connection works because using a simple exec I can get the output from ls on the remote server, but this hangs. Any ideas? The last message from debug is that the public key succeeded.
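
    A hedged alternative sketch, assuming the newer Net::SSH 2.x channel API is available (the popen3 call above comes from the old 1.x interface): request a PTY, run the password-change command, and answer the prompts from on_data. The command name and prompt patterns below are placeholders to adjust.

        Net::SSH.start(server_ip, "user") do |ssh|
          ssh.open_channel do |channel|
            channel.request_pty                     # passwd usually insists on a TTY
            channel.exec("passwd") do |ch, success|
              raise "could not start passwd" unless success

              ch.on_data do |c, data|
                # Very naive prompt handling -- match the real prompts on your system.
                c.send_data("old_pass\n") if data =~ /current|old/i
                c.send_data("test\n")     if data =~ /new/i
              end
            end
          end
          ssh.loop
        end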

    Read the article

  • C check if file exists, including on PATH

    - by Gary
    Hi, Given a filename in C, I want to determine whether the file exists and has execute permission on it. All I've got currently is:

        if( access( filename, X_OK ) != 0 ) {

    But this won't search the PATH for files and will also match directories (which I don't want). Could anyone please help?
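
    A minimal sketch of one way to do it on POSIX systems (an illustration, not from the original post): walk the PATH entries yourself and use stat() to reject anything that isn't a regular file before checking X_OK. Error handling is kept to a minimum.

        #include <limits.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/stat.h>
        #include <unistd.h>

        /* Returns 1 if `name` is an executable regular file in some PATH directory. */
        static int on_path(const char *name) {
            const char *env = getenv("PATH");
            if (env == NULL)
                return 0;

            char *path = strdup(env);   /* strtok modifies its argument */
            int found = 0;

            for (char *dir = strtok(path, ":"); dir != NULL; dir = strtok(NULL, ":")) {
                char candidate[PATH_MAX];
                snprintf(candidate, sizeof candidate, "%s/%s", dir, name);

                struct stat sb;
                if (stat(candidate, &sb) == 0 && S_ISREG(sb.st_mode) &&
                    access(candidate, X_OK) == 0) {
                    found = 1;
                    break;
                }
            }

            free(path);
            return found;
        }

        int main(int argc, char **argv) {
            if (argc > 1)
                printf("%s: %s\n", argv[1], on_path(argv[1]) ? "found" : "not found");
            return 0;
        }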

    Read the article

  • SQLAlchemy returns tuple not dictionary

    - by Ivan
    Hi everyone, I've updated SQLAlchemy to 0.6 but it broke everything. I've noticed it now returns a tuple, not a dictionary. Here's a sample query:

        query = session.query(User.id, User.username, User.email).filter(and_(User.id == id, User.username == username)).limit(1)
        result = session.execute(query).fetchone()

    This piece of code used to return a dictionary in 0.5. My question is: how can I get a dictionary back?
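
    A hedged sketch (an assumption about the 0.6 result rows, not from the original post): the returned row still exposes its column names through keys(), so an ordinary dict can be rebuilt from it. The names here follow the query above.

        row = session.execute(query).fetchone()

        # The row iterates over its values and exposes column names via keys(),
        # so zipping the two back together gives a plain dict.
        result = dict(zip(row.keys(), row))
        # e.g. {'id': 1, 'username': 'ivan', 'email': 'ivan@example.com'}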

    Read the article

  • XQuery performance in SQL Server

    - by Carl Hörberg
    Why does this quite simple XQuery take 10 minutes to execute in SQL Server (the 2 MB XML document is stored in one column), compared to 14 seconds when querying the file directly in Oxygen?

        SELECT model.query('
            declare default element namespace "http://www.sbml.org/sbml/level2";
            for $all_species in //species,
                $all_reactions in //reaction
            where data($all_species/@compartment)="plasma_membrane"
              and $all_reactions/listOfReactants/speciesReference/@species=$all_species/@id
            return <result>{data($all_species/@id)}</result>')
        FROM sbml;
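
    One common mitigation, sketched as an assumption (the original post doesn't say whether the column is indexed): a primary XML index saves SQL Server from shredding the whole 2 MB document on every query, and replacing the // descendant axis with explicit paths usually helps the optimizer as well. The index names below are made up, and the table needs a clustered primary key for this to work.

        -- Hypothetical index names; table/column names follow the query above.
        CREATE PRIMARY XML INDEX PXML_sbml_model ON sbml (model);

        -- Optional secondary index aimed at value comparisons in predicates.
        CREATE XML INDEX IXML_sbml_model_value ON sbml (model)
            USING XML INDEX PXML_sbml_model FOR VALUE;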

    Read the article

  • Add Access-Control-Allow-Origin to header in PHP

    - by SANDeveloper
    I am trying to work around a CORS restriction in a WebGL application. I have a web service which resolves URLs and returns images. Since this web service is not CORS enabled, I can't use the returned images as textures. I was planning to:

        Write a PHP script to handle image requests
        Image requests would be sent through the query string as a url parameter
        The PHP script will:
            Call the web service with the query string url
            Fetch the image response (the web service returns a content-type:image response)
            Add the CORS header (Access-Control-Allow-Origin) to the response
            Send the response to the browser

    I tried to implement this using a variety of techniques including cURL, HTTPResponse, plain var_dump etc., but got stuck at some point in each. So I have 2 questions:

    1. Is the approach good enough?
    2. Considering the approach is good enough: I made the most progress with cURL. I could get the image header and data with:

        $ch = curl_init();
        $url = $_GET["url"];
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_HEADER, true);
        curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type:image/jpeg'));
        //Execute request
        $response = curl_exec($ch);
        //get the default response headers
        $headers = curl_getinfo($ch);
        //close connection
        curl_close($ch);

    But this doesn't actually set the response content-type to image/jpeg. It dumps the header + response into a new response of content-type text/html and displays the header and the image BLOB data in the browser. How do I get it to send the response in the format I want?

    Managed to get it working:

        $ch = curl_init();
        $url = $_GET["url"];
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_HEADER, false);
        //Execute request
        $response = curl_exec($ch);
        //get the default response headers
        $headers = curl_getinfo($ch);
        curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
        header('Content-Type: image/jpeg');
        header("Access-Control-Allow-Origin: *");
        // header("Expires: Sat, 26 Jul 2017 05:00:00 GMT");
        //close connection
        curl_close($ch);
        flush();
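
    For reference, a tidier sketch of the same proxy idea (an illustration, not the poster's final code): buffer the upstream body with CURLOPT_RETURNTRANSFER, send the CORS and content-type headers before any output, then echo the bytes. The upstream content type is copied via curl_getinfo().

        <?php
        // Note: pass-through proxies like this should validate $_GET["url"]
        // against a whitelist, or they become an open proxy.
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $_GET["url"]);
        curl_setopt($ch, CURLOPT_HEADER, false);          // body only
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // return instead of printing

        $body        = curl_exec($ch);
        $contentType = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);  // e.g. "image/jpeg"
        curl_close($ch);

        // Headers must go out before any body bytes.
        header("Access-Control-Allow-Origin: *");
        header("Content-Type: " . ($contentType ? $contentType : "application/octet-stream"));

        echo $body;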

    Read the article

  • ADODB.Connection undefined

    - by Wes Groleau
    Reference: http://stackoverflow.com/questions/1690622/excel-vba-to-sql-server-without-ssis

    After I got the above working, I copied all the global variables/constants from the routine, which included

        Const CS As String = "Driver={SQL Server};" _
            & "Server=**;" _
            & "Database=**;" _
            & "UID=**;" _
            & "PWD=**"
        Dim DB_Conn As ADODB.Connection
        Dim Command As ADODB.Command
        Dim DB_Status As String

    into a similar module in another spreadsheet. I also copied

        Sub Connect_To_Lockbox()
            If DB_Status <> "Open" Then
                Set DB_Conn = New Connection
                DB_Conn.ConnectionString = CS
                DB_Conn.Open    ' problem!
                DB_Status = "Open"
            End If
        End Sub

    I added the same reference (ADO 2.8). The first spreadsheet still works; the second, at DB_Conn.Open, pops up:

        Run-time error '-2147467259 (80004005)':
        [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified

    Removing the references on both, saving the files, re-opening, and re-adding the references doesn't help. The one still works and the other gets the error. ?!?
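
    A hedged way to narrow this down (an assumption, not from the original post): confirm what connection string the second workbook actually sees, and try a late-bound connection with a literal string so neither the Const nor the ADO reference is involved. The ** placeholders are from the original and need real values.

        Sub Test_Connection_String()
            Debug.Print "CS = [" & CS & "]"    ' is the constant visible and non-empty here?

            Dim conn As Object                 ' late binding: no ADO reference required
            Set conn = CreateObject("ADODB.Connection")
            conn.Open "Driver={SQL Server};Server=**;Database=**;UID=**;PWD=**"
            Debug.Print "Opened OK"
            conn.Close
        End Sub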

    Read the article

  • Visual Studio 2010: very slow web applications debugging!

    - by micha12
    I recently installed Visual Studio 2010 (Ultimate edition, final version released in April), and found that debugging a web application became very slow (2-3 times slower than in Visual Studio 2008)! I took the same web application, checked the speed of loading of one of its pages in VS 2008 and VS 2010, and compared the time it takes to load the page. I tested it using 2 approaches: 1) debugging under the ASP.NET Development Server (by pressing the "Start" button) and 2) using the ASP.NET Development Server without debugging (by using the "View in Browser" menu command). I got the following results for Visual Studio 2008 and 2010:

    1) ASP.NET Development Server without debugging ("View in Browser"): the speed of page loading is the same in VS 2008 and 2010.

    2) Debugging under the ASP.NET Development Server ("Start" button): in VS 2010 the page takes more time to load than in VS 2008 - VS 2010 debugging is 2-3 times slower than in VS 2008!

    3) At the same time, when debugging a web application in VS 2008, it takes the same time to load the page as when using only the "View in Browser" command. That is, VS 2008 debugging does not introduce any overhead to page loading in the web browser!

    I wanted to make sure that other people have the same problem with slow debugging of web applications in VS 2010. Can this issue be solved by any means? BTW, I am using Windows XP SP3. Thank you.

    Read the article

  • mysql multiple insert - what happens on error?

    - by aviv
    What happens in a MySQL multi-row insert when one of the rows causes an error? I have a table:

        id | value
         2 |   100

        UNIQUE(id)

    Now I try to execute the query:

        INSERT INTO table(id, value) VALUES (1,10),(2,20),(3,30)

    I will get a duplicate-key error for the (2,20), BUT...

    Will the (1,10) get into the database? Will the (3,30) get into the database?
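
    For reference, a hedged sketch of the statements usually reached for when duplicates should not abort a multi-row insert (as far as MySQL's documented behaviour goes, a plain multi-row INSERT stops at the first duplicate-key error: with InnoDB the whole statement is rolled back, while MyISAM keeps the rows written before the error). The table name table1 is a stand-in for the real one.

        -- Skip rows that would violate the unique key; the rest are inserted.
        INSERT IGNORE INTO table1(id, value) VALUES (1,10),(2,20),(3,30);

        -- Or update the existing row instead of failing on it.
        INSERT INTO table1(id, value) VALUES (1,10),(2,20),(3,30)
            ON DUPLICATE KEY UPDATE value = VALUES(value);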

    Read the article

  • Copying contents of a MySQL table to a table in another (local) database

    - by Philip Eve
    I have two MySQL databases for my site - one is for a production environment and the other, much smaller, is for a testing/development environment. Both have identical schemas (except when I am testing something I intend to change, of course). A small number of the tables are for internationalisation purposes:

        TransLanguage - non-English languages
        TransModule - modules (bundles of phrases for translation, that can be loaded individually by PHP scripts)
        TransPhrase - individual phrases, in English, for potential translation
        TranslatedPhrase - translations of phrases that are submitted by volunteers
        ChosenTranslatedPhrase - screened translations of phrases

    The volunteers who do translation are all working on the production site, as they are regular users. I wanted to create a stored procedure that could be used to synchronise the contents of four of these tables - TransLanguage, TransModule, TransPhrase and ChosenTranslatedPhrase - from the production database to the testing database, so as to keep the test environment up to date and prevent "unknown phrase" errors from getting in the way while testing. My first effort was to create the following procedure in the test database:

        CREATE PROCEDURE `SynchroniseTranslations` ()
        LANGUAGE SQL
        NOT DETERMINISTIC
        MODIFIES SQL DATA
        SQL SECURITY DEFINER
        BEGIN
            DELETE FROM `TransLanguage`;
            DELETE FROM `TransModule`;
            INSERT INTO `TransLanguage` SELECT * FROM `PRODUCTION_DB`.`TransLanguage`;
            INSERT INTO `TransModule` SELECT * FROM `PRODUCTION_DB`.`TransModule`;
            INSERT INTO `TransPhrase` SELECT * FROM `PRODUCTION_DB`.`TransPhrase`;
            INSERT INTO `ChosenTranslatedPhrase` SELECT * FROM `PRODUCTION_DB`.`ChosenTranslatedPhrase`;
        END

    When I try to run this, I get an error message: "SELECT command denied to user 'username'@'localhost' for table 'TransLanguage'". I also tried creating the procedure the other way around (that is, to have it live in the production database's data dictionary rather than the test database's). If I do it that way, I get an identical message except it tells me I'm denied the DELETE command rather than SELECT. I have made sure that my user has INSERT, DELETE, SELECT, UPDATE and CREATE ROUTINE privileges on both databases. However, it seems as though MySQL is reluctant to let this user exercise its privileges on both databases at the same time. How come, and is there a way around this?
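
    A hedged sketch of the usual fixes (assumptions, not from the original post): with SQL SECURITY DEFINER the privileges checked are the definer's, so that account needs explicit rights on both schemas, or the routine can run with the caller's privileges instead. TEST_DB is a placeholder for the real test schema name.

        -- Give the definer/invoker the cross-database rights the routine needs.
        GRANT SELECT ON `PRODUCTION_DB`.* TO 'username'@'localhost';
        GRANT SELECT, INSERT, DELETE ON `TEST_DB`.* TO 'username'@'localhost';
        FLUSH PRIVILEGES;

        -- Alternatively, have the procedure run with the caller's own privileges.
        ALTER PROCEDURE `SynchroniseTranslations` SQL SECURITY INVOKER;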

    Read the article

  • Unable to HTTP PUT with libcurl to django-piston

    - by Jesse Beder
    I'm trying to PUT data using libcurl to mimic the command

        curl -u test:test -X PUT --data-binary @data.yaml "http://127.0.0.1:8000/foo/"

    which works correctly. My options look like:

        curl_easy_setopt(handle, CURLOPT_USERPWD, "test:test");
        curl_easy_setopt(handle, CURLOPT_URL, "http://127.0.0.1:8000/foo/");
        curl_easy_setopt(handle, CURLOPT_VERBOSE, 1);
        curl_easy_setopt(handle, CURLOPT_UPLOAD, 1);
        curl_easy_setopt(handle, CURLOPT_READFUNCTION, read_data);
        curl_easy_setopt(handle, CURLOPT_READDATA, &yaml);
        curl_easy_setopt(handle, CURLOPT_INFILESIZE, yaml.size());
        curl_easy_perform(handle);

    I believe the read_data function works correctly, but if you ask, I'll post that code. I'm using Django with django-piston, and my update function is never called! (It is called when I use the command-line version above.) libcurl's output is:

        * About to connect() to 127.0.0.1 port 8000 (#0)
        *   Trying 127.0.0.1... * connected
        * Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
        * Server auth using Basic with user 'test'
        > PUT /foo/ HTTP/1.1
        Authorization: Basic dGVzdDp0ZXN0
        Host: 127.0.0.1:8000
        Accept: */*
        Content-Length: 244
        Expect: 100-continue

        * Done waiting for 100-continue
        ** this is where my read_data handler confirms: read 244 bytes **
        * HTTP 1.0, assume close after body
        < HTTP/1.0 400 BAD REQUEST
        < Date: Thu, 13 May 2010 08:22:52 GMT
        < Server: WSGIServer/0.1 Python/2.5.1
        < Vary: Authorization
        < Content-Type: text/plain
        < Bad Request
        * Closing connection #0
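
    A hedged guess at the difference, with a sketch: command-line curl sends a Content-Type header with --data-binary, while CURLOPT_UPLOAD sends none, and django-piston tends to reject request bodies it cannot map to a deserializer; the HTTP/1.0 development server also handles Expect: 100-continue poorly. Both can be adjusted via CURLOPT_HTTPHEADER - the media type below is an assumption, so match whatever the working curl command actually sends.

        struct curl_slist *headers = NULL;

        /* Mimic the CLI default, or use the type piston expects (e.g. text/yaml). */
        headers = curl_slist_append(headers, "Content-Type: application/x-www-form-urlencoded");

        /* Disable the Expect: 100-continue handshake. */
        headers = curl_slist_append(headers, "Expect:");

        curl_easy_setopt(handle, CURLOPT_HTTPHEADER, headers);
        curl_easy_perform(handle);
        curl_slist_free_all(headers);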

    Read the article

  • function not working in production mode

    - by maps
    I am using the rvideo gem to transcode files to .flv format.

        class Video < ActiveRecord::Base
          include AASM

          aasm_column :status
          aasm_initial_state :initial

          aasm_state :initial
          aasm_state :converting, :exit => :transcode
          aasm_state :transfering, :exit => :send_s3
          aasm_state :completed
          aasm_state :failed

          aasm_event :convert do
            transitions :from => [:initial], :to => :converting
          end

          aasm_event :transfer do
            transitions :from => [:converting], :to => :transfering
          end

          aasm_event :complete do
            transitions :from => [:transfering], :to => :completed
          end

          aasm_event :error do
            transitions :from => [:initial, :converting, :transfering, :completed]
          end

          has_attached_file :asset, :path => "uploads/:attachment/:id.:basename.:extension"

          def flash_path
            return self.asset.path + '.flv'
          end

          def flash_name
            return File::basename(self.asset.path) # + '.flv'
          end

          def flash_url
            return "#{AWS_HOST}/#{AWS_BUCKET}/#{self.flash_name}"
          end

          # transcode file
          def transcode
            begin
              RVideo::Transcoder.logger = logger
              file = RVideo::Inspector.new(:file => self.asset.path)
              command = "ffmpeg -i $input_file$ -y -s $resolution$ -ar 44100 -b 64k -r 15 -sameq $output_file$"
              options = {
                :input_file => "#{RAILS_ROOT}/#{self.asset.path}",
                :output_file => "#{RAILS_ROOT}/#{self.flash_path}",
                :resolution => "320x200"
              }
              transcoder = RVideo::Transcoder.new
              transcoder.execute(command, options)
            rescue RVideo::TranscoderError => e
              logger.error "Encountered error transcoding #{self.asset.path}"
              logger.error e.message
            end
          end
        end

    The input file is added to the asset directory, but I never get an output file. On the view page AASM hangs on "converting".

    Read the article

  • Run a external program with specified max running time

    - by jack
    I want to execute an external program in each thread of a multi-threaded Python program. Let's say the max running time is set to 1 second. If the started process completes within 1 second, the main program captures its output for further processing. If it doesn't finish in 1 second, the main program just terminates it and starts another new process. How do I implement this?
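
    A minimal sketch of one way to do it with the standard library (assuming Python 2.6+ where Popen.terminate() exists): poll the child until it either exits or the deadline passes, then kill it. The command is a placeholder; each worker thread could call the helper.

        import subprocess
        import time

        def run_with_timeout(cmd, timeout=1.0, poll_interval=0.05):
            """Run cmd; return its stdout if it finishes in time, else None."""
            proc = subprocess.Popen(cmd,
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            deadline = time.time() + timeout
            while time.time() < deadline:
                if proc.poll() is not None:        # finished in time
                    out, _ = proc.communicate()
                    return out
                time.sleep(poll_interval)

            proc.terminate()                       # ran too long: kill it
            proc.wait()
            return None

        output = run_with_timeout(["sleep", "2"], timeout=1.0)
        print("timed out" if output is None else output)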

    Read the article

  • Looking for an all-in-one DRM/installer/CD creation kit

    - by user30997
    The company I work for has a download manager in place that handles distribution, DRM, and installation of our products when a user gets them off our website. However, we're using a clunky system for packaging and protecting our products when we do press releases or make retail CDs. Part of the antiquation problem is the fact that the automated system that works with the installer- and DRM-creation software we have is a disaster that needs to be put out of my misery.

    The list of products we currently produce, which a new system MUST be capable of producing:

        Retail CDs, with a certain level of obfuscation to make copying difficult.
        Downloadable installers that time out after a few hours of use of the product. After the time has expired, removing and reinstalling the product will leave you still blocked from use.
        Installers that will fail to work after a certain date.

    I'd love to be able to just feed a tool the directory where a complete product resides and have the installer generated with a couple of command-line operations. (The command-line issue is non-negotiable; this will be called by an automated tool.) A single-solution package would be far preferable. Software with royalty-based or per-unit licensing is not an option.

    Read the article

  • Calculate values from retrieved database rows in Visual C#

    - by Tanya Lertwichaiworawit
    I am new to Visual C# and would like to know how to calculate values from data retrieved from a database. Using the above GUI, when "Calculate" is clicked, the program will display the number of students in textBox1, and the average GPA of all students in textBox2. Here is my database table "Students". I was able to display the number of students, but I'm still confused about how to calculate the average GPA. Here's my code:

        private void button1_Click(object sender, EventArgs e)
        {
            string connection = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Database1.accdb";
            OleDbConnection connect = new OleDbConnection(connection);
            string sql = "SELECT * FROM Students";
            connect.Open();
            OleDbCommand command = new OleDbCommand(sql, connect);
            DataSet data = new DataSet();
            OleDbDataAdapter adapter = new OleDbDataAdapter(command);
            adapter.Fill(data, "Students");
            textBox1.Text = data.Tables["Students"].Rows.Count.ToString();
            double gpa;
            for (int i = 0; i < data.Tables["Students"].Rows.Count; i++)
            {
                gpa = Convert.ToDouble(data.Tables["Students"].Rows[i][2]);
            }
            connect.Close();
        }
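
    A hedged sketch of the missing step, reusing the DataSet already filled above and assuming (as the original loop does) that column index 2 holds the GPA:

        // Sum the GPA column, then divide by the number of rows.
        DataTable students = data.Tables["Students"];
        double gpaSum = 0;
        for (int i = 0; i < students.Rows.Count; i++)
        {
            gpaSum += Convert.ToDouble(students.Rows[i][2]);
        }
        double averageGpa = students.Rows.Count > 0 ? gpaSum / students.Rows.Count : 0;
        textBox2.Text = averageGpa.ToString("0.00");

    Alternatively, the database can do the work with a query such as SELECT COUNT(*), AVG(GPA) FROM Students, assuming the column is really named GPA.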

    Read the article

  • JSF inner datatable not respecting rendered condition of outer table.

    - by Marc
    <h:dataTable cellpadding="0" cellspacing="0" styleClass="list_table"
                 id="OuterItems" value="#{valueList.values}" var="item" border="0">
        <h:column rendered="#{item.typeA}">
            <h:dataTable cellpadding="0" cellspacing="0" styleClass="list_table"
                         id="InnerItems" value="#{item.options}" var="option" border="0">
                <h:column>
                    <h:outputText value="Option: #{option.displayValue}"/>
                </h:column>
            </h:dataTable>
        </h:column>
        <h:column rendered="#{item.typeB}">
            <h:dataTable cellpadding="0" cellspacing="0" styleClass="list_table"
                         id="InnerItems" value="#{item.demands}" var="demand" border="0">
                <h:column>
                    <h:outputText value="Demand: #{demand.displayValue}"/>
                </h:column>
            </h:dataTable>
        </h:column>
    </h:dataTable>

        public class Item {
            ...
            public boolean isTypeA() {
                return this instanceof TypeA;
            }
            public boolean isTypeB() {
                return this instanceof TypeB;
            }
            ...
        }

        public class TypeA extends Item {
            ...
            public List getOptions() {
                ...
            }
            ...
        }

        public class TypeB extends Item {
            ...
            public List getDemands() {
                ...
            }
            ...
        }

    I'm having an issue with JSF. I've abstracted the problem out here, and I'm hoping someone can help me understand how what I'm doing fails. I'm looping over a list of Items. These Items are actually instances of the subclasses TypeA and TypeB. For TypeA, I want to display the options; for TypeB, I want to display the demands. When rendering the page for the first time, this works fine. However, when I post back to the page for some action, I get an error:

        [3/26/10 12:52:32:781 EST] 0000008c SystemErr R javax.faces.FacesException: Error getting property 'options' from bean of type TypeB
            at com.sun.faces.lifecycle.ApplyRequestValuesPhase.execute(ApplyRequestValuesPhase.java:89)
            at com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java(Compiled Code))
            at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:91)
            at com.ibm.faces.portlet.FacesPortlet.processAction(FacesPortlet.java:193)

    My grasp of the JSF lifecycle is very rough. At this point, I understand there is an error in the Apply Request Values phase, which is very early, and so the previous state is restored and nothing changes. What I don't understand is that in order to fulfill the condition for rendering "item.typeA" that object has to be an instance of TypeA. But here, it looks like that object passed the condition even though it was an instance of TypeB. It is like it is evaluating the inner dataTable (InnerItems) before evaluating the outer (OuterItems). My working assumption is that I just don't understand how/when the rendered attribute is actually evaluated.

    Read the article

  • capturing CMD batch file parameter list; write to file for later processing

    - by BobB
    I have written a batch file that is launched as a post-processing utility by a program. The batch file reads ~24 parameters supplied by the calling program, stores them into variables, and then writes them to various text files. Since the max input variable in CMD is %9, it's necessary to use the 'shift' command to repeatedly read and store these individually into named variables. Because the program outputs several similar batch files, the result is opening several CMD windows sequentially, assigning variables and writing data files. This ties up the calling program for too long.

    It occurs to me that I could free up the calling program much faster if there's a way to write a very simple batch file that writes all the command parameters to a text file, where I can process them later. Basically, just grab the parameter list, write it, and done.

    Q: Is there some way to treat an entire series of parameter data as one big text string, write it to one big variable, and then echo the whole thing to one text file? Then later read the string into %n variables when there's no program waiting to resume?

    The parameter list is something like 25-30 words, less than 200 characters. Sample parameter list:

        "First Name" "Lastname" "123 Steet Name Way" "Cityname" ST 12345 1004968 06/01/2010 "Firstname+Lastname" 101738 "On Account" 20.67 xy-1z 1 8.95 3.00 1.39 0 0 239 8.95

    Items in quotes are processed as string variables. The list is space delimited. Any suggestions?
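
    A minimal sketch of the "just dump everything" idea, assuming plain cmd.exe: %* expands to the entire parameter list in one go, so the capturing script can be a single line and the slow shift loop can run later against the saved file. process.bat is a hypothetical second script.

        @echo off
        rem capture.bat - append the full parameter list as one line for later processing
        >> "%~dp0params.txt" echo %*

        rem Later, re-read each saved line and hand it to the real parser, e.g.:
        rem for /f "usebackq delims=" %%L in ("%~dp0params.txt") do call process.bat %%L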

    Read the article

  • Can Boost program_options separate comma-separated argument values?

    - by lrm
    If my command line is:

        > prog --mylist=a,b,c

    can Boost's program_options be set up to see three distinct argument values for the mylist argument? I have configured program_options as:

        namespace po = boost::program_options;

        po::options_description opts("blah");
        opts.add_options()
            ("mylist", po::value<std::vector<std::string> >()->multitoken(), "description");

        po::variables_map vm;
        po::store(po::parse_command_line(argc, argv, opts), vm);
        po::notify(vm);

    When I check the value of the mylist argument, I see one value: a,b,c. I'd like to see three distinct values, split on comma. This works fine if I specify the command line as:

        > prog --mylist=a b c

    or

        > prog --mylist=a --mylist=b --mylist=c

    Is there a way to configure program_options so that it sees a,b,c as three values that should each be inserted into the vector, rather than one? I am using Boost 1.41, g++ 4.5.0 20100520, and have enabled C++0x experimental extensions.
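
    A hedged sketch of the usual workaround, assuming there is no built-in comma-splitting option: keep the multitoken vector<string> and split each collected token on commas after notify(), so --mylist=a,b,c and --mylist=a b c end up the same.

        #include <boost/algorithm/string.hpp>
        #include <string>
        #include <vector>

        // Post-process the values program_options collected for --mylist.
        std::vector<std::string> split_list(const std::vector<std::string>& raw) {
            std::vector<std::string> result;
            for (std::size_t i = 0; i < raw.size(); ++i) {
                std::vector<std::string> parts;
                boost::split(parts, raw[i], boost::is_any_of(","));  // "a,b,c" -> a b c
                result.insert(result.end(), parts.begin(), parts.end());
            }
            return result;
        }

        // Usage after po::notify(vm):
        //   std::vector<std::string> mylist =
        //       split_list(vm["mylist"].as<std::vector<std::string> >());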

    Read the article

  • Init Startup of Tomcat

    - by azoor
    Hi. I put an init script into /etc/init.d to start up Apache Tomcat when the machine boots. I am using Ubuntu 9.04. When I reboot, I think Tomcat tries to start and aborts at some point. My guess is that the time given to start Tomcat is not enough. Is there any way to increase the time allowed for a script to execute at startup?
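
    A hedged sketch of a minimal /etc/init.d wrapper, on the assumption that the usual culprit on Ubuntu is the bare init environment (no JAVA_HOME, wrong user) rather than a time limit - init does not normally impose one. All paths and the tomcat user are placeholders.

        #!/bin/sh
        # /etc/init.d/tomcat -- minimal wrapper (placeholder paths)
        JAVA_HOME=/usr/lib/jvm/java-6-sun        # init scripts do not inherit your shell env
        CATALINA_HOME=/opt/apache-tomcat
        export JAVA_HOME CATALINA_HOME

        case "$1" in
          start) su tomcat -c "$CATALINA_HOME/bin/startup.sh" ;;
          stop)  su tomcat -c "$CATALINA_HOME/bin/shutdown.sh" ;;
          *)     echo "Usage: $0 {start|stop}"; exit 1 ;;
        esac

        # Register it to run at boot:
        #   sudo update-rc.d tomcat defaults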

    Read the article
