Search Results

Search found 1291 results on 52 pages for 'redguy pl'.


  • Factorial in Prolog and C++

    - by Joshua Green
    I would like to work out a number's factorial. My factorial rule is in a Prolog file and I am connecting it to C++ code. Can someone tell me what is wrong with my C++ interface, please?

        % factorial.pl
        factorial( 1, 1 ) :- !.
        factorial( X, Fac ) :-
            X > 1,
            Y is X - 1,
            factorial( Y, New_Fac ),
            Fac is X * New_Fac.

        // factorial.cpp
        // # headerfiles
        term_t t1;
        term_t t2;
        term_t goal_term;
        functor_t goal_functor;

        int main( int argc, char** argv )
        {
            argc = 4;
            argv[0] = "libpl.dll";
            argv[1] = "-G32m";
            argv[2] = "-L32m";
            argv[3] = "-T32m";
            PL_initialise(argc, argv);
            if ( !PL_initialise(argc, argv) )
                PL_halt(1);
            PlCall( "consult(swi('plwin.rc'))" );
            PlCall( "consult('factorial.pl')" );
            cout << "Enter your factorial number: ";
            long n;
            cin >> n;
            PL_put_integer( t1, n );
            t1 = PL_new_term_ref();
            t2 = PL_new_term_ref();
            goal_term = PL_new_term_ref();
            goal_functor = PL_new_functor( PL_new_atom("factorial"), 2 );
            PL_put_atom( t1, t2 );
            PL_cons_functor( goal_term, goal_functor, t1, t2 );
            PL_halt( PL_toplevel() ? 0 : 1 );
        }
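
    A sketch of how the embedding calls are usually ordered (assuming SWI-Prolog's C API; untested here): term refs must be created before values are put into them, PL_initialise should be called exactly once, and the goal is simplest to run through the query API rather than by hand-building a compound term:

        // Hedged sketch, not a verified fix: consulting factorial.pl is
        // omitted; reorder the calls so refs exist before PL_put_integer.
        #include <SWI-Prolog.h>
        #include <iostream>

        int main(int argc, char** argv)
        {
            if ( !PL_initialise(argc, argv) )        // call exactly once
                PL_halt(1);

            long n;
            std::cout << "Enter your factorial number: ";
            std::cin >> n;

            predicate_t pred = PL_predicate("factorial", 2, "user");
            term_t args = PL_new_term_refs(2);       // create refs first...
            PL_put_integer(args, n);                 // ...then fill them

            long result = 0;
            if ( PL_call_predicate(NULL, PL_Q_NORMAL, pred, args) )
                PL_get_long(args + 1, &result);
            std::cout << "factorial = " << result << std::endl;

            PL_halt(0);
        }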

  • String Concatenation Issue

    - by Nano HE
    Hi, could you please have a look at my code below?

        #!C:\Perl\bin\perl.exe
        use strict;
        use warnings;
        use Data::Dumper;

        my $fh = \*DATA;
        my $str1 = "listBox1.Items.Add(\"";
        my $str2 = "\")\;";

        while (my $line = <$fh>) {
            $line =~ s/^\s+//g;
            print $str1.$line.$str2;
            chomp($line);
        }

        __DATA__
        Hello
        World

    Output:

        D:\learning\perl>test.pl
        listBox1.Items.Add("Hello
        ");listBox1.Items.Add("World
        ");
        D:\learning\perl>

    The formatting is wrong; I want the output below. Is there anything wrong with my code? Thanks.

        D:\learning\perl>test.pl
        listBox1.Items.Add("Hello");
        listBox1.Items.Add("World");
        D:\learning\perl>
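
    A minimal sketch of the usual fix: chomp the line before printing (so the input newline never lands inside the quotes) and emit the newline explicitly:

        # Sketch: strip the trailing newline first, then print one
        # complete statement per line.
        while (my $line = <$fh>) {
            chomp $line;                  # remove trailing newline first
            $line =~ s/^\s+|\s+$//g;      # trim surrounding whitespace
            print $str1 . $line . $str2 . "\n";
        }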

  • How to send an image to a remote server using web services in Android (only saved to a byte array)

    - by satyamurthy
    I get an image from the SD card and want to store it on a remote server. I read the image from the SD card and convert it to a byte array via a bitmap, but when I observe the byte array it shows values that do not match the .NET byte-array conversion of the same image. Can you please help if you have a solution? It is very urgent for me. The following is the code I am using; please advise:

        FileInputStream fin = new FileInputStream(new File("/sdcard/pictures/1.jpg"));
        BufferedInputStream bis = new BufferedInputStream(fin, 3000);
        byte[] data = new byte[bis.available()];
        bis.read(data, 0, data.length);
        byte[] data1 = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            System.out.print(data[i]);
            data1[i] = data[i];
        }
        System.out.println("5..................." + data1);
        Bitmap bitmap = BitmapFactory.decodeByteArray(data1, 0, data1.length);
        System.out.println("6..................." + data1.length);
        Log.v("hgfjohfjghjdfhgj", "" + bitmap);
        if (bitmap != null)
            image.setImageBitmap(bitmap);
        else
            Log.e("Bitmap ", " Not Created");
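
    One likely culprit, sketched below as an assumption rather than a confirmed fix: InputStream.available() only reports the bytes readable without blocking, so the array may be sized or filled short; reading in a loop is more reliable, and if the .NET side compares a Base64 string, encode before sending (android.util.Base64 needs API level 8+):

        // Sketch: read the whole file regardless of what available() reports.
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = bis.read(buf)) != -1) {
            baos.write(buf, 0, n);
        }
        byte[] data = baos.toByteArray();
        // Optional, if the service expects text rather than raw bytes:
        String payload = android.util.Base64.encodeToString(data, android.util.Base64.DEFAULT);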

  • Maven3 Issues with building a multi-module enterprise project

    - by Sujit K
    I just migrated from Maven 2 to Maven 3, and I am able to build each module individually, or all the modules in one shot, by calling mvn clean install. However, since we have a multi-module enterprise project, we build multiple EARs, and each EAR is built as its own module with its own child POM. To build an individual EAR together with its dependents, the command below works fine in Maven 2 but not in Maven 3 (I'll explain the Maven 3 issue in a moment):

        mvn -pl ear_module -rf first_dependent_module -am clean install

    In Maven 2, when the reactor lists the build order, I see:

        first_dependent_module
        second_dependent_module
        ear_module

    So at the end of the day my EAR module is also part of the reactor, which is how it should be. The reason we use -rf is that we don't want to delete the target folder at the main ${project.basedir} (so as not to delete the output created there by building the other EAR modules). With Maven 3, however, this is all I see when the reactor lists the build order:

        first_dependent_module
        second_dependent_module

    Maven 3 totally ignores the argument (ear_module) passed to the -pl flag, which should also be built after its dependents. Not sure what I'm missing here. Any help/tips would be greatly appreciated.

    P.S.: The build I'm making is similar to the one below: Build specific module in multi-module project

    Thanks, SK
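
    A workaround sketch, assuming the goal is just "build ear_module plus everything it needs, without cleaning the whole aggregator": let -am ("also make") compute the dependent modules instead of combining -pl with -rf:

        # Sketch (untested): -am pulls first_dependent_module and
        # second_dependent_module into the reactor automatically, so -rf
        # is not needed and nothing outside the selection is touched.
        mvn -pl ear_module -am install

        # If cleaning is required, clean scoped to the selected projects only:
        mvn -pl ear_module -am clean install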

  • PostgreSQL: return select count(*) from old_ids;

    - by Alexander Farber
    Hello, please help me with one more PL/pgSQL question. I have a PHP script, run as a daily cron job, that deletes old records from one main table and a few further tables referencing its "id" column:

        create or replace function quincytrack_clean()
        returns integer as $BODY$
        begin
            create temp table old_ids (id varchar(20)) on commit drop;

            insert into old_ids
                select id from quincytrack
                where age(QDATETIME) > interval '30 days';

            delete from hide_id where id in (select id from old_ids);
            delete from related_mks where id in (select id from old_ids);
            delete from related_cl where id in (select id from old_ids);
            delete from related_comment where id in (select id from old_ids);
            delete from quincytrack where id in (select id from old_ids);

            return select count(*) from old_ids;
        end;
        $BODY$ language plpgsql;

    And here is how I call it from the PHP script:

        $sth = $pg->prepare('select quincytrack_clean()');
        $sth->execute();
        if ($row = $sth->fetch(PDO::FETCH_ASSOC))
            printf("removed %u old rows\n", $row['count']);

    Why do I get the following error?

        SQLSTATE[42601]: Syntax error: 7 ERROR: syntax error at or near "select"
        at character 9
        QUERY: SELECT select count(*) from old_ids
        CONTEXT: SQL statement in PL/PgSQL function "quincytrack_clean" near line 23

    Thank you! Alex
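
    For reference, a sketch of the usual repair: in PL/pgSQL, RETURN takes an expression, not a SQL statement, so either select the count into a variable first or wrap the query as a scalar subquery:

        -- Sketch: capture the count in a variable...
        declare
            n integer;
        begin
            -- ... deletes as above ...
            select count(*) into n from old_ids;
            return n;
        end;

        -- ...or, equivalently, return a scalar subquery:
        return (select count(*) from old_ids);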

  • Cost of logic in a query

    - by FrustratedWithFormsDesigner
    I have a query that looks something like this:

        select xmlelement("rootNode",
            (case
               when XH.ID is not null
               then xmlelement("xhID", XH.ID)
               else xmlelement("xhID", xmlattributes('true' AS "xsi:nil"), XH.ID)
             end),
            (case
               when XH.SER_NUM is not null
               then xmlelement("serialNumber", XH.SER_NUM)
               else xmlelement("serialNumber", xmlattributes('true' AS "xsi:nil"), XH.SER_NUM)
             end),
            /* repeat this pattern for many more columns from the same table... */
        FROM XH
        WHERE XH.ID = 'SOMETHINGOROTHER'

    It's ugly and I don't like it, and it is also the slowest-executing query (there are others of similar form, but much smaller, and they aren't causing any major problems - yet). Maintenance is relatively easy, as this is mostly a generated query, but my concern now is performance. I am wondering how much overhead there is in all of these case expressions.

    To see if there was any difference, I wrote another version of this query as:

        select xmlelement("rootNode",
            xmlforest(XH.ID, XH.SER_NUM, ...

    (I know that this query does not produce exactly the same thing; my plan was to move the logic to PL/SQL or XSL.) I tried to get execution plans for both versions, but they are the same. I'm guessing that the logic does not get factored into the execution plan. My gut tells me the second version should execute faster, but I'd like some way to prove that (other than writing a PL/SQL test function with timing statements before and after the query and running that code over and over again to get a test sample). Is it possible to get a good idea of how much the case-when will cost? Also, I could write the case-when using the decode function instead. Would that perform better than case statements?
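
    One measurement approach that execution plans won't show, sketched under the assumption that you may trace your own session: enable extended SQL trace for each variant and compare the CPU/elapsed figures with tkprof, which avoids the hand-written timing loop:

        -- Sketch: trace one variant, then repeat with a different identifier.
        alter session set tracefile_identifier = 'case_variant';
        alter session set events '10046 trace name context forever, level 8';
        -- run the CASE-based query here
        alter session set events '10046 trace name context off';
        -- then compare the tkprof output of each trace file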

  • DBD::CSV: Problem with user-defined functions

    - by sid_com
    From the SQL::Statement::Functions documentation:

        Creating User-Defined Functions
        ...
        More complex functions can make use of a number of arguments always
        passed to functions automatically. Functions always receive these
        values in @_:

            sub FOO { my( $self, $sth, $rowhash, @params ); }

    My script:

        #!/usr/bin/env perl
        use 5.012;
        use warnings;
        use strict;
        use DBI;

        my $dbh = DBI->connect( "DBI:CSV:", undef, undef, { RaiseError => 1 } );
        my $table = 'wages';
        my $array_ref = [
            [ 'id', 'number' ],
            [ 0, 6900 ],
            [ 1, 3200 ],
            [ 2, 1800 ],
        ];
        $dbh->do( "CREATE TEMP TABLE $table AS import( ? )", {}, $array_ref );

        sub routine {
            my $self    = shift;
            my $sth     = shift;
            my $rowhash = shift; #
            return $_[0] / 30;
        };
        $dbh->do( "CREATE FUNCTION routine" );

        my $sth = $dbh->prepare( "SELECT id, routine( number ) AS result FROM $table" );
        $sth->execute();
        $sth->dump_results();

    When I try this I get an error message:

        DBD::CSV::st execute failed: Use of uninitialized value $_[0] in
        division (/) at ./so.pl line 27. [for Statement "SELECT id,
        routine( number ) AS result FROM "wages""] at ./so.pl line 34.

    When I comment out the third argument, it works as expected (it looks as if the third argument is missing):

        sub routine {
            my $self    = shift;
            my $sth     = shift;
            #my $rowhash = shift;
            return $_[0] / 30;
        };

        0, 230
        1, 106.667
        2, 60
        3 rows

    Is this a bug?
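
    A defensive sketch, assuming the driver really passes ($self, $sth, @params) without the documented $rowhash: take the column value from the end of @_, which works under either calling convention:

        # Sketch: robust whether or not $rowhash is actually passed.
        sub routine {
            my $self  = shift;
            my $sth   = shift;
            my $value = pop;      # the column value is the last argument
            return $value / 30;
        }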

  • Efficient way to remove empty lists from lists without evaluating held expressions?

    - by Alexey Popkov
    In a previous thread an efficient way to remove empty lists ({}) from lists was suggested:

        Replace[expr, x_List :> DeleteCases[x, {}], {0, Infinity}]

    Using the Trott-Strzebonski in-place evaluation technique, this method can be generalized to work also with held expressions:

        f1[expr_] :=
         Replace[expr,
          x_List :> With[{eval = DeleteCases[x, {}]}, eval /; True],
          {0, Infinity}]

    This solution is more efficient than the one based on ReplaceRepeated:

        f2[expr_] := expr //. {left___, {}, right___} :> {left, right}

    But it has one disadvantage: it evaluates held expressions if they are wrapped by List:

        In[20]:= f1[Hold[{{}, 1 + 1}]]
        Out[20]= Hold[{2}]

    So my question is: what is the most efficient way to remove all empty lists ({}) from lists without evaluating held expressions? The empty List[] object should be removed only if it is an element of another List itself.

    Here are some timings:

        In[76]:= expr = Tuples[Tuples[{{}, {}}, 3], 4];
        First@Timing[#[expr]] & /@ {f1, f2, f3}
        pl = Plot3D[Sin[x y], {x, 0, Pi}, {y, 0, Pi}];
        First@Timing[#[pl]] & /@ {f1, f2, f3}

        Out[77]= {0.581, 0.901, 5.027}
        Out[78]= {0.12, 0.21, 0.18}

    Definitions:

        Clear[f1, f2, f3];
        f3[expr_] :=
          FixedPoint[
           Function[e,
            Replace[e, {a___, {}, b___} :> {a, b}, {0, Infinity}]], expr];
        f1[expr_] :=
          Replace[expr,
           x_List :> With[{eval = DeleteCases[x, {}]}, eval /; True],
           {0, Infinity}];
        f2[expr_] := expr //. {left___, {}, right___} :> {left, right};

  • How can I change the default startup directory for cmd.exe?

    - by Nano HE
    Hi. My procedure yesterday was as follows:

    1. Click Start, Run and type Regedit.exe.
    2. Navigate to the following branch: HKEY_CURRENT_USER\Software\Microsoft\Command Processor.
    3. In the right pane, double-click Autorun and set the startup folder path as its data, preceded by "CD /d". If the Autorun value is missing, you need to create one, of type REG_EXPAND_SZ or REG_SZ, in the above location.

    Example: to set the startup directory to D:\learning\perl, set the Autorun value data to:

        CD /d D:\learning\perl

    Then I clicked Start, Run and typed cmd. It worked, and I could practice Perl more conveniently. But today I found that when I try to build my Visual Studio 2005 solution, which includes some pre-build event commands like:

        perl.exe MyAppVersion.pl
        perl.exe AttrScan.pl

    it doesn't work. It shows an error: can't find the path. I checked the environment variable settings, and the path variable still contains c:\perl\bin\. Finally, I removed the "Autorun" value from the registry and tested again; the issue was fixed. I only changed the default startup directory for cmd.exe. Why was the pre-build perl command impacted? (I am using Windows XP and ActivePerl 5.8.)
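
    A plausible explanation, with a guard sketched below (the IN_BUILD flag is an invented convention, not a Windows feature): Visual Studio runs pre-build events through cmd.exe, so the Autorun CD fires there too and changes the working directory before the relative .pl paths resolve. Pointing Autorun at a small script that can skip the CD avoids this:

        :: autorun.cmd - sketch; set Autorun to run this file instead of CD directly
        @echo off
        rem Skip the directory change when a build sets IN_BUILD (our own convention)
        if defined IN_BUILD goto :eof
        cd /d D:\learning\perl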

  • Telling me my stored procedure isn't declared

    - by Scott
    Here is where the error occurs in the stack:

        public static IKSList<DataParameter> Search(int categoryID, int departmentID, string title)
        {
            Database db = new Database(DatabaseConfig.CommonConnString,
                DatabaseConfig.CommonSchemaOwner, "pkg_data_params_new", "spdata_params_search");
            db.AddParameter("category_id", categoryID);
            db.AddParameter("department_id", departmentID);
            db.AddParameter("title", title, title.Length);

            DataView temp = db.Execute_DataView();

            IKSList<DataParameter> dps = new IKSList<DataParameter>();
            foreach (DataRow dr in temp.Table.Rows)
            {
                DataParameter dp = new DataParameter();
                dp.Load(dr);
                dps.Add(dp);
            }
            return dps;
        }

    And here is the error text:

        ORA-06550: line 1, column 38:
        PLS-00302: component 'SPDATA_PARAMS_SEARCH' must be declared
        ORA-06550: line 1, column 7:
        PL/SQL: Statement ignored

        Description: An unhandled exception occurred during the execution of the
        current web request. Please review the stack trace for more information
        about the error and where it originated in the code.

        Exception Details: System.Data.OracleClient.OracleException:
        ORA-06550: line 1, column 38:
        PLS-00302: component 'SPDATA_PARAMS_SEARCH' must be declared
        ORA-06550: line 1, column 7:
        PL/SQL: Statement ignored

        Source Error:
        Line 161: db.AddParameter("title", title, title.Length);
        Line 162:
        Line 163: DataView temp = db.Execute_DataView();
        Line 164:
        Line 165: IKSList dps = new IKSList();

    My web.config is pointing to the correct place and everything, so I don't know where this error is coming from.
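
    A quick diagnostic sketch (plain SQL, assuming access to the data dictionary): PLS-00302 usually means the package exists but the named procedure isn't declared in its spec, or the call resolves to the wrong schema; the dictionary views can confirm which:

        -- Does the package spec actually declare the procedure?
        select owner, object_name, procedure_name
          from all_procedures
         where object_name = 'PKG_DATA_PARAMS_NEW';

        -- Is the package valid, and in the schema the code expects?
        select owner, object_type, status
          from all_objects
         where object_name = 'PKG_DATA_PARAMS_NEW';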

  • How might system security be improved using database procedures?

    - by Centurion
    The usage of Oracle PL/SQL procedures for controlling access to data is often emphasized in PL/SQL books and other sources as the more secure approach. I've seen several systems where all business logic related to data is performed through packages, procedures and functions, so the application code becomes quite "dumb" and is only responsible for the visualization part. I've even heard some devs call such approaches, and the architects driving them, "database nazi" :) because all logic code resides in the database.

    I do know about the performance benefits of DB procedures, but now I'm interested in the "better security" claim when using a thick-client model. I assume such designs are mostly used with Oracle (and maybe MS SQL Server) databases. I agree this approach improves security, but only if there are not many users and every system user has a database account, so we can control and monitor data access through standard database user security.

    However, how would such an approach increase security for an average web system where thick clients are used: for example, one database user with DML grants on all tables, and other users handled via "users" and "user_rights" tables? We could use DB procedures, save usernames into an application context and use that for filtering, but the vulnerability remains at the root - if the main database account is compromised, then nothing will help. Of course, in a real system we might consider at least several main users (for example frontend_db_user, backend_db_user).
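
    For concreteness, a minimal sketch of the context-based filtering mentioned above (the names app_ctx and app_security_pkg are illustrative; this is the pattern behind Oracle's VPD/row-level security):

        -- Illustrative names throughout.
        create or replace context app_ctx using app_security_pkg;

        create or replace package app_security_pkg as
            procedure set_user(p_username in varchar2);
        end;
        /
        create or replace package body app_security_pkg as
            procedure set_user(p_username in varchar2) is
            begin
                dbms_session.set_context('app_ctx', 'username', p_username);
            end;
        end;
        /
        -- Procedures can then filter rows by the logged-in application user:
        --   ... where owner_name = sys_context('app_ctx', 'username')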

  • PHP cache header override

    - by Soyo
    I've been through over 100 answers here, lots to try, NOTHING working. I have a PHP-based site. I need caching OFF for all .php files EXCEPT A SELECT FEW. So, in .htaccess, I have the following:

        ExpiresActive On
        # Eliminate caching for certain dynamic files
        <FilesMatch "\.(php|cgi|pl)$">
            ExpiresDefault A0
            Header set Cache-Control "no-cache, no-store, must-revalidate, max-age=0, proxy-revalidate, no-transform"
            Header set Pragma "no-cache"
        </FilesMatch>

    Using Firebug, I see the following:

        Cache-Control      no-cache, no-store, must-revalidate, max-age=0, proxy-revalidate, no-transform
        Connection         Keep-Alive
        Content-Type       text/html
        Date               Sun, 02 Sep 2012 19:22:27 GMT
        Expires            Sun, 02 Sep 2012 19:22:27 GMT
        Keep-Alive         timeout=3, max=100
        Pragma             no-cache
        Server             Apache
        Transfer-Encoding  chunked
        X-Powered-By       PHP/5.2.17

    Hey, looks great! BUT, I have a couple of .php pages I need some very short caching on. I thought the simple answer was adding this to the very top of each PHP page on which I want caching enabled:

        <?php header("Cache-Control: max-age=360"); ?>

    Nope. Then I tried various versions of the above. Nope. Then I tried meta http-equiv variations. Nope. Then I tried variations of the .htaccess code along with the above variations, such as limiting it to:

        # Eliminate caching for certain dynamic files
        <FilesMatch "\.(php|cgi|pl)$">
            Header set Cache-Control "no-cache, max-age=0"
        </FilesMatch>

    Nope. It seems nothing I do will allow a single .php file to be cache-enabled with the .htaccess code in place, short of removing the statements from .htaccess altogether. Where am I going wrong? What do I have to do to get individual PHP pages to be cacheable while the rest remain off? Thank you for any thoughts.
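
    A sketch of one way out, assuming Apache with mod_headers: "Header set" runs after PHP and overwrites what header() sent, so carve the cacheable scripts out with a second, later FilesMatch (later directives win for the same header; the filenames are illustrative):

        <FilesMatch "\.(php|cgi|pl)$">
            ExpiresDefault A0
            Header set Cache-Control "no-cache, no-store, must-revalidate, max-age=0"
            Header set Pragma "no-cache"
        </FilesMatch>

        # Exception list: pages that may cache for up to 6 minutes
        <FilesMatch "^(gallery|feed)\.php$">
            ExpiresDefault A360
            Header set Cache-Control "max-age=360"
            Header unset Pragma
        </FilesMatch>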

  • Query to bring a count from comma-separated values

    - by Mugil
    I have two tables, one for storing products and the other for storing the orders list.

        CREATE TABLE ProductsList(ProductId INT NOT NULL PRIMARY KEY,
                                  ProductName VARCHAR(50));

        INSERT INTO ProductsList(ProductId, ProductName)
        VALUES (1,'Product A'), (2,'Product B'), (3,'Product C'), (4,'Product D'),
               (5,'Product E'), (6,'Product F'), (7,'Product G'), (8,'Product H'),
               (9,'Product I'), (10,'Product J');

        CREATE TABLE OrderList(OrderId INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
                               EmailId VARCHAR(50),
                               CSVProductIds VARCHAR(50));

        SELECT * FROM OrderList;

        INSERT INTO OrderList(EmailId, CSVProductIds)
        VALUES ('[email protected]', '2,4,1,5,7'),
               ('[email protected]', '5,7,4'),
               ('[email protected]', '2'),
               ('[email protected]', '8,9'),
               ('[email protected]', '4,5,9'),
               ('[email protected]', '1,2,3'),
               ('[email protected]', '9,10'),
               ('[email protected]', '1,5');

    The order list stores the item IDs as a comma-separated value for every customer who places an order. I have more than 40k records like this in my DB table. Now I am assigned the task of creating a report in which I should display items and the number of people who ordered each item, as shown below:

        ItemName    NoOfOrders
        Product A   4
        Product B   3
        Product C   1
        Product D   3
        Product E   4
        Product F   0
        Product G   2
        Product H   1
        Product I   2
        Product J   1

    I used the query below in my PHP to bring the orders one by one and store them in an array:

        SELECT COUNT(PL.EmailId) FROM OrderList PL
        WHERE CSVProductIds LIKE '2'
           OR CSVProductIds LIKE '%,2,%'
           OR CSVProductIds LIKE '%,2'
           OR CSVProductIds LIKE '2,%';

    1. Is it possible to get the same output using a single query?
    2. Does using LIKE in a MySQL query slow down the DB when the table has many records, i.e. 40k rows?
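
    For question 1, a single-query sketch using MySQL's FIND_IN_SET (which matches a value inside a comma-separated string). For question 2: yes - both LIKE '%...%' and FIND_IN_SET prevent index use, which is the usual argument for normalizing into an order_items(OrderId, ProductId) table at this row count:

        -- Sketch: one query for the whole report; COUNT of NULLs from the
        -- LEFT JOIN yields 0 for never-ordered products such as Product F.
        SELECT p.ProductName AS ItemName,
               COUNT(o.OrderId) AS NoOfOrders
        FROM ProductsList p
        LEFT JOIN OrderList o
               ON FIND_IN_SET(p.ProductId, o.CSVProductIds) > 0
        GROUP BY p.ProductId, p.ProductName
        ORDER BY p.ProductId;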

  • Call to a member function find() on a non-object simpleHTMLDOM PHP

    - by wandersolo
    I am trying to read a link from one page, print the URL, go to that page, read the link on the next page in the same location, print the URL, go to that page (and so on...). All I'm doing is reading the URL and passing it as an argument to the get_links() method until there are no more links. This is my code, but it throws a fatal error: "Call to a member function find() on a non-object". Does anyone know what's wrong?

        <?php
        $mainPage = 'https://www.bu.edu/link/bin/uiscgi_studentlink.pl/1346752597?ModuleName=univschr.pl&SearchOptionDesc=Class+Subject&SearchOptionCd=C&KeySem=20133&ViewSem=Fall+2012&Subject=&MtgDay=&MtgTime=';
        get_links($mainPage);

        function get_links($url)
        {
            $data = new simple_html_dom();
            $data = file_get_html($url);
            $nodes = $data->find("input[type=hidden]");
            $fURL = $data->find("/html/body/form");
            $firstPart = $fURL[0]->action . '<br>';

            foreach ($nodes as $node) {
                $val  = $node->value;
                $name = $node->name;
                $name . '<br />';
                $val . "<br />";
                $str1 = $str1 . "&" . $name . "=" . $val;
            }

            $fixStr1 = str_replace('&College', '?College', $str1);
            $fixStr2 = str_replace('Fall 2012', 'Fall+2012', $fixStr1);
            $fixStr3 = str_replace('Class Subject', 'Class+Subject', $fixStr2);
            $fixStr4 = $firstPart . $fixStr3;
            echo $nextPageURL = chop($fixStr4);
            get_links($nextPageURL);
        }
        ?>
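
    A guard sketch, assuming the usual failure mode: file_get_html() returns false when a page can't be fetched or parsed, after which ->find() is called on a non-object; checking the return value (and giving the recursion a stopping condition) localizes the problem:

        // Sketch: validate each fetch and stop recursing when no form is found.
        $data = file_get_html($url);
        if ($data === false) {
            die("Could not load or parse: $url");
        }
        $forms = $data->find("/html/body/form");
        if (empty($forms)) {
            return;   // no next page - end of the chain
        }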

  • 12c - Utl_Call_Stack...

    - by noreply(at)blogger.com (Thomas Kyte)
    Over the next couple of months, I'll be writing about some cool new little features of Oracle Database 12c - things that might not make the front page of Oracle.com. I'm going to start with a new package - UTL_CALL_STACK.

    In the past, developers have had access to three functions to try to figure out "where the heck am I in my code"; they were:

        dbms_utility.format_call_stack
        dbms_utility.format_error_backtrace
        dbms_utility.format_error_stack

    Now these routines, while useful, were of somewhat limited use. Let's look at the format_call_stack routine for a reason why. Here is a procedure that will just print out the current call stack for us:

        ops$tkyte%ORA12CR1> create or replace
          2  procedure Print_Call_Stack
          3  is
          4  begin
          5    DBMS_Output.Put_Line(DBMS_Utility.Format_Call_Stack());
          6  end;
          7  /
        Procedure created.

    Now, if we have a package - with nested functions and even duplicated function names:

        ops$tkyte%ORA12CR1> create or replace
          2  package body Pkg is
          3    procedure p
          4    is
          5      procedure q
          6      is
          7        procedure r
          8        is
          9          procedure p is
         10          begin
         11            Print_Call_Stack();
         12            raise program_error;
         13          end p;
         14        begin
         15          p();
         16        end r;
         17      begin
         18        r();
         19      end q;
         20    begin
         21      q();
         22    end p;
         23  end Pkg;
         24  /
        Package body created.

    When we execute the procedure PKG.P, we'll see as a result:

        ops$tkyte%ORA12CR1> exec pkg.p
        ----- PL/SQL Call Stack -----
          object      line  object
          handle    number  name
        0x6e891528         4  procedure OPS$TKYTE.PRINT_CALL_STACK
        0x6ec4a7c0        10  package body OPS$TKYTE.PKG
        0x6ec4a7c0        14  package body OPS$TKYTE.PKG
        0x6ec4a7c0        17  package body OPS$TKYTE.PKG
        0x6ec4a7c0        20  package body OPS$TKYTE.PKG
        0x76439070         1  anonymous block

        BEGIN pkg.p; END;

        *
        ERROR at line 1:
        ORA-06501: PL/SQL: program error
        ORA-06512: at "OPS$TKYTE.PKG", line 11
        ORA-06512: at "OPS$TKYTE.PKG", line 14
        ORA-06512: at "OPS$TKYTE.PKG", line 17
        ORA-06512: at "OPS$TKYTE.PKG", line 20
        ORA-06512: at line 1

    The first block above (through "anonymous block") is the output from format_call_stack, whereas the rest is the error message returned to the client application (it would also be available to you via the format_error_backtrace API call). As you can see, it contains useful information, but to use it you would need to parse it - and that can be trickier than it seems. The format of those strings is not set in stone; they have changed over the years. (I wrote the "who_am_i" and "who_called_me" functions by parsing these strings - trust me, they change over time!)

    Starting in 12c, we'll have structured access to the call stack and a series of API calls to interrogate this structure. I'm going to rewrite the print_call_stack function as follows:

        ops$tkyte%ORA12CR1> create or replace
          2  procedure Print_Call_Stack
          3  as
          4    Depth pls_integer := UTL_Call_Stack.Dynamic_Depth();
          5
          6    procedure headers
          7    is
          8    begin
          9        dbms_output.put_line( 'Lexical   Depth   Line    Name' );
         10        dbms_output.put_line( 'Depth             Number      ' );
         11        dbms_output.put_line( '-------   -----   ----    ----' );
         12    end headers;
         13    procedure print
         14    is
         15    begin
         16        headers;
         17        for j in reverse 1..Depth loop
         18          DBMS_Output.Put_Line(
         19            rpad( utl_call_stack.lexical_depth(j), 10 ) ||
         20            rpad( j, 7) ||
         21            rpad( To_Char(UTL_Call_Stack.Unit_Line(j), '99'), 9 ) ||
         22            UTL_Call_Stack.Concatenate_Subprogram
         23                       (UTL_Call_Stack.Subprogram(j)));
         24        end loop;
         25    end;
         26  begin
         27    print;
         28  end;
         29  /

    Here we are able to figure out what 'depth' we are at in the code (utl_call_stack.dynamic_depth) and then walk up the stack using a loop. We will print out the lexical depth, along with the line number within the unit we were executing, plus the unit name. And not just any unit name, but the fully qualified name, all of the way down to the subprogram name within a package. Not only that, but down to the subprogram name within a subprogram name within a subprogram name. For example, running the PKG.P procedure again results in:

        ops$tkyte%ORA12CR1> exec pkg.p
        Lexical   Depth   Line    Name
        Depth             Number
        -------   -----   ----    ----
        1         6       20      PKG.P
        2         5       17      PKG.P.Q
        3         4       14      PKG.P.Q.R
        4         3       10      PKG.P.Q.R.P
        0         2       26      PRINT_CALL_STACK
        1         1       17      PRINT_CALL_STACK.PRINT

        BEGIN pkg.p; END;

        *
        ERROR at line 1:
        ORA-06501: PL/SQL: program error
        ORA-06512: at "OPS$TKYTE.PKG", line 11
        ORA-06512: at "OPS$TKYTE.PKG", line 14
        ORA-06512: at "OPS$TKYTE.PKG", line 17
        ORA-06512: at "OPS$TKYTE.PKG", line 20
        ORA-06512: at line 1

    This time we get much more than just a line number and a package name, as we did previously with format_call_stack. We not only get the line number and package (unit) name - we get the names of the subprograms: we can see that P called Q called R called P as nested subprograms. Also note that we can see a 'truer' calling level with the lexical depth: we can see that we "stepped" out of the package to call print_call_stack, and that it in turn called another nested subprogram.

    This new package will be a nice addition to everyone's error-logging packages. Of course, there are other functions in there to get owner names, the edition in effect when the code was executed, and more. See UTL_CALL_STACK for all of the details.

  • Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

    - by Dave Jarvis
    Storyline

    Trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how to migrate the structure. I have found very little documentation on migrating the data. The schema has 13 tables (which have been migrated successfully) and 9 GB of data.

        MySQL version: 5.1.x
        PostgreSQL version: 8.4.x

    I want to use the R programming language to analyze the data using SQL select statements; PostgreSQL has PL/R, but MySQL has nothing (as far as I can tell).

    A long time ago in a galaxy far, far away...

    Create the database location (/var has insufficient space; I also dislike having the PostgreSQL version number everywhere - upgrading would break scripts!):

        sudo mkdir -p /home/postgres/main
        sudo cp -Rp /var/lib/postgresql/8.4/main /home/postgres
        sudo chown -R postgres.postgres /home/postgres
        sudo chmod -R 700 /home/postgres
        sudo usermod -d /home/postgres/ postgres

    All good to here. Next, restart the server and configure the database using these installation instructions:

        sudo apt-get install postgresql pgadmin3
        sudo /etc/init.d/postgresql-8.4 stop
        sudo vi /etc/postgresql/8.4/main/postgresql.conf
          (change data_directory to /home/postgres/main)
        sudo /etc/init.d/postgresql-8.4 start
        sudo -u postgres psql postgres
          \password postgres
        sudo -u postgres createdb climate
        pgadmin3

    Use pgadmin3 to configure the database and create a schema.

    A New Hope

    The episode began in a remote shell known as bash, with both databases running, and the installation of a command with a most unusual logo: SQL Fairy.

        perl Makefile.PL
        sudo make install
        sudo apt-get install perl-doc    (strangely, it is not called perldoc)
        perldoc SQL::Translator::Manual

    Extract a PostgreSQL-friendly DDL and all the MySQL data:

        sqlt -f DBI --dsn dbi:mysql:climate --db-user user --db-password password \
             -t PostgreSQL > climate-pg-ddl.sql
        mysqldump --skip-add-locks --complete-insert --no-create-db --no-create-info \
             --quick --result-file="climate-my.sql" --databases climate --skip-comments -u root -p

    The Database Strikes Back

    Recreate the structure in PostgreSQL as follows:

        1. pgadmin3 (switch to it)
        2. Click the "Execute arbitrary SQL queries" icon
        3. Open climate-pg-ddl.sql
        4. Search for TABLE ", replace with TABLE climate." (insert the schema name climate)
        5. Search for on ", replace with on climate." (insert the schema name climate)
        6. Press F5 to execute

    This results in: Query returned successfully with no result in 122 ms.

    Replies of the Jedi

    At this point I am stumped.

        1. Where do I go from here (what are the steps) to convert climate-my.sql to
           climate-pg.sql so that it can be executed against PostgreSQL?
        2. How do I make sure the indexes are copied over correctly (to maintain
           referential integrity; I don't have constraints at the moment, to ease
           the transition)?
        3. How do I ensure that adding new rows in PostgreSQL will start enumerating
           from the index of the last row inserted (and not conflict with an existing
           primary key from the sequence)?

    Resources

    A fair bit of information was needed to get this far:

        https://help.ubuntu.com/community/PostgreSQL
        http://articles.sitepoint.com/article/site-mysql-postgresql-1
        http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL
        http://pgfoundry.org/frs/shownotes.php?release_id=810
        http://sqlfairy.sourceforge.net/

    Thank you!
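
    For the first question, one route sketched under the assumption that MySQL 5.1's mysqldump is in play: dump the data with --compatible=postgresql so most quoting survives, then load with psql (manual fixes for backslash escapes and type quirks may still be needed):

        # Sketch: data-only dump in a PostgreSQL-leaning dialect, then load it.
        mysqldump --compatible=postgresql --no-create-info --complete-insert \
                  --skip-add-locks --skip-comments -u root -p climate > climate-data.sql
        psql -U postgres -d climate -f climate-data.sql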

  • backup and restoration of a freeipa infrastructure

    - by Sirex
    I'm finding the documentation on IPA server backup and restoration sadly lacking, and it being so centrally critical, it's not something I'm really happy shooting in the dark with - could some kind soul more knowledgeable in the matter please attempt to provide an idiot-proof guide to backing up and restoring IPA server(s)? Particularly the main server (the cert-signing one).

    We're looking at rolling out IPA in a two-server setup (1 master, 1 replica). I'm using DNS SRV records to handle failover, hence losing the replica isn't a big deal, as I could make a new one and force a resync - it's losing the master that bothers me.

    The thing I'm really struggling with is locating a step-by-step procedure for backing up and restoring the master server. I'm aware that a whole-VM snapshot is the recommended way of doing IPA server backup, but that isn't an option for us at this time. I'm also aware that FreeIPA 3.2.0 includes some sort of built-in backup command, but that isn't in the IPA version in CentOS, and I don't expect it will be for some time yet.

    I've been trying many different methods, but none of them seem to restore cleanly. Amongst others, I've tried:

        a command similar to:
        db2ldif.pl -D "cn=directory manager" -w - -n userroot -a /root/userroot.ldif

        the script from here to produce three LDIF files - one for the domain
        ({domain}-userroot), and two for the IPA server (ipa-ipaca and ipa-userroot)

    Most of the restores I've tried have been of the form:

        ldif2db.pl -D "cn=directory manager" -w - -n userroot -i userroot.ldif

    which seems to work and reports no errors, but totally borks the IPA install on the machine: I can no longer log in with either the admin password on the backed-up server, or the one I set on installation before attempting the ldif2db command (I'm installing ipa-server and running ipa-server-install, then attempting the restore).

    I'm not overly bothered about losing the CA, having to rejoin the domain, losing replication, etc. (although it'd be awesome if that could be avoided), but if the main server drops I'd really like to avoid having to re-enter all the user/group information. I guess if I lost the main server I could promote the other one and replicate in the other direction, but I've not tried that either.

    tl;dr: Can someone provide an idiot's guide to backing up and restoring an IPA server (preferably on CentOS 6), in a clear enough way that'd make me feel confident it'll actually work when the dreaded time comes? Crayons are optional, but appreciated ;-) I can't be the only person struggling with this, seeing how widely used IPA is, surely?
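
    For reference, a sketch of the FreeIPA 3.2+ built-in tooling the question mentions (not available in the CentOS 6 ipa packages at the time; the backup name below is illustrative):

        # FreeIPA >= 3.2 ships first-class backup/restore commands:
        ipa-backup      # writes a full backup under /var/lib/ipa/backup/
        ipa-restore /var/lib/ipa/backup/ipa-full-2013-08-01-01-01-01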

  • Run a perl script as a windows 7 service

    - by reptile
    I have a Perl script, compiled with pp, that should run as a Windows service on Windows 7 machines. I looked at the thread http://www.perlmonks.org/index.pl?node%5Fid=230377, but it was of little use because most of the answers weren't clear, and the solutions suggested there were for creating executables, not actually for running as a Windows 7 service. I tried putting my compiled exe in the Windows scheduled tasks, but I think it's not able to run for some reason. How do I debug this?
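
    One common approach, sketched with illustrative paths and a made-up service name: register the compiled exe as a service via sc plus the Resource Kit's srvany.exe (tools like NSSM, or the Win32::Daemon module before compiling, are alternatives):

        rem Sketch - service name and paths are illustrative.
        sc create MyPerlSvc binPath= "C:\tools\srvany.exe" start= auto
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\MyPerlSvc\Parameters" ^
            /v Application /t REG_SZ /d "C:\scripts\myscript.exe"
        sc start MyPerlSvc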

  • PELSOFT LAB DELHI ELAN LAPTOP, CHINESE MAKE, LOW QUALITY [closed]

    - by PREETI HARI
    I purchased an Elan laptop from Pelsoft Lab, Delhi, after a lot of follow-up calls from their call centers. Within one month I found this laptop is of no use, and the company failed to provide service. On enquiry in the market, I was told that this laptop is a Chinese make, and that all the components are non-standard and cannot be replaced or repaired. Does anybody know how to format and repair this kind of system? Otherwise I will lose Rs 22,000 at a stretch. Please help.

  • Bacula windows client could not connect to Bacula director

    - by pr0f-r00t
    I have a Bacula server on my Linux Debian squeeze host (Bacula version 5.0.2) and a Bacula client on Windows XP SP3. On my network each machine can see the other, they can share files, and they can ping each other. On my local server I can run bconsole and the server responds, but when I run bconsole or bat on my Windows client the server does not respond. Here are my configuration files.

    bacula-dir.conf:

        # Default Bacula Director Configuration file
        # For Bacula release 5.0.2 (28 April 2010) -- debian squeeze/sid

        Director {                            # define myself
          Name = nima-desktop-dir
          DIRport = 9101                      # where we listen for UA connections
          QueryFile = "/etc/bacula/scripts/query.sql"
          WorkingDirectory = "/var/lib/bacula"
          PidDirectory = "/var/run/bacula"
          Maximum Concurrent Jobs = 1
          Password = "Cv70F6pf1t6pBopT4vQOnigDrR0v3L"   # Console password
          Messages = Daemon
          DirAddress = 127.0.0.1
          # DirAddress = 72.16.208.1
        }

        JobDefs {
          Name = "DefaultJob"
          Type = Backup
          Level = Incremental
          Client = nima-desktop-fd
          FileSet = "Full Set"
          Schedule = "WeeklyCycle"
          Storage = File
          Messages = Standard
          Pool = File
          Priority = 10
          Write Bootstrap = "/var/lib/bacula/%c.bsr"
        }

        # Define the main nightly save backup job
        # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir
        Job {
          Name = "BackupClient1"
          JobDefs = "DefaultJob"
        }
        #Job {
        #  Name = "BackupClient2"
        #  Client = nima-desktop2-fd
        #  JobDefs = "DefaultJob"
        #}

        # Backup the catalog database (after the nightly save)
        Job {
          Name = "BackupCatalog"
          JobDefs = "DefaultJob"
          Level = Full
          FileSet = "Catalog"
          Schedule = "WeeklyCycleAfterBackup"
          # This creates an ASCII copy of the catalog.
          # Arguments to make_catalog_backup.pl are: make_catalog_backup.pl <catalog-name>
          RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
          # This deletes the copy of the catalog
          RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup"
          Write Bootstrap = "/var/lib/bacula/%n.bsr"
          Priority = 11                       # run after main backup
        }

        # Standard Restore template, to be changed by Console program.
        # Only one such job is needed for all Jobs/Clients/Storage ...
        Job {
          Name = "RestoreFiles"
          Type = Restore
          Client = nima-desktop-fd
          FileSet = "Full Set"
          Storage = File
          Pool = Default
          Messages = Standard
          Where = /nonexistant/path/to/file/archive/dir/bacula-restores
        }

        # Job for vmware windows host
        Job {
          Name = "nimaxp-fd"
          Type = Backup
          Client = nimaxp-fd
          FileSet = "nimaxp-fs"
          Schedule = "WeeklyCycle"
          Storage = File
          Messages = Standard
          Pool = Default
          Write Bootstrap = "/var/bacula/working/rsys-win-www-1-fd.bsr"  # Change this
        }

        # Job for vmware windows host
        Job {
          Name = "arg-michael-fd"
          Type = Backup
          Client = nimaxp-fd
          FileSet = "arg-michael-fs"
          Schedule = "WeeklyCycle"
          Storage = File
          Messages = Standard
          Pool = Default
          Write Bootstrap = "/var/bacula/working/rsys-win-www-1-fd.bsr"  # Change this
        }

        # List of files to be backed up
        FileSet {
          Name = "Full Set"
          Include {
            Options { signature = MD5 }
            # By default this is defined to point to the Bacula binary directory
            # to give a reasonable FileSet to back up to disk storage during
            # initial testing. Note: / backs up everything on the root partition;
            # if you have other partitions such as /usr or /home, add them too.
            File = /usr/sbin
          }
          # If you back up the root directory, the following excluded files can be useful
          Exclude {
            File = /var/lib/bacula
            File = /nonexistant/path/to/file/archive/dir
            File = /proc
            File = /tmp
            File = /.journal
            File = /.fsck
          }
        }

        # List of files to be backed up
        FileSet {
          Name = "nimaxp-fs"
          Enable VSS = yes
          Include {
            Options { signature = MD5 }
            File = "C:\softwares"
            File = C:/softwares
            File = "C:/softwares"
          }
        }

        # List of files to be backed up
        FileSet {
          Name = "arg-michael-fs"
          Enable VSS = yes
          Include {
            Options { signature = MD5 }
            File = "C:\softwares"
            File = C:/softwares
            File = "C:/softwares"
          }
        }

        # When to do the backups: full backup on the first Sunday of the month,
        # differential (i.e. incremental since full) every other Sunday,
        # and incremental backups on other days
        Schedule {
          Name = "WeeklyCycle"
          Run = Full 1st sun at 23:05
          Run = Differential 2nd-5th sun at 23:05
          Run = Incremental mon-sat at 23:05
        }

        # This schedule does the catalog. It starts after the WeeklyCycle
        Schedule {
          Name = "WeeklyCycleAfterBackup"
          Run = Full sun-sat at 23:10
        }

        # This is the backup of the catalog
        FileSet {
          Name = "Catalog"
          Include {
            Options { signature = MD5 }
            File = "/var/lib/bacula/bacula.sql"
          }
        }

        # Client (File Services) to backup
        Client {
          Name = nima-desktop-fd
          Address = localhost
          FDPort = 9102
          Catalog = MyCatalog
          Password = "_MOfxEuRzxijc0DIMcBqtyx9iW1tzE7V6"   # password for FileDaemon
          File Retention = 30 days
          Job Retention = 6 months
          AutoPrune = yes                     # Prune expired Jobs/Files
        }

        # Client file service for vmware windows host
        Client {
          Name = nimaxp-fd
          Address = nimaxp
          FDPort = 9102
          Catalog = MyCatalog
          Password = "Ku8F1YAhDz5EMUQjiC9CcSw95Aho9XbXailUmjOaAXJP"
          File Retention = 30 days
          Job Retention = 6 months
          AutoPrune = yes
        }

        # Client file service for vmware windows host
        Client {
          Name = arg-michael-fd
          Address = 192.168.0.61
          FDPort = 9102
          Catalog = MyCatalog
          Password = "b4E9FU6s/9Zm4BVFFnbXVKhlyd/zWxj0oWITKK6CALR/"
          File Retention = 30 days
          Job Retention = 6 months
          AutoPrune = yes
        }

        # Second Client (File Services) to backup.
        # You should change Name, Address, and Password before using.
        #Client {
        #  Name = nima-desktop2-fd
        #  Address = localhost2
        #  FDPort = 9102
        #  Catalog = MyCatalog
        #  Password = "_MOfxEuRzxijc0DIMcBqtyx9iW1tzE7V62"
        #  File Retention = 30 days
        #  Job Retention = 6 months
        #  AutoPrune = yes
        #}

        # Definition of file storage device
        Storage {
          Name = File                         # Do not use "localhost" here
          Address = localhost                 # N.B. use a fully qualified name here
          SDPort = 9103
          Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9"
          Device = FileStorage
          Media Type = File
        }

        # Definition of DDS tape storage device
        #Storage {
        #  Name = DDS-4
        #  Address = localhost
        #  SDPort = 9103
        #  Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9"  # password for Storage daemon
        #  Device = DDS-4                     # must be same as Device in Storage daemon
        #  Media Type = DDS-4                 # must be same as MediaType in Storage daemon
        #  Autochanger = yes                  # enable for autochanger device
        #}

        # Definition of 8mm tape storage device
        #Storage {
        #  Name = "8mmDrive"
        #  Address = localhost
        #  SDPort = 9103
        #  Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9"
        #  Device = "Exabyte 8mm"
        #  MediaType = "8mm"
        #}

        # Definition of DVD storage device
        #Storage {
        #  Name = "DVD"
        #  Address = localhost
        #  SDPort = 9103
        #  Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9"
        #  Device = "DVD Writer"
        #  MediaType = "DVD"
        #}

        # Generic catalog service
        Catalog {
          Name = MyCatalog
          # Uncomment the following line if you want the dbi driver:
          # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport =
          dbname = "bacula"; dbuser = ""; dbpassword = ""
        }

        # Reasonable message delivery -- send most everything to an email address
        # and to the console.
        # NOTE! If you send to two or more email addresses, replace the %r in the
        # from field (-f part) with a single valid email address in both the
        # mailcommand and the operatorcommand; e.g. '[email protected]' tells
        # (most) people that it is coming from an automated source.
        Messages {
          Name = Standard
          mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
          operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
          mail = root@localhost = all, !skipped
          operator = root@localhost = mount
          console = all, !skipped, !saved
          # WARNING! The following creates a file that will grow indefinitely;
          # cycle it from time to time. It keeps messages that scroll off the console.
          append = "/var/lib/bacula/log" = all, !skipped
          catalog = all
        }

        # Message delivery for daemon messages (no job).
        Messages {
          Name = Daemon
          mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r"
          mail = root@localhost = all, !skipped
          console = all, !skipped, !saved
          append = "/var/lib/bacula/log" = all, !skipped
        }

        # Default pool definition
        Pool {
          Name = Default
          Pool Type = Backup
          Recycle = yes                       # Bacula can automatically recycle Volumes
          AutoPrune = yes                     # Prune expired volumes
          Volume Retention = 365 days         # one year
        }

        # File Pool definition
        Pool {
          Name = File
          Pool Type = Backup
          Recycle = yes
          AutoPrune = yes
          Volume Retention = 365 days
          Maximum Volume Bytes = 50G          # limit Volume size to something reasonable
          Maximum Volumes = 100               # limit number of Volumes in Pool
        }

        # Scratch pool definition
        Pool {
          Name = Scratch
          Pool Type = Backup
        }

        # Restricted console used by tray-monitor to get the status of the director
        Console {
          Name = nima-desktop-mon
          Password = "-T0h6HCXWYNy0wWqOomysMvRGflQ_TA6c"
          CommandACL = status, .status
        }

    bacula-fd.conf on the client:

        # Default Bacula File Daemon Configuration file
        # For Bacula release 5.0.3 (08/05/10) -- Windows MinGW32

        # "Global" File daemon configuration specifications
        FileDaemon {                          # this is me
          Name = nimaxp-fd
          FDport = 9102                       # where we listen for the director
          WorkingDirectory = "C:\\Program Files\\Bacula\\working"
          Pid Directory = "C:\\Program Files\\Bacula\\working"
          # Plugin Directory = "C:\\Program Files\\Bacula\\plugins"
          Maximum Concurrent Jobs = 10
        }

        # List Directors who are permitted to contact this File daemon
        Director {
          Name = Nima-desktop-dir
          Password = "Cv70F6pf1t6pBopT4vQOnigDrR0v3L"
        }

        # Restricted Director, used by tray-monitor to get the
        # status of the file daemon
        Director {
          Name = nimaxp-mon
          Password = "q5b5g+LkzDXorMViFwOn1/TUnjUyDlg+gRTBp236GrU3"
          Monitor = yes
        }

        # Send all messages except skipped files back to Director
        Messages {
          Name = Standard
          director = Nima-desktop = all, !skipped, !restored
        }

    I have checked my firewall, and even with the firewall disabled it doesn't work.
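
    One detail worth checking, flagged here as a hypothesis rather than a confirmed diagnosis: in bacula-dir.conf the Director binds to DirAddress = 127.0.0.1, which listens on loopback only - that would match "works locally via bconsole, remote client gets no response":

        # Sketch: let the Director listen on the LAN (address is illustrative)
        Director {
          Name = nima-desktop-dir
          DIRport = 9101
          # DirAddress = 127.0.0.1       # loopback only - remote FDs can't connect
          DirAddress = 192.168.0.1       # or omit DirAddress to listen on all interfaces
          # ... rest unchanged ...
        }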

  • split command on Ubuntu command-line

    - by pedro
    I want to split a file into multiple files with at most 25 lines each. I'm using this:

        split -l 25 /etc/adduser.conf > /home/ubuntu/PL/trab3/rc_

    But I do not get the files I expect. How can I get files with filenames like rc_01, rc_02, etc.?
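
    A sketch of the usual invocation: split takes the output prefix as a positional argument (the > redirection instead sends split's stdout, which is empty, into a file named rc_), and GNU split's -d switches to numeric suffixes:

        # Prefix is an argument, not a redirection target:
        split -l 25 -d -a 2 /etc/adduser.conf /home/ubuntu/PL/trab3/rc_
        # -> rc_00, rc_01, rc_02, ... (suffixes start at 00 by default)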

  • Lighttpd not cleanly restarting (address already in use)

    - by NilObject
    After a recent dist-upgrade, my lighttpd-1.4.19 install on Ubuntu 8.04 began failing to restart or reload properly with the /etc/init.d/lighttpd restart command:

        ~$ sudo /etc/init.d/lighttpd restart
         * Stopping web server lighttpd            ...done.
         * Starting web server lighttpd
        2009-06-13 04:06:36: (network.c.300) can't bind to port: 80 Address already in use
           ...fail!

    The same error occurs when I do a reload. The way I get around it is to kill lighttpd and then issue the start command, but it seems like I shouldn't have to do that :)

    I've looked at my config files and can't spot any immediate errors. Does anyone have any ideas what could be causing this? This is the latest version available via apt-get as of writing this question. My config file is:

        # Debian lighttpd configuration file

        ## modules to load
        # mod_access, mod_accesslog and mod_alias are loaded by default;
        # all other modules should only be loaded if necessary -
        # this saves some time and saves memory
        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_accesslog",
            "mod_compress",
            "mod_fastcgi",
            "mod_rewrite",
            "mod_redirect",
        )

        ## a static document-root; for virtual hosting take a look at the
        ## server.virtual-* options
        server.document-root = "/var/www/"

        ## where to send error messages
        server.errorlog = "/var/log/lighttpd/error.log"

        fastcgi.server = (".php" => ((
            "bin-path" => "/usr/bin/php5-cgi",
            "socket"   => "/tmp/php.socket"
        )))

        ## files to check for if .../ is requested
        index-file.names = ( "index.php", "index.html", "index.htm",
                             "default.htm", "index.lighttpd.html" )

        ## Use the "Content-Type" extended attribute to obtain mime type if possible
        # mimetype.use-xattr = "enable"

        #### accesslog module
        accesslog.filename = "/var/log/lighttpd/access.log"

        ## deny access to these file extensions
        # ~ is for backup files from vi, emacs, joe, ...
        # .inc is often used for code includes which should in general
        # not be part of the document root
        url.access-deny = ( "~", ".inc" )

        ## extensions that should not be handled via static-file transfer
        # (.php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi)
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )

        mimetype.assign = (
            ".pdf" => "application/pdf", ".sig" => "application/pgp-signature",
            ".spl" => "application/futuresplash", ".class" => "application/octet-stream",
            ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent",
            ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip",
            ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash",
            ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz",
            ".tar" => "application/x-tar", ".zip" => "application/zip",
            ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl",
            ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax",
            ".ogg" => "audio/x-wav", ".wav" => "audio/x-wav",
            ".gif" => "image/gif", ".jpg" => "image/jpeg",
            ".jpeg" => "image/jpeg", ".png" => "image/png",
            ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap",
            ".xwd" => "image/x-xwindowdump", ".css" => "text/css",
            ".html" => "text/html", ".htm" => "text/html",
            ".js" => "text/javascript", ".asc" => "text/plain",
            ".c" => "text/plain", ".conf" => "text/plain",
            ".text" => "text/plain", ".txt" => "text/plain",
            ".dtd" => "text/xml", ".xml" => "text/xml",
            ".rss" => "application/rss+xml", ".mpeg" => "video/mpeg",
            ".mpg" => "video/mpeg", ".mov" => "video/quicktime",
            ".qt" => "video/quicktime", ".avi" => "video/x-msvideo",
            ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf",
            ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip",
            ".tbz" => "application/x-bzip-compressed-tar",
            ".tar.bz2" => "application/x-bzip-compressed-tar"
        )

        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"

    My /etc/init.d/lighttpd script (untouched from installation) is:

        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides:          lighttpd
        # Required-Start:    networking
        # Required-Stop:     networking
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Start the lighttpd web server.
        ### END INIT INFO

        PATH=/sbin:/bin:/usr/sbin:/usr/bin
        DAEMON=/usr/sbin/lighttpd
        NAME=lighttpd
        DESC="web server"
        PIDFILE=/var/run/$NAME.pid
        SCRIPTNAME=/etc/init.d/$NAME
        ENV="env -i LANG=C PATH=/usr/local/bin:/usr/bin:/bin"
        SSD="/sbin/start-stop-daemon"
        DAEMON_OPTS="-f /etc/lighttpd/lighttpd.conf"

        test -x $DAEMON || exit 0

        set -e

        # be sure there is a /var/run/lighttpd, even with tmpfs
        mkdir -p /var/run/lighttpd > /dev/null 2> /dev/null
        chown www-data:www-data /var/run/lighttpd
        chmod 0750 /var/run/lighttpd

        . /lib/lsb/init-functions

        case "$1" in
          start)
                log_daemon_msg "Starting $DESC" $NAME
                if ! $ENV $SSD --start --quiet \
                    --pidfile $PIDFILE --exec $DAEMON -- $DAEMON_OPTS ; then
                    log_end_msg 1
                else
                    log_end_msg 0
                fi
                ;;
          stop)
                log_daemon_msg "Stopping $DESC" $NAME
                if $SSD --quiet --stop --oknodo --retry 30 \
                    --pidfile $PIDFILE --exec $DAEMON; then
                    rm -f $PIDFILE
                    log_end_msg 0
                else
                    log_end_msg 1
                fi
                ;;
          reload)
                log_daemon_msg "Reloading $DESC configuration" $NAME
                if $SSD --stop --signal 2 --oknodo --retry 30 \
                    --quiet --pidfile $PIDFILE --exec $DAEMON; then
                    if $ENV $SSD --start --quiet \
                        --pidfile $PIDFILE --exec $DAEMON -- $DAEMON_OPTS ; then
                        log_end_msg 0
                    else
                        log_end_msg 1
                    fi
                else
                    log_end_msg 1
                fi
                ;;
          restart|force-reload)
                $0 stop
                [ -r $PIDFILE ] && while pidof lighttpd | \
                    grep -q `cat $PIDFILE 2>/dev/null` 2>/dev/null ; do
                    sleep 1
                done
                $0 start
                ;;
          *)
                echo "Usage: $SCRIPTNAME {start|stop|restart|reload|force-reload}" >&2
                exit 1
                ;;
        esac

        exit 0
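
    A first diagnostic sketch: identify what is still holding port 80 right after the stop step (an orphaned lighttpd whose PID no longer matches the pidfile would explain why a manual kill fixes it):

        sudo /etc/init.d/lighttpd stop
        sudo netstat -tlnp | grep ':80 '    # or: sudo fuser -v 80/tcp
        cat /var/run/lighttpd.pid           # compare against the PID shown above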

  • Attempting to ping RPC endpoint 6001/6004 (Exchange Information Store) on an Exchange 2010 server

    - by MadBoy
    I have Exchange 2010 in a hosting setup like this:

        TMG 2010 as load balancer
        Exchange 2010 x 2 (CAS, MAILBOX, HUB on each server)
        AD1, AD2 machines
        File witness

    Everyone currently connects through OWA or POP3/SMTP, and that works fine. The problem is that autodiscovery doesn't work, and RPC, in terms of setting up Outlook, doesn't work either - whether I am connected over VPN or not. The thing is, it used to work. Before I reinstalled my machine two days ago, I was able to get mail successfully through an Outlook profile that had been set up using autodiscovery (but I was getting reports that setting up new clients wasn't working - so I'm not sure why my Outlook continued to work).

    I used https://www.testexchangeconnectivity.com to track it down, and basically the message is more or less this:

        Attempting to ping RPC endpoint 6004 (NSPI Proxy Interface) on server autodiscover.domain.pl.
        The attempt to ping the endpoint failed.
        Additional Details: The RPC_S_SERVER_UNAVAILABLE error (0x6ba) was thrown by the RPC Runtime process.

    I tried different solutions, like disabling IPv6, followed a couple of links, and did all they proposed, but it's still at the very same point:

        C:\Users\admin>netstat -a | find "6001"
          TCP    0.0.0.0:6001    EXCHANGE2:0    LISTENING
          TCP    [::]:6001       EXCHANGE2:0    LISTENING
        C:\Users\admin>netstat -a | find "6002"
        C:\Users\admin>netstat -a | find "6003"
        C:\Users\admin>netstat -a | find "6004"

    I followed these (and a few others), although most relate to Exchange 2007, while I have Exchange 2010, and there is not much I can find on Exchange 2010 for this problem:

        http://helewix.com/blog/index.php/Microsoft-Solutions/2011/02/10/exchange-2010-how-to-open-ports-6001-6002-and-6004-on-your-server-for-telnet-to-work-and-rpc-to-be-able-to-connect-2
        http://blogs.technet.com/b/exchange/archive/2008/06/20/3405633.aspx
        http://messagexchange.blogspot.com/2008/12/outlook-anywhere-failing-rpc-end-points.html

    After applying all of those solutions, error 6004 changed into error 6001, which doesn't bring me any closer to my problem. At that point, even though the error was on 6001 and no longer on 6004, port 6004 was still closed while 6001 stayed open:

        Attempting to ping RPC endpoint 6001 (Exchange Information Store) on server autodiscover.domain.pl.
        The attempt to ping the endpoint failed.
        Additional Details: The RPC_S_SERVER_UNAVAILABLE error (0x6ba) was thrown by the RPC Runtime process.

    The netstat output was unchanged: 6001 listening, 6002/6003/6004 closed. So I reverted back to square one. I suspect it's a problem with TMG, but I really can't be sure. I tried multiple combinations, but all fail.
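
    For reference, a sketch of the registry change the linked articles revolve around, stated here as an assumption to verify rather than a confirmed Exchange 2010 fix (it pins the NSPI proxy to port 6004 on the CAS; restart the relevant Exchange services afterwards):

        :: Sketch - run on the CAS role holder
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeSA\Parameters" ^
            /v "NSPI interface protocol sequences" /t REG_MULTI_SZ /d "ncacn_http:6004"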
