Search Results

Search found 1984 results on 80 pages for 'exec'.


  • wget not behaving via IPC::Open3 vs bash

    - by Ryley
    I'm trying to stream a file from a remote website to a local command and am running into some problems when trying to detect errors. The code looks something like this: use IPC::Open3; my @cmd = ('wget','-O','-','http://10.10.1.72/index.php');#any website will do here my ($wget_pid,$wget_in,$wget_out,$wget_err); if (!($wget_pid = open3($wget_in,$wget_out,$wget_err,@cmd))){ print STDERR "failed to run open3\n"; exit(1) } close($wget_in); my @wget_outs = <$wget_out>; my @wget_errs = <$wget_err>; print STDERR "wget stderr: ".join('',@wget_errs); #page and errors outputted on the next line, seems wrong print STDERR "wget stdout: ".join('',@wget_outs); #clean up after this, not shown is running the filtering command, closing and waitpid'ing When I run that wget command directly from the command-line and redirect stderr to a file, something sane happens - the stdout will be the downloaded page, the stderr will contain the info about opening the given page. wget -O - http://10.10.1.72/index.php 2> stderr_test_file When I run wget via open3, I'm getting both the page and the info mixed together in stdout. What I expect is the loaded page in one stream and STDERR from wget in another. I can see I've simplified the code to the point where it's not clear why I want to use open3, but the general plan is that I wanted to stream stdout to another filtering program as I received it, and then at the end I was going to read the stderr from both wget and the filtering program to determine what, if anything, went wrong. Other important things: I was trying to avoid writing the wget'd data to a file, then filtering that file to another file, then reading the output. It's key that I be able to see what went wrong, not just reading $? >> 8 (i.e. I have to tell the user, hey, that IP address is wrong, or isn't the right kind of website, or whatever). Finally, I'm choosing system/open3/exec over other perl-isms (i.e. backticks) because some of the input is provided by untrustworthy users.

    Read the article

  • Single console in Eclipse for both Server and Client

    - by rits
    I am building a client server application using Java Sockets (in Windows XP). For that I need different consoles for both Client and Server(for Input and Output operations). But in eclipse both share a single console. Is there any plugin or some sort of cheat through which I can do this. After googling I got this, http://dev.eclipse.org/newslists/news.eclipse.newcomer/msg17138.html But, this seems to be only for write operations, not read operations. Also, I tried the following to launch application manually, but even this is not working........ package mypack; import java.io.BufferedOutputStream; import java.io.File; import java.io.FileOutputStream; import java.io.IOException; import java.io.OutputStream; import java.io.PrintStream; public class MySystem { public static void changeStream(String mainFile) throws IOException{ File temp = new File(".") ; String parentPath = temp.getCanonicalPath() ; System.out.println(parentPath); //creation of batch file starts here try{ File f = new File(parentPath + "\\a.bat") ; System.out.println("Created : " + f.createNewFile()); //f.deleteOnExit() ; FileOutputStream fos = new FileOutputStream(f) ; String str = "java " + mainFile ; String batchCommand="@echo off\n"+str+"\npause\nexit"; char arr[] = batchCommand.toCharArray() ; System.out.println(str) ; for(int i = 0 ; i < arr.length ; i++){ fos.write(arr[i]) ; } fos.close() ; } catch(Exception e){ } //creation of batch file ends here //execution of batch file starts here try{ Runtime r = Runtime.getRuntime() ; System.out.println(parentPath + "\\a.bat") ; Process p = r.exec(new String[]{"cmd","/k","start a.bat"},null,new File(parentPath)) ; OutputStream os = (OutputStream)p.getOutputStream() ; System.setOut( new PrintStream(os) ) ; System.out.println("Hello"); } catch(Exception e){ e.printStackTrace(); } //execution of batch file ends here } public static void main(String[] args) throws IOException { MySystem.changeStream("MySystem") ; } }

    Read the article

  • Qt - problem appending to QList of QList

    - by bullettime
    I'm trying to append items to a QList at runtime, but I'm running on a error message. Basically what I'm trying to do is to make a QList of QLists and add a few customClass objects to each of the inner lists. Here's my code: widget.h: class Widget : public QWidget { Q_OBJECT public: Widget(QWidget *parent = 0); ~Widget(); public slots: static QList<QList<customClass> > testlist(){ QList<QList<customClass> > mylist; for(int w=0 ; w<5 ; w++){ mylist.append(QList<customClass>()); } for(int z=0 ; z<mylist.size() ; z++){ for(int x=0 ; x<10 ; x++){ customClass co = customClass(); mylist.at(z).append(co); } } return mylist; } }; customclass.h: class customClass { public: customClass(){ this->varInt = 1; this->varQString = "hello world"; } int varInt; QString varQString; }; main.cpp: int main(int argc, char *argv[]) { QApplication a(argc, argv); Widget w; QList<QList<customClass> > list; list = w.testlist(); w.show(); return a.exec(); } But when I try to run the application, it gives off this error: error: passing `const QList<customClass>' as `this' argument of `void List<T>::append(const T&) [with T = customClass]' discards qualifiers I also tried inserting the objects using foreach: foreach(QList<customClass> list, mylist){ for(int x=0 ; x<10 ; x++){ list.append(customClass()); } } The error was gone, but the customClass objects weren't appended, I could verify that by using a debugging loop in main that showed the inner QLists sizes as zero. What am I doing wrong?
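
    A likely culprit in the error above: QList::at() returns a const reference, so the inner lists cannot be modified through it; the non-const operator[] (or building each inner list before appending it) avoids the "discards qualifiers" complaint. A minimal sketch of the filling code, reusing the customClass from the question:

        #include <QList>

        // Sketch: build each inner list first, then append it to the outer list.
        QList<QList<customClass> > buildLists()
        {
            QList<QList<customClass> > mylist;
            for (int w = 0; w < 5; ++w) {
                QList<customClass> inner;
                for (int x = 0; x < 10; ++x)
                    inner.append(customClass());
                mylist.append(inner);
            }
            // Modifying in place also works via operator[], which returns a
            // mutable reference where at() returns a const one:
            for (int z = 0; z < mylist.size(); ++z)
                mylist[z].append(customClass());
            return mylist;
        }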

    Read the article

  • subclassing QList and operator+ overloading

    - by Milen
    I would like to be able to add two QList objects. For example: QList<int> b; b.append(10); b.append(20); b.append(30); QList<int> c; c.append(1); c.append(2); c.append(3); QList<int> d; d = b + c; For this reason, I decided to subclass the QList and to overload the operator+. Here is my code: class List : public QList<int> { public: List() : QList<int>() {} // Add QList + QList friend List operator+(const List& a1, const List& a2); }; List operator+(const List& a1, const List& a2) { List myList; myList.append(a1[0] + a2[0]); myList.append(a1[1] + a2[1]); myList.append(a1[2] + a2[2]); return myList; } int main(int argc, char *argv[]) { QCoreApplication a(argc, argv); List b; b.append(10); b.append(20); b.append(30); List c; c.append(1); c.append(2); c.append(3); List d; d = b + c; List::iterator i; for(i = d.begin(); i != d.end(); ++i) qDebug() << *i; return a.exec(); } , the result is correct but I am not sure whether this is a good approach. I would like to ask whether there is better solution?
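
    Worth noting before reaching for a subclass: QList already defines operator+ as concatenation, so giving plain QList<int> an element-wise operator+ is easy to misread and clashes with the built-in meaning. A plainly named free function keeps the intent obvious without deriving from QList; a sketch, which truncates to the shorter input rather than assuming equal lengths:

        #include <QList>
        #include <QtGlobal>   // qMin

        // Sketch: element-wise sum of two QList<int> without subclassing.
        QList<int> elementwiseSum(const QList<int> &a, const QList<int> &b)
        {
            QList<int> result;
            const int n = qMin(a.size(), b.size());
            for (int i = 0; i < n; ++i)
                result.append(a.at(i) + b.at(i));
            return result;
        }

        // usage: QList<int> d = elementwiseSum(b, c);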

    Read the article

  • Invoke Python modules from Java

    - by user36813
    I have a Python interface to a graph library written in C - igraph (the name of the library). I need to invoke the Python modules pertaining to this graph library from Java code. It goes like this: the core of the library is in C. This core has been imported into Python, and interfaces to the functions embedded in the core are available in Python. The rest of my project's code is in Java, and hence I would like to call the graph functions from Java as well. Jython - which lets you invoke Python modules within Java - was an option. I went on to try Jython, only to discover that it will not work in my case, as the core code is in C and Jython won't support anything that is imported as a C DLL in Python code. I also thought of opting for the approach of calling the graph routines directly in C. That is, without passing through Python code. I am assuming there must be something which lets you call C code from Java, however I am not good at C and hence did not go for it. My last resort seems to be executing the Python interpreter from the command line using Java. But that is dirty and shameless. Also, to deal with the results produced by the Python code I will have to write the results to a file and read them back in Java. Again, a dirty way. Is there something that anyone can suggest? Thanks to everyone giving time. Thanks, Igal, for answering. I had a look at it. At first glance it appears as if it is simply calling the Python script. Jep jep = new Jep(false, SCRIPT_PATH, cl); jep.set("query", query); jep.runScript(SCRIPT_PATH + file); jep.close(); Isn't it very similar to what we would do if we called the Python interpreter from the command line through Java code? Runtime runtime = Runtime.getRuntime(); Process proc = runtime.exec("python test.py"); The concern is how I use the results generated by the Python script. The naive way is to write them to a file and read them back in Java. I am searching for a smarter approach. Thanks for the suggestion anyway.

    Read the article

  • Possible bug with tabified QDockWidget and setFloating()

    - by krunk
    I've run into some odd behavior with tabified QDockWidgets, below is an example program with comments that demonstrates the behavior. Is this a bug or is it expected behavior and I'm missing some nuance in QDockWidget that causes this? Directly, since this does not work, how would one properly "undock" a hidden QDockWidget then display it? #include <QApplication> #include <QMainWindow> #include <QAction> #include <QDockWidget> #include <QMenu> #include <QSize> #include <QMenuBar> using namespace std; int main (int argc, char* argv[]) { QApplication app(argc, argv); QMainWindow window; QDockWidget dock1(&window); QDockWidget dock2(&window); QMenu menu("View"); dock1.setAllowedAreas(Qt::LeftDockWidgetArea | Qt::RightDockWidgetArea); dock2.setAllowedAreas(Qt::LeftDockWidgetArea | Qt::RightDockWidgetArea); dock1.setWindowTitle("Dock One"); dock2.setWindowTitle("Dock Two"); window.addDockWidget(Qt::RightDockWidgetArea, &dock1); window.addDockWidget(Qt::RightDockWidgetArea, &dock2); window.menuBar()->addMenu(&menu); window.setMinimumSize(QSize(800, 600)); window.tabifyDockWidget(&dock1, &dock2); dock1.hide(); dock2.hide(); menu.addAction(dock1.toggleViewAction()); menu.addAction(dock2.toggleViewAction()); window.show(); // Below is where the oddness starts. It seems to only exhibit the // behavior if the dock widgets are tabified. // Odd behavior here // This does not work. the window never shows, though its menu action shows // checked. Not only does this window not show up, but all toggle actions // for all dock windows (e.g. dock1 and dock2) are broken for the duration // of the application loop. // dock1.setFloating(true); // dock1.show(); // This does work. . . of course only if you do _not_ run the above first. // however, you can often get a little lag or "blip" in the rendering as // the dock is shown docked before setFloating is set to true. dock1.show(); dock1.setFloating(true); return app.exec(); }

    Read the article

  • Retrieve names of running processes

    - by Dave DeLong
    Hi everyone, First off, I know that similar questions have been asked, but the answers provided haven't been very helpful so far (they all recommend one of the following options). I have a user application that needs to determine if a particular process is running. Here's what I know about the process: The name The user (root) It should already be running, since it's a LaunchDaemon, which means Its parent process should be launchd (pid 1) I've tried several ways to get this, but none have worked so far. Here's what I've tried: Running ps and parsing the output. This works, but it's slow (fork/exec is expensive), and I'd like this to be as fast as possible. Using the GetBSDProcessList function listed here. This also works, but the way in which they say to retrieve the process name (accessing kp_proc.p_comm from each kinfo_proc structure) is flawed. The resulting char* only contains the first 16 characters of the process name, which can be seen in the definition of the kp_proc structure: #define MAXCOMLEN 16 //defined in param.h struct extern_proc { //defined in proc.h ...snip... char p_comm[MAXCOMLEN+1]; ...snip... }; Using libProc.h to retrieve process information: pid_t pids[1024]; int numberOfProcesses = proc_listpids(PROC_ALL_PIDS, 0, NULL, 0); proc_listpids(PROC_ALL_PIDS, 0, pids, sizeof(pids)); for (int i = 0; i < numberOfProcesses; ++i) { if (pids[i] == 0) { continue; } char name[1024]; proc_name(pids[i], name, sizeof(name)); printf("Found process: %s\n", name); } This works, except it has the same flaw as GetBSDProcessList. Only the first portion of the process name is returned. Using the ProcessManager function in Carbon: ProcessSerialNumber psn; psn.lowLongOfPSN = kNoProcess; psn.highLongOfPSN = 0; while (GetNextProcess(&psn) == noErr) { CFStringRef procName = NULL; if (CopyProcessName(&psn, &procName) == noErr) { NSLog(@"Found process: %@", (NSString *)procName); } CFRelease(procName); } This does not work. It only returns process that are registered with the WindowServer (or something like that). In other words, it only returns apps with UIs, and only for the current user. I can't use -[NSWorkspace launchedApplications], since this must be 10.5-compatible. In addition, this only returns information about applications that appear in the Dock for the current user. I know that it's possible to retrieve the name of running processes (since ps can do it), but the question is "Can I do it without forking and exec'ing ps?". Any suggestions?
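
    One more avenue that may help here: libproc also exposes proc_pidpath(), which returns the full executable path rather than the truncated 16-character p_comm, so taking the basename of that path sidesteps the truncation seen in options 2 and 3. A sketch along the same lines as the proc_listpids loop above (treating the proc_listpids return value as a byte count, which is how it behaves in practice):

        #include <libproc.h>
        #include <stdio.h>
        #include <string.h>
        #include <vector>

        // Sketch: list process names via proc_pidpath (no 16-char limit).
        void listProcessNames()
        {
            int bytes = proc_listpids(PROC_ALL_PIDS, 0, NULL, 0);
            if (bytes <= 0)
                return;
            std::vector<pid_t> pids(bytes / sizeof(pid_t));
            bytes = proc_listpids(PROC_ALL_PIDS, 0, &pids[0], bytes);
            const int count = bytes / sizeof(pid_t);
            for (int i = 0; i < count; ++i) {
                if (pids[i] == 0)
                    continue;
                char path[PROC_PIDPATHINFO_MAXSIZE];
                if (proc_pidpath(pids[i], path, sizeof(path)) > 0) {
                    const char *slash = strrchr(path, '/');
                    printf("Found process: %s\n", slash ? slash + 1 : path);
                }
            }
        }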

    Read the article

  • Problem with file names that contain spaces and single quotes?

    - by Vijay
    I'm using the following code to create thumbnails using ffmpeg, and it works fine for files which have no spaces or quotes. But when a file has a space (like 'sachin knock.flv') or a quote (like sachin's_double_cent.mp4) it doesn't work. What can I do to get those files to work correctly? One restriction is that I can't rename the files as they are lump some. My code is <?php error_reporting(E_ALL); extension_loaded('ffmpeg') or die('Error in loading ffmpeg'); $link = mysql_connect('localhost', 'root', ''); if (!$link) { die('Not connected : ' . mysql_error()); } $db_selected = mysql_select_db('db', $link); $max_width = 120; $max_height = 72; $path ="/home/rootuser/public_html/temp/"; $qry="select id, input_file, output_file from videos where thumbnail='' or thumbnail is null;"; $res=mysql_query($qry); while($row = mysql_fetch_array($res,MYSQL_ASSOC)) { $orig_str = array(" "); $rep_str = array("\ "); $outfile = $row[output_file]; // $infile = $row[input_file]; $infile1 = str_replace($orig_str, $rep_str, $outfile); $tmp = explode(".",$infile1); $tmp_name = $tmp[0]; $imgname = $tmp_name.".png"; $srcfile = "/home/rootuser/public_html/uploaded_vids/".$outfile; echo exec("ffmpeg -i ".$srcfile." -r 1 -ss 00:00:05 -f image2 -s 120x72 ".$path.$imgname); $nname = "./temp/".$imgname; $fileo = fopen($nname,"rb"); if($fileo) { $imgData = addslashes(file_get_contents($nname)); echo $imgdata; $qryy="update videos set thumbnail='{$imgData}' where input_file='$outfile'"; $ress=mysql_query($qryy); } else echo "Could not open<br><br>"; unlink('$nname'); } ?>

    Read the article

  • QTreeView memory consumption

    - by Eye of Hell
    Hello. I'm testing QTreeView functionality right now, and I was amazed by one thing. It seems that QTreeView memory consumption depends on the item count O_O. This is highly unusual, since model-view containers of this type normally only keep track of the items being displayed, and the rest of the items stay in the model. I have written the following code with a simple model that holds no data and just reports that it has 10 million items. With MFC, the Windows API or .NET, a tree/list with such a model will take no memory, since it will display only 10-20 visible elements and will request more from the model upon scrolling / expanding items. But with Qt, such a simple model results in ~300MB memory consumption. Increasing the number of items increases memory consumption. Maybe someone can hint at what I'm doing wrong? :) #include <QtGui/QApplication> #include <QTreeView> #include <QAbstractItemModel> class CModel : public QAbstractItemModel { public: QModelIndex index ( int i_nRow, int i_nCol, const QModelIndex& i_oParent = QModelIndex() ) const { return createIndex( i_nRow, i_nCol, 0 ); } public: QModelIndex parent ( const QModelIndex& i_oInex ) const { return QModelIndex(); } public: int rowCount ( const QModelIndex& i_oParent = QModelIndex() ) const { return i_oParent.isValid() ? 0 : 1000 * 1000 * 10; } public: int columnCount ( const QModelIndex& i_oParent = QModelIndex() ) const { return 1; } public: QVariant data ( const QModelIndex& i_oIndex, int i_nRole = Qt::DisplayRole ) const { return Qt::DisplayRole == i_nRole ? QVariant( "1" ) : QVariant(); } }; int main(int argc, char *argv[]) { QApplication a(argc, argv); QTreeView oWnd; CModel oModel; oWnd.setUniformRowHeights( true ); oWnd.setModel( & oModel ); oWnd.show(); return a.exec(); }
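
    If the per-row cost turns out to be the view's own bookkeeping for every row the model advertises, one workaround to experiment with is announcing rows lazily through canFetchMore()/fetchMore(), so only rows the user actually scrolls toward are ever reported. A sketch of such a model (same single-column, constant-data idea as above); the trade-off is that the scrollbar range grows as the user scrolls:

        #include <QAbstractItemModel>

        // Sketch: advertise the 10M rows in chunks instead of all at once.
        class CLazyModel : public QAbstractItemModel
        {
        public:
            CLazyModel() : m_nLoaded( 0 ) {}
            QModelIndex index ( int i_nRow, int i_nCol, const QModelIndex& i_oParent = QModelIndex() ) const
                { return i_oParent.isValid() ? QModelIndex() : createIndex( i_nRow, i_nCol ); }
            QModelIndex parent ( const QModelIndex& ) const { return QModelIndex(); }
            int columnCount ( const QModelIndex& = QModelIndex() ) const { return 1; }
            int rowCount ( const QModelIndex& i_oParent = QModelIndex() ) const
                { return i_oParent.isValid() ? 0 : m_nLoaded; }
            QVariant data ( const QModelIndex&, int i_nRole = Qt::DisplayRole ) const
                { return Qt::DisplayRole == i_nRole ? QVariant( "1" ) : QVariant(); }
            bool canFetchMore ( const QModelIndex& i_oParent ) const
                { return !i_oParent.isValid() && m_nLoaded < s_nTotal; }
            void fetchMore ( const QModelIndex& i_oParent )
            {
                if ( i_oParent.isValid() )
                    return;
                const int nChunk = qMin( 10000, s_nTotal - m_nLoaded );
                beginInsertRows( QModelIndex(), m_nLoaded, m_nLoaded + nChunk - 1 );
                m_nLoaded += nChunk;
                endInsertRows();
            }
        private:
            static const int s_nTotal = 1000 * 1000 * 10;
            int m_nLoaded;
        };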

    Read the article

  • Why the SQL 2008 debugger would NOT step into a certain child stored procedure

    - by John Galt
    I'm encountering differences in T-SQL with SQL2008 (vs. SQL2000) that are leading me to dead-ends. I've verified that the technique of sharing #TEMP tables between a caller which CREATES the #TEMP and the child sProc which references it remain valid in SQL2008 See recent SO question. My core problem remains a critical "child" stored procedure that works fine in SQL2000 but fails in SQL2008 (i.e. a FROM clause in the child sProc is coded as: SELECT * FROM #AREAS A) despite #AREAS being created by the calling parent. Rather than post snippets of the code now, here is another symptom that may help you suggest something. I fired up the new debugger in SQL Mgmt Studio: EXEC dbo.AMS1 @S1='06',@C1='037',@StartDate='01/01/2008',@EndDate='07/31/2008',@Type=1,@ACReq = 1,@Output = 0,@NumofLines = 30,@SourceTable = 'P',@LoanPurposeCatg='P' This is a very large sProc and the key snippet that is weird is the following: **create table #Areas ( State char(2) , County char(3) , ZipCode char(5) NULL , CityName varchar(28) NULL , PData varchar(3) NULL , RData varchar(3) NULL , SMSA_CD varchar(10) NULL , TypeCounty varchar(50) , StateAbbr char(2) ) EXECUTE dbo.AMS_I_GetAreasV5 -- this child populates #Areas @SMSA = @SMSA , @S1 = @S1 , @C1 = @C1 , @Z1 = @Z1 , @SourceTable = @SourceTable , @CustomID = @CustomID , @UserName = @UserName , @CityName = @CityName , @Debug=0 EXECUTE dbo.AMS_I_GetAreas_FixAC -- this child cannot reference #Areas @StartDate = @StartDate , @EndDate = @EndDate , @SMSA_CD = @SMSA_CD , @S1 = @S1 , @C1 = @C1 , @Z1 = @Z1 , @CityName = @CityName , @CustomID = @CustomID , @Debug=0 -- continuation of the parent sProc** I can step through the execution of the parent stored procedure. When I get to the first child sproc above, I can either STEP INTO dbo.AMS_I_GetAreasV5 or STEP OVER its execution. When I arrive at the invocation of the 2nd child sProc - dbo.AMS_I_GetAreas_FixAC - I try to STEP INTO it (because that is where the problem statement is) and STEP INTO is ignored (i.e. treated like STEP OVER instead; yet I KNOW I pressed F11 not F10). It WAS executed however, because when control is returned to the statement after the EXECUTE, I click Continue to finish execution and the results windows shows the errors in the dbo.AMS_I_GetAreas_FixAC (i.e. the 2nd child) stored procedure. Is there a way to "pre-load" an sProc with the goal of setting a breakpoint on its entry so that I can pursue execution inside it? In summary, I wonder if the inability to step into a given child sproc might be related to the same inability of this particular child to reference a #temp created by its parent (caller).

    Read the article

  • git filter-branch chmod

    - by Evan Purkhiser
    I accidentally had my umask set incorrectly for the past few months and somehow didn't notice. One of my git repositories has many files marked as executable that should be just 644. This repo has one main master branch, and about 4 private feature branches (that I keep rebased on top of the master). I've corrected the files in my master branch by running find -type f -exec chmod 644 {} \; and committing the changes. I then rebased my feature branches onto master. The problem is there are newly created files in the feature branches that are only in that branch, so they weren't corrected by my massive chmod commit. I didn't want to create a new commit for each feature branch that does the same thing as the commit I made on master. So I decided it would be best to go back through to each commit where a file was made and set the permissions. This is what I tried: git filter-branch -f --tree-filter 'chmod 644 `git show --diff-filter=ACR --pretty="format:" --name-only $GIT_COMMIT`; git add .' master.. It looked like this worked, but upon further inspection I noticed that every commit after a commit containing a new file with the proper permissions of 644 would actually revert the change with something like: diff --git a b old mode 100644 new mode 100755 I can't for the life of me figure out why this is happening. I think I must be misunderstanding how git filter-branch works. My Solution I've managed to fix my problem using this command: git filter-branch -f --tree-filter 'FILES="$FILES "`git show --diff-filter=ACMR --pretty="format:" --name-only $GIT_COMMIT`; chmod 644 $FILES; true' development.. I keep adding onto the FILES variable to ensure that in each commit any file created at some point has the proper mode. However, I'm still not sure I really understand why git tracks the file mode for each commit. I had thought that since I had fixed the mode of the file when it was first created that it would stay that mode unless one of my other commits explicitly changed it to something else. That did not appear to be the case. The reason I thought that this would work is from my understanding of rebase. If I go back to HEAD~5 and change a line of code, that change is propagated through, it doesn't just get changed back in HEAD~4.

    Read the article

  • SelectedItem in ListView binding

    - by Matt
    I'm new in wfp. In my sample application I'm using a ListView to display contents of property. I don't know how to bind SelectedItem in ListView to property and then bind to TextBlock. Window.xaml <Window x:Class="Exec.App" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Main window" Height="446" Width="475" > <Grid> <ListView Name="ListViewPersonDetails" Margin="15,12,29,196" ItemsSource="{Binding Persons}" SelectedItem="{Binding CurrentSelectedPerson}"> <ListView.View> <GridView> <GridViewColumn Header="FirstName" DisplayMemberBinding="{Binding FstNamePerson}"/> <GridViewColumn Header="LastName" DisplayMemberBinding="{Binding SndNamePerson}"/> <GridViewColumn Header="Address" DisplayMemberBinding="{Binding AdressPerson}"/> </GridView> </ListView.View> </ListView> <TextBlock Height="23" Name="textFirstNameBlock" FontSize="12" Margin="97,240,155,144"> <Run Text="Name: " /> <Run Text="{Binding CurrentSelectedPerson.FstNamePerson}" FontWeight="Bold" /> </TextBlock> <TextBlock Height="23" Name="textLastNameBlock" FontSize="12" Margin="97,263,155,121"> <Run Text="Branch: " /> <Run Text="{Binding CurrentSelectedPerson.SndNamePerson}" FontWeight="Bold" /> </TextBlock> <TextBlock Height="23" Name="textAddressBlock" FontSize="12" Margin="0,281,155,103" HorizontalAlignment="Right" Width="138"> <Run Text="City: " /> <Run Text="{Binding CurrentSelectedPerson.AdressPerson}" FontWeight="Bold" /> </TextBlock> </Grid> </Window> MainWindow.xaml.cs Tman manager = new Tman(); private List<Person> persons; public List<Person> Persons { get { return this.persons; } set { if (value != null) { this.persons = value; this.NotifyPropertyChanged("Data"); } } } private Person currentSelectedPerson; public Person CurrentSelectedPerson { get { return currentSelectedPerson; } set { this.currentSelectedPerson = value; this.NotifyPropertyChanged("CurrentSelectedItem"); } } public event PropertyChangedEventHandler PropertyChanged; private void NotifyPropertyChanged(string propertyName) { var handler = this.PropertyChanged; if (handler != null) { handler(this, new PropertyChangedEventArgs(propertyName)); } } private void Window_Loaded(object sender, RoutedEventArgs e){ ListViewPersonDetails.ItemsSource= manager.GetPersons(); } Person.cs class Person { public string FirstName { get; set; } public string LastName { get; set; } public string Address { get; set; } } Thanks for any help.

    Read the article

  • How can I prevent external MSBuild files from being cached (by Visual Studio) during a project build

    - by Damian Powell
    I have a project in my solution which started life as a C# library project. It's got nothing of any interest in it in terms of code, it is merely used as a dependency in the other projects in my solution in order to ensure that it is built first. One of the side-effects of building this project is that a shared AssemblyInfo.cs is created which contains the version number in use by the other projects. I have done this by adding the following to the .csproj file: <ItemGroup> <None Include="Properties\AssemblyInfo.Shared.cs.in" /> <Compile Include="Properties\AssemblyInfo.Shared.cs" /> <None Include="VersionInfo.targets" /> </ItemGroup> <Import Project="$(ProjectDir)VersionInfo.targets" /> <Target Name="BeforeBuild" DependsOnTargets="UpdateSharedAssemblyInfo" /> The referenced file, VersionInfo.targets, contains the following: <Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <PropertyGroup> <!-- Some properties defining tool locations and the name of the AssemblyInfo.Shared.cs.in file etc. --> </PropertyGroup> <Target Name="UpdateSharedAssemblyInfo"> <!-- Uses the Exec task to run one of the tools to generate AssemblyInfo.Shared.cs based on the location of AssemblyInfo.Shared.cs.in and some of the other properties. --> </Target> </Project> The contents of the VersionInfo.targets file could simply be embedded within the .csproj file but it is external because I am trying to turn all of this into a project template. I want the users of the template to be able to add the new project to the solution, edit the VersionInfo.targets file, and run the build. The problem is that modifying and saving the VersionInfo.targets file and rebuilding the solution has no effect - the project file uses the values from the .targets file as they were when the project was opened. Even unloading and reloading the project has no effect. In order to get the new values, I need to close Visual Studio and reopen it (or reload the solution). How can I set this up so that the configuration is external to the .csproj file and not cached between builds?

    Read the article

  • How to salvage a SQL Server 2008 query from the KILLED/ROLLBACK state without waiting half a day?

    - by littlegreen
    I have a stored procedure that inserts batches of millions of rows, emerging from a certain query, into an SQL database. It has one parameter selecting the batch; when this parameter is omitted, it will gather a list of batches and recursively call itself, in order to iterate over batches. In (pseudo-)code, it looks something like this: CREATE PROCEDURE spProcedure AS BEGIN IF @code = 0 BEGIN ... WHILE @@Fetch_Status=0 BEGIN EXEC spProcedure @code FETCH NEXT ... INTO @code END END ELSE BEGIN -- Disable indexes ... INSERT INTO table SELECT (...) -- Enable indexes ... Now it can happen that this procedure is slow, for whatever reason: it can't get a lock, one of the indexes it uses is misdefined or disabled. In that case, I want to be able kill the procedure, truncate and recreate the resulting table, and try again. However, when I try and kill the procedure, the process frequently oozes into a KILLED/ROLLBACK state from which there seems to be no return. From Google I have learned to do an sp_lock, find the spid, and then kill it with KILL <spid>. But when I try to kill it, it tells me SPID 75: transaction rollback in progress. Estimated rollback completion: 0%. Estimated time remaining: 554 seconds. I did find a forum message hinting that another spid should be killed before the other one can start a rollback. But that didn't work for me either, plus I do not understand, why that would be the case... could it be because I am recursively calling my own stored procedure? (But it should be having the same spid, right?) In any case, my process is just sitting there, being dead, not responding to kills, and locking the table. This is very frustrating, as I want to go on developing my queries, not waiting hours on my server sitting dead while pretending to be finishing a supposed rollback. Is there some way in which I can tell the server not to store any rollback information for my query? Or not to allow any other queries to interfere with the rollback, so that it will not take so long? Or how to rewrite my query in a better way, or how kill the process successfully without restarting the server?

    Read the article

  • Having trouble doing an Update with a Linq to Sql object

    - by Pure.Krome
    Hi folks, i've got a simple linq to sql object. I grab it from the database and change a field then save. No rows have been updated. :( When I check the full Sql code that is sent over the wire, I notice that it does an update to the row, not via the primary key but on all the fields via the where clause. Is this normal? I would have thought that it would be easy to update the field(s) with the where clause linking on the Primary Key, instead of where'ing (is that a word :P) on each field. here's the code... using (MyDatabase db = new MyDatabase()) { var boardPost = (from bp in db.BoardPosts where bp.BoardPostId == boardPostId select bp).SingleOrDefault(); if (boardPost != null && boardPost.BoardPostId > 0) { boardPost.ListId = listId; // This changes the value from 0 to 'x' db.SubmitChanges(); } } and here's some sample sql.. exec sp_executesql N'UPDATE [dbo].[BoardPost] SET [ListId] = @p6 WHERE ([BoardPostId] = @p0) AND .... <snip the other fields>',N'@p0 int,@p1 int,@p2 nvarchar(9),@p3 nvarchar(10),@p4 int,@p5 datetime,@p6 int',@p0=1276,@p1=212787,@p2=N'ttreterte',@p3=N'ttreterte3',@p4=1,@p5='2009-09-25 12:32:12.7200000',@p6=72 Now, i know there's a datetime field in this update .. and when i checked the DB it's value was/is '2009-09-25 12:32:12.720' (less zero's, than above) .. so i'm not sure if that is messing up the where clause condition... but still! should it do a where clause on the PK's .. if anything .. for speed! Yes / no ? UPDATE After reading nitzmahone's reply, I then tried playing around with the optimistic concurrency on some values, and it still didn't work :( So then I started some new stuff ... with the optimistic concurrency happening, it includes a where clause on the field it's trying to update. When that happens, it doesn't work. so.. in the above sql, the where clause looks like this ... WHERE ([BoardPostId] = @p0) AND ([ListId] IS NULL) AND ... <rest snipped>) This doesn't sound right! the value in the DB is null, before i do the update. but when i add the ListId value to the where clause (or more to the point, when L2S add's it because of the optomistic concurrecy), it fails to find/match the row. wtf?

    Read the article

  • Configuring an HTML page from an original demo page

    - by Wold
    I forked into rainyday.js through github, an awesome javascript program made by maroslaw at this link: https://github.com/maroslaw/rainyday.js. Basically I tried taking his demo page and my own photo city.jpg and changed the applicable fields so that I could run it on my own site, but only the picture loads and the script itself doesn't start to run. I'm pretty new to html and javascript so I'm probably omitting something very simple, but here is the script for the demo code: <script src="rainyday.js"></script> <script> function getURLParameter(name) { return decodeURIComponent((new RegExp('[?|&]' + name + '=' + '([^&;]+?)(&|#|;|$)').exec(location.search)||[,''])[1].replace(/\+/g, '%20'))||null; } function demo() { var image = document.getElementById('background'); image.onload = function () { var engine = null; var preset = getURLParameter('preset') || '1'; if (preset === '1') { engine = new RainyDay({ element: 'background', blur: 10, opacity: 1, fps: 30, speed: 30 }); engine.rain([ [1, 2, 8000] ]); engine.rain([ [3, 3, 0.88], [5, 5, 0.9], [6, 2, 1] ], 100); } else if (preset === '2') { engine = new RainyDay({ element: 'background', blur: 10, opacity: 1, fps: 30, speed: 30 }); engine.VARIABLE_GRAVITY_ANGLE = Math.PI / 8; engine.rain([ [0, 2, 0.5], [4, 4, 1] ], 50); } else if (preset === '3') { engine = new RainyDay({ element: 'background', blur: 10, opacity: 1, fps: 30, speed: 30 }); engine.trail = engine.TRAIL_SMUDGE; engine.rain([ [0, 2, 0.5], [4, 4, 1] ], 100); } }; image.crossOrigin = 'anonymous'; if (getURLParameter('imgur')) { image.src = 'http://i.imgur.com/' + getURLParameter('imgur') + '.jpg'; } else if (getURLParameter('img')) { image.src = getURLParameter('img') + '.jpg'; } var youtube = getURLParameter('youtube'); if (youtube) { var div = document.getElementById('sound'); var player = document.createElement('iframe'); player.frameborder = '0'; player.height = '1'; player.width = '1'; player.src = 'https://youtube.com/embed/' + youtube + '?autoplay=1&controls=0&showinfo=0&autohide=1&loop=1'; div.appendChild(player); } } </script> This is where I am naming my background and specifying the photo from within the directory. <body onload="demo();"> <div id="sound" style="z-index: -1;"></div> <div id="parent"> <img id='background' alt="background" src="city.jpg" /> </div> </body> The actual code for the whole entire rainyday.js script can be found here: https://github.com/maroslaw/rainyday.js/blob/master/rainyday.js Thanks in advance for any help and advice!

    Read the article

  • SQL Native Client 10 performance is miserable (due to server-side cursors)

    - by namezero
    we have an application that uses ODBC via CDatabase/CRecordset in MFC (VS2010). We have two backends implemented. MSSQL and MySQL. Now, when we use MSSQL (with the Native Client 10.0), retrieving records with SELECT is dramatically slow via slow links (VPN, for example). The MySQL ODBC driver does not exhibit this nasty behavior. For example: CRecordset r(&m_db); r.Open(CRecordset::snapshot, L"SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID=b.Ref"); r.MoveFirst(); while(!r.IsEOF()) { // Retrieve CString strData; crs.GetFieldValue(L"a.something", strData); crs.MoveNext(); } Now, with the MySQL driver, everything runs as it should. The query is returned, and everything is lightning fast. However, with the MSSQL Native Client, things slow down, because on every MoveNext(), the driver communicates with the server. I think it is due to server-side cursors, but I didn't find a way to disable them. I have tried using: ::SQLSetConnectAttr(m_db.m_hdbc, SQL_ATTR_ODBC_CURSORS, SQL_CUR_USE_ODBC, SQL_IS_INTEGER); But this didn't help either. There are still long-running exec's to sp_cursorfetch() et al in SQL Profiler. I have also tried a small reference project with SQLAPI and bulk fetch, but that hangs in FetchNext() for a long time, too (even if there is only one record in the resultset). This however only happens on queries with LEFT JOINS, table-valued functions, etc. Note that the query doesn't take that long - executing the same SQL via SQL Studio over the same connection returns in a reasonable time. Question1: Is is possible to somehow get the native client to "cache" all results locally use local cursors in a similar fashion as the MySQL driver seems to do it? Maybe this is the wrong approach altogether, but I'm not sure how else to do this. All we want is to retrieve all data at once from a SELECT, then never talk the server again until the next query. We don't care about recordset updates, deletes, etc or any of that nonsense. We only want to retrieve data. We take that recordset, get all the data, and delete it. Question2: Is there a more efficient way to just retrieve data in MFC with ODBC?
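
    One thing worth trying before abandoning CRecordset: open the recordset as forwardOnly/readOnly. Snapshot and dynaset recordsets push SQL Server into API server cursors (exactly the sp_cursorfetch traffic visible in Profiler), while a forward-only, read-only recordset can usually be served as a default "firehose" result set that streams rows without a round trip per MoveNext(). A sketch under that assumption, using the table and column names from the question:

        #include <afxdb.h>

        // Sketch: one-pass read with a forward-only, read-only recordset.
        void StreamRows(CDatabase& db)
        {
            CRecordset rs(&db);
            rs.Open(CRecordset::forwardOnly,
                    _T("SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID=b.Ref"),
                    CRecordset::readOnly);
            while (!rs.IsEOF())
            {
                CString strData;
                rs.GetFieldValue(_T("something"), strData);  // result column name, without the table alias
                rs.MoveNext();
            }
            rs.Close();
        }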

    Read the article

  • How can a C/C++ program put itself into the background?

    - by Larry Gritz
    What's the best way for a running C or C++ program that's been launched from the command line to put itself into the background, equivalent to if the user had launched from the unix shell with '&' at the end of the command? (But the user didn't.) It's a GUI app and doesn't need any shell I/O, so there's no reason to tie up the shell after launch. But I want a shell command launch to be auto-backgrounded without the '&' (or on Windows). Ideally, I want a solution that would work on any of Linux, OS X, and Windows. (Or separate solutions that I can select with #ifdef.) It's ok to assume that this should be done right at the beginning of execution, as opposed to somewhere in the middle. One solution is to have the main program be a script that launches the real binary, carefully putting it into the background. But it seems unsatisfying to need these coupled shell/binary pairs. Another solution is to immediately launch another executed version (with 'system' or CreateProcess), with the same command line arguments, but putting the child in the background and then having the parent exit. But this seems clunky compared to the process putting itself into background. Edited after a few answers: Yes, a fork() (or system(), or CreateProcess on Windows) is one way to sort of do this, that I hinted at in my original question. But all of these solutions make a SECOND process that is backgrounded, and then terminate the original process. I was wondering if there was a way to put the EXISTING process into the background. One difference is that if the app was launched from a script that recorded its process id (perhaps for later killing or other purpose), the newly forked or created process will have a different id and so will not be controllable by any launching script, if you see what I'm getting at. Edit #2: fork() isn't a good solution for OS X, where the man page for 'fork' says that it's unsafe if certain frameworks or libraries are being used. I tried it, and my app complains loudly at runtime: "The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec()." I was intrigued by daemon(), but when I tried it on OS X, it gave the same error message, so I assume that it's just a fancy wrapper for fork() and has the same restrictions. Excuse the OS X centrism, it just happens to be the system in front of me at the moment. But I am indeed looking for a solution to all three platforms.
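
    For the POSIX half of the question, "backgrounding" really means detaching from the shell's session and controlling terminal, and there is no portable way to do that for the already-running process without ending up with a new pid - the limitation already noted above. A sketch of the classic double-fork, to be called at the very top of main(); on OS X it has to run before any Cocoa/CoreFoundation code is touched (or be followed by an exec), and Windows needs a different mechanism entirely (for example relaunching via CreateProcess with DETACHED_PROCESS):

        #include <unistd.h>
        #include <fcntl.h>

        // Sketch: POSIX-only detach from the shell's job control.
        static void backgroundSelf()
        {
            if (fork() > 0)
                _exit(0);              // original process exits, shell prompt returns
            setsid();                  // new session: no controlling terminal
            if (fork() > 0)
                _exit(0);              // survivor can never reacquire a terminal
            int devnull = open("/dev/null", O_RDWR);
            if (devnull >= 0) {
                dup2(devnull, STDIN_FILENO);
                dup2(devnull, STDOUT_FILENO);
                dup2(devnull, STDERR_FILENO);
                if (devnull > STDERR_FILENO)
                    close(devnull);
            }
        }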

    Read the article

  • Using ImageMagick to create an image from a PDF...efficiently

    - by bigsweater
    I'm using ImageMagick to create a tiny JPG thumbnail image of an already-uploaded PDF. The code works fine. It's a WordPress widget, though this isn't necessarily WordPress specific. I'm unfamiliar with ImageMagick, so I was hoping somebody could tell me if this looks terrible or isn't following some best practices of some sort, or if I'm risking crashing the server. My questions, specifically, are: Is that image cached, or does the server have to re-generate the image every time somebody views the page? If it isn't cached, what's the best way to make sure the server doesn't have to regenerate the thumbnail? I tried to create a separate folder (/thumbs) for ImageMagick to put all the images in, instead of cluttering up the WP upload folders with images of PDFs. It kept throwing a permission error, despite 777 permissions on the folder in my testing environment. Why? Do the source/destination directories have to be the same? Am I doing anything incorrectly/inefficiently here that needs to be improved? The whole widget is on Pastebin: http://pastebin.com/WnSTEDm7 Relevant code: <?php if ( $url ) { $pdf = $url; $info = pathinfo($pdf); $filename = basename($pdf,'.'.$info['extension']); $uploads = wp_upload_dir(); $file_path = str_replace( $uploads['baseurl'], $uploads['basedir'], $url ); $dest_path = str_replace( '.pdf', '.jpg', $file_path ); $dest_url = str_replace( '.pdf', '.jpg', $pdf ); exec("convert \"{$file_path}[0]\" -colorspace RGB -geometry 60 $dest_path"); ?> <div class="entry"> <div class="widgetImg"> <p><a href="<?php echo $url; ?>" title="<?php echo $filename; ?>"><?php echo "<img src='".$dest_url."' alt='".$filename."' class='blueBorder' />"; ?></a></p> </div> <div class="widgetText"> <?php echo wpautop( $desc ); ?> <p><a class="downloadLink" href="<?php echo $url; ?>" title="<?php echo $filename; ?>">Download</a></p> </div> </div> <?php } ?> As you can see, the widget grabs whatever PDF is attached to the current page being viewed, creates an image of the first page of the PDF, stores it, then links to it in HTML. Thanks for any and all help!

    Read the article

  • I would like to run my Java program on System Startup on Mac OS/Windows. How can I do this?

    - by Misha Koshelev
    Here is what I came up with. It works but I was wondering if there is something more elegant. Thank you! Misha package com.mksoft.fbbday; import java.io.BufferedReader; import java.io.InputStreamReader; import java.io.IOException; import java.net.URI; import java.net.URISyntaxException; import java.io.File; import java.io.FileWriter; import java.io.PrintWriter; public class RunOnStartup { public static void install(String className,boolean doInstall) throws IOException,URISyntaxException { String osName=System.getProperty("os.name"); String fileSeparator=System.getProperty("file.separator"); String javaHome=System.getProperty("java.home"); String userHome=System.getProperty("user.home"); File jarFile=new File(RunOnStartup.class.getProtectionDomain().getCodeSource().getLocation().toURI()); if (osName.startsWith("Windows")) { Process process=Runtime.getRuntime().exec("reg query \"HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Shell Folders\" /v Startup"); BufferedReader in=new BufferedReader(new InputStreamReader(process.getInputStream())); String result="",line; while ((line=in.readLine())!=null) { result+=line; } in.close(); result=result.replaceAll(".*REG_SZ[ ]*",""); File startupFile=new File(result+fileSeparator+jarFile.getName().replaceFirst(".jar",".bat")); if (doInstall) { PrintWriter out=new PrintWriter(new FileWriter(startupFile)); out.println("@echo off"); out.println("start /min \"\" \""+javaHome+fileSeparator+"bin"+fileSeparator+"java.exe -cp "+jarFile+" "+className+"\""); out.close(); } else { if (startupFile.exists()) { startupFile.delete(); } } } else if (osName.startsWith("Mac OS")) { File startupFile=new File(userHome+"/Library/LaunchAgents/com.mksoft."+jarFile.getName().replaceFirst(".jar",".plist")); if (doInstall) { PrintWriter out=new PrintWriter(new FileWriter(startupFile)); out.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>"); out.println("<!DOCTYPE plist PUBLIC \"-//Apple//DTD PLIST 1.0//EN\" \"http://www.apple.com/DTDs/PropertyList-1.0.dtd\">"); out.println("<plist version=\"1.0\">"); out.println("<dict>"); out.println(" <key>Label</key>"); out.println(" <string>com.mksoft."+jarFile.getName().replaceFirst(".jar","")+"</string>"); out.println(" <key>ProgramArguments</key>"); out.println(" <array>"); out.println(" <string>"+javaHome+fileSeparator+"bin"+fileSeparator+"java</string>"); out.println(" <string>-cp</string>"); out.println(" <string>"+jarFile+"</string>"); out.println(" <string>"+className+"</string>"); out.println(" </array>"); out.println(" <key>RunAtLoad</key>"); out.println(" <true/>"); out.println("</dict>"); out.println("</plist>"); out.close(); } else { if (startupFile.exists()) { startupFile.delete(); } } } } }

    Read the article

  • QTreeView incorrectly displays the SpinBox if the item is checkable when using QWindowsStyle

    - by Sharraz
    Hello, I'm having a problem with a QTreeView in my program: The SpinBox used to edit the double value of a checkable item is displayed incorrectly when using the Windows style. Only the up and down buttons of the SpinBox can be seen, but not any value. The following example code is able to reproduce the problem: #include <QtGui> class Model : public QAbstractItemModel { public: Model() : checked(false), number(0) {} Qt::ItemFlags flags(const QModelIndex & index) const { return Qt::ItemIsEnabled | Qt::ItemIsEditable | Qt::ItemIsSelectable | Qt::ItemIsUserCheckable; } QVariant data(const QModelIndex &index, int role) const { switch (role) { case Qt::DisplayRole: case Qt::EditRole: return QVariant(number); case Qt::CheckStateRole: return QVariant(checked ? Qt::Checked : Qt::Unchecked); } return QVariant(); } QVariant headerData(int section, Qt::Orientation orientation, int role) const { return QVariant(); } int rowCount(const QModelIndex &parent) const { return parent.isValid() ? 0 : 1; } int columnCount(const QModelIndex &parent) const { return parent.isValid() ? 0 : 1; } bool setData(const QModelIndex &index, const QVariant &value, int role) { switch (role) { case Qt::EditRole: number = value.toDouble(); emit dataChanged(index, index); return true; case Qt::CheckStateRole: checked = value.toInt(); emit dataChanged(index, index); return true; } return false; } QModelIndex index(int row, int column, const QModelIndex &parent) const { if (!row && !column && !parent.isValid()) return createIndex(0, 0); return QModelIndex(); } QModelIndex parent(const QModelIndex &child) const { return QModelIndex(); } private: bool checked; double number; }; int main(int argc, char *argv[]) { QApplication app(argc, argv); QApplication::setStyle(new QWindowsStyle()); QTreeView tree; tree.setModel(new Model()); tree.show(); return app.exec(); } The problems seems to have something to do with the checkbox. If Qt::ItemIsUserCheckable is removed, the SpinBox will be displayed correctly. If the number is replaced by a longer one like 0.01, it can be seen partially. Any idea how this problem can be solved? Do I use the checkbox correctly? Greets, Sharraz

    Read the article

  • can't read from stream until child exits?

    - by BobTurbo
    OK I have a program that creates two pipes - forks - the child's stdin and stdout are redirected to one end of each pipe - the parent is connected to the other ends of the pipes and tries to read the stream associated with the child's output and print it to the screen (and I will also make it write to the input of the child eventually). The problem is, when the parent tries to fgets the child's output stream, it just stalls and waits until the child dies to fgets and then print the output. If the child doesn't exit, it just waits forever. What is going on? I thought that maybe fgets would block until SOMETHING was in the stream, but not block all the way until the child gives up its file descriptors. Here is the code: #include <stdlib.h> #include <stdio.h> #include <string.h> #include <sys/types.h> #include <unistd.h> int main(int argc, char *argv[]) { FILE* fpin; FILE* fpout; int input_fd[2]; int output_fd[2]; pid_t pid; int status; char input[100]; char output[100]; char *args[] = {"/somepath/someprogram", NULL}; fgets(input, 100, stdin); // the user inputs the program name to exec pipe(input_fd); pipe(output_fd); pid = fork(); if (pid == 0) { close(input_fd[1]); close(output_fd[0]); dup2(input_fd[0], 0); dup2(output_fd[1], 1); input[strlen(input)-1] = '\0'; execvp(input, args); } else { close(input_fd[0]); close(output_fd[1]); fpin = fdopen(input_fd[1], "w"); fpout = fdopen(output_fd[0], "r"); while(!feof(fpout)) { fgets(output, 100, fpout); printf("output: %s\n", output); } } return 0; }
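
    A frequent cause of exactly this symptom is not the parent's code but stdio buffering in the child: once its stdout is a pipe rather than a terminal, the exec'd program switches to block buffering and flushes nothing until the buffer fills or it exits. If the child cannot be modified to flush, giving it a pseudo-terminal usually restores line buffering. A sketch using forkpty (Linux: <pty.h>, link with -lutil; BSD/OS X: <util.h>); note that with a pty the child's stdout and stderr arrive merged on the one descriptor:

        #include <pty.h>      /* forkpty; <util.h> on BSD/OS X */
        #include <stdio.h>
        #include <unistd.h>

        /* Sketch: run the child on a pseudo-terminal so it line-buffers. */
        int main(void)
        {
            int master;
            pid_t pid = forkpty(&master, NULL, NULL, NULL);
            if (pid < 0)
                return 1;
            if (pid == 0) {                       /* child: stdin/stdout/stderr are the pty */
                execlp("/somepath/someprogram", "someprogram", (char *)NULL);
                _exit(127);
            }
            FILE *fpout = fdopen(master, "r");    /* parent: read as output is produced */
            char output[100];
            while (fgets(output, sizeof(output), fpout) != NULL)
                printf("output: %s", output);
            fclose(fpout);
            return 0;
        }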

    Read the article

  • Pig_Cassandra integration causes ERROR 1070: Could not resolve CassandraStorage using imports

    - by Le Dude
    I'm following basic Pig, Cassandra, Hadoop installation. Everything works just fine as a stand alone. No error. However when I tried to run the example file provided by Pig_cassandra example, I got this error. [root@localhost pig]# /opt/cassandra/apache-cassandra-1.1.6/examples/pig/bin/pig_cassandra -x local -x local /opt/cassandra/apache-cassandra-1.1.6/examples/pig/example-script.pig Using /opt/pig/pig-0.10.0/pig-0.10.0-withouthadoop.jar. 2012-10-24 21:14:58,551 [main] INFO org.apache.pig.Main - Apache Pig version 0.10.0 (r1328203) compiled Apr 19 2012, 22:54:12 2012-10-24 21:14:58,552 [main] INFO org.apache.pig.Main - Logging error messages to: /opt/pig/pig_1351138498539.log 2012-10-24 21:14:59,004 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:/// 2012-10-24 21:14:59,472 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1070: Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.] Details at logfile: /opt/pig/pig_1351138498539.log Here is the log file Pig Stack Trace --------------- ERROR 1070: Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.] org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1000: Error during parsing. Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.] at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1597) at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1540) at org.apache.pig.PigServer.registerQuery(PigServer.java:540) at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:970) at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:386) at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:189) at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:165) at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84) at org.apache.pig.Main.run(Main.java:555) at org.apache.pig.Main.main(Main.java:111) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.hadoop.util.RunJar.main(RunJar.java:156) Caused by: Failed to parse: Cannot instantiate: CassandraStorage at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:184) at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1589) ... 
14 more Caused by: java.lang.RuntimeException: Cannot instantiate: CassandraStorage at org.apache.pig.impl.PigContext.instantiateFuncFromSpec(PigContext.java:510) at org.apache.pig.parser.LogicalPlanBuilder.validateFuncSpec(LogicalPlanBuilder.java:791) at org.apache.pig.parser.LogicalPlanBuilder.buildFuncSpec(LogicalPlanBuilder.java:780) at org.apache.pig.parser.LogicalPlanGenerator.func_clause(LogicalPlanGenerator.java:4583) at org.apache.pig.parser.LogicalPlanGenerator.load_clause(LogicalPlanGenerator.java:3115) at org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1291) at org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:789) at org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:507) at org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:382) at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:175) ... 15 more Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 1070: Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.] at org.apache.pig.impl.PigContext.resolveClassName(PigContext.java:495) at org.apache.pig.impl.PigContext.instantiateFuncFromSpec(PigContext.java:507) ... 24 more ================================================================================ I googled around and got to this point from other stackoverflow user that identified the potential problem but not the solution. Cassandra and pig integration cause error during startup I believe my configuration is correct and the path has already been defined properly. I didn't change anything in the pig_cassandra file. I'm not quite sure how to proceed from here. Please help?

    Read the article

  • Huawei B153 device limit

    - by bdecaf
    I set up my home network all through this 3G wifi router. Problem is it only allows 5 devices to connect. That's not much especially if a wifi printer and gaming consoles keep hogging these slots. On the other hand I don't see the point on blocking these devices. They are (should) not doing anything online just intern in my network. The documentation I can find is surpirisingly unhelpful and focuses how to plug the device in a power socket. So what would be my options. Notes: I have already been able to get a shell on the device using ssh. It's running some Busybox. But I fail to find the how and where this limit is enforced/created. Notes 2: Specifically my device is a 3WebCube - unfortunately not specifically marked with the Huawei Model number. Successes so far After enabling ssh in the options I can login: ssh -T [email protected] [email protected]'s password: ------------------------------- -----Welcome to ATP Cli------ ------------------------------- unfortunately because of this -T - the tab key does not work for autocomplete and all inputted commands will be echoed. Also no history with arrow keys. ATP interface this custom interface is not very useful: ATP>help help Welcome to ATP command line tool. If any question, please input "?" at the end of command. ATP>? ? cls debug help save ? exit ATP>save? save? Command failed. ATP>save ? save ? ATP>debug ? debug ? display set trace ? Shell BUT undocumented - I somehow found on a auto translated chinese website - all you need to do is input sh ATP>sh sh BusyBox vv1.9.1 (2011-03-27 11:59:11 CST) built-in shell (ash) Enter 'help' for a list of built-in commands. # builtin commands # help Built-in commands: ------------------- . : alias bg break cd chdir command continue eval exec exit export false fg getopts hash help jobs kill let local pwd read readonly return set shift source times trap true type ulimit umask unalias unset wait shows standard unix structure: # ls / var tmp proc linuxrc init etc bin usr sbin mnt lib html dev in /bin # ls /bin zebra strace ppps ln echo cat wscd startbsp pppc klog ebtables busybox wlancmd sshd ping kill dns brctl web sntp netstat iwpriv dhcps auth usbdiagd sms mount iwcontrol dhcpc atserver upnp sleep mknod iptables date atcmd upg siproxd mkdir ipcheck cp at umount sh mini_upnpd ip console ash test_at rm mic igmpproxy cms telnetd ripd ls ethcmd cmgr swapdev ps log equipcmd cli in /sbin # ls /sbin vconfig reboot insmod ifconfig arp route poweroff init halt using tftp after installing tftp on my desktop I was able to send files with tftp -s -l curcfg.xml 192.168.1.103 and to download onto the huawei with tftp -g -r curcfg.xml 192.168.1.103 I think I'll need that - because I don't see any editor installed. readout stuff (still playing around where I would get interesting info) For confirmation of hardware: # cat /var/log/modem_hardware_name ^HWVER:"WL1B153M001"# # cat /var/log/modem_software_name 1096.11.03.02.107 # cat /var/log/product_name B153

    Read the article

  • CloudFormation with Ubuntu throwing errors

    - by Sammaye
    I have been doing some reading and have come to the understanding that if you wish to use a launchConfig with Ubuntu you will need to install the cfn-init file yourself which I have done: "Properties" : { "KeyName" : { "Ref" : "KeyName" }, "SpotPrice" : "0.05", "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" }, { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] }, "SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" } ], "InstanceType" : { "Ref" : "InstanceType" }, "UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [ "#!/bin/bash\n", "apt-get -y install python-setuptools\n", "easy_install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-1.0-6.tar.gz\n", "cfn-init ", " --stack ", { "Ref" : "AWS::StackName" }, " --resource LaunchConfig ", " --configset ALL", " --access-key ", { "Ref" : "WorkerKeys" }, " --secret-key ", {"Fn::GetAtt": ["WorkerKeys", "SecretAccessKey"]}, " --region ", { "Ref" : "AWS::Region" }, " || error_exit 'Failed to run cfn-init'\n" ]]}} But I have a problem with this setup that I cannot seem to get a decent answer to. I keep getting this error in the logs: Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: config-scripts-per-once already ran once Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-per-boot with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-per-instance with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-user with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] cc_scripts_user.py[WARNING]: failed to run-parts in /var/lib/cloud/instance/scripts Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[WARNING]: Traceback (most recent call last):#012 File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/__init__.py", line 117, in run_cc_modules#012 cc.handle(name, run_args, freq=freq)#012 File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/__init__.py", line 78, in handle#012 [name, self.cfg, self.cloud, cloudinit.log, args])#012 File "/usr/lib/python2.7/dist-packages/cloudinit/__init__.py", line 326, in sem_and_run#012 func(*args)#012 File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/cc_scripts_user.py", line 31, in handle#012 util.runparts(runparts_path)#012 File "/usr/lib/python2.7/dist-packages/cloudinit/util.py", line 223, in runparts#012 raise RuntimeError('runparts: %i failures' % failed)#012RuntimeError: runparts: 1 failures Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[ERROR]: config handling of scripts-user, None, [] failed Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling keys-to-console with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling phone-home with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling final-message with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] cloud-init-cfg[ERROR]: errors running cloud_config [final]: ['scripts-user'] I have absolutely no idea what scripts-user means and Google is not helping much here either. I can, when I ssh into the server, see that it runs the userdata script since I can access cfn-init as a command whereas I cannot in the original AMI the instance is made from. 
However I have a launchConfig: "Comment" : "Install a simple PHP application", "AWS::CloudFormation::Init" : { "configSets" : { "ALL" : ["WorkerRole"] }, "WorkerRole" : { "files" : { "/etc/cron.d/worker.cron" : { "content" : "*/1 * * * * ubuntu /home/ubuntu/worker_cron.php &> /home/ubuntu/worker.log\n", "mode" : "000644", "owner" : "root", "group" : "root" }, "/home/ubuntu/worker_cron.php" : { "content" : { "Fn::Join" : ["", [ "#!/usr/bin/env php", "<?php", "define('ROOT', dirname(__FILE__));", "const AWS_KEY = \"", { "Ref" : "WorkerKeys" }, "\";", "const AWS_SECRET = \"", { "Fn::GetAtt": ["WorkerKeys", "SecretAccessKey"]}, "\";", "const QUEUE = \"", { "Ref" : "InputQueue" }, "\";", "exec('git clone x '.ROOT.'/worker');", "if(!file_exists(ROOT.'/worker/worker_despatcher.php')){", "echo 'git not downloaded right';", "exit();", "}", "echo 'git downloaded';", "include_once ROOT.'/worker/worker_despatcher.php';" ]]}, "mode" : "000755", "owner" : "ubuntu", "group" : "ubuntu" } } } } Which does not seem to run at all. I have checked for the files existance in my home directory and it's not there. I have checked for the cronjob entry and it's not there either. I cannot, after reading through the documentation, seem to see what's potentially wrong with my code. Any thoughts on why this is not working? Am I missing something blatant?

    Read the article
