Search Results

Search found 1221 results on 49 pages for 'argv'.

Page 15/49 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • OpenGL particle system

    - by allan
    I'm really new to OpenGL, so bear with me. I'm trying to simulate a particle system using OpenGL but I can't get it to work. This is what I have so far:

        #include <GL/glut.h>

        int main (int argc, char **argv){
            // data allocation, various non-OpenGL stuff
            ............
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);
            glutInitWindowPosition(100,100);
            glutInitWindowSize(size, size);
            glPointSize(4);
            glutCreateWindow("test gl");
            ............
            // initial state, not OpenGL
            ............
            glViewport(0, 0, size, size);
            glutDisplayFunc(display);
            glutIdleFunc(compute);
            glutMainLoop();
        }

        void compute (void) {
            // change state, not OpenGL
            glutPostRedisplay();
        }

        void display (void) {
            glClear(GL_COLOR_BUFFER_BIT);
            glBegin(GL_POINTS);
            for(i = 0; i < nparticles; i++) {
                // two types of particles
                if (TYPE(particle[i]) == 1)
                    glColor3f(1, 0, 0);
                else
                    glColor3f(0, 0, 1);
                glVertex2f(X(particle[i]), Y(particle[i]));
            }
            glEnd();
            glFlush();
            glutSwapBuffers();
        }

    I get a black window after a couple of seconds (the window has just the title bar before that). Where do I go wrong? Any help would be very much appreciated. Thanks. LE: the x and y coordinates of each particle are within the interval (0, size).
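
    A sketch of one likely fix, under two assumptions (fixed-function GLUT as shown, and particle coordinates really in (0, size)): without any projection set up, glVertex2f only shows points inside the default [-1, 1] clip volume, and glPointSize() needs a current GL context, so it belongs after glutCreateWindow():

        // drop-in change for the main() above -- not the author's code
        glutCreateWindow("test gl");
        glPointSize(4);                              // valid only once the GL context exists

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0.0, size, 0.0, size, -1.0, 1.0);    // map (0,size) x (0,size) onto the window
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

    With a double-buffered mode, the glFlush() before glutSwapBuffers() in display() is also unnecessary.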

    Read the article

  • Celery tasks not working with gevent

    - by Novarg
    When I use celery + gevent for tasks that use the subprocess module, I get the following traceback:

        Traceback (most recent call last):
          File "/home/venv/admin/lib/python2.7/site-packages/celery/task/trace.py", line 228, in trace_task
            R = retval = fun(*args, **kwargs)
          File "/home/venv/admin/lib/python2.7/site-packages/celery/task/trace.py", line 415, in __protected_call__
            return self.run(*args, **kwargs)
          File "/home/webapp/admin/webadmin/apps/loggingquarantine/tasks.py", line 107, in release_mail_task
            res = call_external_script(popen_obj.communicate)
          File "/home/webapp/admin/webadmin/apps/core/helpers.py", line 42, in call_external_script
            return func_to_call(*args, **kwargs)
          File "/usr/lib64/python2.7/subprocess.py", line 740, in communicate
            return self._communicate(input)
          File "/usr/lib64/python2.7/subprocess.py", line 1257, in _communicate
            stdout, stderr = self._communicate_with_poll(input)
          File "/usr/lib64/python2.7/subprocess.py", line 1287, in _communicate_with_poll
            poller = select.poll()
        AttributeError: 'module' object has no attribute 'poll'

    My manage.py looks like this (the monkey-patching happens there):

        #!/usr/bin/env python
        from gevent import monkey
        import sys
        import os

        if __name__ == "__main__":
            if not 'celery' in sys.argv:
                monkey.patch_all()
            os.environ.setdefault("DJANGO_SETTINGS_MODULE", "webadmin.settings")
            from django.core.management import execute_from_command_line
            sys.path.append(".")
            execute_from_command_line(sys.argv)

    Is there a reason why the celery tasks act as if they weren't patched properly? P.S. The strange thing is that my local setup on Mac OS works fine, while I get these exceptions under CentOS (all package versions are the same, init and config scripts too).

    Read the article

  • tail call generated by clang 1.1 and 1.0 (llvm 2.7 and 2.6)

    - by ony
    After compiling the next snippet of code with clang -O2 (or with the online demo):

        #include <stdio.h>
        #include <stdlib.h>

        int flop(int x);

        int flip(int x) {
            if (x == 0) return 1;
            return (x+1)*flop(x-1);
        }

        int flop(int x) {
            if (x == 0) return 1;
            return (x+0)*flip(x-1);
        }

        int main(int argc, char **argv) {
            printf("%d\n", flip(atoi(argv[1])));
        }

    I get the next snippet of LLVM assembly in flip:

        bb1.i:                                       ; preds = %bb1
          %4 = add nsw i32 %x, -2                    ; <i32> [#uses=1]
          %5 = tail call i32 @flip(i32 %4) nounwind  ; <i32> [#uses=1]
          %6 = mul nsw i32 %5, %2                    ; <i32> [#uses=1]
          br label %flop.exit

    I thought that a tail call means dropping the current stack frame (i.e. the return goes to the upper frame, so the next instruction should be ret %5), but according to this code it will still do the mul afterwards. And in the native assembly there is a plain call without tail optimisation (even with the appropriate flag for llc). Can somebody explain why clang generates such code? Also, I can't understand why LLVM has a tail call marker at all if it could simply check that the next ret uses the result of the previous call and then do the appropriate optimisation or generate the native equivalent of a tail-call instruction.

    Read the article

  • Using GCC (MinGW) to compile OpenGL on Windows

    - by Casey
    I've searched on Google and haven't been able to come up with a solution. I would like to compile some OpenGL programs using GCC. In the GL folder in GCC I have the following headers: gl.h, glext.h, glu.h. Then in my system32 folder I have the following DLLs: opengl32.dll, glu32.dll, glut32.dll. If I wanted to write a simple OpenGL "Hello World" and link and compile it with GCC, what is the correct process? I'm attempting to use this code:

        #include <GL/gl.h>
        #include <GL/glut.h>

        void display()
        {
            glClear(GL_COLOR_BUFFER_BIT);
            glFlush();
        }

        int main(int argc, char **argv)
        {
            glutInit(&argc, argv);
            glutInitWindowSize(512, 512);
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
            glutCreateWindow("The glut hello world program");
            glutDisplayFunc(display);
            glClearColor(0.0, 0.0, 0.0, 1.0);
            glutMainLoop();  // Infinite event loop
            return 0;
        }

    Thank you in advance for the help.
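
    Assuming the GLUT headers and import library are installed where MinGW can see them (the library names below are the usual MinGW ones, not verified against this particular setup), the link line would look something like:

        gcc hello.c -o hello.exe -lglut32 -lglu32 -lopengl32

    The order matters with GCC: GLUT first, then GLU, then OpenGL. Note that linking needs the import library (libglut32.a) on the library path; having glut32.dll in system32 only covers running the program, not building it.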

    Read the article

  • Why do I get two clicked or released signals when using a custom slot for a QPushButton?

    - by Chris
    Here's the main code. At first I thought it was the message box, but setting a label instead has the same effect.

        #include <time.h>
        #include "ui_mainwindow.h"
        #include <QMessageBox>

        class MainWindow : public QWidget, private Ui::MainWindow
        {
            Q_OBJECT

        public:
            MainWindow(QWidget *parent = 0);
            void makeSum(void);

        private:
            int r1;
            int r2;

        private slots:
            void on_pushButton_released(void);
        };

        MainWindow::MainWindow(QWidget *parent) : QWidget(parent)
        {
            setupUi(this);
        }

        void MainWindow::on_pushButton_released(void)
        {
            bool ok;
            int a = lineEdit->text().toInt(&ok, 10);
            if (ok) {
                if (r1 + r2 == a) {
                    QMessageBox::information( this, "Sums", "Correct!" );
                } else {
                    QMessageBox::information( this, "Sums", "Wrong!" );
                }
            } else {
                QMessageBox::information( this, "Sums", "You need to enter a number" );
            }
            makeSum();
        }

        void MainWindow::makeSum(void)
        {
            r1 = rand() % 10 + 1;
            r2 = rand() % 10 + 1;
            label->setText(QString::number(r1));
            label_3->setText(QString::number(r2));
        }

        int main(int argc, char *argv[])
        {
            srand( time(NULL) );
            QApplication app(argc, argv);
            MainWindow mw;
            mw.makeSum();
            mw.show();
            return app.exec();
        }

        #include "main.moc"
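
    One common cause of a slot firing twice is the automatic on_<objectName>_<signalName>() convention: setupUi() calls QMetaObject::connectSlotsByName(), and if the .ui file (via Designer's signal/slot editor) also connects the same signal to the same slot, there are two connections. Whether that applies here is an assumption, since the .ui file is not shown; a sketch of one way to rule it out is to give the slot a non-matching name and connect it exactly once:

        // Sketch only -- "checkAnswer" is a hypothetical name, not in the original code.
        private slots:
            void checkAnswer();    // renamed so connectSlotsByName() ignores it

        MainWindow::MainWindow(QWidget *parent) : QWidget(parent)
        {
            setupUi(this);
            connect(pushButton, SIGNAL(released()), this, SLOT(checkAnswer()));
        }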

    Read the article

  • how to hide ssh expect user/password

    - by raindrop18
    In my Perl CGI setup I have the user/password in clear text and want to hide it, or have the user enter the credentials interactively. Is that possible? Here is my code; any help is appreciated, I am very new to this.

        #!/usr/local/bin/expect
        #######################################################################################################
        # Input: It will handle two arguments -> a device and a show command.
        #######################################################################################################
        ######### Start of Script ######################

        #### Set up Timeouts - Debugging Variables
        log_user 0
        set timeout 10
        set userid "USER"
        set password "PASS"

        ############## Get two arguments - (1) Device (2) Command to be executed
        set device  [lindex $argv 0]
        set command [lindex $argv 1]

        spawn /usr/local/bin/ssh -l $userid $device
        match_max [expr 32 * 1024]

        expect {
            -re "RSA key fingerprint" {send "yes\r"}
            timeout {puts "Host is known"}
        }
        expect {
            -re "username: "          {send "$userid\r"}
            -re "(P|p)assword: "      {send "$password\r"}
            -re "Warning:"            {send "$password\r"}
            -re "Connection refused"  {puts "Host error -> $expect_out(buffer)"; exit}
            -re "Connection closed"   {puts "Host error -> $expect_out(buffer)"; exit}
            -re "no address.*"        {puts "Host error -> $expect_out(buffer)"; exit}
            timeout {puts "Timeout error. Is device down or unreachable?? ssh_expect"; exit}
        }
        expect {
            -re "\[#>]$" {send "term len 0\r"}
            timeout {puts "Error reading prompt -> $expect_out(buffer)"; exit}
        }
        expect {
            -re "\[#>]$" {send "$command\r"}
            timeout {puts "Error reading prompt -> $expect_out(buffer)"; exit}
        }
        expect -re "\[#>]$"
        set output $expect_out(buffer)
        send "exit\r"
        puts "$output\r\n"

    Read the article

  • Why does Perl's DBI complain about "failed: ERROR OCIEnvNlsCreate" when I try to connect to Oracle 11g?

    - by John
    I am getting the following error connecting to an Oracle 11g database using a simple Perl script:

        failed: ERROR OCIEnvNlsCreate. Check ORACLE_HOME (Linux) env var or PATH (Windows) and or NLS settings, permissions, etc. at

    The script is as follows:

        #!/usr/local/bin/perl
        use strict;
        use DBI;

        if ($#ARGV < 3) {
            print "Usage: perl testDbAccess.pl dataBaseUser dataBasePassword SID dataBasePort\n";
            exit 0;
        }

        my ($user, $pwd, $sid, $port) = @ARGV;
        my $host = `hostname`;
        my $dbh;
        my $sth;
        my $dbname = "dbi:Oracle:HOST=$host;SID=$sid;PORT=$port";

        openDbConnection();
        closeDbConnection();

        sub openDbConnection() {
            $dbh = DBI->connect ($dbname, $user, $pwd, { RaiseError => 1})
                || die "Database connection not made: $DBI::errstr";
        }

        sub closeDbConnection() {
            #$sth->finish();
            $dbh->disconnect();
        }

    Has anyone seen this problem before?

    Read the article

  • what does this attempted trojan horse code do?

    - by bstullkid
    It looks like this just sends a ping, but what's the point of that when you can just use ping?

        /* WARNING: this is someone's attempt at writing a malware trojan.
           Do not compile and *definitely* don't install.
           I added an exit as the first line to avoid mishaps - msw */

        int main (int argc, char *argv[])
        {
            exit(1);

            unsigned int pid = 0;
            char buffer[2];
            char *args[] = { "/bin/ping", "-c", "5", NULL, NULL };

            if (argc != 2)
                return 0;

            args[3] = strdup(argv[1]);

            for (;;) {
                gets(buffer); /* FTW */
                if (buffer[0] == 0x6e)
                    break;
                switch (pid = fork()) {
                case -1:
                    printf("Error Forking\n");
                    exit(255);
                case 0:
                    execvp(args[0], args);
                    exit(1);
                default:
                    break;
                }
            }
            return 255;
        }

    Read the article

  • fgets in C doesn't return a portion of a string

    - by Marc
    Hi! I'm totally new to C, and I'm trying to write a little application that searches for a string in a file. My problem is that I need to open a big file (more than 1 GB) with just one line inside, and fgets seems to return the entire file (I'm doing tests with a 10 KB file). This is my current code:

        #include <stdio.h>
        #include <string.h>

        int main(int argc, char *argv[])
        {
            char *search = argv[argc-1];
            int retro = strlen(search);
            int pun = 0;
            int sortida;
            int limit = 10;
            char ara[20];
            FILE *fp;

            if ((fp = fopen ("SEARCHFILE", "r")) == NULL){
                sortida = -1;
                exit (1);
            }

            while(!feof(fp)){
                if (fgets(ara, 20, fp) == NULL){
                    break;
                }
                // this should be a 20-byte line, but it seems to print the entire 10 KB file
                printf("%s", ara);
            }
            sortida = 1;

            if(fclose(fp) != 0){
                sortida = -2;
                exit (1);
            }
            return 0;
        }

    What can I do to find a string in a file? I've tried with grep but it doesn't help, because it returns position:ENTIRE_STRING. I'm open to ideas. Thanks in advance!
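
    For what it's worth, fgets(ara, 20, fp) can only ever fill 19 characters plus the terminator; printing those chunks back to back without newlines just makes the output look like the whole file. A sketch of one way to do the search itself (not the poster's program; file name and needle come from the command line, and embedded '\0' bytes in the data are ignored for simplicity): read fixed-size chunks and keep an overlap of strlen(needle)-1 bytes so a match that straddles two chunks is still found.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(int argc, char *argv[])
        {
            if (argc < 3) {
                fprintf(stderr, "usage: %s FILE STRING\n", argv[0]);
                return 1;
            }

            const char *needle = argv[2];
            size_t nlen = strlen(needle);
            if (nlen == 0)
                return 1;

            FILE *fp = fopen(argv[1], "rb");
            if (fp == NULL) {
                perror("fopen");
                return 1;
            }

            const size_t CHUNK = 64 * 1024;
            char *buf = (char *)malloc(CHUNK + nlen);  /* room for chunk + overlap + '\0' */
            size_t keep = 0;                           /* overlap carried from the last chunk */
            long long base = 0;                        /* file offset of buf[0] */
            size_t got;
            int found = 0;

            while ((got = fread(buf + keep, 1, CHUNK, fp)) > 0) {
                size_t total = keep + got;
                buf[total] = '\0';
                char *hit = strstr(buf, needle);
                if (hit != NULL) {
                    printf("found at offset %lld\n", base + (long long)(hit - buf));
                    found = 1;
                    break;
                }
                /* carry the last nlen-1 bytes into the next round */
                keep = (total >= nlen - 1) ? nlen - 1 : total;
                memmove(buf, buf + total - keep, keep);
                base += (long long)(total - keep);
            }

            if (!found)
                printf("not found\n");
            free(buf);
            fclose(fp);
            return 0;
        }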

    Read the article

  • Consulting a Prolog Source Code from within a VS2008 Solution File

    - by Joshua Green
    I have a Prolog file (Hanoi.pl) containing the code for solving the Towers of Hanoi puzzle:

        hanoi( N ) :-
            move( N, left, middle, right ).

        move( 0, _, _, _ ) :- !.
        move( N, A, B, C ) :-
            M is N-1,
            move( M, A, C, B ),
            inform( A, B ),
            move( M, C, B, A ).

        inform( X, Y ) :-
            write( 'move a disk from ' ),
            write( X ),
            write( ' to ' ),
            writeln( Y ).

    I also have a C++ file written in the VS2008 IDE:

        #include <iostream>
        #include <string>
        #include <stdio.h>
        #include <stdlib.h>
        using namespace std;

        #include "SWI-cpp.h"
        #include "SWI-Prolog.h"

        predicate_t phanoi;
        term_t t0;

        int main(int argc, char** argv)
        {
            long n = 5;
            int rval;

            if ( !PL_initialise(1, argv) )
                PL_halt(1);

            PL_put_integer( t0, n );
            phanoi = PL_predicate( "hanoi", 1, NULL );
            rval = PL_call_predicate( NULL, PL_Q_NORMAL, phanoi, t0 );

            system( "PAUSE" );
        }

    How can I consult my Prolog source code (Hanoi.pl) from within my C++ code? Not from the command prompt, but from the code itself, something like include or consult or compile? It is located in the same folder as my cpp file. Thanks,
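
    A sketch of one way to do it through the same SWI-Prolog foreign interface (untested, and the file name/path is an assumption): call consult/1 as an ordinary predicate before calling hanoi/1.

        // hypothetical sketch, to run after PL_initialise() succeeds
        predicate_t pconsult = PL_predicate("consult", 1, "user");
        term_t tfile = PL_new_term_ref();
        PL_put_atom_chars(tfile, "Hanoi.pl");          // or an absolute path
        if ( !PL_call_predicate(NULL, PL_Q_NORMAL, pconsult, tfile) )
            cerr << "consult(Hanoi.pl) failed" << endl;

        // note: in the code above, t0 is never created; it needs
        // term_t t0 = PL_new_term_ref(); before PL_put_integer(t0, n);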

    Read the article

  • g++ Linking Error on Mac while compiling FFMPEG

    - by Saptarshi Biswas
    g++ on Snow Leopard is throwing linking errors on the following piece of code, test.cpp:

        #include <iostream>
        using namespace std;

        #include <libavcodec/avcodec.h>    // required headers
        #include <libavformat/avformat.h>

        int main(int argc, char**argv)
        {
            av_register_all();             // offending library call
            return 0;
        }

    When I try to compile this using the following command:

        g++ test.cpp -I/usr/local/include -L/usr/local/lib \
            -lavcodec -lavformat -lavutil -lz -lm -o test

    I get the error:

        Undefined symbols:
          "av_register_all()", referenced from:
              _main in ccUD1ueX.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    Interestingly, if I have equivalent C code, test.c:

        #include <stdio.h>
        #include <libavcodec/avcodec.h>
        #include <libavformat/avformat.h>

        int main(int argc, char**argv)
        {
            av_register_all();
            return 0;
        }

    gcc compiles it just fine:

        gcc test.c -I/usr/local/include -L/usr/local/lib \
            -lavcodec -lavformat -lavutil -lz -lm -o test

    I am using Mac OS X 10.6.5.

        $ g++ --version
        i686-apple-darwin10-g++-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)
        $ gcc --version
        i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)

    FFMPEG's libavcodec, libavformat etc. are C libraries and I have built them on my machine like this:

        ./configure --enable-gpl --enable-pthreads --enable-shared \
            --disable-doc --enable-libx264
        make && sudo make install

    As one would expect, libavformat indeed contains the symbol av_register_all:

        $ nm /usr/local/lib/libavformat.a | grep av_register_all
        0000000000000000 T _av_register_all
        00000000000089b0 S _av_register_all.eh

    I am inclined to believe g++ and gcc have different views of the libraries on my machine, and that g++ is not able to pick up the right libraries. Any clue?
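
    The undefined symbol is reported as av_register_all() with parentheses, i.e. a C++-mangled name. FFmpeg's headers are plain C, so when they are pulled into a C++ translation unit without extern "C" the calls get mangled names that the C library doesn't export (newer FFmpeg headers add this guard themselves; the ones here apparently do not). A sketch of the fix:

        #include <iostream>
        using namespace std;

        extern "C" {                       // tell g++ these declarations use C linkage
        #include <libavcodec/avcodec.h>
        #include <libavformat/avformat.h>
        }

        int main(int argc, char **argv)
        {
            av_register_all();
            return 0;
        }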

    Read the article

  • how to implement a really efficient bitvector sorting in python

    - by xiao
    Hello guys! This is an interesting topic from Programming Pearls: sorting 10-digit telephone numbers in limited memory with an efficient algorithm. You can find the whole story here. What I am interested in is just how fast the implementation could be in Python. I have done a naive implementation with the BitVector module. The code is as follows:

        from BitVector import BitVector
        import timeit
        import random
        import time
        import sys

        def sort(input_li):
            return sorted(input_li)

        def vec_sort(input_li):
            bv = BitVector( size = len(input_li) )
            for i in input_li:
                bv[i] = 1
            res_li = []
            for i in range(len(bv)):
                if bv[i]:
                    res_li.append(i)
            return res_li

        if __name__ == "__main__":
            test_data = range(int(sys.argv[1]))
            print 'test_data size is:', sys.argv[1]
            random.shuffle(test_data)

            start = time.time()
            sort(test_data)
            elapsed = (time.time() - start)
            print "sort function takes " + str(elapsed)

            start = time.time()
            vec_sort(test_data)
            elapsed = (time.time() - start)
            print "sort function takes " + str(elapsed)

            start = time.time()
            vec_sort(test_data)
            elapsed = (time.time() - start)
            print "vec_sort function takes " + str(elapsed)

    I have tested array sizes from 100 to 10,000,000 on my MacBook (2 GHz Intel Core 2 Duo, 2 GB SDRAM); the results are as follows:

        test_data size is: 1000
        sort function takes 0.000274896621704
        vec_sort function takes 0.00383687019348

        test_data size is: 10000
        sort function takes 0.00380706787109
        vec_sort function takes 0.0371489524841

        test_data size is: 100000
        sort function takes 0.0520560741425
        vec_sort function takes 0.374383926392

        test_data size is: 1000000
        sort function takes 0.867373943329
        vec_sort function takes 3.80475401878

        test_data size is: 10000000
        sort function takes 12.9204008579
        vec_sort function takes 38.8053860664

    What disappoints me is that even when the test_data size is 10,000,000, the sort function is still faster than vec_sort. Is there any way to accelerate the vec_sort function?

    Read the article

  • Qt 101: Why can't I use this class?

    - by Airjoe
    I have experience with C++ but I've never really used Qt before. I'm trying to connect to a SQLite database, so I found a tutorial here and am going with that. In the QtCreator IDE, I went to Add New -- C++ Class, pasted the header from that link into the header file, and pasted the source into the .cpp file. My main.cpp looks like this:

        #include <QtGui/QApplication>
        #include "mainwindow.h"
        #include "databasemanager.h"
        #include <qlabel.h>

        int main(int argc, char *argv[])
        {
            QApplication a(argc, argv);
            MainWindow w;
            w.show();

            DatabaseManager db();
            QLabel hello("nothing...");
            if(db.openDB()){
                hello.setText("Win!");
            }
            else{
                hello.setText("Lame!");
            }
            hello.resize(100, 30);
            hello.show();

            return a.exec();
        }

    And I'm getting this error:

        main.cpp:13: error: request for member 'openDB' in 'db', which is of non-class type 'DatabaseManager()'

    Can anyone point me in the right direction? I know "copy-paste" code isn't good, I just wanted to see if I could get DB connectivity working and I figured something like this would be simple... thanks for the help.
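
    The error message's "non-class type 'DatabaseManager()'" is the clue: the line DatabaseManager db(); is parsed as the declaration of a function named db taking no arguments and returning a DatabaseManager (C++'s "most vexing parse"), so db.openDB() cannot compile. A sketch of the fix is simply to drop the parentheses:

        DatabaseManager db;          // an object, not a function declaration
        QLabel hello("nothing...");
        if (db.openDB()) {
            hello.setText("Win!");
        } else {
            hello.setText("Lame!");
        }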

    Read the article

  • Controlling shell command line wildcard expansion in C or C++

    - by Adrian McCarthy
    I'm writing a program, foo, in C++. It's typically invoked on the command line like this: foo *.txt. My main() receives the arguments in the normal way. On many systems, argv[1] is literally *.txt, and I have to call system routines to do the wildcard expansion. On Unix systems, however, the shell expands the wildcard before invoking my program, and all of the matching filenames will be in argv. Suppose I wanted to add a switch to foo that causes it to recurse into subdirectories: foo -a *.txt would process all text files in the current directory and all of its subdirectories. I don't see how this is done, since, by the time my program gets a chance to see the -a, the shell has already done the expansion and the user's *.txt input is lost. Yet there are common Unix programs that work this way. How do they do it? In Unix land, how can I control the wildcard expansion? (Recursing through subdirectories is just one example. Ideally, I'm trying to understand the general solution to controlling the wildcard expansion.)
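
    In Unix land the usual answer is that the program never sees the wildcard unless the user quotes it: tools that recurse (find, grep -r, and the like) either take a directory argument and walk it themselves, or expect a quoted pattern such as foo -a '*.txt' and match names against it during their own traversal. A sketch of that traversal, assuming POSIX opendir/readdir and fnmatch (error handling trimmed; d_type is a common but not universal dirent field):

        #include <dirent.h>
        #include <fnmatch.h>
        #include <stdio.h>
        #include <string.h>

        static void walk(const char *dir, const char *pattern)
        {
            DIR *d = opendir(dir);
            if (d == NULL)
                return;

            struct dirent *e;
            while ((e = readdir(d)) != NULL) {
                if (strcmp(e->d_name, ".") == 0 || strcmp(e->d_name, "..") == 0)
                    continue;

                char path[4096];
                snprintf(path, sizeof path, "%s/%s", dir, e->d_name);

                if (e->d_type == DT_DIR) {                 /* not available on every filesystem; stat() is the portable fallback */
                    walk(path, pattern);
                } else if (fnmatch(pattern, e->d_name, 0) == 0) {
                    printf("%s\n", path);                   /* process the matching file here */
                }
            }
            closedir(d);
        }

        int main(int argc, char *argv[])
        {
            const char *pattern = (argc > 1) ? argv[1] : "*.txt";
            walk(".", pattern);
            return 0;
        }

    The invocation would then be foo -a '*.txt': the quotes stop the shell from expanding the pattern, so argv carries the literal *.txt for the program to apply during its own recursion.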

    Read the article

  • How does the PATH environment variable change my executable from using msvcr90 to msvcr80?

    - by Runner
        #include <gtk/gtk.h>

        int main( int argc, char *argv[] )
        {
            GtkWidget *window;

            gtk_init (&argc, &argv);
            window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
            gtk_widget_show (window);
            gtk_main ();
            return 0;
        }

    I tried putting various versions of MSVCR80.dll in the same directory as the generated executable (via CMake), but none matched. Is there a general solution for this kind of problem?

    UPDATE: Some answers recommend installing the VS redist, but I'm not sure whether it will affect my installed Visual Studio 9. Can someone confirm?

    Manifest file of the executable:

        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
            <security>
              <requestedPrivileges>
                <requestedExecutionLevel level="asInvoker" uiAccess="false"></requestedExecutionLevel>
              </requestedPrivileges>
            </security>
          </trustInfo>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type="win32" name="Microsoft.VC90.DebugCRT" version="9.0.21022.8"
                                processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity>
            </dependentAssembly>
          </dependency>
        </assembly>

    It seems the manifest says it should use MSVCR90, so why does it keep reporting a missing MSVCR80.dll?

    FOUND: After spending several hours on it, I finally found it's caused by this setting in PATH: D:\MATLAB\R2007b\bin\win32. After removing it, everything works fine. But why can that setting change my running executable from using msvcr90 to msvcr80?

    Read the article

  • Newbie: Render RGB to GTK widget -- howto?

    - by Billy Pilgrim
    Hi All, big picture: I want to render an RGB image via GTK on a Linux box. I'm a frustrated GTK newbie, so please forgive me. I assume that I should create a drawing area in which to render the image -- correct? Do I then have to create a graphics context attached to that area? How? My simple app (which doesn't even address the RGB issue yet) is this:

        int main(int argc, char** argv)
        {
            GdkGC          *gc     = NULL;
            GtkWidget      *window = NULL;
            GtkDrawingArea *dpage  = NULL;
            GtkWidget      *page   = NULL;

            gtk_init( &argc, &argv );

            window = gtk_window_new( GTK_WINDOW_TOPLEVEL );
            page   = gtk_drawing_area_new( );
            dpage  = GTK_DRAWING_AREA( page );

            gtk_widget_set_size_request( page, PAGE_WIDTH, PAGE_HEIGHT );
            gc = gdk_gc_new( GTK_DRAWABLE( dpage ) );

            gtk_widget_show( window );
            gtk_main();

            return (EXIT_SUCCESS);
        }

    My dpage is apparently not a 'drawable' (though it is a drawing area). I am confused as to a) how do I get/create the graphics context which is required in subsequent function calls? b) am I close to a solution, or am I so completely *#&@& wrong that there is no hope? c) is there a baby-steps tutorial? (I started with hello world as my base, so I got that far.) Any and all help appreciated. bp
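
    A sketch of the usual GTK+ 2.x pattern (untested, and the buffer/size values are placeholders): the window and GC for a widget only exist once it is realized, so rather than creating a GC in main(), draw from the drawing area's "expose-event" handler, where the widget's own window and style GCs are available.

        #include <gtk/gtk.h>

        #define PAGE_WIDTH  320
        #define PAGE_HEIGHT 240

        static guchar rgb_buf[PAGE_WIDTH * PAGE_HEIGHT * 3];   /* fill with your pixel data */

        static gboolean on_expose(GtkWidget *widget, GdkEventExpose *event, gpointer data)
        {
            gdk_draw_rgb_image(widget->window,
                               widget->style->fg_gc[GTK_WIDGET_STATE(widget)],
                               0, 0, PAGE_WIDTH, PAGE_HEIGHT,
                               GDK_RGB_DITHER_NORMAL,
                               rgb_buf, PAGE_WIDTH * 3);       /* rowstride = width * 3 for packed RGB */
            return TRUE;
        }

        int main(int argc, char **argv)
        {
            gtk_init(&argc, &argv);

            GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
            GtkWidget *page   = gtk_drawing_area_new();
            gtk_widget_set_size_request(page, PAGE_WIDTH, PAGE_HEIGHT);
            gtk_container_add(GTK_CONTAINER(window), page);

            g_signal_connect(page, "expose-event", G_CALLBACK(on_expose), NULL);
            g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);

            gtk_widget_show_all(window);
            gtk_main();
            return 0;
        }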

    Read the article

  • Can someone tell me why I'm seg faulting in this simple C program?

    - by user299648
    I keep getting seg faulted, and for the life of me I don't know why. The file I'm scanning is just 18 strings on 18 lines. I think the problem is the way I'm mallocing the double pointer called picks, but I don't know exactly why. I am only trying to scanf strings that are less than 15 chars long, so I don't see the problem. Can someone please help?

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define MAX_LENGTH 100

        int main( int argc, char *argv[] )
        {
            char*  string = malloc( sizeof(char) );
            char** picks  = malloc(15*sizeof(char));
            FILE*  pick_file = fopen( argv[l], "r" );
            int num_picks;

            for( num_picks=0 ; fgets( string, MAX_LENGTH, pick_file ) != NULL ; num_picks++ )
            {
                printf("pick a/an %s ", string );
                scanf( "%s", picks+num_picks );
            }

            int x;
            for(x=0; x<num_picks; x++)
                printf("s\n", picks+x);
        }

    Read the article

  • regular expression code

    - by Gaia Andreoletti
    Dear all, I need to find matches between two tab-delimited files, like this:

    File 1:

        ID1 1 65383896 65383896 G C PCNXL3
        ID1 2 56788990 55678900 T A ACT1
        ID1 1 56788990 55678900 T A PRO55

    File 2:

        ID2 34 65383896 65383896 G C MET5
        ID2 2 56788990 55678900 T A ACT1
        ID2 2 56788990 55678900 T A HLA

    What I would like to do is retrieve the matching lines between the two files. What I would like to match is everything after the gene ID. So far I have written this code, but unfortunately Perl keeps giving me the error "Use of uninitialized value in pattern match (m//)". Could you please help me figure out where I am doing it wrong? Thank you in advance!

        use strict;

        open (INA, $ARGV[0]) || die "cannot open gene file";
        open (INB, $ARGV[1]) || die "cannot open coding_annotated.var files";

        my @sample1 = <INA>;
        my @sample2 = <INB>;

        foreach my $line (@sample1) {
            my @tab = split (/\t/, $line);
            my $chr   = $tab[1];
            my $start = $tab[2];
            my $end   = $tab[3];
            my $ref   = $tab[4];
            my $alt   = $tab[5];
            my $name  = $tab[6];

            foreach my $item (@sample2) {
                my @fields = split (/\t/, $item);
                if (   $fields[1] =~ m/$chr(.*)/
                    && $fields[2] =~ m/$start(.*)/
                    && $fields[4] =~ m/$ref(.*)/
                    && $fields[5] =~ m/$alt(.*)/
                    && $fields[6] =~ m/$name(.*)/ )
                {
                    print $line, "\n", $item;
                }
            }
        }

    Read the article

  • Installing Rails on Mountain Lion

    - by Jordan Medlock
    I was wondering if you could help me find out why I cannot install Ruby on Rails on my MBP with OS X Mountain Lion. It's a weird problem, so I'll give you as much info as I can. I've installed Ruby and it's working at version 1.9.3, and I've installed RubyGems, which has worked for every other gem I've tried to install; its version is 1.8.24. When I run

        $ sudo gem install rails

    it replies with the message:

        Successfully installed rails-3.2.8
        1 gem installed

    Although when I ask it rails -v it returns:

        Rails is not currently installed on this system. To get the latest version, simply type:

            $ sudo gem install rails

        You can then rerun your "rails" command.

    What should I do? The rails stub (/usr/bin/rails) contains:

        #!/usr/bin/ruby
        # Stub rails command to load rails from Gems or print an error if not installed.
        require 'rubygems'

        version = ">= 0"
        if ARGV.first =~ /^_(.*)_$/ and Gem::Version.correct? $1 then
          version = $1
          ARGV.shift
        end

        begin
          gem 'railties', version or raise
        rescue Exception
          puts 'Rails is not currently installed on this system. To get the latest version, simply type:'
          puts
          puts '    $ sudo gem install rails'
          puts
          puts 'You can then rerun your "rails" command.'
          exit 0
        end

        load Gem.bin_path('railties', 'rails', version)

    That must mean that the gem files aren't there or are old or corrupted. How can I check that?

    Read the article

  • Can someone tell me why I am seg faulting in this simple C program?

    - by user299648
    I keep getting seg faulted after I end my first for loop, and for the life of me I don't know why. The file I'm scanning is just 18 strings on 18 lines. I think the problem is the way I'm mallocing the double pointer called picks, but I don't know exactly why. I am only trying to scanf strings that are less than 15 chars long, so I don't see the problem. Can someone please help?

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define MAX_LENGTH 100

        int main( int argc, char *argv[] )
        {
            char*  string = malloc( 15*sizeof(char) );
            char** picks  = malloc(15*sizeof(char*));
            FILE*  pick_file = fopen( argv[l], "r" );
            int num_picks;

            for( num_picks=0 ; fgets( string, MAX_LENGTH, pick_file ) != NULL ; num_picks++ )
            {
                scanf( "%s", picks+num_picks );
            }
            // this is where I seg fault

            int x;
            for(x=0; x<num_picks; x++)
                printf("s\n", picks+x);
        }
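
    Several things are going on at once in this code (and in the near-identical version above). string is 15 bytes (1 byte in the other version) while fgets is told it may write up to MAX_LENGTH bytes; picks is allocated as an array of pointers but nothing ever allocates storage for the strings, so scanf("%s", picks+num_picks) scribbles over the pointer array and eventually past its end; argv[l] indexes argv with the letter l instead of 1; and printf("s\n", ...) is missing the % in %s. A sketch of a corrected version, under these assumptions (the intent is to prompt with each line of the file and read an answer, answers fit in 15 characters, and 32 picks are enough):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define MAX_LENGTH 100
        #define MAX_PICKS  32
        #define PICK_LEN   15

        int main(int argc, char *argv[])
        {
            if (argc < 2) {
                fprintf(stderr, "usage: %s pickfile\n", argv[0]);
                return 1;
            }

            FILE *pick_file = fopen(argv[1], "r");   /* argv[1], not argv[l] */
            if (pick_file == NULL) {
                perror("fopen");
                return 1;
            }

            char line[MAX_LENGTH];                   /* big enough for fgets(..., MAX_LENGTH, ...) */
            char picks[MAX_PICKS][PICK_LEN + 1];     /* real storage for each answer */
            int num_picks = 0;

            while (num_picks < MAX_PICKS && fgets(line, MAX_LENGTH, pick_file) != NULL) {
                printf("pick a/an %s ", line);
                if (scanf("%15s", picks[num_picks]) != 1)   /* width limit matches PICK_LEN */
                    break;
                num_picks++;
            }

            for (int x = 0; x < num_picks; x++)
                printf("%s\n", picks[x]);            /* "%s", not "s" */

            fclose(pick_file);
            return 0;
        }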

    Read the article

  • How do I control clipping with non-opaque graphics items in Qt?

    - by JJacobsson
    I have a bunch of QGraphicsSvgItems in a QGraphicsScene that are drawn connected by QGraphicsLineItems; this shows a graph of a tree structure. What I want to do is provide a feature where everything but a selected sub-tree becomes transparent, a kind of "highlight this sub-tree" feature. That part was easy, but the results are ugly because now the lines can be seen through the semi-transparent SVGs. I am looking for some way to still clip the other QGraphicsItems in the scene to the SVG items, giving the effect that the SVGs are semi-transparent windows to the background. I know this code does not use SVGs, but I figure you can replace that yourself if you are so inclined.

        int main(int argc, char *argv[])
        {
            QApplication app(argc, argv);
            QGraphicsScene scene;

            for( int i = 0; i < 10; ++i ) {
                QGraphicsLineItem* line = new QGraphicsLineItem;
                line->setLine( i * 25.0 + 1.0, 0, i * 25.0 + 23.0, 0 );
                scene.addItem( line );
            }

            for( int i = 0; i < 11; ++i ) {
                QGraphicsEllipseItem* ellipse = new QGraphicsEllipseItem;
                ellipse->setRect( (i * 25.0) - 9.0, -9.0, 18.0, 18.0f );
                ellipse->setBrush( QBrush( Qt::green, Qt::SolidPattern ) );
                ellipse->setOpacity( 0.5 );
                scene.addItem( ellipse );
            }

            QGraphicsView view( &scene );
            view.show();

            return app.exec();
        }

    I would like the lines not to be seen behind the circles. I have tried fiddling with the depth buffer and the stencil buffer using OpenGL rendering, to no avail. How do I get the QGraphicsSvgItems (or QGraphicsEllipseItems in the example code) to still clip the lines even though they are semi-transparent?

    Read the article

  • How to get a unique WindowRef in a dockable Qt application on Mac

    - by Robin
    How do I get a unique WindowRef from a Qt application that includes docked windows on the Mac? My code boils down to:

        int main(int argc, char* argv[])
        {
            QApplication* qtApp = new QApplication(argc, argv);
            MyQMainWindow mainwin;
            mainwin.show();
        }

        class MyQMainWindow : public QMainWindow
        {
            //...
            QDockWidget*    mDock;
            MyQWidget*      mDrawArea;
            QStackedWidget* mCentralStack;
        };

        MyQMainWindow::MyQMainWindow()
        {
            mDock = new QDockWidget(tr("Docked Widget"), this);
            mDock->setMaximumWidth(180);
            //...
            addDockWidget(Qt::RightDockWidgetArea, mDock);

            mDrawArea = new MyQWidget(this);
            mCentralStack = new QStackedWidget();
            mCentralStack->addWidget(mDrawArea);  // Other widgets added to stack in production code.
            setCentralWidget(mCentralStack);
            //...
        }

    (Apologies if the above isn't syntactically correct; it's just easier to illustrate than to describe.) I added the following temporary code at the end of the above constructor:

        HIViewRef view1 = (HIViewRef) mDrawArea->winId();
        HIViewRef view2 = (HIViewRef) mDock->winId();
        WindowRef win1 = HIViewGetWindow(view1);
        WindowRef win2 = HIViewGetWindow(view2);

    My problem is that view1 and view2 are different, but win1 and win2 are the same! I tried the following equivalent on Windows:

        HWND win1 = (HWND)(mCentralDrawArea->winId());
        HWND win2 = (HWND)(mDock1->winId());

    This time win1 and win2 are different. I need the window handle to pass on to a 3rd-party SDK so that it can draw into the central area only. BTW, I appreciate that the winId() method comes with lots of portability warnings, but a substantial refactor is out of the question for me. The same goes for using Carbon instead of Cocoa. Thanks.

    Read the article

  • Compiling simple gtk+ application

    - by sterh
    Hello, I am trying to compile a simple GTK+ application in the Anjuta IDE. The application is a simple window:

        #include <gtk/gtk.h>

        int main( int argc, char *argv[])
        {
            GtkWidget *label;
            GtkWidget *window;

            gtk_init(&argc, &argv);

            window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
            gtk_window_set_title(GTK_WINDOW(window), "Hello, world!");

            label = gtk_label_new("Hello, world!");
            gtk_container_add(GTK_CONTAINER(window), label);
            gtk_widget_show_all(window);

            g_signal_connect(G_OBJECT(window), "destroy", G_CALLBACK(gtk_main_quit), NULL);

            gtk_main();
            return 0;
        }

    In the makefile I have:

        GTK_CFLAGS = -D_REENTRANT -I/usr/include/gtk-2.0 -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 \
                     -I/usr/include/cairo -I/usr/include/pango-1.0 -I/usr/include/glib-2.0 \
                     -I/usr/lib/glib-2.0/include -I/usr/include/freetype2 -I/usr/include/directfb \
                     -I/usr/include/libpng12 -I/usr/include/pixman-1
        GTK_LIBS   = -lgtk-x11-2.0 -lgdk-x11-2.0 -latk-1.0 -lgdk_pixbuf-2.0 -lm -lpangocairo-1.0 \
                     -lpango-1.0 -lcairo -lgobject-2.0 -lgmodule-2.0 -ldl -lglib-2.0

    But I see this error when I try to compile the project:

        gtk/gtk.h - No such file or directory

    Thank you.
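
    Since the compiler itself reports that gtk/gtk.h cannot be found, the include flags in GTK_CFLAGS are apparently not reaching the compile line. Outside the IDE, the canonical way to build this (assuming GTK+ 2.x, as the flags above suggest) is to let pkg-config supply both the compile and link flags:

        gcc main.c -o main `pkg-config --cflags --libs gtk+-2.0`

    Inside an Anjuta/autotools project, the equivalent is making sure the target's AM_CPPFLAGS/CFLAGS actually reference $(GTK_CFLAGS) and its LDADD references $(GTK_LIBS); defining the variables alone is not enough.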

    Read the article

  • Specifying character

    - by danutenshu
    So below I have code in C++ that is supposed to invert each of the arguments, but not the sequence. I have listed my problems as side notes in the code below. The invert function is supposed to invert each argument, and then the main function just outputs the inverted words in the same order. For instance, program("one two three four") = ruof eerth owt eno

        #include <iostream>
        #include <string>
        using namespace std;

        int invert(string normal)
        {
            string inverted;
            for (int num=normal.size()-1; num>=0; num--)
            {
                inverted.append(normal[num]);  // I don't know how to get each character;
                                               // I need another command for append
            }
            return inverted;                   // <----
        }

        int main(int argc, char* argv[])
        {
            string text;
            for (int a=1; a<argc; a++)
            {
                text.append(invert(argv[a]));  // Can't run the invert function
                text.append(" ");
            }
            cout << text << endl;
            return 0;
        }
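
    A sketch of a fix for the two marked lines: the function must return std::string (not int), and std::string has no append(char) overload for a single character, so use operator+= or push_back instead.

        #include <iostream>
        #include <string>
        using namespace std;

        string invert(const string& normal)
        {
            string inverted;
            for (int num = (int)normal.size() - 1; num >= 0; num--)
                inverted += normal[num];        // or inverted.push_back(normal[num]);
            return inverted;
        }

        int main(int argc, char* argv[])
        {
            string text;
            for (int a = 1; a < argc; a++) {
                text += invert(argv[a]);
                text += " ";
            }
            cout << text << endl;
            return 0;
        }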

    Read the article

  • Fibonacci in C works great with 1 to 18, but 19 does nothing at all

    - by shevron
    Yeah right... we are forced to program some good old C at our university... ;) So here's my problem: we got the assignment to write a little program that shows a Fibonacci sequence from 1 to n. 1 to 18 works great, but from 19 on the program does nothing at all and just exits as if it were done. I cannot find the error... so please give me a hint. :)

        #include    /* the four header names were lost in the original post */
        #include
        #include
        #include

        int main(int argc, char **argv)
        {
            pid_t pid;
            int fib[argc];
            int i, size;

            size = strtol(argv[1], NULL, 0L);
            fib[0] = 0;
            fib[1] = 1;

            pid = fork();
            printf("size = %d \n", size);

            if(pid == 0){
                for(i = 2; i /* ...the rest of the child branch was lost in the original post
                                (everything between this '<' and the '>' in "pid >")... */
            } else if(pid > 0){   // Parent, because pid > 0
                wait(NULL);
                printf("\n");
                exit(1);
            }
        }

    Thanks already!
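
    The most likely culprit is int fib[argc]: the array is sized by the number of command-line arguments (2 here), not by the requested count, so writing fib[2] and onwards runs past the end of the array and eventually corrupts the stack; that it happens to survive up to 18 is luck. Since the child's loop was lost above, the sketch below is a guess at the intent, and the header names and print format are assumptions.

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            if (argc < 2) {
                fprintf(stderr, "usage: %s n\n", argv[0]);
                return 1;
            }

            int size = (int)strtol(argv[1], NULL, 0);
            if (size < 2 || size > 92)          /* fib(93) overflows even 64-bit integers */
                return 1;

            long long *fib = (long long *)malloc((size_t)(size + 1) * sizeof *fib);
            fib[0] = 0;
            fib[1] = 1;

            pid_t pid = fork();
            if (pid == 0) {                     /* child computes and prints */
                for (int i = 2; i <= size; i++)
                    fib[i] = fib[i - 1] + fib[i - 2];
                for (int i = 1; i <= size; i++)
                    printf("%lld ", fib[i]);
                exit(0);
            } else if (pid > 0) {               /* parent waits for the child */
                wait(NULL);
                printf("\n");
            }
            free(fib);
            return 0;
        }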

    Read the article
