Search Results

Search found 6123 results on 245 pages for 'unsigned char'.

Page 210/245

  • Binary files printing and desired precision

    - by yCalleecharan
    Hi, I'm printing a variable, say z1, which is a 1-D array of floating-point numbers, to a text file so that I can import it into Matlab or GNUPlot for plotting. I've heard that binary files (.dat) are smaller than .txt files. The definition that I currently use for printing to a .txt file is:

        void create_out_file(const char *file_name, const long double *z1, size_t z_size)
        {
            FILE *out;
            size_t i;

            if ((out = _fsopen(file_name, "w+", _SH_DENYWR)) == NULL) {
                fprintf(stderr, "***> Open error on output file %s", file_name);
                exit(-1);
            }

            for (i = 0; i < z_size; i++)
                fprintf(out, "%.16Le\n", z1[i]);
            fclose(out);
        }

    I have three questions:

    1. Are binary files really more compact than text files?
    2. If yes, I would like to know how to modify the above code so that I can print the values of the array z1 to a binary file. I've read that fprintf has to be replaced with fwrite. My output file, say dodo.dat, should contain the values of array z1 with one floating-point number per line.
    3. I have %.16Le up in my code, but I think that %.15Le is right, as I have 15 precision digits with long double. I have put a dot (.) in the width position, as I believe that this allows expansion to an arbitrary field width to hold the desired number. Am I right? As an example, with %.16Le I can have output like 1.0047914240730432e-002, which gives me 16 precision digits, and the field has the right width to display the number correctly. Is placing a dot (.) in the width position instead of a width value a good practice?

    Thanks a lot...
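
    A minimal sketch of the fwrite variant, under the assumption that the whole array is dumped in one call (the function name is made up for illustration). Note that a binary file has no notion of "one number per line"; the values are stored as raw bytes, and fread with the same element size recovers them:

        void create_out_file_bin(const char *file_name, const long double *z1, size_t z_size)
        {
            FILE *out;

            /* "wb" opens the file in binary mode */
            if ((out = _fsopen(file_name, "wb", _SH_DENYWR)) == NULL) {
                fprintf(stderr, "***> Open error on output file %s", file_name);
                exit(-1);
            }

            /* write z_size raw long double values in a single call */
            if (fwrite(z1, sizeof *z1, z_size, out) != z_size)
                fprintf(stderr, "***> Write error on output file %s", file_name);
            fclose(out);
        }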


  • How to test for existence of a script-scoped variable in PowerShell?

    - by Damian Powell
    Is it possible to test for the existence of a script-scoped variable in PowerShell? I've been using the PowerShell Community Extensions (PSCX), but I've noticed that if you import the module while Set-PSDebug -Strict is set, an error is produced:

        The variable '$SCRIPT:helpCache' cannot be retrieved because it has not been set.
        At C:\Users\...\Modules\Pscx\Modules\GetHelp\Pscx.GetHelp.psm1:5 char:24

    While investigating how I might fix this, I found this piece of code in Pscx.GetHelp.psm1:

        #requires -version 2.0
        param([string[]]$PreCacheList)

        if ((!$SCRIPT:helpCache) -or $RefreshCache) {
            $SCRIPT:helpCache = @{}
        }

    This is pretty straightforward code: if the cache doesn't exist or needs to be refreshed, create a new, empty cache. The problem is that evaluating $SCRIPT:helpCache while Set-PSDebug -Strict is in force causes the error, because the variable hasn't been defined yet. Ideally, we could use a Test-Variable cmdlet, but no such thing exists! I thought about looking in the variable: provider, but I don't know how to determine the scope of a variable. So my question is: how can I test for the existence of a variable while Set-PSDebug -Strict is in force, without causing an error?


  • gcc, strict-aliasing, and horror stories

    - by Joseph Quinsey
    In http://stackoverflow.com/questions/2906365/gcc-strict-aliasing-and-casting-through-a-union I asked whether anyone had encountered problems with union punning through pointers. So far, the answer seems to be no. This question is broader: do you have any horror stories about gcc and strict-aliasing?

    Background: quoting from AndreyT's answer in http://stackoverflow.com/questions/2771023/c99-strict-aliasing-rules-in-c-gcc/2771041#2771041: "Strict aliasing rules are rooted in parts of the standard that were present in C and C++ since the beginning of [standardized] times. The clause that prohibits accessing object of one type through a lvalue of another type is present in C89/90 (6.3) as well as in C++98 (3.10/15). ... It is just that not all compilers wanted (or dared) to enforce it or rely on it."

    Well, gcc is now daring to do so, with its -fstrict-aliasing switch. And this has caused some problems. See, for example, the excellent article http://davmac.wordpress.com/2009/10/ about a MySQL bug, and the equally excellent discussion in http://cellperformance.beyond3d.com/articles/2006/06/understanding-strict-aliasing.html.

    Some other less-relevant links:

        http://stackoverflow.com/questions/1225741/performance-impact-of-fno-strict-aliasing
        http://stackoverflow.com/questions/754929/strict-aliasing
        http://stackoverflow.com/questions/262379/when-is-char-safe-for-strict-pointer-aliasing
        http://stackoverflow.com/questions/725138/how-to-detect-strict-aliasing-at-compile-time

    So, to repeat: do you have a horror story of your own? Problems not indicated by -Wstrict-aliasing would, of course, be preferred. And other C compilers are also welcome.
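
    For readers unfamiliar with the rule, a minimal hypothetical example (not from any of the linked reports) of the kind of code that breaks under -fstrict-aliasing: with optimization on, the compiler may assume an int* and a float* never alias, so the function below can legally be folded to return a constant:

        int foo(int *i, float *f)
        {
            *i = 1;
            *f = 0.f;   /* under the aliasing rules, *f cannot modify *i ... */
            return *i;  /* ... so gcc may fold this to "return 1" at -O2    */
        }

        /* calling foo((int *)&x, &x) for a float x can then return 1,
           even though the store through f just overwrote those bytes */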


  • C dynamic memory allocation for table of structs

    - by JosiP
    Hi, here is my code. I want to dynamically change the number of elements in a table of __state structs:

        typedef struct __state {
            long int timestamp;
            int val;
            int prev_value;
        } *state_p, state_t;

        int main(int argc, char **argv)
        {
            int zm;
            int previous_state = 0;
            int state = 0;
            int i = 0;
            int j;
            state_p st;

            // here I want to have 20 structs st
            st = (state_p) malloc(sizeof(state_t) * 20);

            while (1) {
                previous_state = state;
                scanf("%d", &state);
                printf("%d, %d\n", state, previous_state);

                if (previous_state != state) {
                    printf("state changed %d %d\n", previous_state, state);
                    // here I get a compile error:
                    // main.c: In function 'main':
                    // main.c:30: error: incompatible type for argument 1 of 'save_state'
                    // main.c:34: error: invalid type argument of '->'
                    // main.c:34: error: invalid type argument of '->'
                    save_state(st[i], previous_state, state);
                }
                i++;
            }
            return 0;
        }

    I suppose I have to change that st[i] to something like st+ptr, where the pointer is incremented in each loop iteration? Or am I wrong? When I change the declaration to state_p st[20] and in each loop iteration do st[i] = (state_p)malloc(sizeof(state_t)), everything works fine, but I want to dynamically change the number of elements in that table. Thanks in advance for any help.
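
    A sketch of the likely fix, assuming save_state takes a state_p (a pointer) as its first argument; the prototype isn't shown, so that part is a guess. Since st is a pointer, st[i] is a struct value, and &st[i] (equivalently st + i) is the pointer to it. Growing the table dynamically is then a matter of realloc, with hypothetical capacity bookkeeping:

        save_state(&st[i], previous_state, state);  /* &st[i] == st + i, a state_p */

        /* growing the table when it fills up: */
        size_t capacity = 20;
        ...
        if ((size_t) i >= capacity) {
            capacity *= 2;
            state_p tmp = realloc(st, sizeof(state_t) * capacity);
            if (tmp == NULL) {
                /* handle out-of-memory; st is still valid here */
            } else {
                st = tmp;
            }
        }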


  • Is extending a base class with a non-virtual destructor dangerous in C++?

    - by Akusete
    Take the following code:

        class A {
        };

        class B : public A {
        };

        class C : public A {
            int x;
        };

        int main(int argc, char** argv)
        {
            A* b = new B();
            A* c = new C();

            // in both cases, only ~A() is called, not ~B() or ~C()
            delete b;  // is this ok?
            delete c;  // does this line leak memory?

            return 0;
        }

    When calling delete on a class with a non-virtual destructor and data members (like class C), can the memory allocator tell what the proper size of the object is? If not, is memory leaked? Secondly, if the class has no data members and no explicit destructor behaviour (like class B), is everything OK? I ask this because I wanted to create a class to extend std::string (which I know is not recommended, but for the sake of the discussion just bear with it) and overload the += and + operators. -Weffc++ gives me a warning because std::string has a non-virtual destructor, but does it matter if the sub-class has no members and does not need to do anything in its destructor? FYI, the += overload was to do proper file path formatting, so the path class could be used like:

        class path : public std::string {
            //... overload +=, +
            //... add last_path_component, remove_path_component, ext, etc...
        };

        path foo = "/some/file/path";
        foo = foo + "filename.txt";
        //and so on...

    I just wanted to make sure someone doing this:

        path* foo = new path();
        std::string* bar = foo;
        delete bar;

    would not cause any problems with memory allocation.
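
    Formally, deleting through a base pointer whose static type lacks a virtual destructor is undefined behavior regardless of whether the derived class adds members (C++98 5.3.5/3), even though it often appears to work for empty subclasses in practice. A sketch of one way to sidestep the question entirely, using free functions instead of inheritance (the function name is illustrative):

        #include <string>

        // Free functions over std::string avoid deriving from it at all.
        std::string join_path(const std::string& dir, const std::string& file)
        {
            if (!dir.empty() && dir[dir.size() - 1] != '/')
                return dir + "/" + file;
            return dir + file;
        }

    Usage would then be std::string foo = join_path("/some/file/path", "filename.txt"); with no base-pointer deletion possible in the first place.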


  • PDCurses TUI with C++ Win32 console application

    - by Bach
    I have downloaded the PDCurses source and was able to successfully include curses.h in my project, link the pre-compiled library, and all good. After a few hours of trying out the library, I saw tuidemo.c in the demos folder and compiled it into an executable: brilliant! Exactly what I needed for my project. Now the problem is that it's C code, and I am working on a C++ project in VS C++ 2008. The files I need are tui.c and tui.h. How can I include that C file in my C++ code? I saw a few suggestions here, but the compiler was not happy, with hundreds of warnings and errors. How can I go about including/using that TUI from the PDCurses includes? Thanks

    EDIT: I added an extern "C" declaration, so my test looks like this now, but I'm getting another type of error:

        #include <stdio.h>
        #include <stdlib.h>

        using namespace std;

        extern "C" {
        #include <tui.h>
        }

        void sub0(void)
        {
            //do nothing
        }

        void sub1(void)
        {
            //do nothing
        }

        int main(int argc, char * const argv[])
        {
            menu MainMenu[] = {
                { "Asub", sub0, "Go inside first submenu" },
                { "Bsub", sub1, "Go inside second submenu" },
                { "", (FUNC)0, "" } /* always add this as the last item! */
            };

            startmenu(MainMenu, "TUI - 'textual user interface' demonstration program");
            return 0;
        }

    Although it compiles successfully, it throws an error at runtime:

        0xC0000005: Access violation reading location 0x021c52f9

    at the line startmenu(MainMenu, "TUI - 'textual user interface' demonstration program"); Not sure where to go from here. Thanks again.
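
    One thing worth checking, offered as a hedge rather than a diagnosis: make sure tui.c itself is compiled as C (Visual Studio compiles .c files as C by default) and that every declaration the C++ side sees carries C linkage. The conventional guard, if you can edit tui.h, looks like this:

        #ifdef __cplusplus
        extern "C" {
        #endif

        /* ... the original tui.h declarations ... */

        #ifdef __cplusplus
        }
        #endif

    If tui.c were instead compiled as C++ while the header is wrapped in extern "C" on only one side, the linker or the call through the FUNC pointers can end up mismatched, which is consistent with a crash inside startmenu.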


  • An Interactive Console I/O Wrapper/Interceptor in C# - What is the issue?

    - by amazedsaint
    I was trying to put together an interactive console interceptor/wrapper in C# over the weekend, by re-mixing a few code samples I've found on SO and other sites. With what I have as of now, I'm unable to read back from the console reliably. Any quick pointers?

        public class ConsoleInterceptor
        {
            Process _interProc;
            public event Action<string> OutputReceivedEvent;

            public ConsoleInterceptor()
            {
                _interProc = new Process();
                _interProc.StartInfo = new ProcessStartInfo("cmd");
                InitializeInterpreter();
            }

            public ConsoleInterceptor(string command)
            {
                _interProc = new Process();
                _interProc.StartInfo = new ProcessStartInfo(command);
                InitializeInterpreter();
            }

            public Process InterProc
            {
                get { return _interProc; }
            }

            private void InitializeInterpreter()
            {
                InterProc.StartInfo.RedirectStandardInput = true;
                InterProc.StartInfo.RedirectStandardOutput = true;
                InterProc.StartInfo.RedirectStandardError = true;
                InterProc.StartInfo.CreateNoWindow = true;
                InterProc.StartInfo.UseShellExecute = false;
                InterProc.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
                bool started = InterProc.Start();
                Redirect(InterProc.StandardOutput);
                Redirect(InterProc.StandardError);
            }

            private void Redirect(StreamReader input)
            {
                new Thread((a) =>
                {
                    var buffer = new char[1];
                    while (true)
                    {
                        if (input.Read(buffer, 0, 1) > 0)
                            OutputReceived(new string(buffer));
                    };
                }).Start();
            }

            private void OutputReceived(string text)
            {
                if (OutputReceivedEvent != null)
                    OutputReceivedEvent(text);
            }

            public void Input(string input)
            {
                InterProc.StandardInput.WriteLine(input);
                InterProc.StandardInput.Flush();
            }
        }


  • C Nested Structure Pointer Problem

    - by Halo
    I have a shared structure, and inside it a request structure:

        struct shared_data {
            pthread_mutex_t th_mutex_queue;

            struct request_queue {
                int min;
                int max;
                char d_name[DIR_SIZE];
                pid_t pid;
                int t_index;
            } request_queue[BUFSIZE];

            int count;
            int data_buffer_allocation[BUFSIZE];
            int data_buffers[BUFSIZE][100];
        };

    Then I prepare a request:

        struct shared_data *sdata_ptr;
        ...
        ...
        sdata_ptr->request_queue[index].pid = pid;
        strcpy(sdata_ptr->request_queue[index].d_name, dir_path_name);
        sdata_ptr->request_queue[index].min = min;
        sdata_ptr->request_queue[index].max = max;

    And the compiler warns me that I'm making an incompatible implicit declaration of the strcpy function. I guess that's a problem with the pointers, but isn't what I wrote above supposed to be correct?
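
    For what it's worth, that particular warning usually has nothing to do with the pointers: "implicit declaration" means the compiler never saw a prototype for strcpy, so it invented one that conflicts with the real signature. A sketch of the likely one-line fix:

        #include <string.h>   /* declares strcpy; without it, C89 assumes an implicit int-returning function */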


  • why DataColumn AllowDbNull is true even if oracle db does not allow null

    - by matti
    Hi. I have a column SOMEID in table SOMELINK. When I look with TOra or SQL*Plus Worksheet, both state:

    TOra:

        Column name  Data type  Default  Null      Comment
        SOMEID       INTEGER    {null}   NOT NULL  {null}

    SQL*Plus:

        SOMEID  NOT NULL  NUMBER(38)

    I have authored a method that's intended to give default values to all NOT NULL fields that don't have values:

        public static void GetDefaultValuesForNonNullColumns(DataRow row)
        {
            foreach (DataColumn col in row.Table.Columns)
            {
                if (Convert.IsDBNull(row[col]) && !col.AllowDBNull)
                {
                    if (ColumnIsNumeric(col.DataType))
                        row[col] = 0;
                    else if (col.DataType == typeof(DateTime))
                        row[col] = DateTime.Now;
                    else if (col.DataType == typeof(String))
                        row[col] = string.Empty;
                    else if (col.DataType == typeof(Char))
                        row[col] = ' ';
                    else
                        throw new Exception(string.Format("Unsupported column type: {0}", col.DataType));
                }
            }
        }

    When SOMEID is handled in the loop, AllowDBNull = true. I really can't understand why. The table is created in the DataSet like this:

        _someLinkAdptr = _dbFactory.CreateDataAdapter();
        _someLinkAdptr.SelectCommand = _dbFactory.CreateCommand();
        _someLinkAdptr.SelectCommand.Connection = _cnctn;
        _someLinkAdptr.SelectCommand.CommandText =
            GetSomeLinkSelectTxtAndParams(_someLinkAdptr.SelectCommand,
                                          UndefinedValue.ToString(), UndefinedValue.ToString());

    The select command returns no rows. The idea is that I can then use a CommandBuilder to get the InsertCommand without building it myself. The row is added to the DataSet's table like this:

        private static void CreateDocLink(int anId, int anotherId)
        {
            DataRow row = _someDataSet.Tables["SomeLink"].NewRow();
            row["AnId"] = anId;
            row["AnotherId"] = anotherId;
            Utility.GetDefaultValuesForNonNullColumns(row);
            _someDataSet.Tables["SomeLink"].Rows.Add(row);
        }

    When the DataAdapter is updated to the Oracle DB I get:

        ORA-01400: cannot insert NULL into (SOMESCHEMA.SOMELINK.SOMEID)

    Cheers & BR - Matti


  • Handling responses in libcurl

    - by sfactor
    I have code to send a form POST with login credentials to a webpage. It looks like this:

        CURL *curl;
        CURLcode res;

        struct curl_httppost *formpost = NULL;
        struct curl_httppost *lastptr = NULL;
        struct curl_slist *headerlist = NULL;
        static const char buf[] = "Expect:";

        curl_global_init(CURL_GLOBAL_ALL);

        /* Fill in the username */
        curl_formadd(&formpost, &lastptr,
                     CURLFORM_COPYNAME, "user",
                     CURLFORM_COPYCONTENTS, "username",
                     CURLFORM_END);

        /* Fill in the password */
        curl_formadd(&formpost, &lastptr,
                     CURLFORM_COPYNAME, "pass",
                     CURLFORM_COPYCONTENTS, "password",
                     CURLFORM_END);

        /* Fill in the submit field too, even if this is rarely needed */
        curl_formadd(&formpost, &lastptr,
                     CURLFORM_COPYNAME, "submit",
                     CURLFORM_COPYCONTENTS, "send",
                     CURLFORM_END);

        curl = curl_easy_init();
        /* initialize custom header list (stating that Expect: 100-continue is not wanted) */
        headerlist = curl_slist_append(headerlist, buf);
        if (curl) {
            /* what URL receives this POST */
            curl_easy_setopt(curl, CURLOPT_URL, "http://127.0.0.1:8000/index.html");
            /* only disable 100-continue header if explicitly requested */
            curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headerlist);
            curl_easy_setopt(curl, CURLOPT_HTTPPOST, formpost);

            res = curl_easy_perform(curl);

            /* always cleanup */
            curl_easy_cleanup(curl);
            /* then cleanup the formpost chain */
            curl_formfree(formpost);
            /* free slist */
            curl_slist_free_all(headerlist);
        }

    Now, what I need to know is: how do I handle the responses I get back from the server? I need to store the response in a string and then work on it.
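
    A sketch of the usual approach, using libcurl's write callback: libcurl calls the function below once per chunk of body data, and the callback appends each chunk to a growable buffer (the struct and names here are illustrative). Set the two options before calling curl_easy_perform:

        struct response_buf {
            char *data;
            size_t len;
        };

        static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userp)
        {
            size_t chunk = size * nmemb;
            struct response_buf *buf = (struct response_buf *) userp;

            char *tmp = realloc(buf->data, buf->len + chunk + 1);
            if (tmp == NULL)
                return 0;  /* returning less than 'chunk' tells libcurl to abort */
            buf->data = tmp;
            memcpy(buf->data + buf->len, ptr, chunk);
            buf->len += chunk;
            buf->data[buf->len] = '\0';
            return chunk;
        }

        ...
        struct response_buf response = { NULL, 0 };
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);
        /* after curl_easy_perform() returns CURLE_OK, response.data holds the body */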


  • Risking the exception anti-pattern.. with some modifications

    - by Sridhar Iyer
    Let's say that I have a library which runs 24x7 on certain machines. Even if the code is rock solid, a hardware fault can sooner or later trigger an exception. I would like to have some sort of failsafe in position for events like this. One approach would be to write wrapper functions that encapsulate each API:

        returnCode = DEFAULT;
        try {
            returnCode = libraryAPI1();
        }
        catch(...) {
            returnCode = BAD;
        }
        return returnCode;

    The caller of the library then restarts the whole thread and reinitializes the module if the return code is bad. Things CAN go horribly wrong. E.g. if the try block (or libraryAPI1()) had:

        func1();
        char *x = malloc(1000);
        func2();

    and func2() throws an exception, x will never be freed. In a similar vein, file corruption is a possible outcome. Could you please tell me what other things can possibly go wrong in this scenario?
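
    A sketch of the standard RAII counter to that particular leak: tie the allocation's lifetime to a stack object, so the unwinding that happens while the exception propagates frees it automatically. Shown with a minimal hand-rolled guard to stay C++03-friendly; with C++11 one would reach for std::unique_ptr instead:

        struct malloc_guard {
            void *p;
            explicit malloc_guard(void *ptr) : p(ptr) {}
            ~malloc_guard() { free(p); }   // runs during stack unwinding too
        private:
            malloc_guard(const malloc_guard&);             // non-copyable
            malloc_guard& operator=(const malloc_guard&);
        };

        func1();
        char *x = (char *) malloc(1000);
        malloc_guard guard(x);   // x is freed even if func2() throws
        func2();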


  • How to asynchronously read to std::string using Boost::asio?

    - by SpyBot
    Hello. I'm learning Boost::asio and all that async stuff. How can I asynchronously read into a variable user_ of type std::string? boost::asio::buffer(user_) works only with async_write(), but not with async_read(). It works with a vector, so what is the reason it doesn't work with a string? Is there another way to do that, besides declaring char user_[max_len] and using boost::asio::buffer(user_, max_len)? Also, what's the point of inheriting from boost::enable_shared_from_this<Connection> and using shared_from_this() instead of this in async_read() and async_write()? I've seen that a lot in the examples. Here is a part of my code:

        class Connection
        {
        public:
            Connection(tcp::acceptor &acceptor) :
                acceptor_(acceptor),
                socket_(acceptor.get_io_service(), tcp::v4())
            { }

            void start()
            {
                acceptor_.get_io_service().post(
                    boost::bind(&Connection::start_accept, this));
            }

        private:
            void start_accept()
            {
                acceptor_.async_accept(socket_,
                    boost::bind(&Connection::handle_accept, this,
                        placeholders::error));
            }

            void handle_accept(const boost::system::error_code& err)
            {
                if (err)
                {
                    disconnect();
                }
                else
                {
                    async_read(socket_, boost::asio::buffer(user_),
                        boost::bind(&Connection::handle_user_read, this,
                            placeholders::error, placeholders::bytes_transferred));
                }
            }

            void handle_user_read(const boost::system::error_code& err,
                                  std::size_t bytes_transferred)
            {
                if ( err or (bytes_transferred != sizeof(user_)) )
                {
                    disconnect();
                }
                else
                {
                    ...
                }
            }

            ...

            void disconnect()
            {
                socket_.shutdown(tcp::socket::shutdown_both);
                socket_.close();
                socket_.open(tcp::v4());
                start_accept();
            }

            tcp::acceptor &acceptor_;
            tcp::socket socket_;
            std::string user_;
            std::string pass_;
            ...
        };
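
    A sketch of one common workaround, assuming the protocol is line-delimited: read into a boost::asio::streambuf with async_read_until, then copy the completed line into the std::string. (The underlying reason buffer(user_) only works for writing is that a read needs a mutable buffer of known size, and asio cannot grow a std::string through it.)

        boost::asio::streambuf buf_;   // member, replacing the fixed char array

        void handle_accept(const boost::system::error_code& err)
        {
            // read until the delimiter the client sends, e.g. '\n'
            boost::asio::async_read_until(socket_, buf_, '\n',
                boost::bind(&Connection::handle_user_read, this,
                    placeholders::error, placeholders::bytes_transferred));
        }

        void handle_user_read(const boost::system::error_code& err,
                              std::size_t bytes_transferred)
        {
            if (!err)
            {
                std::istream is(&buf_);
                std::getline(is, user_);   // user_ now holds one line
            }
        }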


  • Is my method for avoiding dynamic_cast<> faster than dynamic_cast<> itself?

    - by ereOn
    Hi, I was answering a question a few minutes ago and it raised another one for me: In one of my projects, I do some network message parsing. The messages are in the form:

        [1 byte message type][2 bytes payload length][x bytes payload]

    The format and content of the payload are determined by the message type. I have a class hierarchy, based on a common class Message. To instantiate my messages, I have a static parsing method which gives back a Message* depending on the message type byte. Something like:

        Message* parse(const char* frame)
        {
            // This is sample code; in real life I obviously check that the buffer
            // is not NULL, and the size, and so on.
            switch (frame[0])
            {
                case 0x01:
                    return new FooMessage();
                case 0x02:
                    return new BarMessage();
            }
            // Throw an exception here because the message type is unknown.
        }

    I sometimes need to access the methods of the subclasses. Since my network message handling must be fast, I decided to avoid dynamic_cast<> and I added a method to the base Message class that gives back the message type. Depending on this return value, I use a static_cast<> to the right child type instead. I did this mainly because I was told once that dynamic_cast<> was slow. However, I don't know exactly what it really does and how slow it is; thus, my method might be just as slow (or slower) but far more complicated. What do you guys think of this design? Is it common? Is it really faster than using dynamic_cast<>? Any detailed explanation of what happens under the hood when one uses dynamic_cast<> is welcome!
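
    For concreteness, a minimal sketch of the tag-plus-static_cast pattern the question describes (class and enum names are illustrative): the tag costs one virtual call returning an enum, after which static_cast does no runtime work, whereas dynamic_cast must consult RTTI data at a cost that varies with hierarchy depth and implementation:

        enum MessageType { FOO = 0x01, BAR = 0x02 };

        class Message {
        public:
            virtual ~Message() {}
            virtual MessageType type() const = 0;   // the cheap tag
        };

        class FooMessage : public Message {
        public:
            MessageType type() const { return FOO; }
            void foo_specific() { /* ... */ }
        };

        void handle(Message* m) {
            if (m->type() == FOO)
                static_cast<FooMessage*>(m)->foo_specific();  // safe: tag guarantees the type
        }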


  • CoInitialize fails in dll

    - by Quandary
    Question: I have the following program, which uses COM and the Microsoft Speech API (SAPI) to take a text and output it as sound. It works fine as long as I have it in a .exe; when I load the same code from a .dll, it fails. Why? I used Dependency Walker and saw the exe doesn't have MSVCR100D and ole32, so I loaded them like this:

        LoadLibraryA("MSVCR100D.DLL");
        LoadLibraryA("ole32.dll");

    but it didn't help... Any idea why?

        #include <windows.h>
        #include <sapi.h>
        #include <cstdlib>

        int main(int argc, char* argv[])
        {
            ISpVoice * pVoice = NULL;

            if (FAILED(::CoInitialize(NULL)))
                return FALSE;

            HRESULT hr = CoCreateInstance(CLSID_SpVoice, NULL, CLSCTX_ALL,
                                          IID_ISpVoice, (void **) &pVoice);
            if( SUCCEEDED( hr ) )
            {
                hr = pVoice->Speak(L"Noobie was fragged by GSG9 Googlebot", 0, NULL);
                hr = pVoice->Speak(L"Test Test", 0, NULL);
                hr = pVoice->Speak(L"This sounds normal <pitch middle = '-10'/> but the pitch drops half way through",
                                   SPF_IS_XML, NULL);
                pVoice->Release();
                pVoice = NULL;
            }
            ::CoUninitialize();
            return EXIT_SUCCESS;
        }
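
    One plausible culprit, offered as a hypothesis since the DLL variant isn't shown: inside a DLL, the calling thread may already have COM initialized. CoInitialize then returns S_FALSE (same apartment, harmless) or RPC_E_CHANGED_MODE (a different apartment is active), and since FAILED(RPC_E_CHANGED_MODE) is true, the check above would bail out early even though COM is still usable on that thread. A sketch of a more tolerant check:

        HRESULT hrInit = ::CoInitialize(NULL);
        // S_OK / S_FALSE: this call must be balanced with CoUninitialize.
        // RPC_E_CHANGED_MODE: another apartment is already active; COM works,
        // but this call must NOT be balanced with CoUninitialize.
        if (FAILED(hrInit) && hrInit != RPC_E_CHANGED_MODE)
            return FALSE;
        ...
        if (SUCCEEDED(hrInit))
            ::CoUninitialize();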


  • c++ Mixing printf with wprintf (or cout with wcout)

    - by Bo Jensen
    I know you should not mix printing with printf/cout and wprintf/wcout, but I have a hard time finding a good answer as to why, and whether it is possible to get around it. The problem is that I use an external library that prints with printf, while my own code uses wcout. If I do a simple example it works fine, but from my full application it simply does not print the printf statements. If this is really a limitation, then there must be many libraries out there which cannot work together with wide-printing applications. Any insight on this is more than welcome.

    Update: I boiled it down to:

        #include <stdio.h>
        #include <stdlib.h>
        #include <iostream>
        #include <readline/readline.h>
        #include <readline/history.h>

        int main()
        {
            char *buf;

            std::wcout << std::endl;  /* ADDING THIS LINE MAKES PRINTF VANISH!!! */

            rl_bind_key('\t', rl_abort);  // disable auto-complete

            while ((buf = readline("my-command : ")) != NULL)
            {
                if (strcmp(buf, "quit") == 0)
                    break;

                std::wcout << buf << std::endl;

                if (buf[0] != 0)
                    add_history(buf);
            }

            free(buf);
            return 0;
        }

    So I guess it might be a flushing problem, but it still looks strange to me; I have to check up on it.
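
    A hedged pointer at the likely mechanism rather than a fix: C99 gives each FILE stream an orientation, fixed by the first operation on it. On implementations where wcout drives stdout, the wcout line above makes stdout wide-oriented, after which byte functions like printf on that stream fail rather than print. The orientation can be inspected with fwide, though not reset without reopening the stream:

        #include <wchar.h>

        int o = fwide(stdout, 0);   /* query only: <0 byte-oriented, >0 wide-oriented, 0 unset */
        if (o > 0)
            fwprintf(stderr, L"stdout is wide-oriented; printf on it will fail\n");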


  • Weird problem with string function

    - by wrongusername
    I'm having a weird problem with the following function, which returns a string containing all the characters after a certain point:

        string after(int after, string word)
        {
            char temp[word.size() - after];
            cout << word.size() - after << endl;  // output here is as expected

            for (int a = 0; a < (word.size() - after); a++)
            {
                cout << word[a + after];  // and so is this
                temp[a] = word[a + after];
                cout << temp[a];          // and this
            }

            cout << endl << temp << endl; // but output here does not always match what I want

            string returnString = temp;
            return returnString;
        }

    The thing is, when the returned string is 7 chars or less, it works just as expected. When the returned string is 8 chars or more, it starts spewing nonsense at the end of the expected output. For example, the lines

        cout << after(1, "12345678") << endl;
        cout << after(1, "123456789") << endl;

    give this output:

        7
        22334455667788
        2345678
        2345678
        8
        2233445566778899
        23456789?,?D~
        23456789?,?D~

    What can I do to fix this error, and are there any default C++ functions that can do this for me?
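
    The visible symptom fits a missing terminator: temp is never null-terminated, so printing it, or building a string from it, runs past the end into whatever happens to follow on the stack (that it "works" below 8 chars is just luck of the stack layout). A sketch of the direct fix, plus the standard-library call that replaces the whole function:

        // direct fix: reserve room for, and write, the terminating '\0'
        char temp[word.size() - after + 1];
        ...
        temp[word.size() - after] = '\0';

        // or simply use the built-in substring member function:
        string tail = word.substr(after);   // everything from index 'after' onward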


  • Dynamic stack allocation in C++

    - by Poni
    I want to allocate memory on the stack. I've heard of _alloca / alloca, and I understand that these are compiler-specific, which I don't like. So I came up with my own solution (which might have its own flaws) and I want you to review/improve it, so that once and for all we'll have this code working:

        /*#define allocate_on_stack(pointer, size) \
            __asm \
            { \
                mov [pointer], esp; \
                sub esp, [size]; \
            }*/

        /*#define deallocate_from_stack(size) \
            __asm \
            { \
                add esp, [size]; \
            }*/

        void test()
        {
            int buff_size = 4 * 2;
            char *buff = 0;

            __asm
            {
                // allocate
                mov [buff], esp;
                sub esp, [buff_size];
            }

            // playing with the stack-allocated memory
            for(int i = 0; i < buff_size; i++)
                buff[i] = 0x11;

            __asm
            {
                // deallocate
                add esp, [buff_size];
            }
        }

        void main()
        {
            __asm int 3h;
            test();
        }

    Compiled with VC9. What flaws do you see in it? I, for example, am not sure that subtracting from ESP is the solution for "any kind of CPU". Also, I'd like to make the commented-out macros work, but for some reason I can't.
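
    One flaw worth flagging in a review, with the caveat that all of this is x86/MSVC-specific territory: buff is captured before esp is lowered, so it points at the old top of stack, and the loop then writes above it, into stack memory that is already in use. A sketch of the corrected ordering:

        __asm
        {
            // allocate: lower esp first, then capture it,
            // so buff points at the base of the new region
            sub esp, [buff_size];
            mov [buff], esp;
        }
        // buff[0 .. buff_size-1] now lies inside [new esp, old esp)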


  • Silverlight Socket Constantly Returns With Empty Buffer

    - by Benny
    I am using Silverlight to interact with a proxy application that I have developed, but without the proxy sending a message to the Silverlight application, the receive-completed handler executes with an empty buffer (all '\0's). Is there something I'm doing wrong? It is causing a major memory leak.

        this._rawBuffer = new Byte[this.BUFFER_SIZE];
        SocketAsyncEventArgs receiveArgs = new SocketAsyncEventArgs();
        receiveArgs.SetBuffer(_rawBuffer, 0, _rawBuffer.Length);
        receiveArgs.Completed += new EventHandler<SocketAsyncEventArgs>(ReceiveComplete);
        this._client.ReceiveAsync(receiveArgs);

        if (args.SocketError == SocketError.Success &&
            args.LastOperation == SocketAsyncOperation.Receive)
        {
            // Read the current bytes from the stream buffer
            int bytesRecieved = this._client.ReceiveBufferSize;

            // If there are bytes to process, else the connection is lost
            if (bytesRecieved > 0)
            {
                try
                {
                    // Find out what we just received
                    string messagePart = UTF8Encoding.UTF8.GetString(_rawBuffer, 0, _rawBuffer.GetLength(0));

                    // Take out any trailing empty characters from the message
                    messagePart = messagePart.Replace('\0'.ToString(), "");

                    // Concatenate our current message with any leftovers from previous receipts
                    string fullMessage = _theRest + messagePart;

                    int seperator;
                    // While the index of the seperator (LINE_END defined & initiated as private member)
                    while ((seperator = fullMessage.IndexOf((char)Messages.MessageSeperator.Terminator)) > 0)
                    {
                        // Pull out the first message available (up to the seperator index)
                        string message = fullMessage.Substring(0, seperator);

                        // Queue up our new message
                        _messageQueue.Enqueue(message);

                        // Take out our line end character
                        fullMessage = fullMessage.Remove(0, seperator + 1);
                    }

                    // Save whatever was NOT a full message to the private variable used to store the rest
                    _theRest = fullMessage;

                    // Empty the queue of messages if there are any
                    while (this._messageQueue.Count > 0)
                    {
                        ...
                    }
                }
                catch (Exception e)
                {
                    throw e;
                }

                // Wait for a new message
                if (this._isClosing != true)
                    Receive();
            }
        }

    Thanks in advance.


  • Indentation control while developing a small Python-like language

    - by sap
    Hello, I'm developing a small Python-like language using flex, byacc (for lexing and parsing) and C++, but I have a few questions regarding scope control. Just like Python it uses white space (or tabs) for indentation; not only that, but I also want to implement index breaking: for instance, if you type "break 2" inside a while loop that's inside another while loop, it breaks not only out of the inner loop but out of the outer one as well (hence the number 2 after break), and so on. Example:

        while 1
            while 1
                break 2
                'hello world'!!  #will never reach this. "!!" outputs with a newline
            end
            'hello world again'!!  #also will never reach this. again "!!" used for cout
        end

        #after break 2 it would jump right here

    But since I don't have an "anti-tab" character to check when a scope ends (in C, for example, I would just use the '}' char), I was wondering if this method would be best: I would define a global variable, say int tabIndex, in my yacc file, which I would access in my lex file using extern. Every time I find a tab character in my lex file, I would increment that variable by 1. When parsing in my yacc file, if I find a "break" keyword, I would decrement the number typed after it from the tabIndex variable, and if I reach EOF after compiling with tabIndex != 0, I would output a compilation error. Now the problem is: what's the best way to detect that the indentation got reduced? Should I read \b (backspace) chars from lex and then reduce the tabIndex variable (when the user doesn't use break), or is there another method to achieve this, as sketched below? Also, one more small question: I want every executable to have its starting point in a function called start(); should I hardcode this into my yacc file? Sorry for the long question; any help is greatly appreciated. Also, if someone can provide a yacc file for Python as a guideline, that would be nice (I tried looking on Google and had no luck). Thanks in advance.
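
    For the "when does a scope end" problem, a hedged sketch of the approach Python's own tokenizer takes, adapted to a flex action: there is no backspace character in the input; a dedent is simply a line that starts with less leading whitespace than the previous one. Keep a stack of indentation widths, compare each new line's indentation against the top, and emit synthetic INDENT/DEDENT tokens to the parser:

        /* inside the lexer: called with the width of a new line's indentation */
        static int indent_stack[64] = {0};   /* stack of active indent widths */
        static int indent_top = 0;

        /* returns +1 to emit one INDENT, -n to emit n DEDENTs, 0 for no change */
        int classify_indent(int width)
        {
            int dedents = 0;
            if (width > indent_stack[indent_top]) {
                indent_stack[++indent_top] = width;   /* deeper: one INDENT */
                return 1;
            }
            while (indent_top > 0 && width < indent_stack[indent_top]) {
                --indent_top;                         /* shallower: pop a level */
                ++dedents;
            }
            return -dedents;
        }

    With INDENT/DEDENT as real tokens, "break 2" becomes an ordinary grammar rule, and unbalanced indentation at EOF falls out as a parse error instead of a tabIndex check.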


  • Compiling Objective-C project on Linux (Ubuntu)

    - by Alex
    How do I make an Objective-C project work on Ubuntu? My files are:

    Fraction.h:

        #import <Foundation/NSObject.h>

        @interface Fraction: NSObject {
            int numerator;
            int denominator;
        }

        -(void) print;
        -(void) setNumerator: (int) n;
        -(void) setDenominator: (int) d;
        -(int) numerator;
        -(int) denominator;
        @end

    Fraction.m:

        #import "Fraction.h"
        #import <stdio.h>

        @implementation Fraction
        -(void) print {
            printf( "%i/%i", numerator, denominator );
        }

        -(void) setNumerator: (int) n {
            numerator = n;
        }

        -(void) setDenominator: (int) d {
            denominator = d;
        }

        -(int) denominator {
            return denominator;
        }

        -(int) numerator {
            return numerator;
        }
        @end

    main.m:

        #import <stdio.h>
        #import "Fraction.h"

        int main( int argc, const char *argv[] ) {
            // create a new instance
            Fraction *frac = [[Fraction alloc] init];

            // set the values
            [frac setNumerator: 1];
            [frac setDenominator: 3];

            // print it
            printf( "The fraction is: " );
            [frac print];
            printf( "\n" );

            // free memory
            [frac release];

            return 0;
        }

    I've tried two approaches to compile it.

    1. Pure gcc:

        $ sudo apt-get install gobjc gnustep gnustep-devel
        $ gcc `gnustep-config --objc-flags` -o main main.m -lobjc -lgnustep-base
        /tmp/ccIQKhfH.o:(.data.rel+0x0): undefined reference to `__objc_class_name_Fraction'

    2. I created a GNUmakefile:

        include ${GNUSTEP_MAKEFILES}/common.make

        TOOL_NAME = main
        main_OBJC_FILES = main.m

        include ${GNUSTEP_MAKEFILES}/tool.make

    ... and ran:

        $ source /usr/share/GNUstep/Makefiles/GNUstep.sh
        $ make
        Making all for tool main...
        Linking tool main ...
        ./obj/main.o:(.data.rel+0x0): undefined reference to `__objc_class_name_Fraction'

    So in both cases the build gets stuck at undefined reference to `__objc_class_name_Fraction'. Do you have any idea how to resolve this issue?
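
    A hedged observation: both builds compile only main.m, so the linker never sees the Fraction implementation, which is exactly what an undefined reference to __objc_class_name_Fraction suggests. The likely fix is to add Fraction.m to each build:

        $ gcc `gnustep-config --objc-flags` -o main main.m Fraction.m -lobjc -lgnustep-base

        # and in the GNUmakefile:
        main_OBJC_FILES = main.m Fraction.m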


  • Incorrect logic flow? function that gets coordinates for a sudoku game

    - by igor
    This function of mine keeps failing an autograder, and I am trying to figure out if there is a problem with its logic flow. Any thoughts? Basically: if the row is wrong, "Invalid row" should be printed, clearInput() called, and false returned. When y is wrong, "Invalid column" should be printed, clearInput() called, and false returned. When both are wrong, only "Invalid row" is to be printed (and still clearInput() called and false returned). Obviously, when row and y are both correct, no error is printed and true is returned.

        bool getCoords(int & x, int & y)
        {
            char row;
            bool noError = true;

            cin >> row >> y;
            row = toupper(row);

            if (row >= 'A' && row <= 'I' && isalpha(row) && y >= 1 && y <= 9)
            {
                x = row - 'A';
                y = y - 1;
                return true;
            }
            else if (!(row >= 'A' && row <= 'I'))
            {
                cout << "Invalid row" << endl;
                noError = false;
                clearInput();
                return false;
            }
            else
            {
                if (noError)
                {
                    cout << "Invalid column" << endl;
                }
                clearInput();
                return false;
            }
        }

    My function gets through most of the test cases but fails towards the end, and I'm a little lost as to why.
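
    One hedged guess at the failing cases, since the grader's inputs aren't shown: if the column isn't numeric (say the user types "A B"), cin >> y fails, y keeps whatever stale value it had, and the stream's fail state makes subsequent reads misbehave; the noError flag, by contrast, is always true at the point it is tested, so it does nothing. A sketch of a stream-state check before validating the values:

        if (!(cin >> row >> y))   // extraction failed, e.g. a non-numeric column
        {
            cout << "Invalid column" << endl;  // hypothetical: which message applies depends on the spec
            clearInput();
            return false;
        }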


  • Consulting a Prolog Source Code from within a VS2008 Solution File

    - by Joshua Green
    I have a Prolog file (Hanoi.pl) containing the code for solving the Towers of Hanoi puzzle:

        hanoi( N ):-
            move( N, left, middle, right ).

        move( 0, _, _, _ ):- !.
        move( N, A, B, C ):-
            M is N-1,
            move( M, A, C, B ),
            inform( A, B ),
            move( M, C, B, A ).

        inform( X, Y ):-
            write( 'move a disk from ' ),
            write( X ),
            write( ' to ' ),
            writeln( Y ).

    I also have a C++ file written in the VS2008 IDE:

        #include <iostream>
        #include <string>
        #include <stdio.h>
        #include <stdlib.h>

        using namespace std;

        #include "SWI-cpp.h"
        #include "SWI-Prolog.h"

        predicate_t phanoi;
        term_t t0;

        int main(int argc, char** argv)
        {
            long n = 5;
            int rval;

            if ( !PL_initialise(1, argv) )
                PL_halt(1);

            PL_put_integer( t0, n );

            phanoi = PL_predicate( "hanoi", 1, NULL );
            rval = PL_call_predicate( NULL, PL_Q_NORMAL, phanoi, t0 );

            system( "PAUSE" );
        }

    How can I consult my Prolog source code (Hanoi.pl) from within my C++ code? Not from the command prompt; from the code, something like include or consult or compile. It is located in the same folder as my cpp file. Thanks,
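
    A sketch of one way, mirroring the PlCall-based consult that appears in the next question on this page (it assumes the SWI-cpp.h interface already included above). As a hedged aside, t0 is also never created, which PL_put_integer requires:

        PlCall( "consult('Hanoi.pl')" );   // load the file before querying hanoi/1

        term_t t0 = PL_new_term_ref();     // create the term ref before PL_put_integer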


  • How to avoid using the plld.exe utility in VS2008 (for linking C++ and Prolog code)

    - by Joshua Green
    Here is my code in its entirety. Trying "listing." at the Prolog prompt that pops up when I run the program confirms that my Prolog source code has been loaded (consulted):

        #include <iostream>
        #include <fstream>
        #include <string>
        #include <math.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <stdafx.h>

        using namespace std;

        #include "Windows.h"
        #include "ctype.h"
        #include "SWI-cpp.h"
        #include "SWI-Prolog.h"
        #include "SWI-Stream.h"

        int main(int argc, char** argv)
        {
            argc = 4;
            argv[0] = "libpl.dll";
            argv[1] = "-G32m";
            argv[2] = "-L32m";
            argv[3] = "-T32m";

            PL_initialise(argc, argv);
            if ( !PL_initialise(argc, argv) )
                PL_halt(1);

            PlCall( "consult(swi('plwin.rc'))" );
            PlCall( "consult('hello.pl')" );

            PL_halt( PL_toplevel() ? 0 : 1 );
        }

    So this is how to load a Prolog source file (hello.pl) at run time into VS2008 without having to use plld at the VS command prompt.


  • Perl's use encoding pragma breaking UTF strings

    - by Karel Bílek
    I have a problem with Perl and the encoding pragma. (I use UTF-8 everywhere: in input, output, and the Perl scripts themselves. I don't want to use any other encoding, ever.) However, when I write:

        binmode(STDOUT, ':utf8');
        use utf8;
        $r = "\x{ed}";
        print $r;

    I see the string "í" (which is what I want, and which is the U+00ED Unicode char). But when I add the "use encoding" pragma like this:

        binmode(STDOUT, ':utf8');
        use utf8;
        use encoding 'utf8';
        $r = "\x{ed}";
        print $r;

    all I see is a box character. Why? Moreover, when I add Data::Dumper and let it dump the new string like this:

        binmode(STDOUT, ':utf8');
        use utf8;
        use encoding 'utf8';
        $r = "\x{ed}";
        use Data::Dumper;
        print Dumper($r);

    I see that Perl changed the string to "\x{fffd}". Why?


  • openssl crypto library - base64 conversion

    - by Hassan Syed
    I'm using openssl BIO objects to convert a binary string into a base64 string. The code is as follows:

        void ToBase64(std::string & s_in)
        {
            BIO * b_s = BIO_new( BIO_s_mem() );
            BIO * b64_f = BIO_new( BIO_f_base64() );
            b_s = BIO_push( b64_f, b_s );

            std::cout << "IN::" << s_in.length();

            BIO_write(b_s, s_in.c_str(), s_in.length());

            char * pp;
            int sz = BIO_get_mem_data(b_s, &pp);
            std::cout << "OUT::" << sz << endl;

            s_in.assign(pp, sz);
            //std::cout << sz << " " << std::string(pp,sz) << std::endl;

            BIO_free (b64_f); // TODO ret error potential
            BIO_free (b_s);
        }

    The input length is either 64 or 72; however, the output is always 65, which is incorrect: it should be much larger than that. The documentation isn't the best in the world. AFAIK the BIO_s_mem object is supposed to grow dynamically. What am I doing wrong?
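
    A hedged diagnosis: BIO_f_base64 buffers its input internally and only emits the encoded bytes when the chain is flushed, so reading before a flush sees almost nothing. A sketch of the likely fix; note it also keeps the memory BIO in its own variable (mem_bio here, an illustrative name for what the code above loses when it reassigns b_s) so the encoded data is fetched from the right BIO:

        BIO_write(b_s, s_in.c_str(), s_in.length());
        (void) BIO_flush(b_s);   /* push buffered bytes through the base64 filter */

        char * pp;
        int sz = BIO_get_mem_data(mem_bio, &pp);   /* read from the memory BIO itself */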

