Search Results

Search found 88594 results on 3544 pages for 'vc project file'.


  • deploying project to tomcat ROOT

    - by stsd
    I have deployed my project to the Tomcat root, and it works fine. To achieve this I created a context file at TOMCAT_HOME/conf/Catalina/localhost/ROOT.xml with the content below:

        <Context docBase="/home/user/project.war" path="" reloadable="true" />

    So right now I can see my project under localhost:8080/ without any problem, but I don't know where my project has been extracted; there is not even a ROOT directory under TOMCAT_HOME/webapps. Any idea?
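    For reference, a minimal sketch of the two pieces involved, the default Host from server.xml plus the per-application context file quoted above; the appBase/unpackWARs/autoDeploy values shown here are assumptions, not taken from the poster's configuration:

        <!-- TOMCAT_HOME/conf/server.xml (default Host; attribute values assumed) -->
        <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" />

        <!-- TOMCAT_HOME/conf/Catalina/localhost/ROOT.xml (as in the question) -->
        <!-- docBase points outside appBase, so the WAR is served in place and nothing appears under webapps/ROOT -->
        <Context docBase="/home/user/project.war" path="" reloadable="true" />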

    Read the article

  • Problem building a Flex project

    - by nemade-vipin
    Hello friends, I have created an application. After I clean the project and then use the Build Project option, it does not build the SWF file again, so my changes are not reflected. Please tell me how to build the project in Flex Builder.

    Read the article

  • importing project in Eclipse

    - by Zachary
    Hello, I am new to Eclipse (and a novice in Java). I am creating a project which should make use of some classes from another project. Do I necessarily have to export that other project as a JAR file and add it to my project, or are there other alternatives?

    Read the article

  • PHP remote project and subversion

    - by jax
    I am working on a live remote PHP project in Eclipse, i.e. I just connect to the project using RSE, edit the files and save. I have recently set up Subclipse and am wondering if there is a way to add my PHP files to a Subversion project while still working on the live project. Or maybe there is a better way to do this and get the same result.

    Read the article

  • How to open an existing .NET C# project in Visual Studio 2008

    - by LS
    I already have a project folder with the relevant files inside. I need to point Visual Studio at this project folder, but I just can't find how to do it. It seems I can only create a new project or open a solution file, not set the working directory to the folder I want and start working on the existing project. There is no solution file in the folder either.

    Read the article

  • Size doesn't matter

    - by ssoolsma
    Whenever I start a new project I *always* break up my code into different projects, also known as an n-tier solution. The scale of the project doesn't matter, but make sure that each project is responsible for itself. I make sure that I:

    1. At least thought about how the project should work, on the toilet or in a project team meeting.
    2. Have a solution directory and create my projects within it. I like to name my projects (and their folders) by their namespaces. For instance, when I'm creating a piece of (web) software called ChuckNorris, I always include the software name in my projects.
    3. Start off with designing the DataAccess project. I name it ChuckNorris.DataAccess, which lets me easily identify the project in case the project scales a lot.
    4. Build the classes which represent the database structure. Don't stop working on a class until it's finished for now. Also, don't over-do the methods: build stuff only when it's needed, and don't think "Hm, that would be cool to have", because most of the time you end up with unused code, and we don't want that.
    5. Build a unit-test project and make sure you create its folder inside the project that it's testing. So, create the ChuckNorris.DataAccess.UnitTest project inside the folder of the data-access project. I would suggest using the NUnit test framework. In case you thought "hm, I'll skip unit tests": don't! Just build it; it will save you a lot of time later on.
    6. Now, read 5 again. Build that bloody unit test. Don't skip. (I can't emphasize this enough.)
    7. Now, every class in the data-access project is responsible for itself. They don't rely on each other. This is what we use the BusinessLogic project for. Start creating the ChuckNorris.BusinessLogic project (not inside the data-access project of course, but within the ChuckNorris folder). Combine stuff from data access. This usually involves a lot of copying the data-access classes and feels silly at first (we'll get to that later on).
    8. Now you come to the point of creating a service project. You might not always see why to use it, but see it as a way to expose your business logic to any application (including your own). Sometimes I use it as a so-called "factory": every call goes through this factory, so that's the only thing I'm exposing to any program, and I make sure that those methods are the only ones I allow you to invoke. See the sketch below for what this layering looks like in code.
    9. Build any UI (website, phone app, forms application, Silverlight, WPF or whatever) and reference your service project.
    10. Fall in love (cough) with this approach.

    It's possible that this doesn't seem to make much sense, and is very incomplete. Well, that last part is correct. The next post will go into detail on setting up your data-access project using the Entity Framework.
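    To make the layering concrete, here is a minimal C# sketch of the structure described above; the ChuckNorris namespaces come from the post itself, while the class and method names are invented purely for illustration:

        // ChuckNorris.DataAccess - classes that mirror the database structure
        namespace ChuckNorris.DataAccess
        {
            public class CustomerData                       // hypothetical table wrapper
            {
                public string GetName(int id) { return "stubbed name for " + id; }
            }
        }

        // ChuckNorris.BusinessLogic - combines and validates data-access results
        namespace ChuckNorris.BusinessLogic
        {
            public class CustomerLogic
            {
                private readonly DataAccess.CustomerData _data = new DataAccess.CustomerData();
                public string DisplayName(int id) { return _data.GetName(id).Trim(); }
            }
        }

        // ChuckNorris.Service - the "factory": the only surface exposed to any caller (UI or other apps)
        namespace ChuckNorris.Service
        {
            public static class ServiceFactory
            {
                public static string GetCustomerDisplayName(int id)
                {
                    return new BusinessLogic.CustomerLogic().DisplayName(id);
                }
            }
        }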

    Read the article

  • Survey of project-administration experience [on hold]

    - by Salvador Beltrán
    My name is Salvador, I'm a Computer Systems Engineering student and I'm looking for people to contribute to my research with real opinions and experience (it is an investigation into problems in the project management area); just to be clear, it can be any kind of project. If you help me with these questions I would appreciate it very much! :)

    1 - Any kind of problem that occurred during the project administration process (just a description).
    2 - What was the impact?
    3 - What was the solution adopted to avoid this problem in the future?
    4 - What do you do (software engineering, networking, etc.)?

    Thank you very much!

    Read the article

  • Bigger ProjectServer farm is performing worse

    - by MSPS DBA
    I am using Project Server 2007 SP3 with SharePoint 2007 SP3 and SQL Server 2008 R2. I have recently moved my farm from 2 servers (1 DB and 1 App/Web) to a very big farm with many servers, a clustered database, a load balancer, powerful processors and large RAM. This farm has more than one web server, Project application servers, SharePoint application servers and a separate index server. But the performance of Project Server in the new farm has degraded: views take even more time to load data, and project publishing time has also increased. I am also facing deadlock problems which cause Project Server queue jobs to fail. Could anyone tell me what the reason for this might be, and what the starting point should be for looking into the issue? Is it mainly because the application server now needs to communicate with other application servers, which was not needed in the previous farm? Thanks!

    Read the article

  • Flow control in a batch file

    - by dboarman-FissureStudios
    Reference: Iterating arrays in a batch file. I have the following:

        for /f "tokens=1" %%Q in ('query termserver') do (
            if not ERRORLEVEL (
                echo Checking %%Q
                for /f "tokens=1" %%U in ('query user %UserID% /server:%%Q') do (echo %%Q)
            )
        )

    When running query termserver from the command line, the first two lines are "Known" and a row of dashes, followed by the list of terminal servers. However, I do not want to include these as part of the query user command. Also, there are about 4 servers I do not wish to include. When I supply UserID with this code, the program promptly exits; I know it has something to do with the if statement. Is it not possible to nest flow control inside the for loop? I had tried setting a variable to exactly the names of the servers I wanted to check, but the iteration would end on the first server:

        set TermServers=Server1.Server2.Server3.Server7.Server8.Server10
        for /f "tokens=2 delims=.=" %%Q in ('set TermServers') do (
            echo Checking %%Q
            for /f "tokens=1" %%U in ('query user %UserID% /server:%%Q') do (echo %%Q)
        )

    I would prefer this second example over the first, if nothing else for cleanliness. Any help regarding either of these issues would be greatly appreciated.
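    For comparison, a minimal batch sketch of the same nested if/for shape using the numeric form of the errorlevel test (if errorlevel 1 means "errorlevel is 1 or higher") and skip=2 to drop the two header lines; the list of servers to exclude is purely hypothetical:

        @echo off
        setlocal EnableDelayedExpansion
        rem Servers to exclude (hypothetical names)
        set Skip=SERVER4 SERVER5 SERVER6 SERVER9

        for /f "skip=2 tokens=1" %%Q in ('query termserver') do (
            echo !Skip! | find /i "%%Q" >nul
            if errorlevel 1 (
                echo Checking %%Q
                for /f "tokens=1" %%U in ('query user %UserID% /server:%%Q') do echo %%U
            )
        )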

    Read the article

  • How to save a picture to a file?

    - by Peter vdL
    I'm trying to use a standard Intent that will take a picture, then allow approval or retake. Then I want to save the picture into a file. Here's the Intent I am using:

        Intent intent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
        startActivityForResult(intent, 22);

    The docs at http://developer.android.com/reference/android/provider/MediaStore.html#ACTION_IMAGE_CAPTURE say: "The caller may pass an extra EXTRA_OUTPUT to control where this image will be written. If the EXTRA_OUTPUT is not present, then a small sized image is returned as a Bitmap object in the extra field. If the EXTRA_OUTPUT is present, then the full-sized image will be written to the Uri value of EXTRA_OUTPUT." I don't pass EXTRA_OUTPUT, so I expect to get a Bitmap object in the extra field of the Intent passed into onActivityResult() (for this request). So where/how do you extract it? Intent has a getExtras(), but that returns a Bundle, and a Bundle wants a key string to give you something back. What do you invoke on the Intent to extract the bitmap?
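    For context, a minimal sketch of the receiving side; it assumes the capture thumbnail is returned in the extras under the "data" key and that the request code matches the 22 used above (imports of android.content.Intent, android.graphics.Bitmap, java.io.FileOutputStream and java.io.IOException are assumed, and the method lives in the Activity that called startActivityForResult):

        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            super.onActivityResult(requestCode, resultCode, data);
            if (requestCode == 22 && resultCode == RESULT_OK && data != null) {
                // The small preview image sits in the extras under the "data" key.
                Bitmap thumbnail = (Bitmap) data.getExtras().get("data");
                try (FileOutputStream out = openFileOutput("picture.png", MODE_PRIVATE)) {
                    // Write the bitmap to app-private storage as a PNG.
                    thumbnail.compress(Bitmap.CompressFormat.PNG, 100, out);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }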

    Read the article

  • aio_read from file error on OS X

    - by Pyetras
    The following code:

        #include <fcntl.h>
        #include <unistd.h>
        #include <stdio.h>
        #include <aio.h>
        #include <errno.h>

        int main (int argc, char const *argv[])
        {
            char name[] = "abc";
            int fdes;
            if ((fdes = open(name, O_RDWR | O_CREAT, 0600)) < 0)
                printf("%d, create file", errno);
            int buffer[] = {0, 1, 2, 3, 4, 5};
            if (write(fdes, &buffer, sizeof(buffer)) == 0) {
                printf("writerr\n");
            }
            struct aiocb aio;
            int n = 2;
            while (n--) {
                aio.aio_reqprio = 0;
                aio.aio_fildes = fdes;
                aio.aio_offset = sizeof(int);
                aio.aio_sigevent.sigev_notify = SIGEV_NONE;
                int buffer2;
                aio.aio_buf = &buffer2;
                aio.aio_nbytes = sizeof(buffer2);
                if (aio_read(&aio) != 0) {
                    printf("%d, readerr\n", errno);
                } else {
                    const struct aiocb *aio_l[] = {&aio};
                    if (aio_suspend(aio_l, 1, 0) != 0) {
                        printf("%d, suspenderr\n", errno);
                    } else {
                        printf("%d\n", *(int *)aio.aio_buf);
                    }
                }
            }
            return 0;
        }

    works fine on Linux (Ubuntu 9.10, compiled with -lrt), printing:

        1
        1

    but fails on OS X (10.6.6 and 10.6.5, I've tested it on two machines):

        1
        35, readerr

    Is it possible that this is due to some library error on OS X, or am I doing something wrong?

    Read the article

  • [UNIX] Sort lines of massive file by number of words on line (ideally in parallel)

    - by conradlee
    I am working on a community detection algorithm for analyzing social network data from Facebook. The first task, detecting all cliques in the graph, can be done efficiently in parallel, and leaves me with an output like this:

        17118 17136 17392
        17064 17093 17376
        17118 17136 17356 17318 12345
        17118 17136 17356 17283
        17007 17059 17116

    Each of these lines represents a unique clique (a collection of node ids), and I want to sort these lines in descending order by the number of ids per line. In the case of the example above, here's what the output should look like:

        17118 17136 17356 17318 12345
        17118 17136 17356 17283
        17118 17136 17392
        17064 17093 17376
        17007 17059 17116

    (Ties, i.e. lines with the same number of ids, can be sorted arbitrarily.) What is the most efficient way of sorting these lines? Keep the following points in mind:

        - The file I want to sort could be larger than the physical memory of the machine.
        - Most of the machines that I'm running this on have several processors, so a parallel solution would be ideal.
        - An ideal solution would just be a shell script (probably using sort), but I'm open to simple solutions in Python or Perl (or any language, as long as it makes the task simple).
        - This task is in some sense very easy: I'm not just looking for any old solution, but rather for a simple and above all efficient solution.
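    As one concrete example of the shell-script route mentioned above, a decorate-sort-undecorate pipeline; the file name is a placeholder, and the --parallel and -S flags assume a reasonably recent GNU sort (they can be dropped otherwise):

        # Prefix each line with its word count, sort numerically descending on that count, strip the prefix.
        awk '{ print NF, $0 }' cliques.txt | sort -k1,1nr --parallel=4 -S 2G | cut -d' ' -f2- > cliques.sorted.txt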

    Read the article

  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day and who knows how many updates. A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought of making a simple console app that tests/times the same interactions against a flat file (stored on the network) and against SQL Server over the network: large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

        - Security
        - Concurrent access
        - Performance with large amounts of data
        - Amount of time to do such a massive rewrite/switch
        - Lack of transactions
        - PITA to map relational data to flat files
        - NTFS doesn't handle tons of files in a directory well

    I fear that this will make a great post on The Daily WTF someday if I can't stop it now.
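    As a starting point for the demo console app described above, a minimal C# timing sketch; the connection string, table schema (Records(Id int, Payload nvarchar(100))) and UNC path are placeholders, and the comparison is deliberately simplistic:

        using System;
        using System.Data.SqlClient;
        using System.Diagnostics;
        using System.IO;

        class FlatFileVsSql
        {
            static void Main()
            {
                const int rows = 100000;
                const string flatFile = @"\\server\share\records.txt";   // hypothetical network path

                // Flat file: append all rows, then scan the whole file to find one record.
                var sw = Stopwatch.StartNew();
                using (var w = new StreamWriter(flatFile))
                    for (int i = 0; i < rows; i++)
                        w.WriteLine("{0}\tsome payload", i);
                bool foundFlat = false;
                foreach (var line in File.ReadLines(flatFile))
                    if (line.StartsWith("99999\t")) { foundFlat = true; break; }
                Console.WriteLine("Flat file: {0} ms (found: {1})", sw.ElapsedMilliseconds, foundFlat);

                // SQL Server: parameterized inserts, then an indexed lookup.
                sw.Restart();
                using (var cn = new SqlConnection("Server=dbserver;Database=Demo;Integrated Security=true"))
                {
                    cn.Open();
                    var insert = new SqlCommand("INSERT INTO Records (Id, Payload) VALUES (@id, @p)", cn);
                    insert.Parameters.Add("@id", System.Data.SqlDbType.Int);
                    insert.Parameters.Add("@p", System.Data.SqlDbType.NVarChar, 100);
                    for (int i = 0; i < rows; i++)
                    {
                        insert.Parameters["@id"].Value = i;
                        insert.Parameters["@p"].Value = "some payload";
                        insert.ExecuteNonQuery();
                    }
                    var lookup = new SqlCommand("SELECT Payload FROM Records WHERE Id = 99999", cn);
                    Console.WriteLine("SQL Server: {0} ms (found: {1})", sw.ElapsedMilliseconds, lookup.ExecuteScalar() != null);
                }
            }
        }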

    Read the article

  • Batch File to Delete Folders

    - by Homebrew
    I found some code to delete folders, in this case deleting all but 'n' # of folders. I created 10 test folders, plus 1 that was already there. I want to delete all but 4. The code works, it leaves 4 of my test folders, except that it also leaves the other folder. Is there some attribute of the other folder that's getting checked in the batch file that's stopping it from getting deleted? It was created through a job a couple of weeks ago. Here's the code I stole (but don't really understand the details):

        rem DOS - Delete Folders if # folders > n
        @Echo Off
        :: User Variables
        :: Set this to the number of folders you want to keep
        Set _NumtoKeep=4
        :: Set this to the folder that contains the folders to check and delete
        Set _Path=C:\MyFolder_Temp\FolderTest

        If Exist "%temp%\tf}1{" Del "%temp%\tf}1{"
        PushD %_Path%
        Set _s=%_NumtoKeep%
        If %_NumtoKeep%==1 set _s=single
        For /F "tokens=* skip=%_NumtoKeep%" %%I In ('dir "%_Path%" /AD /B /O-D /TW') Do (
            If Exist "%temp%\tf}1{" (
                Echo %%I:%%~fI >>"%temp%\tf}1{"
            ) Else (
                Echo.>"%temp%\tf}1{"
                Echo Do you wish to delete the following folders?>>"%temp%\tf}1{"
                Echo Date Name>>"%temp%\tf}1{"
                Echo %%I:%%~fI >>"%temp%\tf}1{"
            ))
        PopD
        If Not Exist "%temp%\tf}1{" Echo No Folders Found to delete & Goto _Done
        Type "%temp%\tf}1{" | More
        Set _rdflag= /q
        Goto _Removeold
        Set _rdflag=
        :_Removeold
        For /F "tokens=1* skip=3 Delims=:" %%I In ('type "%temp%\tf}1{"') Do (
            If "%_rdflag%"=="" Echo Deleting
            rd /s%_rdflag% "%%J")
        :_Done
        If Exist "%temp%\tf}1{" Del "%temp%\tf}1{"

    Read the article

  • NullReferenceException when reading from a file

    - by Whitey
    I need to read a file structured like this:

        01000
        00030
        00500
        03000
        00020

    and put it in an array like this:

        int[,] iMap = new int[iMapHeight, iMapWidth]
        {
            {0, 1, 0, 0, 0},
            {0, 0, 0, 3, 0},
            {0, 0, 5, 0, 0},
            {0, 3, 0, 0, 0},
            {0, 0, 0, 2, 0},
        };

    Hopefully you see what I'm trying to do here. I was confused about how to do this, so I asked here on SO, but the code I got from it gives this error: "Object reference not set to an instance of an object." I'm pretty new to this, so I have no idea how to fix it; I only barely know the code:

        protected void ReadMap(string mapPath)
        {
            using (var reader = new StreamReader(mapPath))
            {
                for (int i = 0; i < iMapHeight; i++)
                {
                    string line = reader.ReadLine();
                    for (int j = 0; j < iMapWidth; j++)
                    {
                        iMap[i, j] = (int)(line[j] - '0');
                    }
                }
            }
        }

    The line I get the error on is this:

        iMap[i, j] = (int)(line[j] - '0');

    Can anyone provide a solution? Thank you. :)

    Read the article

  • C++ File manipulation problem

    - by Carlucho
    I am trying to open a file which normally has content. For the purpose of testing I would like to initialize the program without the files being available/existing, in which case the program should create empty ones, but I am having issues implementing it. This is my code originally:

        void loadFiles() {
            fstream city;
            city.open("city.txt", ios::in);
            fstream latitude;
            latitude.open("lat.txt", ios::in);
            fstream longitude;
            longitude.open("lon.txt", ios::in);

            while (!city.eof()) {
                city >> cityName;
                latitude >> lat;
                longitude >> lon;
                t.add(cityName, lat, lon);
            }

            city.close();
            latitude.close();
            longitude.close();
        }

    I have tried everything I can think of: ofstream, ifstream, adding ios::out and all its variations. Could anybody explain what to do in order to fix the problem? Thanks!
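    A minimal sketch of one common way to get the behaviour described (create an empty file when opening for reading fails, then reopen it); the file name comes from the question, the rest is purely illustrative:

        #include <fstream>
        #include <iostream>

        int main() {
            // Try to open city.txt for reading; if it does not exist, create an empty one and retry.
            std::fstream city("city.txt", std::ios::in);
            if (!city.is_open()) {
                std::ofstream create("city.txt");   // constructing an ofstream creates the (empty) file
                create.close();
                city.open("city.txt", std::ios::in);
            }
            std::cout << (city.is_open() ? "open" : "failed") << "\n";
            return 0;
        }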

    Read the article

  • SQL query error while trying to put a file in the database

    - by DaGhostman Dimitrov
    Hey guys, I have a big problem and I have no idea why. I have a few forms that upload files to the database; all of them work fine except one. I use the same query in all of them (a simple insert). I think it has something to do with the files I am trying to upload, but I am not sure. Here is the code:

        if ($_POST['action'] == 'hndlDocs') {
            $ref = $_POST['reference']; // Numeric value
            $doc = file_get_contents($_FILES['doc']['tmp_name']);
            $link = mysqli_connect('localhost', 'XXXXX', 'XXXXX', 'documents');
            mysqli_query($link, "SET autocommit = 0");
            $query = "INSERT INTO documents ({$ref}, '{$doc}', '{$_FILES['doc']['type']}') ;";
            mysqli_query($link, $query);
            if (mysqli_error($link)) {
                var_dump(mysqli_error($link));
                mysqli_rollback($link);
            } else {
                print("<script> window.history.back(); </script>");
                mysqli_commit($link);
            }
        }

    The database has only these fields:

        DATABASE documents (
            reference INT(5) NOT NULL,   // it is unsigned zerofill
            doc LONGBLOB NOT NULL,       // this should contain the binary data
            mime_type TEXT NOT NULL      // the mime type of the file; PHP allows only application/pdf and image/jpeg
        );

    And the error I get is:

        You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version
        for the right syntax to use near '00001, '????' at line 1

    I will appreciate any help. Cheers!

    Read the article

  • reading the file name from user input in MIPS assembly

    - by Hassan Al-Jeshi
    I'm writing a MIPS assembly code that will ask the user for the file name and it will produce some statistics about the content of the file. However, when I hard code the file name into a variable from the beginning it works just fine, but when I ask the user to input the file name it does not work. after some debugging, I have discovered that the program adds 0x00 char and 0x0a char (check asciitable.com) at the end of user input in the memory and that's why it does not open the file based on the user input. anyone has any idea about how to get rid of those extra chars, or how to open the file after getting its name from the user?? here is my complete code (it is working fine except for the file name from user thing, and anybody is free to use it for any purpose he/she wants to): .data fin: .ascii "" # filename for input msg0: .asciiz "aaaa" msg1: .asciiz "Please enter the input file name:" msg2: .asciiz "Number of Uppercase Char: " msg3: .asciiz "Number of Lowercase Char: " msg4: .asciiz "Number of Decimal Char: " msg5: .asciiz "Number of Words: " nline: .asciiz "\n" buffer: .asciiz "" .text #----------------------- li $v0, 4 la $a0, msg1 syscall li $v0, 8 la $a0, fin li $a1, 21 syscall jal fileRead #read from file move $s1, $v0 #$t0 = total number of bytes li $t0, 0 # Loop counter li $t1, 0 # Uppercase counter li $t2, 0 # Lowercase counter li $t3, 0 # Decimal counter li $t4, 0 # Words counter loop: bge $t0, $s1, end #if end of file reached OR if there is an error in the file lb $t5, buffer($t0) #load next byte from file jal checkUpper #check for upper case jal checkLower #check for lower case jal checkDecimal #check for decimal jal checkWord #check for words addi $t0, $t0, 1 #increment loop counter j loop end: jal output jal fileClose li $v0, 10 syscall fileRead: # Open file for reading li $v0, 13 # system call for open file la $a0, fin # input file name li $a1, 0 # flag for reading li $a2, 0 # mode is ignored syscall # open a file move $s0, $v0 # save the file descriptor # reading from file just opened li $v0, 14 # system call for reading from file move $a0, $s0 # file descriptor la $a1, buffer # address of buffer from which to read li $a2, 100000 # hardcoded buffer length syscall # read from file jr $ra output: li $v0, 4 la $a0, msg2 syscall li $v0, 1 move $a0, $t1 syscall li $v0, 4 la $a0, nline syscall li $v0, 4 la $a0, msg3 syscall li $v0, 1 move $a0, $t2 syscall li $v0, 4 la $a0, nline syscall li $v0, 4 la $a0, msg4 syscall li $v0, 1 move $a0, $t3 syscall li $v0, 4 la $a0, nline syscall li $v0, 4 la $a0, msg5 syscall addi $t4, $t4, 1 li $v0, 1 move $a0, $t4 syscall jr $ra checkUpper: blt $t5, 0x41, L1 #branch if less than 'A' bgt $t5, 0x5a, L1 #branch if greater than 'Z' addi $t1, $t1, 1 #increment Uppercase counter L1: jr $ra checkLower: blt $t5, 0x61, L2 #branch if less than 'a' bgt $t5, 0x7a, L2 #branch if greater than 'z' addi $t2, $t2, 1 #increment Lowercase counter L2: jr $ra checkDecimal: blt $t5, 0x30, L3 #branch if less than '0' bgt $t5, 0x39, L3 #branch if greater than '9' addi $t3, $t3, 1 #increment Decimal counter L3: jr $ra checkWord: bne $t5, 0x20, L4 #branch if 'space' addi $t4, $t4, 1 #increment words counter L4: jr $ra fileClose: # Close the file li $v0, 16 # system call for close file move $a0, $s0 # file descriptor to close syscall # close file jr $ra Note: I'm using MARS Simulator, if that makes any different

    Read the article

  • Run bat file in Java and wait 2

    - by Savvas Dalkitsis
    This is a follow-up question to my other question: http://stackoverflow.com/questions/2434125/run-bat-file-in-java-and-wait The reason I am posting this as a separate question is that the one I already asked was answered correctly; from some research I did, my problem is unique to my case, so I decided to create a new question. Please go read that question before continuing with this one, as they are closely related. Running the proposed code blocks the program at the waitFor invocation. After some research I found that the waitFor method blocks if your process has output that needs to be processed, so you should first empty the output stream and the error stream. I did those things but my method still blocks. I then found a suggestion to simply loop while waiting for the exitValue method to return the exit value of the process, handling the exception thrown if it has not exited yet and pausing for a brief moment so as not to consume all the CPU. I did this:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public class Test {
            public static void main(String[] args) {
                try {
                    Process p = Runtime.getRuntime().exec(
                            "cmd /k start SQLScriptsToRun.bat"
                            + " -UuserName -Ppassword"
                            + " projectName");
                    final BufferedReader input = new BufferedReader(new InputStreamReader(p.getInputStream()));
                    final BufferedReader error = new BufferedReader(new InputStreamReader(p.getErrorStream()));
                    new Thread(new Runnable() {
                        @Override
                        public void run() {
                            try {
                                while (input.readLine() != null) {}
                            } catch (IOException e) {
                                e.printStackTrace();
                            }
                        }
                    }).start();
                    new Thread(new Runnable() {
                        @Override
                        public void run() {
                            try {
                                while (error.readLine() != null) {}
                            } catch (IOException e) {
                                e.printStackTrace();
                            }
                        }
                    }).start();
                    int i = 0;
                    boolean finished = false;
                    while (!finished) {
                        try {
                            i = p.exitValue();
                            finished = true;
                        } catch (IllegalThreadStateException e) {
                            e.printStackTrace();
                            try {
                                Thread.sleep(500);
                            } catch (InterruptedException e1) {
                                e1.printStackTrace();
                            }
                        }
                    }
                    System.out.println(i);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

    but my process will not end! I keep getting this error:

        java.lang.IllegalThreadStateException: process has not exited

    Any ideas as to why my process will not exit? Or do you have any libraries to suggest that handle executing batch files properly and wait until the execution is finished?
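    For comparison, a minimal sketch of the same drain-the-output-then-wait pattern using ProcessBuilder; note that it launches the script with cmd /c (which terminates when the batch file finishes) instead of cmd /k start (which keeps a shell open), and the script name and arguments are simply the ones quoted above:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public class RunBatch {
            public static void main(String[] args) throws IOException, InterruptedException {
                ProcessBuilder pb = new ProcessBuilder(
                        "cmd", "/c", "SQLScriptsToRun.bat", "-UuserName", "-Ppassword", "projectName");
                pb.redirectErrorStream(true);             // merge stderr into stdout so one reader drains both
                Process p = pb.start();

                BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));
                String line;
                while ((line = out.readLine()) != null) { // drain output so the child never blocks on a full pipe
                    System.out.println(line);
                }

                int exitCode = p.waitFor();               // safe to wait once the output is exhausted
                System.out.println("Batch finished with exit code " + exitCode);
            }
        }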

    Read the article

  • Re: Help with Boost Grammar

    - by Decmac04
    I have redesigned and extended the grammar I asked about earlier as shown below: // BIFAnalyser.cpp : Defines the entry point for the console application. // // /*============================================================================= Copyright (c) Temitope Jos Onunkun 2010 http://www.dcs.kcl.ac.uk/pg/onun/ Use, modification and distribution is subject to the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) =============================================================================*/ //////////////////////////////////////////////////////////////////////////// // // // B Machine parser using the Boost "Grammar" and "Semantic Actions". // // // //////////////////////////////////////////////////////////////////////////// include include include include include include //////////////////////////////////////////////////////////////////////////// using namespace std; using namespace boost::spirit; //////////////////////////////////////////////////////////////////////////// // // Semantic Actions // //////////////////////////////////////////////////////////////////////////// // // namespace { //semantic action function on individual lexeme void do_noint(char const* start, char const* end) { string str(start, end); if (str != "NAT1") cout << "PUSH(" << str << ')' << endl; } //semantic action function on addition of lexemes void do_add(char const*, char const*) { cout << "ADD" << endl; // for(vector::iterator vi = strVect.begin(); vi < strVect.end(); ++vi) // cout << *vi << " "; } //semantic action function on subtraction of lexemes void do_subt(char const*, char const*) { cout << "SUBTRACT" << endl; } //semantic action function on multiplication of lexemes void do_mult(char const*, char const*) { cout << "\nMULTIPLY" << endl; } //semantic action function on division of lexemes void do_div(char const*, char const*) { cout << "\nDIVIDE" << endl; } // // vector flowTable; //semantic action function on simple substitution void do_sSubst(char const* start, char const* end) { string str(start, end); //use boost tokenizer to break down tokens typedef boost::tokenizer Tokenizer; boost::char_separator sep(" -+/*:=()",0,boost::drop_empty_tokens); // char separator definition Tokenizer tok(str, sep); Tokenizer::iterator tok_iter = tok.begin(); pair dependency; //create a pair object for dependencies //create a vector object to store all tokens vector dx; // int counter = 0; // tracks token position for(tok.begin(); tok_iter != tok.end(); ++tok_iter) //save all tokens in vector { dx.push_back(*tok_iter ); } counter = dx.size(); // vector d_hat; //stores set of dependency pairs string dep; //pairs variables as string object // dependency.first = *tok.begin(); vector FV; for(int unsigned i=1; i < dx.size(); i++) { // if(!atoi(dx.at(i).c_str()) && (dx.at(i) !=" ")) { dependency.second = dx.at(i); dep = dependency.first + "|-" + dependency.second + " "; d_hat.push_back(dep); vector<string> row; row.push_back(dependency.first); //push x_hat into first column of each row for(unsigned int j=0; j<2; j++) { row.push_back(dependency.second);//push an element (column) into the row } flowTable.push_back(row); //Add the row to the main vector } } //displays internal representation of information flow table cout << "\n****************\nDependency Table\n****************\n"; cout << "X_Hat\tDx\tG_Hat\n"; cout << "-----------------------------\n"; for(unsigned int i=0; i < flowTable.size(); i++) { for(unsigned int j=0; j<2; j++) { cout << 
flowTable[i][j] << "\t "; } if (*tok.begin() != "WHILE" ) //if there are no global flows, cout << "\t{}"; //display empty set cout << "\n"; } cout << "***************\n\n"; for(int unsigned j=0; j < FV.size(); j++) { if(FV.at(j) != dependency.second) dep = dependency.first + "|-" + dependency.second + " "; d_hat.push_back(dep); } cout << "PUSH(" << str << ')' << endl; cout << "\n*******\nDependency pairs\n*******\n"; for(int unsigned i=0; i < d_hat.size(); i++) cout << d_hat.at(i) << "\n...\n"; cout << "\nSIMPLE SUBSTITUTION\n\n"; } //semantic action function on multiple substitution void do_mSubst(char const* start, char const* end) { string str(start, end); cout << "PUSH(" << str << ')' << endl; //cout << "\nMULTIPLE SUBSTITUTION\n\n"; } //semantic action function on unbounded choice substitution void do_mChoice(char const* start, char const* end) { string str(start, end); cout << "PUSH(" << str << ')' << endl; cout << "\nUNBOUNDED CHOICE SUBSTITUTION\n\n"; } void do_logicExpr(char const* start, char const* end) { string str(start, end); //use boost tokenizer to break down tokens typedef boost::tokenizer Tokenizer; boost::char_separator sep(" -+/*=:()<",0,boost::drop_empty_tokens); // char separator definition Tokenizer tok(str, sep); Tokenizer::iterator tok_iter = tok.begin(); //pair dependency; //create a pair object for dependencies //create a vector object to store all tokens vector dx; for(tok.begin(); tok_iter != tok.end(); ++tok_iter) //save all tokens in vector { dx.push_back(*tok_iter ); } for(unsigned int i=0; i cout << "PUSH(" << str << ')' << endl; cout << "\nPREDICATE\n\n"; } void do_predicate(char const* start, char const* end) { string str(start, end); cout << "PUSH(" << str << ')' << endl; cout << "\nMULTIPLE PREDICATE\n\n"; } void do_ifSelectPre(char const* start, char const* end) { string str(start, end); //if cout << "PUSH(" << str << ')' << endl; cout << "\nPROTECTED SUBSTITUTION\n\n"; } //semantic action function on machine substitution void do_machSubst(char const* start, char const* end) { string str(start, end); cout << "PUSH(" << str << ')' << endl; cout << "\nMACHINE SUBSTITUTION\n\n"; } } //////////////////////////////////////////////////////////////////////////// // // Machine Substitution Grammar // //////////////////////////////////////////////////////////////////////////// // Simple substitution grammar parser with integer values removed struct Substitution : public grammar { template struct definition { definition(Substitution const& ) { machine_subst = ( (simple_subst) | (multi_subst) | (if_select_pre_subst) | (unbounded_choice) )[&do_machSubst] ; unbounded_choice = str_p("ANY") ide_list str_p("WHERE") predicate str_p("THEN") machine_subst str_p("END") ; if_select_pre_subst = ( ( str_p("IF") predicate str_p("THEN") machine_subst *( str_p("ELSIF") predicate machine_subst ) !( str_p("ELSE") machine_subst) str_p("END") ) | ( str_p("SELECT") predicate str_p("THEN") machine_subst *( str_p("WHEN") predicate machine_subst ) !( str_p("ELSE") machine_subst) str_p("END")) | ( str_p("PRE") predicate str_p("THEN") machine_subst str_p("END") ) )[&do_ifSelectPre] ; multi_subst = ( (machine_subst) *( ( str_p("||") (machine_subst) ) | ( str_p("[]") (machine_subst) ) ) ) [&do_mSubst] ; simple_subst = (identifier str_p(":=") arith_expr) [&do_sSubst] ; expression = predicate | arith_expr ; predicate = ( (logic_expr) *( ( ch_p('&') (logic_expr) ) | ( str_p("OR") (logic_expr) ) ) )[&do_predicate] ; logic_expr = ( identifier (str_p("<") arith_expr) | (str_p("<") arith_expr) | 
(str_p("/:") arith_expr) | (str_p("<:") arith_expr) | (str_p("/<:") arith_expr) | (str_p("<<:") arith_expr) | (str_p("/<<:") arith_expr) | (str_p("<=") arith_expr) | (str_p("=") arith_expr) | (str_p("=") arith_expr) | (str_p("=") arith_expr) ) [&do_logicExpr] ; arith_expr = term *( ('+' term)[&do_add] | ('-' term)[&do_subt] ) ; term = factor ( ('' factor)[&do_mult] | ('/' factor)[&do_div] ) ; factor = lexeme_d[( identifier | +digit_p)[&do_noint]] | '(' expression ')' | ('+' factor) ; ide_list = identifier *( ch_p(',') identifier ) ; identifier = alpha_p +( alnum_p | ch_p('_') ) ; } rule machine_subst, unbounded_choice, if_select_pre_subst, multi_subst, simple_subst, expression, predicate, logic_expr, arith_expr, term, factor, ide_list, identifier; rule<ScannerT> const& start() const { return predicate; //return multi_subst; //return machine_subst; } }; }; //////////////////////////////////////////////////////////////////////////// // // Main program // //////////////////////////////////////////////////////////////////////////// int main() { cout << "*********************************\n\n"; cout << "\t\t...Machine Parser...\n\n"; cout << "*********************************\n\n"; // cout << "Type an expression...or [q or Q] to quit\n\n"; string str; int machineCount = 0; char strFilename[256]; //file name store as a string object do { cout << "Please enter a filename...or [q or Q] to quit:\n\n "; //prompt for file name to be input //char strFilename[256]; //file name store as a string object cin strFilename; if(*strFilename == 'q' || *strFilename == 'Q') //termination condition return 0; ifstream inFile(strFilename); // opens file object for reading //output file for truncated machine (operations only) if (inFile.fail()) cerr << "\nUnable to open file for reading.\n" << endl; inFile.unsetf(std::ios::skipws); Substitution elementary_subst; // Simple substitution parser object string next; while (inFile str) { getline(inFile, next); str += next; if (str.empty() || str[0] == 'q' || str[0] == 'Q') break; parse_info< info = parse(str.c_str(), elementary_subst !end_p, space_p); if (info.full) { cout << "\n-------------------------\n"; cout << "Parsing succeeded\n"; cout << "\n-------------------------\n"; } else { cout << "\n-------------------------\n"; cout << "Parsing failed\n"; cout << "stopped at: " << info.stop << "\"\n"; cout << "\n-------------------------\n"; } } } while ( (*strFilename != 'q' || *strFilename !='Q')); return 0; } However, I am experiencing the following unexpected behaviours on testing: The text files I used are: f1.txt, ... containing ...: debt:=(LoanRequest+outstandingLoan1)*20 . f2.txt, ... containing ...: debt:=(LoanRequest+outstandingLoan1)*20 || newDebt := loanammount-paidammount || price := purchasePrice + overhead + bb . f3.txt, ... containing ...: yy < (xx+7+ww) . f4.txt, ... containing ...: yy < (xx+7+ww) & yy : NAT . When I use multi_subst as start rule both files (f1 and f2) are parsed correctly; When I use machine_subst as start rule file f1 parse correctly, while file f2 fails, producing the error: “Parsing failed stopped at: || newDebt := loanammount-paidammount || price := purchasePrice + overhead + bb” When I use predicate as start symbol, file f3 parse correctly, but file f4 yields the error: “ “Parsing failed stopped at: & yy : NAT” Can anyone help with the grammar, please? It appears there are problems with the grammar that I have so far been unable to spot.

    Read the article

  • MSBuild "Wrapper" fails while VS2010 "Pure" compile succeeds for MFC application in CruiseControl.NET

    - by ee
    The Overview: I am working on a Continuous Integration build of an MFC application via CruiseControl.NET and VS2010. When building my .sln, a "Visual Studio" CCNet task (devenv) works, but a wrapper MSBuild script run via the CCNet MSBuild task fails with errors like:

        error RC1015: cannot open include file 'winres.h'.
        error C1083: Cannot open include file: 'afxwin.h': No such file or directory
        error C1083: Cannot open include file: 'afx.h': No such file or directory

    The Question: How can I adjust the build environment of my MSBuild wrapper so that the application builds correctly? (Pretty clearly the MFC paths aren't right for the MSBuild environment, but how do I fix it for MSBuild + VS2010 + MFC + CCNet?)

    Background Details: We have successfully upgraded an MFC application (an .exe with some MFC extension DLLs) to Visual Studio 2010 and can compile the application without issue on developer machines. Now I am working on compiling the application in the CI server environment. I did a full installation of VS2010 (Professional) on the build server; this way, I knew everything I needed would be on the machine (one way or another) and that this would be consistent with developer machines. VS2010 is correctly installed on the CI server, and the devenv task works as expected. I now have a wrapper MSBuild script that does some extended version processing and then builds the .sln for the application via an MSBuild task. This wrapper script is run via CCNet's MSBuild task and fails with the above-mentioned errors.

    My Assumptions: This seems to be a missing/wrong configuration of include paths to standard header resources of the MFC persuasion. I should be able to coerce the MSBuild environment to consider the relevant resource files from my VS2010 install and have this approach work. But how do I do that? Am I setting environment variables? Registry settings? I can see how one can inject additional directories in some cases, but this seems to need a more systemic configuration at the compiler-defaults level.
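    For reference, a minimal sketch of the kind of wrapper MSBuild script described above; the file and solution names, configuration and platform are hypothetical, and nothing here addresses the include-path issue itself:

        <!-- wrapper.proj (hypothetical): version processing, then build the solution via the MSBuild task -->
        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Build">
          <Target Name="Build">
            <!-- extended version processing would run here -->
            <MSBuild Projects="MyMfcApp.sln" Properties="Configuration=Release;Platform=Win32" />
          </Target>
        </Project>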

    Read the article

  • Add entries to Nautilus' right-click menu (copy, move to arbitrary directories)

    - by qbi
    Assume I want to copy a file from /home/foo/bar/baz to /opt/quuz/dir1/option3. When I try it with Nautilus, first I have to open the correct directory, copy the file, go to the other directory and paste it there. I was thinking of a better way, and old KDE3 versions of Konqueror came to mind. There it was possible to right-click on a file, and the context menu had options for copying or moving the file to some default directories; furthermore, you could select any directory under /. So for the above action one would right-click on a file, select /opt first, a list of subdirectories would open, select /opt/quuz, and so on. In GNOME there are only two possible targets (Home and Desktop). Is there any way to add more directories to this context menu in GNOME? Can I somehow copy the behaviour of Konqueror?
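    One related mechanism worth noting is Nautilus' Scripts submenu: an executable dropped into the scripts directory shows up under right-click -> Scripts. A minimal sketch, assuming a GNOME 2 era scripts directory of ~/.gnome2/nautilus-scripts and using the target path from the example above:

        #!/bin/sh
        # Save as ~/.gnome2/nautilus-scripts/copy-to-opt-quuz and make it executable.
        # Nautilus passes the selected files, one per line, in this environment variable.
        echo "$NAUTILUS_SCRIPT_SELECTED_FILE_PATHS" | while read -r f; do
            [ -n "$f" ] && cp -a "$f" /opt/quuz/dir1/option3/
        done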

    Read the article

  • Is it possible to preview arbitrary formats in Nautilus?

    - by alfC
    I recently found out that Nautilus (Ubuntu 12.04 at least) can show thumbnails of files in non-image formats; for example, (data grapher) Grace files (.agr) show a small version of the graph contained in their data. Obviously there is some library or script that processes the file, produces the image, and allows Nautilus to show a small version of it. This made me think that in principle any file that can be processed into an image could serve as a Nautilus thumbnail. For example, a .tex file (which can be converted to .pdf) or a gnuplot script could be displayed as a thumbnail when possible. In the case of a .tex file, the corresponding .pdf can be created with the command pdflatex file.tex. The question is: how can I tell Nautilus to create a thumbnail for an arbitrary format, and how do I specify the commands to do so within Nautilus?
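    As an illustration of the moving parts, a sketch of a thumbnailer for .tex files; it assumes the GNOME 3 style thumbnailer registration (a .thumbnailer entry under /usr/share/thumbnailers) applies to the Nautilus version on 12.04, and the MIME type, paths and script below are all illustrative:

        # /usr/share/thumbnailers/tex.thumbnailer (registration entry; %i = input file, %o = output PNG, %s = size)
        [Thumbnailer Entry]
        Exec=/usr/local/bin/tex-thumbnailer %i %o %s
        MimeType=text/x-tex;

        #!/bin/sh
        # /usr/local/bin/tex-thumbnailer: compile the .tex once, render page 1 as a PNG of the requested size.
        in="$1"; out="$2"; size="$3"
        tmp=$(mktemp -d) || exit 1
        (cd "$tmp" && pdflatex -interaction=batchmode "$in" >/dev/null 2>&1)
        pdf="$tmp/$(basename "${in%.tex}").pdf"
        convert -thumbnail "x$size" "$pdf[0]" "$out"
        rm -rf "$tmp"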

    Read the article

  • Using Python to get a CSV output for the following example.

    - by Az
    Hi there, I'm back again with my ongoing saga of student-project allocation questions. Thanks to Moron (who does not match his namesake) I've got a bit of direction for the evaluation portion of my project. Going with the idea of the Assignment Problem and the Hungarian Algorithm, I would like to express my data in the form of a .csv file which would end up looking like this in spreadsheet form (based on the structure I saw here):

        |          | Project 1 | Project 2 | Project 3 |
        |----------|-----------|-----------|-----------|
        | Student1 |           | 2         | 1         |
        | Student2 | 1         | 2         | 3         |
        | Student3 | 1         | 3         | 2         |

    To make it less cryptic: the rows are the students/agents and the columns represent projects/tasks. Obviously one project can be assigned to one student; that, in short, is what my project is about. The fields hold the preference weights the students have placed on the projects (ranging from 1 to 10). If a field is blank, that student does not want that project and there's no chance of him/her being assigned it. Anyway, my data is stored in dictionaries, specifically the students and projects dictionaries, such that:

        students[student_id] = Student(student_id, student_name, alloc_proj, alloc_proj_rank, preferences)

    where preferences is itself a dictionary such that preferences[rank] = {project_id}, and:

        projects[project_id] = Project(project_id, project_name)

    I'm aware that sorted(students.keys()) will give me a sorted list of all the student IDs, which will populate the row labels, and sorted(projects.keys()) will give me the list I need to populate the column labels. Thus, for each student, I'd go into their preferences dictionary and match the applicable projects to ranks. I can do that much. Where I'm failing is understanding how to create the .csv file itself. Any help, pointers or good tutorials would be highly appreciated.
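    A minimal sketch of the csv.writer side of this, using the students and projects dictionaries described above; it assumes each rank in preferences maps to a single project_id, and that Student/Project expose the attribute names given in the question (everything else is illustrative):

        import csv

        def write_preference_matrix(students, projects, path="preferences.csv"):
            """One row per student, one column per project, preference ranks as cells."""
            project_ids = sorted(projects.keys())
            with open(path, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow([""] + [projects[p].project_name for p in project_ids])
                for s in sorted(students.keys()):
                    student = students[s]
                    # Invert preferences[rank] = project_id into project_id -> rank.
                    rank_by_project = {p: rank for rank, p in student.preferences.items()}
                    writer.writerow([student.student_name] +
                                    [rank_by_project.get(p, "") for p in project_ids])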

    Read the article

  • VS 2008 C++ build output?

    - by STingRaySC
    Why, when I watch the build output from a VC++ project in VS, do I see:

        1>Compiling...
        1>a.cpp
        1>b.cpp
        1>c.cpp
        1>d.cpp
        1>e.cpp
        [etc...]
        1>Generating code...
        1>x.cpp
        1>y.cpp
        [etc...]

    The output looks as though several compilation units are being handled before any code is generated. Is this really going on? I'm trying to improve build times, and by using precompiled headers I've gotten great speedups for each .cpp file, but there is a relatively long pause during the "Generating code..." message. I do not have "Whole Program Optimization" or "Link Time Code Generation" turned on. If this is the case, then why? Why doesn't VC++ compile each .cpp individually (which would include the code generation phase)? If this isn't just an illusion of the output, is there cross-compilation-unit optimization potentially going on here? There don't appear to be any compiler options to control that behavior (I know about WPO and LTCG, as mentioned above).

    EDIT: The build log just shows the .obj files in the output directory, one per line. There is no indication of "Compiling..." vs. "Generating code..." steps.

    EDIT: I have confirmed that this behavior has nothing to do with the "maximum number of parallel project builds" setting in Tools - Options - Projects and Solutions - Build and Run. Nor is it related to the MSBuild project build output verbosity setting. Indeed, if I cancel the build before the "Generating code..." step, the .obj files will not exist for the most recent set of "compiled" files. E.g., if I cancel the build during c.cpp above, I will see only a.obj and b.obj.

    Read the article
