Search Results

Search found 3033 results on 122 pages for 'manual'.

Page 101/122 | < Previous Page | 97 98 99 100 101 102 103 104 105 106 107 108  | Next Page >

  • not able to run c/cpp execs in eclipse cdt

    - by user1658323
    I installed Eclipse and then CDT on an Ubuntu system recently and was trying to make my first runnable C/C++ project. I installed g++ as well, and then created the first executable C++ 'Hello World' project. Some files are created, but then there are some issues: 1) Even though Build Automatically is selected, I have to go to the project and do a Build Project to build it manually, and I have to do this every time I make a change. 2) After building manually, some new folders are created with Binaries and Debug files, and I can see g++ commands being executed in the console. The project binary is output to both the Debug and Binaries folders, but I am not able to run it through the green Play button or any other way in Eclipse. Even Run Configurations shows no option for a C/C++ project, though I can go to a terminal and run the binary myself with ./. I want to be able to run and debug this through Eclipse. Please help me fix this problem, as I really like Eclipse and have some C/C++ assignments coming soon. Console info on doing a manual project build:

      **** Build of configuration Debug for project qwe ****
      make all
      Building file: ../src/qwe.cpp
      Invoking: GCC C++ Compiler
      g++ -O0 -g3 -Wall -c -fmessage-length=0 -MMD -MP -MF"src/qwe.d" -MT"src/qwe.d" -o "src/qwe.o" "../src/qwe.cpp"
      Finished building: ../src/qwe.cpp
      Building target: qwe
      Invoking: GCC C++ Linker
      g++ -o "qwe" ./src/qwe.o
      Finished building target: qwe
      **** Build Finished ****

    Read the article

  • Slope requires a real as parameter 2?

    - by Dave Jarvis
    Question How do you pass the correct value to udf_slope's second parameter type? Attempts CAST(Y.YEAR AS FLOAT), but that failed (SQL error). Y.YEAR + 0.0, but that failed, too (see error message). slope(D.AMOUNT, 1.0), failed as well Error Message Using udf_slope fails due to: Can't initialize function 'slope'; slope() requires a real as parameter 2 Code SELECT D.AMOUNT, Y.YEAR, slope(D.AMOUNT, Y.YEAR + 0.0) as SLOPE, intercept(D.AMOUNT, Y.YEAR + 0.0) as INTERCEPT FROM YEAR_REF Y, DAILY D Here, D.AMOUNT is a FLOAT and Y.YEAR is an INTEGER. Create Function The slope function was created as follows: CREATE AGGREGATE FUNCTION slope RETURNS REAL SONAME 'udf_slope.so'; Function Signature From udf_slope.cc: double slope( UDF_INIT* initid, UDF_ARGS* args, char* is_null, char* is_error ) Example Usages Reading the fine manual reveals: UDF intercept() Calculates the intercept of the linear regression of two sets of variables. Function name intercept Input parameter(s) 2 (dependent variable: REAL, independent variable: REAL) Examples SELECT intercept(income,age) FROM customers UDF slope() Calculates the slope of the linear regression of two sets of variables. Function name slope Input parameter(s) 2 (dependent variable: REAL, independent variable: REAL) Examples SELECT slope(income,age) FROM customers Thoughts? Thank you!
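
    A hedged sketch of one direction to try (not verified against udf_slope.cc's own argument checks): in MySQL an exact-value literal like 0.0 produces a DECIMAL, which a UDF sees as a DECIMAL/STRING argument rather than a REAL, while a scientific-notation literal like 0e0 is a DOUBLE, so adding it should hand the UDF a REAL-typed argument. Table and column names below are the ones from the question.

      SELECT D.AMOUNT,
             Y.YEAR,
             slope(D.AMOUNT, Y.YEAR + 0e0)     AS SLOPE,
             intercept(D.AMOUNT, Y.YEAR + 0e0) AS INTERCEPT
        FROM YEAR_REF Y, DAILY D;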

    Read the article

  • How would you start automating my job?

    - by Jurily
    At my new job, we sell imported stuff. In order to be able to sell said stuff, currently the following things need to happen for every incoming shipment:

      1. Invoice arrives, in the form of an email attachment (an Excel spreadsheet)
      2. Monkey opens the invoice, copy-pastes the relevant part of three columns into the relevant parts of a spreadsheet template, where extremely complex calculations happen, like =B2*550
      3. Monkey sends this new spreadsheet to the boss (email if lucky, printer otherwise), who sets the retail price
      4. Monkey opens the reply, then proceeds to input the data into the production database using a client program that is unusable on so many levels it's not even worth detailing
      5. Monkey fires up HyperTerminal, types in "AT", disconnect
      6. Monkey sends text messages and emails to customers using another part of the horrible client program, one at a time

    I want to change Monkey from myself to software wherever possible. I've never written anything that interfaces with email, Excel, databases or SMS before, but I'd be more than happy to learn if it saves me from this. Here's my uneducated wishlist:

      - Monkey asks Thunderbird (mail server perhaps?) for the attachment
      - Monkey tells Excel to dump the spreadsheet into a more Jurily-friendly format, like CSV or something
      - Monkey parses the output, does the complex calculations (see the sketch below)
        // TODO: find a way to get the boss-generated prices with minimal manual labor involved
      - Monkey connects to the database, inserts data
      - Monkey spams customers

    Is all this feasible? If yes, where do I start reading? How would you improve it? What language/framework do you think would be ideal for this? What would you do about the boss?
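
    A hedged Python sketch of wishlist steps 2-3, assuming the invoice is a plain .xls that the xlrd package can open; the column positions, file names and the 550 factor are stand-ins taken from the =B2*550 example, not known values.

      import csv
      import xlrd

      book = xlrd.open_workbook("invoice.xls")      # the attachment saved from the mail
      sheet = book.sheet_by_index(0)

      with open("for_boss.csv", "w") as out:
          writer = csv.writer(out)
          for row in range(1, sheet.nrows):         # skip the header row
              name = sheet.cell_value(row, 0)
              quantity = sheet.cell_value(row, 1)
              cost = sheet.cell_value(row, 2)
              writer.writerow([name, quantity, cost, cost * 550])  # the "extremely complex" calculation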

    Read the article

  • MySQL "identify storage engine" statement

    - by sammysmall
    This IS NOT a homework question! While building my current student database project I realized that I may want to identify comprehensive information about a database design in the future. More so, if I am fortunate enough to get a job in this field and am handed a database project, how could I break down certain elements for identification? In all of my previous designs I have been using MySQL Community Server (GPL) 5.1.42. I thought (duh) that I was using MyISAM, based on most of my textbook instruction and the MySQL 5.0 Reference Manual :: 13 Storage Engines :: 13.1 The MyISAM Storage Engine, but I determined that this was in fact incorrect for this version by using "SHOW ENGINES" at the console. No problem; I figured out why they have "versions", the need to pay attention to which version is being used, and the need for a means to determine what I am about to mess up "if" I do not pay attention to detail. Q1. Specifically, what statement will identify the version used by someone else's initial database creation? (Since I created my own databases, I know what version I used.) Q2. Specifically, what statement will identify the storage engine that the developer used when creating the database? (I specified a particular database in my collection, then tried SHOW ENGINE, which did not work, then tried to just get the metadata from one table in that database:

      mysql> SELECT duck_cust, table_type, engine
          -> FROM INFORMATION_SCHEMA.tables
          -> WHERE table_schema = 'tp'
          -> ORDER BY table_type ASC, table_name DESC;

    As this was not really what I wanted (and did not work), I am looking for some direction from the pros.) Q3. (If you really have the inclination to continue helping:) If I were to access a database from an earlier/later "version", are there backward/forward compatibility issues for maintaining/updating data between versions? Please and thank you in advance for your time and efforts! sammysmall
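
    For Q2, a hedged sketch of the two standard ways to read per-table engine metadata (using the 'tp' schema name from the question). For Q1, note that VERSION() only reports the server you are connected to right now; the version someone else originally created a database under is not recorded anywhere you can query directly.

      SELECT VERSION();

      SELECT TABLE_NAME, ENGINE, TABLE_COLLATION
        FROM INFORMATION_SCHEMA.TABLES
       WHERE TABLE_SCHEMA = 'tp';

      SHOW TABLE STATUS FROM tp;   -- older syntax, same ENGINE column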

    Read the article

  • algorithm - How to sort a 0/1 array with 2n/3 comparisons?

    - by Jackson Tale
    In The Algorithm Design Manual there is exercise 4-26: Consider the problem of sorting a sequence of n 0’s and 1’s using comparisons. For each comparison of two values x and y, the algorithm learns which of x < y, x = y, or x > y holds. (a) Give an algorithm to sort in n - 1 comparisons in the worst case. Show that your algorithm is optimal. (b) Give an algorithm to sort in 2n/3 comparisons in the average case (assuming each of the n inputs is 0 or 1 with equal probability). Show that your algorithm is optimal. For (a), I think it is fairly easy. I can choose a[n-1] as pivot, then do something like the partition step in quicksort: scan 0 to n - 2 and find the middle point where the left side is all 0 and the right side is all 1; this takes n - 1 comparisons. But for (b), I can't get a clue. It says "each of the n inputs is 0 or 1 with equal probability", so I guess I can assume the numbers of 0s and 1s are equal? But how can I get a result which is related to 1/3? Divide the whole array into 3 groups? Thanks
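
    A minimal C++ sketch of part (a) only, under the exercise's three-way comparison model: each loop iteration below is counted as one comparison of a[i] against a[0], even though it is written as two branches. The trick is that with only 0s and 1s, knowing how many elements compare less than, equal to and greater than a[0] pins down the whole sorted array.

      #include <algorithm>
      #include <cstddef>
      #include <vector>

      void sort01(std::vector<int>& a) {
          if (a.size() < 2) return;
          std::size_t less = 0, greater = 0;
          for (std::size_t i = 1; i < a.size(); ++i) {   // exactly n - 1 (three-way) comparisons
              if (a[i] < a[0]) ++less;
              else if (a[i] > a[0]) ++greater;
          }
          if (less == 0 && greater == 0) return;          // every value equal: already sorted
          std::size_t zeros = (less > 0) ? less           // something was smaller, so a[0] is a 1
                                         : a.size() - greater;   // otherwise a[0] is a 0
          std::fill(a.begin(), a.begin() + zeros, 0);
          std::fill(a.begin() + zeros, a.end(), 1);
      }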

    Read the article

  • How can I redirect directory using htaccess including files with spaces in the names

    - by James
    Hi, I am dealing with a situation where someone has handed me a bunch of old files on the server which already have a lot of incoming links directly to them (mostly pdf files). I now have the files organized but in a different directory. Before it was 'domain/manuals/file' now it is 'domain/media/manual/file'. I am trying to resolve this issue using an htaccess file. Many of the files have spaces in the names (not something I can control) and because they already have links to them I can't just go through renaming them. I have found that I can redirect files individually when they have spaces in the names by using quotes such as: redirect 301 "/manuals/file 123.pdf" "http://www.domain.com/manuals/file 123.pdf" However, there are loads of these files and I wondered if there is a way to create a regular expression that will handle spaces in file names that I could use to redirect the entire directory. I should add that some files contain decent file names with no spaces in, some have one space and others more than one space. It's not pretty. If you've encountered this problem before I would really appreciate hearing your advice, I'm running out of ideas. Thanks.
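
    A hedged sketch of the pattern-based form, assuming mod_alias is available and the new location is /media/manual/ as described. The regex captures whatever follows /manuals/, spaces included, so the individual files do not each need their own line; it is still worth spot-checking one of the worst file names, since re-encoding of the space in the redirect target is the part most likely to misbehave.

      RedirectMatch 301 ^/manuals/(.*)$ http://www.domain.com/media/manual/$1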

    Read the article

  • Atomic operations on several transactionless external systems

    - by simendsjo
    Say you have an application connecting to 3 different external systems. You need to update something in all 3. In case of a failure, you need to roll back the operations. This is not a hard thing to implement, but say operation 3 fails, and when rolling back, the rollback for operation 1 fails! Now the first external system is in an invalid state... I'm thinking a possible solution is to shut down the application and force a manual fix of the external system, but then again... It might already have used this information (and perhaps that's why it failed), or we might not have sufficient access. Or it might not even be a good way to roll back the action! Are there some good ways of handling such cases? EDIT: Some application details... It's a multi-user web application. Most of the work is done with scheduled jobs (through Quartz.Net), so most operations run in their own threads. Some user actions should trigger jobs that update several systems, though. The external systems are somewhat unstable. I was thinking of changing the application to use the Command and Unit of Work patterns.

    Read the article

  • Convincing why testing is good

    - by FireAphis
    Hello, In my team of real-time embedded C/C++ developers, most people don't have any culture of testing their code beyond the casual manual sanity checks. I personally strongly believe in the advantages of autonomous automatic tests, but when I try to convince them I get some recurring arguments like: We will spend more time on writing the tests than writing the code. It takes a lot of effort to maintain the tests. Our code is spaghetti; no way we can unit-test it. Our requirements are not sealed – we’ll have to rewrite all the tests every time the requirements are changed. Now, I'd gladly hear any convincing tips and advice, but what I am really looking for are references to research, articles, books or serious surveys that show (preferably in numbers) how testing is worth the effort. Something like "We in IBM/Microsoft/Google, surveying 3475 active projects, found out that putting 50% more development time into testing decreased by 75% the time spent on fixing bugs" or "after half a year, the time needed to write code with tests was only marginally longer than what it used to take without tests". Any ideas? P.S.: I'm adding the C++ tag too in case someone has a specific experience with convincing this, usually elitist, type of developer :-)

    Read the article

  • how does MySQL implement the "group by"?

    - by user188916
    I read in the MySQL Reference Manual that when GROUP BY can make use of an index, it just does an index scan; otherwise it creates temporary tables and does things like filesort. I also read in another article that the GROUP BY result is sorted by the grouping columns by default, and that if an ORDER BY NULL clause is added, it won't do the filesort. The difference shows up in the EXPLAIN output. So my question is: what is the difference between a GROUP BY query with ORDER BY NULL and one without it? I tried to use profiling to see what MySQL does in the background, and only see results like this.

    Result for the GROUP BY clause without ORDER BY NULL:

      preparing            | 0.000016
      Creating tmp table   | 0.000048
      executing            | 0.000009
      Copying to tmp table | 0.000109
      Sorting result       | 0.000023   <-- the extra step
      Sending data         | 0.000027

    Result for the clause with ORDER BY NULL:

      preparing            | 0.000016
      Creating tmp table   | 0.000052
      executing            | 0.000009
      Copying to tmp table | 0.000114
      Sending data         | 0.000028

    So I guess that when ORDER BY NULL is added, MySQL does not use the filesort algorithm; maybe when it creates the tmp table it uses an index as well, then uses the index to do the GROUP BY operation, and when that completes it just reads the result rows from the table and does not sort them. But my original assumption was that MySQL would quicksort the items and then do the GROUP BY, so the result would come out sorted as well. Any opinions appreciated, thanks.
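
    A quicker way to see the same difference is EXPLAIN; a hedged sketch with a made-up table t and column col. On MySQL of this vintage the implicit sort shows up as "Using filesort" in the Extra column, and adding ORDER BY NULL makes it disappear, which matches the extra "Sorting result" step in the profile above.

      EXPLAIN SELECT col, COUNT(*) FROM t GROUP BY col;
      EXPLAIN SELECT col, COUNT(*) FROM t GROUP BY col ORDER BY NULL;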

    Read the article

  • GNU C++ how to check when -std=c++0x is in effect?

    - by TerryP
    My system compiler (gcc42) works fine with the TR1 features that I want, but when I try to support newer compiler versions besides the system one, accessing the TR1 headers triggers an #error demanding the -std=c++0x option, because of how the compiler interfaces with the library or some hubbub like that.

      /usr/local/lib/gcc45/include/c++/bits/c++0x_warning.h:31:2: error: #error This file requires compiler and library support for the upcoming ISO C++ standard, C++0x. This support is currently experimental, and must be enabled with the -std=c++0x or -std=gnu++0x compiler options.

    Having to supply an extra switch to support GCC 4.4 and 4.5 under this system (FreeBSD) is no problem, but obviously it changes the picture! Using my system compiler (g++ 4.2, default dialect):

      #include <tr1/foo>
      using std::tr1::foo;

    Using newer (4.5) versions of the compiler with -std=c++0x:

      #include <foo>
      using std::foo;

    Is there any way, using the preprocessor, to tell whether g++ is running with C++0x features enabled? Something like this is what I'm looking for:

      #ifdef __CXX0X_MODE__
      #endif

    but I have not found anything in the manual or off the web. At this rate, I'm starting to think that life would just be easier if I used Boost as a dependency and didn't worry about a new language standard arriving before TR4... hehe.
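
    A hedged sketch of the usual detection trick: g++ 4.3 and later define __GXX_EXPERIMENTAL_CXX0X__ whenever -std=c++0x or -std=gnu++0x is in effect, which plays exactly the role of the hypothetical __CXX0X_MODE__ above. <unordered_map> is used purely as an example of a header that moved out of tr1/.

      #if defined(__GXX_EXPERIMENTAL_CXX0X__)
      # include <unordered_map>
        using std::unordered_map;
      #else
      # include <tr1/unordered_map>
        using std::tr1::unordered_map;
      #endif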

    Read the article

  • Unserializing an API return object (PHP/Ebay API)

    - by DavidYell
    I have been working with the Ebay api for a project and have found it great. I have however found a problem now, more PHP related. When I read my items from Ebay, I store a bunch of details in the database. Currently, just for the sake of it really, I serialize the whole return object and store it in the database in a related table. The idea being, that when I display my information, I have all the details to hand should I need them. The problem arises in that the pricing information is always in a sub object. [ConvertedAdjustmentAmount] => __PHP_Incomplete_Class Object ( [__PHP_Incomplete_Class_Name] => eBayAmountType [_] => 0 [currencyID] => USD ) As you can see when I unserialize my object, my cunning plan falls foul of the Incomplete class problem. I have checked the following question, without success. http://stackoverflow.com/questions/965611/forcing-access-to-php-incomplete-class-object-properties The main issue lies, as far as I can see, in that the price class is stored in the Ebay api, so how do I recreate it? I have been reading this page, http://uk3.php.net/manual/en/function.unserialize.php and trying to figure out, unserialize_callback_func which I can't figure out either, so any help would be appreciated!
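
    One hedged approach, in case it helps: unserialize_callback_func lets you run a callback for any class that is not yet defined, so the eBay SDK classes can be pulled in (or crudely stubbed) before the object is rebuilt. The SDK path and function name below are made up; the eval() stub is a last resort that only keeps the properties readable.

      ini_set('unserialize_callback_func', 'load_ebay_class');

      function load_ebay_class($classname)
      {
          $file = '/path/to/ebay-sdk/' . $classname . '.php';   // hypothetical SDK location
          if (is_file($file)) {
              require_once $file;            // real class found: unserialize works normally
          } else {
              eval("class $classname {}");   // empty stub so the data is at least accessible
          }
      }

      $item = unserialize($row['raw_response']);   // $row: the stored DB record (assumed name)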

    Read the article

  • MySQL nested CASE error I need help with?

    - by AK
    What I am trying to do here is: IF the records in table todo as identified in $done have a value in the column recurinterval then THEN reset date_scheduled column ELSE just set status_id column to 6 for those records. This is the error I get from mysql_error() ... You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'CASE recurinterval != 0 AND recurinterval IS NOT NULL THEN SET date_sche' at line 2 How can I make this statement work? UPDATE todo CASE recurinterval != 0 AND recurinterval IS NOT NULL THEN SET date_scheduled = CASE recurunit WHEN 'DAY' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval DAY) WHEN 'WEEK' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval WEEK) WHEN 'MONTH' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval MONTH) WHEN 'YEAR' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval YEAR) END WHERE todo_id IN ($done) ELSE SET status_id = 6 WHERE todo_id IN ($done) END The following mySQL statement worked just fine before I revised like above. UPDATE todo SET date_scheduled = CASE recurunit WHEN 'DAY' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval DAY) WHEN 'WEEK' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval WEEK) WHEN 'MONTH' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval MONTH) WHEN 'YEAR' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval YEAR) END WHERE todo_id IN ($done) AND recurinterval != 0 AND recurinterval IS NOT NULL
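
    For what it's worth, a hedged sketch of how the same intent is normally written in valid MySQL: CASE is an expression, so it has to live inside the SET assignments rather than wrap the whole statement, and the "otherwise leave it alone" branches just assign the column to itself. $done is the same interpolated list as in the question.

      UPDATE todo
         SET date_scheduled = CASE
               WHEN recurinterval IS NOT NULL AND recurinterval != 0 THEN
                 CASE recurunit
                   WHEN 'DAY'   THEN DATE_ADD(date_scheduled, INTERVAL recurinterval DAY)
                   WHEN 'WEEK'  THEN DATE_ADD(date_scheduled, INTERVAL recurinterval WEEK)
                   WHEN 'MONTH' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval MONTH)
                   WHEN 'YEAR'  THEN DATE_ADD(date_scheduled, INTERVAL recurinterval YEAR)
                 END
               ELSE date_scheduled
             END,
             status_id = CASE
               WHEN recurinterval IS NOT NULL AND recurinterval != 0 THEN status_id
               ELSE 6
             END
       WHERE todo_id IN ($done)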

    Read the article

  • How slow are bit fields in C++

    - by Shane MacLaughlin
    I have a C++ application that includes a number of structures with manually controlled bit fields, something like #define FLAG1 0x0001 #define FLAG2 0x0002 #define FLAG3 0x0004 class MyClass { ' ' unsigned Flags; int IsFlag1Set() { return Flags & FLAG1; } void SetFlag1Set() { Flags |= FLAG1; } void ResetFlag1() { Flags &= 0xffffffff ^ FLAG1; } ' ' }; For obvious reasons I'd like to change this to use bit fields, something like class MyClass { ' ' struct Flags { unsigned Flag1:1; unsigned Flag2:1; unsigned Flag3:1; }; ' ' }; The one concern I have with making this switch is that I've come across a number of references on this site stating how slow bit fields are in C++. My assumption is that they are still faster than the manual code shown above, but is there any hard reference material covering the speed implications of using bit fields on various platforms, specifically 32bit and 64bit windows. The application deals with huge amounts of data in memory and must be both fast and memory efficient, which could well be why it was written this way in the first place.

    Read the article

  • libcurl (c api) READFUNCTION for http PUT blocking forever

    - by Duane
    I am using libcurl for a RESTful library. I am having two problems with a PUT message; I am just trying to send some small content like "hello" via PUT. 1) My READFUNCTION for PUTs blocks for a very large amount of time (minutes) when I follow the manual at curl.haxx.se and return 0 indicating I have finished the content (on OS X). When I return something > 0 this succeeds much faster (< 1 sec). 2) When I run this on my Linux machine (Ubuntu 10.4) this blocking appears to NEVER return when I return 0; if I change the behavior to return the size written, libcurl appends all the data in the HTTP body, sending way more data, and it fails with a "too much data" message from the server. My read function is below; any help would be greatly appreciated. I am using libcurl 7.20.1.

      typedef struct {
          void *data;
          int body_size;
          int bytes_remaining;
          int bytes_written;
      } postdata;

      size_t readfunc(void *ptr, size_t size, size_t nmemb, void *stream)
      {
          if(stream) {
              postdata *ud = (postdata *)stream;
              if(ud->bytes_remaining) {
                  if(ud->body_size > size*nmemb) {
                      memcpy(ptr, ud->data + ud->bytes_written, size*nmemb);
                      ud->bytes_written += size*nmemb;
                      ud->bytes_remaining = ud->body_size - size*nmemb;
                      return size*nmemb;
                  } else {
                      memcpy(ptr, ud->data + ud->bytes_written, ud->bytes_remaining);
                      ud->bytes_remaining = 0;
                      return 0;
                  }
              }
          }
          return 0;
      }
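
    Since the question doesn't show the handle setup, a hedged guess worth ruling out (the option names are standard libcurl; the curl handle and the ud postdata variable are assumed from setup code not shown here): if libcurl is never told the upload size, the PUT goes out with a chunked body, and announcing the size up front is the usual first thing to try when the transfer hangs after the callback signals end-of-data.

      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);                   /* PUT-style upload */
      curl_easy_setopt(curl, CURLOPT_READFUNCTION, readfunc);
      curl_easy_setopt(curl, CURLOPT_READDATA, &ud);
      curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, (curl_off_t) ud.body_size);   /* announce the body size */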

    Read the article

  • pyOpenSSL and the WantReadError

    - by directedition
    I have a socket server that I am trying to move over to SSL on python 2.5, but I've run into a snag with pyOpenSSL. I can't find any good tutorials on using it, so I'm operating largely on guesses. Here is how my server sets up the socket: ctx = SSL.Context(SSL.SSLv23_METHOD) ctx.use_privatekey_file ("mykey.pem") ctx.use_certificate_file("mycert.pem") sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM)) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) addr = ('', int(8081)) sock.bind(addr) sock.listen(5) Here is how it accepts clients: sock.setblocking(0) while True: if len(select([sock], [], [], 0.25)[0]): client_sock, client_addr = sock.accept() client = ClientGen(client_sock) And here is how it sends/receives from the connected sockets: while True: (r, w, e) = select.select([sock], [sock], [], 0.25) if len(r): bytes = sock.recv(1024) if len(w): n_bytes = sock.send(self.message) It's compacted, but you get the general idea. The problem is, once the send/receive loop starts, it dies right away, before anything has been sent or received (that I can see anyway): Traceback (most recent call last): File "ClientGen.py", line 50, in networkLoop n_bytes = sock.send(self.message WantReadError The manual's description of the 'WantReadError' is very vague, saying it can come from just about anywhere. What am I doing wrong?
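
    A hedged sketch of the loop shape that usually deals with this: on a non-blocking pyOpenSSL connection, send()/recv() can raise WantReadError or WantWriteError even when select() reported the fd ready, because the TLS layer may still need to exchange handshake data first. Treating those two exceptions as "try again on the next select() pass" rather than as errors is the common pattern; the helper name below is made up.

      from OpenSSL import SSL

      def try_send(sock, message):
          try:
              return sock.send(message)
          except (SSL.WantReadError, SSL.WantWriteError):
              return 0        # not a failure: the TLS layer wants another select() round first
          except SSL.ZeroReturnError:
              raise           # clean TLS shutdown from the peer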

    Read the article

  • C/C++ macro/template blackmagic to generate unique name.

    - by anon
    Macros are fine. Templates are fine. Pretty much whatever works is fine. The example is OpenGL, but the technique is C++-specific and relies on no knowledge of OpenGL. Precise problem: I want an expression E where I do not have to specify a unique name, such that a constructor is called where E is defined, and a destructor is called where the block E is in ends. For example, consider:

      class GlTranslate {
          GlTranslate(float x, float y, float z) {
              glPushMatrix();
              glTranslatef(x, y, z);
          }
          ~GlTranslate() {
              glPopMatrix();
          }
      };

    Manual solution:

      {
          GlTranslate foo(1.0, 0.0, 0.0);   // I had to give it a name
          .....
      }   // auto popmatrix

    Now, I have this not only for glTranslate, but lots of other PushAttrib/PopAttrib calls too. I would prefer not to have to come up with a unique name for each var. Is there some trick involving macros, templates... or something else, that will automatically create a variable whose constructor is called at the point of definition and whose destructor is called at the end of the block? Thanks!
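
    A hedged sketch of the usual trick: paste __LINE__ (or __COUNTER__, where available) into the variable name, so each use of the macro declares a distinct, never-again-referenced RAII object. The macro names here are invented; GlTranslate is the class from the question.

      #define SCOPED_CAT_IMPL(a, b) a##b
      #define SCOPED_CAT(a, b)      SCOPED_CAT_IMPL(a, b)
      #define GL_TRANSLATE(x, y, z) GlTranslate SCOPED_CAT(glTranslate_, __LINE__)(x, y, z)

      // usage:
      //   {
      //       GL_TRANSLATE(1.0f, 0.0f, 0.0f);   // pushes here, no name ever typed
      //       drawStuff();
      //   }                                     // pops here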

    Read the article

  • Error with MySQL Query

    - by Ken
    Okay, I must be an idiot, because this is my 3rd question for today. Here's my code: date_default_timezone_set("America/Los_Angeles"); include("mainmenu.php"); $con = mysql_connect("localhost", "root", "********"); if(!$con){ die(mysql_error()); } $usrname = $_POST['usrname']; $fname = $_POST['fname']; $lname = $_POST['lname']; $password = $_POST['password']; $email = $_POST['email']; mysql_select_db("`users`, $con) or die(mysql_error()"); $query = ("INSERT INTO `users`.`data` (`id`, `usrname`, `fname`, `lname`, `email`, `password`) VALUES (NULL, '$usrname', '$fname', '$lname', '$email', 'password'))"); mysql_query('$query') or die(mysql_error()); mysql_close($con); echo("Thank you for registering!"); I always get the error returned as: "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '$query' at line 1. Help a newbie. I'm about to stab my monitor.
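
    A hedged sketch of the most likely culprit, since it lines up with the error text: single-quoted strings in PHP are not interpolated, so mysql_query('$query') sends MySQL the literal six characters $query. Passing the variable itself (and tidying the similar quoting slips around mysql_select_db() and the 'password' value) is the shape of the fix:

      mysql_select_db("users", $con) or die(mysql_error());
      mysql_query($query) or die(mysql_error());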

    Read the article

  • folder structure in a mercurial repo?

    - by ajsie
    I have just switched from svn to mercurial and have read some tutorials about it. I've still got some confusion that I hope you could help me sort out. I wonder if I have understood the folder structure in a mercurial repo right. In an svn repo I usually have these folders: svn: branches (branches/chat, branches/new_login etc), tags (version1.0, version2.0 etc), sandbox, trunk. Should a branch actually be another clone of the original/central repo in mercurial? It seemed like that when I read the manual. And a tag is just a named identifier, but you should clone the original/central repo whenever you want to create a tag? How about the sandbox? Should that be another clone too? So basically you just have in a repo all the folders/files that you would have in the trunk folder? mercurial: central repo: project folders/files (not in any parent folder), tag repo: cloned from central repo at a given moment for release (version1.0, version2.0 etc), branch repo: cloned from central repo for adding features (chat, new_login etc), sandbox repo: experimental repo (could be pushed to central repo, or just deleted). Is this correct?
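
    A hedged sketch of how those svn folders usually map onto stock Mercurial commands (the URL and repository names are placeholders):

      hg clone http://example.com/central myproject     # your working clone plays the role of trunk
      hg tag version1.0                                  # a tag is just a recorded marker; no clone needed
      hg clone myproject myproject-chat                  # "branch by clone" for the chat feature
      hg branch new_login                                # ...or a named branch inside one clone
      hg clone myproject sandbox                         # throwaway experiments: push later or just delete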

    Read the article

  • Using capistrano to deploy from different git branches

    - by Toms Mikoss
    I am using capistrano to deploy a RoR application. The codebase is in a git repository, and branching is widely used in development. Capistrano uses the deploy.rb file for its settings, one of them being the branch to deploy from. My problem is this: let's say I create a new branch A from master. The deploy file will reference the master branch. I edit that, so A can be deployed to the test environment. I finish working on the feature and merge branch A into master. Since the deploy.rb file from A is fresher, it gets merged in, and now the deploy.rb in the master branch references A. Time to edit again. That's a lot of seemingly unnecessary manual editing - the parameter should always match the current branch name. On top of that, it is easy to forget to edit the settings each and every time. What would be the best way to automate this process? Edit: Turns out someone had already done exactly what I needed.
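
    A hedged sketch for deploy.rb, assuming deploys are always run from a git checkout of the branch you want deployed: derive the branch from whatever is currently checked out, so the setting never needs a manual edit after branching or merging.

      set :branch, `git rev-parse --abbrev-ref HEAD`.strip
      # on older git without --abbrev-ref, the same thing can be spelled:
      # set :branch, `git symbolic-ref HEAD`.strip.sub('refs/heads/', '')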

    Read the article

  • Best way to migrate export/import from SQL Server to oracle

    - by matao
    Hi guys! I'm faced with needing access for reporting to some data that lives in Oracle and other data that lives in a SQL Server 2000 database. For various reasons these live on different sides of a firewall. Now we're looking at doing an export/import from sql server to oracle and I'd like some advice on the best way to go about it... The procedure will need to be fully automated and run nightly, so that excludes using the SQL developer tools. I also can't make a live link between databases from our (oracle) side as the firewall is in the way. The data needs to be transformed in the process from a star schema to a de-normalised table ready for reporting. What I'm thinking about is writing a monster query for SQL Server (which I mostly have already) that will denormalise and read out the data from SQL Server into a flat file using the sql server equivalent of sqlplus as a scheduled task, dump into a Well Known Location, then on the oracle side have a cron job that copies down the file and loads it with sql loader and rebuilds indexes etc. This is all doable, but very manual. Is there one or a combination of FOSS or standard oracle/SQL Server tools that could automate this for me? the Irreducible complexity is the query on one side and building indexes on the other, but I would love to not have to write the CSV dumping detail or the SQL loader script, just say dump this view out to CSV on one side, and on the other truncate and insert into this table from CSV and not worry about mapping column names and all other arcane sqlldr voodoo... best practices? thoughts? comments? edit: I have about 50+ columns all of varying types and lengths in my dataset, which is why I'd prefer to not have to write out how to generate and map each single column...

    Read the article

  • ZF2 - How to use the Hydrator/exchangeArray() to populate a nested object

    - by Dominic Watson
    I've got an object with values that are stored in my database. My object also contains another object which is stored in the database using just the ID of it (foreign key). http://framework.zend.com/manual/2.0/en/modules/zend.stdlib.hydrator.html Before the Hydrator/exchangeArray functionality in ZF2 you would use a Mapper to grab everything you need to create the object. Now I'm trying to eliminate this extra layer by just using Hydration/exchangeArray to populate my objects but am a bit stuck on creating the nested object. Should my entity have the Inner object's table injected into it so I can create it if the ID of it is passed to my 'exchangeArray' ? Here are example entities as an example. // Village id, name, position, square_id // Map Square id, name, type Upon sending square_id to my Village's exchangeArray() function. It would get the mapTable and use hydrator to pull in the square using the ID I have. It doesn't seem right to be to have mapper instances inside my entity as I thought they should be disconnected from anything but it's own entity specific parameters and functionality?

    Read the article

  • C++ MACRO that will execute a block of code and a certain command after that block.

    - by Poni
    void main() { int xyz = 123; // original value { // code block starts xyz++; if(xyz < 1000) xyz = 1; } // code block ends int original_value = xyz; // should be 123 } void main() { int xyz = 123; // original value MACRO_NAME(xyz = 123) // the macro takes the code code that should be executed at the end of the block. { // code block starts xyz++; if(xyz < 1000) xyz = 1; } // code block ends << how to make the macro execute the "xyz = 123" statement? int original_value = xyz; // should be 123 } Only the first main() works. I think the comments explain the issue. It doesn't need to be a macro but to me it just sounds like a classical "macro-needed" case. By the way, there's the BOOST_FOREACH macro/library and I think it does the exact same thing I'm trying to achieve but it's too complex for me to find the essence of what I need. From its introductory manual page, an example: #include <string> #include <iostream> #include <boost/foreach.hpp> int main() { std::string hello( "Hello, world!" ); BOOST_FOREACH( char ch, hello ) { std::cout << ch; } return 0; }
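
    A hedged sketch of one way to get exactly this shape: a single-pass for loop attaches a trailing statement to a plain { } block, because the loop body runs once and the increment expression then runs the caller-supplied statement. RUN_AFTER is an invented name; note that a break or return inside the block would skip the trailing statement, which a destructor-based approach would not.

      #define RUN_AFTER(stmt) \
          for (bool once_ = true; once_; once_ = false, (void)(stmt))

      int main() {
          int xyz = 123;
          RUN_AFTER(xyz = 123)
          {                            // code block starts
              xyz++;
              if (xyz < 1000) xyz = 1;
          }                            // code block ends; the macro now runs xyz = 123
          int original_value = xyz;    // 123
          return 0;
      }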

    Read the article

  • Migrate Data and Schema from MySQL to SQL Server

    - by colithium
    Are there any free solutions for automatically migrating a database from MySQL to SQL Server that "just work"? I've been attempting this simple (at least I thought so) task all day now. I've tried SQL Server Management Studio's Import Data feature:

      Create an empty database
      Tasks -> Import Data...
      .NET Framework Data Provider for Odbc
      Valid DSN (verified it connects)
      Copy data from one or more tables or views
      Check 1 VERY simple table
      Click Preview
      Get Error: The preview data could not be retrieved.

      ADDITIONAL INFORMATION: ERROR [42000] [MySQL][ODBC 5.1 Driver][mysqld-5.1.45-community]You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '"table_name"' at line 1 (myodbc5.dll)

    A similar error occurs if I go through the rest of the wizard and perform the operation. The failing step is "Setting Source Connection"; the error refers to retrieving column information and then lists the above error. It can retrieve column information just fine when I modify column mappings, so I really don't know what the issue is. I've also tried getting various MySQL tools to output DDL statements that SQL Server understands, but haven't succeeded. I've tried with MySQL v5.1.11 to SQL Server 2005 and with MySQL v5.1.45 to SQL Server 2008 (with ODBC drivers 3.51.27.00 and 5.01.06.00 respectively).
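
    A hedged guess at the specific ODBC error, for whatever it is worth: the Import wizard quotes identifiers with double quotes ("table_name"), which MySQL only accepts when ANSI_QUOTES is part of the SQL mode. Turning that on for the server (or for the ODBC driver's session, depending on where it can be set) before re-running the wizard is a cheap thing to try:

      SET GLOBAL sql_mode = 'ANSI_QUOTES';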

    Read the article

  • new Stateful session bean instance without calling lookup

    - by kislo_metal
    Scenario: I have a @Singleton UserFactory (it could be @Stateless); its createSession() method generates a @Stateful UserSession bean by manual lookup. If I inject it by DI with @EJB, I will get the same instance every time the fromFactory() method is called (as it should be). What I want is to get a new instance of UserSession without performing a lookup. Q1: how can I create a new instance of a @Stateful session bean? Code:

      @Singleton
      @Startup
      @LocalBean
      public class UserFactory {

          @EJB
          private UserSession session;

          public UserFactory() {
          }

          @Schedule(second = "*/1", minute = "*", hour = "*")
          public void creatingInstances() {
              try {
                  InitialContext ctx = new InitialContext();
                  UserSession session2 = (UserSession) ctx.lookup("java:global/inferno/lic/UserSession");
                  System.out.println("in singleton UUID " + session2.getSessionUUID());
              } catch (NamingException e) {
                  e.printStackTrace();
              }
          }

          @Schedule(second = "*/1", minute = "*", hour = "*")
          public void fromFactory() {
              System.out.println("in singleton UUID " + session.getSessionUUID());
          }

          public UserSession creatSession() {
              UserSession session2 = null;
              try {
                  InitialContext ctx = new InitialContext();
                  session2 = (UserSession) ctx.lookup("java:global/inferno/lic/UserSession");
                  System.out.println("in singleton UUID " + session2.getSessionUUID());
              } catch (NamingException e) {
                  e.printStackTrace();
              }
              return session2;
          }
      }

    As I understand it, calling session.getClass().newInstance() is not the best idea. Q2: is that true? I am using GlassFish v3, EJB 3.1.
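
    A hedged sketch of a lookup-free alternative on GlassFish v3 (it assumes CDI is active, i.e. a beans.xml is present): injecting an Instance<UserSession> gives a factory-like handle, and each get() should return a fresh @Stateful proxy, since stateful beans default to the dependent scope. Whether that lifecycle matches your expectations is worth verifying before relying on it.

      import javax.ejb.Singleton;
      import javax.enterprise.inject.Instance;
      import javax.inject.Inject;

      @Singleton
      public class UserFactory {

          @Inject
          private Instance<UserSession> sessions;

          public UserSession createSession() {
              return sessions.get();   // new UserSession per call (see the assumption above)
          }
      }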

    Read the article

  • How to add an array value to the middle of an associative array?

    - by Citizen
    Lets say I have this array: $array = array('a'=>1,'z'=>2,'d'=>4); Later in the script, I want to add the value 'c'=>3 before 'z'. How can I do this? EDIT: Yes, the order is important. When I run a foreach() through the array, I do NOT want this newly added value added to the end of the array. I am getting this array from a mysql_fetch_assoc() EDIT 2: The keys I used above are placeholders. Using ksort() will not achieve what I want. EDIT 3: http://www.php.net/manual/en/function.array-splice.php#88896 accomplishes what I'm looking for but I'm looking for something simpler. EDIT 4: Thanks for the downvotes. I gave feedback to your answers and you couldn't help, so you downvoted and requested to close the question because you didn't know the answer. Thanks. EDIT 5: Take a sample db table with about 30 columns. I get this data using mysql_fetch_assoc(). In this new array, after column 'pizza' and 'drink', I want to add a new column 'full_dinner' that combines the values of 'pizza' and 'drink' so that when I run a foreach() on the said array, 'full_dinner' comes directly after 'drink'
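
    A hedged sketch of the array_slice()-based approach (essentially the linked manual comment, condensed); the function name is made up, and the keys mirror the question's example. The + union preserves string keys, which is what mysql_fetch_assoc() rows have.

      function array_insert_before(array $array, $beforeKey, array $new)
      {
          $pos = array_search($beforeKey, array_keys($array), true);
          if ($pos === false) {
              return $array + $new;                       // key not found: just append
          }
          return array_slice($array, 0, $pos, true)
               + $new
               + array_slice($array, $pos, null, true);
      }

      $array = array('a' => 1, 'z' => 2, 'd' => 4);
      $array = array_insert_before($array, 'z', array('c' => 3));
      // array('a' => 1, 'c' => 3, 'z' => 2, 'd' => 4)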

    Read the article
