Search Results

Search found 2659 results on 107 pages for 'vector drawings'.

  • Adding text over existing PDFs using reportlab

    - by Shane
    I'm interested in filling out existing PDF forms programmatically. All I really need to do is pull information from user input and then place the appropriate text over an existing PDF in the appropriate locations. I can already do this with reportlab by feeding the same sheet of paper into a printer twice, but that really rubs me the wrong way. I'm tempted to reverse engineer each existing PDF myself and draw every line and character before adding the user-entered text, but I wanted to check whether there is an easy way to take an existing PDF and set it as a background for some extra text. I'd really prefer to use Python, as it's the only language I feel comfortable with. I also realize that I could just scan the document and use the resulting raster image as a background, but I would prefer the precision of vector graphics. It seems that ReportLab has a commercial product with this functionality, and the specific function I'm looking for (copyPages) is in it - but it seems like overkill to pay for a four-figure product for a single, simple function for nonprofit use.

  • Why array size in llvm created by llvm's internal tools is not good enough for llvm?

    - by vava
    I'm getting a strange error message from the following code: ArrayType * arrayType = ArrayType::get(Type::getInt32Ty(ctx), 0); stack = builder->CreateAlloca(arrayType, ConstantArray::get(arrayType, std::vector<Constant *>())); This code compiles fine, but during execution it gives me llvm::Value* getAISize(llvm::LLVMContext&, llvm::Value*): Assertion `Amt->getType() == Type::getInt32Ty(Context) && "Malloc/Allocation array size is not a 32-bit integer!"' failed. What's interesting is that I never said what type the array size should be; it was created somewhere inside helper functions. So I naturally see no good reason for the assert to be there. But I must be doing something wrong anyway - can you help me spot the problem? Thanks in advance. PS. LLVM, as good as it is, is tremendously lacking in documentation.
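
    A minimal sketch of the likely fix, assuming a recent LLVM (header paths and hypothetical names such as mod and fn are not from the original post): CreateAlloca's second argument is the number of objects to allocate, not an initializer, and the assertion wants it to be a 32-bit integer value such as a ConstantInt.

        #include <llvm/IR/IRBuilder.h>
        #include <llvm/IR/LLVMContext.h>
        #include <llvm/IR/Module.h>

        // Hedged sketch, not the poster's original code.
        int main() {
            llvm::LLVMContext ctx;
            llvm::Module mod("demo", ctx);
            llvm::IRBuilder<> builder(ctx);

            auto *fnType = llvm::FunctionType::get(llvm::Type::getVoidTy(ctx), false);
            auto *fn = llvm::Function::Create(fnType, llvm::Function::ExternalLinkage, "f", &mod);
            builder.SetInsertPoint(llvm::BasicBlock::Create(ctx, "entry", fn));

            llvm::ArrayType *arrayType = llvm::ArrayType::get(llvm::Type::getInt32Ty(ctx), 8);
            // Allocate one [8 x i32] object; the element count is an i32 ConstantInt,
            // which is exactly what the getAISize assertion checks for.
            builder.CreateAlloca(arrayType,
                                 llvm::ConstantInt::get(llvm::Type::getInt32Ty(ctx), 1),
                                 "stack");
            builder.CreateRetVoid();
            return 0;
        }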

  • JLabel which hides text after reaching certain length or number of values

    - by Patrick Kiernan
    The purpose of the JLabel is to show who a message is to, like in a mail client, e.g. To: John, Mary, Peter, Frank, Tom, Harry. I will have the names in a vector, so I can build up a string from that and then set the label's text to it. However, it has the potential to get quite long. I was thinking it might be nice to have something like this: To: John, Mary, Peter, Frank, Tom, Harry, ... Then when you click on the '...', it expands, or just shows a tool tip if you mouse over the ... Yes, this idea is stolen from Thunderbird! I am open to other ideas; I don't have to use a JLabel. Thanks.

  • Why my button can trigger the UI to scroll and my TimerTask inside the activity can't?

    - by Spidey
    Long Story Short: a method of my activity updates and scrolls the ListView through an ArrayAdapter like it should, but a method of an internal TimerTask for polling messages (which are displayed in the ListView) updates the ListView, but don't scroll it. Why? Long Story: I have a chat activity with this layout: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" android:background="#fff" > <ListView android:id="@+id/messageList" android:layout_width="fill_parent" android:layout_height="fill_parent" android:stackFromBottom="true" android:transcriptMode="alwaysScroll" android:layout_weight="1" android:fadeScrollbars="true" /> <LinearLayout android:orientation="horizontal" android:layout_width="fill_parent" android:layout_height="wrap_content" android:gravity="center" > <EditText android:id="@+id/message" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_weight="1" /> <Button android:id="@+id/button_send" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Send" android:onClick="sendMessage" /> </LinearLayout> </LinearLayout> The internal listView (with id messageList) is populated by an ArrayAdapter which inflates the XML below and replaces strings in it. <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="wrap_content" android:clickable="false" android:background="#fff" android:paddingLeft="2dp" android:paddingRight="2dp" > <TextView xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/date" android:layout_width="wrap_content" android:layout_height="wrap_content" android:textSize="16sp" android:textColor="#00F" android:typeface="monospace" android:text="2010-10-12 12:12:03" android:gravity="left" /> <TextView xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/sender" android:layout_width="fill_parent" android:layout_height="wrap_content" android:textSize="16sp" android:textColor="#f84" android:text="spidey" android:gravity="right" android:textStyle="bold" /> <TextView xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/body" android:layout_width="fill_parent" android:layout_height="wrap_content" android:textSize="14sp" android:padding="1dp" android:gravity="left" android:layout_below="@id/date" android:text="Mensagem muito legal 123 quatro cinco seis." android:textColor="#000" /> </RelativeLayout> The problem is: in the main layout, I have a EditText for the chat message, and a Button to send the message. I have declared the adapter in the activity scope: public class ChatManager extends Activity{ private EditText et; private ListView lv; private Timestamp lastDate = null; private long campaignId; private ChatAdapter ca; private List<ChatMessage> vetMsg = new ArrayList<ChatMessage>(); private Timer chatPollingTimer; private static final int CHAT_POLLING_PERIOD = 10000; ... 
    } So, inside sendMessage(View v), the notifyDataSetChanged() scrolls the ListView accordingly, so I can see the latest chat messages automatically: public void sendMessage(View v) { String msg = et.getText().toString(); if(msg.length() == 0){ return; } et.setText(""); String xml = ServerCom.sendAndGetChatMessages(campaignId, lastDate, msg); Vector<ChatMessage> vetNew = Chat.parse(new InputSource(new StringReader(xml))); //Grab the most recent date if(!vetNew.isEmpty()){ lastDate = vetNew.lastElement().getDateSent(); //Refresh the screen vetMsg.addAll(vetNew); ca.notifyDataSetChanged(); } } But inside my TimerTask, I can't. The ListView IS UPDATED, but it just doesn't scroll automatically. What am I doing wrong? private class chatPollingTask extends TimerTask { @Override public void run() { String xml; if(lastDate != null){ //Call the updater xml = ServerCom.getChatMessages(campaignId, lastDate); }else{ //Call init again xml = ServerCom.getChatMessages(campaignId); } Vector<ChatMessage> vetNew = Chat.parse(new InputSource(new StringReader(xml))); if(!(vetNew.isEmpty())){ //TODO: figure out why the chat isn't scrolling when new messages arrive //Also figure out how to force the scroll until the bug is fixed. Log.d("CHAT", "New message(s) acquired!"); lastDate = vetNew.lastElement().getDateSent(); vetMsg.addAll(vetNew); ca.notifyDataSetChanged(); } } } How can I force the scroll to the bottom? I've tried scrollTo with lv.getBottom()-lv.getHeight(), but it didn't work. Is this a bug in the Android SDK? Sorry for the MASSIVE amount of code, but I guess this way the question gets pretty clear.

  • Matrix inversion in OpenCL

    - by buchtak
    Hi, I am trying to accelerate some computations using OpenCL, and part of the algorithm consists of inverting a matrix. Is there any open-source library or freely available code to compute the LU factorization (LAPACK dgetrf and dgetri) of a matrix, or general inversion, written in OpenCL or CUDA? The matrix is real and square but doesn't have any other special properties. So far I've managed to find only basic BLAS matrix-vector operations implemented on the GPU. The matrix is rather small, only about 60-100 rows and columns, so it could be computed faster on the CPU, but it's used roughly in the middle of the algorithm, so I would have to transfer it to the host, calculate the inverse, and then transfer the result back to the device, where it's then used in much larger computations.

  • decoding 802.11 b

    - by stan
    I have raw data grabbed from a spectrometer that was operating on a Wi-Fi (802.11b) channel 6 signal (two laptops in ad-hoc mode pinging each other). I would like to decode this data in MATLAB. I have it as a complex vector of about 4.6 million samples, and its spectrum looks quite nice. I am looking for a document a bit less complicated than the IEEE 802.11 standard (which I have). I can share the measurement data with other people.

  • C++ template name pretty print

    - by aaa
    Hello. I need to print indented template names for debugging purposes. For example, instead of a single line, I would like to indent the name like this: boost::phoenix::actor< boost::phoenix::composite< boost::phoenix::less_eval, boost::fusion::vector< boost::phoenix::argument<0>, boost::phoenix::argument<1>, I started writing my own but it is getting complicated. Is there an existing solution? If there is not one, can you help me finish up my implementation? I will post it if so. Thanks
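
    In case it helps, a rough sketch of one possible approach (not an existing library): walk the type string and break on '<', '>' and ',', tracking nesting depth. It drops the original spacing, so multi-word built-in types would need extra care.

        #include <iostream>
        #include <string>

        // Hedged sketch: indent a template name by nesting depth.
        std::string prettyTemplateName(const std::string &name) {
            const unsigned width = 4;
            unsigned depth = 0;
            std::string out;
            auto newline = [&] {
                out += '\n';
                out.append(depth * width, ' ');
            };
            for (char c : name) {
                if (c == '<')      { out += c; ++depth; newline(); }
                else if (c == '>') { if (depth > 0) --depth; newline(); out += c; }
                else if (c == ',') { out += c; newline(); }
                else if (c != ' ') { out += c; }   // drop original spacing
            }
            return out;
        }

        int main() {
            std::cout << prettyTemplateName(
                "boost::phoenix::actor<boost::phoenix::composite<boost::phoenix::less_eval, "
                "boost::fusion::vector<boost::phoenix::argument<0>, boost::phoenix::argument<1> > > >")
                      << '\n';
        }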

  • Parsing "true" and "false" using Boost.Spirit.Lex and Boost.Spirit.Qi

    - by Andrew Ross
    As the first stage of a larger grammar using Boost.Spirit I'm trying to parse "true" and "false" to produce the corresponding bool values, true and false. I'm using Spirit.Lex to tokenize the input and have a working implementation for integer and floating point literals (including those expressed in a relaxed scientific notation), exposing int and float attributes. Token definitions #include <boost/spirit/include/lex_lexertl.hpp> namespace lex = boost::spirit::lex; typedef boost::mpl::vector<int, float, bool> token_value_type; template <typename Lexer> struct basic_literal_tokens : lex::lexer<Lexer> { basic_literal_tokens() { this->self.add_pattern("INT", "[-+]?[0-9]+"); int_literal = "{INT}"; // To be lexed as a float a numeric literal must have a decimal point // or include an exponent, otherwise it will be considered an integer. float_literal = "{INT}(((\\.[0-9]+)([eE]{INT})?)|([eE]{INT}))"; literal_true = "true"; literal_false = "false"; this->self = literal_true | literal_false | float_literal | int_literal; } lex::token_def<int> int_literal; lex::token_def<float> float_literal; lex::token_def<bool> literal_true, literal_false; }; Testing parsing of float literals My real implementation uses Boost.Test, but this is a self-contained example. #include <string> #include <iostream> #include <cmath> #include <cstdlib> #include <limits> bool parse_and_check_float(std::string const & input, float expected) { typedef std::string::const_iterator base_iterator_type; typedef lex::lexertl::token<base_iterator_type, token_value_type > token_type; typedef lex::lexertl::lexer<token_type> lexer_type; basic_literal_tokens<lexer_type> basic_literal_lexer; base_iterator_type input_iter(input.begin()); float actual; bool result = lex::tokenize_and_parse(input_iter, input.end(), basic_literal_lexer, basic_literal_lexer.float_literal, actual); return result && std::abs(expected - actual) < std::numeric_limits<float>::epsilon(); } int main(int argc, char *argv[]) { if (parse_and_check_float("+31.4e-1", 3.14)) { return EXIT_SUCCESS; } else { return EXIT_FAILURE; } } Parsing "true" and "false" My problem is when trying to parse "true" and "false". This is the test code I'm using (after removing the Boost.Test parts): bool parse_and_check_bool(std::string const & input, bool expected) { typedef std::string::const_iterator base_iterator_type; typedef lex::lexertl::token<base_iterator_type, token_value_type > token_type; typedef lex::lexertl::lexer<token_type> lexer_type; basic_literal_tokens<lexer_type> basic_literal_lexer; base_iterator_type input_iter(input.begin()); bool actual; lex::token_def<bool> parser = expected ? 
basic_literal_lexer.literal_true : basic_literal_lexer.literal_false; bool result = lex::tokenize_and_parse(input_iter, input.end(), basic_literal_lexer, parser, actual); return result && actual == expected; } but compilation fails with: boost/spirit/home/qi/detail/assign_to.hpp: In function ‘void boost::spirit::traits::assign_to(const Iterator&, const Iterator&, Attribute&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Attribute = bool]’: boost/spirit/home/lex/lexer/lexertl/token.hpp:434: instantiated from ‘static void boost::spirit::traits::assign_to_attribute_from_value<Attribute, boost::spirit::lex::lexertl::token<Iterator, AttributeTypes, HasState>, void>::call(const boost::spirit::lex::lexertl::token<Iterator, AttributeTypes, HasState>&, Attribute&) [with Attribute = bool, Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, AttributeTypes = boost::mpl::vector<int, float, bool, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, HasState = mpl_::bool_<true>]’ ... backtrace of instantiation points .... boost/spirit/home/qi/detail/assign_to.hpp:79: error: no matching function for call to ‘boost::spirit::traits::assign_to_attribute_from_iterators<bool, __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, void>::call(const __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >&, const __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >&, bool&)’ boost/spirit/home/qi/detail/construct.hpp:64: note: candidates are: static void boost::spirit::traits::assign_to_attribute_from_iterators<bool, Iterator, void>::call(const Iterator&, const Iterator&, char&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >] My interpretation of this is that Spirit.Qi doesn't know how to convert a string to a bool - surely that's not the case? Has anyone else done this before? If so, how?
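
    For what it's worth, a hedged sketch that sidesteps the token-attribute question entirely: plain Spirit.Qi (without Lex) already converts "true"/"false" to bool via its built-in bool_ parser, so one workaround is to let Qi recognise the keyword itself. This is not integrated with the Lex-based grammar above.

        #include <boost/spirit/include/qi.hpp>
        #include <iostream>
        #include <string>

        namespace qi = boost::spirit::qi;

        int main() {
            const std::string input = "true";
            bool value = false;
            std::string::const_iterator first = input.begin();
            // bool_ exposes a bool attribute directly.
            const bool ok = qi::parse(first, input.end(), qi::bool_, value)
                            && first == input.end();
            std::cout << std::boolalpha << ok << ' ' << value << '\n';   // prints: true true
        }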

  • Floodfill with "layers"

    - by user146780
    What I want is to create a vector drawing program with layers, but to avoid using transparency/opacity I want to draw each shape, from the lowest layer to the highest, onto a single bitmap. For the filling, I want to flood-fill the shape. My issue is that if one shape is drawn and flood-filled, and the next shape overlaps it a bit and that new shape's border is the same color as the first one's, the flood fill will only partially fill it. Is there a way, given a shape's coordinates, to find the actual bounds for the fill rather than relying on a target color? Thanks
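
    One geometry-based alternative (a sketch only, not tied to any particular drawing API): instead of flood-filling from a seed colour, test candidate pixels against the shape's own outline with an even-odd crossing rule, so an overlapping border of the same colour no longer matters.

        #include <cstddef>
        #include <vector>

        struct Point { double x, y; };

        // Hedged sketch: classic even-odd (ray-crossing) point-in-polygon test.
        bool insidePolygon(const std::vector<Point> &poly, double px, double py) {
            bool inside = false;
            for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
                if ((poly[i].y > py) != (poly[j].y > py)) {
                    const double xCross = poly[i].x +
                        (py - poly[i].y) * (poly[j].x - poly[i].x) / (poly[j].y - poly[i].y);
                    if (px < xCross)
                        inside = !inside;
                }
            }
            return inside;
        }

    To fill, one could scan the shape's bounding box and set the pixels that pass this test, compositing shapes from the lowest layer to the highest as described above.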

  • Reinforcement learning with neural networks

    - by Betamoo
    I am working on a project with RL and NN, and I need to determine the structure of the action vector that will be fed to the neural network. I have 3 different actions (A, B and Nothing), each with different powers (e.g. A100, A50, B100, B50). I wonder what is the best way to feed these actions to a NN in order to yield the best results? 1) feed A/B to input 1, and the action power 100/50/Nothing to input 2; 2) feed A100/A50/Nothing to input 1, and B100/B50/Nothing to input 2; 3) feed A100/A50 to input 1, B100/B50 to input 2, and a Nothing flag to input 3. 4) Also, should I feed 100 and 50 as they are, or normalize them to 2 and 1? I need reasons for choosing one method. Any suggestions are welcome. Thanks

  • Datatype to use for collection of QT buttons

    - by different
    Hi everyone, I am brand new to Qt and need to develop the Mancala game. Since I'm brand new to the Qt environment, my plan is to keep things very simple. I will be using the "Push Button" widget as the pieces of the game. Since two players play this game, my idea is to have two arrays of buttons: one array for player 1 and the other for player 2. My question is, since I am using "Push Button" widgets, how can I group them to iterate through? I notice that Qt has both array and vector data types, but I'm confused about how these data types can be used to "group" the buttons. Does anyone know of any sample code or tutorials to look at to learn more? Thanks for your time and any input provided.
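
    A small sketch of one way to group them, assuming Qt widgets (QApplication, QPushButton, QVector and friends are real Qt classes; the six pits and four seeds per pit are made-up Mancala details): keep the QPushButton pointers in a QVector (or std::vector) per player and iterate over that container.

        #include <QApplication>
        #include <QHBoxLayout>
        #include <QPushButton>
        #include <QVector>
        #include <QWidget>

        int main(int argc, char *argv[]) {
            QApplication app(argc, argv);
            QWidget window;
            QHBoxLayout *row = new QHBoxLayout(&window);

            // One group of buttons per player; the container just stores pointers.
            QVector<QPushButton *> playerOnePits;
            for (int i = 0; i < 6; ++i) {
                QPushButton *pit = new QPushButton(QString::number(4), &window); // 4 seeds per pit
                row->addWidget(pit);
                playerOnePits.append(pit);
            }

            // Iterating the group later, e.g. to refresh the seed counts:
            for (int i = 0; i < playerOnePits.size(); ++i)
                playerOnePits[i]->setText(QString::number(4));

            window.show();
            return app.exec();
        }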

  • Faster projected-norm (quadratic-form, metric-matrix...) style computations

    - by thekindamzkyoulike
    I need to perform lots of evaluations of the form X(:,i)' * A * X(:,i) i = 1...n where X(:,i) is a vector and A is a symmetric matrix. Ostensibly, I can either do this in a loop for i=1:n z(i) = X(:,i)' * A * X(:,i) end which is slow, or vectorise it as z = diag(X' * A * X) which wastes RAM unacceptably when X has a lot of columns. Currently I am compromising on Y = A * X for i=1:n z(i) = Y(:,i)' * X(:,i) end which is a little faster/lighter but still seems unsatisfactory. I was hoping there might be some matlab/scilab idiom or trick to achieve this result more efficiently?
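
    One identity worth noting (a hedged suggestion, not benchmarked): each diagonal entry of X'AX only needs the matching columns of the two factors,

        z_i = x_i' * A * x_i = sum_k X(k,i) * (A*X)(k,i),

    so z can be formed from an elementwise product of X with A*X followed by a column sum (in MATLAB terms, something like sum(X .* (A*X), 1)), which keeps the A*X product but avoids both the explicit loop and the n-by-n intermediate that diag(X' * A * X) materialises.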

  • High density Silverlight charting control

    - by ahosie
    I've been looking into Silverlight charting controls to display a large number of samples, (~10,000 data points in five separate series - ~50k points all up). I have found the existing options produced by Dundas, Visifire, Microsoft etc to be extremely poor performers when displaying more than a few hundred data points. I believe the performance issues with existing chart controls is caused by the heavy use of vector graphics. Ergo one solution would be a client-side chart control that uses the WritableBitmap class to generate a raster chart. Before I fall too far down the wheel re-invention rabbit hole - has anyone found a third party or OSS control that will manage large numbers of data points on a sparkline?

  • Idiomatic way to do list/dict in Cython?

    - by ramanujan
    My problem: I've found that processing large data sets with raw C++ using the STL map and vector can often be considerably faster (and with lower memory footprint) than using Cython. I figure that part of this speed penalty is due to using Python lists and dicts, and that there might be some tricks to use less encumbered data structures in Cython. For example, this page (http://wiki.cython.org/tutorials/numpy) shows how to make numpy arrays very fast in Cython by predefining the size and types of the ND array. Question: Is there any way to do something similar with lists/dicts, e.g. by stating roughly how many elements or (key,value) pairs you expect to have in them? That is, is there an idiomatic way to convert lists/dicts to (fast) data structures in Cython? If not I guess I'll just have to write it in C++ and wrap in a Cython import.

  • How can HTML5 "replace" Flash?

    - by Kassini
    A topic of debate that's seen a resurgence since the unveiling of the iPad is the issue of Flash versus HTML5. There are those who suggest that HTML5 will one day supplant/replace Adobe Flash. I do not develop software that runs in a browser, so my (limited) understanding is: HTML is a pure-text markup language that is delivered over HTTP to a client browser. The client browser interprets the markup and renders (with varying degrees of success) the page according to a standard specification. Adobe Flash is a proprietary framework for working with audio, video, sound and raster/vector graphics. It requires special authoring tools (a compiler perhaps?) and a custom player that's available as a plug-in for most common browsers. Could someone please explain (to this C/C++ developer) how it is possible, from a technical/coding point of view, that a text-based markup language (HTML5) could be considered a replacement for a multimedia framework (Flash)? Please, no opinionated arguments - just technical facts.

  • Signals and threads - good or bad design decision?

    - by Jens
    I have to write a program that performs highly computationally intensive calculations. The program might run for several days. The calculation can be separated easily in different threads without the need of shared data. I want a GUI or a web service that informs me of the current status. My current design uses BOOST::signals2 and BOOST::thread. It compiles and so far works as expected. If a thread finished one iteration and new data is available it calls a signal which is connected to a slot in the GUI class. My question(s): Is this combination of signals and threads a wise idea? I another forum somebody advised someone else not to "go down this road". Are there potential deadly pitfalls nearby that I failed to see? Is my expectation realistic that it will be "easy" to use my GUI class to provide a web interface or a QT, a VTK or a whatever window? Is there a more clever alternative (like other boost libs) that I overlooked? following code compiles with g++ -Wall -o main -lboost_thread-mt <filename>.cpp code follows: #include <boost/signals2.hpp> #include <boost/thread.hpp> #include <boost/bind.hpp> #include <iostream> #include <iterator> #include <string> using std::cout; using std::cerr; using std::string; /** * Called when a CalcThread finished a new bunch of data. */ boost::signals2::signal<void(string)> signal_new_data; /** * The whole data will be stored here. */ class DataCollector { typedef boost::mutex::scoped_lock scoped_lock; boost::mutex mutex; public: /** * Called by CalcThreads call the to store their data. */ void push(const string &s, const string &caller_name) { scoped_lock lock(mutex); _data.push_back(s); signal_new_data(caller_name); } /** * Output everything collected so far to std::out. */ void out() { typedef std::vector<string>::const_iterator iter; for (iter i = _data.begin(); i != _data.end(); ++i) cout << " " << *i << "\n"; } private: std::vector<string> _data; }; /** * Several of those can calculate stuff. * No data sharing needed. */ struct CalcThread { CalcThread(string name, DataCollector &datcol) : _name(name), _datcol(datcol) { } /** * Expensive algorithms will be implemented here. * @param num_results how many data sets are to be calculated by this thread. */ void operator()(int num_results) { for (int i = 1; i <= num_results; ++i) { std::stringstream s; s << "["; if (i == num_results) s << "LAST "; s << "DATA " << i << " from thread " << _name << "]"; _datcol.push(s.str(), _name); } } private: string _name; DataCollector &_datcol; }; /** * Maybe some VTK or QT or both will be used someday. */ class GuiClass { public: GuiClass(DataCollector &datcol) : _datcol(datcol) { } /** * If the GUI wants to present or at least count the data collected so far. * @param caller_name is the name of the thread whose data is new. */ void slot_data_changed(string caller_name) const { cout << "GuiClass knows: new data from " << caller_name << std::endl; } private: DataCollector & _datcol; }; int main() { DataCollector datcol; GuiClass mc(datcol); signal_new_data.connect(boost::bind(&GuiClass::slot_data_changed, &mc, _1)); CalcThread r1("A", datcol), r2("B", datcol), r3("C", datcol), r4("D", datcol), r5("E", datcol); boost::thread t1(r1, 3); boost::thread t2(r2, 1); boost::thread t3(r3, 2); boost::thread t4(r4, 2); boost::thread t5(r5, 3); t1.join(); t2.join(); t3.join(); t4.join(); t5.join(); datcol.out(); cout << "\nDone" << std::endl; return 0; }

  • Why does this crash with access violation to 0xcccccc...?

    - by Mike
    I have a random piece of code I use for reading from CSV files... and it's fine... until after about 2000 reads, when the getline line fails with an access violation to 0xcccccc... which I assume means that the input stream (file) has been cleared... Not that I know why :) int CCSVManager::ReadCSVLine ( fstream * fsInputFile, vector <string> * recordData ) { string s; getline ( *fsInputFile, s ); stringstream iss( s ); for ( unsigned int i = 0; i < getNumFields (); i++ ) { getline ( iss, s, ',' ); (*recordData)[i] = s; } return 0; } Any ideas why?
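
    A hedged observation: 0xCCCCCCCC is the fill pattern MSVC debug builds use for uninitialised stack memory, so the crash more likely means the fstream* or recordData pointer being dereferenced is dangling or uninitialised (or recordData holds fewer than getNumFields() elements) than that the stream was "cleared". For comparison, a defensive sketch of the same routine (not the original class; it takes references and assumes no fixed field count):

        #include <fstream>
        #include <sstream>
        #include <string>
        #include <vector>

        // Read one CSV record, reporting failure instead of writing through a
        // possibly invalid pointer or out-of-range index.
        bool readCsvLine(std::istream &in, std::vector<std::string> &record) {
            std::string line;
            if (!std::getline(in, line))          // stream at EOF or in a failed state
                return false;

            record.clear();
            std::stringstream fields(line);
            std::string field;
            while (std::getline(fields, field, ','))
                record.push_back(field);          // grows as needed; no fixed field count
            return true;
        }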

  • How to create Encryption Key for Encryption Algorithms?

    - by Akash Kava
    I want to use the encryption algorithms available in the .NET Security namespace; however, I am trying to understand how to generate the key. For example, the AES algorithm needs a 256-bit key, which is 32 bytes, plus an initialization vector, which is also a few bytes. Can I use any combination of values in my Key and IV? E.g., are all zeros in the Key and IV valid or not? I know the details of the algorithm, which does lots of XORs, so all zeros won't serve any good, but are there any restrictions imposed by these algorithms? Or do I have to generate the key using some program and save it permanently somewhere?

  • Using local classes with STL algorithms

    - by David Rodríguez - dribeas
    I have always wondered why you cannot use locally defined classes as predicates to STL algorithms. In the question Approaching STL algorithms, lambda, local classes and other approaches, BubbaT mentions that 'Since the C++ standard forbids local types to be used as arguments'. Example code: int main() { int array[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 }; std::vector<int> v( array, array+10 ); struct pair : public std::unary_function<int,bool> { bool operator()( int x ) { return !( x % 2 ); } }; std::remove_if( v.begin(), v.end(), pair() ); // error } Does anyone know where in the standard the restriction is? What is the rationale for disallowing local types?
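
    As a hedged aside: C++11 lifted this restriction (local and unnamed types became valid template arguments), and a lambda avoids the local functor altogether. A minimal sketch of the same call:

        #include <algorithm>
        #include <vector>

        int main() {
            int array[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
            std::vector<int> v(array, array + 10);
            // C++11: no local struct needed, and remove_if accepts the closure type.
            v.erase(std::remove_if(v.begin(), v.end(),
                                   [](int x) { return !(x % 2); }),
                    v.end());
            return 0;
        }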

  • OpenLayers: Raise event when map is zoomed or moved by user

    - by David Pfeffer
    I'm using OpenLayers to display OpenStreetMap maps. (Though, I'd assume this should be general enough to work for any map product...) I'm displaying some very sophisticated vector overlays, and the amount and resolution of the features I'm returning from the server via GeoJSON to overlay has proven too much for many computers. What I'd like to do now instead is to only send data befitting the resolution of the current zoom, and fitting the current view port. This should be relatively easy to do using the GetResolution and CalculateBounds methods on the Map object. However, I don't know when to call these methods because I can't find a way to register a function to be called when the user pans the map (changing the view port) or zooms the map (changing the resolution and view port). How can I get a callback when the user pans or zooms the map?

  • How to change handedness of coordinates?

    - by 742
    How do I convert from Euler coordinates E1 = (x1, y1, z1, yaw1, pitch1, roll1) to E2 = (x2, y2, z2, yaw2, pitch2, roll2), where x, y, z are the coordinates of a point and yaw, pitch, roll are the direction/orientation of a vector whose origin is the point? Yaw is around y, pitch around x, roll around z, performed in that order. Yaw 0 is normal to the xy plane (opposite to z in E1 and equal to z in E2). E1 uses a right-handed space and E2 a left-handed space. Both have the same origin, the same direction for y (up) and z (into the screen). They differ by x, which points to the left in E1 and to the right in E2. They also differ by their direction of positive rotations. I have a custom type to hold the scalar representation and to convert from and to the equivalent WPF Matrix3d representation.

  • Templated Control databinding to custom properties

    - by Dan Wray
    Is there some trick that I'm missing here? I've created a templated control, very simple. One single property on it, and I'd like to databind from the (viewmodel/datacontext of the) page in which it's hosted to a custom property on the control. The property will eventually be a vector type object, defining the position of the control, however in an attempt to get this to work I've tried reducing it to a basic string property. Each time I'm faced with "Set property 'SimpleGame.Classes.Sprite.Property' threw an exception.". I can't even catch the exception in a debug session, the set property code is not being executed. Do I need to use a dependency / attached property or something? I wouldn't have thought so...

  • Could the assign function for containers possibly overflow?

    - by Kristo
    I ran into this question today and thought I should post it for the community's reference and/or opinions. The standard C++ containers vector, deque, list, and string provide an assign member function. There are two versions; I'm primarily interested in the one accepting an iterator range. The Josuttis book is a little ambiguous with its description. From p. 237... Assigns all elements of the range [beg,end); that is, it replaces all existing elements with copies of the elements of [beg,end). It doesn't say what happens if the size of the assignee container is different from the size of the range being assigned. Does it truncate? Does it automagically expand? Is it undefined behavior?
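
    For what it's worth, the behaviour is well defined: assign replaces the old contents outright, so the container's new size is exactly the length of the range, whatever the size was before. A small demonstration:

        #include <cassert>
        #include <vector>

        int main() {
            std::vector<int> v(10, 7);            // ten elements to start with
            const int three[] = { 1, 2, 3 };
            v.assign(three, three + 3);           // contents replaced, size shrinks to 3
            assert(v.size() == 3);

            const int twelve[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 };
            v.assign(twelve, twelve + 12);        // and grows to 12 as needed
            assert(v.size() == 12);
            return 0;
        }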

  • How do you calculate the reflex angle given to vectors in 3D space?

    - by Reimund
    I want to calculate the angle between two vectors a and b. Let's assume these are at the origin. This can be done with theta = arccos(a . b / (|a| * |b|)) However, arccos gives you the angle in [0, pi], i.e. it will never give you an angle greater than 180 degrees, and an angle greater than 180 degrees is exactly what I want to be able to get. So how do you find out when the vectors have gone past the 180 degree mark? In 2D I would simply let the sign of the y-component of one of the vectors determine which quadrant the vector is in. But what is the easiest way to do it in 3D?
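
    A hedged note on the usual resolution: in 3D there is no intrinsic "past 180 degrees" until you pick a reference normal n for the plane of a and b (the analogue of the y-sign test in 2D). With a unit normal n chosen, the signed angle is

        theta = atan2( n . (a x b), a . b ),

    and adding 2*pi whenever theta is negative gives a value in [0, 2*pi). Flipping the choice of n flips which angles count as reflex.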

  • Vim OmniCppComplete on vectors of pointers

    - by Alex
    Hi, I might have done something wrong in the set up but is OmniCppComplete supposed to provide the members/functions of classes when doing this? vectorofpointers[0]-> At the moment all I get when trying that are things relating to the vector class itself, which obviously isn't very useful. I think it might have been working before I tagged /usr/include/ but I could be wrong. Also, is it possible to disable the preview window? I find it just clutters up my workspace. And since I enabled ShowPrototypeInAbbr I don't really need it. Thanks, Alex
