Search Results

Search found 9647 results on 386 pages for 'cross compile'.

Page 356/386 | < Previous Page | 352 353 354 355 356 357 358 359 360 361 362 363  | Next Page >

  • Undefined symbols for C++0x lambdas?

    - by Austin Hyde
    I was just poking around into some new stuff in C++0x, when I hit a stumbling block: #include <list> #include <cstdio> using namespace std; template <typename T,typename F> void ForEach (list<T> l, F f) { for (typename list<T>::iterator it=l.begin();it!=l.end();++it) f(*it); } int main() { int arr[] = {1,2,3,4,5,6}; list<int> l (arr,arr+6); ForEach(l,[](int x){printf("%d\n",x);}); } does not compile. I get a load of undefined symbol errors. Here's make's output: i386-apple-darwin9-gcc-4.5.0 -std=c++0x -I/usr/local/include -o func main.cpp Undefined symbols: "___cxa_rethrow", referenced from: std::_List_node<int>* std::list<int, std::allocator<int> >::_M_create_node<int const&>(int const&&&) in ccPxxPwU.o "operator new(unsigned long)", referenced from: __gnu_cxx::new_allocator<std::_List_node<int> >::allocate(unsigned long, void const*) in ccPxxPwU.o "___gxx_personality_v0", referenced from: ___gxx_personality_v0$non_lazy_ptr in ccPxxPwU.o "___cxa_begin_catch", referenced from: std::_List_node<int>* std::list<int, std::allocator<int> >::_M_create_node<int const&>(int const&&&) in ccPxxPwU.o "operator delete(void*)", referenced from: __gnu_cxx::new_allocator<std::_List_node<int> >::deallocate(std::_List_node<int>*, unsigned long) in ccPxxPwU.o "___cxa_end_catch", referenced from: std::_List_node<int>* std::list<int, std::allocator<int> >::_M_create_node<int const&>(int const&&&) in ccPxxPwU.o "std::__throw_bad_alloc()", referenced from: __gnu_cxx::new_allocator<std::_List_node<int> >::allocate(unsigned long, void const*) in ccPxxPwU.o "std::_List_node_base::_M_hook(std::_List_node_base*)", referenced from: void std::list<int, std::allocator<int> >::_M_insert<int const&>(std::_List_iterator<int>, int const&&&) in ccPxxPwU.o ld: symbol(s) not found collect2: ld returned 1 exit status make: *** [func] Error 1 Why is this not working?
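
    Every symbol the linker reports missing here (operator new, operator delete, the ___cxa_* helpers, ___gxx_personality_v0, std::__throw_bad_alloc) lives in the C++ runtime, and the quoted build line drives the link through the gcc driver rather than g++, so the most likely cause is that libstdc++/libsupc++ never reach the link step. A minimal sketch of the same program, with the assumed fix noted in the build comment:

        // Assumed fix: link with the C++ driver, e.g.
        //   i386-apple-darwin9-g++-4.5.0 -std=c++0x -o func main.cpp
        // (or keep gcc and append -lstdc++ to the link line).
        #include <cstdio>
        #include <list>

        template <typename T, typename F>
        void ForEach(std::list<T> l, F f) {
            for (typename std::list<T>::iterator it = l.begin(); it != l.end(); ++it)
                f(*it);
        }

        int main() {
            int arr[] = {1, 2, 3, 4, 5, 6};
            std::list<int> l(arr, arr + 6);
            ForEach(l, [](int x) { printf("%d\n", x); });  // same lambda as in the question
        }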

    Read the article

  • Boost Python - Limits to the number of arguments when wrapping a function

    - by Derek
    I'm using Boost Python to wrap some C++ functions that I've created. One of my C++ functions contains 22 arguments. Boost complains when I try to compile my solution with this function, and I'm trying to figure out if it is just because this function has too many arguments. Does anyone know if such a limit exists? I've copied the error I'm getting below, not the code because I figure someone either knows the answer to this or not - and if there is no limit then I'll just try to figure it out myself. Thanks very much in advance! Here is a copy of the beginning of the error message I receive... 1>main.cpp 1>c:\cpp_ext\boost\boost_1_47\boost\python\make_function.hpp(76): error C2780: 'boost::mpl::vector17<RT,most_derived<Target,ClassT>::type&,T0,T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14> boost::python::detail::get_signature(RT (__thiscall ClassT::* )(T0,T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14) volatile const,Target *)' : expects 2 arguments - 1 provided 1>c:\cpp_ext\boost\boost_1_47\boost\python\signature.hpp(236) : see declaration of 'boost::python::detail::get_signature' And eventually I get about a hundred copies of error messages very much resembling this one: 1>c:\cpp_ext\boost\boost_1_47\boost\python\make_function.hpp(76): error C2784: 'boost::mpl::vector17<RT,ClassT&,T0,T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14> boost::python::detail::get_signature(RT (__thiscall ClassT::* )(T0,T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14) volatile const)' : could not deduce template argument for 'RT (__thiscall ClassT::* )(T0,T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14) volatile const' from 'std::string (__cdecl *)(const std::string &,jal::date::JULIAN_DATE,const std::string &,const std::string &,int,const std::string &,const std::string &,const std::string &,const std::string &,const std::string &,const std::string &,int,const std::string &,const std::string &,int,const std::string &,const std::string &,const std::string &,const std::string &,const std::string &,int,const std::string &)' 1> c:\cpp_ext\boost\boost_1_47\boost\python\signature.hpp(218) : see declaration of 'boost::python::detail::get_signature' 1>c:\cpp_ext\boost\boost_1_47\boost\python\make_function.hpp(76): error C2780: 'boost::mpl::vector17<RT,most_derived<Target,ClassT>::type&,T0,T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14> boost::python::detail::get_signature(RT (__thiscall ClassT::* )(T0,T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14) volatile,Target *)' : expects 2 arguments - 1 provided 1> c:\cpp_ext\boost\boost_1_47\boost\python\signature.hpp(236) : see declaration of 'boost::python::detail::get_signature'
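
    Boost.Python's signature helpers are preprocessor-generated up to a fixed arity controlled by BOOST_PYTHON_MAX_ARITY, which defaults to 15 - that is why every get_signature overload in the error stops at T14 (vector17 = return type + class + 15 parameters), and a 22-argument function falls outside all of them. The usual workaround, sketched below with a made-up function and module name, is to raise the macro before any Boost.Python header is included (depending on the Boost version, the matching MPL vector limits may need raising as well):

        // Assumed workaround: raise the arity limit before including Boost.Python.
        #define BOOST_PYTHON_MAX_ARITY 25
        #include <boost/python.hpp>

        #include <string>

        // Hypothetical stand-in for the real 22-argument function.
        std::string build_record(const std::string& a, const std::string& b, int c) {
            return a + b;
        }

        BOOST_PYTHON_MODULE(example)   // module name is invented for the sketch
        {
            boost::python::def("build_record", &build_record);
        }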

    Read the article

  • Error inflating class android.widget.CompoundButton

    - by snctln
    [Disclaimer: This has been cross-posted on the Android Developers Google Group.] I am trying to use a CompoundButton in a project I am working on. Every time I try to use it by declaring it in my layout XML file I receive the error "01-04 12:27:46.471: ERROR/AndroidRuntime(1771): Caused by: android.view.InflateException: Binary XML file line #605: Error inflating class android.widget.CompoundButton" After fighting the error for half an hour I decided to try a minimalistic example. I am using the latest Eclipse developer tools, and targeting Android 2.2, making the minimum SDK required 2.2 (8). Here is the activity Java code: package com.example.CompoundButtonExample; import android.app.Activity; import android.os.Bundle; public class CompoundButtonExampleActivity extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); } } Here is the layout XML code: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" > <TextView android:layout_width="fill_parent" android:layout_height="wrap_content" android:text="@string/hello" /> <CompoundButton android:layout_width="fill_parent" android:layout_height="wrap_content" android:text="@string/hello" /> </LinearLayout> Here is the manifest XML code: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.CompoundButtonExample" android:versionCode="1" android:versionName="1.0"> <application android:icon="@drawable/icon" android:label="@string/app_name"> <activity android:name=".CompoundButtonExampleActivity" android:label="@string/app_name"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> <uses-sdk android:minSdkVersion="8" /> </manifest> As you can see, it is just the default Hello World project that Eclipse creates for you when you start a new Android project. It only differs in the fact that I add a "CompoundButton" to the main layout in a vertical LinearLayout. Can anyone confirm this bug? Or tell me what I am doing wrong?

    Read the article

  • Silverlight ControlTemplate and F#

    - by akaphenom
    Has anybody had any success incorporating a Silverlight ControlTemplate into an F# Silverlight application? I am trying to add transitions to the Navigation.Frame element and following along with a C# example: http://www.davidpoll.com/2009/07/19/silverlight-3-navigation-adding-transitions-to-the-frame-control The downloaded source uses the MSBUILD:Compile option on the template XAML and the file is included as a "Page"... ILDASM doesn't show any object created for the XAML; in my project I included it as a "Resource" (same as I have done for my pages) and referenced it in app.xaml: <Application xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:Class="Module1.MyApp"> <Application.Resources> <ResourceDictionary> <ResourceDictionary.MergedDictionaries> <ResourceDictionary Source="/FSSilverlightApp;component/TransitioningFrame.xaml"/> </ResourceDictionary.MergedDictionaries> </ResourceDictionary> </Application.Resources> </Application> The TransitioningFrame.xaml is as follows: <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:navigation="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Navigation" xmlns:toolkit="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Layout.Toolkit"> <ControlTemplate x:Key="TransitioningFrame" TargetType="navigation:Frame"> <Border Background="{TemplateBinding Background}" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" VerticalAlignment="{TemplateBinding VerticalContentAlignment}"> <toolkit:TransitioningContentControl Content="{TemplateBinding Content}" Cursor="{TemplateBinding Cursor}" Margin="{TemplateBinding Padding}" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" VerticalAlignment="{TemplateBinding VerticalContentAlignment}" HorizontalContentAlignment="{TemplateBinding HorizontalContentAlignment}" VerticalContentAlignment="{TemplateBinding VerticalContentAlignment}" Transition="DefaultTransition" /> </Border> </ControlTemplate> </ResourceDictionary> My page objects all load their respective XAML with the following code: type Page1() as this = inherit UriUserControl("/FSSilverlightApp;component/Page1.xaml") do Application.LoadComponent(this, base.uri) and somewhere in app startup: let p1 = new Page1() I do not have a comparable piece for the ControlTemplate - though I was hoping the application object and App.xaml would pull it in magically (as an aside, the reliance on this magic has made setting up a 100% F# Silverlight application rather tricky, as nearly all the published articles I find are based around wizards and shortcuts, with very little on the actual plumbing - ugh). Any advice or thoughts on the subject are appreciated.

    Read the article

  • A mysterious compilation error: cannot convert from 'const boost::shared_ptr<T>' to 'const boost::shared_ptr<T>'

    - by Stephane Rolland
    I wanted to protect the access to a log file that I use for multithreaded logging with boostlog library. I tried this stream class class ThreadSafeStream { public: template <typename TInput> const ThreadSafeStream& operator<< (const TInput &tInput) const { // some thread safe file access return *this; } }; using it this way (text_sink is a boostlog object): //... m_spSink.reset(new text_sink); text_sink::locked_backend_ptr pBackend = m_spSink->locked_backend(); const boost::shared_ptr< ThreadSafeStream >& spFileStream = boost::make_shared<ThreadSafeStream>(); pBackend->add_stream(spFileStream); // this causes the compilation error and I get this mysterious error: cannot convert from 'const boost::shared_ptr<T>' to 'const boost::shared_ptr<T>' the whole compile error: Log.cpp(79): error C2664: 'boost::log2_mt_nt5::sinks::basic_text_ostream_backend<CharT>::add_stream' : cannot convert parameter 1 from 'const boost::shared_ptr<T>' to 'const boost::shared_ptr<T> &' 1> with 1> [ 1> CharT=char 1> ] 1> and 1> [ 1> T=ThreadSafeStream 1> ] 1> and 1> [ 1> T=std::basic_ostream<char,std::char_traits<char>> 1> ] 1> Reason: cannot convert from 'const boost::shared_ptr<T>' to 'const boost::shared_ptr<T>' 1> with 1> [ 1> T=ThreadSafeStream 1> ] 1> and 1> [ 1> T=std::basic_ostream<char,std::char_traits<char>> 1> ] 1> No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called I suspect that I am not well defining the operator<<()... but I don't find what is wrong.
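
    The template arguments in the message give the cause away: add_stream expects a const boost::shared_ptr<std::basic_ostream<char>>&, and ThreadSafeStream does not derive from std::ostream, so shared_ptr<ThreadSafeStream> has no conversion to it - the operator<< is not the problem. A minimal sketch (an assumption about the intent, not the real implementation) of one way out: put the thread-safe file access in a streambuf and expose it through a real std::ostream, so the shared_ptr converts:

        #include <iostream>
        #include <streambuf>
        #include <boost/make_shared.hpp>
        #include <boost/shared_ptr.hpp>

        // Sketch only: the locking and file logic are elided.
        class ThreadSafeBuf : public std::streambuf {
        protected:
            int_type overflow(int_type ch) {
                // thread-safe file access would go here
                return ch;
            }
        };

        class ThreadSafeStream : public std::ostream {
        public:
            ThreadSafeStream() : std::ostream(0) { rdbuf(&buf_); }
        private:
            ThreadSafeBuf buf_;
        };

        // Usage sketch: this shared_ptr now converts to shared_ptr<std::ostream>,
        // which is what pBackend->add_stream(...) is asking for.
        boost::shared_ptr<std::ostream> spFileStream = boost::make_shared<ThreadSafeStream>();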

    Read the article

  • How to create a compiler in vb.net

    - by Cyclone
    Before answering this question, understand that I am not asking how to create my own programming language, I am asking how, using vb.net code, I can create a compiler for a language like vb.net itself. Essentially, the user inputs code, they get a .exe. By NO MEANS do I want to write my own language, as it seems other compiler related questions on here have asked. I also do not want to use the vb.net compiler itself, nor do I wish to duplicate the IDE. The exact purpose of what I wish to do is rather hard to explain, but all I need is a nudge in the right direction for writing a compiler (from scratch if possible) which can simply take input and create a .exe. I have opened .exe files as plain text before (my own programs) to see if I could derive some meaning from what I assumed would be human readable text, yet I was obviously sorely disappointed to see the random ascii, though it is understandable why this is all I found. I know that a .exe file is simply lines of code, being parsed by the computer it is on, but my question here really boils down to this: What code makes up a .exe? How could I go about making one in a plain text editor if I wanted to? (No, I do not want to do that, but if I understand the process my goals will be much easier to achieve.) What makes an executable file an executable file? Where does the logic of the code fit in? This is intended to be a programming question as opposed to a computer question, which is why I did not post it on SuperUser. I know plenty of information about the System.IO namespace, so I know how to create a file and write to it, I simply do not know what exactly I would be placing inside this file to get it to work as an executable file. I am sorry if this question is "confusing", "dumb", or "obvious", but I have not been able to find any information regarding the actual contents of an executable file anywhere. One of my google searches Something that looked promising EDIT: The second link here, while it looked good, was an utter fail. I am not going to waste hours of my time hitting keys and recording the results. "Use the "Alt" and the 3-digit combinations to create symbols that do not appear on the keyboard but that you need in the program." (step 4) How the heck do I know what symbols I need??? Thank you very much for your help, and my apologies if this question is a nooby or "bad" one. To sum this up simply: I want to create a program in vb.net that can compile code in a particular language to a single executable file. What methods exist that can allow me to do this, and if there are none, how can I go about writing my own from scratch?

    Read the article

  • How to use objects as modules/functors in Scala?

    - by Jeff
    Hi. I want to use object instances as modules/functors, more or less as shown below: abstract class Lattice[E] extends Set[E] { val minimum: E val maximum: E def meet(x: E, y: E): E def join(x: E, y: E): E def neg(x: E): E } class Calculus[E](val lat: Lattice[E]) { abstract class Expr case class Var(name: String) extends Expr {...} case class Val(value: E) extends Expr {...} case class Neg(e1: Expr) extends Expr {...} case class Cnj(e1: Expr, e2: Expr) extends Expr {...} case class Dsj(e1: Expr, e2: Expr) extends Expr {...} } So that I can create a different calculus instance for each lattice (the operations I will perform need the information of which are the maximum and minimum values of the lattice). I want to be able to mix expressions of the same calculus but not be allowed to mix expressions of different ones. So far, so good. I can create my calculus instances, but problem is that I can not write functions in other classes that manipulate them. For example, I am trying to create a parser to read expressions from a file and return them; I also was trying to write an random expression generator to use in my tests with ScalaCheck. Turns out that every time a function generates an Expr object I can't use it outside the function. Even if I create the Calculus instance and pass it as an argument to the function that will in turn generate the Expr objects, the return of the function is not recognized as being of the same type of the objects created outside the function. Maybe my english is not clear enough, let me try a toy example of what I would like to do (not the real ScalaCheck generator, but close enough). def genRndExpr[E](c: Calculus[E], level: Int): Calculus[E]#Expr = { if (level > MAX_LEVEL) { val select = util.Random.nextInt(2) select match { case 0 => genRndVar(c) case 1 => genRndVal(c) } } else { val select = util.Random.nextInt(3) select match { case 0 => new c.Neg(genRndExpr(c, level+1)) case 1 => new c.Dsj(genRndExpr(c, level+1), genRndExpr(c, level+1)) case 2 => new c.Cnj(genRndExpr(c, level+1), genRndExpr(c, level+1)) } } } Now, if I try to compile the above code I get lots of error: type mismatch; found : plg.mvfml.Calculus[E]#Expr required: c.Expr case 0 = new c.Neg(genRndExpr(c, level+1)) And the same happens if I try to do something like: val boolCalc = new Calculus(Bool) val e1: boolCalc.Expr = genRndExpr(boolCalc) Please note that the generator itself is not of concern, but I will need to do similar things (i.e. create and manipulate calculus instance expressions) a lot on the rest of the system. Am I doing something wrong? Is it possible to do what I want to do? Help on this matter is highly needed and appreciated. Thanks a lot in advance.

    Read the article

  • Using Effect For Fog of War

    - by Qua
    I'm trying to apply fog of war to areas on the screen not currently visible to the player. I do this by rendering the game content into one RenderTarget and the fog of war into another, and then I merge them with an effect file that takes the color from the game RenderTarget and the alpha from the fog of war render target. The FOW RenderTarget is black where the FOW appears, and white where it doesn't. This does work, but it colors the fog of war (the unrevealed locations) white instead of the intended color of black. Before applying the effect I clear the backbuffer of the device to white. When I try to clear it to black, none of the fog of war appears at all, which I assume is a product of alpha blending with black. It works for all other colors, however - giving the resulting screen a tint of that color. How do I achieve a black fog while still being able to do alpha blending between the two render targets? The rendering code for applying the FOW: private RenderTarget2D mainTarget; private RenderTarget2D lightTarget; private void CombineRenderTargetsAndDraw() { batch.GraphicsDevice.SetRenderTarget(null); batch.GraphicsDevice.Clear(Color.White); fogOfWar.Parameters["LightsTexture"].SetValue(lightTarget); batch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend); fogOfWar.CurrentTechnique.Passes[0].Apply(); batch.Draw( mainTarget, new Rectangle(0, 0, batch.GraphicsDevice.PresentationParameters.BackBufferWidth, batch.GraphicsDevice.PresentationParameters.BackBufferHeight), Color.White ); batch.End(); } The effect file I'm using to apply the FOW: texture LightsTexture; sampler ColorSampler : register(s0); sampler LightsSampler = sampler_state{ Texture = <LightsTexture>; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 TexCoord : TEXCOORD0; }; float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float2 tex = input.TexCoord; float4 color = tex2D(ColorSampler, tex); float4 alpha = tex2D(LightsSampler, tex); return float4(color.r, color.g, color.b, alpha.r); } technique Technique1 { pass Pass1 { PixelShader = compile ps_2_0 PixelShaderFunction(); } }

    Read the article

  • Can't build gem -- native extension build fails -- can you see why?

    - by marfarma
    I can't figure out what is going wrong here -- any ideas?? I'm running on a Ubuntu 8.04 LTS, and have installed libxml2 and libxslt from these instructions: http://www.techsww.com/tutorials/libraries/libxml/installation/installing_libxml_on_ubuntu_linux.php http://www.techsww.com/tutorials/libraries/libxslt/installation/installing_libxslt_on_ubuntu_linux.php However, I installed the latest versions: libxslt-1.1.24 libxml2-2.7.3 The install was uneventful -------------------- I set LD_LIBRARY_PATH ---------------------------------- echo $LD_LIBRARY_PATH /usr/local/libxslt/lib: ------------- seems like the function is present -- at least based on the output of strings ------------ /usr/local/libxslt/lib$ strings * | grep ParseStylesheetDoc xsltParseStylesheetDoc xsltParseStylesheetDoc xsltParseStylesheetDoc xsltParseStylesheetDoc xsltParseStylesheetDoc xsltParseStylesheetDoc xsltParseStylesheetDoc ----------------------- But the compile still fails ---------------------------------------- sudo gem install webrat Building native extensions. This could take a while... ERROR: Error installing webrat: ERROR: Failed to build gem native extension. /usr/local/bin/ruby extconf.rb install webrat checking for iconv.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2... yes checking for libxml/parser.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2... yes checking for libxslt/xslt.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2... yes checking for libexslt/exslt.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2... yes checking for xmlParseDoc() in -lxml2... yes checking for xsltParseStylesheetDoc() in -lxslt... no libxslt is missing. try 'port install libxslt' or 'yum install libxslt-devel' *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. 
--curdir --ruby=/usr/local/bin/ruby --with-iconv-dir --without-iconv-dir --with-iconv-include --without-iconv-include=${iconv-dir}/include --with-iconv-lib --without-iconv-lib=${iconv-dir}/lib --with-xml2-dir --without-xml2-dir --with-xml2-include --without-xml2-include=${xml2-dir}/include --with-xml2-lib --without-xml2-lib=${xml2-dir}/lib --with-xslt-dir --without-xslt-dir --with-xslt-include --without-xslt-include=${xslt-dir}/include --with-xslt-lib --without-xslt-lib=${xslt-dir}/lib --with-xml2lib --without-xml2lib --with-xsltlib --without-xsltlib Gem files will remain installed in /usr/local/lib/ruby/gems/1.8/gems/nokogiri-1.3.3 for inspection. Results logged to /usr/local/lib/ruby/gems/1.8/gems/nokogiri-1.3.3/ext/nokogiri/gem_make.out

    Read the article

  • HTML/CSS set div to height of sibling

    - by Paul
    I have 2 div's contained in a third. One of the contained div's is floated left, the other floated right. I would like the 2 sibling div's to always be at the same height, but am having a problem with this. So far I am only viewing the page in Firefox, and figured I'd worry about any cross-browser issues after I get it working in at least one browser. Here is the markup: <div id="main-container" class="border clearfix"> <div id="left-div" class="border"> ... </div> <div id="right-div" class="border"> ... </div> </div> Here is the CSS: #main-container { position: relative; min-height: 500px; } #left-div { position: relative; float: left; width: 700px; min-height: inherit; } #right-div { position: relative; float: right; width: 248px; min-height: inherit; height: inherit; } .clearfix:after { content: " "; display: block; height: 0; clear: both; visibility: hidden; } .clearfix { display: inline-block; _height: 1%; clear: both; } .clearfix { display: block; clear: both; } .border { border: solid 1px #000; } If the content in the #left-div is longer than 500px, the #right-div does not expand to match. In an example I tried, Firefox said the computed style height of the #main-container was 804px, the computed style height of the #left-div was 800px, and the computed style height of the #right-div was 586.2px, as it had expanded to fit it's own content. I understand I might be going about this the wrong way, and if this is a duplicate questions then I apologize, but I wasn't quite sure what to search under.

    Read the article

  • CSS Transform offset varies with text length

    - by Mr. Polywhirl
    I have set up a demo that has 5 floating <div>s with rotated text of varying length. I am wondering if there is a way to have a CSS class that can handle centering of all text regardless of length. At the moment I have to create a class for each length of characters in my stylesheet. This could get too messy. I have also noticed that the offsets get screwed up if I increase or decrease the size of the wrapping <div>. I will be adding these classes to divs through jQuery, but I still have to set up each class for cross-browser compatibility. .transform3 { -webkit-transform-origin: 65% 100%; -moz-transform-origin: 65% 100%; -ms-transform-origin: 65% 100%; -o-transform-origin: 65% 100%; transform-origin: 65% 100%; } .transform4 { -webkit-transform-origin: 70% 110%; -moz-transform-origin: 70% 110%; -ms-transform-origin: 70% 110%; -o-transform-origin: 70% 110%; transform-origin: 70% 110%; } .transform5 { -webkit-transform-origin: 80% 120%; -moz-transform-origin: 80% 120%; -ms-transform-origin: 80% 120%; -o-transform-origin: 80% 120%; transform-origin: 80% 120%; } .transform6 { -webkit-transform-origin: 85% 136%; -moz-transform-origin: 85% 136%; -ms-transform-origin: 85% 136%; -o-transform-origin: 85% 136%; transform-origin: 85% 136%; } .transform7 { -webkit-transform-origin: 90% 150%; -moz-transform-origin: 90% 150%; -ms-transform-origin: 90% 150%; -o-transform-origin: 90% 150%; transform-origin: 90% 150%; } Note: The offset values I set were eye-balled. Update: Although I would like this handled with a stylesheet, I believe that I will have to calculate the CSS transformations in JavaScript. I have created the following demo to demonstrate dynamic transformations. In this demo, the user can adjust the font-size of the .v_text class, and as long as the length of the text does not exceed the .v_text_wrapper height, the text should be vertically aligned in the center - but be aware that I have to adjust the magicOffset value. Well, I just noticed that this does not work in iOS...

    Read the article

  • stringstream problem - vector iterator not dereferencable

    - by andreas
    Hello I've got a problem with the following code snippet. It is related to the stringstream "stringstream css(cv.back())" bit. If it is commented out the program will run ok. It is really weird, as I keep getting it in some of my programs, but if I just create a console project the code will run fine. In some of my Win32 programs it will and in some it won't (then it will return "vector iterator not dereferencable" but it will compile just fine). Any ideas at all would be really appreciated. Thanks! vector<double> cRes(2); vector<double> pRes(2); int readTimeVects2(vector<double> &cRes, vector<double> &pRes){ string segments; vector<string> cv, pv, chv, phv; ifstream cin("cm.txt"); ifstream pin("pw.txt"); ifstream chin("hm.txt"); ifstream phin("hw.txt"); while (getline(cin,segments,'\t')) { cv.push_back(segments); } while (getline(pin,segments,'\t')) { pv.push_back(segments); } while (getline(chin,segments,'\t')) { chv.push_back(segments); } while (getline(phin,segments,'\t')) { phv.push_back(segments); } cin.close(); pin.close(); chin.close(); phin.close(); stringstream phss(phv.front()); phss >> pRes[0]; phss.clear(); stringstream chss(chv.front()); chss >> cRes[0]; chss.clear(); stringstream pss(pv.back()); pss >> pRes[1]; pss.clear(); stringstream css(cv.back()); css >> cRes[1]; css.clear(); return 0; }
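
    "vector iterator not dereferencable" is the MSVC debug assertion you get from calling front()/back() on an empty vector, and the front()/back() calls above are unguarded - so the likely difference between the projects is simply whether cm.txt and the other files are actually found (Win32 projects are often launched with a different working directory than console projects, so the ifstreams silently fail and the vectors stay empty). A small self-contained sketch of the guard, under that assumption:

        #include <fstream>
        #include <iostream>
        #include <sstream>
        #include <string>
        #include <vector>

        // Sketch, not the original program: if cm.txt is missing, cv stays empty
        // and cv.back() is exactly what fires the debug assertion
        // "vector iterator not dereferencable".
        int main() {
            std::vector<std::string> cv;
            std::ifstream in("cm.txt");
            std::string segment;
            while (std::getline(in, segment, '\t'))
                cv.push_back(segment);

            if (cv.empty()) {                  // guard before front()/back()
                std::cerr << "cm.txt missing or empty\n";
                return 1;
            }
            double value = 0.0;
            std::stringstream css(cv.back());
            css >> value;
            std::cout << value << '\n';
            return 0;
        }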

    Read the article

  • Ant + JUnit = ClassNotFoundExceptions when running tests?

    - by rfkrocktk
    I'm trying to run some tests in Ant presently using JUnit, and all of my tests are failing with the following stacktrace: java.lang.ClassNotFoundException: com.mypackage.MyTestCase It doesn't make too much sense to me. I'm first compiling my test cases using <javac>, then directly running the <junit> task to run the tests. My buildfile looks like this: <target name="compile.webapp.tests"> <javac srcdir="${test.java.src.dir}" destdir="${test.java.bin.dir}"> <classpath> <filelist> <file name="${red5.home}/red5.jar"/> <file name="${red5.home}/boot.jar"/> <file name="${bin.dir}/${ant.project.name}.jar"/> </filelist> <fileset dir="${red5.lib.dir}" includes="**/*"/> <fileset dir="${main.java.lib.dir}" includes="**/*"/> <fileset dir="${test.java.lib.dir}" includes="**/*"/> </classpath> </javac> </target> <target name="run.webapp.tests"> <junit printsummary="true"> <classpath> <filelist> <file name="${red5.home}/red5.jar"/> <file name="${red5.home}/boot.jar"/> <file name="${bin.dir}/${ant.project.name}.jar"/> </filelist> <fileset dir="${red5.lib.dir}" includes="**/*.jar"/> <fileset dir="${main.java.lib.dir}" includes="**/*.jar"/> <fileset dir="${test.java.lib.dir}" includes="**/*.jar"/> </classpath> <formatter type="xml"/> <batchtest todir="${test.java.output.dir}"> <fileset dir="${test.java.bin.dir}" includes="**/*TestCase*"/> </batchtest> </junit> </target> This is really weird, I can't seem to fix this. Is there something I'm doing wrong here?

    Read the article

  • Using array as map value: Can't see the error

    - by Tom
    Hi all, Im trying to create a map, where the key is an int, and the value is an array int red[3] = {1,0,0}; int green[3] = {0,1,0}; int blue[3] = {0,0,1}; std::map<int, int[3]> colours; colours.insert(std::pair<int,int[3]>(GLUT_LEFT_BUTTON,red)); //THIS IS LINE 24 ! colours.insert(std::pair<int,int[3]>(GLUT_MIDDLE_BUTTON,blue)); colours.insert(std::pair<int,int[3]>(GLUT_RIGHT_BUTTON,green)); However, when I try to compile this code, I get the following error. g++ (Ubuntu 4.4.1-4ubuntu8) 4.4.1 In file included from /usr/include/c++/4.4/bits/stl_algobase.h:66, from /usr/include/c++/4.4/bits/stl_tree.h:62, from /usr/include/c++/4.4/map:60, from ../src/utils.cpp:9: /usr/include/c++/4.4/bits/stl_pair.h: In constructor ‘std::pair<_T1, _T2>::pair(const _T1&, const _T2&) [with _T1 = int, _T2 = int [3]]’: ../src/utils.cpp:24: instantiated from here /usr/include/c++/4.4/bits/stl_pair.h:84: error: array used as initializer /usr/include/c++/4.4/bits/stl_pair.h: In constructor ‘std::pair<_T1, _T2>::pair(const std::pair<_U1, _U2>&) [with _U1 = int, _U2 = int [3], _T1 = const int, _T2 = int [3]]’: ../src/utils.cpp:24: instantiated from here /usr/include/c++/4.4/bits/stl_pair.h:101: error: array used as initializer In file included from /usr/include/c++/4.4/map:61, from ../src/utils.cpp:9: /usr/include/c++/4.4/bits/stl_map.h: In member function ‘_Tp& std::map<_Key, _Tp, _Compare, _Alloc>::operator[](const _Key&) [with _Key = int, _Tp = int [3], _Compare = std::less<int>, _Alloc = std::allocator<std::pair<const int, int [3]> >]’: ../src/utils.cpp:30: instantiated from here /usr/include/c++/4.4/bits/stl_map.h:450: error: conversion from ‘int’ to non-scalar type ‘int [3]’ requested make: *** [src/utils.o] Error 1 I really cant see where the error is. Or even if there's an error. Any help (please include an explanation to help me avoid this mistake) will be appreciated. Thanks in advance.
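
    The root cause is that a built-in array such as int[3] is neither copy-constructible nor assignable, so it cannot be the value type of a std::pair or std::map - which is what the "array used as initializer" errors inside stl_pair.h are saying. A sketch of one workaround (std::vector<int> here; boost::array<int,3>, std::array in C++0x, or a small struct would all do), with the GLUT constants replaced by stand-in literals to keep the sketch self-contained:

        #include <map>
        #include <vector>

        int main() {
            // Copyable value type instead of a raw int[3].
            std::map<int, std::vector<int> > colours;

            int red[]   = {1, 0, 0};
            int green[] = {0, 1, 0};
            int blue[]  = {0, 0, 1};

            colours[0] = std::vector<int>(red,   red + 3);   // stand-in for GLUT_LEFT_BUTTON
            colours[1] = std::vector<int>(green, green + 3); // stand-in for GLUT_MIDDLE_BUTTON
            colours[2] = std::vector<int>(blue,  blue + 3);  // stand-in for GLUT_RIGHT_BUTTON
            return 0;
        }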

    Read the article

  • passing multiple vectors to function by reference (using structure)

    - by madman
    Hi StackOverflow, Can someone tell me the correct way of passing multiple vectors to a function that can take only a single argument? (specifically for pthread_create (..) function) I tried the following but it does not seem to work :-( First, I created the following structure struct ip2 { void* obj; int dim; int n_s; vector<vector<vector<double> > > *wlist; vector<int> *nrsv; struct model *pModel; }; The threads that I have created actually needs all these parameters. Since im using pthreads_create I put all this in a structure and then passed the pointer to the structure as an argument to pthread_create (as shown). some_fxn() { //some code struct ip2 ip; ip.obj=(void*) this; ip.n_s=n_s; ip.wlist=&wlist; ip.nrsv=&nrsv; ip.pModel=pModel; ip.dim=dim; pthread_create(&callThd1[lcntr], &attr1, &Cls::Entry, (void*) &ip); } The Entry method looks something like this. void* Cls::Entry(void *ip) { struct ip2 *x; x = (struct ip2 *)ip; (reinterpret_cast<Cls1 *>(x->obj))->Run(x->dim,x->n_s, x->wlist, x->nrsv, x->pModel); } The Run method looks something like this. void Run(int dim, int n_c, vector<vector<vector<double> > > *wlist, vector<int> *nrsv, struct model* pModel ) { //some code for(int k = 0; k < n_c; ++k) { //some code end = index + nrsv[k]; //some code } I get the following error when I try to compile the program. error: no match for ‘operator+’ in ‘index + *(((std::vector<int, std::allocator<int> >*)(((unsigned int)k) * 12u)) + nrsv)’ Can someone tell me how to do it the right way. Madhavan
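
    The compile error comes from the last snippet rather than from the threading plumbing: inside Run, nrsv is a vector<int>*, so nrsv[k] is pointer arithmetic that yields the k-th vector<int> after the one pointed to (hence "no match for operator+" against a whole std::vector<int>), not the k-th element. A sketch of the fix under that reading - dereference before indexing, or take the parameters by reference instead of by pointer:

        #include <vector>

        struct model;   // opaque here, as in the question

        void Run(int dim, int n_c,
                 std::vector<std::vector<std::vector<double> > >* wlist,
                 std::vector<int>* nrsv,
                 model* pModel)
        {
            int index = 0;
            for (int k = 0; k < n_c; ++k) {
                int end = index + (*nrsv)[k];   // was: index + nrsv[k]
                index = end;
            }
            (void)dim; (void)wlist; (void)pModel;   // unused in this sketch
        }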

    Read the article

  • Understanding PTS and DTS in video frames

    - by theateist
    I had fps issues when transcoding from avi to mp4 (x264). Eventually the problem was in the PTS and DTS values, so lines 12-15 were added before the av_interleaved_write_frame call: 1. AVFormatContext* outContainer = NULL; 2. avformat_alloc_output_context2(&outContainer, NULL, "mp4", "c:\\test.mp4"); 3. AVCodec *encoder = avcodec_find_encoder(AV_CODEC_ID_H264); 4. AVStream *outStream = avformat_new_stream(outContainer, encoder); 5. // outStream->codec initiation 6. // ... 7. avformat_write_header(outContainer, NULL); 8. // reading and decoding packet 9. // ... 10. avcodec_encode_video2(outStream->codec, &encodedPacket, decodedFrame, &got_frame) 11. 12. if (encodedPacket.pts != AV_NOPTS_VALUE) 13. encodedPacket.pts = av_rescale_q(encodedPacket.pts, outStream->codec->time_base, outStream->time_base); 14. if (encodedPacket.dts != AV_NOPTS_VALUE) 15. encodedPacket.dts = av_rescale_q(encodedPacket.dts, outStream->codec->time_base, outStream->time_base); 16. 17. av_interleaved_write_frame(outContainer, &encodedPacket) After reading many posts I still do not understand: outStream->codec->time_base = 1/25 and outStream->time_base = 1/12800. The first one was set by me, but I cannot figure out who set 12800, or why. I noticed that before line (7) outStream->time_base = 1/90000, and right after it, it changes to 1/12800 - why? When I transcode from avi to avi, meaning changing line (2) to avformat_alloc_output_context2(&outContainer, NULL, "avi", "c:\\test.avi");, then both before and after line (7) outStream->time_base always remains 1/25, unlike in the mp4 case - why? What is the difference between the time_base of outStream->codec and that of outStream? To calculate the pts, av_rescale_q takes the two time_base values, cross-multiplies their fractions, and then computes the pts. Why does it do it this way? While debugging I saw that encodedPacket.pts increments by 1, so why change it if it already has a value? At the beginning the dts value is -2, and after each rescaling it is still negative, but despite this the video plays correctly! Shouldn't it be positive?
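
    On the time-base questions: outStream->codec->time_base is whatever the caller sets on the encoder (1/25 here), while outStream->time_base belongs to the muxer, and the muxer may overwrite it inside avformat_write_header - the mp4/mov muxer picks a finer tick (1/12800 is 512 x 25), whereas the avi muxer keeps 1/25, which would explain why nothing changes in the avi-to-avi case. av_rescale_q(a, bq, cq) simply converts a count of bq ticks into cq ticks (a*bq/cq), and the small negative dts values at the start are expected for x264 output with B-frame reordering. A tiny worked sketch of the rescale arithmetic, with the values assumed from the question:

        #define __STDC_CONSTANT_MACROS   // older libavutil headers want this in C++
        #include <iostream>

        extern "C" {
        #include <libavutil/mathematics.h>   // av_rescale_q
        #include <libavutil/rational.h>      // AVRational
        }

        int main() {
            AVRational codec_tb  = {1, 25};     // set by the caller on outStream->codec
            AVRational stream_tb = {1, 12800};  // picked by the muxer at write_header

            // Frame n stamped with pts == n in encoder ticks becomes
            // n * (1/25) / (1/12800) = n * 512 in muxer ticks.
            for (int64_t n = 0; n < 3; ++n)
                std::cout << n << " -> " << av_rescale_q(n, codec_tb, stream_tb) << '\n';
            // prints: 0 -> 0, 1 -> 512, 2 -> 1024
            return 0;
        }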

    Read the article

  • Saxon XSLT-Transformation: How to change serialization of an empty tag from <x/> to <x></x>?

    - by Ben
    Hello folks! I do some XSLT-Transformation using Saxon HE 9.2 with the output later being unmarshalled by Castor 1.3.1. The whole thing runs with Java at the JDK 6. My XSLT-Transformation looks like this: <xsl:transform version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ns="http://my/own/custom/namespace/for/the/target/document"> <xsl:output method="xml" encoding="UTF-8" indent="no" /> <xsl:template match="/"> <ns:item> <ns:property name="id"> <xsl:value-of select="/some/complicated/xpath" /> </ns:property> <!-- ... more ... --> </xsl:template> So the thing is: if the XPath-expression /some/complicated/xpath evaluates to an empty sequence, the Saxon serializer writes <ns:property/> instead of <ns:property></ns:property>. This, however, confuses the Castor unmarshaller, which is next in the pipeline and which unmarshals the output of the transformation to instances of XSD-generated Java-code. So my question is: How can I tell the Saxon-serializer to output empty tags not as standalone tags? Here is what I am proximately currently doing to execute the transformation: import net.sf.saxon.s9api.*; import javax.xml.transform.*; import javax.xml.transform.sax.SAXSource; // ... // read data XMLReader xmlReader = XMLReaderFactory.createXMLReader(); // ... there is some more setting up the xmlReader here ... InputStream xsltStream = new FileInputStream(xsltFile); InputStream inputStream = new FileInputStream(inputFile); Source xsltSource = new SAXSource(xmlReader, new InputSource(xsltStream)); Source inputSource = new SAXSource(xmlReader, new InputSource(inputStream)); XdmNode input = processor.newDocumentBuilder().build(inputSource); // initialize transformation configuration Processor processor = new Processor(false); XsltCompiler compiler = processor.newXsltCompiler(); compiler.setErrorListener(this); XsltExecutable executable = compiler.compile(xsltSource); Serializer serializer = new Serializer(); serializer.setOutputProperty(Serializer.Property.METHOD, "xml"); serializer.setOutputProperty(Serializer.Property.INDENT, "no"); serializer.setOutputStream(output); // execute transformation XsltTransformer transformer = executable.load(); transformer.setInitialContextNode(input); transformer.setErrorListener(this); transformer.setDestination(serializer); transformer.setSchemaValidationMode(ValidationMode.STRIP); transformer.transform(); I'd appreciate any hint pointing in the direction of a solution. :-) In case of any unclarity I'd be happy to give more details. Nightly greetings from Germany, Benjamin

    Read the article

  • a problem in socks.h

    - by janathan
    i use this (http://www.codeproject.com/KB/IP/Socks.aspx) lib in my socket programing in c++ and copy the socks.h in include folder and write this code: include include include include include include "socks.h" define PORT 1001 // the port client will be connecting to define MAXDATASIZE 100 static void ReadThread(void* lp); int socketId; int main(int argc, char* argv[]) { const char temp[]="GET / HTTP/1.0\r\n\r\n"; CSocks cs; cs.SetVersion(SOCKS_VER4); cs.SetSocksPort(1080); cs.SetDestinationPort(1001); cs.SetDestinationAddress("192.168.11.97"); cs.SetSocksAddress("192.168.11.97"); //cs.SetVersion(SOCKS_VER5); //cs.SetSocksAddress("128.0.21.200"); socketId = cs.Connect(); // if failed if (cs.m_IsError) { printf( "\n%s", cs.GetLastErrorMessage()); getch(); return 0; } // send packet for requesting to a server if(socketId > 0) { send(socketId, temp, strlen(temp), 0); HANDLE ReadThreadID; // handle for read thread id HANDLE handle; // handle for thread handle handle = CreateThread ((LPSECURITY_ATTRIBUTES)NULL, // No security attributes. (DWORD)0, // Use same stack size. (LPTHREAD_START_ROUTINE)ReadThread, // Thread procedure. (LPVOID)(void*)NULL, // Parameter to pass. (DWORD)0, // Run immediately. (LPDWORD)&ReadThreadID); WaitForSingleObject(handle, INFINITE); } else { printf("\nSocks Server / Destination Server not started.."); } closesocket(socketId); getch(); return 0; } // Thread Proc for reading from server socket. static void ReadThread(void* lp) { int numbytes; char buf[MAXDATASIZE]; while(1) { if ((numbytes=recv(socketId, buf, MAXDATASIZE-1, 0)) == -1) { printf("\nServer / Socks Server has been closed Receive thread Closed\0"); break; } if (numbytes == 0) break; buf[numbytes] = '\0'; printf("Received: %s\r\n",buf); send(socketId,buf,strlen(buf),0); } } but when compile this i get an error . pls help me thanks

    Read the article

  • May volatile be in user defined types to help writing thread-safe code

    - by David Rodríguez - dribeas
    I know, it has been made quite clear in a couple of questions/answers before, that volatile is related to the visible state of the c++ memory model and not to multithreading. On the other hand, this article by Alexandrescu uses the volatile keyword not as a runtime feature but rather as a compile time check to force the compiler into failing to accept code that could be not thread safe. In the article the keyword is used more like a required_thread_safety tag than the actual intended use of volatile. Is this (ab)use of volatile appropriate? What possible gotchas may be hidden in the approach? The first thing that comes to mind is added confusion: volatile is not related to thread safety, but by lack of a better tool I could accept it. Basic simplification of the article: If you declare a variable volatile, only volatile member methods can be called on it, so the compiler will block calling code to other methods. Declaring an std::vector instance as volatile will block all uses of the class. Adding a wrapper in the shape of a locking pointer that performs a const_cast to release the volatile requirement, any access through the locking pointer will be allowed. Stealing from the article: template <typename T> class LockingPtr { public: // Constructors/destructors LockingPtr(volatile T& obj, Mutex& mtx) : pObj_(const_cast<T*>(&obj)), pMtx_(&mtx) { mtx.Lock(); } ~LockingPtr() { pMtx_->Unlock(); } // Pointer behavior T& operator*() { return *pObj_; } T* operator->() { return pObj_; } private: T* pObj_; Mutex* pMtx_; LockingPtr(const LockingPtr&); LockingPtr& operator=(const LockingPtr&); }; class SyncBuf { public: void Thread1() { LockingPtr<BufT> lpBuf(buffer_, mtx_); BufT::iterator i = lpBuf->begin(); for (; i != lpBuf->end(); ++i) { // ... use *i ... } } void Thread2(); private: typedef vector<char> BufT; volatile BufT buffer_; Mutex mtx_; // controls access to buffer_ };
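
    For what it's worth, the pattern only pays off at the call sites: because buffer_ is declared volatile and std::vector has no volatile-qualified members, any direct access fails to compile, and the sole sanctioned path is a LockingPtr, which takes the mutex and const_casts the volatile away. A sketch of how Thread2 would look under that convention (the body is invented; only the shape matters):

        // Sketch only: writing "buffer_.push_back('x');" directly here would not
        // compile, which is exactly the compile-time check the article is after.
        void SyncBuf::Thread2() {
            LockingPtr<BufT> lpBuf(buffer_, mtx_);   // locks mtx_, strips volatile
            lpBuf->push_back('x');                   // plain std::vector<char> access
        }                                            // mutex released with lpBuf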

    Read the article

  • Problem with std::map and std::pair

    - by Tom
    Hi everyone. I have a small program I want to execute to test something #include <map> #include <iostream> using namespace std; struct _pos{ float xi; float xf; bool operator<(_pos& other){ return this->xi < other.xi; } }; struct _val{ float f; }; int main() { map<_pos,_val> m; struct _pos k1 = {0,10}; struct _pos k2 = {10,15}; struct _val v1 = {5.5}; struct _val v2 = {12.3}; m.insert(std::pair<_pos,_val>(k1,v1)); m.insert(std::pair<_pos,_val>(k2,v2)); return 0; } The problem is that when I try to compile it, I get the following error $ g++ m2.cpp -o mtest In file included from /usr/include/c++/4.4/bits/stl_tree.h:64, from /usr/include/c++/4.4/map:60, from m2.cpp:1: /usr/include/c++/4.4/bits/stl_function.h: In member function ‘bool std::less<_Tp>::operator()(const _Tp&, const _Tp&) const [with _Tp = _pos]’: /usr/include/c++/4.4/bits/stl_tree.h:1170: instantiated from ‘std::pair<typename std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::iterator, bool> std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::_M_insert_unique(const _Val&) [with _Key = _pos, _Val = std::pair<const _pos, _val>, _KeyOfValue = std::_Select1st<std::pair<const _pos, _val> >, _Compare = std::less<_pos>, _Alloc = std::allocator<std::pair<const _pos, _val> >]’ /usr/include/c++/4.4/bits/stl_map.h:500: instantiated from ‘std::pair<typename std::_Rb_tree<_Key, std::pair<const _Key, _Tp>, std::_Select1st<std::pair<const _Key, _Tp> >, _Compare, typename _Alloc::rebind<std::pair<const _Key, _Tp> >::other>::iterator, bool> std::map<_Key, _Tp, _Compare, _Alloc>::insert(const std::pair<const _Key, _Tp>&) [with _Key = _pos, _Tp = _val, _Compare = std::less<_pos>, _Alloc = std::allocator<std::pair<const _pos, _val> >]’ m2.cpp:30: instantiated from here /usr/include/c++/4.4/bits/stl_function.h:230: error: no match for ‘operator<’ in ‘__x < __y’ m2.cpp:9: note: candidates are: bool _pos::operator<(_pos&) $ I thought that declaring the operator< on the key would solve the problem, but its still there. What could be wrong? Thanks in advance.
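
    The last line of the error is the clue: std::less<_pos> tries __x < __y on two const _pos objects, and the only candidate is bool _pos::operator<(_pos&) - a non-const member taking a non-const reference, so it is not viable. Declaring the comparison as a const member taking a const reference (sketched below) should be enough; the rest of the program can stay as it is:

        #include <map>
        #include <utility>

        struct _pos {
            float xi;
            float xf;
            // was: bool operator<(_pos& other)
            bool operator<(const _pos& other) const { return xi < other.xi; }
        };

        struct _val { float f; };

        int main() {
            std::map<_pos, _val> m;
            _pos k1 = {0, 10};
            _val v1 = {5.5f};
            m.insert(std::make_pair(k1, v1));   // now finds a viable operator<
            return 0;
        }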

    Read the article

  • Strange error inserting new relationship data in core-data

    - by michael
    My app will allow users to create a personalised list of events from a large list of events. I have a table view which simply displays these events, tapping on one of them takes the user to the event details view, which has a button "add to my events". In this detailed view I own the original event object, retrieved via an NSFetchedResultsController and passed to the detailed view (via a table cell, the same as the core data recipes sample). I have no trouble retrieving/displaying information from this "event". I am then trying to add it to the list of MyEvents represented by a one to many (inverse) relationship: This code: NSManagedObjectContext *context = [event managedObjectContext]; MyEvents *myEvents = (MyEvents *)[NSEntityDescription insertNewObjectForEntityForName:@"MyEvents" inManagedObjectContext:context]; [myEvents addEventObject:event];//ERROR And this code (suggested below): //would this add to or overwrite the "list" i am attempting to maintain NSManagedObjectContext *context = [event managedObjectContext]; MyEvents *myEvents = (MyEvents *)[NSEntityDescription insertNewObjectForEntityForName:@"MyEvents" inManagedObjectContext:context]; NSMutableSet *myEvent = [myEvents mutableSetValueForKey:@"event"]; [myEvent addObject:event]; //ERROR Bot produce (at the line indicated by //ERROR): *** -[NSComparisonPredicate evaluateWithObject:]: message sent to deallocated instance Seems I may have missed something fundamental. I cant glean any more information through the use of debugging tools, with my knowledge of them. 1) Is this a valid way to compile and store an editable list like this? 2) Is there a better way? 3) What could possibly be the deallocated instance in error? -- I have now modified the Event entity to have a to-many relationship called "myEvents" which referrers to itself. I can add Events to this fine, and logging the object shows the correct memory addresses appearing for the relationship after a [event addMyEventObject:event];. The same failure happens right after this however. I am still at a loss to understand what is going wrong. This is the backtrace #0 0x01f753a7 in ___forwarding___ () #1 0x01f516c2 in __forwarding_prep_0___ () #2 0x01c5aa8f in -[NSFetchedResultsController(PrivateMethods) _preprocessUpdatedObjects:insertsInfo:deletesInfo:updatesInfo:sectionsWithDeletes:newSectionNames:treatAsRefreshes:] () #3 0x01c5d63b in -[NSFetchedResultsController(PrivateMethods) _managedObjectContextDidChange:] () #4 0x0002e63a in _nsnote_callback () #5 0x01f40005 in _CFXNotificationPostNotification () #6 0x0002bef0 in -[NSNotificationCenter postNotificationName:object:userInfo:] () #7 0x01bbe17d in -[NSManagedObjectContext(_NSInternalNotificationHandling) _postObjectsDidChangeNotificationWithUserInfo:] () #8 0x01c1d763 in -[NSManagedObjectContext(_NSInternalChangeProcessing) _createAndPostChangeNotification:withDeletions:withUpdates:withRefreshes:] () #9 0x01ba25ea in -[NSManagedObjectContext(_NSInternalChangeProcessing) _processRecentChanges:] () #10 0x01bdfb3a in -[NSManagedObjectContext processPendingChanges] () #11 0x01bd0957 in _performRunLoopAction () #12 0x01f4d252 in __CFRunLoopDoObservers () #13 0x01f4c65f in CFRunLoopRunSpecific () #14 0x01f4bc48 in CFRunLoopRunInMode () #15 0x0273878d in GSEventRunModal () #16 0x02738852 in GSEventRun () #17 0x002ba003 in UIApplicationMain () Cheers.

    Read the article

  • More GCC link time issues: undefined reference to main

    - by vikramtheone
    Hi Guys, I'm writing software for a Cortex-A8 processor and I have to write some ARM assembly code to access specific registers. I'm making use of the gnu compilers and related tool chains, these tools are installed on the processor board(Freescale i.MX515) with Ubuntu. I make a connection to it from my host PC(Windows) using WinSCP and the PuTTY terminal. As usual I started with a simple C project having main.c and functions.s. I compile the main.c using GCC, assemble the functions.s using as and link the generated object files using once again GCC, but I get strange errors during this process. An important finding - Meanwhile, I found out that my assembly code may have some issues because when I individually assemble it using the command as -o functions.o functions.s and try running the generated functions.o using ./functions.o command, the bash shell is failing to recognize this file as an executable(on pressing tab functions.o is not getting selected/PuTTY is not highlighting the file). Can anyone suggest whats happening here? Are there any specific options I have to send, to GCC during the linking process? The errors I see are strange and beyond my understanding, I don't understand to what the GCC is referring. I'm pasting here the contents of main.c, functions.s, the Makefile and the list of errors. Help, please!!! Vikram main.c #include <stdio.h> #include <stdlib.h> int main(void) { puts("!!!Hello World!!!"); /* prints !!!Hello World!!! */ return EXIT_SUCCESS; } functions.s * Main program */ .equ STACK_TOP, 0x20000800 .text .global _start .syntax unified _start: .word STACK_TOP, start .type start, function start: movs r0, #10 movs r1, #0 .end Makefile all: hello hello: main.o functions.o gcc -o main.o functions.o main.o: main.c gcc -c -mcpu=cortex-a8 main.c functions.o: functions.s as -mcpu=cortex-a8 -o functions.o functions.s Errors ubuntu@ubuntu-desktop:~/Documents/Project/Others/helloworld$ make gcc -c -mcpu=cortex-a8 main.c as -mcpu=cortex-a8 -o functions.o functions.s gcc -o main.o functions.o functions.o: In function `_start': (.text+0x0): multiple definition of `_start' /usr/lib/gcc/arm-linux-gnueabi/4.3.3/../../../crt1.o:init.c:(.text+0x0): first defined here /usr/lib/gcc/arm-linux-gnueabi/4.3.3/../../../crt1.o: In function `_start': init.c:(.text+0x30): undefined reference to `main' collect2: ld returned 1 exit status make: *** [hello] Error 1

    Read the article

  • Lazy loading the addthis script? (or lazy loading external js content dependent on already fired events)

    - by Keith Bentrup
    I want to have the addthis widget available for my users, but I want to lazy load it so that my page loads as quickly as possible. However, after trying it via a script tag and then via my lazy loading method, it appears to only work via the script tag. In the obfuscated code, I see something that looks like it's dependent on the DOMContentLoaded event (at least for firefox). Since the DOMContentLoaded event has already fired, the widget doesn't render properly. What to do? I could just use a script tag (slower)... or could I fire (in a cross browser way) the DOMContentLoaded (or equivalent) event? I have a feeling this may not be possible b/c I believe that (like jQuery) there are multiple tests of the content ready event, and so multiple simulated events would have to occur. Nonetheless, this is an interesting problem b/c I have seen a couple widgets now assume that you are including their stuff via static script tags. It would be nice if they wrote code that was more useful to developers concerned about speed, but until then, is there a work around?? And/or are any of my assumptions wrong? Edit: Because the 1st answer to the question seemed to miss the point of my problem, I wanted to clarify the situation. This is about a specific problem. I'm not looking for yet another lazy load script or check if some dependencies are loaded script. Specifically this problem deals with external widgets that you do not have control over and may or may not be obfuscated delaying the load of the external widgets until they are needed or at least, til substantially after everything else has been loaded including other deferred elements b/c of the how the widget was written, precludes existing, typical lazy loading paradigms While it's esoteric, I have seen it happen with a couple widgets - where the widget developers assume that you're just willing to throw in another script tag at the bottom of the page. I'm looking to save those 500-1000 ms** though as numerous studies by yahoo, google, and amazon show it to be important to your user's experience. **My testing with hammerhead and personal experience indicates that this will be my savings in this case.

    Read the article

  • NHibernate exception on query

    - by Yoav
    I'm getting a mapping exception doing the most basic query. This is my domain class: public class Project { public virtual string PK { get; set; } public virtual string Id { get; set; } public virtual string Name { get; set; } public virtual string Description { get; set; } } And the mapping class: public class ProjectMap :ClassMap<Project> { public ProjectMap() { Table("PROJECTS"); Id(x => x.PK, "PK"); Map(x => x.Id, "ID"); Map(x => x.Name, "NAME"); Map(x => x.Description, "DESCRIPTION"); } } Configuration: public ISessionFactory SessionFactory { return Fluently.Configure() .Database(MsSqlCeConfiguration.Standard.ShowSql().ConnectionString(c => c.Is("data source=" + path))) .Mappings(m => m.FluentMappings.AddFromAssemblyOf<Project>()) .BuildSessionFactory(); } And query: IList project; using (ISession session = SessionFactory.OpenSession()) { IQuery query = session.CreateQuery("from Project"); project = query.List<Project>(); } I'm getting the exception on the query line: NHibernate.Hql.Ast.ANTLR.QuerySyntaxException: Project is not mapped [from Project] at NHibernate.Hql.Ast.ANTLR.SessionFactoryHelperExtensions.RequireClassPersister(String name) at NHibernate.Hql.Ast.ANTLR.Tree.FromElementFactory.AddFromElement() at NHibernate.Hql.Ast.ANTLR.Tree.FromClause.AddFromElement(String path, IASTNode alias) at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.fromElement() at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.fromElementList() at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.fromClause() at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.unionedQuery() at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.query() at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.selectStatement() at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.statement() at NHibernate.Hql.Ast.ANTLR.HqlSqlTranslator.Translate() at NHibernate.Hql.Ast.ANTLR.QueryTranslatorImpl.Analyze(HqlParseEngine parser, String collectionRole) at NHibernate.Hql.Ast.ANTLR.QueryTranslatorImpl.DoCompile(IDictionary`2 replacements, Boolean shallow, String collectionRole) at NHibernate.Hql.Ast.ANTLR.QueryTranslatorImpl.Compile(IDictionary`2 replacements, Boolean shallow) at NHibernate.Engine.Query.HQLQueryPlan..ctor(String hql, String collectionRole, Boolean shallow, IDictionary`2 enabledFilters, ISessionFactoryImplementor factory) at NHibernate.Engine.Query.QueryPlanCache.GetHQLQueryPlan(String queryString, Boolean shallow, IDictionary`2 enabledFilters) at NHibernate.Impl.AbstractSessionImpl.GetHQLQueryPlan(String query, Boolean shallow) at NHibernate.Impl.AbstractSessionImpl.CreateQuery(String queryString) I assume something is wrong with my query.

    Read the article

  • Unique element ID, even if element doesn't have one

    - by Robert J. Walker
    I'm writing a GreaseMonkey script where I'm iterating through a bunch of elements. For each element, I need a string ID that I can use to reference that element later. The element itself doesn't have an id attribute, and I can't modify the original document to give it one (although I can make DOM changes in my script). I can't store the references in my script because when I need them, the GreaseMonkey script itself will have gone out of scope. Is there some way to get at an "internal" ID that the browser uses, for example? A Firefox-only solution is fine; a cross-browser solution that could be applied in other scenarios would be awesome. Edit: If the GreaseMonkey script is out of scope, how are you referencing the elements later? The GreaseMonkey script is adding events to DOM objects. I can't store the references in an array or some other similar mechanism because when the event fires, the array will be gone because the GreaseMonkey script will have gone out of scope. So the event needs some way to know about the element reference that the script had when the event was attached. And the element in question is not the one to which it is attached. Can't you just use a custom property on the element? Yes, but the problem is on the lookup. I'd have to resort to iterating through all the elements looking for the one that has that custom property set to the desired id. That would work, sure, but in large documents it could be very time consuming. I'm looking for something where the browser can do the lookup grunt work. Wait, can you or can you not modify the document? I can't modify the source document, but I can make DOM changes in the script. I'll clarify in the question. Can you not use closures? Closures did turn out to work, although I initially thought they wouldn't. See my later post. It sounds like the answer to the question: "Is there some internal browser ID I could use?" is "No."

    Read the article
