Search Results

Search found 5665 results on 227 pages for 'runtime workbench'.

  • Boost link error when using "--layout=system" on VS2005

    - by Kevin
    I'm new to Boost, and thought I'd try it out with some realistic deployment scenarios for the .dlls, so I used the following command to compile/install the libraries:

        .\bjam install --layout=system variant=debug runtime-link=shared link=shared --with-date_time --with-thread --with-regex --with-filesystem --includedir=<my include directory> --libdir=<my bin directory> > installlog.txt

    That seemed to work, but my simple program (taken right from the "Getting Started" page) fails:

        #include <boost/regex.hpp>
        #include <iostream>
        #include <string>

        // Place your functions after this line
        int main()
        {
            std::string line;
            boost::regex pat( "^Subject: (Re: |Aw: )*(.*)" );
            while (std::cin)
            {
                std::getline(std::cin, line);
                boost::smatch matches;
                if (boost::regex_match(line, matches, pat))
                    std::cout << matches[2] << std::endl;
            }
        }

    It fails with the following linker error:

        fatal error LNK1104: cannot open file 'libboost_regex-vc80-mt-1_42.lib'

    I'm sure that both the .lib and the .dll files are in that directory and named how I want them to be (i.e. boost_regex.lib etc., all unversioned, as --layout=system specifies). So why is the linker looking for the versioned name, and how do I get it to look for the unversioned library? I've tried this with more "normal" options, such as:

        .\bjam stage --build-type=complete --with-date_time --with-thread --with-filesystem --with-regex > mybuildlog.txt

    and that works fine. I made sure my compiler saw the stage\lib directory, and everything compiled and ran fine with nothing beyond pointing the environment at the right lib directory. But when I took those "testing" directories away and wanted to use the unversioned ones, it failed. I'm under VS2005 here on XP. Any ideas?
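
    A note on where the versioned name comes from (my diagnosis, not part of the question): on MSVC, Boost headers auto-link by emitting #pragma comment(lib, ...) directives with the fully decorated library name, regardless of the --layout used for the build on disk. A minimal sketch of one way to opt out and name the system-layout library yourself:

        // BOOST_ALL_NO_LIB must be defined before any Boost header (or via
        // /D on the command line); it disables the auto-linking pragmas so
        // the linker stops asking for libboost_regex-vc80-mt-1_42.lib.
        #define BOOST_ALL_NO_LIB
        #include <boost/regex.hpp>

        // Then link the unversioned, --layout=system name explicitly:
        #pragma comment(lib, "boost_regex.lib")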

  • How to determine the one-to-many column name from an entity type

    - by snicker
    I need a way to determine, at runtime, the name of the column used by NHibernate to join one-to-many collections, given the collected entity's type. Here is an example. I have some entities:

        namespace Entities
        {
            public class Stable
            {
                public virtual int Id { get; set; }
                public virtual string StableName { get; set; }
                public virtual IList<Pony> Ponies { get; set; }
            }

            public class Dude
            {
                public virtual int Id { get; set; }
                public virtual string DudesName { get; set; }
                public virtual IList<Pony> PoniesThatBelongToDude { get; set; }
            }

            public class Pony
            {
                public virtual int Id { get; set; }
                public virtual string Name { get; set; }
                public virtual string Color { get; set; }
            }
        }

    I am using NHibernate to generate the database schema, which comes out looking like this:

        create table "Stable" (Id integer, StableName TEXT, primary key (Id))
        create table "Dude" (Id integer, DudesName TEXT, primary key (Id))
        create table "Pony" (Id integer, Name TEXT, Color TEXT, Stable_id INTEGER, Dude_id INTEGER, primary key (Id))

    Given that I have a Pony entity in my code, I need to be able to find out:

    A. Does Pony even belong to a collection in the mapping?
    B. If it does, what are the column names in the database table that pertain to those collections?

    In the above instance, I would like to see that Pony has two collection columns, Stable_id and Dude_id.

  • Deleting dynamic pageviews using Javascript

    - by user203127
    Hi, I created a dynamic tab application using the Telerik RadTabStrip. I can delete tabs that were created dynamically using the close ("crossbar") option, but deleting a tab removes only the tab itself, not the pageview corresponding to it. When I try to delete the pageviews as well, I get an error like "Microsoft JScript runtime error: Object doesn't support this property or method", pointing at the multiPage.get_pageViews().remove(pageView); line. Can anyone help me delete the pageviews? Thanks in advance. JavaScript code:

        <script type="text/javascript">
        /* <![CDATA[ */
        function deleteTab(tabText) {
            var tabStrip = $find("<%= RadTabStrip1.ClientID %>");
            var multiPage = $find("<%= RadMultiPage1.ClientID %>");

            var tab = tabStrip.findTabByText(tabText);
            var pageView = tab.get_pageView();
            var tabToSelect = tab.get_nextTab();
            if (!tabToSelect)
                tabToSelect = tab.get_previousTab();

            tabStrip.get_tabs().remove(tab);
            multiPage.get_pageView().remove(pageView);
            multiPage.get_pageViews().remove(pageView);

            if (tabToSelect)
                tabToSelect.set_selected(true);
        }
        /* ]]> */
        </script>

  • How to deserialize implementation classes in OSGi

    - by Daniel Schneller
    In an eRCP OSGi-based application, the user can push a button and go to a lock screen similar to that of Windows or Mac OS X. When this happens, the current state of the application is serialized to a file and control is handed over to the lock screen. Memory is very tight in this mobile application, so we need to get rid of the original view/controller when the lock screen comes up. This works fine, and we end up with a binary serialized file. Once the user logs back in, the file is read in again and the original state of the application is restored.

    This works fine as well, except when the serialized controller contained a reference to an object that comes from a different bundle. In my concrete case, the original controller (from bundle A) can call a web service and get a result back. Nothing fancy, just some Strings and Numbers in a simple value-holder class. However, the controller only sees this as a Result interface; the actual runtime object (ResultImpl) is defined and created in a different bundle (bundle B, the web service client implementation) and returned via a service call. When deserialization now tries to thaw the controller from the file, it throws a ClassNotFoundException, complaining about not being able to deserialize the result object, because deserialization is called from bundle A, which cannot see the ResultImpl class from bundle B.

    Any ideas on how to work around that? The only thing I could come up with is to clone all the individual values into another object defined in the controller's bundle, but this seems like quite a hassle.

  • boost::python string-convertible properties

    - by Checkers
    I have a C++ class with the following methods:

        class Bar
        {
            ...
            const Foo& getFoo() const;
            void setFoo(const Foo&);
        };

    where class Foo is convertible to std::string (it has an implicit constructor from std::string and an std::string cast operator). I define a Boost.Python wrapper class which, among other things, defines a property based on the previous two functions:

        class_<Bar>("Bar")
            ...
            .add_property(
                "foo",
                make_function(
                    &Bar::getFoo,
                    return_value_policy<return_by_value>()),
                &Bar::setFoo)
            ...

    I also mark the class as convertible to/from std::string:

        implicitly_convertible<std::string, Foo>();
        implicitly_convertible<Foo, std::string>();

    But at runtime I still get a conversion error when trying to access this property:

        TypeError: No to_python (by-value) converter found for C++ type: Foo

    How do I achieve the conversion without too much wrapper-function boilerplate? (I already have all the conversion functions in class Foo, so duplication is undesirable.)
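
    For completeness, a sketch of the converter-registration route (my own code, not from the question; implicitly_convertible only registers from-python conversions, which is consistent with the error complaining about the to-python direction):

        #include <boost/python.hpp>
        #include <string>

        // Convert Foo to a Python str by going through Foo's implicit
        // std::string conversion operator.
        struct foo_to_python
        {
            static PyObject* convert(const Foo& f)
            {
                return boost::python::incref(
                    boost::python::object(static_cast<std::string>(f)).ptr());
            }
        };

        BOOST_PYTHON_MODULE(example)
        {
            boost::python::to_python_converter<Foo, foo_to_python>();
            // ... class_<Bar>("Bar") registration as above ...
        }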

  • How to print a page when a JButton is pressed in Java Swing using PrinterJob?

    - by Prayag Upd
    I tried the following AWT code, but at runtime it shows multiple print dialogs repeatedly:

        package printerjob;

        import java.awt.BasicStroke;
        import java.awt.Frame;
        import java.awt.Graphics;
        import java.awt.Graphics2D;
        import java.awt.event.WindowAdapter;
        import java.awt.print.PageFormat;
        import java.awt.print.Printable;
        import java.awt.print.PrinterException;
        import java.awt.print.PrinterJob;

        /**
         * @author pragX
         */
        public class FramePrinterJob extends Frame implements Printable {

            public void start() {
                add(button);
            }

            @Override
            public void paint(Graphics graphics) {
                PrinterJob printerJob = PrinterJob.getPrinterJob();
                printerJob.setPrintable(this);
                if (printerJob.printDialog()) {
                    try {
                        printerJob.print();
                    } catch (PrinterException printerException) {
                        //printerException.printStackTrace();
                        System.out.println("Error Printing." + printerException);
                    }
                }
            }

            public int print(Graphics graphics, PageFormat pageFormat, int pageIndex)
                    throws PrinterException {
                //throw new UnsupportedOperationException("Not supported yet.");
                if (pageIndex >= 1) {
                    return Printable.NO_SUCH_PAGE;
                }
                graphics.translate((int) pageFormat.getImageableX(),
                                   (int) pageFormat.getImageableY());
                Graphics2D graphics2D = (Graphics2D) graphics;
                graphics2D.setStroke(new BasicStroke(4f));
                graphics2D.drawLine(20, 20, 20, 120);
                graphics2D.drawLine(40, 20, 40, 120);
                graphics2D.drawLine(20, 70, 40, 70);
                graphics2D.drawLine(60, 70, 60, 120);
                graphics2D.drawLine(60, 40, 60, 45);
                return Printable.PAGE_EXISTS;
            }

            public static void main(String args[]) {
                Frame frame = new FramePrinterJob();
                frame.addWindowListener(new WindowAdapter() {
                    public void windowClosing() { System.exit(0); }
                });
                frame.setSize(300, 400);
                frame.setVisible(true);
            }
        }

  • Why is TRest in Tuple<T1... TRest> not constrained?

    - by Anthony Pegram
    In a Tuple, if you have more than 7 items, you can provide an 8th item that is another tuple, define up to 7 more items in it, then another tuple as its 8th, and so on down the line. However, there is no constraint on the 8th item at compile time. For example, this is legal code as far as the compiler is concerned:

        var tuple = new Tuple<int, int, int, int, int, int, int, double>
            (1, 1, 1, 1, 1, 1, 1, 1d);

    Even though the IntelliSense documentation says that TRest must be a Tuple, you do not get any error when writing or building the code; the problem does not manifest until runtime, in the form of an ArgumentException. You can roughly implement a Tuple in a few minutes, complete with a Tuple-constrained 8th item, so I just wonder why the constraint was left off the current implementation. Is it possibly a forward-compatibility issue, where they could add more elements with a hypothetical C# 5?

    Short version of a rough implementation:

        interface IMyTuple { }

        class MyTuple<T1> : IMyTuple
        {
            public T1 Item1 { get; private set; }
            public MyTuple(T1 item1) { Item1 = item1; }
        }

        class MyTuple<T1, T2> : MyTuple<T1>
        {
            public T2 Item2 { get; private set; }
            public MyTuple(T1 item1, T2 item2) : base(item1) { Item2 = item2; }
        }

        class MyTuple<T1, T2, TRest> : MyTuple<T1, T2> where TRest : IMyTuple
        {
            public TRest Rest { get; private set; }
            public MyTuple(T1 item1, T2 item2, TRest rest) : base(item1, item2) { Rest = rest; }
        }

        ...

        var mytuple = new MyTuple<int, int, MyTuple<int>>(1, 1, new MyTuple<int>(1)); // legal
        var mytuple2 = new MyTuple<int, int, int>(1, 2, 3); // illegal

  • How do I remove implementing types from GWT’s Serialization Policy?

    - by Bluu
    The opposite of this question: http://stackoverflow.com/questions/138099/how-do-i-add-a-type-to-gwts-serialization-policy-whitelist

    GWT is adding undesired types to the serialization policy and bloating my JS. How do I trim my GWT whitelist by hand? Or should I at all?

    For example, if I put the interface List on a GWT RPC service class, GWT has to generate Javascript that handles ArrayList, LinkedList, Stack, Vector, ... even though my team knows we're only ever going to return an ArrayList. I could just make the method's return type ArrayList, but I like relying on an interface rather than a specific implementation. After all, maybe one day we will switch it up and return e.g. a LinkedList. In that case, I'd like to force the GWT serialization policy to compile for only ArrayList and LinkedList, with no Stacks or Vectors.

    These implicit restrictions have one huge downside I can think of: a new member of the team starts returning Vectors, which will be a runtime error. So besides the question in the title, what is your experience designing around this?

  • Created function not found

    - by skerit
    I'm trying to create a simple function, but at runtime Firebug says the function does not exist. Here's the function code:

        <script type="text/javascript">
        function load_qtip(apply_qtip_to) {
            $(apply_qtip_to).each(function(){
                $(this).qtip( {
                    content: {
                        // Set the text to an image HTML string with the correct src URL
                        // to the loading image you want to use
                        text: '<img class="throbber" src="/projects/qtip/images/throbber.gif" alt="Loading..." />',
                        url: $(this).attr('rel'), // Use the rel attribute of each element for the url to load
                        title: {
                            text: 'Nieuwsbladshop.be - ' + $(this).attr('tooltip'), // Give the tooltip a title using each elements text
                            //button: 'Sluiten' // Show a close link in the title
                        }
                    },
                    position: {
                        corner: {
                            target: 'bottomMiddle', // Position the tooltip above the link
                            tooltip: 'topMiddle'
                        },
                        adjust: {
                            screen: true // Keep the tooltip on-screen at all times
                        }
                    },
                    show: {
                        when: 'mouseover',
                        solo: true // Only show one tooltip at a time
                    },
                    hide: 'mouseout',
                    style: {
                        tip: true, // Apply a speech bubble tip to the tooltip at the designated tooltip corner
                        border: {
                            width: 0,
                            radius: 4
                        },
                        name: 'light', // Use the default light style
                        width: 250 // Set the tooltip width
                    }
                })
            }
        }
        </script>

    And I'm trying to call it here:

        <script type="text/javascript">
        // Create the tooltips only on document load
        $(document).ready(function() {
            load_qtip('#shopcarousel a[rel]'); // Use the each() method to gain access to each elements attributes
        });
        </script>

    What am I doing wrong?

  • Setting up relations/mappings for a SQLAlchemy many-to-many database

    - by Brent Ramerth
    I'm new to SQLAlchemy and relational databases, and I'm trying to set up a model for an annotated lexicon. I want to support an arbitrary number of key-value annotations for the words, which can be added or removed at runtime. Since there will be a lot of repetition in the names of the keys, I don't want to use this solution directly, although the code is similar.

    My design has word objects and property objects. The words and properties are stored in separate tables, with a property_values table that links the two. Here's the code:

        from sqlalchemy import Column, Integer, String, Table, create_engine
        from sqlalchemy import MetaData, ForeignKey
        from sqlalchemy.orm import relation, mapper, sessionmaker
        from sqlalchemy.ext.declarative import declarative_base

        engine = create_engine('sqlite:///test.db', echo=True)
        meta = MetaData(bind=engine)

        property_values = Table('property_values', meta,
            Column('word_id', Integer, ForeignKey('words.id')),
            Column('property_id', Integer, ForeignKey('properties.id')),
            Column('value', String(20))
        )

        words = Table('words', meta,
            Column('id', Integer, primary_key=True),
            Column('name', String(20)),
            Column('freq', Integer)
        )

        properties = Table('properties', meta,
            Column('id', Integer, primary_key=True),
            Column('name', String(20), nullable=False, unique=True)
        )

        meta.create_all()

        class Word(object):
            def __init__(self, name, freq=1):
                self.name = name
                self.freq = freq

        class Property(object):
            def __init__(self, name):
                self.name = name

        mapper(Property, properties)

    Now I'd like to be able to do the following:

        Session = sessionmaker(bind=engine)
        s = Session()
        word = Word('foo', 42)
        word['bar'] = 'yes'  # or word.bar = 'yes' ?
        s.add(word)
        s.commit()

    Ideally this should add 1|foo|42 to the words table, add 1|bar to the properties table, and add 1|1|yes to the property_values table. However, I don't have the right mappings and relations in place to make this happen. I get the sense from reading the documentation at http://www.sqlalchemy.org/docs/05/mappers.html#association-pattern that I want to use an association proxy or something of that sort here, but the syntax is unclear to me. I experimented with this:

        mapper(Word, words, properties={
            'properties': relation(Property, secondary=property_values)
        })

    but this mapper only fills in the foreign key values, and I need to fill in the other value as well. Any assistance would be greatly appreciated.

  • Enumerating computers in NT4 domain using WNetEnumResourceW (C++) or DirectoryEntry (C#)

    - by Kevin Davis
    I'm trying to enumerate computers in NT4 domains (not Active Directory) with support for Unicode NetBIOS names. According to MSDN, WNetEnumResourceW is the Unicode counterpart of WNetEnumResource, which to me implies that using it should do the trick. However, I have not been able to get Unicode NetBIOS names properly using WNetEnumResourceW. I've also tried the rough C# equivalent, DirectoryEntry with the WinNT: provider, with no luck on Unicode names either. If I use DirectoryEntry on Active Directory (using the LDAP: provider), I do get Unicode names back.

    While debugging my code that uses DirectoryEntry and the WinNT: provider, I noticed that the exceptions I saw were of type System.Runtime.InteropServices.COMException, which tends to make me believe this is just calling WNetEnumResourceW via COM. This web page implies that for some Net APIs the MS documentation is incomplete and possibly inaccurate, which further confuses things. Additionally, I've found that the C# method, which certainly results in cleaner, more understandable code, also yields incomplete results when enumerating computers in domains/workgroups.

    Does anyone have any insight on this? Is it possible that the computer acting as the WINS server is mangling the name? How would I determine this? Thanks.
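
    For comparison, this is roughly the shape the native Unicode enumeration usually takes (a sketch of my own, not the poster's code; error handling is trimmed, and the container would come from a prior WNetOpenEnumW pass over the network root). If the names are already wrong at this level, that would point at the name service rather than the managed wrapper:

        #include <cstdio>
        #include <cstdlib>
        #include <windows.h>
        #include <winnetwk.h>
        #pragma comment(lib, "mpr.lib")

        // Enumerate the children of one container (e.g. a domain) and print
        // their remote names using the wide-character APIs throughout.
        void enumContainer(NETRESOURCEW* container)
        {
            HANDLE hEnum;
            if (WNetOpenEnumW(RESOURCE_GLOBALNET, RESOURCETYPE_ANY, 0,
                              container, &hEnum) != NO_ERROR)
                return;

            const DWORD size = 16384;
            NETRESOURCEW* buf = (NETRESOURCEW*)malloc(size);
            for (;;)
            {
                DWORD count = (DWORD)-1;   // request as many entries as fit
                DWORD bytes = size;
                DWORD rc = WNetEnumResourceW(hEnum, &count, buf, &bytes);
                if (rc != NO_ERROR)        // ERROR_NO_MORE_ITEMS ends the loop
                    break;
                for (DWORD i = 0; i < count; ++i)
                    wprintf(L"%ls\n", buf[i].lpRemoteName ? buf[i].lpRemoteName : L"");
            }
            free(buf);
            WNetCloseEnum(hEnum);
        }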

  • C++: is it safe to work with std::vectors as if they were arrays?

    - by peoro
    I need to have a fixed-size array of elements and to call functions on them that require knowing how the elements are placed in memory, in particular:

    - functions like glVertexPointer, which needs to know where the vertices are, how distant they are from one another, and so on. In my case the vertices would be members of the elements to store.
    - to get the index of an element within this array, I'd prefer to avoid having an index field within my elements, and would rather play with pointer arithmetic (i.e. the index of Element *x would be x - &array[0]). By the way, this sounds dirty to me: is it good practice, or should I do something else?

    Is it safe to use std::vector for this? Something makes me think that an std::array would be more appropriate, but:

    - Constructor and destructor for my structure will rarely be called: I don't mind such overhead.
    - I'm going to set the std::vector capacity to the size I need (the size I would use for an std::array), so it won't take any overhead due to sporadic reallocation.
    - I don't mind a little space overhead for std::vector's internal structure.
    - I could use the ability to resize the vector (or better: to have a size chosen during setup), and I think there's no way to do this with std::array, since its size is a template parameter. That's too bad: I could do that even with an old C-like array, just by dynamically allocating it on the heap.

    If std::vector is fine for my purpose, I'd like to know in detail whether it has any runtime overhead with respect to std::array (or a plain C array). I know it'll call the default constructor for each element once I increase its size (but I guess this won't cost anything if my data has an empty default constructor?), and the same for the destructor. Anything else?
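
    For what it's worth, a short sketch (my own; the GL call is left commented out) of the two mechanics above. Vector storage is contiguous, a guarantee the standard spells out as of C++03, so the buffer can be handed to an array-expecting API, and the element index falls out of pointer arithmetic:

        #include <cstddef>
        #include <vector>

        struct Vertex  { float x, y, z; };
        struct Element { Vertex pos; /* ...other members... */ };

        int main()
        {
            // Size chosen once at setup; elements are default-constructed.
            std::vector<Element> elems(1024);

            // Contiguity: &elems[0] points at an array of 1024 Elements,
            // with sizeof(Element) as the stride between vertices.
            Element* base = &elems[0];
            // glVertexPointer(3, GL_FLOAT, sizeof(Element), &base->pos);

            // Index via pointer arithmetic, no index field needed.
            Element* x = &elems[42];
            std::ptrdiff_t index = x - base;   // == 42
            return static_cast<int>(index);
        }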

  • Java/Groovy File IO Replacing an Image File with its own Contents - Why Does This Work?

    - by jboyd
    I have some JPG files that need to be replaced at runtime with a JFIF-standardized version of themselves (we are using a vendor that gives us JPGs that do not have proper headers, so they don't work in certain applications). I am able to create a new file from the existing image, then get a buffered image from that file and write the contents right back into the file, without having to delete it, and it works:

        imageSrcFolder.eachFileMatch ( ~/.*\.jpg/, {
            BufferedImage bi = ImageIO.read( it )
            ImageIO.write( bi, "jpg", it )
        });

    The question I have is: why? Why doesn't the file end up doubled in size? Why don't I have to delete it first? Why am I able to take a File object to an existing file and then treat it as if it were a brand new one? It seems that what I consider to be a "file" is not what the File object in Java actually is, or else this wouldn't work at all. My code does exactly what I want it to do, but I'm not convinced it always will... it just seems way too easy.

  • Trying to calculate large numbers in Python with gmpy. Python keeps crashing?

    - by Ryan Peschel
    I was recommended to use gmpy to assist with calculating large numbers efficiently. Before, I was just using plain Python, and my script ran for a day or two and then ran out of memory (not sure how that happened, because my program's memory usage should basically be constant throughout... maybe a memory leak?).

    Anyway, I keep getting this weird error just seconds into running my program:

        mp_allocate< 545275904->545275904 >
        Fatal Python error: mp_allocate failure

        This application has requested the Runtime to terminate it in an unusual way.
        Please contact the application's support team for more information.

    Also, Python crashes and Windows 7 gives me the generic "python.exe has stopped working" dialog. This wasn't happening with standard Python integers; now that I've switched to gmpy, I'm getting this error just seconds in. I thought gmpy was specialized for large-number arithmetic? For reference, here is a sample program that produces the error:

        import gmpy2

        p = gmpy2.xmpz(3000000000)
        s = gmpy2.xmpz(2)
        M = s**p

        for x in range(p):
            s = (s * s) % M

    I have 10 gigs of RAM, and without gmpy this script ran for days without running out of memory (still not sure how that happened, considering s never really gets larger). Anyone have any ideas?

    EDIT: Forgot to mention I am using Python 3.2.

  • Getting error when compiling debug mode: C++/CLI - error LNK2022

    - by Yochai Timmer
    I've got CLI code wrapping a C++ DLL. When I try to compile it in debug mode, I get the following error:

        Error 22 error LNK2022: metadata operation failed (8013118D) :
        Inconsistent layout information in duplicated types .... MSVCMRTD.lib (locale0_implib.obj)

    The weird thing is that in Release mode it compiles OK and works OK. The only difference I can see that causes the problem is when I change:

        Configuration Properties -> C/C++ -> Code Generation -> Runtime Library

    When it's set to Multi-threaded Debug DLL (/MDd), it throws the error. When it's set to Multi-threaded DLL (/MD), it compiles fine. The same settings work for all the other DLLs in the project (CLI and C++), and they inherit the same properties. I'm using VS2010.

    So, how can I solve this? And can I get some explanation as to WHY this is happening?

    Update: I've basically tried changing every option in the project's properties with no luck. I've read somewhere that this might be caused by duplicate declarations of a type with the same name. But in the CLI file I'm calling std::string etc. explicitly from std. Any other ideas?

  • create board for game with events support in WPF

    - by netmajor
    How can I create a board for a simple game, one that looks like a chess board, but where the user can dynamically change the number of columns and rows? In the cells I could insert a symbol for a pawn, like a small image, or just an ellipse or rectangle with a fill. The board should support adding and removing pawns from cells and moving a pawn from one cell to another.

    My first idea was a Grid. I build it in code-behind, but it hurts to implement events or to create the whole board at runtime this way:

        int size = 12;
        Grid board = new Grid();
        board.ShowGridLines = true;

        for (int i = 0; i < size; i++)
        {
            board.ColumnDefinitions.Add(new ColumnDefinition());
            board.RowDefinitions.Add(new RowDefinition());
        }

        // komputer (the computer player)
        Rectangle ai = new Rectangle();
        ai.Height = 20;
        ai.Width = 20;
        ai.AllowDrop = true;
        ai.Fill = Brushes.Orange;
        Grid.SetRow(ai, 0);
        Grid.SetColumn(ai, 0);

        // czlowiek (the human player)
        Rectangle hum = new Rectangle();
        hum.Height = 20;
        hum.Width = 20;
        hum.AllowDrop = true;
        hum.Fill = Brushes.Green;
        Grid.SetRow(hum, size);
        Grid.SetColumn(hum, size);

        board.Children.Add(ai);
        board.Children.Add(hum);
        this.Content = board;

    Is there a way to do this dynamic column and row change in XAML? Is there a better way to create the board and to implement events for moving a pawn from one cell to another?

  • xcode project-/target-settings-syntax for linker flag force_load on iPhone

    - by Kaiserludi
    Hi all. I am confronted with a double bind: on the one hand, for one of the 3rd-party static libraries my iPhone application uses, the linker flag -all_load has to be set in the application's project or target settings, otherwise the app crashes at runtime, not finding some symbols called internally from the lib; on the other hand, for another 3rd-party static lib, -all_load must not be set at the application level, or the app won't build, thanks to a "duplicate symbols" linker error.

    To solve this issue I now want to use -force_load instead of -all_load, since according to the documentation it does the same as -all_load, but only for the passed path or lib file instead of all libs. The problem with -force_load is that I don't have a clue how to pass a path or file as a parameter when setting it via the Xcode project or target settings. Every syntax possibility that comes to my mind either leads to Xcode thinking it's another linker flag instead of a parameter to the previous one, or the linker throws syntax-related errors, or the flag simply does nothing at all compared to not being set.

    I also opened the .pbxproj file in a text editor to edit it into the correct command-line syntax manually, but when reloading the project, Xcode auto-changes the syntax so that the parameter to -force_load is interpreted as a separate flag.

    Anyone having an idea on this issue? Thx, Kaiserludi.

  • Cannot call DLL import entry in C# from C++ project. EntryPointNotFoundException

    - by kriau
    I'm trying to call, from C#, a function in a custom DLL written in C++. However, I'm getting the following warning during code analysis and an error at runtime:

        Warning: CA1400 : Microsoft.Interoperability : Correct the declaration of
        'SafeNativeMethods.SetHook()' so that it correctly points to an existing entry
        point in 'wi.dll'. The unmanaged entry point name currently linked to is SetHook.

        Error: System.EntryPointNotFoundException was unhandled.
        Unable to find an entry point named 'SetHook' in DLL 'wi.dll'.

    Both projects, wi.dll and the C# exe, compile into the same DEBUG folder, and both files reside there. There is only one file named wi.dll in the whole file system. The C++ function definition looks like:

        #define WI_API __declspec(dllexport)

        bool WI_API SetHook();

    I can see the exported function using Dependency Walker:

        as decorated:   bool SetHook(void)
        as undecorated: ?SetHook@@YA_NXZ

    The C# DLL import looks like this (I defined these lines using CLRInsideOut from MSDN magazine):

        [DllImport("wi.dll", EntryPoint = "SetHook", CallingConvention = CallingConvention.Cdecl)]
        [return: MarshalAsAttribute(UnmanagedType.I1)]
        internal static extern bool SetHook();

    I've tried without the EntryPoint and CallingConvention definitions as well. Both projects are 32-bit; I'm using W7 64-bit and VS 2010 RC. I believe I've simply overlooked something... Thanks in advance.
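
    One observation (my reading, not part of the question): Dependency Walker showing ?SetHook@@YA_NXZ means the export table contains the C++-mangled name, so there is no symbol literally named SetHook for DllImport to resolve. A minimal sketch of the usual C-linkage export:

        // extern "C" suppresses C++ name mangling, so the DLL exports a
        // plain "SetHook" that [DllImport("wi.dll")] can find by name.
        // (__cdecl is the default here and matches CallingConvention.Cdecl.)
        extern "C" __declspec(dllexport) bool SetHook();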

  • ASP.NET MVC3 - Bug using Javascript

    - by ebb
    Hey there, I'm trying to use Ajax.BeginForm() to POST a JSON result from my controller (I'm using MVC3). When the JSON result is called, it should be passed to a JavaScript function, which extracts the object using var myObject = content.get_response().get_object();. However, it just throws "Microsoft JScript runtime error: Object doesn't support this property or method" when the Ajax POST is invoked. My code:

    Controller:

        [HttpPost]
        public ActionResult Index(string message)
        {
            return Json(new { Success = true, Message = message });
        }

    View:

        <!DOCTYPE html>
        <html>
        <head>
            <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script>
            <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.js")" type="text/javascript"></script>
            <script src="@Url.Content("~/Scripts/MicrosoftAjax.js")" type="text/javascript"></script>
            <script src="@Url.Content("~/Scripts/MicrosoftMvcAjax.js")" type="text/javascript"></script>
            <script type="text/javascript">
                function JsonAdd_OnComplete(mycontext) {
                    var myObject = mycontext.get_response().get_object();
                    alert(mycontext.Message);
                }
            </script>
        </head>
        <body>
            <div>
                @using (Ajax.BeginForm("Index", "Home", new AjaxOptions()
                {
                    HttpMethod = "POST",
                    OnComplete = "JsonAdd_OnComplete"
                }))
                {
                    @Html.TextBox("message")
                    <input type="submit" value="SUBMIT" />
                }
            </div>
        </body>
        </html>

    The strange thing is that exactly the same code works in MVC2. Is this a bug, or have I forgotten something? Thanks in advance.

  • How to add deploy.jar to classpath?

    - by dma_k
    I am facing a problem: I need to add the ${java.home}/lib/deploy.jar JAR file to the classpath at runtime (dynamically, from Java).

    - The solution with Thread#setContextClassLoader(ClassLoader) (mentioned here) does not work because of this bug (if somebody can explain what the problem really is, you are welcome).
    - The solution with -Xbootclasspath/a:"%JAVA_HOME%/jre/lib/deploy.jar" does not work well for me, because I want a "pure executable jar" as a deliverable: no wrapping scripts, please (moreover, %JAVA_HOME% may not be defined in the user's environment on Windows, for example, plus I would need to write a script per platform).
    - The solution of merging the deploy.jar file into my deliverable works only if the build is produced on Windows. Unfortunately, when the deliverable is produced on a build server running Linux, I get a Linux-dependent JAR which does not execute on Windows; it fails with the trace below.

    I have read the "How the Java Launcher Finds Classes" and "Java programming dynamics: Java classes and class loading" articles, but I've got no extra ideas on how to correctly handle this situation. Any advice or solutions are very welcome.

    Trace:

        java.lang.NoClassDefFoundError: Could not initialize class com.sun.deploy.config.Config
            at com.sun.deploy.net.proxy.UserDefinedProxyConfig.getBrowserProxyInfo(UserDefinedProxyConfig.java:43)
            at com.sun.deploy.net.proxy.DynamicProxyManager.reset(DynamicProxyManager.java:235)
            at com.sun.deploy.net.proxy.DeployProxySelector.reset(DeployProxySelector.java:59)
        ...
        java.lang.NullPointerException
            at com.sun.deploy.net.proxy.DynamicProxyManager.getProxyList(DynamicProxyManager.java:63)
            at com.sun.deploy.net.proxy.DeployProxySelector.select(DeployProxySelector.java:166)

  • Where is the taglib definition of PrimeFaces 4?

    - by Michael Wölm
    I am looking around for how to define custom components in JSF. According to the Java EE tutorial, any custom component needs to be described in a taglib. But when I take a look into the PrimeFaces source, I cannot find any taglib file or any hint as to where the namespace is bound and the available components are defined.

    I add the PrimeFaces jar to my dependencies, add xmlns:p="http://primefaces.org/ui" to the XML namespaces, define some PrimeFaces components on my page, and it works... Ok, but neither can I find the related taglib in the source or binary package, nor is my IDE (IntelliJ) able to find what xmlns:p="http://primefaces.org/ui" is pointing to. Therefore, code completion is also not possible (all the other Mojarra taglibs are found).

    Is it possible that PrimeFaces is defining the taglib via annotations directly in Java classes, or is it generating it at runtime? I can easily find the UIComponents that PrimeFaces defines in its source, but the configuration of the taglib seems to be missing. I am sure I just don't know how PrimeFaces is doing it, but the Java EE tutorial does not describe any mechanism other than defining a ...-taglib.xml.

  • std::conditional compile-time branch evaluation

    - by cmannett85
    Compiling this:

        template < class T, class Y, class ...Args >
        struct isSame
        {
            static constexpr bool value =
                std::conditional<
                    sizeof...( Args ),
                    typename std::conditional<
                        std::is_same< T, Y >::value,
                        isSame< Y, Args... >, // Error!
                        std::false_type
                    >::type,
                    std::is_same< T, Y >
                >::type::value;
        };

        int main()
        {
            qDebug() << isSame< double, int >::value;
            return EXIT_SUCCESS;
        }

    gives me this compiler error:

        error: wrong number of template arguments (1, should be 2 or more)

    The issue is that isSame< double, int > has an empty Args parameter pack, so isSame< Y, Args... > effectively becomes isSame< Y >, which does not match the signature. But my question is: why is that branch being evaluated at all? sizeof...( Args ) is false, so the inner std::conditional should not be evaluated. This isn't a runtime piece of code; the compiler knows that sizeof...( Args ) will never be true with the given template types. If you're curious, it's supposed to be a variadic version of std::is_same. Not that it works...
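
    A sketch of the usual workaround (my own code, hypothetical names): std::conditional is not lazy, and here the failure happens even earlier, because the template-id isSame< Y, Args... > with an empty pack is ill-formed for a primary template that requires at least two parameters. Moving the pack into partial specializations avoids ever forming the bad template-id, and only the selected base class is completed:

        #include <type_traits>

        template <class... Ts>
        struct allSame;

        // Base case: exactly two types.
        template <class T, class Y>
        struct allSame<T, Y> : std::is_same<T, Y> {};

        // Recursive case: naming allSame<Y, Args...> as a template argument
        // does not instantiate it; only the branch selected as the base
        // class is ever completed.
        template <class T, class Y, class... Args>
        struct allSame<T, Y, Args...>
            : std::conditional<std::is_same<T, Y>::value,
                               allSame<Y, Args...>,
                               std::false_type>::type {};

        static_assert(!allSame<double, int>::value, "differ");
        static_assert( allSame<int, int, int>::value, "all the same");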

  • Is there a fundamental difference between malloc and HeapAlloc (aside from the portability)?

    - by Lambert
    Hi, I have code that, for various reasons, I'm trying to port from the C runtime to one that uses the Windows Heap API. I've encountered a problem: if I redirect the malloc/calloc/realloc/free calls to HeapAlloc/HeapReAlloc/HeapFree (with GetProcessHeap for the handle), the memory seems to be allocated correctly (no bad pointer returned, and no exceptions thrown), but the library I'm porting says "failed to allocate memory" for some reason.

    I've tried this both with the Microsoft CRT (which uses the Heap API underneath) and with another company's runtime library (which uses the Global Memory API underneath); malloc from both of those works well with the library, but for some reason using the Heap API directly doesn't. I've checked that the allocations aren't too big (>= 0x7FFF8 bytes), and they're not.

    The only problem I can think of is memory alignment; is that the case? Or other than that, is there a fundamental difference between the Heap API and the CRT memory API that I'm not aware of? If so, what is it? And if not, then why does the static Microsoft CRT (included with Visual Studio) take some extra steps in malloc/calloc before calling HeapAlloc? I'm suspecting there's a difference, but I can't think of what it might be. Thank you!
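
    To make the setup concrete, here is a minimal sketch of the redirection described (my own approximation, not the actual port). As far as I know, HeapAlloc on the process heap returns memory aligned for any fundamental type, just as malloc does, so alignment is rarely the culprit; one real behavioral difference is that realloc(NULL, n) acts like malloc(n), while HeapReAlloc is not documented to accept a null pointer:

        #include <windows.h>

        // Sketch of redirecting the CRT allocation calls to the Heap API.
        void* my_malloc(size_t size)
        {
            return HeapAlloc(GetProcessHeap(), 0, size);
        }

        void* my_calloc(size_t count, size_t size)
        {
            // HEAP_ZERO_MEMORY reproduces calloc's zero-initialization
            // (overflow checking of count * size omitted for brevity).
            return HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY, count * size);
        }

        void* my_realloc(void* p, size_t size)
        {
            // Handle the null-pointer case that realloc supports but
            // HeapReAlloc does not.
            if (p == NULL)
                return my_malloc(size);
            return HeapReAlloc(GetProcessHeap(), 0, p, size);
        }

        void my_free(void* p)
        {
            if (p != NULL)
                HeapFree(GetProcessHeap(), 0, p);
        }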

  • AnyCPU/x86/x64 for C# application and it's C++/CLI dependency

    - by Soonts
    I'm a Windows developer using Microsoft Visual Studio 2008 SP1 on a 64-bit machine. The software I'm currently working on is a managed .exe written in C#. Unfortunately, I was unable to solve the whole problem solely in C#, which is why I also developed a small managed DLL in C++/CLI. Both projects are in the same solution.

    My C# .exe build target is "Any CPU". When my C++ DLL build target is "x86", the DLL is not loaded. As far as I understood when I googled, the reason is that C++/CLI, unlike other .NET languages, compiles to native code, not managed code. I switched the C++ DLL build target to x64, and everything works now. However, AFAIK everything will stop working as soon as my client installs my product on a 32-bit OS. I have to support Windows Vista and 7, both the 32- and 64-bit versions of each.

    I don't want to fall back to 32 bits. The 250 lines of C++ code in my DLL are only 2% of my codebase, and that DLL is only used in several places, so in the typical usage scenario it isn't even loaded. My DLL implements two COM objects with ATL, so I can't use the "/clr:safe" project setting.

    Is there a way to configure the solution and the projects so that the C# project builds an "Any CPU" version, the C++ project builds both 32-bit and 64-bit versions, and then at runtime, when the managed .EXE is starting up, it uses either the 32-bit DLL or the 64-bit DLL depending on the OS? Or maybe there's some better solution I'm not aware of? Thanks in advance!

  • Implementing list position locator in C++?

    - by jfrazier
    I am writing a basic Graph API in C++ (I know libraries already exist, but I am doing it for the practice/experience). The structure is basically that of an adjacency-list representation, so there are Vertex objects and Edge objects, and the Graph class contains:

        list<Vertex *> vertexList;
        list<Edge *> edgeList;

    Each Edge object has two Vertex* members representing its endpoints, and each Vertex object has a list of Edge* members representing the edges incident to the Vertex. All this is quite standard, but here is my problem. I want to be able to implement deletion of Edges and Vertices in constant time, so, for example, each Vertex object should have a Locator member that points to the position of its Vertex* in the vertexList. The way I first implemented this was by saving a list::iterator, as follows:

        vertexList.push_back(v);
        v->locator = --vertexList.end();

    Then, if I need to delete this vertex later, rather than searching the whole vertexList for its pointer, I can call:

        vertexList.erase(v->locator);

    This works fine at first, but it seems that if enough changes (deletions) are made to the list, the iterators become out of date and I get all sorts of iterator errors at runtime. This seems strange for a linked list, because it doesn't seem like you should ever need to re-allocate the remaining members of the list after deletions, but maybe the STL does this to optimize by keeping memory somewhat contiguous? In any case, I would appreciate it if anyone has any insight as to why this happens. Is there a standard way in C++ to implement a locator that will keep track of an element's position in a list without becoming obsolete?

    Much thanks, Jeff
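
    For reference, a minimal sketch (mine, not the poster's code) of the guarantee the locator pattern relies on: std::list::erase invalidates only iterators to the erased element, so unrelated deletions must leave a stored locator intact. If locators still go stale, the usual suspects are erasing the same element twice or holding a locator to an element that was itself already removed:

        #include <cassert>
        #include <list>

        int main()
        {
            std::list<int> l;
            l.push_back(1);
            l.push_back(2);
            l.push_back(3);

            std::list<int>::iterator loc1 = --l.end();   // locator for 3
            std::list<int>::iterator loc2 = l.begin();   // locator for 1

            l.erase(++l.begin());   // erase 2: only iterators to 2 are invalidated

            assert(*loc1 == 3);     // both locators remain valid
            assert(*loc2 == 1);
            return 0;
        }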
