Search Results

Search found 15099 results on 604 pages for 'runtime environment'.


  • Dynamic Auto updating (to UI, Grid) binding list in C# Winform?

    - by Dhana
    I'm not even sure if I'm doing this correctly, but basically I have a list of objects built from a class/interface, and I am binding that list to a DataGridView on a Windows Form (C#). The list is a synchronized list which automatically updates the UI, in this case the DataGridView. Everything works fine so far, but now I would like the list to hold a dynamic object: by default the object will have two static properties (ID, Name), and at run time the user will select the remaining properties. These should be bound to the data grid, and any update to the list should be reflected in the grid automatically. I am aware that dynamic objects can be used, but I would like to know how to approach the solution.

        datagridview.DataSource = myData; // myData is AutoUpdateList<IPersonInfo>

    IPersonInfo is the type of object that needs dynamic properties added at runtime.

        public class AutoUpdateList<T> : System.ComponentModel.BindingList<T> {
            private System.ComponentModel.ISynchronizeInvoke _SyncObject;
            private System.Action<System.ComponentModel.ListChangedEventArgs> _FireEventAction;
            public AutoUpdateList() : this(null) { }
            public AutoUpdateList(System.ComponentModel.ISynchronizeInvoke syncObject) {
                _SyncObject = syncObject;
                _FireEventAction = FireEvent;
            }
            protected override void OnListChanged(System.ComponentModel.ListChangedEventArgs args) {
                try {
                    if (_SyncObject == null) { FireEvent(args); }
                    else { _SyncObject.Invoke(_FireEventAction, new object[] { args }); }
                } catch (Exception) {
                    // TODO: Log Here
                }
            }
            private void FireEvent(System.ComponentModel.ListChangedEventArgs args) {
                base.OnListChanged(args);
            }
        }

    Could you help out on this?
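
    One possible direction (not from the original post): when the extra columns are only known at run time, a common workaround is to bind the grid to a DataTable and add columns dynamically, rather than adding properties to IPersonInfo itself. A minimal sketch, where the userSelectedProperties list is an assumption for illustration:

        var table = new System.Data.DataTable();
        table.Columns.Add("ID", typeof(int));       // the two fixed properties
        table.Columns.Add("Name", typeof(string));
        foreach (string prop in userSelectedProperties)   // hypothetical list chosen by the user
            table.Columns.Add(prop, typeof(string));
        var row = table.NewRow();
        row["ID"] = 1;
        row["Name"] = "Example";
        table.Rows.Add(row);
        datagridview.DataSource = table;   // later changes to the table rows show up in the grid

    Whether this fits depends on how much of the existing IPersonInfo-based code needs to stay in place.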

    Read the article

  • Linux C debugging library to detect memory corruptions

    - by calandoa
    Some time ago, when working on an embedded system with a simple MMU, I used to program the MMU dynamically to detect memory corruption. For instance, at some moment at runtime the foo variable was overwritten with unexpected data (probably through a dangling pointer or similar), so I added some additional debugging code: at init, the memory used by foo was marked as a forbidden region to the MMU; each time foo was accessed on purpose, access to the region was allowed just before and forbidden again just after; an MMU IRQ handler was added to dump the bus master and the address responsible for the violation. This was essentially a kind of watchpoint, but handled directly by the code itself. Now I would like to reuse the same trick on an x86 platform. The problem is that I am very far from understanding how the MMU works on this platform and how it is used by Linux, but I wonder whether any library/tool/system call already exists to deal with this problem. Note that I am aware that various tools such as Valgrind or GDB exist to track down memory problems, but as far as I know none of these tools can be dynamically reconfigured by the debugged code itself. I am mainly interested in user space under Linux, but any information about kernel mode or Windows is also welcome!

    Read the article

  • Java downcasting and is-A has-A relationship

    - by msharma
    Hi, I have a downcasting question; I am a bit rusty in this area. I have two classes like this:

        class A { int i; String j; /* getters and setters */ }
        class B extends A { String k; /* getter and setter */ }

    I have a method like this in a utility helper class:

        public static A converts(C c) { }

    where C objects are retrieved from the database and then converted. The problem is that I want to call the above method by passing in a C and getting back a B. So I tried this:

        B bClasss = (B) Utility.converts(c);

    Even though the above method returns A, I tried to downcast it to B, but I get a runtime ClassCastException. Is there really no way around this? Do I have to write a separate converts() method which returns a B? If I declare my class B like this (so instead of extending A it has-a A, with getters and setters as well):

        class B { String k; A a; }

    then I can call my existing method like this:

        b.setA(Utility.converts(c));

    This way I can reuse the existing method, even though the extends relationship makes more sense. What should I do? Any help much appreciated. Thanks.

    Read the article

  • C# custom control to get internal text as string

    - by Ed Woodcock
    OK, I'm working on a custom control that can contain some JavaScript and reads it out of the page into a string field. This is a workaround for dynamic JavaScript inside an UpdatePanel. At the moment I've got it working, but if I try to put a server tag inside the block:

        <custom:control ID="Custom" runat="server">
            <%= ControlName.ClientID %>
        </custom:control>

    the compiler does not like it. I know these are generated at runtime, and so might not be compatible with what I'm doing, but does anyone have any idea how I can get this working?

    EDIT The error message is: Code blocks are not supported in this context

    EDIT 2 The control:

        [DataBindingHandler("System.Web.UI.Design.TextDataBindingHandler, System.Design, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"),
         ControlValueProperty("Text"), DefaultProperty("Text"), ParseChildren(true, "Text"),
         AspNetHostingPermission(SecurityAction.LinkDemand, Level = AspNetHostingPermissionLevel.Minimal),
         AspNetHostingPermission(SecurityAction.InheritanceDemand, Level = AspNetHostingPermissionLevel.Minimal)]
        public class CustomControl : Control, ITextControl {
            [DefaultValue(""), Bindable(true), Localizable(true)]
            public string Text {
                get { return (string)(ViewState["Text"] ?? string.Empty); }
                set { ViewState["Text"] = value; }
            }
        }
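
    A possible workaround (not from the original post): because the control parses its inner content into the Text property, <%= %> code blocks are rejected there, but the same value can be assigned from code-behind. A minimal sketch, assuming both controls are reachable from the page class:

        protected void Page_Load(object sender, EventArgs e)
        {
            // Build the script text in code so no <%= %> block is needed in the markup.
            Custom.Text = ControlName.ClientID;
        }

    A databinding expression (<%# ... %>) combined with a DataBind() call is another route that is sometimes accepted where <%= %> is not.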

    Read the article

  • Java Servlet: getInitParameter not working in service()

    - by Gabriele
    I've added some parameters to my web.xml config file, as follows:

        <context-param>
            <param-name>service1</param-name>
            <param-value>http://www.example.com/example2.html</param-value>
        </context-param>
        <context-param>
            <param-name>service2</param-name>
            <param-value>http://www.example.com/example2.html</param-value>
        </context-param>
        ...

    I try to read the parameter in my servlet, in particular in my service method:

        protected void service(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            System.out.println(this.getServletContext().getInitParameter("service1"));
            ...

    but at runtime I get a NullPointerException. How can I read the parameter value included in web.xml? This is the stack trace:

        GRAVE: Servlet.service() for servlet DispatcherServlet threw exception
        java.lang.NullPointerException
            at javax.servlet.GenericServlet.getServletContext(GenericServlet.java:160)
            at it.servlethope.DispatcherServlet.service(DispatcherServlet.java:66)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.tuckey.web.filters.urlrewrite.RuleChain.handleRewrite(RuleChain.java:176)
            at org.tuckey.web.filters.urlrewrite.RuleChain.doRules(RuleChain.java:145)
            at org.tuckey.web.filters.urlrewrite.UrlRewriter.processRequest(UrlRewriter.java:92)
            at org.tuckey.web.filters.urlrewrite.UrlRewriteFilter.doFilter(UrlRewriteFilter.java:381)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            at java.lang.Thread.run(Unknown Source)

    DispatcherServlet.java:66 is the line where I call getInitParameter().

    Read the article

  • can a webservice load jars during run time

    - by KItis
    I have created a simple web service using Java. I want to load the jars related to the web service at runtime. I have done this task for a normal Java application; there, what I did was:

        JarFile jar = new JarFile(f.getPath());
        final Manifest manifest = jar.getManifest();
        final Attributes mattr = manifest.getMainAttributes();
        // Read the manifest in the jar file and get the Name attribute
        // where it specifies the class that needs to be loaded
        //for (Object a : mattr.keySet()) {
        for (Iterator iter = mattr.keySet().iterator(); iter.hasNext();) {
            Object obj = (Object) iter.next();
            if ("ServiceName".equals(obj.toString()))
                className = mattr.getValue((Name) obj);
            //System.out.println(className);
        }
        /*
         * Create the jar class loader and use the first argument passed
         * in from the command line as the jar file to use.
         */
        JarClassLoader jarLoader = new JarClassLoader(f.getPath());
        /* Load the class from the jar file and resolve it. */
        Class c = jarLoader.loadClass(className, true);

    My problem is: can I put the jars that need to be loaded at runtime into a separate folder rather than putting them into the WEB-INF folder? Do I have to put the jars both in Axis and in the web application? Thanks in advance for any contribution to this question.

    Read the article

  • Is the Java classpath final after JVM startup?

    - by Jens
    Hi, I have read a lot about the Java class loading process lately. Often I came across texts claiming that it is not possible to add classes to the classpath during runtime and load them without class loader hackery (URLClassLoaders etc.). As far as I know, classes are loaded dynamically; that means their bytecode representation is only loaded and transformed into a java.lang.Class object when needed. So shouldn't it be possible to add a JAR or *.class file to the classpath after the JVM has started and load those classes, provided they haven't been loaded yet? (To be clear: in this case the classpath is a simple folder on the filesystem. "Adding a JAR or *.class file" simply means dropping it into this folder.) And if not, does that mean that the classpath is searched on JVM startup and all fully qualified names of the found classes are cached in an internal "list"? It would be nice if you could point me to some sources in your answers, preferably the official Sun documentation: the Sun JVM spec. I have read the spec but could not find anything about the classpath and whether it is finalized at JVM startup. P.S. This is a theoretical question. I just want to know if it is possible; there is nothing practical I want to achieve, just my thirst for knowledge :)

    Read the article

  • A generic error occurred in GDI+, JPEG Image to MemoryStream

    - by madcapnmckay
    Hi, this seems to be a bit of an infamous error all over the web, so much so that I have been unable to find an answer to my problem, as my scenario doesn't fit. An exception gets thrown when I save the image to the stream. Weirdly, this works perfectly with a PNG but gives the above error with JPG and GIF, which is rather confusing. Most similar problems out there relate to saving images to files without permissions. Ironically, the usual solution is to use a memory stream, as I am doing....

        public static byte[] ConvertImageToByteArray(Image imageToConvert) {
            using (var ms = new MemoryStream()) {
                ImageFormat format;
                switch (imageToConvert.MimeType()) {
                    case "image/png":
                        format = ImageFormat.Png;
                        break;
                    case "image/gif":
                        format = ImageFormat.Gif;
                        break;
                    default:
                        format = ImageFormat.Jpeg;
                        break;
                }
                imageToConvert.Save(ms, format);
                return ms.ToArray();
            }
        }

    More detail on the exception (the reason this causes so many issues is the lack of explanation):

        System.Runtime.InteropServices.ExternalException was unhandled by user code
        Message="A generic error occurred in GDI+."
        Source="System.Drawing"
        ErrorCode=-2147467259
        StackTrace:
            at System.Drawing.Image.Save(Stream stream, ImageCodecInfo encoder, EncoderParameters encoderParams)
            at System.Drawing.Image.Save(Stream stream, ImageFormat format)
            at Caldoo.Infrastructure.PhotoEditor.ConvertImageToByteArray(Image imageToConvert) in C:\Users\Ian\SVN\Caldoo\Caldoo.Coordinator\PhotoEditor.cs:line 139
            at Caldoo.Web.Controllers.PictureController.Croppable() in C:\Users\Ian\SVN\Caldoo\Caldoo.Web\Controllers\PictureController.cs:line 132
            at lambda_method(ExecutionScope , ControllerBase , Object[] )
            at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters)
            at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters)
            at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters)
            at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClassa.<InvokeActionMethodWithFilters>b__7()
            at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation)
        InnerException:

    OK, things I have tried so far: cloning the image and working on that; retrieving the encoder for that MIME type and passing that along with the JPEG quality setting. Please can anyone help.
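
    One hedged suggestion (not from the original post): GDI+ keeps a dependency on the stream or file an Image was loaded from, so if that source has been disposed by the time Save runs, "A generic error occurred in GDI+" is a typical symptom. Image.Clone() tends to preserve that dependency, while redrawing into a brand-new Bitmap does not, so a sketch along these lines may behave differently from the clone attempt mentioned above:

        public static byte[] ConvertImageToByteArray(Image imageToConvert, ImageFormat format) {
            using (var copy = new Bitmap(imageToConvert.Width, imageToConvert.Height)) {
                using (var g = Graphics.FromImage(copy)) {
                    g.DrawImage(imageToConvert, 0, 0, imageToConvert.Width, imageToConvert.Height);
                }
                using (var ms = new MemoryStream()) {
                    copy.Save(ms, format);   // the copy has no tie to the original source stream
                    return ms.ToArray();
                }
            }
        }

    If the original image is loaded from a stream elsewhere in the code, keeping that stream open until after the save is the other common fix.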

    Read the article

  • How to Inject code in c# method calls from a separate app

    - by Fusspawn
    I was curious whether anyone knew of a way of monitoring a .NET application's runtime information (what method is being called and so on) and injecting extra code to be run on certain methods from a separately running process. Say I have two applications. app1.exe, which for simplicity's sake could be:

        class Program {
            static void Main(string[] args) {
                while (true) {
                    Somefunc();
                }
            }
            static void Somefunc() {
                Console.WriteLine("Hello World");
            }
        }

    and a second application that I wish to be able to detect when Somefunc() from application 1 is running and inject its own code:

        class Program {
            static void Main(string[] args) {
                while (true) {
                    if (App1.SomeFuncIsCalled)
                        InjectCode();
                }
            }
            static void InjectCode() {
                App1.Console.WriteLine("Hello World Injected");
            }
        }

    So the result would be that application 1 would show:

        Hello World
        Hello World Injected

    I understand it's not going to be this simple (by a long shot), but I have no idea if it's even possible, and if it is, where to even start. Any suggestions? I've seen similar done in Java, but never in C#. EDIT: To clarify, the usage of this would be to add a plugin system to a .NET-based game that I do not have access to the source code of.
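
    A partial sketch (not from the original post) for the plugin-system angle only: if an assembly can be made to load inside the game's process at all (for example through an existing mod loader or host hook), discovering and running plugins is plain reflection, with no method interception required. The IGamePlugin interface and the plugins folder below are assumptions for illustration:

        using System;
        using System.IO;
        using System.Reflection;

        public interface IGamePlugin { void Start(); }   // hypothetical plugin contract

        public static class PluginLoader {
            public static void LoadAll(string folder) {
                foreach (string dll in Directory.GetFiles(folder, "*.dll")) {
                    Assembly asm = Assembly.LoadFrom(dll);
                    foreach (Type t in asm.GetTypes()) {
                        if (typeof(IGamePlugin).IsAssignableFrom(t) && !t.IsAbstract) {
                            var plugin = (IGamePlugin)Activator.CreateInstance(t);
                            plugin.Start();   // the plugin runs inside the host process
                        }
                    }
                }
            }
        }

    Intercepting calls to a specific method in a process you don't own is a separate, much harder problem (profiling API / IL-rewriting territory) and isn't covered by this sketch.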

    Read the article

  • problem adding object to hashtable

    - by daemonkid
    I am trying to call a class method dynamically depending on a condition. This is how I am doing it: I have three classes implementing a single interface:

        interface IReadFile {
            string DoStuff();
        }

    The three classes classA, classB, and classC implement the interface above. I am trying to add them to a hashtable with the code below:

        _HashT.Add("a", new classA());
        _HashT.Add("b", new classB());
        _HashT.Add("c", new classC());

    This compiles fine, but gives a runtime error: {Object reference not set to an instance of an object.} I was planning to return the correct class as the interface type depending on a parameter that matches the key value. Say I send in "a": classA is returned as the interface type and the method is called.

        IReadFile Obj = (IReadFile)_HashT["a"].GetType();
        obj.DoStuff();

    How do I correct the part above where the objects are added to the hashtable? Or do I need to use a different approach? All the classes are in the same assembly and namespace. Thanks for your time.
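
    A minimal sketch (not from the original post) of one way this is often written, assuming a generic dictionary is acceptable: the table is created before anything is added, and the stored instance is used directly instead of its Type.

        // requires using System.Collections.Generic;
        private static readonly Dictionary<string, IReadFile> _readers =
            new Dictionary<string, IReadFile> {
                { "a", new classA() },
                { "b", new classB() },
                { "c", new classC() }
            };

        // pick the implementation by key and call it through the interface
        IReadFile reader = _readers["a"];
        string result = reader.DoStuff();

    The NullReferenceException in the original code is consistent with _HashT never being initialized (e.g. with new Hashtable()) before the Add calls, and the (IReadFile)_HashT["a"].GetType() cast would fail in any case, because GetType() returns a System.Type rather than the stored instance.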

    Read the article

  • Deleting dynamic pageviews using Javascript

    - by user203127
    Hi, I created a dynamic tab creation application using the Telerik RadTabStrip. I can delete tabs that are created dynamically using the crossbar option. When I delete a tab, it just deletes the tab but not the page view corresponding to that tab. I have tried to delete the page views, but I get an error like "Microsoft JScript runtime error: Object doesn't support this property or method" pointing at the multiPage.get_pageViews().remove(pageView); line. Can anyone help me with how to delete page views? Thanks in advance. JavaScript code:

        <script type="text/javascript">
        /* <![CDATA[ */
        function deleteTab(tabText) {
            var tabStrip = $find("<%= RadTabStrip1.ClientID %>");
            var multiPage = $find("<%= RadMultiPage1.ClientID %>");
            var tab = tabStrip.findTabByText(tabText);
            var pageView = tab.get_pageView();
            var tabToSelect = tab.get_nextTab();
            if (!tabToSelect)
                tabToSelect = tab.get_previousTab();
            tabStrip.get_tabs().remove(tab);
            multiPage.get_pageView().remove(pageView);
            multiPage.get_pageViews().remove(pageView);
            if (tabToSelect)
                tabToSelect.set_selected(true);
        }
        /* ]]> */
        </script>

    Read the article

  • How to determine one to many column name from entity type

    - by snicker
    I need a way to determine the name of the column NHibernate uses to join one-to-many collections, starting from the collected entity's type, and I need to be able to determine this at runtime. Here is an example. I have some entities:

        namespace Entities {
            public class Stable {
                public virtual int Id { get; set; }
                public virtual string StableName { get; set; }
                public virtual IList<Pony> Ponies { get; set; }
            }
            public class Dude {
                public virtual int Id { get; set; }
                public virtual string DudesName { get; set; }
                public virtual IList<Pony> PoniesThatBelongToDude { get; set; }
            }
            public class Pony {
                public virtual int Id { get; set; }
                public virtual string Name { get; set; }
                public virtual string Color { get; set; }
            }
        }

    I am using NHibernate to generate the database schema, which comes out looking like this:

        create table "Stable" (Id integer, StableName TEXT, primary key (Id))
        create table "Dude" (Id integer, DudesName TEXT, primary key (Id))
        create table "Pony" (Id integer, Name TEXT, Color TEXT, Stable_id INTEGER, Dude_id INTEGER, primary key (Id))

    Given a Pony entity in my code, I need to be able to find out: A. Does Pony even belong to a collection in the mapping? B. If it does, what are the column names in the database table that pertain to those collections? In the above instance, I would like to see that Pony has two collection columns, Stable_id and Dude_id.

    Read the article

  • How to deserialize implementation classes in OSGi

    - by Daniel Schneller
    In an eRCP OSGi based application the user can push a button and go to a lock screen similar to that of Windows or Mac OS X. When this happens, the current state of the application is serialized to a file and control is handed over to the lock screen. In this mobile application memory is very tight, so we need to get rid of the original view/controller when the lock screen comes up. This works fine and we end up with a binary serialized file. Once the user logs back in, the file is read in again and the original state of the application restored. This works fine as well, except when the controller that was serialized contained a reference to an object which comes from a different bundle. In my concrete case the original controller (from bundle A) can call a web service and gets a result back. Nothing fancy, just some Strings and Numbers in a simple value holder class. However the controller only sees this as a Result interface; the actual runtime object (ResultImpl) is defined and created in a different bundle (bundle B, the webservice client implementation) and returned via a service call. When the deserialization now tries to thaw the controller from the file, it throws a ClassNotFound exception, complaining about not being able to deserialize the result object, because deserialization is called from bundle A, which cannot see the ResultImpl class from bundle B. Any ideas on how to work around that? The only thing I could come up with is to clone all the individual values into another object, defined in the controller's bundle, but this seems like quite a hassle.

    Read the article

  • IIS Flash Remoting Request Limit introduced with Coldfusion 9

    - by ciaranarcher
    Hi all, we've just upgraded our ColdFusion servers from Enterprise CF 8.01 to CF 9; they are running Windows 2008. We ran into trouble on the servers that provide the Flash Remoting back-end for a Flex application we provide. Once the CF 9 upgrade was complete, we noticed that during busy times, when many Flex clients were connecting, we appeared to have a hard limit of 25 Flash Remoting requests running, despite having much higher limits (in fact 150) set in CF Admin. Initially we thought this was an issue with BlazeDS now being bundled with CF 9 (rather than a separate install), so we decided to roll back the CF 9 installation. Unfortunately this didn't work, and we were still stuck with our hard limit of 25 Flash Remoting requests. Then, looking at IIS, we noticed that the CF 9 ISAPI filter was still installed (after we had run the Web Service Configuration part of the install). That was removed, the CF 8 one was re-run, and all of a sudden the Flash Remoting hard limit disappeared. So it seems it might have had something to do with the wsconfig of CF 9 (C:\ColdFusion9\runtime\bin\wsconfig.exe). Has anyone else had this problem, or does anybody know where these hard limits are configured in IIS? Any and all help appreciated!

    Read the article

  • C standard addressing simplification inconsistency

    - by Chris Lutz
    Section §6.5.3.2 "Address and indirection operators" ¶3 says (relevant section only):

        The unary & operator returns the address of its operand. ... If the operand is the result of a unary * operator, neither that operator nor the & operator is evaluated and the result is as if both were omitted, except that the constraints on the operators still apply and the result is not an lvalue. Similarly, if the operand is the result of a [] operator, neither the & operator nor the unary * that is implied by the [] is evaluated and the result is as if the & operator were removed and the [] operator were changed to a + operator. ...

    This means that this:

        int *i = NULL;
        printf("%p", (void *) (&*i) );
        printf("%p", (void *) (&i[10]) );

    should be perfectly legal, printing the null pointer (probably 0) and the null pointer plus 10 (probably 10). The standard seems very clear that both of those cases are required to be optimized. However, it doesn't seem to require the following to be optimized:

        struct { int a; short b; } *s = 0;
        printf("%p", (void *) (&s->b) );

    This seems awfully inconsistent. I can see no reason that the above code shouldn't print the null pointer plus sizeof(int) (possibly 4). Simplifying an &-> expression is going to be the same conceptually (IMHO) as &[], a simple address-plus-offset. It's even an offset that's determinable at compile time, rather than potentially at runtime with the [] operator. Is there anything in the rationale about why this is so seemingly inconsistent?

    Read the article

  • Enumerating computers in NT4 domain using WNetEnumResourceW (C++) or DirectoryEntry (C#)

    - by Kevin Davis
    I'm trying to enumerate computers in NT4 domains (not Active Directory) and support Unicode NetBIOS names. According to MSDN, WNetEnumResourceW is the Unicode counterpart of WNetEnumResource, which to me would imply that using it would do the trick. However, I have not been able to get Unicode NetBIOS names properly using WNetEnumResourceW. I've also tried the rough C# equivalent, DirectoryEntry using the WinNT: provider, with no luck on Unicode names either. If I use DirectoryEntry against Active Directory (using the LDAP: provider) I do get Unicode names back. While debugging my code that uses DirectoryEntry and the WinNT: provider, I noticed that the exceptions I saw were of type System.Runtime.InteropServices.COMException, which tends to make me believe that this is just calling WNetEnumResourceW via COM. This web page implies that for some Net APIs the MS documentation is incomplete and possibly inaccurate, which further confuses things. Additionally, I've found that using the C# method, which certainly results in cleaner, more understandable code, also yields incomplete results when enumerating computers in domains\workgroups. Does anyone have any insight on this? Is it possible that the computer acting as the WINS server is mangling the name? How would I determine this? Thanks
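
    For reference, a minimal sketch (not from the original post) of the DirectoryEntry/WinNT: enumeration being described, with the domain name as a placeholder; it shows the call pattern only and does not by itself solve the Unicode-name problem reported above:

        using System;
        using System.DirectoryServices;

        class EnumComputers {
            static void Main() {
                // "MYDOMAIN" is a hypothetical NT4 domain or workgroup name
                using (var domain = new DirectoryEntry("WinNT://MYDOMAIN")) {
                    foreach (DirectoryEntry child in domain.Children) {
                        if (child.SchemaClassName == "Computer")
                            Console.WriteLine(child.Name);
                        child.Dispose();
                    }
                }
            }
        }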

    Read the article

  • How to print a page when a JButton is pressed in java swing using PrinterJob?

    - by Prayag Upd
    I tried the following AWT code, but at runtime it shows multiple print dialogs repeatedly....

        package printerjob;

        import java.awt.BasicStroke;
        import java.awt.Frame;
        import java.awt.Graphics;
        import java.awt.Graphics2D;
        import java.awt.event.WindowAdapter;
        import java.awt.print.PageFormat;
        import java.awt.print.Printable;
        import java.awt.print.PrinterException;
        import java.awt.print.PrinterJob;

        /**
         * @author pragX
         */
        public class FramePrinterJob extends Frame implements Printable {
            public void start() {
                add(button);
            }
            @Override
            public void paint(Graphics graphics) {
                PrinterJob printerJob = PrinterJob.getPrinterJob();
                printerJob.setPrintable(this);
                if (printerJob.printDialog()) {
                    try {
                        printerJob.print();
                    } catch (PrinterException printerException) {
                        //printerException.printStackTrace();
                        System.out.println("Error Printing." + printerException);
                    }
                }
            }
            public int print(Graphics graphics, PageFormat pageFormat, int pageIndex) throws PrinterException {
                //throw new UnsupportedOperationException("Not supported yet.");
                if (pageIndex >= 1) {
                    return Printable.NO_SUCH_PAGE;
                }
                graphics.translate((int) pageFormat.getImageableX(), (int) pageFormat.getImageableY());
                Graphics2D graphics2D = (Graphics2D) graphics;
                graphics2D.setStroke(new BasicStroke(4f));
                graphics2D.drawLine(20, 20, 20, 120);
                graphics2D.drawLine(40, 20, 40, 120);
                graphics2D.drawLine(20, 70, 40, 70);
                graphics2D.drawLine(60, 70, 60, 120);
                graphics2D.drawLine(60, 40, 60, 45);
                return Printable.PAGE_EXISTS;
            }
            public static void main(String args[]) {
                Frame frame = new FramePrinterJob();
                frame.addWindowListener(new WindowAdapter() {
                    public void windowClosing() { System.exit(0); }
                });
                frame.setSize(300, 400);
                frame.setVisible(true);
            }
        }

    Read the article

  • ASP.NET MVC3 - Bug using Javascript

    - by ebb
    Hey there, I'm trying to use Ajax.BeginForm() to POST a JSON result from my controller (I'm using MVC3). When the JSON result is returned it should be passed to a JavaScript function, which extracts the object using "var myObject = content.get_response().get_object();". However, it just throws "Microsoft JScript runtime error: Object doesn't support this property or method" when the Ajax POST is invoked. My code:

    Controller:

        [HttpPost]
        public ActionResult Index(string message)
        {
            return Json(new { Success = true, Message = message });
        }

    View:

        <!DOCTYPE html>
        <html>
        <head>
            <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script>
            <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.js")" type="text/javascript"></script>
            <script src="@Url.Content("~/Scripts/MicrosoftAjax.js")" type="text/javascript"></script>
            <script src="@Url.Content("~/Scripts/MicrosoftMvcAjax.js")" type="text/javascript"></script>
            <script type="text/javascript">
                function JsonAdd_OnComplete(mycontext) {
                    var myObject = mycontext.get_response().get_object();
                    alert(mycontext.Message);
                }
            </script>
        </head>
        <body>
            <div>
                @using (Ajax.BeginForm("Index", "Home", new AjaxOptions() { HttpMethod = "POST", OnComplete = "JsonAdd_OnComplete" }))
                {
                    @Html.TextBox("message")
                    <input type="submit" value="SUBMIT" />
                }
            </div>
        </body>
        </html>

    The strange thing is that exactly the same code works in MVC2 - is this a bug, or have I forgotten something? Thanks in advance.

    Read the article

  • Boost link error when using "--layout=system" on VS2005

    - by Kevin
    I'm new to Boost, and thought I'd try it out with some realistic deployment scenarios for the DLLs, so I used the following command to compile/install the libraries:

        .\bjam install --layout=system variant=debug runtime-link=shared link=shared --with-date_time --with-thread --with-regex --with-filesystem --includedir=<my include directory> --libdir=<my bin directory> > installlog.txt

    That seemed to work, but my simple program (taken right from the "Getting Started" page) fails:

        #include <boost/regex.hpp>
        #include <iostream>
        #include <string>
        // Place your functions after this line
        int main()
        {
            std::string line;
            boost::regex pat( "^Subject: (Re: |Aw: )*(.*)" );
            while (std::cin)
            {
                std::getline(std::cin, line);
                boost::smatch matches;
                if (boost::regex_match(line, matches, pat))
                    std::cout << matches[2] << std::endl;
            }
        }

    This fails with the following linker error:

        fatal error LNK1104: cannot open file 'libboost_regex-vc80-mt-1_42.lib'

    I'm sure that both the .lib and the DLLs are in that directory, and named how I want them to be (i.e. boost_regex.lib etc., all unversioned, as --layout=system says). So why is it looking for the versioned form of the library? And how do I get it to look for the unversioned form? I've tried this with more "normal" options, such as below:

        .\bjam stage --build-type=complete --with-date_time --with-thread --with-filesystem --with-regex > mybuildlog.txt

    And that works fine. I made sure my compiler saw the stage\lib directory, and it compiled and ran fine with nothing beyond having the environment look in the right lib directory. But when I took those "testing" directories away and wanted to use these others (unversioned), it failed. I'm under VS2005 here on XP. Any ideas?

    Read the article

  • Why is TRest in Tuple<T1... TRest> not constrained?

    - by Anthony Pegram
    In a Tuple, if you have more than 7 items, you can provide an 8th item that is another tuple, define up to 7 items in it, and then another tuple as its 8th item, and so on down the line. However, there is no constraint on the 8th item at compile time. For example, this is legal code for the compiler:

        var tuple = new Tuple<int, int, int, int, int, int, int, double>
            (1, 1, 1, 1, 1, 1, 1, 1d);

    even though the IntelliSense documentation says that TRest must be a Tuple. You do not get any error when writing or building the code; it does not manifest until runtime, in the form of an ArgumentException. You can roughly implement a Tuple in a few minutes, complete with a Tuple-constrained 8th item. I just wonder why it was left out of the current implementation? Is it possibly a forward-compatibility issue where they could add more elements with a hypothetical C# 5? Short version of a rough implementation:

        interface IMyTuple { }

        class MyTuple<T1> : IMyTuple {
            public T1 Item1 { get; private set; }
            public MyTuple(T1 item1) { Item1 = item1; }
        }

        class MyTuple<T1, T2> : MyTuple<T1> {
            public T2 Item2 { get; private set; }
            public MyTuple(T1 item1, T2 item2) : base(item1) { Item2 = item2; }
        }

        class MyTuple<T1, T2, TRest> : MyTuple<T1, T2> where TRest : IMyTuple {
            public TRest Rest { get; private set; }
            public MyTuple(T1 item1, T2 item2, TRest rest) : base(item1, item2) { Rest = rest; }
        }

        ...

        var mytuple = new MyTuple<int, int, MyTuple<int>>(1, 1, new MyTuple<int>(1)); // legal
        var mytuple2 = new MyTuple<int, int, int>(1, 2, 3); // illegal
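
    To make the runtime-versus-compile-time difference concrete, a small sketch (not from the original post); the behavior follows the documented BCL contract that the framework's eight-element Tuple constructor validates TRest at runtime:

        // Compiles and runs: the 8th component is itself a Tuple, as intended.
        var nested = new Tuple<int, int, int, int, int, int, int, Tuple<int>>(
            1, 2, 3, 4, 5, 6, 7, Tuple.Create(8));
        Console.WriteLine(nested.Rest.Item1);   // 8

        // Also compiles: the factory builds the nested shape for you.
        var viaFactory = Tuple.Create(1, 2, 3, 4, 5, 6, 7, 8);   // Tuple<..., Tuple<int>>

        // Compiles, but throws ArgumentException when the constructor runs,
        // because double is not a tuple type.
        var invalid = new Tuple<int, int, int, int, int, int, int, double>(
            1, 2, 3, 4, 5, 6, 7, 8d);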

    Read the article

  • Java/Groovy File IO Replacing an Image File with it's own Contents - Why Does This Work?

    - by jboyd
    I have some JPG files that need to be replaced at runtime with a JFIF-standardized version of themselves (we are using a vendor that gives us JPGs that do not have proper headers, so they don't work in certain applications)... I am able to create a new File object for the existing image, read a buffered image from that file, and write the contents right back into the file without having to delete it, and it works:

        imageSrcFolder.eachFileMatch( ~/.*\.jpg/, {
            BufferedImage bi = ImageIO.read( it )
            ImageIO.write( bi, "jpg", it )
        });

    The question I have is: why? Why doesn't the file end up doubled in size? Why don't I have to delete it first? Why am I able to take a File object for an existing file and then treat it as if it were a brand new one? It seems that what I consider to be a "file" is not what the File object in Java actually is, or else this wouldn't work at all. My code does exactly what I want it to do, but I'm not convinced it always will... it just seems way too easy.

    Read the article

  • Getting error when compiling debug mode: C++/CLI - error LNK2022

    - by Yochai Timmer
    I've got CLI code wrapping a C++ DLL. When I try to compile it in debug mode, I get the following error:

        Error 22 error LNK2022: metadata operation failed (8013118D) :
        Inconsistent layout information in duplicated types .... MSVCMRTD.lib (locale0_implib.obj)

    The weird thing is that in Release mode it compiles OK and works OK. The only difference I can see that causes the problem is when I change Configuration Properties - C/C++ - Code Generation - Runtime Library: when it's set to Multi-threaded Debug DLL (/MDd) it throws the error; when it's set to Multi-threaded DLL (/MD) it compiles fine. The same settings work for all the other DLLs in the project (CLI and C++), and they inherit the same properties. I'm using VS2010. So, how can I solve this? And can I get some explanation of WHY this is happening? Update: I've basically tried changing every option in the project's properties with no luck. I've read somewhere that this might be caused by duplicate declarations of a type of the same name, but in the CLI file I'm calling std::string etc. explicitly from std. Any other ideas?

    Read the article

  • Setting up relations/mappings for a SQLAlchemy many-to-many database

    - by Brent Ramerth
    I'm new to SQLAlchemy and relational databases, and I'm trying to set up a model for an annotated lexicon. I want to support an arbitrary number of key-value annotations for the words, which can be added or removed at runtime. Since there will be a lot of repetition in the names of the keys, I don't want to use this solution directly, although the code is similar. My design has word objects and property objects. The words and properties are stored in separate tables, with a property_values table that links the two. Here's the code:

        from sqlalchemy import Column, Integer, String, Table, create_engine
        from sqlalchemy import MetaData, ForeignKey
        from sqlalchemy.orm import relation, mapper, sessionmaker
        from sqlalchemy.ext.declarative import declarative_base

        engine = create_engine('sqlite:///test.db', echo=True)
        meta = MetaData(bind=engine)

        property_values = Table('property_values', meta,
            Column('word_id', Integer, ForeignKey('words.id')),
            Column('property_id', Integer, ForeignKey('properties.id')),
            Column('value', String(20))
        )

        words = Table('words', meta,
            Column('id', Integer, primary_key=True),
            Column('name', String(20)),
            Column('freq', Integer)
        )

        properties = Table('properties', meta,
            Column('id', Integer, primary_key=True),
            Column('name', String(20), nullable=False, unique=True)
        )

        meta.create_all()

        class Word(object):
            def __init__(self, name, freq=1):
                self.name = name
                self.freq = freq

        class Property(object):
            def __init__(self, name):
                self.name = name

        mapper(Property, properties)

    Now I'd like to be able to do the following:

        Session = sessionmaker(bind=engine)
        s = Session()
        word = Word('foo', 42)
        word['bar'] = 'yes'  # or word.bar = 'yes' ?
        s.add(word)
        s.commit()

    Ideally this should add 1|foo|42 to the words table, add 1|bar to the properties table, and add 1|1|yes to the property_values table. However, I don't have the right mappings and relations in place to make this happen. I get the sense from reading the documentation at http://www.sqlalchemy.org/docs/05/mappers.html#association-pattern that I want to use an association proxy or something of that sort here, but the syntax is unclear to me. I experimented with this:

        mapper(Word, words, properties={
            'properties': relation(Property, secondary=property_values)
        })

    but this mapper only fills in the foreign key values, and I need to fill in the other value as well. Any assistance would be greatly appreciated.

    Read the article

  • C++: is it safe to work with std::vectors as if they were arrays?

    - by peoro
    I need to have a fixed-size array of elements and to call functions on them that require knowing how they're placed in memory, in particular functions like glVertexPointer, which needs to know where the vertices are, how far apart they are from one another, and so on. In my case the vertices would be members of the elements to store. To get the index of an element within this array, I'd prefer to avoid having an index field within my elements, and would rather play with pointer arithmetic (i.e. the index of Element *x will be x - &array[0]) -- by the way, this sounds dirty to me: is it good practice or should I do something else? Is it safe to use std::vector for this? Something makes me think that an std::array would be more appropriate, but: constructors and destructors for my structure will rarely be called, so I don't mind that overhead; I'm going to set the std::vector capacity to the size I need (the size I would use for an std::array), so it won't incur any overhead from sporadic reallocation; and I don't mind a little space overhead for std::vector's internal structure. I could use the ability to resize the vector (or better: to have the size chosen during setup), and I think there's no way to do this with std::array, since its size is a template parameter (which is too bad: I could do that even with an old C-like array, just by allocating it dynamically on the heap). If std::vector is fine for my purpose, I'd like to know in detail whether it will have any runtime overhead with respect to std::array (or to a plain C array): I know that it'll call the default constructor for each element once I increase its size (but I guess this won't cost anything if my data has an empty default constructor?), and the same for the destructor. Anything else?

    Read the article

  • AnyCPU/x86/x64 for C# application and it's C++/CLI dependency

    - by Soonts
    I'm a Windows developer using Microsoft Visual Studio 2008 SP1, and my developer machine is 64-bit. The software I'm currently working on is a managed .exe written in C#. Unfortunately, I was unable to solve the whole problem solely in C#, which is why I also developed a small managed DLL in C++/CLI. Both projects are in the same solution. My C# .exe build target is "Any CPU". When my C++ DLL build target is "x86", the DLL is not loaded. As far as I understood when I googled, the reason is that C++/CLI, unlike other .NET languages, compiles to native code, not managed code. I switched the C++ DLL build target to x64 and everything works now. However, AFAIK everything will stop working as soon as my client installs my product on a 32-bit OS. I have to support Windows Vista and 7, both the 32- and 64-bit versions of each. I don't want to fall back to 32 bits: those 250 lines of C++ code in my DLL are only 2% of my codebase, and the DLL is only used in several places, so in the typical usage scenario it isn't even loaded. My DLL implements two COM objects with ATL, so I can't use the "/clr:safe" project setting. Is there a way to configure the solution and the projects so that the C# project builds an "Any CPU" version, the C++ project builds both 32-bit and 64-bit versions, and then at runtime, when the managed .EXE is starting up, it uses either the 32-bit DLL or the 64-bit DLL depending on the OS? Or maybe there's some better solution I'm not aware of? Thanks in advance!
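
    A sketch of one commonly used arrangement (not from the original post; the assembly and folder names are assumptions): ship both builds of the C++/CLI wrapper in x86\ and x64\ subfolders, and resolve the right one from the Any CPU executable with an AssemblyResolve handler registered before any type from the wrapper is touched.

        using System;
        using System.IO;
        using System.Reflection;

        static class Program {
            static void Main() {
                AppDomain.CurrentDomain.AssemblyResolve += (sender, e) => {
                    if (!e.Name.StartsWith("MyCppWrapper,", StringComparison.OrdinalIgnoreCase))
                        return null;                         // not our wrapper, let the default logic run
                    string arch = IntPtr.Size == 8 ? "x64" : "x86";
                    string path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory,
                                               arch, "MyCppWrapper.dll");
                    return Assembly.LoadFile(path);
                };
                Run();   // code that uses the wrapper lives outside Main, so the JIT
            }            // doesn't try to resolve the assembly before the handler is registered
            static void Run() { /* normal application startup that may use the wrapper */ }
        }

    The handler only fires for assemblies the loader cannot find on its own, so the wrapper must not sit next to the .exe under its normal name.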

    Read the article
