Search Results

Search found 11960 results on 479 pages for 'virtual domains'.


  • How to add objects to association in OnPreInsert, OnPreUpdate

    - by Dmitriy Nagirnyak
    Hi, I have an event listener (for audit logs) that needs to append audit log entries to an association on the object:

        public class Company : IAuditable
        {
            // Other members removed for brevity

            IAuditLog IAuditable.CreateAuditEntry()
            {
                var entry = new CompanyAudit();
                this.auditLogs.Add(entry);
                return entry;
            }

            public virtual IEnumerable<CompanyAudit> AuditLogs
            {
                get { return this.auditLogs; }
            }
        }

    The AuditLogs collection is mapped with cascading:

        public class CompanyMap : ClassMap<Company>
        {
            public CompanyMap()
            {
                // Id and other mappings removed for brevity
                HasMany(x => x.AuditLogs).AsSet()
                    .LazyLoad()
                    .Access.ReadOnlyPropertyThroughCamelCaseField()
                    .Cascade.All();
            }
        }

    And the listener just asks the auditable object to create log entries so it can update them:

        internal class AuditEventListener : IPreInsertEventListener, IPreUpdateEventListener
        {
            public bool OnPreUpdate(PreUpdateEvent ev)
            {
                var audit = ev.Entity as IAuditable;
                if (audit == null)
                    return false;

                Log(audit);
                return false;
            }

            public bool OnPreInsert(PreInsertEvent ev)
            {
                var audit = ev.Entity as IAuditable;
                if (audit == null)
                    return false;

                Log(audit);
                return false;
            }

            private static void Log(IAuditable auditable)
            {
                var entry = auditable.CreateAuditEntry();
                entry.CreatedAt = DateTime.Now;
                entry.Who = GetCurrentUser(); // Might potentially execute a query.
                // Other information is also set on the entry here.
            }
        }

    The problem is that it throws a TransientObjectException when committing the transaction:

        NHibernate.TransientObjectException : object references an unsaved transient instance - save the transient instance before flushing. Type: PropConnect.Model.UserAuditLog, Entity: PropConnect.Model.UserAuditLog
        at NHibernate.Engine.ForeignKeys.GetEntityIdentifierIfNotUnsaved(String entityName, Object entity, ISessionImplementor session)
        at NHibernate.Type.EntityType.GetIdentifier(Object value, ISessionImplementor session)
        at NHibernate.Type.ManyToOneType.NullSafeSet(IDbCommand st, Object value, Int32 index, Boolean[] settable, ISessionImplementor session)
        at NHibernate.Persister.Collection.AbstractCollectionPersister.WriteElement(IDbCommand st, Object elt, Int32 i, ISessionImplementor session)
        at NHibernate.Persister.Collection.AbstractCollectionPersister.PerformInsert(Object ownerId, IPersistentCollection collection, IExpectation expectation, Object entry, Int32 index, Boolean useBatch, Boolean callable, ISessionImplementor session)
        at NHibernate.Persister.Collection.AbstractCollectionPersister.Recreate(IPersistentCollection collection, Object id, ISessionImplementor session)
        at NHibernate.Action.CollectionRecreateAction.Execute()
        at NHibernate.Engine.ActionQueue.Execute(IExecutable executable)
        at NHibernate.Engine.ActionQueue.ExecuteActions(IList list)
        at NHibernate.Engine.ActionQueue.ExecuteActions()
        at NHibernate.Event.Default.AbstractFlushingEventListener.PerformExecutions(IEventSource session)
        at NHibernate.Event.Default.DefaultFlushEventListener.OnFlush(FlushEvent event)
        at NHibernate.Impl.SessionImpl.Flush()
        at NHibernate.Transaction.AdoTransaction.Commit()

    As the cascade is set to All, I expected NHibernate to handle this. I also tried modifying the collection through the event's state, but pretty much the same thing happens. So the question is: what is the last chance to modify an object's associations before it gets saved? Thanks, Dmitriy.
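
    One commonly suggested workaround for this kind of transient-instance error is to persist the new entry explicitly against the event's own session, rather than relying on a collection cascade that has already been scheduled by the time the pre-insert/pre-update listener runs. A minimal, untested sketch (assuming the event's Session property exposes the active session, as NHibernate's event arguments generally do):

        private static void Log(PreUpdateEvent ev, IAuditable auditable)
        {
            var entry = auditable.CreateAuditEntry();
            entry.CreatedAt = DateTime.Now;
            entry.Who = GetCurrentUser();

            // Save the new entry through the same session so it is no longer
            // transient by the time the owning collection is flushed.
            ev.Session.Save(entry);
        }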

    Read the article

  • Ruby erb template- try to change layout- get error

    - by nigel curruthers
    Hi there! I'm working my way through adapting a template I have been given, which is basically a list of products for sale. I want to change it from a top-down list into a table layout, ending up with something like this:

        <div id='ladiesproducts'>
          <% ladies_products = hosting_products.find_all do |product|
               product.name.match("ladies")
             end %>
          <table><tbody>
          <% [ladies_products].each do |slice| %>
            <tr>
            <% slice.each do |product| %>
              <td>
                <h4><%= product.name.html %></h4>
                <p><%= product.description %></p>
                <% # other parts go here %>
              </td>
            <% end %>
            </tr>
          <% end %>
          </tbody></table>
        </div>

    This works fine for the layout I am trying to achieve. The problem starts when I paste back the "other parts go here" portion of the code: I get an internal error message on the page. I am completely new to Ruby, so I am just bumbling my way through this, and I have a hunch that I'm neglecting something that is probably very simple. The "other parts go here" code is as follows:

        <input type='hidden' name='base_renewal_period-<%= i %>' value="<%= product.base_renewal_period %>" />
        <input type='hidden' name='quoted_unit_price-<%= i %>' value="<%= billing.price(product.unit_price) %>" />
        <p><input type='radio' name='add-product' value='<%= product.specific_type.html %>:<%= i %>:base_renewal_period,quoted_unit_price,domain' /><%= billing.currency_symbol.html %><%= billing.price(product.unit_price, :use_tax_prefs) %>
        <% if product.base_renewal_period != 'never' %>
          every <%= product.unit_period.to_s_short.html %>
        <% end %>
        <% if product.setup_fee != 0 %>
          plus a one off fee of <%= billing.currency_symbol.html %><%= sprintf("%.2f", if billing.include_tax? then billing.price(product.setup_fee) else product.setup_fee end) %>
        <% end %>
        <% if product.has_free_products? %>
          <br /> includes free domains
          <% product.free_products_list.each do |free_product| %>
            <%= free_product["free_name"] %>
          <% end %>
        <% end %>
        * </p>
        <% i = i + 1 %>
        <% end %>
        <p><input type='submit' value='Add to Basket'/></p>
        </form>
        <% unless basket.nil? or basket.empty? or no_upsell? %>
          <p><a href='basket?add-no-product=package'>No thank you, please continue with my order ...</a></p>
        <% end %>
        <% if not billing.tax_applies? %>
        <% elsif billing.include_tax? %>
          <p>* Includes <%= billing.tax_name %></p>
        <% else %>
          <p>* Excluding <%= billing.tax_name %></p>
        <% end %>

    If anyone can point out what I'm doing wrong, or what I'm missing or failing to change, I would GREATLY appreciate it! Many thanks in advance. Nigel
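
    As an aside: if the goal is to break the product list into table rows of a fixed width, Ruby's Enumerable#each_slice is a natural fit for producing the slices iterated above. A hedged sketch (the row width of 3 and the cell contents are illustrative, not from the original template):

        <% ladies_products.each_slice(3) do |slice| %>
          <tr>
          <% slice.each do |product| %>
            <td><h4><%= product.name.html %></h4></td>
          <% end %>
          </tr>
        <% end %>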

    Read the article

  • Any tool(s) for knowing the layout (segments) of running process in Windows?

    - by claws
    I've always been curious about how exactly a process looks in memory. What are the different segments (parts) in it? How exactly are the program (on disk) and the process (in memory) related? My previous question: http://stackoverflow.com/questions/1966920/more-info-on-memory-layout-of-an-executable-program-process

    In my quest, I finally found an answer. I found this excellent article that cleared up most of my queries: http://www.linuxforums.org/articles/understanding-elf-using-readelf-and-objdump_125.html

    In the article, the author shows how to get the different segments of a process (on Linux) and compares them with the corresponding ELF file. I'm quoting that section here:

        Curious to see the real layout of process segments? We can use the /proc/<pid>/maps file to reveal it, where <pid> is the PID of the process we want to observe. Before we move on, we have a small problem here: our test program runs so fast that it ends before we can even dump the related /proc entry. I use gdb to solve this. You can use another trick, such as inserting sleep() before it calls return(). In a console (or a terminal emulator such as xterm) do:

            $ gdb test
            (gdb) b main
            Breakpoint 1 at 0x8048376
            (gdb) r
            Breakpoint 1, 0x08048376 in main ()

        Hold right here, open another console and find out the PID of program "test". If you want the quick way, type:

            $ cat /proc/`pgrep test`/maps

        You will see an output like below (you might get different output):

            [1]  0039d000-003b2000 r-xp 00000000 16:41 1080084  /lib/ld-2.3.3.so
            [2]  003b2000-003b3000 r--p 00014000 16:41 1080084  /lib/ld-2.3.3.so
            [3]  003b3000-003b4000 rw-p 00015000 16:41 1080084  /lib/ld-2.3.3.so
            [4]  003b6000-004cb000 r-xp 00000000 16:41 1080085  /lib/tls/libc-2.3.3.so
            [5]  004cb000-004cd000 r--p 00115000 16:41 1080085  /lib/tls/libc-2.3.3.so
            [6]  004cd000-004cf000 rw-p 00117000 16:41 1080085  /lib/tls/libc-2.3.3.so
            [7]  004cf000-004d1000 rw-p 004cf000 00:00 0
            [8]  08048000-08049000 r-xp 00000000 16:06 66970    /tmp/test
            [9]  08049000-0804a000 rw-p 00000000 16:06 66970    /tmp/test
            [10] b7fec000-b7fed000 rw-p b7fec000 00:00 0
            [11] bffeb000-c0000000 rw-p bffeb000 00:00 0
            [12] ffffe000-fffff000 ---p 00000000 00:00 0

        Note: I added a number on each line as a reference. Back in gdb, type:

            (gdb) q

        So, in total, we see 12 segments (also known as Virtual Memory Areas, or VMAs).

    But I want to know about Windows processes and the PE file format. Are there any tools for getting the layout (segments) of a running process in Windows? Any other good resources for learning more on this subject?

    EDIT: Are there any good articles showing the mapping between PE file sections and virtual address segments?
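
    For reference, the closest Windows analogue to reading /proc/<pid>/maps programmatically is walking a process's address space with VirtualQueryEx (GUI tools such as Sysinternals VMMap and Process Explorer expose the same information). A minimal C sketch, error handling omitted (the handle needs PROCESS_QUERY_INFORMATION access; this is an illustration, not a polished tool):

        #include <windows.h>
        #include <stdio.h>

        /* Walk the virtual address space of a process and print each region. */
        void dump_regions(HANDLE process)
        {
            MEMORY_BASIC_INFORMATION mbi;
            unsigned char *addr = 0;

            while (VirtualQueryEx(process, addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
                printf("%p - %p state=%#lx protect=%#lx type=%#lx\n",
                       mbi.BaseAddress,
                       (unsigned char *)mbi.BaseAddress + mbi.RegionSize,
                       mbi.State, mbi.Protect, mbi.Type);
                addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
            }
        }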

    Read the article

  • ELMAH - Using custom error pages to collecting user feedback

    - by vdh_ant
    Hey guys, I'm looking at using ELMAH for the first time, but have a requirement that I'm not sure how to go about meeting.

    Basically, I am going to configure ELMAH to work under ASP.NET MVC and get it to log errors to the database when they occur. On top of this I'll be using customErrors to direct the user to a friendly message page when an error occurs. Fairly standard stuff. The requirement is that this custom error page has a form which enables the user to provide extra information if they wish. The problem arises from the fact that at this point the error is already logged, and I need to associate the logged error with the user's feedback. Normally, if I were using my own custom implementation, after logging the error I would pass the error's ID through to the custom error page so the association could be made. But because of the way ELMAH works, I don't think the same is possible. Hence I was wondering how people might go about doing this. Cheers.

    UPDATE: My solution to the problem is as follows:

        public class UserCurrentConextUsingWebContext : IUserCurrentConext
        {
            private const string _StoredExceptionName = "System.StoredException.";
            private const string _StoredExceptionIdName = "System.StoredExceptionId.";

            public virtual string UniqueAddress
            {
                get { return HttpContext.Current.Request.UserHostAddress; }
            }

            public Exception StoredException
            {
                get { return HttpContext.Current.Application[_StoredExceptionName + this.UniqueAddress] as Exception; }
                set { HttpContext.Current.Application[_StoredExceptionName + this.UniqueAddress] = value; }
            }

            public string StoredExceptionId
            {
                get { return HttpContext.Current.Application[_StoredExceptionIdName + this.UniqueAddress] as string; }
                set { HttpContext.Current.Application[_StoredExceptionIdName + this.UniqueAddress] = value; }
            }
        }

    Then when the error occurs, I have something like this in my Global.asax:

        public void ErrorLog_Logged(object sender, ErrorLoggedEventArgs args)
        {
            var item = new UserCurrentConextUsingWebContext();
            item.StoredException = args.Entry.Error.Exception;
            item.StoredExceptionId = args.Entry.Id;
        }

    Then wherever you are later, you can pull out the details with:

        var item = new UserCurrentConextUsingWebContext();
        var error = item.StoredException;
        var errorId = item.StoredExceptionId;
        item.StoredException = null;
        item.StoredExceptionId = null;

    Note this isn't 100% perfect, as it's possible for the same IP to have multiple requests erroring at the same time, but the likelihood of that happening is remote. This solution is also independent of the session, which in our case is important, partly because some errors can cause sessions to be terminated. Hence this approach has worked nicely for us.

    Read the article

  • How to show percentage of 'memory used' in a win32 process?

    - by pj4533
    I know that memory usage is a very complex issue on Windows. I am trying to write a UI control for a large application that shows a "percentage of memory used" number, in order to give the user an indication that it may be time to clear up some memory or, more likely, restart the application.

    One implementation used ullAvailVirtual from MEMORYSTATUSEX as a base, then used HeapWalk() to walk the process heap looking for additional free memory. The HeapWalk() step was needed because we noticed that, after a while of running, the memory allocated and freed by the heap was never returned and reported in the ullAvailVirtual number: after hours of intensive work, ullAvailVirtual would no longer accurately report the amount of memory available. However, this method proved not ideal, due to occasional odd errors that HeapWalk() would return, even when the process heap was not corrupted. Further, since this is a UI control, the heap-walking code was executing every 5-10 seconds. I tried contacting Microsoft about why HeapWalk() was failing, and escalated a case via MSDN, but never got an answer other than "you probably shouldn't do that".

    So, as a second implementation, I used PagefileUsage from PROCESS_MEMORY_COUNTERS as a base. Then I used VirtualQueryEx to walk the virtual address space, adding up all regions that weren't MEM_FREE and that returned a value for GetMappedFileNameA(). My thinking was that PagefileUsage was essentially "private bytes", so if I added to that value the total size of the DLLs my process was using, it would be a good approximation of the amount of memory my process was using. This second method seems to (sort of) work; at least it doesn't cause crashes like the heap-walker method. However, when both methods are enabled, the values are not the same, so one of the methods is wrong.

    So, StackOverflow world: how would you implement this? Which method is more promising, or do you have a third, better method? Should I go back to the original method and further debug the odd errors? Should I stay away from walking the heap every 5-10 seconds? Keep in mind the whole point is to indicate to the user that things are getting "dangerous", and that they should either free up memory or restart the application. Perhaps a "percentage used" isn't the best solution to this problem? What is? Another idea I had was a colour-based system (red, yellow, green), which I could base on more factors than just a single number.
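
    For what it's worth, the PagefileUsage figure described above comes from GetProcessMemoryInfo. A minimal sketch of reading it for the current process (a rough illustration; link against psapi.lib, and note that field semantics vary somewhat by Windows version):

        #include <windows.h>
        #include <psapi.h>
        #include <stdio.h>

        int main(void)
        {
            PROCESS_MEMORY_COUNTERS pmc;

            /* Query the current process's memory counters. */
            if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
                printf("PagefileUsage:  %lu bytes\n", (unsigned long)pmc.PagefileUsage);
                printf("WorkingSetSize: %lu bytes\n", (unsigned long)pmc.WorkingSetSize);
            }
            return 0;
        }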

    Read the article

  • Why is it still so hard to write software?

    - by nornagon
    Writing software, I find, is composed of two parts: the Idea, and the Implementation.

    The Idea is about thinking: "I have this problem; how do I solve it?" and further, "how do I solve it elegantly?" The answers to these questions are obtainable by thinking about algorithms and architecture. The ideas come partially through analysis and partially through insight and intuition. The Idea is usually the easy part. You talk to your friends and co-workers and you nut it out in a meeting or over coffee. It takes an hour or two, plus revisions as you implement and find new problems.

    The Implementation phase of software development is so difficult that we joke about it. "Oh," we say, "the rest is a Simple Matter of Code." Because it should be simple, but it never is. We used to write our code on punch cards, and that was hard: mistakes were very difficult to spot, so we had to spend extra effort making sure every line was perfect. Then we had serial terminals: we could see all our code at once, search through it, organise it hierarchically and create things abstracted from raw machine code. First we had assemblers, one level up from machine code; mnemonics freed us from remembering the machine code. Then we had compilers, which freed us from remembering the instructions. We had virtual machines, which let us step away from machine-specific details. And now we have advanced tools like Eclipse and Xcode that perform analysis on our code to help us write code faster and avoid common pitfalls.

    But writing code is still hard. Writing code is about understanding large, complex systems, and the tools we have today simply don't go very far to help us with that. When I click "find all references" in Eclipse, I get a list of them at the bottom of the window. I click on one, and I'm torn away from what I was looking at, forced to context switch. Java architecture is usually several levels deep, so I have to switch and switch and switch until I find what I'm really looking for -- by which time I've forgotten where I came from. And I do that all day until I've understood a system. It's taxing mentally, and Eclipse doesn't do much that couldn't be done in 1985 with grep, except eat hundreds of megs of RAM.

    Writing code has barely changed since we were staring at amber on black. We have the theoretical groundwork for much more advanced tools, tools that actually work to help us comprehend and extend the complex systems we work with every day. So why is writing code still so hard?

    Read the article

  • Apache: How can i see my localhost on 192.168.1.101 from 192.168.1.102?

    - by takpar
    Hi, I'm running Apache on Ubuntu. My IP address is 192.168.1.101. While http://localhost and http://192.168.1.101 work fine on my PC, I cannot access the server from my laptop via http://192.168.1.101. It's strange: I can ping 192.168.1.101, but I get "The connection has timed out." in the browser. I'm using the default Apache config, so this is what my sites-available/default looks like:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /home/www/public_html
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/www/public_html>
                Options Indexes FollowSymLinks MultiViews
                #AllowOverride None
                AllowOverride all
                Order allow,deny
                allow from all
            </Directory>

    /etc/apache2/ports.conf:

        NameVirtualHost *:80
        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    My laptop runs Ubuntu as well, so I don't think this is a firewall issue. Commands executed on the laptop (192.168.1.102):

        adp@adp-laptop:~$ ping 192.168.1.101
        PING 192.168.1.101 (192.168.1.101) 56(84) bytes of data.
        64 bytes from 192.168.1.101: icmp_seq=1 ttl=64 time=32.1 ms
        64 bytes from 192.168.1.101: icmp_seq=2 ttl=64 time=54.8 ms
        64 bytes from 192.168.1.101: icmp_seq=3 ttl=64 time=77.0 ms
        64 bytes from 192.168.1.101: icmp_seq=4 ttl=64 time=100 ms
        ^C
        --- 192.168.1.101 ping statistics ---
        4 packets transmitted, 4 received, 0% packet loss, time 3003ms
        rtt min/avg/max/mdev = 32.193/66.193/100.717/25.463 ms

        adp@adp-laptop:~$ telnet 192.168.1.101 80
        Trying 192.168.1.101...
        telnet: Unable to connect to remote host: Connection timed out

    Commands executed on the PC (192.168.1.101):

        adp@adp-desktop:~$ ps afx | grep http
        12672 pts/4    S+     0:00  |   \_ grep --color=auto http

        adp@adp-desktop:~$ ping 192.168.1.102
        PING 192.168.1.102 (192.168.1.102) 56(84) bytes of data.
        64 bytes from 192.168.1.102: icmp_seq=1 ttl=64 time=32.1 ms
        64 bytes from 192.168.1.102: icmp_seq=2 ttl=64 time=54.8 ms
        64 bytes from 192.168.1.102: icmp_seq=3 ttl=64 time=77.0 ms
        64 bytes from 192.168.1.102: icmp_seq=4 ttl=64 time=100 ms
        ^C
        --- 192.168.1.102 ping statistics ---
        4 packets transmitted, 4 received, 0% packet loss, time 3003ms
        rtt min/avg/max/mdev = 32.193/66.193/100.717/25.463 ms

        adp@adp-desktop:~$ telnet 192.168.1.102 80
        Trying 192.168.1.102...
        telnet: Unable to connect to remote host: Connection refused

        adp@adp-desktop:~$ telnet 192.168.1.102
        Trying 192.168.1.102...
        telnet: Unable to connect to remote host: Connection refused

    What should I do?
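
    Two quick checks that would narrow this down, offered as a hedged suggestion using standard Ubuntu tooling (note that "ps afx | grep http" above would not match Ubuntu's Apache processes, which are named apache2): confirm Apache is actually listening on the external interface, and confirm no firewall rule is dropping port 80.

        # Is anything listening on port 80, and on which address?
        sudo netstat -tlnp | grep ':80'

        # Apache processes are named apache2 on Ubuntu, not "http"
        ps afx | grep apache2

        # Any iptables rules that could drop the traffic?
        sudo iptables -L -n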

    Read the article

  • NHibernate FetchMode.Lazy

    - by RyanFetz
    I have an object with a property that in turn has collections which I would like not to load in a couple of situations. 98% of the time I want those collections fetched, but in one instance I do not. Here is the code I have. Why does it not set the fetch mode on the property's collections?

        [DataContract(Name = "ThemingJob", Namespace = "")]
        [Serializable]
        public class ThemingJob : ServiceJob
        {
            [DataMember]
            public virtual Query Query { get; set; }

            [DataMember]
            public string Results { get; set; }
        }

        [DataContract(Name = "Query", Namespace = "")]
        [Serializable]
        public class Query : LookupEntity<Query>, DAC.US.Search.Models.IQueryEntity
        {
            [DataMember]
            public string QueryResult { get; set; }

            private IList<Asset> _Assets = new List<Asset>();
            [IgnoreDataMember]
            [System.Xml.Serialization.XmlIgnore]
            public IList<Asset> Assets
            {
                get { return _Assets; }
                set { _Assets = value; }
            }

            private IList<Theme> _Themes = new List<Theme>();
            [IgnoreDataMember]
            [System.Xml.Serialization.XmlIgnore]
            public IList<Theme> Themes
            {
                get { return _Themes; }
                set { _Themes = value; }
            }

            private IList<Affinity> _Affinity = new List<Affinity>();
            [IgnoreDataMember]
            [System.Xml.Serialization.XmlIgnore]
            public IList<Affinity> Affinity
            {
                get { return _Affinity; }
                set { _Affinity = value; }
            }

            private IList<Word> _Words = new List<Word>();
            [IgnoreDataMember]
            [System.Xml.Serialization.XmlIgnore]
            public IList<Word> Words
            {
                get { return _Words; }
                set { _Words = value; }
            }
        }

        using (global::NHibernate.ISession session = NHibernateApplication.GetCurrentSession())
        {
            global::NHibernate.ICriteria criteria = session.CreateCriteria(typeof(ThemingJob));
            global::NHibernate.ICriteria countCriteria = session.CreateCriteria(typeof(ThemingJob));

            criteria.AddOrder(global::NHibernate.Criterion.Order.Desc("Id"));

            var qc = criteria.CreateCriteria("Query");
            qc.SetFetchMode("Assets", global::NHibernate.FetchMode.Lazy);
            qc.SetFetchMode("Themes", global::NHibernate.FetchMode.Lazy);
            qc.SetFetchMode("Affinity", global::NHibernate.FetchMode.Lazy);
            qc.SetFetchMode("Words", global::NHibernate.FetchMode.Lazy);

            pageIndex = Convert.ToInt32(pageIndex) - 1; // convert to 0-based paging index
            criteria.SetMaxResults(pageSize);
            criteria.SetFirstResult(pageIndex * pageSize);

            countCriteria.SetProjection(global::NHibernate.Criterion.Projections.RowCount());
            int totalRecords = (int)countCriteria.List()[0];

            return criteria.List<ThemingJob>().ToPagedList<ThemingJob>(pageIndex, pageSize, totalRecords);
        }
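
    One variation worth trying, offered as an unverified sketch: set the fetch modes on the root criteria using the full association path, instead of on a sub-criteria created with CreateCriteria (which also adds a join and can change fetching behaviour):

        criteria.SetFetchMode("Query", global::NHibernate.FetchMode.Join);
        criteria.SetFetchMode("Query.Assets", global::NHibernate.FetchMode.Lazy);
        criteria.SetFetchMode("Query.Themes", global::NHibernate.FetchMode.Lazy);
        criteria.SetFetchMode("Query.Affinity", global::NHibernate.FetchMode.Lazy);
        criteria.SetFetchMode("Query.Words", global::NHibernate.FetchMode.Lazy);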

    Read the article

  • How to find relation between change in latitudes at centre of map and top/bottom

    - by Imran
    Hi, I'm having a little trouble finding the relation between movement at the centre and at the edge of the map. I'm implementing panning for a world map; its extent is 180,89 : -180,-89, and the map pans by adding the change (dX, dY) to its extents, not to its centre. Now a situation has arisen where I have to move the map to a specific centre. Calculating the change in longitude is easy and simple, but it's the change in latitude that has caused a problem: the change at the centre Y of the map is more than the change at the edge of the map. Simply put, if I have to move the map centre from 0 long, 0 lat to 73 long, 33 lat, then for dX I simply get 73; but for dY it apparently looks like 33, yet if I add 33 to the top of the map, which is 89, it becomes 122, which is incorrect, since latitudes are between 90 and -90.

    This looks like a case of projecting a circle onto a 2D plane, where the edge of the circle experiences less change than the centre because of the angle. Is there a relation between these two factors? I tried converting the difference between the origin Y and the destination Y into radians and then adding it to the top and bottom of the map, but it didn't really work for me.

    Please note that the map is projected on a virtual canvas whose width starts at 256 and increases by 256 * 2^z, where z = 0 is the default zoom at which the whole world is visible. Code:

        // moves map to the new centre
        public void moveMapTo(double destinationLongitude, double destinationLattitude)
        {
            double dXLong = destinationLongitude - centreLongitude;

            double atanhsinO = atanh(Math.sin(destinationLattitude * Math.PI / 180.00));
            double atanhsinD = atanh(Math.sin(centreLatitude * Math.PI / 180.00));
            double atanhCentre = (atanhsinD + atanhsinO) / 2;

            double latitudeSpan = destinationLattitude - centreLatitude;
            double radianOfCentreLatitude = Math.atan(Math.sinh(atanhCentre));
            double dXLat = latitudeSpan / Math.cos(radianOfCentreLatitude);
            dXLat *= getLattitudeSpan() * (Math.PI / 180); // <--- HERE IS THE PROBLEM

            System.out.println("dxLong:" + dXLong + "_dxLat:" + dXLat);

            mapLeft += dXLong;
            mapRight += dXLong;
            mapTop += dXLat;
            mapBottom += dXLat;
        }

        // latitude span function
        private double getLattitudeSpan()
        {
            double latitudeSpan = mapTop - mapBottom;
            latitudeSpan = latitudeSpan / Math.cos(radianOfCentreLatitude);
            return Math.abs(latitudeSpan);
        }
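
    Since the canvas sizing described (256 * 2^z) matches a Mercator-style web map, one hedged approach is to do the vertical shift in Mercator y-space, where movement is linear, and convert back to latitude afterwards. A rough Java sketch (the field names mapTop/mapBottom follow the code above; everything else is illustrative):

        // Mercator forward/inverse: y = atanh(sin(lat)), lat = asin(tanh(y))
        double atanh(double x) { return 0.5 * Math.log((1 + x) / (1 - x)); }

        double latToMercY(double latDeg) {
            return atanh(Math.sin(Math.toRadians(latDeg)));
        }

        double mercYToLat(double y) {
            return Math.toDegrees(Math.asin(Math.tanh(y)));
        }

        // Shift top/bottom by the same Mercator-space delta as the centre move,
        // so the extents can never leave the [-90, 90] latitude range.
        void recentreLatitude(double fromLat, double toLat) {
            double dy = latToMercY(toLat) - latToMercY(fromLat);
            mapTop = mercYToLat(latToMercY(mapTop) + dy);
            mapBottom = mercYToLat(latToMercY(mapBottom) + dy);
        }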

    Read the article

  • Binding functions of derived class with luabind

    - by Anamon
    I am currently developing a plugin-based system in C++ which provides a Lua scripting interface, for which I chose to use luabind. I'm using Lua 5 and luabind 0.9, both statically linked and compiled with MSVC++ 8. I am now having trouble binding functions with luabind when they are defined in a derived class, but not in its parent class.

    More specifically, I have an abstract base class called IPlugin, from which all plugin classes inherit. When the plugin manager initialises, it registers that class and its functions like this:

        luabind::open(L);
        luabind::module(L)
        [
            luabind::class_<IPlugin>("IPlugin")
                .def("start", (void (IPlugin::*)()) &IPlugin::start)
        ];

    As it is only known at runtime which plugin classes are available, I had to solve loading plugins in a kind of roundabout way. The plugin manager exposes a factory function to Lua, which takes the name of a plugin class and a desired object name. The factory then creates the object, registers the plugin's class as inheriting from the IPlugin base class, and immediately calls a function on the created object that registers itself as a global with the Lua state, like this:

        void PluginExample::registerLuaObject(lua_State *L, string a_name)
        {
            luabind::globals(L)[a_name] = (PluginExample*)this;
        }

    I initially did this because I had problems with Lua determining the most derived class of the object: if I register it from the StreamManager it is only known as a subtype of IPlugin and not as the specific subtype. I'm not sure anymore if this is even necessary, though, but it works, and the created object is subsequently accessible from Lua under a_name.

    The problem I have is that functions defined in the derived class, which were not declared at all in the parent class, cannot be used. Virtual functions defined in the base class, such as start above, work fine, and calling them from Lua on the new object runs the respective redefined code from the PluginExample class. But if I add a new function to PluginExample -- here, for example, a function taking no arguments and returning void -- and register it like this:

        luabind::module(L)
        [
            luabind::class_<PluginExample>("PluginExample")
                .def(luabind::constructor<>())
                .def("func", &PluginExample::func)
        ];

    calling func on the new object yields the following Lua runtime error:

        No matching overload found, candidates: void func(PluginExample&)

    I am correctly using the ':' syntax, so the 'self' argument is not needed, and it seems Lua suddenly cannot determine the derived type of the object anymore. I am sure I am doing something wrong, probably having to do with the two-step binding required by my system architecture, but I can't figure out where. I'd much appreciate some help =)
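
    For what it's worth, luabind can be told about the inheritance relationship explicitly when the derived class is registered, which is usually what lets it resolve derived members on an object it otherwise only knows as the base type. A hedged sketch:

        luabind::module(L)
        [
            // Declare IPlugin as the base so luabind can convert between the
            // registered base type and the derived one.
            luabind::class_<PluginExample, IPlugin>("PluginExample")
                .def(luabind::constructor<>())
                .def("func", &PluginExample::func)
        ];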

    Read the article

  • How do I destruct data associated with an object after the object no longer exists?

    - by Phineas
    I'm creating a class (say, C) that associates data (say, D) with an object (say, O). When O is destructed, O will notify C that it soon will no longer exist :( ... Later, when C feels it is the right time, C will let go of what belonged to O, namely D. If D can be any type of object, what's the best way for C to be able to execute "delete D;"? And what if D is an array of objects?

    My solution is to have D derive from a base class that C has knowledge of. When the time comes, C calls delete on a pointer to the base class. I've also considered storing void pointers and calling delete, but I found out that's undefined behavior and doesn't call D's destructor. I considered that templates could be a novel solution, but I couldn't work that idea out.

    Here's what I have so far for C, minus some details:

        // This class is C in the above description. There may be many instances of C.
        class Context
        {
        public:
            // D will inherit from this class
            class Data
            {
            public:
                virtual ~Data() {}
            };

            Context();
            ~Context();

            // Associates an owner (O) with its data (D)
            void add(const void* owner, Data* data);

            // O calls this when he knows it's the end (O's destructor).
            // All instances of C are now aware that O is gone and it's time to
            // get rid of all associated instances of D.
            static void purge(const void* owner);

            // This is called periodically in the application. It checks whether
            // O has called purge, and calls "delete D;"
            void refresh();

            // Side note: sometimes O needs access to D
            Data* get(const void* owner);

        private:
            // Used for mapping owners (O) to data (D)
            std::map<const void*, Data*> _data;
        };

        // Here's an example of O
        class Mesh
        {
        public:
            ~Mesh() { Context::purge(this); }

            void init(Context& c) const
            {
                Data* data = new Data;
                // GL initialization here
                c.add(this, data);
            }

            void render(Context& c) const
            {
                Data* data = c.get(this);
            }

        private:
            // And here's an example of D
            struct Data : public Context::Data
            {
                ~Data()
                {
                    glDeleteBuffers(1, &vbo);
                    glDeleteTextures(1, &texture);
                }

                GLint vbo;
                GLint texture;
            };
        };

    P.S. If you're familiar with computer graphics and VR, I'm creating a class that separates an object's per-context data (e.g. OpenGL VBO IDs) from its per-application data (e.g. an array of vertices) and frees the per-context data at the appropriate time (when the matching rendering context is current).
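
    As an alternative to the common base class, the deletion itself can be type-erased when the data is registered, which also handles arrays (delete[]) without D inheriting from anything. A minimal C++ sketch of the idea, not the original class (it uses C++11 lambdas for brevity):

        #include <functional>
        #include <map>

        class Context
        {
        public:
            // Capture the concrete type here, so the right destructor (and
            // delete vs. delete[]) is baked into the stored callback.
            template <typename D>
            void add(const void* owner, D* data)
            {
                _deleters[owner] = [data] { delete data; };
            }

            template <typename D>
            void addArray(const void* owner, D* data)
            {
                _deleters[owner] = [data] { delete[] data; };
            }

            void purge(const void* owner)
            {
                auto it = _deleters.find(owner);
                if (it != _deleters.end()) {
                    it->second();      // invoke the type-erased deleter
                    _deleters.erase(it);
                }
            }

        private:
            std::map<const void*, std::function<void()>> _deleters;
        };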

    Read the article

  • gcc/g++: error when compiling large file

    - by Alexander
    Hi, I have an auto-generated C++ source file, around 40 MB in size. It largely consists of push_back commands for some vectors, and the string constants to be pushed. When I try to compile this file, g++ exits and says that it couldn't reserve enough virtual memory (around 3 GB). Googling the problem, I found that using the command line switches

        --param ggc-min-expand=0 --param ggc-min-heapsize=4096

    may solve the problem. They, however, only seem to work when optimization is turned on.

    1) Is this really the solution that I am looking for?

    2) Or is there a faster, better way to do this (compiling takes ages with these options activated)?

    Best wishes, Alexander

    Update: Thanks for all the good ideas. I tried most of them. Using an array instead of several push_back() operations reduced memory usage, but as the file I was trying to compile was so big, it still crashed, only later. In a way, this behaviour is really interesting, as there is not much to optimize in such a setting -- what does GCC do behind the scenes that costs so much memory? (I compiled with all optimizations deactivated as well and got the same results.)

    The solution I switched to now is reading the original data in from a binary object file that I created from the original file using objcopy. This is what I originally did not want to do, because creating the data structures in a higher-level language (in this case Perl) was more convenient than having to do it in C++. However, getting this running under Win32 was more complicated than expected. objcopy seems to generate files in the ELF format, and it seems some of the problems I had disappeared when I manually set the output format to pe-i386. The symbols in the object file are by standard named after the file name, e.g. converting the file inbuilt_training_data.bin would result in these two symbols: binary_inbuilt_training_data_bin_start and binary_inbuilt_training_data_bin_end. I found some tutorials on the web claiming that these symbols should be declared as extern char _binary_inbuilt_training_data_bin_start;, but this does not seem to be right -- only extern char binary_inbuilt_training_data_bin_start; worked for me.
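
    To illustrate the "array instead of push_back" idea mentioned in the update: the per-statement overhead disappears when the generated file emits one static table and a single bulk insert. A hedged sketch of what the generator could emit instead (names illustrative):

        #include <string>
        #include <vector>

        // One static table instead of 40 MB of individual push_back statements.
        static const char* const kStrings[] = {
            "first constant",
            "second constant",
            // ... generated entries ...
        };

        void fill(std::vector<std::string>& v)
        {
            // Single bulk insert; the compiler only has to build one initializer.
            v.assign(kStrings, kStrings + sizeof(kStrings) / sizeof(kStrings[0]));
        }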

    Read the article

  • compiling a program to run in DOS mode

    - by dygi
    I wrote a simple program to run in DOS mode. Everything works under the emulated console in Win XP / Vista / Seven, but not in DOS. The error says: "This program cannot be run in DOS mode." I wonder whether this is a problem with compiler flags or something bigger. For programming I use Code::Blocks v8.02, with these compilation settings in Project \ Build options \ Compiler settings:

        -Wall -W -pedantic -pedantic-errors

    I've tried clean DOS mode, booting from CD, and also setting up DOS in a virtual machine. The same error appears. Should I turn on some more compiler flags? Some specific 386/486 optimizations?

    UPDATE: OK, I've downloaded, installed and configured DJGPP, and even resolved some problems with libs and includes. I still have two questions.

    1) I can't compile code that calls _strdate and _strtime. I've double-checked the includes -- MSDN says it needs time.h -- but the error still says: _strdate was not declared in this scope. I even tried adding std::_strdate, but then I have 4 errors instead of 2, saying the same thing.

    2) The second problem is about gotoxy. The code looks like this:

        #include <windows.h>

        void gotoxy(int x, int y)
        {
            COORD position;
            position.X = x;
            position.Y = y;
            SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE), position);
        }

    The error says there is no windows.h. When I put it in place, there are many more errors about things missing from windows.h. I SUPPOSE it won't work because this function is strictly for Windows, right? Is there any way to write a similar gotoxy for DOS?

    UPDATE 2: 1) Solved, using time() instead of _strdate() and _strtime(). Here's the code:

        time_t rawtime;
        struct tm* timeinfo;
        char buffer[80];

        time(&rawtime);
        timeinfo = localtime(&rawtime);
        strftime(buffer, 80, "%Y.%m.%d %H:%M:%S", timeinfo);
        string myTime(buffer);

    It now compiles under DJGPP.

    UPDATE 3: Still need to solve the gotoxy code -- I replaced it with some other code that compiles (under DJGPP). Thank you all for the help. I've just learnt some new things about compiling (flags, old IDEs like DJGPP, OpenWatcom) and refreshed memories setting up DOS to work :--)
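
    On the gotoxy question: DJGPP ships a Borland-compatible conio.h with a DOS gotoxy of its own, so under that compiler the Windows console call can, as far as I recall, be replaced by something like this sketch:

        #include <conio.h>   /* DJGPP's conio provides gotoxy/cprintf for DOS */

        int main(void)
        {
            clrscr();            /* clear the text screen */
            gotoxy(10, 5);       /* 1-based column, row   */
            cprintf("Hello from DOS!");
            return 0;
        }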

    Read the article

  • jQuery AJAX Web service works only locally

    - by Greg
    Hi, I have a simple ASP.NET web service:

        [WebService(Namespace = "http://tempuri.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        [System.Web.Script.Services.ScriptService]
        public class Service : System.Web.Services.WebService
        {
            public Service() { }

            [WebMethod]
            public string SetName(string name)
            {
                return "hello my dear friend " + name;
            }
        }

    For this web service I created a virtual directory, so it can be reached at http://localhost:89/Service.asmx. I try to call it via a simple HTML page with jQuery, using:

        function CallWS() {
            $.ajax({
                type: "POST",
                data: "{'name':'Pumba'}",
                dataType: "json",
                url: "http://localhost:89/Service.asmx/SetName",
                contentType: "application/json; charset=utf-8",
                success: function (msg) {
                    $('#DIVid').html(msg.d);
                },
                error: function (e) {
                    $('#DIVid').html("Error");
                }
            });
        }

    The most interesting fact: if I create the HTML page in the same project as my web service and change the url to Service.asmx/SetName, everything works excellently. But if I try to call this web service remotely, the success function runs but msg is null. After that I tried to call the service via SOAP. It is the same: locally it works excellently, but remotely not at all.

        var ServiceUrl = 'http://localhost:89/Service.asmx?op=SetName';

        function beginSetName(Name) {
            var soapMessage = '<?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <soap:Body> <SetName xmlns="http://tempuri.org/"> <name>' + Name + '</name> </SetName> </soap:Body> </soap:Envelope>';

            $.ajax({
                url: ServiceUrl,
                type: "POST",
                dataType: "xml",
                data: soapMessage,
                complete: endSetName,
                contentType: "text/xml; charset=\"utf-8\""
            });
            return false;
        }

        function endSetName(xmlHttpRequest, status) {
            $(xmlHttpRequest.responseXML)
                .find('SetNameResult')
                .each(function () {
                    var name = $(this).text();
                    alert(name);
                });
        }

    In this case status has the value "parseerror". Could you please help me resolve this problem? What should I do to call a web service remotely by URL via jQuery? Thank you in advance, Greg
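
    One hedged observation: calling an absolute http://localhost:89/... URL from a page served from a different origin runs into the browser's same-origin policy, which in browsers of that era silently yields a null or failed XMLHttpRequest -- much like the symptoms above. A sketch of the usual workaround of keeping the request same-origin with a relative URL (i.e. hosting the page and service under the same host and port):

        // Same-origin call: the page and the service share a host/port.
        $.ajax({
            type: "POST",
            url: "/Service.asmx/SetName",   // relative, not http://localhost:89/...
            data: "{'name':'Pumba'}",
            dataType: "json",
            contentType: "application/json; charset=utf-8",
            success: function (msg) { $('#DIVid').html(msg.d); }
        });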

    Read the article

  • How can I limit the cache used by copying so there is still memory available for other cache?

    - by Peter
    Basic situation: I am copying some NTFS disks in openSUSE. Each one is 2 TB. When I do this, the system runs slowly.

    My guesses: I believe it is likely due to caching. Linux decides to discard useful cache (e.g. KDE 4 bloat, virtual machine disks, LibreOffice binaries, Thunderbird binaries, etc.) and instead fills all available memory (24 GB total) with data from the disks being copied, which will be read only once, then written and never used again. So any time I use these apps (or KDE 4), the disk needs to be read again, and rereading the bloat off the disk makes things freeze and hiccup. Because the cache is gone and these bloated applications need lots of cache, the system becomes horribly slow.

    Since it is USB, the disk and disk controller are not the bottleneck, so using ionice does not make it faster. I believe it is the cache rather than just the motherboard being too slow, because if I stop all the copying, it still runs choppily for a while until it has re-cached everything; and if I restart the copying, it takes a minute before it is choppy again. Also, I can limit the copy to around 40 MB/s and it runs faster again -- not because the right things are cached, but because the motherboard buses have lots of spare bandwidth for the system disks. I can fully accept a performance loss from my motherboard's I/O capability being completely consumed (it is 100% used, meaning 0% wasted power, which makes me happy), but I can't accept that the caching mechanism performs so terribly in this specific use case.

        # free
                     total       used       free     shared    buffers     cached
        Mem:      24731556   24531876     199680          0    8834056   12998916
        -/+ buffers/cache:    2698904   22032652
        Swap:      4194300      24764    4169536

    I also tried the same thing on Ubuntu, which causes a total system hang instead. ;)

    To clarify, I am not asking how to leave memory free for the "system", but for the "cache". I know that cache memory is automatically given back to the system when needed; my problem is that it is not reserved for caching specific things.

    Question: Is there some way to tell these copy operations to limit memory usage so some important things remain cached, and so any slowdowns are a result of normal disk usage and not of rereading the same commonly used files? For example, is there a setting for the maximum memory per process/user/filesystem allowed to be used as cache/buffers?
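
    One commonly suggested approach, offered as a hedged sketch: bypass the page cache for the bulk copy itself with direct I/O, so the one-pass data never evicts the useful cache. (dd's direct flags need alignment-friendly block sizes, and the device names here are illustrative.)

        # Copy disk-to-disk without going through the page cache.
        dd if=/dev/sdb of=/dev/sdc bs=1M iflag=direct oflag=direct

        # For file trees, a helper such as nocache (if installed) wraps a command
        # and drops its cached pages via posix_fadvise(DONTNEED) as it goes:
        nocache cp -a /mnt/src/. /mnt/dst/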

    Read the article

  • How can a 1Gb Java heap on a 64bit machine use 3Gb of VIRT space?

    - by Graeme Moss
    I run the same process on a 32-bit machine and on a 64-bit machine with the same VM memory settings (-Xms1024m -Xmx1024m) and similar VM versions (1.6.0_05 vs 1.6.0_16). However, the virtual space used on the 64-bit machine (as shown in top under "VIRT") is almost three times as big as on 32-bit! I know 64-bit VMs use a little more memory for the larger references, but how can it be three times as big? Am I reading VIRT in top incorrectly?

    Full data shown below: top and then the result of jmap -heap, first for 64-bit, then for 32-bit. Note that VIRT for 64-bit is 3319m while for 32-bit it is 1220m.

    64-bit:

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        22534 agent     20   0 3319m 163m  14m S  4.7  2.0   0:04.28 java

        $ jmap -heap 22534
        Attaching to process ID 22534, please wait...
        Debugger attached successfully.
        Server compiler detected.
        JVM version is 10.0-b19

        using thread-local object allocation.
        Parallel GC with 4 thread(s)

        Heap Configuration:
           MinHeapFreeRatio = 40
           MaxHeapFreeRatio = 70
           MaxHeapSize      = 1073741824 (1024.0MB)
           NewSize          = 2686976 (2.5625MB)
           MaxNewSize       = -65536 (-0.0625MB)
           OldSize          = 5439488 (5.1875MB)
           NewRatio         = 2
           SurvivorRatio    = 8
           PermSize         = 21757952 (20.75MB)
           MaxPermSize      = 88080384 (84.0MB)

        Heap Usage:
        PS Young Generation
        Eden Space:
           capacity = 268500992 (256.0625MB)
           used     = 247066968 (235.62142181396484MB)
           free     = 21434024 (20.441078186035156MB)
           92.01715277089181% used
        From Space:
           capacity = 44695552 (42.625MB)
           used     = 0 (0.0MB)
           free     = 44695552 (42.625MB)
           0.0% used
        To Space:
           capacity = 44695552 (42.625MB)
           used     = 0 (0.0MB)
           free     = 44695552 (42.625MB)
           0.0% used
        PS Old Generation
           capacity = 715849728 (682.6875MB)
           used     = 0 (0.0MB)
           free     = 715849728 (682.6875MB)
           0.0% used
        PS Perm Generation
           capacity = 21757952 (20.75MB)
           used     = 16153928 (15.405586242675781MB)
           free     = 5604024 (5.344413757324219MB)
           74.24378912132907% used

    32-bit:

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        30168 agent     20   0 1220m 175m  12m S  0.0  2.2   0:13.43 java

        $ jmap -heap 30168
        Attaching to process ID 30168, please wait...
        Debugger attached successfully.
        Server compiler detected.
        JVM version is 14.2-b01

        using thread-local object allocation.
        Parallel GC with 8 thread(s)

        Heap Configuration:
           MinHeapFreeRatio = 40
           MaxHeapFreeRatio = 70
           MaxHeapSize      = 1073741824 (1024.0MB)
           NewSize          = 1048576 (1.0MB)
           MaxNewSize       = 4294901760 (4095.9375MB)
           OldSize          = 4194304 (4.0MB)
           NewRatio         = 8
           SurvivorRatio    = 8
           PermSize         = 16777216 (16.0MB)
           MaxPermSize      = 67108864 (64.0MB)

        Heap Usage:
        PS Young Generation
        Eden Space:
           capacity = 89522176 (85.375MB)
           used     = 80626352 (76.89128112792969MB)
           free     = 8895824 (8.483718872070312MB)
           90.0629940005033% used
        From Space:
           capacity = 14876672 (14.1875MB)
           used     = 14876216 (14.187065124511719MB)
           free     = 456 (4.3487548828125E-4MB)
           99.99693479832048% used
        To Space:
           capacity = 14876672 (14.1875MB)
           used     = 0 (0.0MB)
           free     = 14876672 (14.1875MB)
           0.0% used
        PS Old Generation
           capacity = 954466304 (910.25MB)
           used     = 10598496 (10.107513427734375MB)
           free     = 943867808 (900.1424865722656MB)
           1.1104107034039412% used
        PS Perm Generation
           capacity = 16777216 (16.0MB)
           used     = 11366448 (10.839889526367188MB)
           free     = 5410768 (5.1601104736328125MB)
           67.74930953979492% used
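
    As a hedged diagnostic step: pmap breaks VIRT down by individual mapping, which usually shows where the extra 64-bit reservation lives (thread stacks, the GC's reserved-but-uncommitted heap, code caches, mapped libraries):

        # Per-mapping breakdown of virtual memory for the Java process
        pmap -x 22534 | sort -k2 -n | tail -20

        # Thread count matters too: each thread reserves a stack (often 1 MB+)
        ps -o nlwp= -p 22534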

    Read the article

  • HelloWebView Sample: now using SDK 3 and getting killed

    - by Tim
    Hey folks. Well, I was trying to get the HelloWebView example working with SDK 7 with no success (see the "HelloWebView Sample: java.lang.SecurityException: Permission Denial" thread), so I decided, just out of curiosity, to back off to SDK 3 to see if I could learn anything. I have been able to get all the "Layout" samples to work, and decided to try something a little harder. Unfortunately, I still cannot get the simple HelloWebView app to run. I no longer get a Permission Denial, but now the app is getting killed. Killed usually implies that there are not enough resources (memory etc.) for an application to run... Any thoughts? Are there any other log files I can look at, either on my computer or on the emulator? The main.xml, manifest, and console output are below. Let me know if you need more information. Thanks, Tim

    main.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:orientation="vertical"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent" >
            <WebView android:id="@+id/webview"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"/>
        </LinearLayout>

    manifest file:

        <uses-permission android:name="android.permission.INTERNET" />
        <uses-sdk android:minSdkVersion="3" />
        <application android:icon="@drawable/icon" android:label="@string/app_name">
            <activity android:name=".HelloWebView3"
                      android:label="@string/app_name">
                <intent-filter>
                    <action android:name="android.intent.action.MAIN" />
                    <category android:name="android.intent.category.LAUNCHER" />
                </intent-filter>
            </activity>
            <activity android:name=".HelloWebView3"
                      android:label="@string/app_name"
                      android:theme="@android:style/Theme.NoTitleBar">
            </activity>
        </application>

    Console output:

        [2010-06-05 08:43:37 - HelloWebView3] ------------------------------
        [2010-06-05 08:43:37 - HelloWebView3] Android Launch!
        [2010-06-05 08:43:37 - HelloWebView3] adb is running normally.
        [2010-06-05 08:43:37 - HelloWebView3] Performing com.example.hellowebview3.HelloWebView3 activity launch
        [2010-06-05 08:43:37 - HelloWebView3] Automatic Target Mode: launching new emulator with compatible AVD 'Android1.5'
        [2010-06-05 08:43:37 - HelloWebView3] Launching a new emulator with Virtual Device 'Android1.5'
        [2010-06-05 08:43:42 - HelloWebView3] New emulator found: emulator-5554
        [2010-06-05 08:43:42 - HelloWebView3] Waiting for HOME ('android.process.acore') to be launched...
        [2010-06-05 08:45:04 - HelloWebView3] HOME is up on device 'emulator-5554'
        [2010-06-05 08:45:04 - HelloWebView3] Uploading HelloWebView3.apk onto device 'emulator-5554'
        [2010-06-05 08:45:04 - HelloWebView3] Installing HelloWebView3.apk...
        [2010-06-05 08:45:19 - HelloWebView3] Success!
        [2010-06-05 08:45:19 - HelloWebView3] Starting activity com.example.hellowebview3.HelloWebView3 on device
        [2010-06-05 08:45:23 - HelloWebView3] ActivityManager: Starting: Intent { action=android.intent.action.MAIN categories={android.intent.category.LAUNCHER} comp={com.example.hellowebview3/com.example.hellowebview3.HelloWebView3} }
        [2010-06-05 08:45:23 - HelloWebView3] ActivityManager: [1] Killed am start -n com....
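
    One thing that stands out in the manifest above is that the same activity, .HelloWebView3, is declared twice -- once with the launcher intent-filter and once with the NoTitleBar theme. A hedged guess is that a single merged declaration was intended, roughly:

        <activity android:name=".HelloWebView3"
                  android:label="@string/app_name"
                  android:theme="@android:style/Theme.NoTitleBar">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>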

    Read the article

  • Criteria query returns hydrated object in SQLite but not SqlServer

    - by Berryl
    I have a method that returns a resource fully hydrated when the db is SQLite, but when the identical code is used against SqlServer the object is not fully hydrated. I'll explain that with the code after some brief background.

    In my domain, various otherwise unrelated things like an Employee or a Machine can be used as a Resource that can be allocated to. In the object model an example of this would be:

        /// <summary>Wraps a <see cref="StaffMember"/> in a <see cref="ResourceBase"/>.</summary>
        public class StaffMemberResource : ResourceBase
        {
            public virtual StaffMember StaffMember { get; private set; }

            public StaffMemberResource(StaffMember staffMember)
            {
                Check.RequireNotNull<StaffMember>(staffMember);
                base.BusinessId = staffMember.Number.ToString();
                base.Name = staffMember.Name.ToString();
                base.OrganizationName = staffMember.Department.Name;
                StaffMember = staffMember;
            }

            [UsedImplicitly]
            protected StaffMemberResource() { }
        }

    And in the db tables, there is table-per-class inheritance where ResourceBase has a discriminator and the id of the actual resource (i.e., the StaffMember):

        StaffMember - 1 ---- M - ResourceBase - 1 ----- M - Allocation

    The code:

        public override StaffMemberResource BuildResource(IActivityService activityService)
        {
            var sessionFactory = _GetSessionFactory();
            var session = sessionFactory.GetCurrentSession();

            StaffMemberResource result;
            using (var tx = session.BeginTransaction())
            {
                var propertyName = ExprHelper.GetPropertyName<StaffMember>(x => x.Number);
                var staff = session.CreateCriteria<StaffMember>()
                    .Add(Restrictions.Eq(propertyName, new EmployeeNumber(_testData.Resource_1.BusinessId)))
                    .UniqueResult<StaffMember>();

                if (staff == null)
                {
                    // ... build up a staff member
                    result = new StaffMemberResource(staff);
                }
                else
                {
                    //////////
                    var property = ExprHelper.GetPropertyName<StaffMemberResource>(x => x.StaffMember);
                    result = session.CreateCriteria<StaffMemberResource>()
                        .Add(Restrictions.Eq(property, staff))
                        .UniqueResult<StaffMemberResource>();
                    ///////////
                }
                tx.Commit();
            }
            return result;
        }

    It's that second criteria query that works "properly" with SQLite but not with SqlServer. By properly I mean that the employee number is translated into a ResourceBase.BusinessId, the Name is flattened out into a ResourceBase.Name, etc.

    Does anyone know why this might be? Cheers, Berryl

    Read the article

  • How can I configure a Factory with the possible providers?

    - by Jonathas Costa
    I have three assemblies: "Framework.DataAccess", "Framework.DataAccess.NHibernateProvider" and "Company.DataAccess". Inside the assembly "Framework.DataAccess", I have my factory (with the wrong implementation of discovery):

        public class DaoFactory
        {
            private static readonly object locker = new object();
            private static IWindsorContainer _daoContainer;

            protected static IWindsorContainer DaoContainer
            {
                get
                {
                    if (_daoContainer == null)
                    {
                        lock (locker)
                        {
                            if (_daoContainer != null)
                                return _daoContainer;

                            _daoContainer = new WindsorContainer(new XmlInterpreter());

                            // THIS IS WRONG! THIS ASSEMBLY CANNOT KNOW ABOUT SPECIALIZATIONS!
                            _daoContainer.Register(
                                AllTypes.FromAssemblyNamed("Company.DataAccess")
                                    .BasedOn(typeof(IReadDao<>)).WithService.FromInterface(),
                                AllTypes.FromAssemblyNamed("Framework.DataAccess.NHibernateProvider")
                                    .BasedOn(typeof(IReadDao<>)).WithService.Base());
                        }
                    }
                    return _daoContainer;
                }
            }

            public static T Create<T>() where T : IDao
            {
                return DaoContainer.Resolve<T>();
            }
        }

    This assembly also defines the base interface for data access, IReadDao:

        public interface IReadDao<T>
        {
            IEnumerable<T> GetAll();
        }

    I want to keep this assembly generic and with no references; this is my base data access assembly. Then I have the NHibernate provider's assembly, which implements the above IReadDao using NHibernate's approach. This assembly references the "Framework.DataAccess" assembly.

        public class NHibernateDao<T> : IReadDao<T>
        {
            public NHibernateDao() { }

            public virtual IEnumerable<T> GetAll()
            {
                throw new NotImplementedException();
            }
        }

    Lastly, I have the "Company.DataAccess" assembly, which can override the default implementation of the NHibernate provider and references both previously seen assemblies.

        public interface IProductDao : IReadDao<Product>
        {
            Product GetByName(string name);
        }

        public class ProductDao : NHibernateDao<Product>, IProductDao
        {
            public override IEnumerable<Product> GetAll()
            {
                throw new NotImplementedException("new one!");
            }

            public Product GetByName(string name)
            {
                throw new NotImplementedException();
            }
        }

    I want to be able to write...

        IReadDao<Product> dao = DaoFactory.Create<IReadDao<Product>>();

    ... and then get the ProductDao implementation. But I can't hold any references to specific assemblies inside my base data access assembly! My initial idea was to read them from an XML config file. So, my question is: how can I externally configure this factory with the possible providers, so it uses a specific provider as the default implementation and the client's implementation as the override?
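
    One hedged way to keep Framework.DataAccess free of provider references is Windsor's installer pattern: each provider assembly registers its own components, and the factory only reads the assembly names from configuration. A rough sketch (the config key and class names are illustrative):

        // In Framework.DataAccess.NHibernateProvider (and likewise in Company.DataAccess):
        public class ProviderInstaller : IWindsorInstaller
        {
            public void Install(IWindsorContainer container, IConfigurationStore store)
            {
                container.Register(
                    AllTypes.FromThisAssembly()
                        .BasedOn(typeof(IReadDao<>)).WithService.FromInterface());
            }
        }

        // In the factory: assembly names come from config, not hard-coded strings.
        foreach (var assemblyName in ConfigurationManager.AppSettings["daoAssemblies"].Split(';'))
        {
            _daoContainer.Install(FromAssembly.Named(assemblyName));
        }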

    Read the article

  • sendmail and MX records when mail server is not on web host

    - by Jim Nelson
    This is a problem I'm sure is easy to fix, but I've been banging my head on it all day. I'm developing a new web site for a client. The web site resides at (this is an example) website.com. I have a PHP form script to email visitors' requests to requests@website.com.

    When I coded this on a staging server on a different domain, all worked fine. When I moved it to website.com, the mail messages never arrived. The web server is on a virtual host with a major ISP. Here's what I've learned since then: my client's mail server is Microsoft Exchange, on a box physically in their office. Whenever someone in the outside world emails requests@website.com, the mail arrives. But if the web server sends to the same address, it fails every time. This is not a PHP problem: I secure-shelled in to the web server and tested this both with sendmail and the UNIX mail application, and also by emailing various email accounts from the shell. I can email myself, for example -- just nobody at the website.com domain. In short, when I'm logged in to website.com, mail to any address at website.com fails; all other addresses work fine.

    What I've discovered is that those dropped emails are routed to the web server's "catchall" account, where they sit in its inbox. I've done an MX lookup on website.com. The MX record points to mailsec.website.com, and I can telnet to mailsec.website.com on port 25 and see the SMTP server. It appears to me that website.com isn't doing an MX lookup when it's sending mail to addresses at its own domain. My theory is that it recognizes the domain as local, sees that there's no "requests" user account to deliver to, and drops the mail into the catchall account. What I want is to force sendmail to do the MX lookup and send the message on to the Exchange server.

    I'm at wit's end here. I can't figure out how to do this. For that matter, I may be way off base and have misdiagnosed this entirely; Internet mail and MX have always seemed a black art to me, and my ignorance is certainly showing in this question.
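
    For what it's worth, the behaviour described (treating the domain as local and skipping the MX lookup) is usually controlled by sendmail's local-host-names file. A hedged sketch of the usual fix on a sendmail host (paths and the restart command vary by distribution):

        # /etc/mail/local-host-names lists domains sendmail delivers locally.
        # Edit it and delete the line "website.com" so sendmail does a normal
        # MX lookup and relays the mail to the Exchange server instead:
        sudo vi /etc/mail/local-host-names

        # Restart sendmail so it re-reads the file
        sudo /etc/init.d/sendmail restart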

    Read the article

  • Any tips on reducing wxWidgets application code size?

    - by Billy ONeal
    I have written a minimal wxWidgets application:

    stdafx.h:

        #define wxNO_REGEX_LIB
        #define wxNO_XML_LIB
        #define wxNO_NET_LIB
        #define wxNO_EXPAT_LIB
        #define wxNO_JPEG_LIB
        #define wxNO_PNG_LIB
        #define wxNO_TIFF_LIB
        #define wxNO_ZLIB_LIB
        #define wxNO_ADV_LIB
        #define wxNO_HTML_LIB
        #define wxNO_GL_LIB
        #define wxNO_QA_LIB
        #define wxNO_XRC_LIB
        #define wxNO_AUI_LIB
        #define wxNO_PROPGRID_LIB
        #define wxNO_RIBBON_LIB
        #define wxNO_RICHTEXT_LIB
        #define wxNO_MEDIA_LIB
        #define wxNO_STC_LIB
        #include <wx/wxprec.h>

    Minimal.cpp:

        #include "stdafx.h"
        #include <memory>
        #include <wx/wx.h>

        class Minimal : public wxApp
        {
        public:
            virtual bool OnInit();
        };

        IMPLEMENT_APP(Minimal)
        DECLARE_APP(Minimal)

        class MinimalFrame : public wxFrame
        {
            DECLARE_EVENT_TABLE()
        public:
            MinimalFrame(const wxString& title);
            void OnQuit(wxCommandEvent& e);
            void OnAbout(wxCommandEvent& e);
        };

        BEGIN_EVENT_TABLE(MinimalFrame, wxFrame)
            EVT_MENU(wxID_ABOUT, MinimalFrame::OnAbout)
            EVT_MENU(wxID_EXIT, MinimalFrame::OnQuit)
        END_EVENT_TABLE()

        MinimalFrame::MinimalFrame(const wxString& title)
            : wxFrame(0, wxID_ANY, title)
        {
            std::auto_ptr<wxMenu> fileMenu(new wxMenu);
            fileMenu->Append(wxID_EXIT, L"E&xit\tAlt-X", L"Terminate the Minimal Example.");

            std::auto_ptr<wxMenu> helpMenu(new wxMenu);
            helpMenu->Append(wxID_ABOUT, L"&About\tF1", L"Show the about dialog box.");

            std::auto_ptr<wxMenuBar> bar(new wxMenuBar);
            bar->Append(fileMenu.get(), L"&File");
            fileMenu.release();
            bar->Append(helpMenu.get(), L"&Help");
            helpMenu.release();
            SetMenuBar(bar.get());
            bar.release();

            CreateStatusBar(2);
            SetStatusText(L"Welcome to wxWidgets!");
        }

        void MinimalFrame::OnAbout(wxCommandEvent& e)
        {
            wxMessageBox(L"Some text about me!", L"About", wxOK, this);
        }

        void MinimalFrame::OnQuit(wxCommandEvent& e)
        {
            Close();
        }

        bool Minimal::OnInit()
        {
            std::auto_ptr<MinimalFrame> mainFrame(
                new MinimalFrame(L"Minimal wxWidgets Application"));
            mainFrame->Show();
            mainFrame.release();
            return true;
        }

    This minimal program weighs in at 2.4 MB! (Executable compression drops this to half a MB or so, but that's still HUGE!) I must statically link because this application needs to be single-binary-xcopy-deployed, so both the C runtime and wxWidgets itself are set for static linking. Any tips on cutting this down? (I'm using Microsoft Visual Studio 2010.)
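
    A few build-level things commonly tried for static MSVC builds like this one, offered as hedged suggestions (these are standard Visual Studio 2010 switches, not wxWidgets-specific, and the exact savings vary):

        REM Compile: optimize for size, function-level linking, whole-program optimization
        cl /O1 /GL /Gy Minimal.cpp ...

        REM Link: drop unreferenced sections, fold identical ones, link-time codegen
        link /LTCG /OPT:REF /OPT:ICF ...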

    Read the article

  • Spring.NET & Immediacy CMS (or how to inject to server side controls without using PageHandlerFactor

    - by Simon Rice
    Is there any way to inject dependencies into an Immediacy CMS control using Spring.NET, ideally without having to use ContextRegistry when initialising the control?

    Update, with my own answer: The issue here is that Immediacy already has a handler defined in web.config that deals with all aspx pages, so it's not possible to add an entry for Spring.NET's PageHandlerFactory in web.config as per a normal WebForms app. That rules out making the control implement ISupportsWebDependencyInjection. Furthermore, most of Immediacy's generated pages are aspx pages that don't physically exist on the drive. I have changed the title of the question to reflect this.

    What I have done to get dependency injection working is:

    1. Add the usual entries to web.config for Spring.NET as outlined in the documentation, except for adding the entry to the <httpHandlers> section. In this case I've got my object definitions in Spring.config.

    2. Create the following abstract base class that deals with all of the dependency injection work:

    DIControl.cs:

        public abstract class DIControl : ImmediacyControl
        {
            protected virtual string DIName
            {
                get { return this.GetType().Name; }
            }

            protected override void OnInit(EventArgs e)
            {
                if (ContextRegistry.GetContext().GetObject(DIName, this.GetType()) != null)
                    ContextRegistry.GetContext().ConfigureObject(this, DIName);
                base.OnInit(e);
            }
        }

    For non-Immediacy controls, you can make this server-side control inherit from Control or whatever subclass of that you like.

    3. For any control you wish to use with Spring.NET's inversion-of-control container, define it to inherit from DIControl and add the relevant entry to Spring.config, for example:

    SampleControl.cs:

        public class SampleControl : DIControl, INamingContainer
        {
            public string Text { get; set; }
            protected string InjectedText { get; set; }

            public SampleControl() : base()
            {
                Text = "Hello world";
            }

            protected override void RenderContents(HtmlTextWriter output)
            {
                output.Write(string.Format("{0} {1}", Text, InjectedText));
            }
        }

    Spring.config:

        <objects xmlns="http://www.springframework.net">
            <object id="SampleControl" type="MyProject.SampleControl, MyAssembly">
                <property name="InjectedText" value="from Spring.NET" />
            </object>
        </objects>

    You can optionally override DIName if you wish to name your entry in Spring.config differently from the name of your class. Provided everything's done correctly, the control will write out "Hello world from Spring.NET!" when used in a page.

    This solution uses Spring.NET's ContextRegistry from within the control, but I would be surprised if there's no way around that, for Immediacy at least, since the page objects themselves aren't accessible. However, can this be improved at all from a Spring.NET perspective? Is there maybe an Immediacy plugin that already does this that I'm completely unaware of? Or is there an approach that does this in a more elegant way? I'm open to suggestions.

    Read the article

  • Mocking a concrete class : templates and avoiding conditional compilation

    - by AshirusNW
    I'm trying to test a concrete class with this sort of structure:

        class Database
        {
        public:
            Database(Server server) : server_(server) {}

            int Query(const char* expression)
            {
                server_.Connect();
                return server_.ExecuteQuery();
            }

        private:
            Server server_;
        };

    i.e. it has no virtual functions, let alone a well-defined interface. I want a fake database that calls mock services for testing. Even worse, I want the same code to build against either the real version or the fake, so that the same testing code can both:

    Test the real Database implementation - for integration tests.
    Test the fake implementation, which calls mock services.

    To solve this, I'm using a templated fake, like this:

        #ifndef INTEGRATION_TESTS
        class FakeDatabase
        {
        public:
            FakeDatabase() : realDb_(mockServer_) {}

            int Query(const char* expression)
            {
                MOCK_EXPECT_CALL(mockServer_, Query, 3);
                return realDb_.Query(expression);
            }

        private:
            // In non-INTEGRATION_TESTS builds, Server is a mock Server with
            // extra testing methods that allow mocking.
            Server mockServer_;
            Database realDb_;
        };
        #endif

        template <class T>
        class TestDatabaseContainer
        {
        public:
            int Query(const char* expression)
            {
                int result = database_.Query(expression);
                std::cout << "LOG: " << result << std::endl;
                return result;
            }

        private:
            T database_;
        };

    Edit: Note the fake Database must call the real Database (but with a mock Server).

    Now to switch between them I'm planning the following test framework:

        class DatabaseTests
        {
        public:
        #ifdef INTEGRATION_TESTS
            typedef TestDatabaseContainer<Database> TestDatabase;
        #else
            typedef TestDatabaseContainer<FakeDatabase> TestDatabase;
        #endif

            TestDatabase& GetDb() { return _testDatabase; }

        private:
            TestDatabase _testDatabase;
        };

        class QueryTestCase : public DatabaseTests
        {
        public:
            void TestStep1()
            {
                ASSERT(GetDb().Query(static_cast<const char*>("")) == 3);
                return;
            }
        };

    I'm not a big fan of that compile-time switching between the real and the fake. So, my question is: is there a better way of switching between Database and FakeDatabase? For instance, is it possible to do it at runtime in a clean fashion? I'd like to avoid #ifdefs. Also, if anyone has a better way of making a fake class that mimics a concrete class, I'd appreciate it. I don't want templated code all over the actual test code (the QueryTestCase class). Feel free to critique the code style itself, too. You can see a compiled version of this code on codepad.
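    On the runtime-switching part of the question, one possible direction, sketched only: IDatabase, DatabaseAdapter, and MakeTestDatabase are names I've made up, and the sketch assumes the fake is compiled into the test binary alongside the real class. Both versions are hidden behind a small interface and the choice is made when the fixture is constructed:

        #include <memory>

        // Hypothetical interface that both the real and fake databases can be
        // adapted to, making the choice a runtime decision instead of an #ifdef.
        class IDatabase
        {
        public:
            virtual ~IDatabase() {}
            virtual int Query(const char* expression) = 0;
        };

        // Wraps any type with a matching Query() signature.
        template <class T>
        class DatabaseAdapter : public IDatabase
        {
        public:
            explicit DatabaseAdapter(const T& db) : db_(db) {}

            virtual int Query(const char* expression)
            {
                return db_.Query(expression);
            }

        private:
            T db_;
        };

        // Chosen by, e.g., a command-line flag rather than a build configuration.
        std::auto_ptr<IDatabase> MakeTestDatabase(bool integration, Server& server)
        {
            if (integration)
                return std::auto_ptr<IDatabase>(
                    new DatabaseAdapter<Database>(Database(server)));
            return std::auto_ptr<IDatabase>(
                new DatabaseAdapter<FakeDatabase>(FakeDatabase()));
        }

    The fixture then holds the IDatabase rather than a TestDatabaseContainer<T>, so QueryTestCase never sees a template parameter; the cost is one virtual call per query, which is usually acceptable in test code.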

    Read the article

  • Make a compiled binary run at native speed flawlessly without recompiling from source on a another system?

    - by unknownthreat
    I know that many people, at first glance of the question, may immediately yell out "Java", but no - I know Java's qualities. Allow me to elaborate my question first. Normally, when we want a program to run at native speed on a system, whether it be Windows, Mac OS X, or Linux, we need to compile it from source. If you want to run a program built for another system on your own, you need a virtual machine or an emulator. While these tools let you use the program you need on a non-native OS, they sometimes suffer performance problems and glitches. We also have JIT compilers, which translate bytecode into native machine code before execution. Performance can improve a great deal with a JIT compiler, but it is still not the same as running on the native system. Another program on Linux, WINE, is also a good tool for running Windows programs on a Linux system. I have tried running Team Fortress 2 on it and experimented with some settings. I got ~40 fps on Windows at mid-high settings at 1280 x 1024. On Linux, I need to turn everything down to low at 1280 x 1024 to get ~40 fps. There are two notable things, though:

    Polygon model settings do not seem to affect the framerate, whether I set them low or high.
    When post-processing effects or other special effects require manipulation of the drawn pixels of the current frame, the framerate drops to 10-20 fps.

    From this I can see that normal polygon rendering is fine, but when it comes to newer rendering methods that require the graphics card to do the work, it slows to a crawl. Anyway, this question is rather theoretical: is there anything we can do at all? I see that WINE can run Steam and Team Fortress 2; although there are flaws, they can run at lower settings. Or perhaps I should also ask: "Is it possible to translate a whole program from one system to another without recompiling from source and get native speed?" I see that we also have AOT compilers - is it possible to use one for something like this? Or are there so many constraints (such as DirectX calls or differences in software architecture) that it is impossible to have a flawless, non-native program that runs at native speed?

    Read the article

  • Repeating fields in similar database tables

    - by user1738833
    I have been tasked with working on a database that I have never seen before, and I'm looking at the DB structure. Some of the central, most heavily queried and joined tables look like virtual duplicates of each other. Here's a massively simplified representation of the situation, with business-sensitive information changed, listing hypothetical table names and fields:

        TopLevelGroup: PK_TLGroupId, DisplaysXOnBill, DisplaysYOnBill,
                       IsInvoicedForJ, IsInvoicedForK
        SubGroup:      PK_SubGroupId, FK_ParentTopLevelGroupId, DisplaysXOnBill,
                       DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK
        SubSubGroup:   PK_SubSubGroupId, FK_ParentSubGroupId, DisplaysXOnBill,
                       DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK

    I haven't listed the types of the fields, as I don't think that's particularly important to the situation. In addition, it's worth saying that rather than the four repeated fields in the example above, I'm looking at 86 repeated fields. For the most part, those fields genuinely do represent "facts" about the primary table entity, so it's not automatically wrong for that reason. In addition, the "groups" represented here have a property-inheritance relationship: if DisplaysXOnBill is NULL in the SubSubGroup, it takes the value of DisplaysXOnBill from its parent, the SubGroup, and so on up to the TopLevelGroup. Further, the requirements will never require that the model extend beyond three levels, so there is no need for flexibility in that area. Is there a design smell from several tables that describe very similar entities having almost identical fields? If so, what might be a better design of the example above? I'm using the phrase "design smell" to indicate a possible problem. Of course, in any given situation a particular design might well be the best solution; I'm looking for a more general answer - wondering what might be wrong with this design and what the better design would be, were that the case. Possibly related, but not the primary questions:

    Is this database schema in a reasonably normal form (e.g. to 3NF), insofar as can be told from the information I've provided? I can't see a problem with the requirements of 2NF and 3NF, except in their inheriting the requirements of 1NF. Is 1NF satisfied, though? Are repeating groups allowed in different tables?

    Is there a best-practice method for implementing the inheritance relationship in a database as I require? The method above feels clunky to me because any query on the SubSubGroup necessarily needs to join onto the SubGroup and TopLevelGroup tables to collect inherited facts, which can make even trivial joins that need facts from the SubSubGroup table rather long-winded (a sketch of such a query follows below).

    There are, of course, political considerations to making a relatively large change like this. For the purposes of this question, I'm happy to ignore that fact in the interests of keeping the answers ring-fenced to the technical problem.
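    To make that long-windedness concrete, a hedged sketch in generic SQL, using the table and column names above (the EffectiveDisplaysXOnBill alias is mine), of the join needed to resolve just one inherited fact for a SubSubGroup row:

        SELECT ssg.PK_SubSubGroupId,
               -- a NULL in the child falls back to its parent, then the top level
               COALESCE(ssg.DisplaysXOnBill,
                        sg.DisplaysXOnBill,
                        tlg.DisplaysXOnBill) AS EffectiveDisplaysXOnBill
          FROM SubSubGroup ssg
          JOIN SubGroup sg ON sg.PK_SubGroupId = ssg.FK_ParentSubGroupId
          JOIN TopLevelGroup tlg ON tlg.PK_TLGroupId = sg.FK_ParentTopLevelGroupId;

    Repeat that COALESCE for each of the 86 inheritable fields and the clunkiness described above becomes plain.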

    Read the article
