Search Results

Search found 9011 results on 361 pages for 'common'.

  • Wicket: How to implement an IDataProvider/LoadableDetachableModel for indexed lists

    - by cretzel
    What's the best way to implement an IDataProvider and a LoadableDetachableModel in Wicket for an indexed list? Suppose I have a Customer that has a list of Adresses:

        class Customer {
            List adresses;
        }

    Now I want to implement a data provider/LDM for the adresses of a customer. I suppose the usual way is an IDataProvider as an inner class which refers to the customer model of the component, like:

        class AdressDataProvider implements IDataProvider {
            public Iterator iterator() {
                Customer c = (Customer) Component.this.getModel(); // somehow get the customer model
                return c.getAdresses().iterator();
            }
            public IModel model(Object o) {
                Adress a = (Adress) o;
                // Return an LDM which loads the adress by id.
                return new AdressLoadableDetachableModel(a.getId());
            }
        }

    Question: How would I implement this when the adress does not have an ID (e.g. it's a Hibernate Embeddable/CollectionOfElements) but can only be identified by its index in the customer.adresses list? How do I keep a reference to the owning entity and the index? In fact, I know a solution, but I wonder if there's a common pattern for this.
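
    One possible approach for the index-based case (a sketch only, not from the original question): let the detachable model hold the owning customer's id and the list index, and reload the address through the parent entity on attach. The CustomerDao type and the findCustomer()/getAdresses() calls below are assumed placeholders for whatever lookup the application already has.

        import org.apache.wicket.model.LoadableDetachableModel;

        // Minimal sketch: detach to (customerId, index), reattach via the owning Customer.
        public class IndexedAdressModel extends LoadableDetachableModel<Adress> {
            private final long customerId;
            private final int index;
            private final CustomerDao dao; // hypothetical lookup service

            public IndexedAdressModel(long customerId, int index, CustomerDao dao) {
                this.customerId = customerId;
                this.index = index;
                this.dao = dao;
            }

            @Override
            protected Adress load() {
                // Reload the parent entity and pick the element by position.
                Customer customer = dao.findCustomer(customerId);
                return customer.getAdresses().get(index);
            }
        }

    The obvious caveat is that the index is only stable as long as the address list is not reordered or shortened between requests.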

  • Single log file for multiple webapps

    - by Ashish Aggarwal
    In my Tomcat there are multiple webapps deployed, and they communicate with each other. Currently they all have their own log file, so when an issue comes up during a call I first have to check the app I called and then the log files of every other app involved in the call. Since all the apps are deployed in the same Tomcat and share a common log4j, I want all logging for a given call to end up in a single log file: no matter which webapps are involved, every error raised by any webapp during the call should land in that one file. I have no idea how to achieve this, so any help is appreciated.

    Edited: I think my question was not clear, so here is a use case. I have three webapps A, B and C with log files A.log, B.log and C.log. I make two calls: the first to A (which internally calls C) and the second to B (which internally calls C). The logging of the first call must go to A.log (including every step performed inside webapp C) and the second call must go to B.log (including every step performed inside webapp C).
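
    There is no answer in this excerpt, but one common way to get per-call routing with a shared log4j is to tag every request with the webapp that originated the call, using the MDC, and to forward that tag when one webapp calls another. A rough sketch follows; the X-Call-Origin header, the appName init parameter and the callOrigin key are all assumptions, not an established convention.

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.HttpServletRequest;
        import org.apache.log4j.MDC;

        // Sketch: tag each request with the webapp that originated the call, so a
        // pattern layout (%X{callOrigin}) or a routing appender can separate the lines per call.
        public class CallOriginFilter implements Filter {
            private String appName; // e.g. "A", "B" or "C"

            public void init(FilterConfig config) {
                appName = config.getInitParameter("appName");
            }

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest http = (HttpServletRequest) req;
                // Assumed header; the calling webapp must forward it on internal calls.
                String origin = http.getHeader("X-Call-Origin");
                MDC.put("callOrigin", origin != null ? origin : appName);
                try {
                    chain.doFilter(req, res);
                } finally {
                    MDC.remove("callOrigin");
                }
            }

            public void destroy() { }
        }

    With log4j 1.x the actual split into A.log/B.log still needs appender filters or a small custom appender keyed on %X{callOrigin}; log4j 2's RoutingAppender can do that routing directly.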

  • Why do InterruptedExceptions clear a thread's interrupted status?

    - by Hanno Fietz
    If a thread is interrupted while inside Object.wait() or Thread.join(), it throws an InterruptedException, which resets the thread's interrupted status. I.e., if I have a loop like this inside a Runnable.run():

        while (!this._workerThread.isInterrupted()) {
            // do something
            try {
                synchronized (this) {
                    this.wait(this._waitPeriod);
                }
            } catch (InterruptedException e) {
                if (!this._isStopping()) {
                    this._handleFault(e);
                }
            }
        }

    the thread will continue to run after calling interrupt(). This means I have to explicitly break out of the loop by checking for my own stop flag in the loop condition, rethrow the exception, or add a break. Now, this is not exactly a problem, since this behaviour is well documented and doesn't prevent me from doing anything the way I want. However, I don't seem to understand the concept behind it: why is a thread not considered interrupted anymore once the exception has been thrown? A similar behaviour also occurs if you get the interrupted status with interrupted() instead of isInterrupted(): then, too, the thread will only appear interrupted once. Am I doing something unusual here? For example, is it more common to catch the InterruptedException outside the loop? (Even though I'm not exactly a beginner, I tagged this "beginner", because it seems like a very basic question to me, looking at it.)
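
    For reference, the usual idiom (not spelled out in the question) is to treat the cleared flag as "the catcher now owns the interruption" and to restore it with Thread.currentThread().interrupt() when you are not the one stopping the thread; the loop condition then still sees the interruption without a separate stop flag. A minimal self-contained sketch:

        public class RestoreInterruptExample implements Runnable {
            @Override
            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        synchronized (this) {
                            this.wait(500);
                        }
                    } catch (InterruptedException e) {
                        // wait() cleared the flag; restore it so the loop
                        // condition sees the interruption and the loop exits.
                        Thread.currentThread().interrupt();
                    }
                }
            }

            public static void main(String[] args) throws InterruptedException {
                Thread worker = new Thread(new RestoreInterruptExample());
                worker.start();
                Thread.sleep(1000);
                worker.interrupt();   // loop ends without any extra stop flag
                worker.join();
            }
        }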

  • Spring: Need to use a bean declared in an ApplicationContextFactory servlet in a DispatcherServlet

    - by Ernest
    Hello, I have a web.xml with these two servlets:

        <servlet>
            <servlet-name>ApplicationContextFactory</servlet-name>
            <servlet-class>com.bamboo.common.factory.ApplicationContextFactory</servlet-class>
            <load-on-startup>1</load-on-startup>
        </servlet>

    and

        <servlet>
            <servlet-name>dispatcher</servlet-name>
            <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
            <load-on-startup>1</load-on-startup>
        </servlet>

    I need to use this bean, declared in the ApplicationContextFactory:

        <bean id="catalogFacadeTarget" class="com.bamboo.catW3.business.impl.CatalogFacadeImpl">
            <property name="categoryDAO"><ref local="categoryDAOTarget"/></property>
            <property name="containerDAO"><ref local="containerDAOTarget"/></property>
            <property name="productDAO"><ref local="productDAOTarget"/></property>
            <property name="productOptionDAO"><ref local="productOptionDAOTarget"/></property>
            <property name="productStatusDAO"><ref local="productStatusDAOTarget"/></property>
            <property name="userDAO"><ref local="userDAOTarget"/></property>
        </bean>

    in the dispatcher-servlet like this:

        <bean name="welcome" class="com.bamboo.catW3.business.impl.Welcome">
            <property name="successView">
                <value>welcome</value>
            </property>
            <property name="catalogFacadeImpl"><ref local="categoryDAOTarget"/></property>
        </bean>

    Is it possible somehow? Thank you!
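
    Not part of the question, but the usual Spring arrangement for this is to load the shared beans in the root WebApplicationContext (via a ContextLoaderListener in web.xml) rather than in a custom servlet; the DispatcherServlet's child context can then reference catalogFacadeTarget directly, because child contexts resolve missing beans through their parent. If you also need to reach the root context from plain servlet code, something like the following sketch works (the helper class itself is made up for illustration):

        import javax.servlet.ServletContext;
        import org.springframework.web.context.WebApplicationContext;
        import org.springframework.web.context.support.WebApplicationContextUtils;

        public final class CatalogFacadeLookup {
            private CatalogFacadeLookup() { }

            // Fetch a bean from the root web application context, i.e. the one
            // a ContextLoaderListener registers on the ServletContext.
            public static Object getCatalogFacade(ServletContext servletContext) {
                WebApplicationContext root =
                        WebApplicationContextUtils.getRequiredWebApplicationContext(servletContext);
                return root.getBean("catalogFacadeTarget");
            }
        }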

  • Add webservice reference, the classes in a different file

    - by ArunDhaJ
    Hi, when I add a web service as a service reference in my .NET project, it creates a service folder within "Service References". All the interfaces and classes are contained within that service folder. I want to know how to split the interface's methods from the classes; in other words, I want the web service reference to import classes defined in a different DLL. I need it this way because of my design constraints: I have a three-layered application in which the third layer is the communication layer that holds all web service references, the second is the business layer, and the first is the presentation layer. If the class declarations live in layer 3 and I access those classes from the presentation layer, that is logically a cross-layer access violation. Instead, I want a separate project that holds only the class declarations and is used in all three layers. I had no problems achieving this with layers 1 and 2, but I'm not sure how to make the communication layer use this common declaration DLL. Your suggestion would help me design my application better. Thanks in advance. Regards, ArunDhaJ

  • Unable to turn off notice errors in PHP 5.3.2

    - by knkk
    Hi everyone, I recently migrated to PHP 5.3.2, and realized that I am unable to turn off notice errors in my site now. I went to php.ini, and in these lines:

        ; Common Values:
        ;   E_ALL & ~E_NOTICE  (Show all errors, except for notices and coding standards warnings.)
        ;   E_ALL & ~E_NOTICE | E_STRICT  (Show all errors, except for notices)
        ;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)
        ;   E_ALL | E_STRICT  (Show all errors, warnings and notices including coding standards.)
        ; Default Value: E_ALL & ~E_NOTICE
        ; Development Value: E_ALL | E_STRICT
        ; Production Value: E_ALL & ~E_DEPRECATED
        ; http://php.net/error-reporting
        error_reporting = E_ALL & ~E_NOTICE

    ...I've tried setting everything (and I restart Apache each time), but I am unable to get rid of notices. The only way I'm able to get rid of notice errors is by setting:

        display_errors = Off

    That is, of course, not something I can do, since I need to see errors to fix them, and I would like to see errors on the webpage that I am coding rather than log them somewhere. Can someone help? Is this a bug in PHP 5.3.2 or something I am doing wrong? Thank you very much for your time! P.S. Also, would anyone know how I can get PHP 5.3.2 to support the .php3 extension?

  • Git merge 2 new files with removed content and added content

    - by Loïc Faure-Lacroix
    We are working with 2 different repositories and both designers modified the same file. The problem is quite simple, but I have no idea how to solve it yet. Both files are marked as new, since the two histories have almost nothing in common except that file. When I try to merge from branch A to B, it marks the parts added in A as deleted in B, and on the other side, what was added in B appears deleted in A. Git seems to try to outsmart me, when I know that I need almost every change and nothing should be marked as a deletion. I have 2 other branches that should merge without problems after these 2; I can't merge them yet since there are some recent changes that may not merge well either. I have to merge A and B = E, then C and D = F, and then hopefully E and F. So the big question here is: how can I do a completely manual merge that marks every change as a conflict? Anything deleted and anything added should be marked as a conflict that I can resolve by myself using an editor. Git is trying to outsmart me and failing terribly at it.

  • looking to streamline my RSS feed mashup

    - by Mark Cejas
    Hello crafty developers, I have aggregated RSS feeds from various sources with RSSOwl, fetching directly from the Social Mention API. The RSS feeds are categorized into the following major categories: blogs, news, Twitter, Q&A and social networking sites. Each major category is nested with a common group of RSS feeds that represent a particular client/brand ontology. Merging these feeds into the RSSOwl reader application allows me to conduct and save refined search queries (from the aggregated data) into a single file that I can then tag and further segment for analysis. This scheme is used for my own research needs and has helped me considerably.

    However, I find this RSS mashup scheme kind of clumsy: it requires quite a bit of time to initially organize all of the feeds, and I would like to be able to do further natural language processing on the data as well as eventually rank the collected list of URLs into some order of media prominence. I don't want to pay the ridiculous Radian6 web analytics fees, when my intuition is telling me that with a bit of elbow grease I can maybe leverage some available resources online to develop a functional small-scale web mining application and get some good intelligence from it. I am now starting to learn a little about computer science - my background is in physical science/statistics - so is my thinking on the right track?

    So, I guess I am imagining an application that allows me to query in a refined manner: a manner that allows me to search for keyword combinations, apply AND/OR operators, selectively focus my queries on particular sources - like a collection of blogs, or Twitter, or social networking communities - and then save the results of my queries in a structured format that can be manipulated and explored. Am I dreaming? I just had to get all of this out. Any bit of advice and insight would be hugely appreciated. My best, Mark

  • Controlling shell command line wildcard expansion in C or C++

    - by Adrian McCarthy
    I'm writing a program, foo, in C++. It's typically invoked on the command line like this:

        foo *.txt

    My main() receives the arguments in the normal way. On many systems, argv[1] is literally *.txt, and I have to call system routines to do the wildcard expansion. On Unix systems, however, the shell expands the wildcard before invoking my program, and all of the matching filenames will be in argv. Suppose I wanted to add a switch to foo that causes it to recurse into subdirectories:

        foo -a *.txt

    would process all text files in the current directory and all of its subdirectories. I don't see how this is done, since, by the time my program gets a chance to see the -a, the shell has already done the expansion and the user's *.txt input is lost. Yet there are common Unix programs that work this way. How do they do it? In Unix land, how can I control the wildcard expansion? (Recursing through subdirectories is just one example. Ideally, I'm trying to understand the general solution to controlling the wildcard expansion.)

  • WCF code generation for large/complex schema (HR-XML/OAGIS) - is there an alternative?

    - by Sasha Borodin
    Hello, and thank you for reading. I am implementing a WCF service based on a predefined specification (HR-XML 3.0). As such, I am starting with the schema and working my way back to code. There are a number of large schema documents (which import yet more schema documents) related to my implementation, provided by this specification. I am able to generate code using xsd.exe by supplying the "main" and "supporting" xsd files as arguments. But there are several issues, and I am wondering if this is the right approach:

      • there are literally hundreds of classes - the code file is half a meg in size
      • duplicate classes (e.g. Type, Type1 - which both represent the same type)
      • there are classes declared as inheriting from a base class, but that base class is not generated/defined

    I understand that there are limitations to the types of schema supported by svcutil.exe/xsd.exe when targeting the DataContractSerializer and even XmlSerializer. My question is two-fold: Are code generation "issues" fairly common when dealing with larger, modular xsd files? Has anyone had success generating data contracts from OAGIS or HR-XML schema? Given the above issues, are there better approaches to this task that avoid generating code and working with concrete objects? Does it make better sense to read and compose a SOAP message directly, while still taking advantage of the rest of the WCF framework? I understand that I am losing the convenience of working with .NET objects and the framework-provided (de)serialization; given these losses, would it still be advantageous to base my service on WCF? Is there some "middle ground" between working with .NET types and pure XML? Thank you very much! -Sasha Borodin DFWHC.org

  • C++ iterators, default initialization and what to use as an uninitialized sentinel.

    - by Hassan Syed
    The Context: I have a custom template container class put together from a map and a vector. The map resolves a string to an ordinal, and the vector resolves an ordinal to the entry (only an initial string-to-ordinal lookup is done; future references are to the vector). The entries are modified intrusively to contain a bool "assigned" and an iterator_type, which is a const_iterator into the container class's map. My container class will use RCF's serialization code (which models boost::serialization) to serialize my container classes to nodes in a network. Serializing iterators is not possible, or a can of worms, and I can easily regenerate them once the vectors and maps are serialized on the remote site.

    The Question: I need to default-initialize the iterator and be able to test that it has not been assigned to (if it is assigned it is valid, if not it is invalid). Since map iterators are not invalidated by operations performed on the map (unless, of course, items are removed :D), am I to assume that map<x,y>::end() is a valid sentinel to initialize to, regardless of the state of the map (i.e., it could be empty)? I will always have access to the parent map; I'm just unsure whether end() stays the same as the map contents change. I don't want to use another level of indirection (i.e., boost::optional) to achieve my goal; I'd rather forego compiler checks for correct logic, but it would be nice if I didn't need to.

    Misc: This question exists, but most of its content seems nonsense. Assigning NULL to an iterator is invalid according to g++ and clang++. This is another similar question, but it focuses on the common use case of iterators, which generally tends to be using the iterator to iterate; of course, in that use case the state of the container isn't meant to change whilst iteration is going on.

  • How to detect that the internet connection has got disconnected through a java desktop application?

    - by Yatendra Goel
    I am developing a Java desktop application that accesses the internet. It is a multi-threaded application; each thread does the same work (meaning each thread is an instance of the same Thread class). Now, as all the threads need the internet connection to be active, there should be some mechanism that detects whether an internet connection is active or not.

    Q1. How do I detect whether the internet connection is active or not?

    Q2. Where should I implement this internet-status-check code? Should I start a separate thread that checks the internet status regularly and notifies all the threads when the status changes from one state to another, or should I let each thread check the internet status itself?

    Q3. This should be a very common issue, as every application accessing the internet has to deal with it. So how do other developers usually deal with this problem?

    Q4. If you could give me a reference to a good demo application that addresses this issue, it would greatly help me.
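
    No definitive answer appears in this excerpt, but a common pattern for Q1/Q2 is a single scheduled probe that keeps a shared flag the worker threads can read, rather than every thread testing the network on its own. A minimal sketch (the probe host, port and 30-second interval are arbitrary assumptions):

        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.net.Socket;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicBoolean;

        // Sketch: one scheduled task probes a well-known host and keeps a flag
        // that all worker threads can read cheaply.
        public class ConnectivityMonitor {
            private final AtomicBoolean online = new AtomicBoolean(false);
            private final ScheduledExecutorService scheduler =
                    Executors.newSingleThreadScheduledExecutor();

            public void start() {
                scheduler.scheduleWithFixedDelay(this::probe, 0, 30, TimeUnit.SECONDS);
            }

            public void stop() {
                scheduler.shutdownNow();
            }

            public boolean isOnline() {
                return online.get();
            }

            private void probe() {
                // Probe host/port are assumptions; use a host your app actually depends on.
                try (Socket socket = new Socket()) {
                    socket.connect(new InetSocketAddress("www.example.com", 80), 3000);
                    online.set(true);
                } catch (IOException e) {
                    online.set(false);
                }
            }
        }

    Workers would call isOnline() before each network operation, or the monitor could be extended to notify registered listeners when the flag flips.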

  • Where is the "ListViewItemPlaceholderBackgroundThemeBrush" located?

    - by Dimi Toulakis
    I have a problem understanding one style definition in Windows 8 Metro apps. When you create a Metro style application with VS, a folder named Common is also created. Inside this folder there is a file called StandardStyles.xaml. Now, the following snippet is from this file:

        <!-- Grid-appropriate 250 pixel square item template as seen in the GroupedItemsPage and ItemsPage -->
        <DataTemplate x:Key="Standard250x250ItemTemplate">
            <Grid HorizontalAlignment="Left" Width="250" Height="250">
                <Border Background="{StaticResource ListViewItemPlaceholderBackgroundThemeBrush}">
                    <Image Source="{Binding Image}" Stretch="UniformToFill"/>
                </Border>
                <StackPanel VerticalAlignment="Bottom" Background="{StaticResource ListViewItemOverlayBackgroundThemeBrush}">
                    <TextBlock Text="{Binding Title}" Foreground="{StaticResource ListViewItemOverlayForegroundThemeBrush}" Style="{StaticResource TitleTextStyle}" Height="60" Margin="15,0,15,0"/>
                    <TextBlock Text="{Binding Subtitle}" Foreground="{StaticResource ListViewItemOverlaySecondaryForegroundThemeBrush}" Style="{StaticResource CaptionTextStyle}" TextWrapping="NoWrap" Margin="15,0,15,10"/>
                </StackPanel>
            </Grid>
        </DataTemplate>

    What I do not understand here is the static resource definition, e.g. for the Border:

        Background="{StaticResource ListViewItemPlaceholderBackgroundThemeBrush}"

    It is not about how you work with templates and binding and resources. Where is this ListViewItemPlaceholderBackgroundThemeBrush located? Many thanks for your help. Dimi

  • How to model has_many with polymorphism?

    - by Daniel Abrahamsson
    I've run into a situation that I am not quite sure how to model. Suppose I have a User class, and a user has many services. However, these services are quite different - for example a MailService and a BackupService - so single table inheritance won't do. Instead, I am thinking of using polymorphic associations together with an abstract base class:

        class User < ActiveRecord::Base
          has_many :services
        end

        class Service < ActiveRecord::Base
          validates_presence_of :user_id, :implementation_id, :implementation_type
          belongs_to :user
          belongs_to :implementation, :polymorphic => true
          delegate :common_service_method, :name, :to => :implementation
        end

        # Base class for service implementations
        class ServiceImplementation < ActiveRecord::Base
          validates_presence_of :user_id, :on => :create
          has_one :service, :as => :implementation
          has_one :user, :through => :service
          after_create :create_service_record

          # Tell Rails this class does not use a table.
          def self.abstract_class?
            true
          end

          # Default name implementation.
          def name
            self.class.name
          end

          protected

          # Sets up a service object.
          def create_service_record
            service = Service.new(:user_id => user_id)
            service.implementation = self
            service.save!
          end
        end

        class MailService < ServiceImplementation
          # validations, etc...
          def common_service_method
            puts "MailService implementation of common service method"
          end
        end

        # Example usage
        MailService.create(..., :user_id => user.id)
        BackupService.create(...., :user_id => user.id)

        user.services.each do |s|
          puts "#{user.name} is using #{s.name}"
        end
        # Daniel is using MailService, Daniel is using BackupService

    So, is this the best solution? Or even a good one? How have you solved this kind of problem?

  • nServiceBus - Not all commands being received by handler

    - by SimonB
    In a test project based on the nServiceBus pub/sub sample, I've replaced the bus.Publish with bus.Send in the server. The server sends 50 messages with a 1-second wait after each 5 (i.e. 10 bursts of 5 messages). The client does not get all the messages. The solution has 3 projects - Server, Client, and common Messages. The server and client are hosted via the nServiceBus generic host. Only a single bus is defined. Both client and server are configured to use the StructureMap builder and binary serialization.

    Server Endpoint:

        public class EndPointConfig : AsA_Publisher, IConfigureThisEndpoint, IWantCustomInitialization
        {
            public void Init()
            {
                NServiceBus.Configure.With()
                    .StructureMapBuilder()
                    .BinarySerializer();
            }
        }

    Server Code:

        for (var nextId = 1; nextId <= 50; nextId++)
        {
            Console.WriteLine("Sending {0}", nextId);
            IDataMsg msg = new DataMsg { Id = nextId, Body = string.Format("Batch Msg #{0}", nextId) };
            _bus.SendLocal(msg);
            Console.WriteLine("  ...sent {0}", nextId);

            if ((nextId % 5) == 0)
                Thread.Sleep(1000);
        }

    Client Endpoint:

        public class EndPointConfig : AsA_Client, IConfigureThisEndpoint, IWantCustomInitialization
        {
            public void Init()
            {
                NServiceBus.Configure.With()
                    .StructureMapBuilder()
                    .BinarySerializer();
            }
        }

    Client Code:

        public class DataMsgHandler : IMessageHandler<IDataMsg>
        {
            public void Handle(IDataMsg msg)
            {
                Console.WriteLine("DataMsgHandler.Handle({0}, {1}) - ({2})", msg.Id, msg.Body, Thread.CurrentThread.ManagedThreadId);
            }
        }

    Client and Server App.config:

        <MsmqTransportConfig InputQueue="nsbt02a" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
        <UnicastBusConfig DistributorControlAddress="" DistributorDataAddress="">
            <MessageEndpointMappings>
                <add Messages="Test02.Messages" Endpoint="nsbt02a" />
            </MessageEndpointMappings>
        </UnicastBusConfig>

    All run via Visual Studio 2008. All 50 messages are sent, but after the 1st or 2nd batch only 1 msg per batch is sent? Any ideas? I'm assuming config or misuse but ....?

  • Give a reference to a python instance attribute at class definition

    - by Guenther Jehle
    I have a class with attributes which hold a reference to another attribute of the same class. See class Device, where value1 and value2 hold a reference to interface:

        class Interface(object):
            def __init__(self):
                self.port = None

        class Value(object):
            def __init__(self, interface, name):
                self.interface = interface
                self.name = name

            def get(self):
                return "Getting Value \"%s\" with interface \"%s\"" % (self.name, self.interface.port)

        class Device(object):
            interface = Interface()
            value1 = Value(interface, name="value1")
            value2 = Value(interface, name="value2")

            def __init__(self, port):
                self.interface.port = port

        if __name__ == "__main__":
            d1 = Device("Foo")
            print d1.value1.get()  # >>> Getting Value "value1" with interface "Foo"
            d2 = Device("Bar")
            print d2.value1.get()  # >>> Getting Value "value1" with interface "Bar"
            print d1.value1.get()  # >>> Getting Value "value1" with interface "Bar"

    The last print is wrong, because d1 should have the interface "Foo". I know what's going wrong: the line interface=Interface() is executed when the class definition is parsed (once), so every Device shares the same instance of interface. I could change the Device class to:

        class Device(object):
            interface = Interface()
            value1 = Value(interface, name="value1")
            value2 = Value(interface, name="value2")

            def __init__(self, port):
                self.interface = Interface()
                self.interface.port = port

    But this is also not working: the values still hold the reference to the original interface instance, and self.interface is just another instance... The output now is:

        >>> Getting Value "value1" with interface "None"
        >>> Getting Value "value1" with interface "None"
        >>> Getting Value "value1" with interface "None"

    So how could I solve this the Pythonic way? I could set up a function in the Device class that looks for attributes of type Value and reassigns them the new interface. Isn't this a common problem with a typical solution for it? Thanks!

  • Strange behavior with Javascript's __defineSetter__

    - by Shea Barton
    I have a large project in which I need to intercept assignments to things like element.src, element.href, element.style, etc. I figured out how to do this with __defineSetter__, but it is behaving very strangely (using Chrome 8.0.552.231). An example:

        var attribs = ["href", "src", "background", "action", "onblur", "style", "onchange",
                       "onclick", "ondblclick", "onerror", "onfocus", "onkeydown", "onkeypress",
                       "onkeyup", "onmousedown", "onmousemove", "onmouseover", "onmouseup",
                       "onresize", "onselect", "onunload"];

        for (a = 0; a < attribs.length; a++) {
            var attrib_name = attribs[a];
            var func = new Function("attrib_value",
                "this.setAttribute(\"" + attrib_name + "\", attrib_value.toUpperCase());");
            HTMLElement.prototype.__defineSetter__(attrib_name, func);
        }

    What this code should do is: whenever a common element attribute listed in attribs is assigned, it uses setAttribute() to set an uppercased version of that attribute. For some very strange reason, the setter works for only about a third of the assignments. For example, with element.src = "test" the new src is "TEST", like it should be; however, with element.href = "test" the new href is "test", not uppercase. Even then, when I try element.__lookupSetter__("href"), it returns the proper, uppercasing setter. The strangest thing is that different attributes are intercepted properly between Chrome and Firefox. Help!!

  • Identify server that made call to web service

    - by sleepybobos
    I am working within an intranet environment. We have both a production and a development SharePoint server (WSS 3). We have a 3rd-party workflow product which runs on top of SharePoint; it is installed on both the production and development SharePoint servers. The workflow product can call web services I have written, which are hosted on our web server. How would I have the web services determine which SharePoint server made the call, be it the production or the development server? I would then use this information to serve server-specific information from web.config or a database, etc. Currently the site hosting the web services is set up to allow anonymous access, so code such as System.Web.HttpContext.Current.User.Identity.Name returns an empty string. If Windows authentication is used, it returns the identity of the currently logged-in user, which is no use in identifying the server the call was made from. I need a push in the right direction to address what I believe is probably a common scenario, please.

  • PHP check http referer for form submitted by AJAX, secure?

    - by Michael Mao
    Hi all: This is the first time I am working on a front-end project that requires server-side authentication for AJAX requests. I've run into problems: for example, I cannot call session_start as the beginning line of the "destination page", because that gives me a PHP warning:

        Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at C:\xampp\htdocs\comic\app\ajaxInsert Book.php:1) in C:\xampp\htdocs\comic\app\common.php on line 10

    I reckon this means I have to figure out a way other than checking PHP session variables to authenticate the "caller" of this PHP script, and this is my approach: I have a "protected" PHP page, which must be used as the "container" of my JavaScript that posts the form through the jQuery $.ajax() method. In my "receiver" PHP script, what I've got is:

        <?php
        define(BOOKS_TABLE, "books");
        define(APPROOT, "/comic/");
        define(CORRECT_REFERER, "/protected/staff/addBook.php");

        function isRefererCorrect() {
            // the following line evaluates the relative path for the referer uri,
            // Say, $_SERVER['HTTP_REFERER'] returns "http://localhost/comic/protected/staff/addBook.php"
            // Then the part we concern is just this "/protected/staff/addBook.php"
            $referer = substr($_SERVER['HTTP_REFERER'], 6 + strrpos($_SERVER['HTTP_REFERER'], APPROOT));
            return (strnatcmp(CORRECT_REFERER, $referer) == 0) ? true : false;
        }

        //http://stackoverflow.com/questions/267546/correct-http-header-for-json-file
        header('Content-type: application/json charset=UTF-8');
        header('Cache-Control: no-cache, must-revalidate');

        echo json_encode(array(
            "feedback" => "ok",
            "info" => isRefererCorrect()
        ));
        ?>

    My code works, but I wonder: are there any security risks in this approach? Can someone manipulate the POST request so that he can pretend the calling JavaScript is from the "protected" page? Many thanks for any hints or suggestions.

  • Efficient Clojure workflow?

    - by Alex B
    I am developing a pet project with Clojure, but wonder if I can speed up my workflow a bit. My current workflow (with Compojure) is:

      1. Start Swank with lein swank.
      2. Go to Emacs, connect with M-x slime-connect.
      3. Load all existing source files one by one. This also starts a Jetty server and an application.
      4. Write some code in the REPL.
      5. When satisfied with experiments, write a full version of a construct I had in mind. Eval (C-c C-c) it.
      6. Switch the REPL to the namespace where this construct resides and test it.
      7. Switch to the browser and reload the browser tab with the affected page.
      8. Tweak the code, eval it, check in the browser.
      9. Repeat any of the above.

    There are a number of annoyances with it:

      • I have to switch between Emacs and the browser (or browsers, if I am testing things like templating with multiple browsers) all the time. Is there a common idiom to automate this? I used to have a JavaScript bit that reloads the page continuously, but it's of limited utility, obviously, when I have to interact with the page for more than a few seconds.
      • My JVM instance becomes "dirty" when I experiment and write test functions. Basically namespaces become polluted, especially if I'm refactoring and moving the functions between namespaces. This can lead to symbol collisions and I need to restart Swank. Can I undef a symbol?
      • I load all source files one by one (C-c C-k) upon restarting Swank. I suspect I'm doing it all wrong.
      • Switching between the REPL and the file editor can be a bit irritating, especially when I have a lot of Emacs tabs open, alongside the browser(s).

    I'm looking for ways to improve the above points and the entire workflow in general, so I'd appreciate it if you'd share yours. P.S. I have also used VimClojure before, so VimClojure-based workflows are welcome too.

  • Standard and reliable mouse-reporting with GLUT

    - by Victor
    Hello! I'm trying to use GLUT (freeglut) in my OpenGL application, and I need to register some callbacks for mouse wheel events. I managed to dig out a fairly undocumented function (see its API documentation), but the man page and the API entry for this function both state the same thing:

        Note: Due to lack of information about the mouse, it is impossible to implement this correctly on X at this time. Use of this function limits the portability of your application. (This feature does work on X, just not reliably.) You are encouraged to use the standard, reliable mouse-button reporting, rather than wheel events.

    Fair enough, but how do I use this standard, reliable mouse reporting? And how do I know which is the standard? Do I just use glutMouseFunc() and use button values like 4 and 5 for the scroll-up and scroll-down values respectively, say if 1, 2 and 3 are the left, middle and right buttons? Is this the reliable method? Bonus question: it seems the `xev' tool is reporting different values for my buttons. My mouse buttons are numbered from 1 to 5 with xev, but GLUT is reporting buttons from 0 to 4, i.e. an off-by-one. Is this common?

  • Immutable classes in C++

    - by ereOn
    Hi, in one of my projects I have some classes that represent entities that cannot change once created, a.k.a. immutable classes. Example: a class RSAKey that represents an RSA key and only has const methods. There is no point in changing the existing instance: if you need another one, you just create one. My objects are sometimes heavy, and I enforced the use of smart pointers to avoid copies. So far, I have the following pattern for my classes:

        class RSAKey : public boost::noncopyable, public boost::enable_shared_from_this<RSAKey>
        {
            public:

                /**
                 * \brief Some factory.
                 * \param member A member value.
                 * \return An instance.
                 */
                static boost::shared_ptr<const RSAKey> createFromMember(int member);

                /**
                 * \brief Get a member.
                 * \return The member.
                 */
                int getMember() const;

            private:

                /**
                 * \brief Constructor.
                 * \param member A member.
                 */
                RSAKey(int member);

                /**
                 * \brief Member.
                 */
                const int m_member;
        };

    So you can only get a pointer (well, a smart pointer) to a const RSAKey. To me, it makes sense, because having a non-const reference to the instance is useless (it only has const methods). Do you guys see any issue with this pattern? Are immutable classes something common in C++, or did I just create a monster? Thank you for your advice!

  • Consecutive build of VS2005 and VS2008 C++ projects causes LNK1104 error

    - by TestAccount
    I have VS2005 and VS2008 installed on the same machine. I also have a common codebase that I build using both '05 and '08. For this purpose, I have 2 VC projects: a '08 project called XYZ_2008.vcproj and a '05 project called XYZ_2005.vcproj, and the corresponding 2 slns as well. Both projects output dlls, libs and pdbs to the same output directory (all with appropriate _2005 and _2008 suffixes). Assuming that I am starting from a clean state, I first open XYZ_2005.sln (containing XYZ_2005.vcproj) in VS2005 and build it successfully. Then I close VS2005. Next, I open XYZ_2008.sln (containing XYZ_2008.vcproj) and build (not rebuild) it. At this point, I get an error saying:

        LINK : fatal error LNK1104: cannot open file 'mfc80u.lib'

    If I now rebuild the '08 solution, the error goes away and the build succeeds. The build also succeeds if I directly do a rebuild instead of a build for the '08 sln. In spite of everything being separate, the VS08 build seems to be picking up an MFC8 file (from VS05) instead of an MFC9 file. Can somebody please help out with this issue? Thanks in advance!

  • Including partial views when applying the Model-View-ViewModel design pattern

    - by Filip Ekberg
    Consider that I have an application that just handles Messages and Users. I want my Window to have a common Menu and an area where the current View is displayed. I can only work with either Messages or Users, so I cannot work with both Views simultaneously. Therefore I have the following controls: MessageView.xaml and UserView.xaml. Just to make it a bit easier, both the Message Model and the User Model look like this: Name, Description. Now, I have the following three ViewModels: MainWindowViewModel, UsersViewModel and MessagesViewModel. The UsersViewModel and the MessagesViewModel both just fetch an ObservableCollection<T> of their respective Model, which is bound in the corresponding View like this:

        <DataGrid ItemSource="{Binding ModelCollection}" />

    The MainWindowViewModel hooks up two different Commands that implement ICommand and look something like the following:

        public class ShowMessagesCommand : ICommand
        {
            private ViewModelBase ViewModel { get; set; }

            public ShowMessagesCommand(ViewModelBase viewModel)
            {
                ViewModel = viewModel;
            }

            public void Execute(object parameter)
            {
                var viewModel = new ProductsViewModel();
                ViewModel.PartialViewModel = new MessageView { DataContext = viewModel };
            }

            public bool CanExecute(object parameter)
            {
                return true;
            }

            public event EventHandler CanExecuteChanged;
        }

    And there is another one like it that will show Users. Now, this introduced ViewModelBase, which only holds the following:

        public UIElement PartialViewModel
        {
            get { return (UIElement)GetValue(PartialViewModelProperty); }
            set { SetValue(PartialViewModelProperty, value); }
        }

        public static readonly DependencyProperty PartialViewModelProperty =
            DependencyProperty.Register("PartialViewModel", typeof(UIElement), typeof(ViewModelBase), new UIPropertyMetadata(null));

    This dependency property is used in MainWindow.xaml to display the user control dynamically, like this:

        <UserControl Content="{Binding PartialViewModel}" />

    There are also two buttons on this Window that fire the Commands: ShowMessagesCommand and ShowUsersCommand. When these are fired, the UserControl changes because PartialViewModel is a dependency property. I want to know: is this bad practice? Should I not inject the user control like this? Is there another "better" alternative that corresponds better with the design pattern? Or is this a nice way of including partial views?

  • Why is debugging better in an IDE?

    - by Bill Karwin
    I've been a software developer for over twenty years, programming in C, Perl, SQL, Java, PHP, JavaScript, and recently Python. I've never had a problem I could not debug using some careful thought, and well-placed debugging print statements. I respect that many people say that my techniques are primitive, and using a real debugger in an IDE is much better. Yet from my observation, IDE users don't appear to debug faster or more successfully than I can, using my stone knives and bear skins. I'm sincerely open to learning the right tools, I've just never been shown a compelling advantage to using visual debuggers. Moreover, I have never read a tutorial or book that showed how to debug effectively using an IDE, beyond the basics of how to set breakpoints and display the contents of variables. What am I missing? What makes IDE debugging tools so much more effective than thoughtful use of diagnostic print statements? Can you suggest resources (tutorials, books, screencasts) that show the finer techniques of IDE debugging?

    Sweet answers! Thanks much to everyone for taking the time. Very illuminating. I voted up many, and voted none down. Some notable points:

      • Debuggers can help me do ad hoc inspection or alteration of variables, code, or any other aspect of the runtime environment, whereas manual debugging requires me to stop, edit, and re-execute the application (possibly requiring recompilation).
      • Debuggers can attach to a running process or use a crash dump, whereas with manual debugging, "steps to reproduce" a defect are necessary.
      • Debuggers can display complex data structures, multi-threaded environments, or full runtime stacks easily and in a more readable manner.
      • Debuggers offer many ways to reduce the time and repetitive work to do almost any debugging tasks.
      • Visual debuggers and console debuggers are both useful, and have many features in common.
      • A visual debugger integrated into an IDE also gives you convenient access to smart editing and all the other features of the IDE, in a single integrated development environment (hence the name).
