Search Results

Search found 9634 results on 386 pages for 'proxy pattern'.


  • How to best propagate changes upwards a hierarchical structure for binding?

    - by H.B.
    If i have a folder-like structure that uses the composite design pattern and i bind the root folder to a TreeView. It would be quite useful if i can display certain properties that are being accumulated from the folder's contents. The question is, how do i best inform the folder that changes occurred in a child-element so that the accumulative properties get updated? The context in which i need this is a small RSS-FeedReader i am trying to make. This are the most important objects and aspects of my model: Composite interface: public interface IFeedComposite : INotifyPropertyChanged { string Title { get; set; } int UnreadFeedItemsCount { get; } ObservableCollection<FeedItem> FeedItems { get; } } FeedComposite (aka Folder) public class FeedComposite : BindableObject, IFeedComposite { private string title = ""; public string Title { get { return title; } set { title = value; NotifyPropertyChanged("Title"); } } private ObservableCollection<IFeedComposite> children = new ObservableCollection<IFeedComposite>(); public ObservableCollection<IFeedComposite> Children { get { return children; } set { children.Clear(); foreach (IFeedComposite item in value) { children.Add(item); } NotifyPropertyChanged("Children"); } } public FeedComposite() { } public FeedComposite(string title) { Title = title; } public ObservableCollection<FeedItem> FeedItems { get { ObservableCollection<FeedItem> feedItems = new ObservableCollection<FeedItem>(); foreach (IFeedComposite child in Children) { foreach (FeedItem item in child.FeedItems) { feedItems.Add(item); } } return feedItems; } } public int UnreadFeedItemsCount { get { return (from i in FeedItems where i.IsUnread select i).Count(); } } Feed: public class Feed : BindableObject, IFeedComposite { private string url = ""; public string Url { get { return url; } set { url = value; NotifyPropertyChanged("Url"); } } ... private ObservableCollection<FeedItem> feedItems = new ObservableCollection<FeedItem>(); public ObservableCollection<FeedItem> FeedItems { get { return feedItems; } set { feedItems.Clear(); foreach (FeedItem item in value) { AddFeedItem(item); } NotifyPropertyChanged("Items"); } } public int UnreadFeedItemsCount { get { return (from i in FeedItems where i.IsUnread select i).Count(); } } public Feed() { } public Feed(string url) { Url = url; } Ok, so here's the thing, if i bind a TextBlock.Text to the UnreadFeedItemsCount there won't be simple notifications when an item is marked unread, so one of my approaches has been to handle the PropertyChanged event of every FeedItem and if the IsUnread-Property is changed i have my Feed make a notification that the property UnreadFeedItemsCount has been changed. With this approach i also need to handle all PropertyChanged events of all Feeds and FeedComposites in Children of FeedComposite, from the sound of it, it should be obvious that this is not such a very good idea, you need to be very careful that items never get added or removed to any collection without having attached the PropertyChanged event handler first and things like that. Also: What do i do with the CollectionChanged-Events which necessarily also cause a change in the sum of the unread items count? Sounds like more event handling fun. It is such a mess, it would be great if anyone has an elegant solution to this since i don't want the feed-reader to end up as awful as my first attempt years ago when i didn't even know about DataBinding...

    Read the article

  • Dynamically register constructor methods in an AbstractFactory at compile time using C++ templates

    - by Horacio
    When implementing a MessageFactory class to instantiate Message objects I used something like: class MessageFactory { public: static Message *create(int type) { switch(type) { case PING_MSG: return new PingMessage(); case PONG_MSG: return new PongMessage(); .... } } This works OK, but every time I add a new message I have to add a new XXX_MSG and modify the switch statement. After some research I found a way to dynamically update the MessageFactory at compile time so I can add as many messages as I want without needing to modify the MessageFactory itself. This allows for cleaner and easier-to-maintain code as I do not need to modify three different places to add/remove message classes: #include <stdio.h> #include <stdlib.h> #include <string.h> #include <inttypes.h> class Message { protected: inline Message() {}; public: inline virtual ~Message() { } inline int getMessageType() const { return m_type; } virtual void say() = 0; protected: uint16_t m_type; }; template<int TYPE, typename IMPL> class MessageTmpl: public Message { enum { _MESSAGE_ID = TYPE }; public: static Message* Create() { return new IMPL(); } static const uint16_t MESSAGE_ID; // for registration protected: MessageTmpl() { m_type = MESSAGE_ID; } //use parameter to instantiate template }; typedef Message* (*t_pfFactory)(); class MessageFactory { public: static uint16_t Register(uint16_t msgid, t_pfFactory factoryMethod) { printf("Registering constructor for msg id %d\n", msgid); m_List[msgid] = factoryMethod; return msgid; } static Message *Create(uint16_t msgid) { return m_List[msgid](); } static t_pfFactory m_List[65536]; }; template <int TYPE, typename IMPL> const uint16_t MessageTmpl<TYPE, IMPL >::MESSAGE_ID = MessageFactory::Register( MessageTmpl<TYPE, IMPL >::_MESSAGE_ID, &MessageTmpl<TYPE, IMPL >::Create); class PingMessage: public MessageTmpl < 10, PingMessage > { public: PingMessage() {} virtual void say() { printf("Ping\n"); } }; class PongMessage: public MessageTmpl < 11, PongMessage > { public: PongMessage() {} virtual void say() { printf("Pong\n"); } }; t_pfFactory MessageFactory::m_List[65536]; int main(int argc, char **argv) { Message *msg1; Message *msg2; msg1 = MessageFactory::Create(10); msg1->say(); msg2 = MessageFactory::Create(11); msg2->say(); delete msg1; delete msg2; return 0; } The template here does the magic by registering all new Message classes (e.g. PingMessage and PongMessage) that subclass MessageTmpl with the MessageFactory class. This works great and simplifies code maintenance, but I still have some questions about this technique: Is this a known technique/pattern? What is its name? I want to search for more info about it. I want to make the array for storing new constructors, MessageFactory::m_List[65536], a std::map, but doing so causes the program to segfault even before reaching main(). Creating an array of 65536 elements is overkill, but I have not found a way to make this a dynamic container. For all message classes that are subclasses of MessageTmpl I have to implement the constructor. If not, it won't register in the MessageFactory. For example, commenting out the constructor of PongMessage: class PongMessage: public MessageTmpl < 11, PongMessage > { public: //PongMessage() {} /* HERE */ virtual void say() { printf("Pong\n"); } }; would result in the PongMessage class not being registered with the MessageFactory, and the program would segfault at the MessageFactory::Create(11) line. The question is: why won't the class register?
Having to add an empty constructor to each of the 100+ message classes I need feels inefficient and unnecessary.

    Read the article

  • Opening the Internet Settings Dialog and using Windows Default Network Settings via Code

    - by Rick Strahl
    Ran into a question from a client the other day that asked how to deal with Internet Connection settings for running  HTTP requests. In this case this is an old FoxPro app and it's using WinInet to handle the actual HTTP connection. Another client asked a similar question about using the IE Web Browser control and configuring connection properties. Regardless of platform or tools used to do HTTP connections, you can probably configure custom connection and proxy settings in your application to configure http connection settings manually. However, this is a repetitive process for each application requires you to track system information in your application which is undesirable. Often it's much easier to rely on the system wide proxy settings that Windows provides via the Internet Settings dialog. The dialog is a Control Panel applet (inetcpl.cpl) and is the same dialog that you see when you pop up Internet Explorer's Options dialog: This dialog controls the Windows connection properties that determine how the Windows HTTP stack connects to the Internet and how Proxy's are used if configured. Depending on how the HTTP client is configured - it can typically inherit and use these global settings. Loading the Settings Dialog Programmatically The settings dialog is a Control Panel applet with the name of: inetcpl.cpl and you can use any Shell execution mechanism (Run dialog, ShellExecute API, Process.Start() in .NET etc.) to invoke the dialog. Changes made there are immediately reflected in any applications that use the default connection settings. In .NET you can simply do this to bring up the Internet Settings dialog with the Connection tab enabled: Process.Start("inetcpl.cpl",",4"); In FoxPro you can simply use the RUN command to execute inetcpl.cpl: lcCmd = "inetcpl.cpl ,4" RUN &lcCmd Using the Default Connection/Proxy Settings When using WinInet you specify the Http connect type in the call to InternetOpen() like this (FoxPro code here): hInetConnection=; InternetOpen(THIS.cUserAgent,0,; THIS.chttpproxyname,THIS.chttpproxybypass,0) The second parameter of 0 specifies that the default system proxy settings should be used and it uses the settings from the Internet Settings Connections tab. Other connection options for HTTP connections include 1 - direct (no proxies and ignore system settings), 3 - explicit Proxy specification. In most situations a connection mode setting of 0 should work. In .NET HTTP connections by default are direct connections and so you need to explicitly specify a default proxy or proxy configuration to use. The easiest way to do this is on the application level in the config file: <configuration> <system.net> <defaultProxy> <proxy bypassonlocal="False" autoDetect="True" usesystemdefault="True" /> </defaultProxy> </system.net> </configuration> You can do the same sort of thing in code specifying the proxy explicitly and using System.Net.WebProxy.GetDefaultProxy(). So when making HTTP calls to Web Services or using the HttpWebRequest class you can set the proxy with: StoreService.Proxy = WebProxy.GetDefaultProxy(); All of this is pretty easy to deal with and in my opinion is a way better choice to managing connection settings than having to track this stuff in your own application. 
    Plus if you use default settings, most of the time it's highly likely that the connection settings are already properly configured making further configuration rare.

    © Rick Strahl, West Wind Technologies, 2005-2011
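    As a minimal C# sketch of the ideas above (not code from the article): it opens the Internet Settings dialog on the Connections tab and then makes a request that inherits the system-wide proxy settings. The URL is a placeholder, and WebRequest.GetSystemWebProxy() is used here in place of the older WebProxy.GetDefaultProxy() call mentioned in the article.

    ```csharp
    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Net;

    class ProxyDemo
    {
        static void Main()
        {
            // Open the Windows Internet Settings dialog on the Connections tab,
            // the same applet the article launches with inetcpl.cpl ,4.
            Process.Start("inetcpl.cpl", ",4");

            // Issue a request that uses the system-wide proxy settings.
            var request = (HttpWebRequest)WebRequest.Create("http://example.com/");
            request.Proxy = WebRequest.GetSystemWebProxy();                  // values from Internet Settings
            request.Proxy.Credentials = CredentialCache.DefaultCredentials; // current Windows credentials for the proxy

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                Console.WriteLine(reader.ReadToEnd().Length);
            }
        }
    }
    ```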

    Read the article

  • No bean named 'springSecurityFilterChain' is defined

    - by michaeljackson4ever
    When configs are loaded, I get the error SEVERE: Exception starting filter springSecurityFilterChain org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'springSecurityFilterChain' is defined My sec-config: <http use-expressions="true" access-denied-page="/error/casfailed.html" entry-point-ref="headerAuthenticationEntryPoint"> <intercept-url pattern="/" access="permitAll"/> <!-- <intercept-url pattern="/index.html" access="permitAll"/> --> <intercept-url pattern="/index.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/history.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/absence.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/search.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/employees.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/employee.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/contract.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/myforms.html" access="hasAnyRole('HLO','OPISK')"/> <intercept-url pattern="/vacationmsg.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/redirect.jsp" filters="none" /> <intercept-url pattern="/error/**" filters="none" /> <intercept-url pattern="/layout/**" filters="none" /> <intercept-url pattern="/js/**" filters="none" /> <intercept-url pattern="/**" access="isAuthenticated()" /> <!-- session-management invalid-session-url="/absence.html"/ --> <!-- logout logout-success-url="/logout.html"/ --> <custom-filter ref="ssoHeaderAuthenticationFilter" before="CAS_FILTER"/> <!-- CAS_FILTER ??? --> </http> <authentication-manager alias="authenticationManager"> <authentication-provider ref="doNothingAuthenticationProvider"/> </authentication-manager> <beans:bean id="doNothingAuthenticationProvider" class="com.nixu.security.sso.web.DoNothingAuthenticationProvider"/> <beans:bean id="ssoHeaderAuthenticationFilter" class="com.nixu.security.sso.web.HeaderAuthenticationFilter"> <beans:property name="groups"> <beans:map> <beans:entry key="cn=lake,ou=confluence,dc=utu,dc=fi" value="ROLE_ADMIN"/> </beans:map> </beans:property> </beans:bean> <beans:bean id="headerAuthenticationEntryPoint" class="com.nixu.security.sso.web.HeaderAuthenticationEntryPoint"/> And web.xml <context-param> <param-name>contextConfigLocation</param-name> <param-value> /WEB-INF/applicationContext.xml /WEB-INF/sec-config.xml /WEB-INF/idm-config.xml /WEB-INF/ldap-config.xml </param-value> </context-param> <display-name>KeyCard</display-name> <context-param> <param-name>webAppRootKey</param-name> <param-value>KeyCardAppRoot</param-value> </context-param> <context-param> <param-name>log4jConfigLocation</param-name> <param-value>/WEB-INF/log4j.properties</param-value> </context-param> <!-- Reads request input using UTF-8 encoding --> <filter> <filter-name>characterEncodingFilter</filter-name> <filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class> <init-param> <param-name>encoding</param-name> <param-value>UTF-8</param-value> </init-param> <init-param> <param-name>forceEncoding</param-name> <param-value>true</param-value> </init-param> </filter> <filter> <filter-name>springSecurityFilterChain</filter-name> <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class> </filter> <filter-mapping> <filter-name>characterEncodingFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>springSecurityFilterChain</filter-name> 
<url-pattern>/*</url-pattern> </filter-mapping> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <listener> <listener-class>org.springframework.web.util.Log4jConfigListener</listener-class> </listener> <listener> <!-- this is for session scoped objects --> <listener-class>org.springframework.web.context.request.RequestContextListener</listener-class> </listener> <listener> <listener-class>org.springframework.security.web.session.HttpSessionEventPublisher</listener-class> </listener> <!-- Handles all requests into the application --> <servlet> <servlet-name>KeyCard</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet> <servlet-name>tiles</servlet-name> <servlet-class>org.apache.tiles.web.startup.TilesServlet</servlet-class> <init-param> <param-name> org.apache.tiles.impl.BasicTilesContainer.DEFINITIONS_CONFIG </param-name> <param-value> /WEB-INF/tilesViewContext.xml </param-value> </init-param> <load-on-startup>2</load-on-startup> </servlet> <servlet-mapping> <servlet-name>KeyCard</servlet-name> <url-pattern>*.html</url-pattern> </servlet-mapping> <session-config> <session-timeout> 120 </session-timeout> </session-config> <welcome-file-list> <welcome-file>index.jsp</welcome-file> </welcome-file-list> <!-- error-page> <exception-type>java.lang.Exception</exception-type> <location>/WEB-INF/error/error.jsp</location> </error-page --> </web-app> What's wrong?

    Read the article

  • Is there a pattern or best practice for passing a reference type to multiple classes vs a static class?

    - by Dave
    My .NET application creates HTML files, and as such, the structure looks like: variable myData = BuildHomePage() variable graph = new BuildGraphPage(myData) variable table = BuildTablePage(myData) BuildGraphPage and BuildTablePage both require access to the data, the myData object. In the above example, I've passed the myData object to two constructors. This is what I'm doing now, in my current project. The myData object and its properties are all read-only. The problem is that the number of pages which require this object has grown. In the real project there are currently 4, but the new spec is to have about 20. Passing this object to the constructor of each new object and assigning it to a field is a little time consuming, but not a hardship! This poses the question of whether it's better practice to continue as I have, or to refactor and create a new static class for myData which can be referenced from anywhere in my project. I guess my abilities to use Google are poor, because I did try to find an appropriate pattern, as I am sure this type of design must be commonplace, but my results returned nothing. Is there a pattern which is suited to this, or do best practices lean towards one implementation over another?
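    One common way to keep constructor injection while removing the repeated "pass myData, assign a field" code is a small abstract base class that does the assignment once. The sketch below is illustrative only; the names (ReportData, PageBuilder, GraphPageBuilder) are placeholders, not types from the original project.

    ```csharp
    using System;

    // Read-only shared data, built once and handed to every page builder.
    public sealed class ReportData
    {
        public string Title { get; }
        public ReportData(string title) { Title = title; }
    }

    public abstract class PageBuilder
    {
        protected ReportData Data { get; }   // assigned exactly once, here
        protected PageBuilder(ReportData data)
        {
            Data = data ?? throw new ArgumentNullException(nameof(data));
        }
        public abstract string BuildHtml();
    }

    public sealed class GraphPageBuilder : PageBuilder
    {
        public GraphPageBuilder(ReportData data) : base(data) { }
        public override string BuildHtml() => $"<h1>Graph for {Data.Title}</h1>";
    }

    public sealed class TablePageBuilder : PageBuilder
    {
        public TablePageBuilder(ReportData data) : base(data) { }
        public override string BuildHtml() => $"<h1>Table for {Data.Title}</h1>";
    }

    class Demo
    {
        static void Main()
        {
            var data = new ReportData("Q3 sales");   // built once, in BuildHomePage's place
            Console.WriteLine(new GraphPageBuilder(data).BuildHtml());
            Console.WriteLine(new TablePageBuilder(data).BuildHtml());
        }
    }
    ```

    Each new page type then only has to declare the base-constructor call, which keeps the shared data explicit and testable without reaching for a global static.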

    Read the article

  • Code maintenance: keeping a bad pattern when extending new code for being consistent or not ?

    - by Guillaume
    I have to extend an existing module of a project. I don't like the way it has been done (lots of anti-patterns involved, like copy/pasted code). I don't want to perform a complete refactor. Should I: create new methods using the existing convention, even if it feels wrong, to avoid confusion for the next maintainer and stay consistent with the code base? Or try to use what I feel is better, even if it introduces another pattern into the code? Precision added after the first answers: The existing code is not a mess. It is easy to follow and understand. BUT it introduces lots of boilerplate code that could be avoided with good design (though the resulting code might then become harder to follow). In my current case it's a good old JDBC (Spring template on board) DAO module, but I have already encountered this dilemma before and I'm seeking feedback from other developers. I don't want to refactor because I don't have time. And even with time it would be hard to justify that a whole, perfectly working module needs refactoring: the cost of refactoring would outweigh its benefits. Remember: the code is not messy or over-complex. I cannot just extract a few methods there and introduce an abstract class here. It is more a flaw in the design (the result of extreme 'Keep It Stupid Simple', I think). So the question can also be asked like this: as a developer, do you prefer to maintain easy, stupid, boring code, OR to have some helpers that will do the stupid boring work for you? (The downside of the latter being that you'll have to learn some stuff, and maybe you will have to maintain the easy stupid boring code too until a full refactoring is done.)

    Read the article

  • Are first-class functions a substitute for the Strategy pattern?

    - by Prog
    The Strategy design pattern is often regarded as a substitute for first-class functions in languages that lack them. So, for example, say you wanted to pass functionality into an object. In Java you'd have to pass the object another object that encapsulates the desired behavior. In a language such as Ruby, you'd just pass the functionality itself in the form of an anonymous function. However, I was thinking about it and decided that maybe Strategy offers more than a plain anonymous function does. This is because an object can hold state that exists independently of the period when its method runs, whereas an anonymous function by itself can only hold state that ceases to exist the moment the function finishes execution. So my question is: when using a language that features first-class functions, would you ever use the Strategy pattern (i.e. encapsulate the functionality you want to pass around in an explicit object), or would you always use an anonymous function? When would you decide to use Strategy when you can use a first-class function?
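    For comparison, here is a small sketch in C# (the question talks about Java and Ruby; C# is used here only for illustration) showing the same "pass behavior in" idea expressed once as a Strategy object and once as a first-class function. All names are invented for the example.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Strategy-object form: the behavior lives in an explicit type that can carry state.
    public interface IDiscountStrategy
    {
        decimal Apply(decimal price);
    }

    public sealed class SeasonalDiscount : IDiscountStrategy
    {
        private readonly decimal _rate;   // state carried by the strategy object
        public SeasonalDiscount(decimal rate) { _rate = rate; }
        public decimal Apply(decimal price) => price * (1 - _rate);
    }

    public static class Checkout
    {
        // Strategy-object version.
        public static decimal Total(IEnumerable<decimal> prices, IDiscountStrategy discount) =>
            prices.Sum(discount.Apply);

        // First-class-function version: the delegate (and whatever it closes over) plays the same role.
        public static decimal Total(IEnumerable<decimal> prices, Func<decimal, decimal> discount) =>
            prices.Sum(discount);
    }

    class Demo
    {
        static void Main()
        {
            var prices = new[] { 10m, 20m };
            decimal rate = 0.1m;

            Console.WriteLine(Checkout.Total(prices, new SeasonalDiscount(rate)));   // 27.0
            Console.WriteLine(Checkout.Total(prices, p => p * (1 - rate)));          // same result via a closure
        }
    }
    ```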

    Read the article

  • Is there a name for the Builder Pattern where the Builder is implemented via interfaces so certain parameters are required?

    - by Zipper
    So we implemented the builder pattern for most of our domain to help in understandability of what actually being passed to a constructor, and for the normal advantages that a builder gives. The one twist was that we exposed the builder through interfaces so we could chain required functions and unrequired functions to make sure that the correct parameters were passed. I was curious if there was an existing pattern like this. Example below: public class Foo { private int someThing; private int someThing2; private DateTime someThing3; private Foo(Builder builder) { this.someThing = builder.someThing; this.someThing2 = builder.someThing2; this.someThing3 = builder.someThing3; } public static RequiredSomething getBuilder() { return new Builder(); } public interface RequiredSomething { public RequiredDateTime withSomething (int value); } public interface RequiredDateTime { public OptionalParamters withDateTime (DateTime value); } public interface OptionalParamters { public OptionalParamters withSeomthing2 (int value); public Foo Build ();} public static class Builder implements RequiredSomething, RequiredDateTime, OptionalParamters { private int someThing; private int someThing2; private DateTime someThing3; public RequiredDateTime withSomething (int value) {someThing = value; return this;} public OptionalParamters withDateTime (int value) {someThing = value; return this;} public OptionalParamters withSeomthing2 (int value) {someThing = value; return this;} public Foo build(){return new Foo(this);} } } Example of how it's called: Foo foo = Foo.getBuilder().withSomething(1).withDateTime(DateTime.now()).build(); Foo foo2 = Foo.getBuilder().withSomething(1).withDateTime(DateTime.now()).withSomething2(3).build();

    Read the article

  • In developing a soap client proxy, which return structure is easier to use and more sensible?

    - by cori
    I'm writing (in PHP) a client/proxy for a SOAP web service. The return types are consistently wrapped in response objects that contain the return values. In many cases this make a lot of sense - for instance when multiple values are being returned: GetDetailsResponse Object ( Results Object ( [TotalResults] => 10 [NextPage] => 2 ) [Details] => Array ( [0] => Detail Object ( [Id] => 1 ) ) ) But some of the methods return a single scalar value or a single object or array wrapped in a response object: GetThingummyIdResponse Object ( [ThingummyId] => 42 ) In some cases these objects might be pretty deep, so getting at properties within requires drilling down several layers: $response->Details->Detail[0]->Contents->Item[5]->Id And if I unwrap them before passing them back I can strip out a layer from consumers' code. I know I'm probably being a little bit of an Architecture Astronaut here, but the latter style really bug me, so I've been working through my code to have my proxy methods just return the scalar value to the client code where there's no absolute need for a wrapper object. My question is, am I actually making things more difficult for the consumers of my code? Would I be better off just leaving the return values wrapped in response objects so that everything is consistent, or is removing unneccessary layers of indirection/abstraction worthwhile?

    Read the article

  • Is event sourcing ready for prime time?

    - by Dakotah North
    Event Sourcing was popularized by LMAX as a means to provide speed, performance, scalability, transparent persistence and transparent live mirroring. Before being rebranded as Event Sourcing, this type of architectural pattern was known as System Prevalence, yet I was never familiar with it before the LMAX team went public. Has this pattern proved itself in enough production systems that even conservative individuals should feel empowered to embrace it, or is event sourcing / system prevalence an exotic pattern that is best left to the fearless?

    Read the article

  • GIT repository layout for server with multiple projects

    - by Paul Alexander
    One of the things I like about the way I have Subversion set up is that I can have a single main repository with multiple projects. When I want to work on a project I can check out just that project. Like this \main \ProductA \ProductB \Shared then svn checkout http://.../main/ProductA As a new user to git I want to explore a bit of best practice in the field before committing to a specific workflow. From what I've read so far, git stores everything in a single .git folder at the root of the project tree. So I could do one of two things. Set up a separate project for each Product. Set up a single massive project and store products in sub folders. There are dependencies between the products, so the single massive project seems appropriate. We'll be using a server where all the developers can share their code. I've already got this working over SSH & HTTP and that part I love. However, the repositories in SVN are already many GB in size so dragging around the entire repository on each machine seems like a bad idea - especially since we're billed for excessive network bandwidth. I'd imagine that the Linux kernel project repositories are equally large so there must be a proper way of handling this with Git but I just haven't figured it out yet. Are there any guidelines or best practices for working with very large multi-project repositories?

    Read the article

  • Best way to re-use the same django models and admin for multiple apps

    - by kepioo
    Given a reference app (called guide), how can I create additional apps that will reuse the same models/admin/views as guide? The motivation behind this is to be able to control each subapp individually: guide; guideApp1 (exact same models/admin/views as guide); guideApp2 (exact same models/admin/views as guide). In the admin site, I should have: one section for guideApp1 with all the tables defined in guide, applying to guideApp1, and one section for guideApp2 with all the tables defined in guide, applying to guideApp2.

    Read the article

  • Linq to SQL, Repository, IList and Persist All

    - by Dr. Zim
    This concerns a repository that returns IList and uses Linq to SQL as a DAL. Once you do a .ToList(), the IQueryable is gone by the time you exit the repository. This means that I need to send the objects back in to the repo methods .Create(Model model), .Update(Model model), and .Delete(int ID). Assuming that is correct, how do you do the PersistAll()? For example, if you did the following, how would you code that in the repository? Changed a single string property in the object, called .Update(object); changed a different string property in the object, called .Update(object); called .PersistAll(), which would update the database with both changed strings. How would you associate the objects in the repository parameters with the objects in the Linq to SQL data context, especially over multiple calls? I am sure this is a standard thing. Links to examples on the web would be great!
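    One possible shape for such a repository is sketched below, assuming a unit-of-work style where one DataContext lives for the lifetime of the repository and SubmitChanges acts as PersistAll. The Product entity, connection string, and the Attach-based update are illustrative assumptions about typical Linq to SQL usage, not the asker's code; Attach(entity, true) generally needs a version/timestamp column or UpdateCheck.Never on the non-key members.

    ```csharp
    using System;
    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    using System.Linq;

    // Hypothetical Linq to SQL entity; a real project would use its own mapped classes.
    [Table(Name = "Products")]
    public class Product
    {
        [Column(IsPrimaryKey = true)] public int ID { get; set; }
        [Column(UpdateCheck = UpdateCheck.Never)] public string Name { get; set; }
    }

    public class ProductRepository : IDisposable
    {
        // One DataContext per repository instance = one unit of work.
        private readonly DataContext _context;
        public ProductRepository(string connectionString) { _context = new DataContext(connectionString); }

        public Product GetById(int id) =>
            _context.GetTable<Product>().SingleOrDefault(p => p.ID == id);

        public void Create(Product product) => _context.GetTable<Product>().InsertOnSubmit(product);

        public void Update(Product product)
        {
            // Objects loaded by this same context are already tracked; Attach is only
            // needed for detached objects coming from outside. "true" marks it as modified.
            if (_context.GetTable<Product>().GetOriginalEntityState(product) == null)
                _context.GetTable<Product>().Attach(product, true);
        }

        public void Delete(int id)
        {
            var entity = GetById(id);
            if (entity != null) _context.GetTable<Product>().DeleteOnSubmit(entity);
        }

        public void PersistAll() => _context.SubmitChanges();   // flush all pending changes at once

        public void Dispose() => _context.Dispose();
    }
    ```

    With this shape, two Update calls followed by PersistAll behave as in the scenario above: the context tracks both modified strings and writes them in a single SubmitChanges.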

    Read the article

  • Replace with wildcard, in SQL

    - by Jay
    I know MS T-SQL does not support regular expression, but I need similar functionality. Here's what I'm trying to do: I have a varchar table field which stores a breadcrumb, like this: /ID1:Category1/ID2:Category2/ID3:Category3/ Each Category name is preceded by its Category ID, separated by a colon. I'd like to select and display these breadcrumbs but I want to remove the Category IDs and colons, like this: /Category1/Category2/Category3/ Everything between the leading slash (/) up to and including the colon (:) should be stripped out. I don't have the option of extracting the data, manipulating it externally, and re-inserting back into the table; so I'm trying to accomplish this in a SELECT statement. I also can't resort to using a cursor to loop through each row and clean each field with a nested loop, due to the number of rows returned in the SELECT. Can this be done? Thanks all - Jay

    Read the article

  • Javascript regex URL matching

    - by Blondie
    I have this so far: chrome.tabs.getSelected(null, function(tab) { var title = tab.title; var btn = '<a href="' + tab.url + '" onclick="save(\'' + title + '\');"> ' + title + '</a>'; if(tab.url.match('/http:\/\/www.mydomain.com\/version.php/i')) { document.getElementById('link').innerHTML = '<p>' + btn + '</p>'; } }); Basically it should match this URL: http://www.mydomain.com/version.php?* — anything that matches it, even when it includes something like version.php?ver=1, etc. When I use the code above, it doesn't display anything, but when I remove the if statement it works; the problem is that it then shows on other pages too, when it should show only on the matched URL.

    Read the article

  • facebook connect: thumbnail images broken up in FB.Connect.streamPublish pop-up prompt, and on wall

    - by Hoff
    hi there! I'm using facebook connect so that users can publish comments they are leaving on my site on their facebook wall as well. It works as intended, except that in the confirmation pop up, the thumbnail image i provide is broken. Looking at the source, I can see that facebook prepended my image url like this: from: http://www.mysite.com/path/to/my/image.jpg to: http://platform.ak.fbcdn.net/www/app_full_proxy.php?app=303377111175&v=1&size=z&cksum=41a391c9f3a6f3dde2ede9892763c943&src=http%3A%2F%2Fwww.mysite.com%2Fpath%2Fto%2Fmy%2image.jpg The image on the facebook user's wall has the same prepended url, and is also broken for a couple of minutes, after which it's showing up correctly. But obviously, having a broken image in the confirmation window and on your wall for a couple of minutes is not a good experience... Has anybody experienced the same / knows how to work around this issue? Thanks a lot in advance! Martin PS: here's the part of the js call, if it's of any use... attachment = { 'media': [{ 'type': 'image', 'src': 'http://www.mysite.com/path/to/my/image.jpg', 'href': 'http://www.mysite.com/the/current/page' }] }; FB.Connect.streamPublish(user_message, attachment, action_links, target_id, user_message_prompt, fbcallback, false, actor_id)

    Read the article

  • How can I write a clean Repository without exposing IQueryable to the rest of my application?

    - by Simucal
    So, I've read all the Q&As here on SO regarding the subject of whether or not to expose IQueryable to the rest of your project (see here, and here), and I've ultimately decided that I don't want to expose IQueryable to anything but my Model. Because IQueryable is tied to certain persistence implementations, I don't like the idea of locking myself into this. Similarly, I'm not sure how good I feel about classes further down the call chain, outside the repository, modifying the actual query. So, does anyone have any suggestions for how to write a clean and concise Repository without doing this? One problem I see is that my Repository will blow up with a ton of methods for the various things I need to filter my queries on, ending up with a bunch of: IEnumerable GetProductsSinceDate(DateTime date); IEnumerable GetProductsByName(string name); IEnumerable GetProductsByID(int ID); If I were allowing IQueryable to be passed around I could easily have a generic repository that looked like: public interface IRepository<T> where T : class { T GetById(int id); IQueryable<T> GetAll(); void InsertOnSubmit(T entity); void DeleteOnSubmit(T entity); void SubmitChanges(); } However, if you aren't using IQueryable then methods like GetAll() aren't really practical, since lazy evaluation won't be taking place down the line. I don't want to return 10,000 records only to use 10 of them later. What is the answer here? In Conery's MVC Storefront he created another layer called the "Service" layer which received IQueryable results from the repository and was responsible for applying various filters. Is this what I should do, or something similar? Have my repository return IQueryable but restrict access to it by hiding it behind a bunch of filter classes like GetProductByName, which will return a concrete type like IList or IEnumerable?
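    One way to keep IQueryable inside the repository while avoiding the method explosion is to accept a filter expression (and paging) as parameters, compose the query internally, and return only a materialized IEnumerable. The sketch below is illustrative: the interface and names are invented, and an in-memory source stands in for a real Linq to SQL Table<T> or EF set.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Linq.Expressions;

    public interface IReadOnlyRepository<T> where T : class
    {
        T GetById(int id);
        IEnumerable<T> Find(Expression<Func<T, bool>> predicate, int skip = 0, int take = 100);
    }

    public class InMemoryRepository<T> : IReadOnlyRepository<T> where T : class
    {
        private readonly IQueryable<T> _source;        // stands in for a persistence-backed IQueryable
        private readonly Func<T, int> _idSelector;

        public InMemoryRepository(IEnumerable<T> source, Func<T, int> idSelector)
        {
            _source = source.AsQueryable();
            _idSelector = idSelector;
        }

        public T GetById(int id) =>
            _source.AsEnumerable().FirstOrDefault(e => _idSelector(e) == id);

        public IEnumerable<T> Find(Expression<Func<T, bool>> predicate, int skip = 0, int take = 100)
        {
            // Filtering and paging are applied before ToList(), so a real query provider
            // would translate them to SQL and never pull 10,000 rows to use 10.
            return _source.Where(predicate).Skip(skip).Take(take).ToList();
        }
    }
    ```

    Callers then write something like repo.Find(p => p.Name == "Widget", skip: 0, take: 10), and IQueryable never leaves the repository boundary.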

    Read the article

  • IIRF not working with ASP.NET PostBacks?

    - by MNT
    Hi, I have the following scenario. Web server A: public on the internet, IIRF (current version) installed. Web server B: on the intranet, visible to A, where my ASP.NET web app is installed; its name is pgdbtest3. I configure IIRF so that any request targeting the directory /MMS/ on server A is redirected to the corresponding one at http://pgdbtest3/MMS/. The ini file looks like: StatusUrl /iirfStatus RemoteOk RedirectRule ^/MMS$ /MMS/ [I] ProxyPass ^/MMS/(.*)$ http://pgdbtest3/MMS/$1 [I] It is working fine except that any postback causes an error (404 is returned). I have tried many solutions, including removing the action attribute from the form, but with no luck. Please help!

    Read the article

  • Using C# and Repository Factory and the error: The requested database is not defined in configuration

    - by odiseh
    Hi, I am using the Repository Factory for Visual Studio 2008 for a personal project. It generated a class called ProductRepository which inherits from Repository. ProductRepository has a constructor which takes a database name as a string and passes it to its base (i.e. Repository). So when I try to debug my project step by step, I pass my database name to ProductRepository, but it raises the following error: The requested database is not defined in configuration. What's wrong?

    Read the article

  • Very simple regex not working

    - by Thomas Wanner
    I have read that to match a word inside a string using regular expressions (in .NET), I can use the word boundary specifier (\b) within the regex. However, neither of these calls results in any matches: Regex.Match("INSERT INTO TEST(Col1,Col2) VALUES(@p1,@p2)", "\b@p1\b"); Regex.Match("INSERT INTO TEST(Col1,Col2) VALUES(@p1,@p2)", "\bINSERT\b"); Is there anything I am doing wrong?
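    As a point of comparison (an illustration, not necessarily the fix the asker ended up with), the sketch below shows how \b behaves when the pattern reaches the regex engine intact. The general .NET facts it relies on: in a regular C# string literal "\b" is the backspace escape rather than a word boundary, and '@' is not a word character, so \b placed directly before it cannot match.

    ```csharp
    using System;
    using System.Text.RegularExpressions;

    class WordBoundaryDemo
    {
        static void Main()
        {
            const string sql = "INSERT INTO TEST(Col1,Col2) VALUES(@p1,@p2)";

            // In a regular string literal, "\b" becomes the backspace character (U+0008),
            // so the regex engine never sees a word-boundary assertion. Verbatim strings keep it intact.
            Console.WriteLine(Regex.IsMatch(sql, "\bINSERT\b"));    // False: pattern contains backspaces
            Console.WriteLine(Regex.IsMatch(sql, @"\bINSERT\b"));   // True: \b reaches the engine as a boundary

            // '@' is not a word character, so a \b immediately before it can never match.
            Console.WriteLine(Regex.IsMatch(sql, @"\b@p1\b"));      // False
            Console.WriteLine(Regex.IsMatch(sql, @"@p1\b"));        // True: boundary after the trailing '1'
        }
    }
    ```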

    Read the article

  • Nginx A/B testing

    - by Alex
    Hey, I'm trying to do A/B testing and I'm using Nginx for this purpose. My Nginx config file looks like this: events { worker_connections 1024; } error_log /usr/local/experiments/apps/reddit_test/error.log notice; http { rewrite_log on; server { listen 8081; access_log /usr/local/experiments/apps/reddit_test/access.log combined; location / { if ($remote_addr ~ "[02468]$") { rewrite ^(.+)$ /experiment$1 last; } rewrite ^(.+)$ /main$1 last; } location /main { internal; proxy_pass http://www.reddit.com/r/lisp; } location /experiment { internal; proxy_pass http://www.reddit.com/r/haskell; } } } This is kind of working, but CSS and JS files won't load. Can anyone tell me what's wrong with this config file, or what would be the right way to do it? Thanks, Alex

    Read the article

  • Custom Django admin URL + changelist view for custom list filter by Tags

    - by Botondus
    In django admin I wanted to set up a custom filter by tags (tags are introduced with django-tagging) I've made the ModelAdmin for this and it used to work fine, by appending custom urlconf and modifying the changelist view. It should work with URLs like: http://127.0.0.1:8000/admin/reviews/review/only-tagged-vista/ But now I get 'invalid literal for int() with base 10: 'only-tagged-vista', error which means it keeps matching the review edit page instead of the custom filter page, and I cannot figure out why since it used to work and I can't find what change might have affected this. Any help appreciated. Relevant code: class ReviewAdmin(VersionAdmin): def changelist_view(self, request, extra_context=None, **kwargs): from django.contrib.admin.views.main import ChangeList cl = ChangeList(request, self.model, list(self.list_display), self.list_display_links, self.list_filter, self.date_hierarchy, self.search_fields, self.list_select_related, self.list_per_page, self.list_editable, self) cl.formset = None if extra_context is None: extra_context = {} if kwargs.get('only_tagged'): tag = kwargs.get('tag') cl.result_list = cl.result_list.filter(tags__icontains=tag) extra_context['extra_filter'] = "Only tagged %s" % tag extra_context['cl'] = cl return super(ReviewAdmin, self).changelist_view(request, extra_context=extra_context) def get_urls(self): from django.conf.urls.defaults import patterns, url urls = super(ReviewAdmin, self).get_urls() def wrap(view): def wrapper(*args, **kwargs): return self.admin_site.admin_view(view)(*args, **kwargs) return update_wrapper(wrapper, view) info = self.model._meta.app_label, self.model._meta.module_name my_urls = patterns('', # make edit work from tagged filter list view # redirect to normal edit view url(r'^only-tagged-\w+/(?P<id>.+)/$', redirect_to, {'url': "/admin/"+self.model._meta.app_label+"/"+self.model._meta.module_name+"/%(id)s"} ), # tagged filter list view url(r'^only-tagged-(P<tag>\w+)/$', self.admin_site.admin_view(self.changelist_view), {'only_tagged':True}, name="changelist_view"), ) return my_urls + urls Edit: Original issue fixed. I now receive 'Cannot filter a query once a slice has been taken.' for line: cl.result_list = cl.result_list.filter(tags__icontains=tag) I'm not sure where this result list is sliced, before tag filter is applied. Edit2: It's because of the self.list_per_page in ChangeList declaration. However didn't find a proper solution yet. Temp fix: if kwargs.get('only_tagged'): list_per_page = 1000000 else: list_per_page = self.list_per_page cl = ChangeList(request, self.model, list(self.list_display), self.list_display_links, self.list_filter, self.date_hierarchy, self.search_fields, self.list_select_related, list_per_page, self.list_editable, self)

    Read the article

  • What is good practice in .NET system architecture design concerning multiple models and aggregates

    - by BuzzBubba
    I'm designing a larger enterprise architecture and I'm in a doubt about how to separate the models and design those. There are several points I'd like suggestions for: - models to define - way to define models Currently my idea is to define: Core (domain) model Repositories to get data to that domain model from a database or other store Business logic model that would contain business logic, validation logic and more specific versions of forms of data retrieval methods View models prepared for specifically formated data output that would be parsed by views of different kind (web, silverlight, etc). For the first model I'm puzzled at what to use and how to define the mode. Should this model entities contain collections and in what form? IList, IEnumerable or IQueryable collections? - I'm thinking of immutable collections which IEnumerable is, but I'd like to avoid huge data collections and to offer my Business logic layer access with LINQ expressions so that query trees get executed at Data level and retrieve only really required data for situations like the one when I'm retrieving a very specific subset of elements amongst thousands or hundreds of thousands. What if I have an item with several thousands of bids? I can't just make an IEnumerable collection of those on the model and then retrieve an item list in some Repository method or even Business model method. Should it be IQueryable so that I actually pass my queries to Repository all the way from the Business logic model layer? Should I just avoid collections in my domain model? Should I void only some collections? Should I separate Domain model and BusinessLogic model or integrate those? Data would be dealt trough repositories which would use Domain model classes. Should repositories be used directly using only classes from domain model like data containers? This is an example of what I had in mind: So, my Domain objects would look like (e.g.) public class Item { public string ItemName { get; set; } public int Price { get; set; } public bool Available { get; set; } private IList<Bid> _bids; public IQueryable<Bid> Bids { get { return _bids.AsQueryable(); } private set { _bids = value; } } public AddNewBid(Bid newBid) { _bids.Add(new Bid {.... } } Where Bid would be defined as a normal class. Repositories would be defined as data retrieval factories and used to get data into another (Business logic) model which would again be used to get data to ViewModels which would then be rendered by different consumers. I would define IQueryable interfaces for all aggregating collections to get flexibility and minimize data retrieved from real data store. Or should I make Domain Model "anemic" with pure data store entities and all collections define for business logic model? One of the most important questions is, where to have IQueryable typed collections? - All the way from Repositories to Business model or not at all and expose only solid IList and IEnumerable from Repositories and deal with more specific queries inside Business model, but have more finer grained methods for data retrieval within Repositories. So, what do you think? Have any suggestions?

    Read the article

  • How to make a dynamically generated .NET service client read configuration from another location than the default app.config/web.config

    - by Bryan
    Hi, I've currently written code to use the ServiceContractGenerator to generate web service client code based on a wsdl, and then compile it into an assembly in memory using the code dom. I'm then using reflection to set up the binding, endpoint, service values/types, and then ultimately invoke the web service method based on xml configuration that can be altered at run time. This all currently works fine. However, the problem I'm currently running into, is that I'm hitting several exotic web services that require lots of custom binding/security settings. This is forcing me to add more and more configuration into my custom xml configurations, as well as the corresponding updates to my code to interpret and set those binding/security settings in code. Ultimately, this makes adding these 'exotic' services slower, and I can see myself eventually reimplementing the 'system.serviceModel' section of the web or app.config file, which is never a good thing. My question is, and this is where my lack of experience .net and C# shows, is there a way to define the configuration normally found in the web.config or app.config 'system.serviceModel' section somewhere else, and at run time supply this to configuration to the web service client? Is there a way to attach an app.config directly to an assembly as a resource or any other way to supply this configuration to the client? Basically, I'd like attach an app.config only containing a 'system.serviceModel' to the assembly containing a web service client so that it can use its configuration. This way I wouldn't need to handle every configuration under the sun, I could let .net do it for me. Fyi, it's not an option for me to put the configuration for every service in the app.config for the running application. Any help would be greatly appreciated. Thanks in advance! Bryan
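    One option worth mentioning (an assumption about .NET 4+, not something described in the question): a system.serviceModel section can be loaded from an arbitrary file with ConfigurationManager.OpenMappedExeConfiguration and fed to ConfigurationChannelFactory<T>, which reads its WCF settings from that Configuration object instead of the host's app.config/web.config. IClientContract, "client.config" and the endpoint name below are placeholders; in the asker's scenario the contract type would be the one emitted by ServiceContractGenerator.

    ```csharp
    using System.Configuration;
    using System.ServiceModel;
    using System.ServiceModel.Configuration;

    // Placeholder contract standing in for the generated one.
    [ServiceContract]
    public interface IClientContract
    {
        [OperationContract]
        string Echo(string text);
    }

    public static class ExternalConfigClient
    {
        public static IClientContract Create(string configPath, string endpointName)
        {
            // Load a config file that lives next to the generated assembly (or anywhere else)
            // instead of the running application's configuration file.
            var fileMap = new ExeConfigurationFileMap { ExeConfigFilename = configPath };
            Configuration config =
                ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None);

            // The factory resolves bindings/endpoints from the supplied Configuration object.
            // Passing null lets the endpoint address come from that config as well.
            var factory = new ConfigurationChannelFactory<IClientContract>(endpointName, config, null);
            return factory.CreateChannel();
        }
    }
    ```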

    Read the article
