Search Results

Search found 9634 results on 386 pages for 'proxy pattern'.


  • How to create multiple Repository object inside a Repository class using Unit Of Work?

    - by Santosh
    I am a newbie to MVC3 application development. Our current requirements call for the following technologies:

        MVC3 framework
        IoC framework (Autofac) to manage object creation dynamically
        Moq for unit testing
        Entity Framework
        Repository and Unit of Work patterns for the model classes

    I have gone through many articles to get a basic idea of the points above, but I am still a little confused about the Repository and Unit of Work patterns. As I understand it, Unit of Work is a pattern used alongside the Repository pattern so that a single DB context is shared among all repository objects. So here is my design:

    IUnitOfWork.cs

        public interface IUnitOfWork : IDisposable
        {
            IPermitRepository Permit_Repository { get; }
            IRebateRepository Rebate_Repository { get; }
            IBuildingTypeRepository BuildingType_Repository { get; }
            IEEProjectRepository EEProject_Repository { get; }
            IRebateLookupRepository RebateLookup_Repository { get; }
            IEEProjectTypeRepository EEProjectType_Repository { get; }
            void Save();
        }

    UnitOfWork.cs

        public class UnitOfWork : IUnitOfWork
        {
            #region Private Members
            private readonly CEEPMSEntities context = new CEEPMSEntities();
            private IPermitRepository permit_Repository;
            private IRebateRepository rebate_Repository;
            private IBuildingTypeRepository buildingType_Repository;
            private IEEProjectRepository eeProject_Repository;
            private IRebateLookupRepository rebateLookup_Repository;
            private IEEProjectTypeRepository eeProjectType_Repository;
            #endregion

            #region IUnitOfWork Implementation
            public IPermitRepository Permit_Repository
            {
                get
                {
                    if (this.permit_Repository == null)
                    {
                        this.permit_Repository = new PermitRepository(context);
                    }
                    return permit_Repository;
                }
            }

            public IRebateRepository Rebate_Repository
            {
                get
                {
                    if (this.rebate_Repository == null)
                    {
                        this.rebate_Repository = new RebateRepository(context);
                    }
                    return rebate_Repository;
                }
            }
            #endregion
        }

    PermitRepository.cs

        public class PermitRepository : IPermitRepository
        {
            #region Private Members
            private CEEPMSEntities objectContext = null;
            private IObjectSet<Permit> objectSet = null;
            #endregion

            #region Constructors
            public PermitRepository() { }

            public PermitRepository(CEEPMSEntities _objectContext)
            {
                this.objectContext = _objectContext;
                this.objectSet = objectContext.CreateObjectSet<Permit>();
            }
            #endregion

            public IEnumerable<RebateViewModel> GetRebatesByPermitId(int _permitId)
            {
                // need to implement
            }
        }

    PermitController.cs

        public class PermitController : Controller
        {
            #region Private Members
            IUnitOfWork CEEPMSContext = null;
            #endregion

            #region Constructors
            public PermitController(IUnitOfWork _CEEPMSContext)
            {
                if (_CEEPMSContext == null)
                {
                    throw new ArgumentNullException("Object can not be null");
                }
                CEEPMSContext = _CEEPMSContext;
            }
            #endregion
        }

    So here I am wondering how to add a new repository, for example TestRepository.cs, using the same pattern, in a place where I need to create more than one repository object, like:

        RebateRepository rebateRepo = new RebateRepository();
        AddressRepository addressRepo = new AddressRepository();

    Whatever repository object I want to create, I first need a UnitOfWork object, as implemented in the PermitController class. If I followed the same approach inside each individual repository class, that would again break the Unit of Work principle and create multiple object-context instances. Any idea or suggestion will be highly appreciated. Thank you
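
    A minimal sketch of one way out, assuming a dictionary-backed generic accessor (the GetRepository<T> method and TestRepository are illustrative, not part of the original post): let the unit of work hand out every repository, so any new repository class automatically shares the single context.

        // Sketch only: each repository type is constructed once, against the
        // one shared ObjectContext owned by the unit of work.
        // Requires System and System.Collections.Generic.
        public class UnitOfWork : IUnitOfWork
        {
            private readonly CEEPMSEntities context = new CEEPMSEntities();
            private readonly Dictionary<Type, object> repositories = new Dictionary<Type, object>();

            public TRepository GetRepository<TRepository>() where TRepository : class
            {
                object repo;
                if (!repositories.TryGetValue(typeof(TRepository), out repo))
                {
                    // Assumes each repository has a (CEEPMSEntities) constructor,
                    // like PermitRepository above.
                    repo = Activator.CreateInstance(typeof(TRepository), context);
                    repositories[typeof(TRepository)] = repo;
                }
                return (TRepository)repo;
            }

            public void Save() { context.SaveChanges(); }
            public void Dispose() { context.Dispose(); }
        }

    With this, a controller that needs several repositories asks the injected unit of work for each one (var rebateRepo = CEEPMSContext.GetRepository<RebateRepository>();) and no repository ever creates its own context.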


  • Proxy Error 502 "Reason: Error reading from remote server" with Apache 2.2.3 (Debian) mod_proxy and Jetty 6.1.18

    - by Martin
    Apache is receiving requests at port :80 and proxying them to Jetty at port :8080. The proxy server received an invalid response from an upstream server; the proxy server could not handle the request GET /. My dilemma: everything works fine normally (fast requests, and requests lasting a few seconds or a few tens of seconds, are processed OK). Problems occur when request processing takes long (a few minutes?). If I issue the request directly to Jetty at port :8080 instead, it is processed OK. So the problem likely sits somewhere between Apache and Jetty, where I am using mod_proxy. How to solve this? I have already tried some "tricks" related to KeepAlive settings, without luck. Here is my current configuration, any suggestions?

        #keepalive Off                     ## I have tried this, does not help
        #SetEnv force-proxy-request-1.0 1  ## I have tried this, does not help
        #SetEnv proxy-nokeepalive 1        ## I have tried this, does not help
        #SetEnv proxy-initial-not-pooled 1 ## I have tried this, does not help
        KeepAlive 20                       ## I have tried this, does not help
        KeepAliveTimeout 600               ## I have tried this, does not help
        ProxyTimeout 600                   ## I have tried this, does not help
        NameVirtualHost *:80
        <VirtualHost _default_:80>
            ServerAdmin [email protected]
            ServerName www.mydomain.fi
            ServerAlias mydomain.fi mydomain.com mydomain www.mydomain.com
            ProxyRequests On
            ProxyVia On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyRequests Off
            ProxyPass / http://www.mydomain.fi:8080/ retry=1 acquire=3000 timeout=600
            ProxyPassReverse / http://www.mydomain.fi:8080/
            RewriteEngine On
            RewriteCond %{SERVER_NAME} !^www\.mydomain\.fi
            RewriteRule /(.*) http://www.mydomain.fi/$1 [redirect=301L]
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
        </VirtualHost>

    Here is also the debug log from a failing request:

        74.125.43.99 - - [29/Sep/2010:20:15:40 +0300] "GET /?wicket:bookmarkablePage=newWindow:com.mydomain.view.application.reports.SaveReportPage HTTP/1.1" 502 355 "https://www.mydomain.fi/?wicket:interface=:0:2:::" "Mozilla/5.0 (Windows; U; Windows NT 6.1; fi; rv:1.9.2.10) Gecko/20100914 Firefox/3.6.10"
        [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: error reading status line from remote server www.mydomain.fi, referer: https://www.mydomain.fi/?wicket:interface=:0:2:::
        [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: Error reading from remote server returned by /, referer: https://www.mydomain.fi/?wicket:interface=:0:2:::
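
    One observation worth checking, offered as a guess rather than a diagnosis: the error is logged exactly 300 seconds after the request started (20:15:40 vs 20:20:40), which matches Apache 2.2's default global Timeout of 300 seconds. A minimal sketch of aligning that limit with the others, assuming the backend really does need several minutes:

        # Sketch: the global Timeout also bounds reads in 2.2 wherever no more
        # specific timeout takes hold; raise it alongside the proxy timeouts.
        Timeout 600
        ProxyTimeout 600
        ProxyPass / http://www.mydomain.fi:8080/ retry=1 acquire=3000 timeout=600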


  • Is there a bug with Apache 2.2 and content filters (and maybe mod_proxy)?

    - by asciiphil
    I'm running Apache 2.2.15-29 on RHEL 6 (actually Scientific Linux 6.4) and I'm trying to set up a reverse proxy with content rewriting, so all of the links on the proxied web pages are rewritten to reference the proxy host. I'm running into a problem with some of the content rewriting and I'd like to know if this is a bug or if I'm doing something wrong (and how to do it right, if applicable). I'm proxying a subdirectory on an internal host (internal.example.com/foo) onto the root of an external host (external.example.com). I need to rewrite HTML, CSS, and Javascript content to fix all of the URLs. I'm also hosting some content locally on the external host, which I don't think is a problem, but I'm mentioning it here for completeness. My httpd.conf looks roughly like this:

        <VirtualHost *:80>
            ServerName external.example.com
            ServerAlias example.com

            # Serve all local content directly, reverse-proxy all unknown URIs.
            RewriteEngine On
            RewriteRule ^(/(index.html?)?)?$ http://internal.example.com/foo/ [P]
            RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -f [OR]
            RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -d
            RewriteRule ^.*$ - [L]
            RewriteRule ^/~ - [L]
            RewriteRule ^(.*)$ http://internal.example.com$1 [P]

            # Standard header rewriting.
            ProxyPassReverse / http://internal.example.com/foo/
            ProxyPassReverseCookieDomain internal.example.com external.example.com
            ProxyPassReverseCookiePath /foo/ /

            # Strip any Accept-Encoding: headers from the client so we can
            # process the pages as plain text.
            RequestHeader unset Accept-Encoding

            # Use mod_proxy_html to fix URLs in text/html content.
            ProxyHTMLEnable On
            ProxyHTMLURLMap http://internal.example.com/foo/ /
            ProxyHTMLURLMap http://internal.example.com/foo /
            ProxyHTMLURLMap /foo/ /

            ## Use mod_substitute to fix URLs in CSS and Javascript
            #<Location />
            #    AddOutputFilterByType SUBSTITUTE text/css
            #    AddOutputFilterByType SUBSTITUTE text/javascript
            #    Substitute "s|http://internal.example.com/foo/|/|nq"
            #</Location>

            # Use mod_ext_filter to fix URLs in CSS and Javascript
            ExtFilterDefine fixurlcss mode=output intype=text/css cmd="/bin/sed -rf /etc/httpd/fixurls"
            ExtFilterDefine fixurljs mode=output intype=text/javascript cmd="/bin/sed -rf /etc/httpd/fixurls"
            <Location />
                SetOutputFilter fixurlcss;fixurljs
            </Location>
        </VirtualHost>

    The text/html rewriting works just fine. When I use either mod_substitute or mod_ext_filter, the external server sends the pages as Transfer-Encoding: chunked, sends all of the data, and then closes the connection without sending the final, zero-length chunk. Some HTTP clients are unhappy with this. (Chrome won't process any content sent in this way, for example, so the pages don't get CSS applied to them.) Here's a sample wget session:

        $ wget -O /dev/null -S http://external.example.com/include/jquery.js
        --2013-11-01 11:36:36--  http://external.example.com/include/jquery.js
        Resolving external.example.com (external.example.com)... 192.168.0.1
        Connecting to external.example.com (external.example.com)|192.168.0.1|:80... connected.
        HTTP request sent, awaiting response...
          HTTP/1.1 200 OK
          Date: Fri, 01 Nov 2013 15:36:36 GMT
          Server: Apache
          Last-Modified: Tue, 29 Oct 2013 13:09:10 GMT
          ETag: "1d60026-187b8-4e9e0ec273e35"
          Accept-Ranges: bytes
          Vary: Accept-Encoding
          X-UA-Compatible: IE=edge,chrome=1
          Content-Type: text/javascript;charset=utf-8
          Connection: close
          Transfer-Encoding: chunked
        Length: unspecified [text/javascript]
        Saving to: `/dev/null'

            [ <=> ] 100,280 --.-K/s in 0.005s

        2013-11-01 11:36:37 (19.8 MB/s) - Read error at byte 100280 (Success). Retrying.

        --2013-11-01 11:36:38--  (try: 2)  http://external.example.com/include/jquery.js
        Connecting to external.example.com (external.example.com)|192.168.0.1|:80... connected.
        HTTP request sent, awaiting response...
          HTTP/1.1 416 Requested Range Not Satisfiable
          Date: Fri, 01 Nov 2013 15:36:38 GMT
          Server: Apache
          Vary: Accept-Encoding
          Content-Type: text/html;charset=utf-8
          Content-Length: 260
          Connection: close
        The file is already fully retrieved; nothing to do.

    Am I doing something wrong? Am I hitting some sort of Apache bug? What do I need to do to get it working? (Note that I'd prefer solutions that work within RHEL-6-packaged RPMs; upgrading to Apache 2.4 would be a last resort, as we have a lot of infrastructure built around 2.2 on this system at the moment.)


  • Static factory pattern with EJB3/JBoss

    - by purecharger
    I'm fairly new to EJBs and full-blown application servers like JBoss, having written and worked with special-purpose standalone Java applications for most of my career, with limited use of JEE. I'm wondering about the best way to adapt a commonly used design pattern to EJB3 and JBoss: the static factory pattern. In fact, this is Item #1 in Joshua Bloch's Effective Java book (2nd edition). I'm currently working with the following factory:

        public class CredentialsProcessorFactory {

            private static final Log log = LogFactory.getLog(CredentialsProcessorFactory.class);

            private static Map<CredentialsType, CredentialsProcessor> PROCESSORS =
                    new HashMap<CredentialsType, CredentialsProcessor>();

            static {
                PROCESSORS.put(CredentialsType.CSV, new CSVCredentialsProcessor());
            }

            private CredentialsProcessorFactory() {}

            public static CredentialsProcessor getProcessor(CredentialsType type) {
                CredentialsProcessor p = PROCESSORS.get(type);
                if (p == null)
                    throw new IllegalArgumentException("No CredentialsProcessor registered for type " + type.toString());
                return p;
            }
        }

    However, in the implementation classes of CredentialsProcessor I require injected resources such as a PersistenceContext, so I have made the CredentialsProcessor interface a @Local interface and marked each of the implementations with @Stateless. Now I can look them up in JNDI and use the injected resources. But now I have a disconnect, because I am not using the factory anymore. My first thought was to change the getProcessor(CredentialsType) method to do a JNDI lookup and return the SLSB instance that is required, but then I need to configure and pass the proper qualified JNDI name. Before I go down that path, I wanted to do more research on accepted practices. How is this design pattern treated in EJB3 / JEE?
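
    One commonly suggested shape, sketched here under the assumption that each processor is a @Stateless bean exposing the @Local CredentialsProcessor interface (the factory bean and its names are illustrative): keep the factory, but make it a session bean and let the container inject the processors, so their PersistenceContext wiring still happens.

        // Sketch only: a container-managed replacement for the static factory.
        @Stateless
        public class CredentialsProcessorFactoryBean {

            // beanName ties the interface-typed field to a concrete SLSB.
            @EJB(beanName = "CSVCredentialsProcessor")
            private CredentialsProcessor csvProcessor;

            public CredentialsProcessor getProcessor(CredentialsType type) {
                switch (type) {
                    case CSV:
                        return csvProcessor;
                    default:
                        throw new IllegalArgumentException(
                                "No CredentialsProcessor registered for type " + type);
                }
            }
        }

    Callers then inject CredentialsProcessorFactoryBean instead of calling the static method, and the map-in-a-static-block disappears in favor of container wiring.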


  • Factory pattern vs ease-of-use?

    - by Curtis White
    Background: I am extending the ASP.NET Membership with custom classes and extra tables. The ASP.NET MembershipUser has a protected constructor and a public method to read the data from the database. I have extended the database structure with custom tables and associated classes. Instead of using a static method to create a new member, as in the original API, I allow the code to instantiate a simple object and fill in the data, because there are several entities.

    Original pattern #1 (protected constructor):

        static CreateUser(string mydata, string mydata, ...)
        User.Data = mydata;
        User.Update()

    My preferred pattern #2 (public constructor):

        newUser = new MembershipUser();
        newUser.data = ...
        newUser.ComplexObject.Data = ...
        newUser.Insert()
        newUser.Load(string key)

    I find pattern #2 easier and more natural to use, but method #1 is more atomic and ensured to contain proper data. I'd like to hear any opinions on the pros and cons. The problem in my mind is that I prefer a simple CRUD object, but I am also trying to utilize the underlying API, and these approaches do not match completely. For example, the API has methods like UnlockUser() and a read-only property for IsLockedOut.


  • Java - Is this a bad design pattern?

    - by Walter White
    Hi all, in our application I have seen code written like this:

    User.java (User entity):

        public class User {
            protected String firstName;
            protected String lastName;
            ... getters/setters (regular POJO)
        }

    UserSearchCommand:

        public class UserSearchCommand {
            protected List<User> users;
            protected int currentPage;
            protected int sortColumnIndex;
            protected SortOrder sortOrder;

            // the current user we're editing, if at all
            protected User user;

            public String getFirstName() { return (user.getFirstName()); }
            public String getLastName() { return (user.getLastName()); }
        }

    Now, from my experience, this pattern (or anti-pattern) looks bad to me. For one, we're mixing several concerns together. While they're all user-related, it deviates from typical POJO design. If we're going to go this route, then shouldn't we do this instead?

        public class UserSearchCommand {
            protected List<User> users;
            protected int currentPage;
            protected int sortColumnIndex;
            protected SortOrder sortOrder;

            // the current user we're editing, if at all
            protected User user;

            public User getUser() { return (user); }
        }

    Simply return the user object, and then we can call whatever methods on it as we wish? Since this is quite different from typical bean development, JSR 303 bean validation doesn't work for this model and we have to write validators for every bean. Does anyone else see anything wrong with this design pattern, or am I just being picky as a developer? Walter


  • RDP through TCP Proxy

    - by johng100
    Hi, first time on Stack Overflow and I'm hoping someone can help me. I'm looking at a proof of concept to pass RDP traffic through a TCP proxy/tunnel which will pass through firewalls using HTTPS. The problem has to do with deploying images to machines, so it can't be assumed that the .NET framework will be present, which is why C++ is being used at the deployment end of a connection. The basic system I have at present is a program which listens for client connections on a port, then passes any data to a WCF service, which stores it as a byte array. A deployment machine (using gSOAP and C++) polls the WCF service for messages and, if it finds them, passes the data on to the target server process via sockets. I know this sounds horrible, but it works for simple test client and server programs passing data to and from each other via this WCF/C++/C# proxy layer. But I have to support traffic from RDP, VNC and possibly others, so I need a transparent proxy to do this, and I'm wondering whether the above approach is worth pursuing. I've read up on SSH tunneling and that seems a possibility. My basic question is: is it possible to tunnel RDP traffic over HTTPS using custom code? Thanks, John
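
    On the SSH-tunneling route mentioned above, a minimal illustration of how RDP is usually carried through such a tunnel (hostnames and ports are placeholders, and this assumes an SSH server is reachable from the client):

        # Sketch: local port 13389 is forwarded through ssh-server to the
        # RDP target's port 3389; the RDP client then connects to localhost.
        ssh -L 13389:target-server:3389 user@ssh-server

    The RDP client is pointed at localhost:13389 and never needs to know about the tunnel; the same pattern works for VNC by forwarding port 5900 instead.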


  • Silverlight Async Design Pattern Issue

    - by Mike Mengell
    I'm in the middle of a Silverlight application and I have a function which needs to call a web service and, using the result, complete the rest of the function. My issue is that I would normally have made a synchronous web service call, got the result, and carried on with the function using it. As Silverlight doesn't support synchronous web service calls without additional custom classes to mimic them, I figure it would be best to go with the flow of async rather than fight it. So my question is about the best design pattern for working with async calls in program flow. In the following example I want to use the myFunction TypeId parameter depending on the return value of the web service call, but I don't want to call the web service until this function is called. How can I alter my code design to allow for the async call?

        string _myPath;

        bool myFunction(Guid TypeId)
        {
            WS_WebService1.WS_WebService1SoapClient proxy =
                new WS_WebService1.WS_WebService1SoapClient();
            proxy.GetPathByTypeIdCompleted +=
                new System.EventHandler<WS_WebService1.GetPathByTypeIdCompletedEventArgs>(proxy_GetPathByTypeIdCompleted);
            proxy.GetPathByTypeIdAsync(TypeId);
            // Get return value
            if (_myPath == "\\Server1")
            {
                // Use the TypeId parameter in here
            }
        }

        void proxy_GetPathByTypeIdCompleted(object sender, WS_WebService1.GetPathByTypeIdCompletedEventArgs e)
        {
            string server = e.Result.Server;
            _myPath = '\\' + server;
        }

    Thanks in advance, Mike
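
    One sketch of the usual restructuring, with the caveat that the callback parameter and names are illustrative rather than from the original code: let the continuation travel with the call, so everything that depends on the result runs when the result arrives.

        // Sketch only: the caller passes in what should happen afterwards.
        void MyFunction(Guid typeId, Action<bool> onCompleted)
        {
            var proxy = new WS_WebService1.WS_WebService1SoapClient();
            proxy.GetPathByTypeIdCompleted += (sender, e) =>
            {
                string myPath = "\\" + e.Result.Server;
                bool ok = false;
                if (myPath == "\\Server1")
                {
                    // Use the typeId parameter in here.
                    ok = true;
                }
                onCompleted(ok);   // the "rest of the function" resumes here
            };
            proxy.GetPathByTypeIdAsync(typeId);
        }

    A caller then writes MyFunction(id, ok => { /* continue the flow */ }); instead of testing a return value.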


  • pattern for the following condition in java

    - by zahir hussain
    Hi, I want to know how to write a pattern. For example, the text is:

        "AboutGoogle AdWords Drive traffic and customers to your site. Pay through Cheque, Net Banking or Credit Card. Google Toolbar Add a search box to your browser. Google SMS To find out local information simply SMS to 54664. Gmail Free email with 7.2GB storage and less spam. Try Gmail today. Our ProductsHelp Help with Google Search, Services and ProductsGoogle Web Search Features Translation, I'm Feeling Lucky, CachedGoogle Services & Tools Toolbar, Google Web APIs, ButtonsGoogle Labs Ideas, Demos, ExperimentsFor Site OwnersAdvertising AdWords, AdSenseBusiness Solutions Google Search Appliance, Google Mini, WebSearchWebmaster Central One-stop shop for comprehensive info about how Google crawls and indexes websitesSubmit your content to Google Add your site, Google SitemapsOur CompanyPress Center News, Images, ZeitgeistJobs at Google Openings, Perks, CultureCorporate Info Company overview, Philosophy, Diversity, AddressesInvestor Relations Financial info, Corporate governanceMore GoogleContact Us FAQs, Feedback, NewsletterGoogle Logos Official Logos, Holiday Logos, Fan LogosGoogle Blog Insights to Google products and cultureGoogle Store Pens, Shirts, Lava lamps©2010 Google - Privacy Policy - Terms of Service"

    I have to search for some words, for example "google insights", so how do I write the code in Java? I have written a small piece of code; please check it and answer my question. This code only finds where the search word is, but I also need to display some words in front of the search word and some words behind it, similar to Google search results. My code is:

        Pattern p = Pattern.compile("(?i)(.*?)" + search + "");
        Matcher m = p.matcher(full);
        String title = "";
        while (m.find() == true) {
            title = m.group(1);
            System.out.println(title);
        }

    Here full is the original content and search is the search word. Thanks in advance.
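
    A sketch of one way to get the surrounding context, assuming plain-text input (the five-word window and the sample strings are arbitrary choices, not from the post):

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class SnippetDemo {
            public static void main(String[] args) {
                String full = "you can use google insights for search to compare trends";
                String search = "google insights";
                // (?:\S+\s+){0,5} captures up to five words before the term;
                // Pattern.quote() keeps the user's search string literal.
                Pattern p = Pattern.compile(
                        "((?:\\S+\\s+){0,5})(" + Pattern.quote(search) + ")((?:\\s+\\S+){0,5})",
                        Pattern.CASE_INSENSITIVE);
                Matcher m = p.matcher(full);
                while (m.find()) {
                    System.out.println("..." + m.group(1) + m.group(2) + m.group(3) + "...");
                }
            }
        }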


  • CGLIB proxy error after spring bean definition loading into XmlWebApplicationContext at runtime

    - by VasylV
    I load additional singleton bean definitions at runtime from an external jar file into the existing XmlWebApplicationContext of my application:

        BeanFactory beanFactory = xmlWebApplicationContext.getBeanFactory();
        DefaultListableBeanFactory defaultFactory = (DefaultListableBeanFactory) beanFactory;
        final URL url = new URL("external.jar");
        final URL[] urls = {url};
        ClassLoader loader = new URLClassLoader(urls, this.getClass().getClassLoader());
        defaultFactory.setBeanClassLoader(loader);
        final ClassPathBeanDefinitionScanner scanner = new ClassPathBeanDefinitionScanner(defaultFactory);
        final DefaultResourceLoader resourceLoader = new DefaultResourceLoader();
        resourceLoader.setClassLoader(loader);
        scanner.setResourceLoader(resourceLoader);
        scanner.scan("com.*");
        Object bean = xmlWebApplicationContext.getBean("externalBean");

    After all of the above, xmlWebApplicationContext contains all the external bean definitions. But when I try to get a bean from the context, an exception is thrown: "Couldn't generate CGLIB proxy for class ...". I saw in debug mode that in the bean initialization process a proxy is first generated by org.springframework.aop.aspectj.autoproxy.AspectJAwareAdvisorAutoProxyCreator, and then it tries to generate a proxy with org.springframework.aop.framework.autoproxy.BeanNameAutoProxyCreator but fails with the mentioned exception.


  • Multiple leaf methods problem in composite pattern

    - by Ondrej Slinták
    At work we are developing a PHP application that will later be re-programmed in Java. With some basic knowledge of Java, we are trying to design everything to be easily re-written, without any headaches. An interesting problem came up when we tried to implement the composite pattern with a huge number of methods in the leaves. What we are trying to achieve (not using interfaces, it's just an example):

        class Composite { ... }

        class LeafOne {
            public function Foo( );
            public function Moo( );
        }

        class LeafTwo {
            public function Bar( );
            public function Baz( );
        }

        $c = new Composite( Array( new LeafOne( ), new LeafTwo( ) ) );
        // will call method Foo in all classes in composite that contain this method
        $c->Foo( );

    It seems like a pretty classic composite pattern, but the problem is that we will have quite a lot of leaf classes, and each of them might have ~5 methods (of which a few might differ from the others). One of our solutions, which seems to be the best one so far and might actually work, is using the __call magic method to call methods in the leaves. Unfortunately, we don't know if there is an equivalent of it in Java. So the actual question is: is there a better solution for this, using code that could eventually be easily re-coded in Java? Or do you recommend any other solution? Perhaps there's some different, better pattern I could use here. In case something is unclear, just ask and I'll edit this post.
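
    Java has no direct __call equivalent, but reflection can give the composite the same "call it on whoever has it" dispatch. A rough sketch, with hypothetical class names, of how that could look once ported:

        import java.lang.reflect.Method;
        import java.util.Arrays;
        import java.util.List;

        public class Composite {
            private final List<Object> leaves;

            public Composite(Object... leaves) {
                this.leaves = Arrays.asList(leaves);
            }

            // Invoke `name` on every leaf that declares a matching method;
            // leaves without it are silently skipped, as in the PHP version.
            public void call(String name, Object... args) {
                for (Object leaf : leaves) {
                    for (Method m : leaf.getClass().getMethods()) {
                        if (m.getName().equals(name)
                                && m.getParameterTypes().length == args.length) {
                            try {
                                m.invoke(leaf, args);
                            } catch (ReflectiveOperationException e) {
                                throw new RuntimeException(e);
                            }
                            break;
                        }
                    }
                }
            }
        }

        // Usage: new Composite(new LeafOne(), new LeafTwo()).call("Foo");

    The trade-off is the same as with __call: no compile-time checking, which is worth weighing against an explicit interface-per-method-group design.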


  • PHP preg_match: a pattern which satisfies all MySQL field names (including 'table.field' formations)

    - by gsquare567
    I need a pattern which matches MySQL field names, but with the option of having a table name before the field name. Examples:

        mytable.myfield
        myfield
        my4732894__7289FiEld

    Here's what I tried:

        $pattern = "/^[a-zA-Z0-9_]*?[\.[a-zA-Z0-9_]]?$/";

    This worked for what I needed before, which was just the field name:

        $pattern = "/^[a-zA-Z0-9_]*$/";

    Any ideas why my addition isn't working? Maybe I'm making up regex syntax, so I'll explain what I added. The first "?" is to say that it isn't greedy, i.e. it will stop if the next part, namely "[.[a-zA-Z0-9_]]?", is satisfied. Now, that second part is just the same as the first, except it is optional (hence the "?" at the end) and it starts with a period (hence the "[." and "]" wrapping my old clause). And obviously, the "^" and "$" represent the beginning and end of the string. So... any ideas? (Also, I'm a tad confused as to why I need to put in those "/"s at the beginning and end anyway, so if you could tell me why they're required, that'd be awesome.) Thanks a lot! (And thanks for reading all of this, if you actually did... it's quite a ramble.)
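
    For what it's worth, a sketch of a corrected pattern: square brackets don't nest in PCRE (a "[" inside a character class is just a literal bracket), so grouping has to be done with parentheses, and the "/" characters are simply the delimiters PCRE requires around every pattern.

        <?php
        // Optional "table." prefix, then the field name itself.
        $pattern = '/^([a-zA-Z0-9_]+\.)?[a-zA-Z0-9_]+$/';

        var_dump(preg_match($pattern, 'mytable.myfield'));      // int(1)
        var_dump(preg_match($pattern, 'myfield'));              // int(1)
        var_dump(preg_match($pattern, 'my4732894__7289FiEld')); // int(1)
        var_dump(preg_match($pattern, 'table..field'));         // int(0)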


  • Using proxy models

    - by smallB
    I've created a proxy model by subclassing QAbstractProxyModel and connected it as a model to my view. I also set up a source model for this proxy model. Unfortunately something is wrong, because I'm not getting anything displayed in my list view (it works perfectly when I supply my model directly to the view, but when I supply this proxy model it just doesn't work). Here are some snippets from my code:

        #ifndef FILES_PROXY_MODEL_H
        #define FILES_PROXY_MODEL_H

        #include <QAbstractProxyModel>
        #include "File_List_Model.h"

        class File_Proxy_Model: public QAbstractProxyModel
        {
        public:
            explicit File_Proxy_Model(File_List_Model* source_model)
            {
                setSourceModel(source_model);
            }

            virtual QModelIndex mapFromSource(const QModelIndex & sourceIndex) const
            {
                return index(sourceIndex.row(), sourceIndex.column());
            }

            virtual QModelIndex mapToSource(const QModelIndex & proxyIndex) const
            {
                return index(proxyIndex.row(), proxyIndex.column());
            }

            virtual int columnCount(const QModelIndex & parent = QModelIndex()) const
            {
                return sourceModel()->columnCount();
            }

            virtual int rowCount(const QModelIndex & parent = QModelIndex()) const
            {
                return sourceModel()->rowCount();
            }

            virtual QModelIndex index(int row, int column, const QModelIndex & parent = QModelIndex()) const
            {
                return createIndex(row, column);
            }

            virtual QModelIndex parent(const QModelIndex & index) const
            {
                return QModelIndex();
            }
        };

        #endif // FILES_PROXY_MODEL_H

    And this is a dialog class:

        Line_Counter::Line_Counter(QWidget *parent) :
            QDialog(parent),
            model_(new File_List_Model(this)),
            proxy_model_(new File_Proxy_Model(model_)),
            sel_model_(new QItemSelectionModel(proxy_model_, this))
        {
            setupUi(this);
            setup_mvc_();
        }

        void Line_Counter::setup_mvc_()
        {
            listView->setModel(proxy_model_);
            listView->setSelectionModel(sel_model_);
        }
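
    A likely culprit, offered as a sketch rather than a verified fix: QAbstractProxyModel's default data() implementation fetches sourceModel()->data(mapToSource(proxyIndex)), but the mapToSource() above returns one of the proxy's own indexes (via index()/createIndex()), so the source model never recognizes it. Mapping into the source model's index space usually makes the view show data:

        // Sketch: return an index that actually belongs to the source model.
        virtual QModelIndex mapToSource(const QModelIndex & proxyIndex) const
        {
            if (!proxyIndex.isValid())
                return QModelIndex();
            return sourceModel()->index(proxyIndex.row(), proxyIndex.column());
        }

    (On Qt 4.8+ a pass-through proxy can also simply derive from QIdentityProxyModel, which implements all of these mappings correctly out of the box.)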


  • Couldn't upload files to Sharepoint site while passing through Squid Proxy

    - by Ecio
    Hi all, we have this issue: one of our employees is collaborating with a supplier, and he needs to upload documents to a SharePoint site hosted on the supplier's main site. In our environment we use a Squid proxy to let people browse the net (we have NTLM authentication, and users authenticate transparently when using IE and FF). It seems that this specific SharePoint site uses Integrated Windows Authentication only, and according to some research on the net this can have trouble with proxies. More specifically, we have tried two Squid versions: with Squid 3.0 we are unable to log in to the site (the browser loads an empty page); with Squid 2.7 (which supports "connection pinning") we are able to log in to the site and move through its different sections, BUT when we try to upload a file that is bigger than a couple of kilobytes (e.g. 10KB) the browser loads an error page (I think it's a 401 Unauthorized, but I must verify it). We've tried changing a couple of Squid options (in 2.7); what we got is that when you try to upload the file you get an authentication box (just like the initial login), and it refuses to go on even if you enter the same authentication credentials. What's really strange is that when you try to upload a small file (e.g. a 1KB text or binary file) the upload succeeds. I initially thought that maybe something was misconfigured on their SharePoint site, but I've also tried this site: www.xsolive.com (it's a SharePoint 2007 demo site), and I've experienced the same problem. Has any of you experienced such behaviour? Thanks! Of course we've suggested that the supplier also activate Basic+SSL, and we're waiting for their reply.


  • Flash plugin locks up Firefox, Chrome and Safari behind a corporate proxy, IE6 works fine

    - by Shevek
    At work I am forced by corporate policy to use IE6. Obviously this is not so good, so I use FF for most of my browsing. However, there is a problem once I have installed the Flash plug-in: FF locks up when trying to load Flash media. Looking at the status bar at the time of the lock-up, it appears this happens when the browser tries to get cross-domain data. The Flash ActiveX plug-in in IE does not suffer from this issue. I have tried it in a brand new FF profile with Flash as the only plug-in, and it locks up. We have 2 different proxy servers and both exhibit the same problem. I have also tried Chrome and Safari, and both lock up with the plug-in installed. So, has anyone else had this problem and solved it? Or is there any way to disable cross-domain data access in the Flash plug-in? Or is there any way to disable the "This site needs an additional plug-in" ribbon which appears when the plug-in is not installed? Many thanks!


  • configure Squid3 proxy server on Ubuntu with caching and logging

    - by Panshul
    I have an Ubuntu 11.10 machine with Squid3 installed. When I configure Squid with http_access allow all, everything works fine. My current configuration (mostly default) is as follows:

        2012/09/10 13:19:57| Processing Configuration File: /etc/squid3/squid.conf (depth 0)
        2012/09/10 13:19:57| Processing: acl manager proto cache_object
        2012/09/10 13:19:57| Processing: acl localhost src 127.0.0.1/32 ::1
        2012/09/10 13:19:57| Processing: acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
        2012/09/10 13:19:57| Processing: acl SSL_ports port 443
        2012/09/10 13:19:57| Processing: acl Safe_ports port 80          # http
        2012/09/10 13:19:57| Processing: acl Safe_ports port 21          # ftp
        2012/09/10 13:19:57| Processing: acl Safe_ports port 443         # https
        2012/09/10 13:19:57| Processing: acl Safe_ports port 70          # gopher
        2012/09/10 13:19:57| Processing: acl Safe_ports port 210         # wais
        2012/09/10 13:19:57| Processing: acl Safe_ports port 1025-65535  # unregistered ports
        2012/09/10 13:19:57| Processing: acl Safe_ports port 280         # http-mgmt
        2012/09/10 13:19:57| Processing: acl Safe_ports port 488         # gss-http
        2012/09/10 13:19:57| Processing: acl Safe_ports port 591         # filemaker
        2012/09/10 13:19:57| Processing: acl Safe_ports port 777         # multiling http
        2012/09/10 13:19:57| Processing: acl CONNECT method CONNECT
        2012/09/10 13:19:57| Processing: http_access allow manager localhost
        2012/09/10 13:19:57| Processing: http_access deny manager
        2012/09/10 13:19:57| Processing: http_access deny !Safe_ports
        2012/09/10 13:19:57| Processing: http_access deny CONNECT !SSL_ports
        2012/09/10 13:19:57| Processing: http_access allow localhost
        2012/09/10 13:19:57| Processing: http_access deny all
        2012/09/10 13:19:57| Processing: http_port 3128
        2012/09/10 13:19:57| Processing: coredump_dir /var/spool/squid3
        2012/09/10 13:19:57| Processing: refresh_pattern ^ftp: 1440 20% 10080
        2012/09/10 13:19:57| Processing: refresh_pattern ^gopher: 1440 0% 1440
        2012/09/10 13:19:57| Processing: refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        2012/09/10 13:19:57| Processing: refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
        2012/09/10 13:19:57| Processing: refresh_pattern . 0 20% 4320
        2012/09/10 13:19:57| Processing: http_access allow all
        2012/09/10 13:19:57| Processing: cache_mem 512 MB
        2012/09/10 13:19:57| Processing: logformat squid3 %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru
        2012/09/10 13:19:57| Processing: access_log /home/panshul/squidCache/log/access.log squid3

    The problem starts when I enable the following line:

        access_log /home/panshul/squidCache/log/access.log

    I start to get a "proxy server is refusing connections" error in the browser. On commenting out the above line in my config, things go back to normal. The second problem starts when I add the following line to my config:

        cache_dir ufs /home/panshul/squidCache/cache 100 16 256

    The Squid server fails to start. Any suggestions on what I am missing in the config? Please help!
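
    Both symptoms are consistent with Squid's runtime user not being able to write under /home/panshul/squidCache/. That is an assumption rather than a diagnosis, but the usual sketch of a fix on Ubuntu (where squid3 runs as the proxy user) looks like this:

        # Sketch: give the squid user the directories it needs, build the
        # cache structure, then restart.
        sudo mkdir -p /home/panshul/squidCache/log /home/panshul/squidCache/cache
        sudo chown -R proxy:proxy /home/panshul/squidCache
        sudo squid3 -z          # initialize the cache_dir structure
        sudo service squid3 restart

    If the service still refuses to start, /var/log/squid3/cache.log normally names the exact directory or permission it choked on.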


  • Nginx proxy domain to another domain with no change URL

    - by Evgeniy
    My question is in the subject. I have one domain; this is nginx's config for it:

        server {
            listen 80;
            server_name connect3.domain.ru www.connect3.domain.ru;
            access_log /var/log/nginx/connect3.domain.ru.access.log;
            error_log /var/log/nginx/connect3.domain.ru.error.log;
            root /home/httpd/vhosts/html;
            index index.html index.htm index.php;

            location ~* \.(avi|bin|bmp|css|dmg|doc|docx|dpkg|exe|flv|gif|htm|html|ico|ics|img|jpeg|jpg|js|m2a|m2v|mov|mp3|mp4|mpeg|mpg|msi|pdf|pkg|png|pps|ppt|pptx|ps|rar|rss|rtf|swf|tif|tiff|txt|wmv|xhtml|xls|xml|zip)$ {
                root /home/httpd/vhosts/html;
                access_log off;
                expires 1d;
            }

            location ~ /\.(git|ht|svn) {
                deny all;
            }

            location / {
                #rewrite ^ http://connect2.domain.ru/;
                proxy_pass http://127.0.0.1:8080/;
                proxy_redirect off;
                proxy_hide_header "Cache-Control";
                add_header Cache-Control "no-store, no-cache, must-revalidate, post-check=0, pre-check=0";
                proxy_hide_header "Pragma";
                add_header Pragma "no-cache";
                expires -1;
                add_header Last-Modified $sent_http_Expires;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    I need to proxy the connect3.domain.ru host to connect2.domain.ru, but with no URL change in the browser's address bar. My commented-out rewrite line could solve this problem, but it's just a rewrite, so I cannot stay on the same URL. I know this question is easy, but please help. Thank you.
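
    A sketch of the usual approach, assuming connect2.domain.ru is resolvable from the nginx box: point proxy_pass at the other domain and pin the Host header, so the browser stays on connect3.domain.ru while the content comes from connect2.

        location / {
            # Sketch: fetch from connect2 while the address bar shows connect3.
            proxy_pass http://connect2.domain.ru;
            proxy_set_header Host connect2.domain.ru;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    Unlike the commented-out rewrite, this is a proxied fetch rather than an HTTP redirect, so the client never sees the second domain.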


  • Dynamic fowarding with SOCKS5 proxy [on hold]

    - by bh3244
    I'm building my own SOCKS5 client and HTTP library and am having trouble figuring out how things work with dynamic port forwarding. So far I can connect successfully with my SOCKS5 client, but from there on I am stuck. I am using the ssh -D command. Given my local machine "home" and my server "server", if I wanted to use "server" as a proxy for all connections, I understand I would type ssh -D <localport> <serverhostname> on my local machine "home". As I understand it, this command has ssh accept connections using the SOCKS5 protocol. So now if I want to connect to google.com (74.125.224.72:80) and issue a GET for the front page, I assume I would send the SOCKS5 client connection request, the server would respond with 0x00 ("succeeded"), and from then on I am connected; I would send the HTTP GET request and the server would respond accordingly with the data. Now, if I want to navigate to a different website, must I issue another SOCKS5 connection request for that site's IP/hostname? I'm confused whether this is the way it is done, or whether there is a program listening on the local port of the "server" handling outgoing and incoming data. To reiterate: do SOCKS5 proxies work by sending repeated SOCKS5 connection requests for different addresses, or is there just one connection to a local port on "server" and another program on "server" that handles the outgoing connection to the internet by using that local port to send and receive data to/from "home"?


  • Load balancing a Windows File Share using HA-Proxy

    - by NathanE
    After pulling my hair out over DFS, I just had this weird and potentially dangerous idea come into my head whereby, just possibly, I might be able to use HAProxy to load balance a file share between servers. I've done some remedial packet traces and it does appear that TCP port 445 is the only thing involved in using Windows file sharing. For many years I've thought that UDP 139, 135 etc. were also involved in at least establishing the connection, but apparently not! So I set up a basic test:

        listen SMBTest *:445
            mode tcp
            server Smb1 172.16.61.201:445
            server Smb2 172.16.61.202:445

    And you'll never guess what... it works??? (!) Now obviously there is the whole concern about synchronisation between the file servers (of course). That could easily be taken care of with a little bit of Robocopy script. And considering I only need a HA read-only file share, there wouldn't be any issues with regard to file locking etc. Can anyone tell me if what I'm playing with here is fire? I really didn't think it would work at all, and now I'm a little shocked. What would be the downsides? Could this be relied upon for a production environment?
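
    For the synchronisation piece mentioned above, a sketch of the sort of scheduled Robocopy mirror that could keep a read-only replica aligned (share paths and retry values are placeholders):

        :: Sketch: mirror the master share to the second node on a schedule.
        robocopy \\smb1\share \\smb2\share /MIR /R:2 /W:5 /LOG:C:\smb-sync.log

    /MIR makes the destination an exact mirror (including deletions), which is exactly what you want for a read-only replica and exactly what you don't want if anyone writes to the second node directly.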


  • Proxy settings in Java mail API

    - by coder
    I've written a piece of Java code where user1 sends email to user2. I'm behind a proxy and hence I'm getting a javax.mail.MessagingException. How do I solve this problem? Here is the code:

        import java.util.Properties;
        import javax.mail.Message;
        import javax.mail.MessagingException;
        import javax.mail.PasswordAuthentication;
        import javax.mail.Session;
        import javax.mail.Transport;
        import javax.mail.internet.InternetAddress;
        import javax.mail.internet.MimeMessage;

        public class Mail {
            public static void main(String[] args) {
                final String username = "[email protected]";
                final String password = "abc";

                Properties props = new Properties();
                props = System.getProperties();
                props.put("mail.smtp.auth", "true");
                props.put("mail.smtp.starttls.enable", "true");
                props.put("mail.smtp.host", "smtp.gmail.com");
                props.put("mail.smtp.port", "587");

                Session session = Session.getInstance(props,
                        new javax.mail.Authenticator() {
                            protected PasswordAuthentication getPasswordAuthentication() {
                                return new PasswordAuthentication(username, password);
                            }
                        });

                try {
                    Message message = new MimeMessage(session);
                    message.setFrom(new InternetAddress("[email protected]"));
                    message.setRecipients(Message.RecipientType.TO,
                            InternetAddress.parse("[email protected]"));
                    message.setSubject("Testing Subject");
                    message.setText("Dear Mail Crawler," + "\n\n No spam to my email, please!");
                    Transport.send(message);
                    System.out.println("Done");
                } catch (MessagingException e) {
                    throw new RuntimeException(e);
                }
            }
        }
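
    If the proxy in question is a SOCKS proxy, JavaMail (from release 1.5 onward) can route the SMTP connection through it with two session properties; a sketch, with a placeholder proxy host. Note this does not help with HTTP-only corporate proxies, which cannot carry SMTP traffic at all.

        // Sketch: requires JavaMail 1.5+. Older versions need the JVM-wide
        // socksProxyHost/socksProxyPort system properties instead.
        props.put("mail.smtp.socks.host", "proxy.example.com");
        props.put("mail.smtp.socks.port", "1080");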


  • Setup a Reverse Proxy with Nginx and Apache on EC2

    - by heavymark
    Good day, I am currently using the free Amazon EC2 micro instance to learn Linux and server setup. I wish to set up Nginx as a reverse web proxy. I found a great article on Media Temple on how to do it: http://wiki.mediatemple.net/w/Using_Nginx_as_a_Reverse_Web_Proxy The directions work for almost any server except EC2. One difference between EC2 and Media Temple is how IPs work. In general, EC2 instances do not know their own elastic IP. So when following the wiki directions, in the virtual hosts, instead of myip:80 I put *:80. When using just Apache this works perfectly. In the Apache virtual hosts I put "127.0.0.1:80" and in Nginx I put *:80. Apache restarts, but Nginx produces an error saying that it cannot bind because the IP is already in use. If I could add an actual IP in the Nginx file it would work, but since EC2 requires me to put in the asterisk, it ends up conflicting with the Apache virtual hosts entry. Does anyone know a simple way around this (other than not using EC2)? ;-) Thank you! Cheers, Christopher
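
    The bind error suggests both daemons are asking for port 80 on every interface; the usual split (a sketch, assuming Apache can be moved to a high port) is to let Nginx own *:80 and have Apache listen only on localhost:8080:

        # Apache (httpd.conf) - sketch
        Listen 127.0.0.1:8080

        # nginx server block - sketch
        server {
            listen 80;
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    Since the two listen directives are now disjoint, neither server needs to know the elastic IP at all.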


  • rsync --remove-source-files but only those that match a pattern

    - by Daniel
    Is this possible with rsync? Transfer everything from src:path/to/dir to dest:/path/to/other/dir and delete some of the source files in src:path/to/dir that match a pattern (or size limit), but keep all other files. I couldn't find a way to limit --remove-source-files with a regexp or size limit.

    Update 1 (clarification): I'd like all files in src:path/to/dir to be copied to dest:/path/to/other/dir. Once this is done, I'd like to have some files (those that match a regexp or size limit) in src:path/to/dir deleted, but I don't want anything deleted in dest:/path/to/other/dir.

    Update 2 (more clarification): Unfortunately, I can't simply rsync everything and then manually delete the files matching my regexp from src:. The files to be deleted are continuously created. So let's say there are N files of the type I'd like to delete in src: when rsync starts. By the time rsync finishes, there will be N+M such files there. If I delete them manually, I'll lose the M files that were created while rsync was running. Hence I'd like a solution that guarantees that the only files deleted from src: are those known to be successfully copied over to dest:. I could fetch a file list from dest: after the rsync is complete, compare that list of files with what I have in src:, and then do the removal manually. But I was wondering if rsync can do this by itself.
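
    A sketch of a two-pass approach that stays within the stated guarantee, run on the dest machine so src: is the only remote end (the *.log filter is a placeholder for whatever pattern applies): --remove-source-files only deletes a source file after that same rsync run has successfully transferred it, so restricting the second pass to the pattern bounds the deletions exactly.

        # Pass 1: copy everything, delete nothing.
        rsync -a src:path/to/dir/ /path/to/other/dir/

        # Pass 2: only the pattern; each matching file is re-checked against
        # the destination and removed from src only after a verified transfer.
        rsync -a --remove-source-files \
              --include='*/' --include='*.log' --exclude='*' \
              src:path/to/dir/ /path/to/other/dir/

    For a size limit instead of a pattern, the same second pass works with --max-size/--min-size in place of the include/exclude filters. Files created between the two passes are simply copied by pass 2 before their source copies are removed, so nothing unreplicated is ever deleted.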


  • Sed: Deleting all content matching a pattern

    - by Svish
    I have some plist files on Mac OS X that I would like to shrink. They have a lot of <dict> entries with <key> elements and values. One of these keys is a thumbnail, which has a <data> value with base64-encoded binary (I think). I would like to remove this key and its value. I was thinking this could maybe be done by sed, but I don't really know how to use it, and it seems like sed only works on a line-by-line basis? Either way, I was hoping someone could help me out. In the file I would like to delete everything that matches the following pattern, or something close to that:

        <key>Thumbnail<\/key>[^<]*<\/data>

    In the file it looks like this:

        // Other keys and values
        <key>Thumbnail</key>
        <data>
        TU0AKgAAOEi25Pqx3/ip2fak0vOdzPCVxu2RweuPv+mLu+mIt+aGtuaEtOSB
        ...
        dCBBcHBsZSBDb21wdXRlciwgSW5jLiwgMjAwNQAAAAA=
        </data>
        // Other keys and values

    Anyone know how I could do this? Also, if there are any better tools that I can use in the terminal to do this, I would like to know about that as well :)
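
    sed can in fact span lines here, because a range address applies a command from one matching line through the next; a sketch, assuming the <data> block always directly follows the Thumbnail key as in the sample above (the -i '' in-place syntax is the BSD sed that ships with Mac OS X):

        # Sketch: delete from each <key>Thumbnail</key> line through the
        # next closing </data> tag, editing the file in place.
        sed -i '' '/<key>Thumbnail<\/key>/,/<\/data>/d' file.plist

    A dry run without -i prints the result to stdout first, which is worth doing before touching the originals.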


  • iptables rules for DNS/Transparent proxy with ip exceptions

    - by SlimSCSI
    I am running a router (a Netgear WNDR3700, if that matters) with DD-WRT. For content filtering I am using OpenDNS. I wanted to make sure a user could not bypass OpenDNS by putting in their own name servers, so I have a rule to catch all DNS traffic:

        iptables -t nat -A PREROUTING -i br0 -p all --dport 53 -j DNAT --to $LAN_IP

    I did have one computer on the network I wanted to allow past the OpenDNS filters. On that machine I manually set the name servers and created another rule to allow it to pass:

        iptables -t nat -I PREROUTING -i br0 -s 192.168.1.2 -j ACCEPT

    This worked well. Today I installed a transparent proxy (Squid) on the router and added these rules:

        iptables -t nat -A PREROUTING -i br0 -s $LAN_NET -d $LAN_NET -p tcp --dport 80 -j ACCEPT
        iptables -t nat -A PREROUTING -i br0 -s ! $PROXY_IP -p tcp --dport 80 -j DNAT --to $PROXY_IP:$PROXY_PORT
        iptables -t nat -I POSTROUTING -o br0 -s $LAN_NET -d $PROXY_IP -p tcp -j SNAT --to $LAN_IP
        iptables -I FORWARD -i br0 -o br0 -s $LAN_NET -d $PROXY_IP -p tcp --dport $PROXY_PORT -j ACCEPT

    This also works; however, the 192.168.1.2 address does not get routed through Squid. How can I have 192.168.1.2 (and maybe others in the future) bypass the port 53 rules, but not the port 80 rules?
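
    The blanket ACCEPT for 192.168.1.2 matches before everything else in the nat PREROUTING chain, including the port-80 redirect. A sketch of narrowing it so it only exempts DNS (untested, and assumes the same br0 interface as above):

        # Replace the all-ports exception with DNS-only exceptions, so the
        # port-80 DNAT to squid still applies to this host.
        iptables -t nat -I PREROUTING -i br0 -s 192.168.1.2 -p udp --dport 53 -j ACCEPT
        iptables -t nat -I PREROUTING -i br0 -s 192.168.1.2 -p tcp --dport 53 -j ACCEPT

    Future exempt machines each get the same pair of rules (or a dedicated chain, if the list grows).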

