Search Results

Search found 5913 results on 237 pages for 'rewrite rule'.


  • Unable to set nginx to serve my staging website

    - by user100778
    I'm having some trouble setting up nginx to serve my staging website. What I did is change the server_name, but for some reason it just doesn't work. The URL scheme is: "domain.foo" is production, "staging.domain.foo" is staging, "foobar.domain.foo" is a web service, "foobar.staging.domain.foo" is the staging version of the same web service, "*.domain.foo" is routed to serve some S3 static HTML, and "*.staging.domain.foo" is routed to serve some S3 static HTML in another bucket. All production URLs work and are correctly configured; none of the staging URLs work. Here is my conf file. You will see some duplication; I will gladly accept any correction/optimization. I'm a coder and configuring servers is definitely not my thing (but I'm eager to learn and improve...).

        server {
            listen 80; ## listen for ipv4
            server_name "domain.foo" "www.domain.foo" default_server;
            access_log /var/log/nginx/access.log;
            client_max_body_size 5M;
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                location ~* \.(jpg|jpeg|gif|png|ico|css|bmp|js|html)$ {
                    access_log off;
                    expires max;
                    root /home/foo/Foo/current/public;
                    break;
                }
                if ($host ~ 'www.domain.foo') {
                    rewrite ^/(.*)$ http://domain/foo/$1 permanent;
                }
                proxy_pass http://production;
                break;
            }
        }

        server {
            listen 80;
            server_name "staging.domain.foo";
            access_log /var/log/nginx/access.staging.log;
            error_log /var/log/nginx/error.staging.log;
            client_max_body_size 5M;
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_pass http://staging;
                break;
            }
        }

        server {
            listen 80; ## listen for ipv4
            server_name "foobar.domain.foo";
            access_log /var/log/nginx/access.log;
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                if ($host = 'foobar.domain.foo') {
                    proxy_pass http://foobar;
                    break;
                }
            }
        }

        server {
            listen 80; ## listen for ipv4
            server_name foobar.staging.domain.foo;
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_pass http://foobar_staging;
                break;
            }
        }

        server {
            listen 80;
            server_name "~^(.+)\.domain\.foo$";
            location / {
                proxy_intercept_errors on;
                error_page 404 = http://domain.foo/404;
                set $subdomain $1;
                rewrite /$ "/$subdomain/index.html" break;
                rewrite ^ /$subdomain$request_uri? break;
                proxy_pass http://bucket.domain.foo.s3.amazonaws.com;
            }
        }

        server {
            listen 80;
            server_name "~^(.+)\.staging\.domain\.foo$";
            location / {
                proxy_intercept_errors on;
                set $subdomain $1;
                rewrite /$ "/$subdomain/index.html" break;
                rewrite ^ /$subdomain$request_uri? break;
                proxy_pass http://bucket.staging.domain.foo.s3.amazonaws.com;
            }
        }

        upstream production {
            server 111.255.111.110:8000;
            server 111.255.111.110:8001;
            server 111.255.111.110:8002;
            server 111.255.111.110:8003;
        }
        upstream staging {
            server 222.255.222.222:8000;
            server 222.255.222.222:8001;
        }
        upstream foobar {
            server 111.255.222.165:9000;
            server 111.255.222.165:9001;
            server 111.255.222.165:9002;
        }
        upstream foobar_staging {
            server 222.255.222.222:9000;
        }

    What happens now when I point my browser to staging.domain.foo is that it hangs. I can't find anything in the logs, but the access.staging.log and error.staging.log are created, for example. Anybody have an idea? :)
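
    One way to narrow down where the hang happens (a debugging sketch, not a fix; it assumes you can run curl on the nginx host, and the staging upstream addresses are taken from the config above) is to test the server_name matching and the upstreams separately:

        # does nginx route this Host header to the staging server block?
        curl -v -H "Host: staging.domain.foo" http://127.0.0.1/

        # are the staging upstreams reachable from the nginx host at all?
        curl -v http://222.255.222.222:8000/
        curl -v http://222.255.222.222:8001/

    If the first request hangs but the direct upstream requests respond, the problem sits between nginx and the upstreams (firewall, routing); if the upstreams themselves hang, the nginx config is likely fine and the staging app is the culprit.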

    Read the article

  • What is the process of rewriting URLs?

    - by bozdoz
    What I would really like is a step-by-step resource on how to rewrite URLs. I have seen the documentation on mod_rewrite in Apache, for example, but I still find myself a little lost. If I have example.com/products.html, can I change this to appear as example.com/products? For that to happen, do I make all of my links point to /products and then have a rewrite rule that directs /products to /products.html? Or is it the other way around? Also, for PHP forms, I've noticed that I can't have a form action that points to a directory: for example, it requires /mail/index.php instead of just /mail. Can mod_rewrite fix this too?
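
    On the ordering question: the first interpretation is the usual one. Links point at the clean URL, and a rewrite rule maps it back to the real file internally. A minimal .htaccess sketch of that idea (file names taken from the question; it assumes mod_rewrite is enabled):

        RewriteEngine On

        # serve products.html internally when /products is requested;
        # the browser keeps the clean URL because this is not a redirect
        RewriteRule ^products$ /products.html [L]

        # similarly, a form can post to /mail while the real handler stays put
        RewriteRule ^mail$ /mail/index.php [L]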

    Read the article

  • How To: Automatically Remove www from a Domain in IIS7

    I recently moved the DevMavens.com site from one server to another and needed to ensure that the www.devmavens.com domain correctly redirected to simply devmavens.com.  This is important for SEO reasons (you don't want multiple domains to refer to the same content) and it's generally better to use the shorter URL (www is so 20th century) rather than wasting four characters for zero gain. My friend and IIS guru Scott Forsyth pointed me to his blog post on how to set up IIS URL rewriting.  To get started, you simply install IIS URL Rewrite from this link using the super awesome Web Platform Installer.  If you already have IIS Manager open, you may need to close it and re-open it before you see the URL Rewrite module.  Once you do, you should see it listed for any given site under the IIS section.  Double-click the URL Rewrite icon, and then choose the Add Rule(s) action.  You can simply create a blank rule and name it "Redirect from www to domain.com".  Essentially we're following the instructions from Scott Forsyth's post, but in reverse, since he's showing how to add four useless characters to the URL and I'm interested in removing them. After adding the name, we set the Match URL section's Using dropdown to Wildcards and specify a pattern of simply * to match anything. In the Conditions section we add a new condition with an input of {HTTP_HOST} such that it should match the pattern www.devmavens.com (replace this with your domain). Ignore the Server Variables section. Set the action to Redirect and the Redirect URL to http://devmavens.com/{R:0} (replace with your domain).  The {R:0} will be replaced with whatever the user had entered, so if they were going to http://www.devmavens.com/default.aspx they'll now be going to http://devmavens.com/default.aspx. That's it!  Test it out and make sure you haven't accidentally used my exact URLs and started sending all of your users to devmavens.com! :)  Be sure to read Scott's post for more information on how to use regular expressions for your rules, and how to set them up via web.config rather than IIS Manager.
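
    For reference, the steps above end up as a rule element in web.config. A sketch of roughly what the wizard generates (using the post's example domains; swap in your own):

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Redirect from www to domain.com" patternSyntax="Wildcard" stopProcessing="true">
                <match url="*" />
                <conditions>
                  <add input="{HTTP_HOST}" pattern="www.devmavens.com" />
                </conditions>
                <action type="Redirect" url="http://devmavens.com/{R:0}" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>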

    Read the article

  • New release of the Windows Azure SDK and Tools (March CTP)

    - by kaleidoscope
    From now on, you only have to download the Windows Azure Tools for Microsoft Visual Studio, and the SDK will be installed as part of that package.

    What's new in the Windows Azure SDK:
      • Support for developing Managed Full Trust applications. It also provides support for native code via P/Invokes and spawning native processes.
      • Support for developing FastCGI applications, including support for rewrite rules via the URL Rewrite Module.
      • Improved support for the integration of development storage with Visual Studio, including enhanced performance and support for SQL Server (local instance only).

    What's new in the Windows Azure Tools for Visual Studio:
      • Combined installer includes the Windows Azure SDK.
      • Addressed top customer bugs.
      • Native code debugging.
      • Update notification of future releases.
      • FastCGI template.

    http://blogs.msdn.com/jnak/archive/2009/03/18/now-available-march-ctp-of-the-windows-azure-tools-and-sdk.aspx

    Ritesh, D

    Read the article

  • T-SQL Tuesday #005 : SSRS Parameters and MDX Data Sets

    - by blakmk
    Well, this week's T-SQL Tuesday #005 topic seems quite fitting. Having spent the past few weeks creating reports and dashboards in SSRS and SSAS 2008, I was frustrated by how difficult it is to use custom datasets to generate parameter drill-downs. Reporting Services can also be quite unforgiving when it comes to renaming things like datasets, so I want to share a couple of techniques that I found useful. One of the things I regularly do is add parameters to the queries. Doing this, however, causes Reporting Services to generate a hidden dataset and parameter name for you. One of the things I like to do is tweak these hidden datasets, removing the 'ALL' level, which is a tip I picked up from Devin Knight in his blog. There are some rules I've developed for myself since working with SSRS and MDX; they may not be the best or only way, but they work for me.

    Rule 1 – Never trust the automatically generated hidden datasets. Or ANY automatically generated MDX queries, for that matter. I've previously blogged about this here. If you examine the MDX generated in the hidden dataset, you will see that it generates the MDX in the context of the originating query by building a subcube; this means it may NOT be appropriate to use it in a subsequent query which has a different context. Make sure you always understand what is going on. Often when I'm developing a dashboard or a report there are several parameter-oriented datasets that I like to create manually. I may have different datasets using the same dimension but in a different context. One example: I often use a dataset for the last month and a dataset for the last six months; both use the same date hierarchy. However, Reporting Services seems not to be too smart when it comes to generating unique datasets when working with and renaming parameters and datasets. Very often I have come across this error when refactoring parameter names and default datasets: "an item with the same key has already been added". The only way I've found to reliably avoid this is to obey rule 2.

    Rule 2 – Follow this sequence when working with parameters and datasets:
    1. Create lookup and default datasets in advance.
    2. Create parameters (set the datasets for available and default values).
    3. Go into the query and tick the parameter check box.
    4. On the dataset properties screen, select the parameter defined earlier for the parameter value defined earlier.

    Rule 3 – Don't tear your hair out when you have just renamed objects and your report doesn't build. Just use XML Notepad on the original report file. I found I gained a good understanding of the structure of the underlying XML document just by using XML Notepad. From there you can search for references to the missing object. You can also just do a wholesale search and replace (after taking a backup copy, of course ;-)

    So I hope the above helps to save the sanity of anyone who regularly works with SSRS and MDX.

    @Blakmk
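
    For context, an auto-generated hidden parameter dataset typically looks something like the sketch below (hypothetical cube and hierarchy names, not from the post); the ALLMEMBERS call is what brings in the 'All' level that the tip above removes:

        WITH
          MEMBER [Measures].[ParameterCaption] AS [Date].[Calendar Year].CURRENTMEMBER.MEMBER_CAPTION
          MEMBER [Measures].[ParameterValue]   AS [Date].[Calendar Year].CURRENTMEMBER.UNIQUENAME
          MEMBER [Measures].[ParameterLevel]   AS [Date].[Calendar Year].CURRENTMEMBER.LEVEL.ORDINAL
        SELECT
          { [Measures].[ParameterCaption], [Measures].[ParameterValue], [Measures].[ParameterLevel] } ON COLUMNS,
          -- swapping ALLMEMBERS for [Date].[Calendar Year].[Calendar Year].MEMBERS drops the All level
          [Date].[Calendar Year].ALLMEMBERS ON ROWS
        FROM [Adventure Works]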

    Read the article

  • MVC: Why put the business logic in the model? What happens when I've multiple types of storage?

    - by Steffen Winkler
    I always thought that the business logic has to be in the controller and that the controller, since it is the 'middle' part, stays static, and that the model/view have to be encapsulated via interfaces. That way you could change the business logic without affecting anything else, program multiple models (one for each database/type of storage) and dozens of views (for different platforms, for example). Now I read in this question that you should always put the business logic into the model and that the controller is deeply connected with the view. To me, that doesn't really make sense, and it implies that each time I want to support another database/type of storage I have to rewrite my whole model, including the business logic. And if I want another view, I have to rewrite both the view and the controller. Can someone explain why that is, or where I went wrong? Currently, the whole thing doesn't really make sense to me.
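
    The usual resolution is that the model holds the business rules while persistence hides behind its own interface, so swapping storage means adding a new repository implementation, not rewriting the model. A minimal Java sketch of that separation (hypothetical names, not from the question):

        import java.util.ArrayList;
        import java.util.List;

        // business logic lives in the model: total() knows nothing about storage
        class Order {
            private final List<Double> itemPrices = new ArrayList<>();

            void addItem(double price) { itemPrices.add(price); }

            double total() {                 // a business rule in the model
                double sum = 0;
                for (double p : itemPrices) sum += p;
                return sum;
            }
        }

        // persistence is a separate concern behind an interface
        interface OrderRepository {
            void save(Order order);
        }

        // swapping databases means a new implementation, not a new Order
        class SqlOrderRepository implements OrderRepository {
            @Override public void save(Order order) { /* SQL-specific code here */ }
        }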

    Read the article

  • I can't shut down nor reboot without console

    - by jgomo3
    After updating from 11.04 to 11.10, some weird behavior appeared on my machine: the GUI shutdown methods (including reboot) cause only a log-off, and in the login screen the shutdown and reboot options do nothing (in case you wonder: reboot does appear in the shutdown dialog). The only way I can reboot or shut down is through the console, sudo shutdown -h now or sudo reboot. This is OK for me, but not for the rest of the users. How do I fix this?

    Update: The syslog output when I select shutdown from my desktop is:

        AptDaemon: INFO: Quitting due to inactivity
        AptDaemon: INFO: Quitting was requested
        CRON[5095]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
        CRON[5094]: (root) MAIL (mailed 1 byte of output; but got status 0x00ff, #012)
        kernel: [17027.614974] psmouse.c: TouchPad at isa0060/serio4/input0 lost sync at byte 1
        kernel: [17027.616510] psmouse.c: TouchPad at isa0060/serio4/input0 lost sync at byte 1
        kernel: [17027.618037] psmouse.c: TouchPad at isa0060/serio4/input0 lost sync at byte 1
        kernel: [17027.619557] psmouse.c: TouchPad at isa0060/serio4/input0 lost sync at byte 1
        kernel: [17027.621046] psmouse.c: TouchPad at isa0060/serio4/input0 lost sync at byte 1
        kernel: [17027.621051] psmouse.c: issuing reconnect request
        acpid: client 1032[0:0] has disconnected
        acpid: client connected from 1032[0:0]
        acpid: 1 client rule loaded
        gnome-session[1836]: WARNING: Unable to stop system: Authorization is required
        acpid: client 1032[0:0] has disconnected
        acpid: client connected from 6055[0:0]
        acpid: 1 client rule loaded
        rtkit-daemon[1313]: Successfully made thread 6134 of process 6134 (n/a) owned by '119' high priority at nice level -11.
        rtkit-daemon[1313]: Supervising 4 threads of 2 processes of 2 users.
        rtkit-daemon[1313]: Successfully made thread 6139 of process 6134 (n/a) owned by '119' RT at priority 5.
        rtkit-daemon[1313]: Supervising 5 threads of 2 processes of 2 users.
        rtkit-daemon[1313]: Successfully made thread 6140 of process 6134 (n/a) owned by '119' RT at priority 5.
        rtkit-daemon[1313]: Supervising 6 threads of 2 processes of 2 users.

    I suspect that the line "gnome-session[1836]: WARNING: Unable to stop system: Authorization is required" is related to the issue. When selecting shutdown from the login screen, the output is the same from that line onward. This is the output:

        gnome-session[1836]: WARNING: Unable to stop system: Authorization is required
        acpid: client 1032[0:0] has disconnected
        acpid: client connected from 6055[0:0]
        acpid: 1 client rule loaded
        rtkit-daemon[1313]: Successfully made thread 6134 of process 6134 (n/a) owned by '119' high priority at nice level -11.
        rtkit-daemon[1313]: Supervising 4 threads of 2 processes of 2 users.
        rtkit-daemon[1313]: Successfully made thread 6139 of process 6134 (n/a) owned by '119' RT at priority 5.
        rtkit-daemon[1313]: Supervising 5 threads of 2 processes of 2 users.
        rtkit-daemon[1313]: Successfully made thread 6140 of process 6134 (n/a) owned by '119' RT at priority 5.
        rtkit-daemon[1313]: Supervising 6 threads of 2 processes of 2 users.
        acpid: client 6055[0:0] has disconnected
        acpid: client connected from 6055[0:0]
        acpid: 1 client rule loaded
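
    The "Unable to stop system: Authorization is required" warning points at PolicyKit: the user is not being granted the right to power off, typically because other sessions are considered active. A sketch of one common workaround on releases of this era (it assumes ConsoleKit mediates shutdown here; the file name is arbitrary) is to grant those actions explicitly:

        # /etc/polkit-1/localauthority/50-local.d/allow-shutdown.pkla (create as root)
        [Allow all users to shutdown and reboot]
        Identity=unix-user:*
        Action=org.freedesktop.consolekit.system.stop-multiple-users;org.freedesktop.consolekit.system.restart-multiple-users
        ResultAny=yes

    If a different backend handles power actions on your install, the Action names will differ; pkaction lists what is actually registered on the system.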

    Read the article

  • How to get ISA 2006 Web Proxy to work with the Single Network Adapter template

    - by tronda
    I need to test an issue with running our application behind a proxy server with different types of configurations, so I installed ISA 2006 Enterprise on a desktop computer. Since this computer only has a single network card and I want to start out easy, I chose the "Single Network Adapter" template. We have an internal NAT'ed network which is in the 10 range. I have defined the internal network on the ISA server to be 10.XXX.YY.1 - 10.XXX.YY.255. I also have the Default rule, which denies all traffic, but I've added the following rule:

        Policy - Protocols - From - To
        Accept HTTP Internal External HTTPS Local Host Internal HTTS Server Localhost

    Then I configured Internet Explorer on a virtual machine running XP within VirtualBox with bridged networking (it gets the same network address range as regular computers on our network), similar to this, except that instead of the server name I used the IP address. When I try to access a web page, the request doesn't go through and I get the following log messages on the proxy server (field names first, then the records):

        Original Client IP, Client Agent, Authenticated Client, Service, Referring Server, Destination Host Name, Transport, HTTP Method, MIME Type, Object Source, Source Proxy, Destination Proxy, Bidirectional, Client Host Name, Filter Information, Network Interface, Raw IP Header, Raw Payload, GMT Log Time, Source Port, Processing Time, Bytes Sent, Bytes Received, Cache Information, Error Information, Authentication Server, Log Time, Client IP, Destination IP, Destination Port, Protocol, Action, Rule, Result Code, HTTP Status Code, Client Username, Source Network, Destination Network, URL, Server Name, Log Record Type

        10.XXX.YY.174 - TCP - - - 24.08.2010 13:25:24 1080 0 0 0 0x0 0x0 - 24.08.2010 06:25:24 10.XXX.YY.174 10.XXX.YY.175 80 HTTP Initiated Connection MyHTTPAccess 0x0 ERROR_SUCCESS Internal Local Host - PROXYTEST Firewall
        10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:24 2275 0 0 0 0x0 0x0 - 24.08.2010 06:25:24 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Initiated Connection 0x0 ERROR_SUCCESS Local Host Local Host - PROXYTEST Firewall
        10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:25 2275 0 0 0 0x0 0x0 - 24.08.2010 06:25:25 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Closed Connection 0x80074e20 FWX_E_GRACEFUL_SHUTDOWN Local Host Local Host - PROXYTEST Firewall
        10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:25 2276 0 0 0 0x0 0x0 - 24.08.2010 06:25:25 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Initiated Connection 0x0 ERROR_SUCCESS Local Host Local Host - PROXYTEST Firewall
        10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:26 2276 0 0 0 0x0 0x0 - 24.08.2010 06:25:26 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Closed Connection 0x80074e20 FWX_E_GRACEFUL_SHUTDOWN Local Host Local Host - PROXYTEST Firewall
        10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:26 2277 0 0 0 0x0 0x0 - 24.08.2010 06:25:26 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Initiated Connection 0x0 ERROR_SUCCESS Local Host Local Host - PROXYTEST Firewall
        10.XXX.YY.159 - UDP - - - 24.08.2010 13:25:26 68 0 0 0 0x0 0x0 - 24.08.2010 06:25:26 10.XXX.YY.159 255.255.255.255 67 DHCP (request) Denied Connection [Enterprise] Default rule 0xc004000d FWX_E_POLICY_RULES_DENIED Internal Local Host - PROXYTEST Firewall
        10.XXX.YY.166 - UDP - - - 24.08.2010 13:25:26 68 0 0 0 0x0 0x0 - 24.08.2010 06:25:26 10.XXX.YY.166 255.255.255.255 67 DHCP (request) Denied Connection [Enterprise] Default rule 0xc004000d FWX_E_POLICY_RULES_DENIED Internal Local Host - PROXYTEST Firewall
        0.0.0.0 Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729) Yes Proxy 10.XXX.YY.175 TCP GET Internet - - - Req ID: 096c76ae; Compression: client=No, server=No, compress rate=0% decompress rate=0% - - - 24.08.2010 13:25:27 0 2945 2581 446 0x0 0x40 24.08.2010 06:25:27 10.XXX.YY.174 10.XXX.YY.175 80 http Failed Connection Attempt MyHTTPAccess 10061 anonymous Internal Local Host http://www.vg.no/ PROXYTEST Web Proxy Filter
        10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:27 2277 0 0 0 0x0 0x0 - 24.08.2010 06:25:27 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Closed Connection 0x80074e20 FWX_E_GRACEFUL_SHUTDOWN Local Host Local Host - PROXYTEST Firewall

    Read the article

  • StreamInsight 2.1, meet LINQ

    - by Roman Schindlauer
    Someone recently called LINQ “magic” in my hearing. I leapt to LINQ’s defense immediately. Turns out some people don’t realize “magic” can be a pejorative term. I thought LINQ needed demystification. Here’s your best demystification resource: http://blogs.msdn.com/b/mattwar/archive/2008/11/18/linq-links.aspx. I won’t repeat much of what Matt Warren says in his excellent series, but will talk about some core ideas and how they affect the 2.1 release of StreamInsight. Let’s tell the story of a LINQ query.

    Compile time

    It begins with some code:

        IQueryable<Product> products = ...;
        var query = from p in products
                    where p.Name == "Widget"
                    select p.ProductID;
        foreach (int id in query)
        {
            ...

    When the code is compiled, the C# compiler (among other things) de-sugars the query expression (see C# spec section 7.16):

        ...
        var query = products.Where(p => p.Name == "Widget").Select(p => p.ProductID);
        ...

    Overload resolution subsequently binds the Queryable.Where<Product> and Queryable.Select<Product, int> extension methods (see C# spec sections 7.5 and 7.6.5). After overload resolution, the compiler knows something interesting about the anonymous functions (lambda syntax) in the de-sugared code: they must be converted to expression trees, i.e., “an object structure that represents the structure of the anonymous function itself” (see C# spec section 6.5). The conversion is equivalent to the following rewrite:

        ...
        var prm1 = Expression.Parameter(typeof(Product), "p");
        var prm2 = Expression.Parameter(typeof(Product), "p");
        var query = Queryable.Select<Product, int>(
            Queryable.Where<Product>(
                products,
                Expression.Lambda<Func<Product, bool>>(
                    Expression.Equal(Expression.Property(prm1, "Name"), Expression.Constant("Widget")),
                    prm1)),
            Expression.Lambda<Func<Product, int>>(Expression.Property(prm2, "ProductID"), prm2));
        ...

    If the “products” expression had type IEnumerable<Product>, the compiler would have chosen the Enumerable.Where and Enumerable.Select extension methods instead, in which case the anonymous functions would have been converted to delegates. At this point, we’ve reduced the LINQ query to familiar code that will compile in C# 2.0. (Note that I’m using C# snippets to illustrate transformations that occur in the compiler, not to suggest a viable compiler design!)

    Runtime

    When the above program is executed, the Queryable.Where method is invoked. It takes two arguments. The first is an IQueryable<> instance that exposes an Expression property and a Provider property. The second is an expression tree. The Queryable.Where method implementation looks something like this:

        public static IQueryable<T> Where<T>(this IQueryable<T> source, Expression<Func<T, bool>> predicate)
        {
            return source.Provider.CreateQuery<T>(
                Expression.Call(/* this method */, source.Expression, Expression.Quote(predicate)));
        }

    Notice that the method is really just composing a new expression tree that calls itself with arguments derived from the source and predicate arguments. Also notice that the query object returned from the method is associated with the same provider as the source query. By invoking operator methods, we’re constructing an expression tree that describes a query. Interestingly, the compiler and operator methods are colluding to construct a query expression tree. The important takeaway is that expression trees are built in one of two ways: (1) by the compiler when it sees an anonymous function that needs to be converted to an expression tree, and (2) by a query operator method that constructs a new queryable object with an expression tree rooted in a call to the operator method (self-referential).

    Next we hit the foreach block. At this point, the power of LINQ queries becomes apparent. The provider is able to determine how the query expression tree is evaluated! The code that began our story was intentionally vague about the definition of the “products” collection. Maybe it is a queryable in-memory collection of products:

        var products = new[]
            { new Product { Name = "Widget", ProductID = 1 } }.AsQueryable();

    The in-memory LINQ provider works by rewriting Queryable method calls to Enumerable method calls in the query expression tree. It then compiles the expression tree and evaluates it. It should be mentioned that the provider does not blindly rewrite all Queryable calls. It only rewrites a call when its arguments have been rewritten in a way that introduces a type mismatch, e.g. the first argument to Queryable.Where<Product> being rewritten as an expression of type IEnumerable<Product> from IQueryable<Product>. The type mismatch is triggered initially by a “leaf” expression like the one associated with the AsQueryable query: when the provider recognizes one of its own leaf expressions, it replaces the expression with the original IEnumerable<> constant expression. I like to think of this rewrite process as “type irritation” because the rewritten leaf expression is like a foreign body that triggers an immune response (further rewrites) in the tree. The technique ensures that only those portions of the expression tree constructed by a particular provider are rewritten by that provider: no type irritation, no rewrite. Let’s consider the behavior of an alternative LINQ provider. If “products” is a collection created by a LINQ to SQL provider:

        var products = new NorthwindDataContext().Products;

    the provider rewrites the expression tree as a SQL query that is then evaluated by your favorite RDBMS. The predicate may ultimately be evaluated using an index! In this example, the expression associated with the Products property is the “leaf” expression.

    StreamInsight 2.1

    For the in-memory LINQ to Objects provider, a leaf is an in-memory collection. For LINQ to SQL, a leaf is a table or view. When defining a “process” in StreamInsight 2.1, what is a leaf? To StreamInsight a leaf is logic: an adapter, a sequence, or even a query targeting an entirely different LINQ provider! How do we represent the logic? Remember that a standing query may outlive the client that provisioned it. A reference to a sequence object in the client application is therefore not terribly useful. But if we instead represent the code constructing the sequence as an expression, we can host the sequence in the server:

        using (var server = Server.Connect(...))
        {
            var app = server.Applications["my application"];
            var source = app.DefineObservable(() => Observable.Range(0, 10, Scheduler.NewThread));
            var query = from i in source where i % 2 == 0 select i;
        }

    Example 1: defining a source and composing a query

    Let’s look in more detail at what’s happening in Example 1. We first connect to the remote server and retrieve an existing app. Next, we define a simple Reactive sequence using the Observable.Range method. Notice that the call to the Range method is in the body of an anonymous function. This is important because it means the source sequence definition is in the form of an expression, rather than simply an opaque reference to an IObservable<int> object. The variation in Example 2 fails. Although it looks similar, the sequence is now a reference to an in-memory observable collection:

        var local = Observable.Range(0, 10, Scheduler.NewThread);
        var source = app.DefineObservable(() => local); // can’t serialize ‘local’!

    Example 2: error referencing unserializable local object

    The Define* methods support definitions of operator tree leaves that target the StreamInsight server. These methods all have the same basic structure. The definition argument is a lambda expression taking between 0 and 16 arguments and returning a source or sink. The method returns a proxy for the source or sink that can then be used for the usual style of LINQ query composition. The “define” methods exploit the compile-time C# feature that converts anonymous functions into translatable expression trees! Query composition exploits the runtime pattern that allows expression trees to be constructed by operators taking queryable and expression (Expression<>) arguments. The practical upshot: once you’ve Defined a source, you can compose LINQ queries in the familiar way using query expressions and operator combinators. Notably, queries can be composed using pull sequences (LINQ to Objects IQueryable<> inputs), push sequences (Reactive IQbservable<> inputs), and temporal sequences (StreamInsight IQStreamable<> inputs). You can even construct processes that span these three domains using “bridge” method overloads (ToEnumerable, ToObservable and To*Streamable). Finally, the targeted rewrite via the type irritation pattern is used to ensure that StreamInsight computations can leverage other LINQ providers as well. Consider the following example (this example depends on Interactive Extensions):

        var source = app.DefineEnumerable((int id) =>
            EnumerableEx.Using(() =>
                new NorthwindDataContext(), context =>
                    from p in context.Products
                    where p.ProductID == id
                    select p.ProductName));

    Within the definition, StreamInsight has no reason to suspect that it ‘owns’ the Queryable.Where and Queryable.Select calls, and it can therefore defer to LINQ to SQL! Let’s use this source in the context of a StreamInsight process:

        var sink = app.DefineObserver(() => Observer.Create<string>(Console.WriteLine));
        var query = from name in source(1).ToObservable()
                    where name == "Widget"
                    select name;

        using (query.Bind(sink).Run("process"))
        {
            ...
        }

    When we run the binding, the source portion, which filters on product ID and projects the product name, is evaluated by SQL Server. Outside of the definition, responsibility for evaluation shifts to the StreamInsight server, where we create a bridge to the Reactive Framework (using ToObservable) and evaluate an additional predicate. It’s incredibly easy to define computations that span multiple domains using these new features in StreamInsight 2.1!

    Regards,
    The StreamInsight Team

    Read the article

  • How to use Public IP in case of two ISP when two differs from each other

    - by user1471995
    Please bear with my long explanation, but it is important for explaining the actual problem. Please also pardon my knowledge of pfSense, as I am new to this. I have a single pfSense box with 3 Ethernet adapters. Before moving to the configuration for these, I want to let you know I have two Ethernet-based Internet leased-line connections; let's call them ISP A and ISP B. The last interface is the LAN, which is connected to a network switch. Typical network diagram:

        ISP A ----- pfSense ----> Switch ----> Servers
        ISP B -----

    ISP A (initially purchased):
        WAN IP: 113.193.X.X /29
        Gateway IP: 113.193.X.A
        plus 4 other usable public IPs in the same subnet (so the gateway for those IPs is also the same).

    ISP B (recently purchased):
        WAN IP: 115.115.X.X /30
        Gateway IP: 115.115.X.B
        plus 5 other usable public IPs in a different subnet (so the gateway for those IPs is different); for example, if 115.119.X.X2 is one of the IPs from that list, then the gateway for that IP is 115.119.X.X1.

    Configuration for the 3 interfaces:

        Interface: WAN      Network Port: nfe0   Type: Static   IP Address: 113.193.X.X /29   Gateway: 113.193.X.A
        Interface: LAN      Network Port: vr0    Type: Static   IP Address: 192.168.1.1 /24   Gateway: None
        Interface: RELWAN   Network Port: rl0    Type: Static   IP Address: 115.115.X.X /30 (I am not sure of the subnet)   Gateway: 115.115.X.B

    To use the public IPs from ISP A, I have done the following steps:
    a) Created virtual IPs using either ARP or IP Alias.
    b) Using Firewall: NAT: Port Forward, created specific NAT entries from each public IP to an internal LAN private IP, for example:

        WAN TCP/UDP * * 113.193.X.X1 53 (DNS)  192.168.1.5 53 (DNS)
        WAN TCP/UDP * * 113.193.X.X1 80 (HTTP) 192.168.1.5 80 (HTTP)
        WAN TCP     * * 113.193.X.X2 80 (HTTP) 192.168.1.7 80 (HTTP)
        etc.

    c) The current state of Firewall: NAT: Outbound is Manual, and only the default rules defined for the WAN are present.
    d) If this section is relevant: on the Firewall: Rules WAN tab, the following default rules have been generated:

        * RFC 1918 networks * * * * *   Block private networks
        * Reserved/not assigned by IANA * * * * * *

    To use the public IPs from ISP B, I have done the following steps:
    a) Created virtual IPs using either ARP or IP Alias.
    b) Using Firewall: NAT: Port Forward, created specific NAT entries from one public IP to an internal LAN private IP, for example:

        RELWAN TCP/UDP * * 115.119.116.X.X1 80 (HTTP) 192.168.1.11 80 (HTTP)

    c) The current state of Firewall: NAT: Outbound is Manual, and only the default rules defined for the RELWAN are present.
    d) If this section is relevant: on the Firewall: Rules RELWAN tab, the following default rules have been generated:

        * RFC 1918 networks * * * * * *
        Reserved/not assigned by IANA * * * * * *

    One last thing before my actual query, to make you aware of how I set up multiple WANs:
    a) Under System: Gateways, on the Groups tab, I created a new group as follows:

        MultipleGateway   WANGW, RELWAN   Tier 2, Tier 1   Multiple Gateway Test

    b) Then under Firewall: Rules, on the LAN tab, I created a rule for internal traffic as follows:

        * LAN net * * * MultipleGateway none

    c) This setup works: if I unplug the first ISP, traffic starts routing via ISP 2, and vice versa.

    Now my main query and problem: I am not able to use the public IP addresses allocated by ISP B. I have tried many small tweaks, but none were successful. The notable difference between the two ISPs is:
    a) In the case of ISP A, the usable public IP addresses are on the same subnet, so the gateway used for the WAN IP is the same as for the other public IP addresses.
    b) In the case of ISP B, the usable public IP addresses are on a different subnet, so obviously their gateway IP differs from the WAN gateway's IP.

    Please let me know how to use the ISP B public usable IP addresses; in the future I am also going to rely on ISP B for more IPs.

    Read the article

  • My father wants to learn PHP-MySQL to port his application. What should I do to help?

    - by adijiwa
    My father is a doctor/physician. About 15 years ago he started writing an application to handle his patients' medical records in his clinic at home. The app can input patients' medical records (obviously), search patients by various criteria, manage medicine stocks, print receipts, and do some more CRUD operations. He wrote it in dBase III+. A few years later he migrated to FoxPro 2.6 for DOS, and in a few more years he rewrote his app in Visual FoxPro 9. Now (actually two years ago) he wants to rewrite it in PHP, but he doesn't know how. The Visual FoxPro version of this app is still running and has no serious problems, except that it sometimes performs slowly. Usually there are 1-5 concurrent users. The binary and database files are shared via Windows share. He did all the coding as a hobby and for free (it is for his own clinic, after all). He also uses this app in two other offices he manages. Some reasons why he wants to rewrite it in PHP-MySQL: he wants to learn; it's easier to deploy (?); and client setup is easier, needing only a browser. What should I do to help my father? How should he start? I explored some options: (1) I let my father learn PHP and MySQL (and HTML (and JavaScript?)) from scratch; (2) I create/bundle a framework. I'm thinking of bundling CodeIgniter and a web UI framework (any suggestions?), especially to reduce the effort of writing presentation code. What do you think? tl;dr: My father (a doctor) wants to rewrite his Visual FoxPro app in PHP-MySQL. He knows very little PHP and MySQL, but he wants to learn. What should I do to help? How should he start? Some facts: my father is 50 years old; his first encounter with a PC was in the early 1980s (an IBM PC with an Intel 8088); he knows BASIC, and he taught me how to use DOS and how to program in BASIC; the other language he knows fairly well is dBase/FoxPro. I got my bachelor's degree in CS last year. I know the internals of my father's app because he sometimes asks me to help him write it. Sorry for my English.

    Read the article

  • Consultant in a firm that doesn't understand the tech!

    - by techsjs2012
    I got a job as a consultant in a firm that has 3 other programmers. My job is to rewrite all the old systems in Java, Spring, etc., but the staff programmers only know Perl and the manager does not know any programming. I am trying to get them to understand that I have 6 projects to rewrite here, but no one has design docs or specs; the staff programmers never had to write any documents. On top of that, I can't get the manager to understand the new Java tech: he keeps asking some of the staff for their views on things, but the staff don't know or understand it. Where do I go from here to make the manager understand that the staff programmers, or someone, have to write a design document so I know what to build? Or, if I have to write the documents myself, how do I get the information?

    Read the article

  • iptables 1.4 and passive FTP on custom port

    - by Cracky
    After the upgrade from Debian squeeze to wheezy, I've got a problem with passive FTP connections. I could narrow it down to being iptables-related, as I could connect via FTP without problems after adding my IP to an iptables ACCEPT rule. Before the upgrade, I was able just to do

        modprobe nf_conntrack_ftp ports=21332

    and add

        iptables -A THRU -p tcp --dport 21332 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT

    Now it doesn't help anymore. The INPUT rule is being triggered, as I can see in the counter, but the directory listing is the last thing it does. Setting up a passive-port range is the last thing I want to do; I dislike open ports. I also tried the trick with the helper module by adding the following rule before the actual rule for 21332:

        iptables -A THRU -p tcp -i eth0 --dport 21332 -m state --state NEW -m helper --helper ftp-21332 -j ACCEPT

    but it doesn't help and is not even being triggered, according to the counter. The rule on the next line (without the helper) is being triggered. Here is some info:

        # iptables --version
        iptables v1.4.14
        # lsmod | grep nf_
        nf_nat_ftp 12460 0
        nf_nat 18242 1 nf_nat_ftp
        nf_conntrack_ftp 12605 1 nf_nat_ftp
        nf_conntrack_ipv4 14078 32 nf_nat
        nf_defrag_ipv4 12483 1 nf_conntrack_ipv4
        nf_conntrack 52720 7 xt_state,nf_conntrack_ipv4,xt_conntrack,nf_conntrack_ftp,nf_nat,nf_nat_ftp,xt_helper
        # uname -a
        Linux loki 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 GNU/Linux
        # iptables-save
        # Generated by iptables-save v1.4.14 on Sun Jun 30 03:54:28 2013
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :BLACKLIST - [0:0]
        :LOGDROP - [0:0]
        :SPAM - [0:0]
        :THRU - [0:0]
        :WEB - [0:0]
        :fail2ban-dovecot-pop3imap - [0:0]
        :fail2ban-pureftpd - [0:0]
        :fail2ban-ssh - [0:0]
        -A INPUT -p tcp -m multiport --dports 110,995,143,993 -j fail2ban-dovecot-pop3imap
        -A INPUT -p tcp -m multiport --dports 21,21332 -j fail2ban-pureftpd
        -A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh
        -A INPUT -p tcp -m multiport --dports 110,995,143,993 -j fail2ban-dovecot-pop3imap
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP
        -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,SYN FIN,SYN -j DROP
        -A INPUT -i eth0 -p tcp -m tcp --tcp-flags SYN,RST SYN,RST -j DROP
        -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,RST FIN,RST -j DROP
        -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,ACK FIN -j DROP
        -A INPUT -i eth0 -p tcp -m tcp --tcp-flags ACK,URG URG -j DROP
        -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -j BLACKLIST
        -A INPUT -j THRU
        -A INPUT -j LOGDROP
        -A OUTPUT -j ACCEPT
        -A OUTPUT -s 93.223.38.223/32 -j ACCEPT
        -A BLACKLIST -s 38.113.165.0/24 -j LOGDROP
        -A BLACKLIST -s 202.177.216.0/24 -j LOGDROP
        -A BLACKLIST -s 130.117.190.0/24 -j LOGDROP
        -A BLACKLIST -s 117.79.92.0/24 -j LOGDROP
        -A BLACKLIST -s 72.47.228.0/24 -j LOGDROP
        -A BLACKLIST -s 195.200.70.0/24 -j LOGDROP
        -A BLACKLIST -s 195.200.71.0/24 -j LOGDROP
        -A LOGDROP -m limit --limit 5/sec -j LOG --log-prefix drop_packet_ --log-level 7
        -A LOGDROP -p tcp -m tcp --dport 25 -m limit --limit 2/sec -j LOG --log-prefix spam_blacklist --log-level 7
        -A LOGDROP -p tcp -m tcp --dport 80 -m limit --limit 2/sec -j LOG --log-prefix web_blacklist --log-level 7
        -A LOGDROP -p tcp -m tcp --dport 22 -m limit --limit 2/sec -j LOG --log-prefix ssh_blacklist --log-level 7
        -A LOGDROP -j REJECT --reject-with icmp-host-prohibited
        -A THRU -p icmp -m limit --limit 1/sec -m icmp --icmp-type 8 -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 25 -j ACCEPT
        -A THRU -i eth0 -p udp -m udp --dport 53 -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 110 -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 143 -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 465 -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 585 -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 993 -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 995 -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 2008 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 10011 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 21332 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        -A THRU -i eth0 -p tcp -m tcp --dport 30033 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        -A fail2ban-dovecot-pop3imap -j RETURN
        -A fail2ban-dovecot-pop3imap -j RETURN
        -A fail2ban-pureftpd -j RETURN
        -A fail2ban-pureftpd -j RETURN
        -A fail2ban-ssh -j RETURN
        -A fail2ban-ssh -j RETURN
        COMMIT
        # Completed on Sun Jun 30 03:54:28 2013

    So, as I said, I have no problem connecting when I add my IP to pass through, but that's not a solution, as no one except me can connect anymore~ If someone has an idea what the problem is, please help me! Thanks
    Cracky
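
    One avenue worth trying (a sketch, not a verified fix): newer kernels are moving away from automatic conntrack helper assignment, so the FTP helper can be attached to the custom control port explicitly with the CT target in the raw table; the data connections then arrive flagged RELATED and are accepted by the rule that already exists above:

        # attach the ftp conntrack helper to the nonstandard control port
        iptables -t raw -A PREROUTING -p tcp --dport 21332 -j CT --helper ftp

        # the existing rule then admits what the helper marks:
        # -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

    With this approach the modprobe ports= option is no longer needed. On kernels that still support automatic assignment, its behavior is governed by the net.netfilter.nf_conntrack_helper sysctl, so that is worth checking too.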

    Read the article

  • Can not open port 3306 on Ubuntu using iptables

    - by user94626
    I am trying to open port 3306 (for remote MySQL connections) on my Ubuntu 12.04 server machine, but for the life of me I can't get the damned thing to work! Here is what I did.

    1) List the current firewall rules:

        $> sudo iptables -nL -v

    Output:

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
        pkts bytes target prot opt in out source destination
        225 16984 fail2ban-ssh tcp -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 22
        220 69605 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
        0 0 REJECT all -- lo * 0.0.0.0/0 127.0.0.0/8 reject-with icmp-port-unreachable
        486 54824 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
        1 60 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
        19 988 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443
        1 52 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
        0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 icmptype 8
        4 208 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 5/min burst 5 LOG flags 0 level 7 prefix "iptables denied: "
        4 208 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
        pkts bytes target prot opt in out source destination
        0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
        pkts bytes target prot opt in out source destination
        735 182K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0

        Chain fail2ban-ssh (1 references)
        pkts bytes target prot opt in out source destination
        225 16984 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0

    2) Try to connect from a remote machine:

        $> mysql -u root -p -h x.x.x.x

    Output: timeout, failed to connect.

    3) Try to add a new rule to iptables:

        iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT

    4) Make sure the new rule is added:

        $> sudo iptables -nL -v

    Output:

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
        pkts bytes target prot opt in out source destination
        359 25972 fail2ban-ssh tcp -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 22
        251 78665 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
        0 0 REJECT all -- lo * 0.0.0.0/0 127.0.0.0/8 reject-with icmp-port-unreachable
        628 64420 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
        1 60 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
        19 988 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443
        1 52 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
        0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 icmptype 8
        5 260 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 5/min burst 5 LOG flags 0 level 7 prefix "iptables denied: "
        5 260 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
        0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:3306

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
        pkts bytes target prot opt in out source destination
        0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
        pkts bytes target prot opt in out source destination
        919 213K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0

        Chain fail2ban-ssh (1 references)
        pkts bytes target prot opt in out source destination
        359 25972 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0

    which appears to be the case (last line in the "Chain INPUT" section).

    5) Try to connect again from the remote machine:

        $> mysql -u root -p -h x.x.x.x

    Output: timeout, failed to connect. Failing again.

    6) Try to flush all rules:

        $> sudo iptables -F

    7) This time I CAN CONNECT.

    8) Reboot the server and try to connect: FAILURE.

    I suspect that since the new rule is appended at the end, it has no effect, as there appears to be a "reject all" sort of rule before it. If this is the case, how do I make sure the new rule is added in the right order? Otherwise, what am I missing? Please help.
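
    The suspicion in the question is right: iptables evaluates rules top to bottom, and the appended ACCEPT sits below the catch-all REJECT, so it never matches. A sketch of the fix is to insert rather than append (the position number here assumes the rule should land just above the LOG/REJECT pair in the listing above):

        # -I INPUT <n> inserts at position n instead of appending at the bottom
        sudo iptables -I INPUT 9 -p tcp --dport 3306 -j ACCEPT

        # confirm the ordering with rule numbers
        sudo iptables -nL INPUT --line-numbers

    Rules added this way are also lost on reboot, which explains step 8; persisting them (e.g., with the iptables-persistent package or an equivalent mechanism) makes them survive a restart.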

    Read the article

  • Problem with DSL and Business Rules creation in Drools

    - by jillika iyer
    Hi, I am using Eclipse with the Drools plugin to create rules. I want to create business rules, and the main aim is to try to provide the user a set of options which he can use to create rules. For example, if an Apple can have only 3 colors, I want to provide an option like a drop-down so that the user knows beforehand which options he can use in his rules. Is it possible? I am creating a DSL but am unable to provide the above functionality for a business rule. I am also getting an error implementing a basic DSL. The code to add the DSL in my RuleRunner class is as follows:

        InputStream ruleSource = RuleRunner.class.getClassLoader().getResourceAsStream("/Rule1.dslr");
        InputStream dslSource = RuleRunner.class.getClassLoader().getResourceAsStream("/sample-dsl.dsl");
        // Load the rules, using the DSL
        addRulesToThisPackage.addPackageFromDrl(
            new InputStreamReader(ruleSource), new InputStreamReader(dslSource));

    I have both sample-dsl.dsl and Rule1.dslr in my working directory. The error is encountered when adding the DSL to the package (the last line). Error stack:

        Exception in thread "main" java.lang.NullPointerException
            at java.io.Reader.<init>(Unknown Source)
            at java.io.InputStreamReader.<init>(Unknown Source)
            at com.org.RuleRunner.loadRuleFile(RuleRunner.java:96)
            at com.org.RuleRunner.loadRules(RuleRunner.java:48)
            at com.org.RuleRunner.runStatelessRules(RuleRunner.java:109)
            at com.org.RulesTest.main(RulesTest.java:41)

    My DSL file has basic mappings as per the online documentation. The DSL rule I created is:

        expander sample-dsl.dsl

        rule "A status changes B status"
        when
            There is an A
            - has an address
            There is a B
            - has name
        then
            - print updated A and A address
        end

    I have created the DSL in Eclipse. Is the code I added to load it into my package correct? Or am I missing something? It seems like my program is unable to find the DSL? Please help. Can you point me in the right direction for creating a user-friendly business rule? Thanks. J
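
    The stack trace itself suggests the likely cause: the NullPointerException comes from new InputStreamReader(null), which means getResourceAsStream returned null. ClassLoader.getResourceAsStream resolves names relative to the classpath root and, unlike Class.getResourceAsStream, does not accept a leading slash. A sketch of the fix (it assumes the two files are on the classpath, e.g., in the project's source/resources folder; the working directory alone is not enough):

        // no leading "/" when going through the ClassLoader
        InputStream ruleSource = RuleRunner.class.getClassLoader().getResourceAsStream("Rule1.dslr");
        InputStream dslSource = RuleRunner.class.getClassLoader().getResourceAsStream("sample-dsl.dsl");

        // alternatively, keep the slash but resolve relative to the class:
        // InputStream ruleSource = RuleRunner.class.getResourceAsStream("/Rule1.dslr");

        if (ruleSource == null || dslSource == null) {
            throw new IllegalStateException("rule or DSL file not found on classpath");
        }

    For the drop-down question: constrained, option-driven rule authoring is what guided rule editors in the Drools tooling aim at, so building on a DSL is a reasonable direction.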

    Read the article

  • Python/YACC: Resolving a shift/reduce conflict

    - by Rosarch
    I'm using PLY. Here is one of my states from parser.out:

        state 3

            (5) course_data -> course .
            (6) course_data -> course . course_list_tail
            (3) or_phrase -> course . OR_CONJ COURSE_NUMBER
            (7) course_list_tail -> . , COURSE_NUMBER
            (8) course_list_tail -> . , COURSE_NUMBER course_list_tail

          ! shift/reduce conflict for OR_CONJ resolved as shift
            $end               reduce using rule 5 (course_data -> course .)
            OR_CONJ            shift and go to state 7
            ,                  shift and go to state 8
          ! OR_CONJ            [ reduce using rule 5 (course_data -> course .) ]

            course_list_tail   shift and go to state 9

    I want to resolve this as:

        if OR_CONJ is followed by COURSE_NUMBER:
            shift and go to state 7
        else:
            reduce using rule 5 (course_data -> course .)

    How can I fix my parser file to reflect this? Do I need to handle a syntax error by backtracking and trying a different rule?
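
    An LALR(1) parser such as PLY's sees only one token of lookahead, so "shift only when OR_CONJ is followed by COURSE_NUMBER" cannot be expressed directly; the usual cure is to restructure the grammar so one token suffices. A minimal sketch of that idea (rule names taken from the state dump above; the exact productions depend on the rest of the grammar):

        # fold the OR phrase into course_data so the parser can decide as
        # soon as it sees OR_CONJ, without peeking two tokens ahead
        def p_course_data(p):
            """course_data : course
                           | course course_list_tail
                           | course OR_CONJ COURSE_NUMBER"""
            pass

    Backtracking on a syntax error is not needed for a conflict like this; if the grammar genuinely requires two tokens of lookahead, merging the tokens in the lexer is the other common escape hatch.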

    Read the article

  • Django repeating vars/cache issue?

    - by Mark
    I'm trying to build a better/more powerful form class for Django. It's working well, except for these sub-forms. Actually, it works perfectly right after I restart Apache, but after I refresh the page a few times, my HTML output starts to look like this:

        <input class="text" type="text" id="pickup_addr-pickup_addr-pickup_addr-id-pickup_addr-venue" value="" name="pickup_addr-pickup_addr-pickup_addr-pickup_addr-venue" />

    The pickup_addr- part starts repeating many times. I was looking for loops around the prefix code that might have caused this, but the output isn't even consistent when I refresh the page, so I think something is getting cached somewhere, though I can't even imagine how that's possible. The prefix var should be reset when the class is initialized, no? Unless it's somehow not initializing something?

        class Form(object):
            count = 0

            def __init__(self, data={}, prefix='', action='', id=None, multiple=False):
                self.fields = {}
                self.subforms = {}
                self.data = {}
                self.action = action
                self.id = fnn(id, 'form%d' % Form.count)
                self.errors = []
                self.valid = True
                if not empty(prefix) and prefix[-1:] not in ('-', '_'):
                    prefix += '-'
                for name, field in inspect.getmembers(self, lambda m: isinstance(m, Field)):
                    if name[:2] == '__':
                        continue
                    field_name = fnn(field.name, name)
                    field.label = fnn(field.label, humanize(field_name))
                    field.name = field.widget.name = prefix + field_name + ife(multiple, '[]')
                    field.id = field.auto_id = field.widget.id = ife(field.id==None, 'id-') + prefix + fnn(field.id, field_name) + ife(multiple, Form.count)
                    field.errors = []
                    val = fnn(field.widget.get_value(data), field.default)
                    if isinstance(val, basestring):
                        try:
                            val = field.coerce(field.format(val))
                        except Exception, err:
                            self.valid = False
                            field.errors.append(escape_html(err))
                    field.val = self.data[name] = field.widget.val = val
                    for rule in field.rules:
                        rule.fields = self.fields
                        rule.val = field.val
                        rule.name = field.name
                    self.fields[name] = field
                for name, form in inspect.getmembers(self, lambda m: ispropersubclass(m, Form)):
                    if name[:2] == '__':
                        continue
                    self.subforms[name] = self.__dict__[name] = form(data=data, prefix='%s%s-' % (prefix, name))
                Form.count += 1

    Let me know if you need more code... I know it's a lot, but I just can't figure out what's causing this!
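
    A plausible culprit (a diagnosis sketch, not verified against the full code): the Field objects are class attributes, so every Form instance, and every request served by the same long-lived Apache process, mutates the same Field objects. Because field.name is written back with the prefix attached, the next request's fnn(field.name, name) starts from the already-prefixed name and prepends the prefix again, compounding once per request; the inconsistency across refreshes would then just reflect which worker process answers. The standard fix is to give each instance its own copies, which is essentially what Django itself does with base_fields:

        import copy

        class Form(object):
            def __init__(self, data={}, prefix='', action='', id=None, multiple=False):
                # ...
                for name, field in inspect.getmembers(self, lambda m: isinstance(m, Field)):
                    if name[:2] == '__':
                        continue
                    field = copy.deepcopy(field)   # never mutate the shared class-level Field
                    # ... the rest of the per-field setup operates on the copy ...
                    self.fields[name] = field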

    Read the article

  • iptables : how to allow incoming ftp traffic?

    - by logansama
    Hi, still fighting my way through the jungle that is called iptables. I have managed to allow FTP access to the outside of our LAN; either of these would work (NOTE: eth0 is the LAN interface and eth1 is the WAN interface):

        iptables -t filter -A FORWARD -i eth0 -p tcp --dport 20:21 -j ACCEPT

    or

        iptables -A FORWARD -i eth0 -o eth1 -p tcp --sport 20:21 --dport 1024:65535 -j ACCEPT

    But when I connect to an external FTP server, I manage to log in and all is fine until it wishes to list the directory contents. Then nothing happens, as the data is blocked, because I do not have a rule set up to allow it! (My last rule on the FORWARD chain blocks all traffic.) I have tried a gazillion rules (many of which I did not understand) to try to allow the FTP traffic back through my server. One such rule, for example, was:

        iptables -A FORWARD -i eth1 -o eth0 -p tcp --sport 20:21 --dport 1024:65535 -j ACCEPT

    But I cannot get the listing to work; it just times out after a while. Would anyone know how to build a rule which would allow FTP listings / allow such traffic back? Or have a link to sources I could work through? Thank you,
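
    The usual answer is not to enumerate data ports at all, but to let connection tracking recognize the FTP data channels. A sketch (assuming the kernel's FTP conntrack helper module is available):

        # teach conntrack to parse FTP control traffic on port 21
        modprobe nf_conntrack_ftp

        # let replies and helper-flagged data connections back through
        iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

    With the helper loaded, passive-mode data connections are marked RELATED to the port-21 control session, so no fixed port range has to be opened.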

    Read the article

  • Enforce link in Team foundation server bug work item for duplicates

    - by Tewr
    We have just started out with Team Foundation Server 2008 / Visual Studio Team System, and we are pleased to find that we can export and modify work items to our needs. However, the last thing that would make the setup perfect for us has proved somewhat difficult: we have exported the Bug work item type and modified it to appear differently to different groups of users. We do, however, see a potential problem in non-developers reporting bugs which turn out to be duplicates. We would like to enforce that users who close a ticket with resolved reason "duplicate" also create a link to the bug which is perceived as the first bug report. I have looked at System.RelatedLinkCount and put in the rule:

        <FIELD type="Integer" name="RelatedLinkCount" refname="System.RelatedLinkCount">
          <WHEN field="Microsoft.VSTS.Common.ResolvedReason" value="duplicate">
            <PROHIBITEDVALUES>
              <LISTITEM value="0" />
            </PROHIBITEDVALUES>
          </WHEN>
        </FIELD>

    However, when I try to put anything in that scope, the importer tells me that System.RelatedLinkCount does not accept the rule, no matter what I put. The rule above shows what I am trying to do (though the most preferable rule would also check that the bug I link to is not itself a duplicate, but this is overkill :P). Has anyone else tried to enforce rules like this in work items? Is there another approach to solving the same issue? I am thankful for any thoughts on the matter.

    Read the article

  • IIS7 URL Rewriting: How not to drop HTTPS protocol from rewritten URL?

    - by Scott Mitchell
    I'm working on a website that's using IIS 7's URL rewriting feature to do a permanent redirect from example.com to www.example.com, as well as rewrites from similar domain names to the "main" one, such as from www.examples.com to www.example.com. This rewrite rule - shown below - has worked well for some time now. However, we recently added HTTPS support and noticed that if users visit one of the URLs to be rewritten to www.example.com, then HTTPS is dropped. For instance, if a user visits https://example.com they get redirected to http://www.example.com, whereas we would like them to be sent to https://www.example.com. Here is the rewrite rule of interest (in Web.config):

        <rule name="Canonical Host Name" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAny">
            <add input="{HTTP_HOST}" pattern="^example\.com$" />
            <add input="{HTTP_HOST}" pattern="^(www\.)?example\.net$" />
            <add input="{HTTP_HOST}" pattern="^(www\.)?example\.info$" />
            <add input="{HTTP_HOST}" pattern="^(www\.)?examples\.com$" />
          </conditions>
          <action type="Redirect" url="http://www.example.com/{R:1}" redirectType="Permanent" />
        </rule>

    As you can see, the action element's url attribute points directly to http://, so I get why https://example.com is redirected to http://www.example.com. My question is, how do I fix this? I tried (naively) to just drop the http:// part from the url attribute, but that didn't work. Thanks!
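
    One well-known pattern for this (a sketch; the map name is arbitrary) is a rewrite map that translates the {HTTPS} server variable into the scheme, so a single rule can preserve the protocol:

        <rewrite>
          <rewriteMaps>
            <rewriteMap name="MapProtocol">
              <add key="on"  value="https" />
              <add key="off" value="http" />
            </rewriteMap>
          </rewriteMaps>
          <rules>
            <rule name="Canonical Host Name" stopProcessing="true">
              <match url="(.*)" />
              <conditions logicalGrouping="MatchAny">
                <add input="{HTTP_HOST}" pattern="^example\.com$" />
                <add input="{HTTP_HOST}" pattern="^(www\.)?examples\.com$" />
              </conditions>
              <action type="Redirect" url="{MapProtocol:{HTTPS}}://www.example.com/{R:1}"
                      redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>

    The alternative is two nearly identical rules: one with a condition matching {HTTPS}=on redirecting to https://, and one matching off redirecting to http://.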

    Read the article

  • How to select "child" entities in subview?

    - by Andy
    I am trying to manage a drill-down list of data. I've got an entity, Contact, that has a to-many relationship with another entity, Rule. In my root view controller, I use a fetched results controller to manage and display the list of Contacts. When a Contact is tapped, I push a new view controller onto the stack with a list of the Contact's Rules. I have not been able to figure out how to use a second fetched results controller to display the Rules, so I'm using the following:

        // get the contact's rules as a mutable set proxy
        // (mutableSetValueForKey: returns the set directly, so there is
        // no need to allocate one first)
        rules = [self.contact mutableSetValueForKey:@"rule"];

        // create an array of rules from the set
        arrayOfRules = [NSMutableArray arrayWithCapacity:[rules count]];
        for (id oneObject in rules)
            [arrayOfRules addObject:oneObject];

        // sort the array of rules
        NSSortDescriptor *descriptor = [[NSSortDescriptor alloc] initWithKey:@"phoneLabel" ascending:YES];
        [arrayOfRules sortUsingDescriptors:[NSArray arrayWithObject:descriptor]];
        [descriptor release];

    I create a set of Rules, then use that to build a sorted array of Rules, and I use these two collections to populate the grouped table view. All of this appears to be working correctly. Here's my problem: there are several different actions a user can take in this view, and most of them require knowing which Rule was tapped, but I can't figure out how to get that. For instance, say a user wants to delete a Rule. It seems to me the proper approach is something like...

        [rules removeObject:ruleObjectToBeRemoved]

    ...but I can't figure out how to specify ruleObjectToBeRemoved. I hope all of this makes sense. As usual, thanks in advance for any advice you can offer.
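
    Since the table is backed by arrayOfRules, the tapped Rule falls out of the index path. A sketch of the delete flow (my illustration with hypothetical placement, assuming a single-section table; with multiple grouped sections you would first map the section and row onto your data):

        - (void)tableView:(UITableView *)tableView
            didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            // the tapped Rule is the array element at the tapped row
            NSManagedObject *tappedRule = [arrayOfRules objectAtIndex:indexPath.row];

            // to delete it, remove it from both collections and the table
            [rules removeObject:tappedRule];
            [arrayOfRules removeObjectAtIndex:indexPath.row];
            [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
                             withRowAnimation:UITableViewRowAnimationFade];
        }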

    Read the article

  • Java generics and shadowing of type parameters

    - by rubixibuc
    This code seems to compile fine:

        class Rule<T> {
            public <T> Rule(T t) { }
            public <T> void Foo(T t) { }
        }

    Does the method's type parameter shadow the class's type parameter? Also, when you create an object, does it use the type parameter of the class? For example:

        Rule<String> r = new Rule<String>("x");

    Does <String> here normally bind to the class's type parameter when there is no conflict, i.e. when only the class declares a type parameter and not the constructor, or does it look for a type parameter on the constructor? If they do conflict, how does that change?

    Update (see discussion below): if I write a call with an explicit type argument, x = <TypeParameter>method(); is a syntax error even inside the same class; I must put this before it. Why is that, and does everything above still hold? And why don't I need a prefix for the constructor call? Shouldn't Oracle fix this?
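
    Yes, the <T> on the constructor and on Foo are fresh type parameters that shadow the class's T. A sketch that makes the shadowing visible by renaming, and shows the qualifier rule for explicit type arguments (my illustration, not from the question):

        class Rule<T> {
            // renaming the constructor's parameter to U shows it was never
            // the class's T: it is an independent type parameter
            <U> Rule(U u) { }

            <U> void foo(U u) { }

            void caller() {
                this.<String>foo("hi");   // OK: explicit type arguments need a qualifier
                // <String>foo("hi");     // syntax error: a statement cannot
                //                        // begin with a bare type argument list
            }

            public static void main(String[] args) {
                // an explicit type argument for the constructor's own parameter
                // goes after 'new'; the one after the class name is the class's T
                Rule<Integer> r = new <String>Rule<Integer>("seed");
            }
        }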

    Read the article

  • Preserving Permalinks

    - by Daniel Moth
    One of the things that gets me on a rant is websites that break permalinks. If you have posted something somewhere and there is a public URL pointing to it, that URL should never ever return a 404. You are breaking all the websites that ever linked to you, and you are breaking all the search engine links to your content (which others will try to follow). It is a pet peeve of mine. So when I had to move my blog, obviously I would preserve the root URL (www.danielmoth.com/Blog/), but I also wanted to preserve every URL my blog has generated over the years. To be clear, our focus here is on the URL formatting, not the content migration, which I'll talk about in my next post. In this post, I'll describe my solution first and then what it solves.

    1. The IIS7 Rewrite Module and web.config

    There are a few ways you can map an old URL to a new one (so that when requests to the old URL come in, they get redirected to the new one). The new blog engine I use (dasBlog) has built-in functionality to do that (Scott refers to it here). Instead, the way I chose to address the issue was to use the IIS7 rewrite module. The IIS7 rewrite module allows redirecting URLs based on pattern matching, regular expressions and, of course, hardcoded full URLs for things that don't fall into any pattern. You can configure it visually from IIS Manager using a handy dialog that allows testing patterns against input URLs. Here is what mine looked like after configuring a few rules (screenshot in the original post). To learn more about this technology check out this video, the reference page and this overview blog post; all 3 pages have a collection of related resources at the bottom worth checking out too. All the visual configuration ends up in a web.config file at the root folder of your website. If you are on a shared hosting service, probably the only way you can use the Rewrite Module is by directly editing the web.config file. Next, I'll describe the URLs I had to map and how that manifested itself in the web.config file. What I did was create the rules locally using the GUI, then take the generated web.config file and upload it to my live site. You can view my web.config here.

    2. Monthly Archives

    Observe the difference between the way the two blog engines generate this type of URL:

        Blogger: /Blog/2004_07_01_mothblog_archive.html
        dasBlog: /Blog/default,month,2004-07.aspx

    In my web.config file, the rule that deals with this is the one named "monthlyarchive_redirect".

    3. Categories

    Observe the difference between the way the two blog engines generate this type of URL:

        Blogger: /Blog/labels/Personal.html
        dasBlog: /Blog/CategoryView,category,Personal.aspx

    In my web.config file, the rule that deals with this is the one named "category_redirect".

    4. Posts

    Observe the difference between the way the two blog engines generate this type of URL:

        Blogger: /Blog/2004/07/hello-world.html
        dasBlog: /Blog/Hello-World.aspx

    In my web.config file, the rule that deals with this is the one named "post_redirect".

    Note: I decided to use dasBlog URLs that do not include the date info (see the description of my Appearance settings). If we included the date info, it would have to include the day part, which Blogger did not generate. That makes it impossible to redirect correctly and to have a single permalink for blog posts moving forward. An implication of this decision is that no two blog posts can have the same title. The tool I will describe in my next post (inelegantly) deals with duplicates, but not with triplicates or higher.

    5. Unhandled by a generic rule

    Unfortunately, the two blog engines use different rules for generating URLs for blog posts. Most of the time the conversion is as simple as the example in the previous section, where a post titled "Hello World" generates a URL with the words separated by a hyphen. Sometimes that is not the case, for example:

        /Blog/2006/05/medc-wrap-up.html
        /Blog/MEDC-Wrapup.aspx

    or

        /Blog/2005/01/best-of-moth-2004.html
        /Blog/Best-Of-The-Moth-2004.aspx

    or

        /Blog/2004/11/more-windows-mobile-2005-details.html
        /Blog/More-Windows-Mobile-2005-Details-Emerge.aspx

    In short, Blogger does not add words to the title beyond ~39 characters, it drops some words from the title generation (e.g. a, an, on, the), and it preserves hyphens that appear in the title. For this reason, we need to detect these cases and explicitly list them for redirects (no regular expression can help here because the full set of rules is not listed anywhere). In my web.config file, the rule that deals with this is the one named "Redirect rule1 for FullRedirects" combined with the rewriteMap named "StaticRedirects".

    Note: The tool I describe in my next post will detect all the URLs that need to be explicitly redirected and will list them in a file ready for you to copy into your web.config rewriteMap.

    6. C# code doing the same as the web.config

    I wrote some naive code that does the same thing as the web.config: given a string, it returns a new string converted according to the three rules above. It does not take into account the 4th case, where an explicit hard-coded conversion is needed (the tool I present in the next post does take that into account).

        static string REGEX_post_redirect = "[0-9]{4}/[0-9]{2}/([0-9a-z-]+).html";
        static string REGEX_category_redirect = "labels/([_0-9a-z-% ]+).html";
        static string REGEX_monthlyarchive_redirect = "([0-9]{4})_([0-9]{2})_[0-9]{2}_mothblog_archive.html";

        static string Redirect(string oldUrl)
        {
            GroupCollection g;
            if (RunRegExOnIt(oldUrl, REGEX_post_redirect, 2, out g))
                return string.Concat(g[1].Value, ".aspx");
            if (RunRegExOnIt(oldUrl, REGEX_category_redirect, 2, out g))
                return string.Concat("CategoryView,category,", g[1].Value, ".aspx");
            if (RunRegExOnIt(oldUrl, REGEX_monthlyarchive_redirect, 3, out g))
                return string.Concat("default,month,", g[1].Value, "-", g[2].Value, ".aspx");
            return string.Empty;
        }

        static bool RunRegExOnIt(string toRegEx, string pattern, int groupCount, out GroupCollection g)
        {
            if (pattern.Length == 0)
            {
                g = null;
                return false;
            }
            g = new Regex(pattern, RegexOptions.IgnoreCase | RegexOptions.Compiled).Match(toRegEx).Groups;
            return (g.Count == groupCount);
        }

    Comments about this post welcome at the original blog.
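
    For a concrete picture of what one of these rules looks like, here is a sketch of a "post_redirect"-style rule reconstructed from the regex in section 6 (my reconstruction, not the author's actual web.config):

        <rule name="post_redirect" stopProcessing="true">
          <!-- old Blogger post URL: /Blog/yyyy/mm/title-words.html -->
          <match url="^Blog/[0-9]{4}/[0-9]{2}/([0-9a-z-]+)\.html$" />
          <!-- new dasBlog post URL: /Blog/title-words.aspx -->
          <action type="Redirect" url="/Blog/{R:1}.aspx" redirectType="Permanent" />
        </rule>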

    Read the article

  • Self-signed certificates for a known community

    - by costlow
    Recently announced changes scheduled for Java 7 update 51 (January 2014) have established that the default security slider will require code signatures and the Permissions manifest attribute. Code signatures are a common practice recommended in the industry because they help determine that the code your computer will run is the same code that the publisher created. This post is written to help users that need to use self-signed certificates without involving a public Certificate Authority.

    The role of self-signed certificates within a known community

    You may still use self-signed certificates within a known community. The difference between self-signed and purchased-from-CA is that your users must import your self-signed certificate to indicate that it is valid, whereas Certificate Authorities are already trusted by default. This works for known communities, where people will trust that my certificate is mine, but it does not scale widely to systems I cannot actually contact or know about that will need to trust my certificate. Public Certificate Authorities are widely trusted already because they abide by many different requirements and frequent checks. An example of a known community would be students in a university class sharing their public certificates on a mailing list or web page, employees publishing on the intranet, or a system administrator rolling certificates out to end-users. Managed machines help here because you can automate the rollout, but they are not required; the major point is simply that people will trust and import your certificate.

    How to distribute self-signed certificates for a known community

    There are several steps required to distribute a self-signed certificate so that users will properly trust it:

    1. Create a public/private key pair for signing.
    2. Export your public certificate for others.
    3. Import your certificate onto machines that should trust you.
    4. Verify the work on a different machine.

    Creating a public/private key pair for signing

    Having a public/private key pair gives you the ability both to sign items yourself and to issue a Certificate Signing Request (CSR) to a certificate authority. Create your public/private key pair by following the instructions for creating key pairs. Every Certificate Authority that I looked at provided similar instructions, but for the sake of cohesiveness I will include the commands that I used here.

    Generate the key pair:

        keytool -genkeypair -alias erikcostlow -keyalg EC -keysize 571 -validity 730 -keystore javakeystore_keepsecret.jks

    Provide a good password for this file. The alias "erikcostlow" is my name and therefore easy to remember; substitute your name or something like "mykey". The keyalg of EC (Elliptic Curve) and keysize of 571 will give you a strong key. All keys are set to expire; two years (730 days) is a reasonable compromise between not-long-enough and too-long, and most public Certificate Authorities will sign something for one to five years. You will be placing your keys in javakeystore_keepsecret.jks. This file will contain private keys and therefore should not be shared: if someone else gets these private keys, they can impersonate your signature. Please be cautious about automated cloud backup systems and private key stores.

    Answer all the questions. It is important to provide good answers because you will stick with them for the "-validity" days that you specified above:

        What is your first and last name?
          [Unknown]:  First Last
        What is the name of your organizational unit?
          [Unknown]:  Line of Business
        What is the name of your organization?
          [Unknown]:  MyCompany
        What is the name of your City or Locality?
          [Unknown]:  City Name
        What is the name of your State or Province?
          [Unknown]:  CA
        What is the two-letter country code for this unit?
          [Unknown]:  US
        Is CN=First Last, OU=Line of Business, O=MyCompany, L=City, ST=CA, C=US correct?
          [no]:  yes
        Enter key password for <erikcostlow>
                (RETURN if same as keystore password):

    Verify your work:

        keytool -list -keystore javakeystore_keepsecret.jks

    You should see your new key pair.

    Exporting your public certificate for others

    Public Key Infrastructure relies on two simple concepts: the public key may be made public, and the private key must stay private. By exporting your public certificate, you are able to share it with others, who can then import the certificate to trust you.

        keytool -exportcert -keystore javakeystore_keepsecret.jks -alias erikcostlow -file erikcostlow.cer

    To verify this, you can open the .cer file by double-clicking it on most operating systems. It should show the information that you entered during the creation prompts. This is the file that you will share with others; they will use it to prove that artifacts signed by this certificate came from you. If you do not manage machines directly, place the certificate file in an area that people within the known community trust, such as an intranet page.

    Import the certificate onto machines that should trust you

    In order to trust the certificate, people within your known network must import your certificate into their keystores. The first step is to verify that the certificate is actually yours, which can be done through any out-of-band channel: email, phone, in person, etc. Known networks can usually do this. Then determine the right keystore:

    - For an individual user looking to trust another, the correct file is within that user's directory, e.g. USER_HOME\AppData\LocalLow\Sun\Java\Deployment\security\trusted.certs
    - For system-wide installations, Java's Certificate Authorities are in JAVA_HOME, e.g. C:\Program Files\Java\jre8\lib\security\cacerts

    File paths for Mac and Linux are included in the link above. Follow the instructions to import the certificate into the keystore:

        keytool -importcert -keystore THEKEYSTOREFROMABOVE -alias erikcostlow -file erikcostlow.cer

    In this case, I am still using my name for the alias because it's easy for me to remember. You may also use an alias of your company name.

    Scaling distribution of the import

    The easiest way to apply your certificate across many machines is to push the trusted.certs or cacerts file onto them. When doing this, watch out for any changes that people have made to this file on their machines:

    - trusted.certs: When publishing into user directories, your file will overwrite any keys that the user has added since the last update.
    - cacerts: It is best to re-run the import command with each installation rather than just overwriting the file. If you keep the same cacerts file between upgrades, you will overwrite any CAs that have been added or removed. By re-importing, you stay up to date with changes.

    Verify work on a different machine

    Verification is a way of checking on the client machine to ensure that it properly trusts signed artifacts after you have added your signing certificate. Many people have started using deployment rule sets. You can validate a deployment rule set by:

    1. Creating and signing the deployment rule set on the computer that holds the private key.
    2. Copying the deployment rule set onto a different machine where you have imported the signing certificate.
    3. Verifying that the Java Control Panel's security tab shows your deployment rule set.

    Verifying an individual JAR file or multiple JAR files

    You can test a certificate chain by using the jarsigner command:

        jarsigner -verify filename.jar

    If the output does not say "jar verified", run the following command to see why:

        jarsigner -verify -verbose -certs filename.jar

    Check the output for the term "CertPath not validated."
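
    The post shows how to verify a signed JAR but not the signing step itself; a minimal sketch, assuming the keystore and alias created above:

        # sign the JAR with the private key held in the keystore
        # (jarsigner will prompt for the keystore password)
        jarsigner -keystore javakeystore_keepsecret.jks filename.jar erikcostlow

        # then confirm the signature on the result
        jarsigner -verify filename.jar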

    Read the article
