Search Results

Search found 5589 results on 224 pages for 'rules and constraints'.


  • GNU Makefile: multiple outputs from single rule + preventing intermediate files from being deleted

    - by makesaurus
    This is sort of a continuation of the question from link text. The problem is that there is a rule generating multiple outputs from a single input, and the command is time-consuming, so we would prefer to avoid recomputation. Now there is an additional twist: we want to keep files from being deleted as intermediate files, and the rules involve wildcards to allow for parameters. The solution suggested was to set up the following rules:

        file-a.out: program file.in
            ./program file.in file-a.out file-b.out file-c.out
        file-b.out: file-a.out
            @
        file-c.out: file-b.out
            @

    Then calling make file-c.out creates all three files, and we avoid issues with running make in parallel with the -j switch. All fine so far. The problem is the following. Because the above solution sets up a chain in the DAG, make treats it differently: file-a.out and file-b.out are considered intermediate files, and by default they get deleted as unnecessary as soon as file-c.out is ready. A way of avoiding that was mentioned somewhere here: add file-a.out and file-b.out as dependencies of the special target .SECONDARY, which keeps them from being deleted. Unfortunately, this does not solve my case, because my rules use wildcard patterns; specifically, my rules look more like this:

        file-a-%.out: program file.in
            ./program $* file.in file-a-$*.out file-b-$*.out file-c-$*.out
        file-b-%.out: file-a-%.out
            @
        file-c-%.out: file-b-%.out
            @

    so that one can pass a parameter that gets included in the file name, for example by running make file-c-12.out. The solution the make documentation suggests is to add these implicit rules' targets to the list of dependencies of .PRECIOUS, thus keeping these files from being deleted. The solution with .PRECIOUS works, but it also prevents these files from being deleted when a rule fails and the files are incomplete. Is there any other way to make this work?
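    Not part of the original question, but one workaround worth noting: per the GNU Make manual, .SECONDARY with no prerequisites marks every target as secondary, so nothing is deleted as "intermediate" — and it takes no target list, so it sidesteps the pattern-rule limitation. Unlike .PRECIOUS it does not suppress cleanup of interrupted builds, and it can be paired with .DELETE_ON_ERROR to also remove the outputs of failed recipes. A minimal sketch:

        # nothing is auto-deleted as intermediate...
        .SECONDARY:
        # ...but targets whose recipes fail part-way are deleted
        .DELETE_ON_ERROR:

        file-a-%.out: program file.in
        	./program $* file.in file-a-$*.out file-b-$*.out file-c-$*.out
        file-b-%.out: file-a-%.out
        	@
        file-c-%.out: file-b-%.out
        	@

    The trade-off is that the empty .SECONDARY applies to every target in the makefile, not just these three patterns.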

    Read the article

  • Preserving Permalinks

    - by Daniel Moth
    One of the things that gets me on a rant is websites that break permalinks. If you have posted something somewhere and there is a public URL pointing to it, that URL should never ever return a 404. You are breaking all websites that ever linked to you, and you are breaking all search engine links to your content (that others will try to follow). It is a pet peeve of mine. So when I had to move my blog, obviously I would preserve the root URL (www.danielmoth.com/Blog/), but I also wanted to preserve every URL my blog has generated over the years. To be clear, our focus here is on the URL formatting, not the content migration, which I'll talk about in my next post. In this post, I'll describe my solution first and then what it solves.

    1. The IIS7 Rewrite Module and web.config
    There are a few ways you can map an old URL to a new one (so when requests to the old URL come in, they get redirected to the new one). The new blog engine I use (dasBlog) has built-in functionality to do that (Scott refers to it here). Instead, the way I chose to address the issue was to use the IIS7 rewrite module. The IIS7 rewrite module allows redirecting URLs based on pattern matching, regular expressions and, of course, hardcoded full URLs for things that don't fall into any pattern. You can configure it visually from IIS Manager using a handy dialog that allows testing patterns against input URLs. Here is what mine looked like after configuring a few rules: [screenshot not preserved]. To learn more about this technology check out this video, the reference page and this overview blog post; all 3 pages have a collection of related resources at the bottom worth checking out too. All the visual configuration ends up in a web.config file at the root folder of your website. If you are on a shared hosting service, probably the only way you can use the Rewrite Module is by directly editing the web.config file. Next, I'll describe the URLs I had to map and how that manifested itself in the web.config file. What I did was create the rules locally using the GUI, and then took the generated web.config file and uploaded it to my live site. You can view my web.config here.

    2. Monthly Archives
    Observe the difference between the way the two blog engines generate this type of URL:
    Blogger: /Blog/2004_07_01_mothblog_archive.html
    dasBlog: /Blog/default,month,2004-07.aspx
    In my web.config file, the rule that deals with this is the one named "monthlyarchive_redirect".

    3. Categories
    Observe the difference between the way the two blog engines generate this type of URL:
    Blogger: /Blog/labels/Personal.html
    dasBlog: /Blog/CategoryView,category,Personal.aspx
    In my web.config file, the rule that deals with this is the one named "category_redirect".

    4. Posts
    Observe the difference between the way the two blog engines generate this type of URL:
    Blogger: /Blog/2004/07/hello-world.html
    dasBlog: /Blog/Hello-World.aspx
    In my web.config file, the rule that deals with this is the one named "post_redirect". Note: The decision was taken to use dasBlog URLs that do not include the date info (see the description of my Appearance settings). If we included the date info, it would have to include the day part, which Blogger did not generate; that makes it impossible to redirect correctly and to have a single permalink for blog posts moving forward. An implication of this decision is that no two blog posts can have the same title. The tool I will describe in my next post (inelegantly) deals with duplicates, but not with triplicates or higher.
    5. Unhandled by a generic rule
    Unfortunately, the two blog engines use different rules for generating URLs for blog posts. Most of the time the conversion is as simple as the example of the previous section, where a post titled "Hello World" generates a URL with the words separated by a hyphen. Sometimes that is not the case, for example:
    /Blog/2006/05/medc-wrap-up.html became /Blog/MEDC-Wrapup.aspx
    or
    /Blog/2005/01/best-of-moth-2004.html became /Blog/Best-Of-The-Moth-2004.aspx
    or
    /Blog/2004/11/more-windows-mobile-2005-details.html became /Blog/More-Windows-Mobile-2005-Details-Emerge.aspx
    In short, Blogger does not add words to the title beyond ~39 characters, it drops some words from the title generation (e.g. a, an, on, the), and it preserves hyphens that appear in the title. For this reason, we need to detect these and explicitly list them for redirects (no regular expression can help here, because the full set of rules is not listed anywhere). In my web.config file, the rule that deals with this is the one named "Redirect rule1 for FullRedirects" combined with the rewriteMap named "StaticRedirects". Note: The tool I describe in my next post will detect all the URLs that need to be explicitly redirected and will list them in a file ready for you to copy to your web.config rewriteMap.

    6. C# code doing the same as the web.config
    I wrote some naive code that does the same thing as the web.config: given a string, it will return a new string converted according to the 3 rules above. It does not take into account the 4th case, where an explicit hard-coded conversion is needed (the tool I present in the next post does take that into account).

        static string REGEX_post_redirect = "[0-9]{4}/[0-9]{2}/([0-9a-z-]+).html";
        static string REGEX_category_redirect = "labels/([_0-9a-z-% ]+).html";
        static string REGEX_monthlyarchive_redirect = "([0-9]{4})_([0-9]{2})_[0-9]{2}_mothblog_archive.html";

        static string Redirect(string oldUrl)
        {
            GroupCollection g;
            if (RunRegExOnIt(oldUrl, REGEX_post_redirect, 2, out g))
                return string.Concat(g[1].Value, ".aspx");
            if (RunRegExOnIt(oldUrl, REGEX_category_redirect, 2, out g))
                return string.Concat("CategoryView,category,", g[1].Value, ".aspx");
            if (RunRegExOnIt(oldUrl, REGEX_monthlyarchive_redirect, 3, out g))
                return string.Concat("default,month,", g[1].Value, "-", g[2].Value, ".aspx");
            return string.Empty;
        }

        static bool RunRegExOnIt(string toRegEx, string pattern, int groupCount, out GroupCollection g)
        {
            if (pattern.Length == 0)
            {
                g = null;
                return false;
            }
            g = new Regex(pattern, RegexOptions.IgnoreCase | RegexOptions.Compiled).Match(toRegEx).Groups;
            return (g.Count == groupCount);
        }

    Comments about this post welcome at the original blog.
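    As an illustration (my reconstruction of what such a generated rule looks like, not copied from the author's actual web.config), a monthly-archive redirect in IIS7 URL Rewrite Module syntax would be along these lines:

        <rule name="monthlyarchive_redirect" stopProcessing="true">
          <!-- Blogger: /Blog/2004_07_01_mothblog_archive.html -->
          <match url="^Blog/([0-9]{4})_([0-9]{2})_[0-9]{2}_mothblog_archive\.html$" />
          <!-- dasBlog: /Blog/default,month,2004-07.aspx -->
          <action type="Redirect" url="Blog/default,month,{R:1}-{R:2}.aspx" redirectType="Permanent" />
        </rule>

    The {R:n} tokens reference the capture groups of the match pattern, which is what lets one rule cover every monthly archive.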

    Read the article

  • Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products

    - by raul.goycoolea
    The Telecom industry continues to evolve through disruptive products, uncertain markets, shorter product lifecycles and convergence of technologies. Today's market has moved from network centric to consumer centric and focuses primarily on the customer experience. This has resulted in several product management challenges, such as increased complexity and volume of offerings, creating product variants, accelerating time-to-market, the ability to provide multiple product views for varied stakeholders, leveraging OSS intelligence in the BSS layer, product co-creation, and increasing audit and security concerns for service providers. This document discusses how enterprise product management, enabled by PLM-based product catalogue solutions, helps launch next generation products rapidly in the context of the Telecommunications industry.

    1.0 Introduction

    Figure 1: Business Scenario

    Modern business demands the launch of complex products in a very short timeframe, and effecting changes in the price plan faster without IT intervention. One of the key transformation initiatives companies are focusing on is product management transformation and operational efficiency improvement. As part of these initiatives, companies are investing in best-in-class COTS-based product management solutions developed on industry-wide standards.

    The new COTS packages are planned to integrate with existing or new B/OSS systems to provide a strategic end-to-end agile solution for reduced time-to-market and order journey time. In addition, system rationalization is being undertaken to phase out legacy systems and migrate to strategic systems.

    2.0 An Overview of Product Management in Telecom

    Product data in telecom is multi-dimensional and difficult to manage. It has increased significantly due to the complexity of the product, product offerings on the converged network, increased volume of offerings, bundled offering structures and ever increasing regulatory requirements.

    In addition, the shrinking product lifecycle in telecom makes it difficult to manage the dynamic product data. Mergers and acquisitions coupled with organic growth pose major challenges in product portfolio management, and are a roadblock in the journey towards becoming an agile organization.

    Figure 2: Complexity in Product Management

    'Network Technology' is the new dimension in telecom product management, where the same products are realized through different networks, i.e., siloed network versus converged network; consequently, the product solution differs.

    Figure 3: Current Scenario - Pain Points in Product Management

    The major business implications arising out of the current scenario are slow time-to-market and an inefficient process that affects innovation.

    3.0 Transformation of Next Generation Product Management

    Companies must focus on their product management transformation journey in the areas of:

    · Management of a single truth of product information across the organization/geographies, which is currently managed in heterogeneous systems
    · Management of the Intellectual Property (IP) on the product concept, and partnership in the design of discrete components to integrate into the system
    · Leveraging structured and unstructured product data within the extended enterprise to extract consumer insights and drive innovation
    · Management of effective operational separation to comply with regulatory bodies
    · Reuse of existing designs, adding relevant features such as value-added services to enable effective product bundling

    Figure 4: Next generation needs

    PLM-based Enterprise Product Catalogue solutions efficiently address the above requirements and act as an enabler of product management transformation and rapid product launch.

    4.0 PLM-based Enterprise Product Management

    Figure 5: PLM-based Enterprise Product Mastering

    Enterprise Product Management (EPM) enables the business to manage complex product attributes and data in complex environments. Product Mastering helps create a 'single view' of the product by creating a business-driven, IT-supported environment where a global 'single truth record' is created, managed and reused.

    4.1 The Business Case for Telco PLM-based Solutions for Enterprise Product Management

    · Telco PLM-based Product Mastering solutions provide a centralized authoring environment for product definition and control of all product data and rules
    · PLM packages are designed to support multiple perspectives of product data (ordering perspective, billing perspective, provisioning perspective)
    · They maintain relationships/links between different elements of the entire product definition
    · Telco PLM packages are specialized in next generation lifecycle management requirements of products (such as revision and state management, test and release management, role management and impact analysis)
    · They take into consideration all aspects of OSS product requirements, compared to CRM product catalogue solutions where the product data managed is mostly order oriented and transactional
    · The new breed of Telco PLM packages is designed with 'open' standards such as SID and eTOM; they are interoperable and support integration frameworks such as subscription and notification
    · Telco PLM packages have developed good collaboration frameworks to integrate suppliers and partners into the product development value chain

    4.2 Architectures/Approaches for Product Mastering using Telco PLM Systems

    4.2.a Single Central Product Management (Mastering) Approach

    Figure 6: Single Central Product Management (Master) Approach

    This approach is implemented across verticals such as aerospace and automotive. It focuses on a physically centralized product master on which the other sources depend. The product definition data (product bundles, service bundles, price plans, offers and discounts, product configuration rules and market campaigns) is created and maintained physically in a centralized environment. In addition, the product definition/authoring environment is centralized. The existing legacy product definition data available in the CRM product catalogue, the billing catalogue and the legacy product catalogue is migrated to the centralized PLM-based Enterprise Product Management solution.

    Architectural changes must be made in the existing business landscape of applications that create and revise data, because the applications have to refer to the central repository for approvals and validation of product configurations. This is achieved by modifying how the applications write data, or by adapting the applications to use the rules to be managed and published.

    Complete product configuration validation is done in the enterprise/central product catalogue, and the final configuration is sent to the B/OSS systems through an SOA-compliant product distribution architecture. This approach/architecture enables greater control in terms of product data management and product data governance.

    4.2.b Federated Product Management (Mastering) Architecture

    Figure 7: Federated Product Management (Mastering) Architecture

    In the federated product mastering approach, the basic unique product definition data (product id, description, product hierarchy, basic price plans and simple product design rules) is created and maintained centrally, while the advanced product definition (product bundling, promotions, offers and discount plans) is created in the respective downstream OSS systems.

    For example, basic product definitions such as attributes, product hierarchy and basic price plans are created and maintained in the enterprise/central product reference catalogue and distributed to downstream OSS systems. The respective downstream OSS systems build product bundles, promotions and advanced price plans over the basic product definition and master the advanced product definition. A central reference database accesses the other source product master data and assembles a point-in-time consolidated view of the product. This approach is typically adopted in some merger and acquisition scenarios where there is a low probability of a central physical authority managing the data. In addition, the migration effort in this case is minimal and there are no big architectural changes to the organization's application landscape. However, this approach will not result in better product data management and data governance.

    5.0 Customer Scenario - Before EPC Deployment

    A leading global telecommunications service provider wanted to launch quad play and triple play service offerings in the shortest possible lead time. The service provider was offering broadband and VoIP services to customers. The company wanted to reuse a majority of the broadband services and price plans and bundle them with new wireless and IPTV services for quad play and triple play. The challenges in launching the new service offerings were:

    Figure 8: Triple Play Plan

    · Broadband product data was stored in multiple product catalogues (CRM catalogue, billing catalogue, spreadsheets)
    · Product managers spent a lot of time performing tasks involving duplication or re-keying of data; manual effort caused errors, cost and time over-runs
    · There was no effective product and price data governance mechanism; price change issues arising from the lack of data consistency across systems resulted in leakage of customer value and revenue
    · Product data had re-usability issues and was not in a structured format, resulting in uncontrolled product portfolio creation and product management issues
    · The lack of an enterprise product model resulted in product distribution challenges and thus delays in product launch
    · Designers were constrained by existing legacy product management solutions in modeling product/service requirements and product configuration rules such as upgrading, downgrading and cross selling

    5.1 Customer Scenario - After EPC Deployment

    Figure 9: SOA-based end-to-end EPC Solution

    The company deployed a PLM-based Enterprise Product Catalogue solution to launch the quad play service after evaluating various product catalogues. The broadband product offering, service and price data were migrated to the new system, and the product and price plan hierarchy for the new offerings was created using the entities defined in the Enterprise Product Model. Supplier product catalogue data, such as routers and set-top boxes, was loaded onto the new solution through an SOA-based web service. Price plans and configuration rules were built in the new system. The validated final product configurations were extracted from the product catalogue in SID format and distributed to the downstream B/OSS systems through exposed SOA-based web services. The transformations required for the B/OSS systems were handled by the transformation layer that is part of the solution.

    6.0 How PLM Enabled Product Management Transformation

    Figure 10: Product Management Transformation

    The PLM-based Product Catalogue solution helped the customer reduce the product launch cycle time by 30% and enabled the transformation of product management for next generation services.

    7.0 Conclusion

    On the one hand, the telecom industry is undergoing changes due to disruptions, uncertain product markets and the increased complexity of products; on the other hand, ARPU is decreasing year-on-year. Communications Service Providers are embarking on convergence, bundled service offerings, flexibility to cross-sell and up-sell, introducing new value-added services, and leveraging Web 2.0 concepts and network capabilities. Consequently, large scale IT transformation initiatives to improve ARPU, supporting network and business transformations, are a business imperative. Product management has become a focus area. Companies are investing in best-in-class COTS solutions to reduce time-to-market, ensure rapid service delivery and improve operational efficiency. An efficient PLM-based enterprise product mastering solution plays a key role in achieving zero touch automation and rapid product launch.

    References:

    1. Preston G. Smith and Donald G. Reinertsen, "Developing Products in Half the Time", Van Nostrand Reinhold.
    2. John G. Innes, "Achieving Successful Product Change", Pitman Publishing.
    3. D. T. Pham and R. M. Setchi (16 Jan 2001), "Authoring environment for documentation development", University of Wales Cardiff, U.K., Proceedings of the Institution of Mechanical Engineers, Vol. 215, Part B.
    4. Oracle Product Hub for Communications: http://www.oracle.com/us/products/applications/master-data-management/product-hub-082059.html

    Read the article

  • Wildcard mapping in IIS 7.0 not working

    - by jmoney
    I can't seem to get the ASP.NET engine to handle ALL wildcard mappings. When I try to make a request that is supposed to be handled by the ASP.NET engine, I get a 404 error from the StaticFile handler. Here is the content of my web.config file; you will notice that the last entry contains the wildcard mapping rule.

        <handlers>
        <clear />
        <add name="LanapCaptchaHandler" path="LanapCaptcha.aspx" verb="*" type="Lanap.BotDetect.CaptchaHandler, Lanap.BotDetect" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" />
        <add name="ScriptHandlerFactory" path="*.asmx" verb="*" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" />
        <add name="ScriptHandlerFactoryAppServices" path="*_AppService.axd" verb="*" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" />
        <add name="ScriptResource" path="ScriptResource.axd" verb="GET,HEAD" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" />
        <add name="PHP5" path="*.php" verb="*" type="" modules="IsapiModule" scriptProcessor="C:\php\php5isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="bitness32" responseBufferLimit="4194304" />
        <add name="rules-Integrated" path="*.rules" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" />
        <add name="rules-ISAPI-2.0" path="*.rules" verb="*" type="" modules="IsapiModule" scriptProcessor="C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="4194304" />
        <add name="rules-64-ISAPI-2.0" path="*.rules" verb="*" type="" modules="IsapiModule" scriptProcessor="C:\Windows\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness64" responseBufferLimit="4194304" />
        <add name="xoml-Integrated" path="*.xoml" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" />
        <add name="xoml-ISAPI-2.0" path="*.xoml" verb="*" type="" modules="IsapiModule" scriptProcessor="C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="4194304" />
        <add name="xoml-64-ISAPI-2.0" path="*.xoml" verb="*" type="" modules="IsapiModule" scriptProcessor="C:\Windows\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness64" responseBufferLimit="4194304" />
        <add name="svc-ISAPI-2.0-64" path="*.svc" verb="*" type="" modules="IsapiModule" scriptProcessor="%SystemRoot%\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness64" responseBufferLimit="4194304" />
        <add name="svc-ISAPI-2.0" path="*.svc" verb="*" type="" modules="IsapiModule" scriptProcessor="%SystemRoot%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="4194304" />
        <add name="svc-Integrated" path="*.svc" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" />
        <add name="ASPClassic" path="*.asp" verb="GET,HEAD,POST" type="" modules="IsapiModule" scriptProcessor="%windir%\system32\inetsrv\asp.dll" resourceType="File" requireAccess="Script" allowPathInfo="false" preCondition="" responseBufferLimit="4194304" />
        <add name="SecurityCertificate" path="*.cer" verb="GET,HEAD,POST" type="" modules="IsapiModule" scriptProcessor="%windir%\system32\inetsrv\asp.dll" resourceType="File" requireAccess="Script" allowPathInfo="false" preCondition="" responseBufferLimit="4194304" />
        <add name="ISAPI-dll" path="*.dll" verb="*" type="" modules="IsapiModule" scriptProcessor="" resourceType="File" requireAccess="Execute" allowPathInfo="true" preCondition="" responseBufferLimit="4194304" />
        <add name="TraceHandler-Integrated" path="trace.axd" verb="GET,HEAD,POST,DEBUG" type="System.Web.Handlers.TraceHandler" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" />
        <add name="WebAdminHandler-Integrated" path="WebAdmin.axd" verb="GET,DEBUG" type="System.Web.Handlers.WebAdminHandler" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" />
        <add name="AssemblyResourceLoader-Integrated" path="WebResource.axd" verb="GET,DEBUG" type="System.Web.Handlers.AssemblyResourceLoader" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" />
        <add name="PageHandlerFactory-Integrated" path="*.aspx" verb="GET,HEAD,POST,DEBUG" type="System.Web.UI.PageHandlerFactory" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" />
        <add name="SimpleHandlerFactory-Integrated" path="*.ashx" verb="GET,HEAD,POST,DEBUG" type="System.Web.UI.SimpleHandlerFactory" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" />
        <add name="HttpRemotingHandlerFactory-rem-Integrated" path="*.rem" verb="GET,HEAD,POST,DEBUG" type="System.Runtime.Remoting.Channels.Http.HttpRemotingHandlerFactory, System.Runtime.Remoting, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" />
        <add name="HttpRemotingHandlerFactory-soap-Integrated" path="*.soap" verb="GET,HEAD,POST,DEBUG" type="System.Runtime.Remoting.Channels.Http.HttpRemotingHandlerFactory, System.Runtime.Remoting, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" />
        <add name="AXD-ISAPI-2.0-64" path="*.axd" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness64" responseBufferLimit="0" />
        <add name="PageHandlerFactory-ISAPI-2.0-64" path="*.aspx" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness64" responseBufferLimit="0" />
        <add name="SimpleHandlerFactory-ISAPI-2.0-64" path="*.ashx" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness64" responseBufferLimit="0" />
        <add name="WebServiceHandlerFactory-ISAPI-2.0-64" path="*.asmx" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness64" responseBufferLimit="0" />
        <add name="HttpRemotingHandlerFactory-rem-ISAPI-2.0-64" path="*.rem" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness64" responseBufferLimit="0" />
        <add name="HttpRemotingHandlerFactory-soap-ISAPI-2.0-64" path="*.soap" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness64" responseBufferLimit="0" />
        <add name="AXD-ISAPI-2.0" path="*.axd" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="0" />
        <add name="PageHandlerFactory-ISAPI-2.0" path="*.aspx" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="0" />
        <add name="SimpleHandlerFactory-ISAPI-2.0" path="*.ashx" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="0" />
        <add name="WebServiceHandlerFactory-ISAPI-2.0" path="*.asmx" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="0" />
        <add name="HttpRemotingHandlerFactory-rem-ISAPI-2.0" path="*.rem" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="0" />
        <add name="HttpRemotingHandlerFactory-soap-ISAPI-2.0" path="*.soap" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="0" />
        <add name="CGI-exe" path="*.exe" verb="*" type="" modules="CgiModule" scriptProcessor="" resourceType="File" requireAccess="Execute" allowPathInfo="true" preCondition="" responseBufferLimit="4194304" />
        <add name="TRACEVerbHandler" path="*" verb="TRACE" type="" modules="ProtocolSupportModule" scriptProcessor="" resourceType="Unspecified" requireAccess="None" allowPathInfo="false" preCondition="" responseBufferLimit="4194304" />
        <add name="OPTIONSVerbHandler" path="*" verb="OPTIONS" type="" modules="ProtocolSupportModule" scriptProcessor="" resourceType="Unspecified" requireAccess="None" allowPathInfo="false" preCondition="" responseBufferLimit="4194304" />
        <add name="StaticFile" path="*" verb="*" type="" modules="StaticFileModule,DefaultDocumentModule,DirectoryListingModule" scriptProcessor="" resourceType="Either" requireAccess="Read" allowPathInfo="false" preCondition="" responseBufferLimit="4194304" />
        <add name="WILDCARD MAPPING 32 BIT" path="*" verb="*" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="None" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="4194304" />
        </handlers>

    Read the article

  • Joining Tables Based on Foreign Keys

    - by maestrojed
    I have a table with a lot of fields that are foreign keys referencing related tables. I am writing a script in PHP that will do the db queries. When I query this table for its data, I need to know the values associated with these keys, not the keys themselves. How do most people go about this? A 101 way to do this would be to query this table for its data, including the foreign keys, and then query the related tables to get each key's value. This could be a lot of queries (~10). Question 1: I think I could write 1 query with a bunch of joins. Would that be better? This approach also requires the querying script to know which table fields are foreign keys. Since I have many tables like this, but all with different fields, this makes writing nice generic functions hard. MySQL InnoDB tables allow for foreign key constraints. I know the database has these set up correctly. Question 2: What about the idea of querying the table, identifying what the constraints are, and then matching them up using whatever process I decide on from Question 1? I like this idea, but I never see it being used in code, which makes me think it's not a good idea for some reason. I would use something like SHOW CREATE TABLE tbl_name; to find what constraints/relationships exist for that table. Thank you for any suggestions or advice.
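    Not from the original question, but a sketch of the machine-readable alternative to parsing SHOW CREATE TABLE output: InnoDB foreign keys are exposed through information_schema, so a script can discover the relationships with an ordinary query (the schema and table names below are placeholders):

        -- list each FK column of `mytable` with the table/column it references
        SELECT COLUMN_NAME, REFERENCED_TABLE_NAME, REFERENCED_COLUMN_NAME
        FROM information_schema.KEY_COLUMN_USAGE
        WHERE TABLE_SCHEMA = 'mydb'
          AND TABLE_NAME = 'mytable'
          AND REFERENCED_TABLE_NAME IS NOT NULL;

    The result set can then drive generic construction of the JOINs from Question 1 without hard-coding which fields are foreign keys.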

    Read the article

  • Oracle manually add an FK constraint

    - by Oxymoron
    Alright, since a client wants to automate a certain process, which includes creating a new key structure in a LIVE database, I need to create relations between table columns. Now, I've found the views ALL_CONS_COLUMNS and USER_CONSTRAINTS, which hold information about constraints. If I were to manually create constraints by inserting into these tables, I should be able to recreate the original constraints. My question: are there any more tables I should look into? Do you have alternate suggestions, as this sounds VERY dirty and error prone to begin with? Current modus operandi:
    - Create a new column in each table for the PK;
    - Generate a guid for this PK;
    - Create a new column in each table for the FKs;
    - Fetch the guid associated with the FK;
    ....... done so far ......
    - Add new constraint based on the old one;
    - Remove old constraint;
    - Rename new columns.
    This is kind of dodgy and I'd rather change my method; any ideas would be helpful. To put it differently: the client wants to change the key structure from int to guid on a live database. What's the best way to approach this?
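    Not from the original question, but for reference: the dictionary views named above are read-only, so constraints can't be recreated by inserting into them; DDL is the supported route. A minimal sketch of rebuilding the relationship over the new guid columns (all table, column, and constraint names are placeholders):

        -- the parent's new guid column needs a PK (or unique) constraint first
        ALTER TABLE parent_table
          ADD CONSTRAINT pk_parent PRIMARY KEY (guid_pk);

        -- then the child can reference it
        ALTER TABLE child_table
          ADD CONSTRAINT fk_child_parent
          FOREIGN KEY (parent_guid)
          REFERENCES parent_table (guid_pk);

    Generating these statements from USER_CONSTRAINTS/ALL_CONS_COLUMNS (read, not written) keeps the old definitions as the source of truth.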

    Read the article

  • Accessing and inheriting Windows Message for other Windows Message in Delphi

    - by HX_unbanned
    I am using WM_SYSCOMMAND messages to modify the caption bar button (Maximize / Minimize) behavior, and a recent update required the use of WM_NCHITTEST, but I do not want to split these two related messages into multiple procedures because of lengthy code. Can I access a private declaration (message handler) from another message handler? And if I can - how do I do it?

        procedure TForm1.WMNCHitTest(var Msg: TWMNCHitTest);
        begin
          SendMessage(Handle, HTCAPTION, WM_NCHitTest, 0); // or other wParam or lParam ????
        end;

        procedure TForm1.WMSysCommand;
        begin
          // if command is Maximize, or the message is a caption bar click
          if (Msg.CmdType = SC_MAXIMIZE or 61488) or (Msg.Result = htCaption or 2) then
          begin
            if CheckWin32Version(6, 0) then
              Constraints.MaxHeight := 507
            else
              Constraints.MaxHeight := 499;
            Constraints.MaxWidth := 0;
          end
          else if (Msg.CmdType = SC_MINIMIZE or 61472) or (Msg.Result = htCaption or 2) then // if command is Minimize
          begin
            if (EnsureRange(Width, 252, 510) >= (510 / 2)) then
              PreviewOpn.Caption := '<'
            else
              PreviewOpn.Caption := '>';
          end;
          DefaultHandler(Msg); // reset message handling to default (application)
        end;

    So... am I thinking correctly and simply don't know the right commands, or am I thinking total bullsh*t? Regards. Thanks for any help...
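    Not from the original question, but a sketch of the usual pattern for sharing logic between two message handlers: factor the common code into a private method and call it from both handlers, instead of re-sending messages. The helper name below is made up for illustration:

        procedure TForm1.ApplyCaptionConstraints;  // hypothetical shared helper
        begin
          if CheckWin32Version(6, 0) then
            Constraints.MaxHeight := 507
          else
            Constraints.MaxHeight := 499;
          Constraints.MaxWidth := 0;
        end;

        procedure TForm1.WMSysCommand(var Msg: TWMSysCommand);
        begin
          // WM_SYSCOMMAND packs extra bits into CmdType; mask before comparing
          if (Msg.CmdType and $FFF0) = SC_MAXIMIZE then
            ApplyCaptionConstraints;
          inherited;
        end;

        procedure TForm1.WMNCHitTest(var Msg: TWMNCHitTest);
        begin
          inherited;
          if Msg.Result = HTCAPTION then
            ApplyCaptionConstraints;  // same logic, no message re-sending needed
        end;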

    Read the article

  • Java / Spring MVC 3 validation of an email address

    - by Tim
    I have a Java backend with Spring MVC, and I am using validation in this way on my domain object for an email address:

        import javax.validation.constraints.NotNull;
        import javax.validation.constraints.Pattern;
        import javax.validation.constraints.Size;
        ...
        @NotNull
        @Size(min = 1, max = 100)
        @Pattern(regexp="^([a-zA-Z0-9\\-\\.\\_]+)'+'(\\@)([a-zA-Z0-9\\-\\.]+)'+'(\\.)([a-zA-Z]{2,4})$")
        private String email;

    But all I get with these lines of code

        Set<ConstraintViolation<Person>> failures = validator.validate(personObject);
        ...
        Map<String, String> failureMessages = new HashMap<String, String>();
        for (ConstraintViolation<Person> failure : failures) {
            failureMessages.put(failure.getPropertyPath().toString(), failure.getMessage());
            System.out.println(failure.getPropertyPath().toString() + " - " + failure.getMessage());
        }

    is this on the console:

        email - must match "^([a-zA-Z0-9\\-\\.\\_]+)'+'(\\@)([a-zA-Z0-9\\-\\.]+)'+'(\\.)([a-zA-Z]{2,4})$"

    but I have [email protected] as the email address, so the regexp does not match. So I have two problems: What's wrong here? And how can I define an error message of my own? Because displaying this to the user is not a good thing :-) Thank you in advance for your help, and best regards.
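    Two observations, not from the original question. First, the '+' sequences inside the regexp look like leftovers from Java string concatenation pasted into a single literal, so the pattern demands a literal '+' in the address and can never match a normal email. Second, Bean Validation lets each annotation carry its own message via the message attribute. A hedged sketch (the regex shown is a common simplified email pattern, not an authoritative one):

        @NotNull
        @Size(min = 1, max = 100)
        @Pattern(regexp = "^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,4}$",
                 message = "Please enter a valid email address")
        private String email;

    With that in place, failure.getMessage() returns the custom text instead of the raw "must match ..." default.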

    Read the article

  • ZFS/Btrfs/LVM2-like storage with advanced features on Linux?

    - by Easter Sunshine
    I have 3 identical internal 7200 RPM SATA hard disk drives on a Linux machine. I'm looking for a storage set-up that will give me all of this:
    - Different data sets (filesystems or subtrees) can have different RAID levels, so I can choose performance, space overhead, and risk trade-offs differently for different data sets while having a small number of physical disks (very important data can be 3xRAID1, important data can be 3xRAID5, unimportant reproducible data can be 3xRAID0).
    - If each data set has an explicit size or size limit, the ability to grow and shrink the size limit (offline if need be).
    - Avoids out-of-kernel modules.
    - R/W or read-only COW snapshots. If they are block-level snapshots, the filesystem should be synced and quiesced during a snapshot.
    - Ability to add physical disks and then grow/redistribute RAID1, RAID5, and RAID0 volumes to take advantage of the new spindle and make sure no spindle is hotter than the rest (e.g., in NetApp, growing a RAID-DP raid group by a few disks will not balance the I/O across them without an explicit redistribution).
    Not required but nice-to-haves:
    - Transparent compression, per-file or subtree. Even better if, like NetApp, it analyzes the data first for compressibility and only compresses compressible data.
    - Deduplication that doesn't have huge performance penalties or require obscene amounts of memory (NetApp does scheduled deduplication on weekends, which is good).
    - Resistance to silent data corruption like ZFS (this is not required, because I have never seen ZFS report any data corruption on these specific disks).
    - Storage tiering, either automatic (based on caching rules) or user-defined rules (yes, I have all-identical disks now, but this will let me add a read/write SSD cache in the future). If it's user-defined rules, these rules should have the ability to promote to SSD on a file level and not a block level.
    - Space-efficient packing of small files.
    I tried ZFS on Linux, but the limitations were:
    - Upgrading is additional work because the package is in an external repository and is tied to specific kernel versions; it is not integrated with the package manager.
    - Write IOPS do not scale with the number of devices in a raidz vdev.
    - Cannot add disks to raidz vdevs.
    - Cannot have select data on RAID0 to reduce overhead and improve performance, without additional physical disks or giving ZFS a single partition of the disks.
    ext4 on LVM2 looks like an option, except I can't tell whether I can shrink, extend, and redistribute RAID-type logical volumes onto new spindles (of course, I can experiment with LVM on a bunch of files). As far as I can tell, it doesn't have any of the nice-to-haves, so I was wondering if there is something better out there. I did look at LVM dangers and caveats, but then again, no system is perfect.

    Read the article

  • broken apache .htaccess (mod_rewrite)

    - by Tim
    Hey there, I'm running into an Apache mod_rewrite configuration issue on one of our machines. Has anyone encountered / overcome any of these issues? URL1 ( http://www.uppereast.com ) is not being redirected to URL2 ( http://www.nyclocalliving.com ). This definitely worked in my test environment, where a localhost address was rewritten to URL2 ( RewriteRule ^http://upe.localhost$ http://www.nyclocalliving.com ). I'm trying to get all of the redirect rules working ( 2200+ ), but the 'http://www.nyclocalliving.com' site encounters a server error if I use 1000 or more rules.

    A) .htaccess file - I've tried the simplest approach, which worked in a local environment:

        75 # Various rewrite rules.
        76 <IfModule mod_rewrite.c>
        77   RewriteEngine on
        78
        79   # BEGIN new URL Mapping rules
        80   #RewriteRule ^http://www.uppereast.com/$ http://www.nyclocalliving.com
        ...
        2307   #RewriteRule ^http://www.uppereast.com/zipcodechange.html$ http://www.nyclocalliving.com/zip-code-change

    fig. 1

    B) /var/log/httpd/error_log file - there are these seg fault errors when I enable the first rule ( line 80 ); no error logs otherwise:

        1893 [Fri Sep 25 17:53:46 2009] [notice] Digest: generating secret for digest authentication ...
        1894 [Fri Sep 25 17:53:46 2009] [notice] Digest: done
        1895 [Fri Sep 25 17:53:46 2009] [notice] Apache/2.2.3 (CentOS) configured -- resuming normal operations
        1896 [Fri Sep 25 17:53:47 2009] [notice] child pid 29774 exit signal Segmentation fault (11)
        1897 [Fri Sep 25 17:53:47 2009] [notice] child pid 29775 exit signal Segmentation fault (11)
        1898 [Fri Sep 25 17:53:47 2009] [notice] child pid 29776 exit signal Segmentation fault (11)
        1899 [Fri Sep 25 17:53:47 2009] [notice] child pid 29777 exit signal Segmentation fault (11)
        1900 [Fri Sep 25 17:53:47 2009] [notice] child pid 29778 exit signal Segmentation fault (11)
        1901 [Fri Sep 25 17:53:47 2009] [notice] child pid 29779 exit signal Segmentation fault (11)

    fig. 2

    C) Some more debug information from the shell; mod_rewrite is loaded, and this is the machine architecture:

        1 # apachectl -t -D DUMP_MODULES | more
        2 Loaded Modules:
        3  core_module (static)
        4  ...
        5  rewrite_module (shared)

        1 # uname -a
        2 Linux RegionalWeb 2.6.24-23-xen #1 SMP Mon Jan 26 03:09:12 UTC 2009 x86_64 x86_64 x86_64 GNU/Linux

    fig. 3

    I looked into some previous posts (http://serverfault.com/questions/18744/htaccess-not-working-modrewrite), but didn't find a solution for this. I'm sure there's a small switch somewhere that I'm missing. Thanks in advance, Tim
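    An observation not from the original post: in a per-directory (.htaccess) context, mod_rewrite matches the RewriteRule pattern against the URL path only, never against "http://host/...", so a pattern of the form ^http://www.uppereast.com/$ cannot match regardless of rule count; the host has to be tested with a RewriteCond. A minimal sketch of the first rule in that style:

        RewriteEngine on
        # the host goes in a condition; the rule pattern only sees the path
        RewriteCond %{HTTP_HOST} ^www\.uppereast\.com$ [NC]
        RewriteRule ^$ http://www.nyclocalliving.com/ [R=301,L]

    For thousands of one-to-one mappings, a RewriteMap defined in the main server config (it is not available in .htaccess) is the usual way to avoid giant rule lists.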

    Read the article

  • Is it possible to allow port forwarding only for specific public IP addresses?

    - by adopilot
    I have a FreeBSD router hosting a public IP address, and I am using ipnat.rules to configure port forwarding from the public network into my private network. Now I am wondering: can I restrict the port forwarding so that only specific public IP addresses can pass through it? What I want is for only my specific public IP addresses to be able to reach my network on specific ports. Here is what my ipnat.rules file currently looks like:

        rdr fxp0 217.199.XXX.XXX/32 port 7900 -> 192.168.1.12 port 80 tcp
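    Not from the original question, but a sketch of one common approach with the IPFilter suite: inbound packets are translated by ipnat before they reach the ipf filter rules, so the filter can match on the translated internal address and permit only chosen sources (the 203.0.113.10 address below is a placeholder):

        # ipf.rules: permit only one source to reach the forwarded service
        pass in quick on fxp0 proto tcp from 203.0.113.10 to 192.168.1.12 port = 80 flags S keep state
        block in quick on fxp0 proto tcp from any to 192.168.1.12 port = 80

    This keeps the rdr rule unchanged and moves the whitelist into the filter layer, where per-source decisions belong.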

    Read the article

  • Debugger for Iptables

    - by chris_l
    Hi, I'm looking for an easy way to follow a packet through the iptables rules. This is not so much about logging, because I don't want to log all traffic (and I only want to have LOG targets for very few rules). Something like Wireshark for iptables. Or maybe even something similar to a debugger for a programming language. Thanks, Chris
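    Not from the original question, but worth knowing: iptables has a TRACE target in the raw table that logs every rule a matching packet traverses, which behaves much like single-stepping. A sketch, narrowed to one test flow so normal traffic isn't logged (addresses and ports are placeholders):

        # trace only packets from one test host to port 80
        iptables -t raw -A PREROUTING -p tcp -s 192.0.2.1 --dport 80 -j TRACE
        # the per-rule trace lines appear in the kernel log
        dmesg | grep 'TRACE:'
        # remove the trace rule when finished
        iptables -t raw -D PREROUTING -p tcp -s 192.0.2.1 --dport 80 -j TRACE

    Each log line names the table, chain, and rule number the packet hit, which is exactly the "follow a packet" view asked about.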

    Read the article

  • How to unblock outgoing HTTP and HTTPS traffic in iptables?

    - by EApubs
    With the following iptables rules, I was unable to do an apt update or ping a website. What's wrong with the rules? How do I fix it? What is the exact rule to fix it?

        Chain INPUT (policy ACCEPT)
        target     prot opt source      destination
        ACCEPT     tcp  --  anywhere    anywhere     tcp dpt:325
        DROP       all  --  anywhere    anywhere

        Chain FORWARD (policy DROP)
        target     prot opt source      destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source      destination
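    Not from the original question, but the symptom fits a well-known pattern: OUTPUT is open, so requests leave, yet the trailing DROP in INPUT discards every reply packet (including DNS answers, which is why apt and ping both fail). A sketch of the conventional fix is to accept established/related traffic before the DROP rule:

        # insert at the top of INPUT: accept replies to connections we initiated
        iptables -I INPUT 1 -m state --state ESTABLISHED,RELATED -j ACCEPT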

    Read the article

  • How to rate-limit in nginx, but including/excluding certain IP addresses?

    - by Jason Cohen
    I'm able to use limit_req to rate-limit all requests to my server. However I'd like to remove the rate restriction for certain IP addresses (i.e. whitelist) and use a different rate restriction for certain others (i.e. certain IPs I'd like as low as 1r/s). I tried using conditionals (e.g. if ( $remote_addr = "1.2.3.4" ) {}) but that seems to work only with rewrite rules, not for rate-limit rules.
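    Not from the original question, but the pattern nginx uses for this is a geo block feeding limit_req_zone through a map: an empty key is never counted, so whitelisted addresses skip the limit entirely, and additional zones can impose stricter rates on chosen addresses. A sketch (addresses and rates are placeholders):

        geo $limit {
            default        1;
            10.0.0.0/8     0;   # whitelisted, no limit
            203.0.113.4    0;   # whitelisted, no limit
        }

        map $limit $limit_key {
            0 "";                     # empty key = request not rate-limited
            1 $binary_remote_addr;
        }

        limit_req_zone $limit_key zone=req_zone:10m rate=5r/s;

        server {
            location / {
                limit_req zone=req_zone burst=10 nodelay;
            }
        }

    A second geo/map/zone pair with rate=1r/s covers the "certain IPs as low as 1r/s" case; if-based conditionals aren't needed at all.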

    Read the article

  • How can I export search folders in Outlook 2010?

    - by Martin
    In Outlook it is possible to export rules. Is it also possible to export custom search folders? I am trying to export the custom search folders I have defined in Outlook 2010 (the logic, not the contents). I have tried:
    - right-clicking the search folders and looking into the available menus
    - going into the Outlook Import/Export menu, but I can only export real folders to .pst etc.
    - looking into the rules menu

    Read the article

  • iptables & allowed port refusing connection

    - by marfarma
    Can you see what I'm doing wrong? On Ubuntu Server 9.1, I'm attempting to allow traffic on port 1143 for a non-privileged IMAP host. The connection is refused when testing with telnet example.com 1143, but the connection is allowed when testing with telnet example.com 80, from my PC to the remote internet-hosted server. Both rules appear identical and are located near each other, with no rules rejecting connections intervening in the rules file. I can't figure it out. iptables -L returns this:

        Chain INPUT (policy ACCEPT)
        target  prot opt source    destination
        ACCEPT  all  --  anywhere  anywhere
        REJECT  all  --  anywhere  127.0.0.0/8  reject-with icmp-port-unreachable
        ACCEPT  all  --  anywhere  anywhere     state RELATED,ESTABLISHED
        ACCEPT  tcp  --  anywhere  anywhere     tcp dpt:www
        ACCEPT  tcp  --  anywhere  anywhere     tcp dpt:https
        ACCEPT  tcp  --  anywhere  anywhere     tcp dpt:http-alt
        ACCEPT  tcp  --  anywhere  anywhere     tcp dpt:7070
        ACCEPT  tcp  --  anywhere  anywhere     tcp dpt:1143
        ACCEPT  tcp  --  anywhere  anywhere     state NEW tcp dpt:ssh
        ACCEPT  icmp --  anywhere  anywhere     icmp echo-request
        LOG     all  --  anywhere  anywhere     limit: avg 5/min burst 5 LOG level debug prefix `iptables denied: '
        REJECT  all  --  anywhere  anywhere     reject-with icmp-port-unreachable

        Chain FORWARD (policy ACCEPT)
        target  prot opt source    destination
        REJECT  all  --  anywhere  anywhere     reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT)
        target  prot opt source    destination
        ACCEPT  all  --  anywhere  anywhere

    and my rules file contains this:

        # Generated by iptables-save v1.4.4 on Wed May 26 19:08:34 2010
        *nat
        :PREROUTING ACCEPT [3556:217296]
        :POSTROUTING ACCEPT [6909:414847]
        :OUTPUT ACCEPT [6909:414847]
        -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
        -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
        COMMIT
        # Completed on Wed May 26 19:08:34 2010
        # Generated by iptables-save v1.4.4 on Wed May 26 19:08:34 2010
        *filter
        :INPUT ACCEPT [1:52]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [1:212]
        -A INPUT -i lo -j ACCEPT
        -A INPUT -d 127.0.0.0/8 ! -i lo -j REJECT --reject-with icmp-port-unreachable
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 7070 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 1143 -j ACCEPT
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
        -A INPUT -j REJECT --reject-with icmp-port-unreachable
        -A FORWARD -j REJECT --reject-with icmp-port-unreachable
        -A OUTPUT -j ACCEPT
        COMMIT
        # Completed on Wed May 26 19:08:34 2010
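    Not from the original question: since the 1143 rule mirrors the working port-80 rule and sits above the final REJECT, the firewall itself looks consistent, so a common next step is to confirm something is actually listening on 1143 and to watch the rule counters during a test (these are standard diagnostics, not taken from the thread):

        # is anything listening on 1143?
        netstat -ltn | grep 1143
        # watch per-rule packet counters while retrying the telnet test;
        # if the 1143 counter stays at 0, the packets never reach this rule
        iptables -L INPUT -v -n --line-numbers

    A closed port with no listener also produces "connection refused", which the firewall rules alone can't distinguish from a REJECT.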

    Read the article

  • "make menuconfig" throwing cannot find -lc error in my Fedora 11 PC

    - by Sen
    When I try to do a make menuconfig on a Fedora 11 machine, it throws the following error message:

        [root@PC04 kernel]# make menuconfig
          HOSTCC -static scripts/basic/fixdep
        scripts/basic/fixdep.c: In function 'traps':
        scripts/basic/fixdep.c:377: warning: dereferencing type-punned pointer will break strict-aliasing rules
        scripts/basic/fixdep.c:379: warning: dereferencing type-punned pointer will break strict-aliasing rules
        /usr/bin/ld: cannot find -lc
        collect2: ld returned 1 exit status
        make[1]: *** [scripts/basic/fixdep] Error 1
        make: *** [scripts_basic] Error 2

    Please help me with this issue. How can I solve it? Thanks, Sen
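    Not from the original question, but "ld: cannot find -lc" during a -static link usually means the static C library (libc.a) isn't installed; on Fedora it ships in a separate package from the regular glibc. A sketch of the usual remedy:

        # install the static glibc archive that -static linking needs
        yum install glibc-static

    The two strict-aliasing warnings above it are harmless; only the ld error stops the build.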

    Read the article

  • Custom Rule Action in Outlook

    - by Zee99
    How do I create my own custom action in Microsoft Outlook rules? In Outlook, when creating a rule in the Rules Wizard, we first set the conditions and then set the actions, chosen from a list of predefined actions. Is there a way to add my own action to the existing actions programmatically? I also see an action called "custom action"; when I click it, it opens a small window with an empty combobox. Can I add my custom action there, and how?

    Read the article

  • Can't login to Debian (OpenVZ guest) server after setting up IPTables. How to Fix it?

    - by EApubs
    I have an OpenVZ VPS running Debian. I just set up iptables to allow the SSH port and rebooted (I also set the rules to load automatically on startup). Now I can't log in to the server! How do I fix this? Here are the rules:

        Chain INPUT (policy ACCEPT)
        target  prot opt source    destination
        ACCEPT  tcp  --  anywhere  anywhere    tcp dpt:325
        DROP    all  --  anywhere  anywhere

        Chain FORWARD (policy ACCEPT)
        target  prot opt source    destination

        Chain OUTPUT (policy ACCEPT)
        target  prot opt source    destination
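    Not from the original question, but the standard escape hatch on OpenVZ: the container can be entered from the hardware node without any networking, so the rules can be fixed there (if the node isn't yours, the provider's console/recovery feature serves the same role; the CTID below is a placeholder):

        # on the OpenVZ host node: enter the container, bypassing SSH
        vzctl enter 101
        # inside the container: open things up again and redo the rules
        iptables -P INPUT ACCEPT
        iptables -F INPUT

    Note also that the listed ACCEPT rule covers port 325; if sshd is actually listening on the standard port 22, the trailing DROP would explain the lockout.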

    Read the article

  • New-ManagedContentSettings - not working properly under Exchange 2010

    - by mfinni
    I have a client that is divesting a business unit into a new AD forest, Exchange org, etc. We're using Quest tools to migrate users and mailboxes. However, I have to build the new infrastructure to match the old one. In the old one, we're using Managed Folder Mailbox Policies to limit (or allow) retention. They started with Exchange 2007 and never upgraded to Retention Policies; oh well. So, in the old environment, when you use a 2007 server to define a new Managed Content Setting, you can pick "Email" from the dropdown for MessageClass. This is a display name; the actual MessageClass values are thus:

        MessageClass : IPM.Note;IPM.Note.AS/400 Move Notification Form v1.0;IPM.Note.Delayed;IPM.Note.Exchange.ActiveSync.Report;IPM.Note.JournalReport.Msg;IPM.Note.JournalReport.Tnef;IPM.Note.Microsoft.Missed.Voice;IPM.Note.Rules.OofTemplate.Microsoft;IPM.Note.Rules.ReplyTemplate.Microsoft;IPM.Note.Secure.Sign;IPM.Note.SMIME;IPM.Note.SMIME.MultipartSigned;IPM.Note.StorageQuotaWarning;IPM.Note.StorageQuotaWarning.Warning;IPM.Notification.Meeting.Forward;IPM.Outlook.Recall;IPM.Recall.Report.Success;IPM.Schedule.Meeting.*;REPORT.IPM.Note.NDR

    If I take that and try to mangle it into a new cmdlet for Ex2010 in my new environment, here's what I get:

        New-ManagedContentSettings -Name "Delete Messages older then 90 days" -FolderName "Entire Mailbox" -RetentionEnabled $True -AgeLimitForRetention 90 -TriggerForRetention WhenDelivered -RetentionAction DeleteAndAllowRecovery -MessageClass "IPM.Note","IPM.Note.AS/400MoveNotificationFormv1.0","IPM.Note.Delayed","IPM.Note.Exchange.ActiveSync.Report","IPM.Note.JournalReport.Msg","IPM.Note.JournalReport.Tnef","IPM.Note.Microsoft.Missed.Voice","IPM.Note.Rules.OofTemplate.Microsoft","IPM.Note.Rules.ReplyTemplate.Microsoft","IPM.Note.Secure.Sign","IPM.Note.SMIME","IPM.Note.SMIME.MultipartSigned","IPM.Note.StorageQuotaWarning","IPM.Note.StorageQuotaWarning.Warning","IPM.Notification.Meeting.Forward","IPM.Outlook.Recall","IPM.Recall.Report.Success","IPM.Schedule.Meeting.*","REPORT.IPM.Note.NDR" -whatif

        Invoke-Command : Cannot bind parameter 'MessageClass' to the target. Exception setting "MessageClass": "The length of the property is too long. The maximum length is 255 and the length of the value provided is 518."
        At C:\Users\MFinnigan.sa\AppData\Roaming\Microsoft\Exchange\RemotePowerShell\pfexcas02.fve.ad.5ssl.com\pfexcas02.fve.ad.5ssl.com.psm1:28204 char:29
        +         $scriptCmd = { & <<<< $script:InvokeCommand `
            + CategoryInfo          : WriteError: (:) [New-ManagedContentSettings], ParameterBindingException
            + FullyQualifiedErrorId : ParameterBindingFailed,Microsoft.Exchange.Management.SystemConfigurationTasks.NewManagedContentSettings

    So, the config object can store all that mess, but I can't fit it in through the cmdlet to create the object. Lovely. Any ideas?

    Read the article

  • Reversing a mod_rewrite rule

    - by KIRA
    I want to redirect accesses from

        http://www.domain.com/test.php?sub=subdomain&type=cars

    to

        http://subdomain.domain.com/cars

    I already have mod_rewrite rules to do the opposite:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{HTTP_HOST} !^(www)\. [NC]
        RewriteCond %{HTTP_HOST} ^(.*)\.(.*)\.com [NC]
        RewriteRule (.*) http://www.%2.com/index.php?route=$1&name=%1 [R=301,L]

    What changes do I need to make to these rules to redirect requests from the script to the subdomain?
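    Not from the original question, but a sketch of the reverse direction. The query string is not visible to the RewriteRule pattern, so it has to be captured with a RewriteCond on %{QUERY_STRING}; the trailing ? on the target drops the old query string:

        RewriteCond %{HTTP_HOST} ^www\.domain\.com$ [NC]
        RewriteCond %{QUERY_STRING} ^sub=([^&]+)&type=([^&]+)$
        # %1/%2 refer to the captures of the last matched RewriteCond
        RewriteRule ^test\.php$ http://%1.domain.com/%2? [R=301,L]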

    Read the article

  • Creating a whitelist for RDP access

    - by Tgys
    Since people are getting unauthorized access to my Windows Server (brute-forced over several months), I'd like to set up a whitelist for RDP access. I have tried the following with Windows Firewall inbound rules [screenshot not preserved]. This still allows other users to connect through RDP. Is there any way to block such unauthorized access through a whitelist? EDIT: The firewall is enabled, and it's the only firewall running on the machine. Rules like allowing port 80 traffic behave correctly.
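    Not from the original question, but the usual whitelist mechanism here is the remote-address scope on the RDP inbound rule, which can also be set from the command line (the rule name must match the one shown in your firewall console, and the address below is a placeholder):

        netsh advfirewall firewall set rule name="Remote Desktop (TCP-In)" new remoteip=203.0.113.10

    Any source address outside remoteip then no longer matches the allow rule; it's worth checking that no other inbound rule still opens port 3389 to any address, which would explain a scope appearing not to work.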

    Read the article

  • Response code for Chinese spiders? [closed]

    - by pt2ph8
    My server is being "attacked" by Chinese spiders that don't respect the rules in my robots.txt. They are being very aggressive and using a lot of resources, so I'm going to set up some rules in nginx to block them by user agent. Question: which response code should I return, 403, 444 (empty response in nginx) or something else? I'm wondering how the spiders will react to different status codes. What's the best practice?
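    Not from the original question, but for reference: 444 is an nginx-specific pseudo-status that closes the connection without sending any response at all, which costs the least per request; spiders see nothing to interpret. A conventional sketch of blocking by user agent (the agent names in the regex are placeholders; substitute the offenders from your logs):

        # in the http block
        map $http_user_agent $bad_bot {
            default                    0;
            "~*(EvilSpider|OtherBot)"  1;
        }

        server {
            if ($bad_bot) {
                return 444;   # drop the connection, zero response bytes
            }
        }

    A 403 at least tells a well-behaved crawler it is unwelcome; for bots that already ignore robots.txt, the empty 444 simply wastes less of your bandwidth.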

    Read the article

  • Algorithms for City Simulation?

    - by anon
    I want to create a city filled with virtual creatures, much like Sim City, where each creature walks around doing its own tasks. I'd prefer the city not to 'explode' or do weird things -- like the population dying off, or the population leaving, or any other unexpected behavior. Is there a set of basic rules I can encode each agent with so that the city will be 'stable'? (Much as physics simulations have some basic rules that govern everything, is there a set of rules that governs how a simulation of a virtual city stays stable?) I'm new to this area and have no idea what algorithms/books to look into. Insights deeply appreciated. Thanks!

    Read the article

  • adding jquery validation after form created

    - by CoffeeCode
    I'm using a jQuery validation plugin. First I add the validation to the form:

        $('#EmployeeForm').validate({
            rules: {
                "Employee.FirstName": "required",
                "Employee.PatronymicName": "required",
                "Employee.LastName": "required",
                "Employee.BirthDay": { required: true, date: true }
            },
            messages: {
                "Employee.FirstName": { required: "*" },
                "Employee.PatronymicName": { required: "*" },
                "Employee.LastName": { required: "*" },
                "Employee.BirthDay": { required: "*", date: "00.00.00 format" }
            }
        });

    and then later I need to add validation rules to other form elements:

        $('#Address_A.Phone1, #Address_A.Phone2, #Address_B.Phone1, #Address_B.Phone2').rules("add", {
            digits: true,
            messages: { digits: "?????? ?????" }
        });

    but I get an error: 'form' is null or not an object. I checked: the form and all the elements in it are created before I add the validation. I can't figure out what's wrong.
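    Not from the original thread, but one thing the selector suggests: if the element ids literally contain dots (e.g. id="Address_A.Phone1"), jQuery parses #Address_A.Phone1 as "id Address_A with class Phone1", matches nothing, and the plugin's rules("add") then fails because it cannot find a parent form for a non-existent element. Dots inside ids must be escaped; a sketch:

        // escape the dots so they stay part of the id, not a class selector
        $('#Address_A\\.Phone1, #Address_A\\.Phone2, #Address_B\\.Phone1, #Address_B\\.Phone2')
            .rules("add", {
                digits: true,
                messages: { digits: "digits only" }   // placeholder message text
            });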

    Read the article
