Search Results

Search found 8605 results on 345 pages for 'general dynamics'.


  • Why Does Adding a UDF or Code Truncate the # of Resources in a List?

    - by Jeffrey McDaniel
    Go to the Primavera - Resource Assignment History subject area. Under Resources, General, add the fields Resource Id, Resource Name, and Current Flag. Because this is a historical subject area with Type II slowly changing dimensions for Resources, you may get multiple rows for each resource if there have been any changes on the resource. You may see a few records with Current Flag = 0, and you will see a row with Current Flag = 1 for every resource; Current Flag = 1 marks the most up-to-date row for that resource. In this query the OBI server is only querying the W_RESOURCE_HD dimension. (Query from the nqquery log:)

        select distinct 0 as c1,
             D1.c1 as c2,
             D1.c2 as c3,
             D1.c3 as c4
        from
             (select distinct T10745.CURRENT_FLAG as c1,
                     T10745.RESOURCE_ID as c2,
                     T10745.RESOURCE_NAME as c3
                from
                     W_RESOURCE_HD T10745 /* Dim_W_RESOURCE_HD_Resource */
               where ( T10745.LAST_RUN_PER_DAY_FLAG = 1 )
             ) D1

    If you add a resource code to the query, it now forces the OBI server to include data from W_RESOURCE_HD and W_CODES_RESOURCE_HD, as well as W_ASSIGNMENT_SPREAD_HF. Because Resources and Resource Codes are in different dimensions, they must be joined through a common fact table. Any time you pull data from different dimensions, the query will ALWAYS pass through the fact table in that subject area, and one rule is that if there is no fact value related to the dimensional data, nothing will show. In this case, if you have a list of 100 resources when you query just Resource Id, Resource Name, and Current Flag, but the list drops to 60 when you add a Resource Code, it could be because those resources exist at the dictionary level but are not assigned to any activities and therefore have no facts. As discussed in a previous blog, it's all about the facts.

    Here is a look at the query returned from the OBI server when trying to query Resource Id, Resource Name, Current Flag, and a Resource Code. You'll see that the query includes an actual fact (AT_COMPLETION_UNITS) even though it is never returned when viewing the data through the Analysis.

        select distinct 0 as c1,
             D1.c2 as c2,
             D1.c3 as c3,
             D1.c4 as c4,
             D1.c5 as c5,
             D1.c1 as c6
        from
             (select sum(T10754.AT_COMPLETION_UNITS) as c1,
                     T10706.CODE_VALUE_02 as c2,
                     T10745.CURRENT_FLAG as c3,
                     T10745.RESOURCE_ID as c4,
                     T10745.RESOURCE_NAME as c5
                from
                     W_RESOURCE_HD T10745 /* Dim_W_RESOURCE_HD_Resource */ ,
                     W_CODES_RESOURCE_HD T10706 /* Dim_W_CODES_RESOURCE_HD_Resource_Codes_HD */ ,
                     W_ASSIGNMENT_SPREAD_HF T10754 /* Fact_W_ASSIGNMENT_SPREAD_HF_Assignment_Spread */
               where ( T10706.RESOURCE_OBJECT_ID = T10754.RESOURCE_OBJECT_ID
                   and T10706.LAST_RUN_PER_DAY_FLAG = 1
                   and T10745.ROW_WID = T10754.RESOURCE_WID
                   and T10745.LAST_RUN_PER_DAY_FLAG = 1
                   and T10754.LAST_RUN_PER_DAY_FLAG = 1 )
               group by T10706.CODE_VALUE_02, T10745.RESOURCE_ID, T10745.RESOURCE_NAME, T10745.CURRENT_FLAG
             ) D1
        order by c4, c5, c3, c2

    When querying any subject area, if you cross different dimensions (especially Type II slowly changing dimensions) and the result set appears to be short, the first place to look is whether that object has associated facts.
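    If you want to confirm that the "missing" rows are resources without assignment facts, you can check directly against the STAR schema. The following is an illustrative sketch only, built from the join columns visible in the generated query above (W_RESOURCE_HD.ROW_WID = W_ASSIGNMENT_SPREAD_HF.RESOURCE_WID); verify the column names in your own schema before relying on it:

        -- Resources present in the dimension but with no assignment spread facts
        select r.RESOURCE_ID, r.RESOURCE_NAME
          from W_RESOURCE_HD r
         where r.CURRENT_FLAG = 1
           and r.LAST_RUN_PER_DAY_FLAG = 1
           and not exists (select 1
                             from W_ASSIGNMENT_SPREAD_HF f
                            where f.RESOURCE_WID = r.ROW_WID
                              and f.LAST_RUN_PER_DAY_FLAG = 1);

    Any resource returned here will drop out of an Analysis as soon as a field from another dimension forces the join through the fact table.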

    Read the article

  • Notifications for Expiring DBSNMP Passwords

    - by Courtney Llamas
    Most user accounts these days have a password profile on them that automatically expires the password after a set number of days. Depending on your company's security requirements, this may be as little as 30 days or as long as 365 days, although typically it falls between 60 and 90 days. For a normal user, this can cause a small interruption in your day as you have to get your password reset by an admin. When it happens to privileged accounts, such as the DBSNMP account that is responsible for monitoring database availability, it can cause bigger problems.

    In Oracle Enterprise Manager 12c you may notice the error message "ORA-28002: the password will expire within 5 days" when you connect to a target, or worse, you may get "ORA-28001: the password has expired". If you wait too long, your monitoring will fail because the password is locked out. Wouldn't it be nice if we could get an alert 10 days before our DBSNMP password expired? Thanks to Oracle Enterprise Manager 12c Metric Extensions (ME), you can! See the Oracle Enterprise Manager Cloud Control Administrator's Guide for more information on Metric Extensions.

    To create a metric extension, select Enterprise / Monitoring / Metric Extensions, and then click Create. On the General Properties screen select either Cluster Database or Database Instance, depending on which target you need to monitor. If you have both RAC and single-instance databases, you may need to create one for each. In this example we will create a Cluster Database metric. Enter a Name and a Display Name for the ME, then select SQL for the Adapter. Adjust the Collection Schedule as desired; for this example we will collect this metric once a day. Notice that for a metric collected daily, we can determine the exact time we want the collection to run.

    On the Adapter page, enter the query that you wish to execute. In this example we will use the query below, which specifically checks for the DBSNMP user expiring within 10 days. Of course, you can adjust this query to alert for any user that could cause an outage, such as an application account or a service account such as RMAN.

        select username, account_status, trunc(expiry_date - sysdate) days_to_expire
          from dba_users
         where username = 'DBSNMP'
           and expiry_date is not null;

    The next step is to create the columns to store the data returned from the query. Click Add and add a column for each of the fields, in the same order that the data is returned. The table below will help you complete the column additions.

        Name           Display Name            Column Type   Value Type   Metric Category   Unit
        Username       User Name               Key           String       Security
        AccountStatus  Account Status          Data          String       Security
        DaysToExpire   Days Until Expiration   Data          Number       Security          Days

    When creating the DaysToExpire column, you can add default thresholds here for Warning and Critical (say < 10 and < 5). When all columns have been added, click Next.

    On the Credentials page, you can choose to use the default monitoring credentials or specify new credentials. We will use the default credentials established for our target (dbsnmp). The next step is to test your Metric Extension: click Add to select a target for testing, then click Select. Now click Run Test to execute the test against the selected target(s). We can see in the example below that the Metric Extension has executed and returned a value of 68 days until expiration. Click Next to proceed. Review the metric extension on the final screen and click Finish. The metric will be created in Editable status.
    Select the metric, click Actions, and select Deployable Draft. You can do this once more to move it to Published. Finally, we want to apply this metric to a target. When managing many targets, it's best to add your metric to a template; for details on adding a Metric Extension to a template, see the Administrator's Guide. For this example, we will deploy this to a target directly. Select Actions / Deploy to Targets, click Add, select the target you wish to deploy to, and click Submit. Once deployment is complete, we can go to the target and view the Metric & Collection Settings to see the new metric and its thresholds.

    After some time, you will find the metric has collected, and the days to expiration for the DBSNMP user can be seen in the All Metrics view. For metrics collected once per day, you may have to wait up to 24 hours to see the metric and its current severity. In the example below, the current severity is Clear (green check), as the password is not scheduled to expire within 10 days.

    To test the notification, we can edit the thresholds for the new metric so they trigger an alert. Our password expires in 139 days, so we'll change our Warning threshold to 140 and leave Critical at 5; in our example we also changed the collection time to every 5 minutes. At the next collection, you'll find that the current severity changes to a Warning, and any related Incident Rules would be triggered to create an Incident or Notification as desired.

    Now that you get a notification that your DBSNMP password is about to expire, you can use the OEM Command Line Interface (EM CLI) verb update_db_password to change it at both the database target and the OEM target in one step. The caveat is that you must know the existing password to use the update_db_password command. To learn more about EM CLI, see the Oracle Enterprise Manager Command Line Interface Guide. Below is an example of changing the password with the update_db_password verb.

        $ ./emcli update_db_password -target_name=emrep -target_type=oracle_database -user_name=dbsnmp -change_at_target=yes -change_all_references=yes
        Enter value for old_password :
        Enter value for new_password :
        Enter value for retype_new_password :
        Successfully submitted a job to change the password in Enterprise Manager and on the target database: "emrep"
        Execute "emcli get_jobs -job_id=FA66C1C4D663297FE0437656F20ACC84" to check the status of the job.
        Search for job name "CHANGE_PWD_JOB_FA66C1C4D662297FE0437656F20ACC84" on the Jobs home page to check job execution details.

    The job created will typically run quickly enough that a blackout is not needed; however, if you submit a script that changes many targets, your job may run more slowly, so adding a blackout to the script is recommended.

        $ ./emcli get_jobs -job_id=FA66C1C4D663297FE0437656F20ACC84
        Name:         CHANGE_PWD_JOB_FA66C1C4D662297FE0437656F20ACC84
        Type:         ChangePassword
        Job ID:       FA66C1C4D663297FE0437656F20ACC84
        Execution ID: FA66C1C4D665297FE0437656F20ACC84
        Scheduled:    2014-05-28 09:39:12
        Completed:    2014-05-28 09:39:18
        TZ Offset:    GMT-07:00
        Status:       Succeeded
        Status ID:    5
        Owner:        SYSMAN
        Target Type:  oracle_database
        Target Name:  emrep

    After implementing the above Metric Extension and using the EM CLI update_db_password verb, you will be able to stay on top of your DBSNMP password changes without experiencing an unplanned monitoring outage.
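    As noted above, the Adapter query can be adjusted for any account whose expiry could cause an outage. A minimal variation of the same query covering several privileged accounts at once (the account list here is hypothetical; substitute your own):

        select username, account_status, trunc(expiry_date - sysdate) days_to_expire
          from dba_users
         where username in ('DBSNMP', 'RMAN_BACKUP', 'APP_BATCH')   -- hypothetical account list
           and expiry_date is not null;

    Keeping Username as the Key column means each account comes back as its own metric row, so each one can be tracked and alerted on separately.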

    Read the article

  • Delight and Excite

    - by Applications User Experience
    Mick McGee, CEO & President, EchoUser

    Editor's Note: EchoUser is a user experience design firm in San Francisco and a member of the Oracle Usability Advisory Board. Mick and his staff regularly consult on Oracle Applications UX projects.

    Being part of a user experience design firm, we have the luxury of working with a lot of great people across many great companies. We get to help people solve their problems. At least we used to. The basic design challenge is still the same; however, the goal is not necessarily to solve "problems" anymore; it is, "I want our products to delight and excite!" The question for us as UX professionals is how to design to those goals, and then how to assess them from a usability perspective.

    I'm not sure where I first heard "delight and excite" (a book? a blog post? a Facebook status? a Steve Jobs quote?), but now I hear these listed as user experience goals all the time. In particular, somewhat paradoxically, I routinely hear them in enterprise software conversations. And when asking these same enterprise companies what will make the project successful, we very often hear, "Make it like Apple." In past days, it was "Make it like Yahoo (or Amazon or Google)", but now Apple is the common benchmark. Steve Jobs and Apple were not secrets, but with Jobs' passing and Apple becoming the world's most valuable company in the last year, the impact of great design and experience is suddenly very widespread. In particular, users' expectations have gone way up. Being an enterprise company is no shield to the general expectations that users now have for all products.

    Designing a "Minimum Viable Product"

    The user experience challenge has historically been, to echo the words of Eric Ries (author of The Lean Startup), to create a "minimum viable product": the proverbial "make it good enough". But in our profession, the "minimum viable" part of that phrase has oftentimes, unfortunately, referred to the design and user experience. Technology typically dominated the focus of the biggest, most successful companies. Few have had the laser focus of Apple to also create and sell design and user experience alongside great technology. But now that Apple is the most valuable company in the world, copying its success is a common undertaking. Great design is now a premium offering that everyone wants, from the one-person startup to the largest companies, consumer and enterprise. This emerging business paradigm will have significant impact across the user experience design process and profession. One area that particularly interests me is how we are going to evaluate these new emerging "delight and excite" experiences, which are further customized to each particular domain.

    How to Measure "Delight and Excite"

    Traditional usability measures of task completion rate, assists, time, and errors are still extremely useful in many situations; however, they are too blunt to offer much insight into emerging experiences. "Satisfaction" is usually assessed in user testing, at roughly equivalent importance to the objective metrics above. Various surveys and scales have provided ways to measure satisfying UX, with whatever questions they include. However, to meet the demands of new business goals and keep users at the center of design and development processes, we have to explore new methods to better capture custom-experience goals and emotion-driven user responses.
    We have had success assessing custom experiences, including "delight and excite", by employing a variety of user testing methods that tend to combine formative and summative techniques (formative being focused more on identifying usability issues and ways to improve the design, and summative focused more on metrics). Our most successful tool has been one we have been using for a long time: the Magnitude Estimation Technique (MET). But it's not necessarily about MET as a measure; rather, it's about how the definition behind it is created.

    Caption: For one client, EchoUser did two rounds of testing. Each test was a mix of performing representative tasks and gathering qualitative impressions. Each user participated in an in-person, moderated, 1-on-1 session for 1 hour, using a testing setup where they held the phone. The primary goal was to identify usability issues and recommend design improvements.

    MET is based on a definition of the desired experience, which users then use to rate items of interest (usually tasks in a usability test). In other words, a custom experience definition needs to be created. This can then be used to measure satisfaction in accomplishing tasks; "delight and excite"; or anything else from strategic goals, user demands, or elsewhere. For reference, our standard MET definition in usability testing is: "User experience is your perception of how easy to use, well designed and productive an interface is to complete tasks."

    Articulating the User Experience

    We've helped construct experience definitions for several clients to better match their business goals. One example is a modification of the above that was needed for a company that makes medical-related products: "User experience is your perception of how easy to use, well-designed, productive and safe an interface is for conducting tasks. 'Safe' is how free an environment (including devices, software, facilities, people, etc.) is from danger, risk, and injury."

    Another example is from a company that is pushing hard to incorporate "delight" into their enterprise business line: "User experience is your perception of a product's ease of use and learning, satisfaction and delight in design, and ability to accomplish objectives." I find the last one particularly compelling in that there is little that identifies the experience as being for a highly technical enterprise application; that definition could easily be applied to any number of consumer products.

    We have gone further than the above, including "sexy" and "cool" where decision-makers insisted they were part of the desired experience. We have also applied it to completely different experiences where the "interface" was, for example, riding public transit, the "tasks" were train rides, and we followed the participants through the train-riding journey and rated various aspects accordingly: "A good public transportation experience is a cost-effective way of reliably, conveniently, and safely getting me to my intended destination on time."

    To construct these definitions, we've employed both bottom-up and top-down approaches, depending on circumstances. For bottom-up, user inputs help dictate the terms that best fit the desired experience (usually by way of cluster and factor analysis). Top-down depends on strategic, visionary goals expressed by upper management that we then attempt to integrate into product development (e.g., "delight and excite"). We like a combination of both approaches to push the innovation envelope while still being mindful of current user concerns.
    Hopefully the idea of crafting your own custom experience definition, and a way to measure it, gives you some ideas for adapting your user experience needs to whatever company you are in. Whether product-development or service-oriented, nearly every company is ultimately providing a user experience.

    The Bottom Line

    Creating great experiences may have been popularized by Steve Jobs and Apple, but I'll be honest: it's a good feeling to be moving from "good enough" to "delight and excite," despite the challenge that entails. In fact, it's because of that challenge that we will expand what we do as UX professionals to help deliver and assess those experiences. I'm excited to see how we, Oracle, and the rest of the industry will live up to that challenge.

    Read the article

  • Big data: An evening in the life of an actual buyer

    - by Jean-Pierre Dijcks
    Here I am, and this is an actual story of one of my evenings, trying to spend money with a company and ultimately failing. I just gave up and bought a service from another vendor, not the incumbent. Here is that story, and how I think big data could actually fix this (and potentially prevent some of it from happening). In the end, this story should illustrate how big data can benefit both me (get me what I want without causing grief) and the company I am trying to buy something from. Note: lots of details are left out; I have no intention of being the annoyed blogger moaning about a specific company.

    What did I want to get?

    We watch TV, we have internet, and we do have a land line. The land line is from a different vendor than the TV and the internet. I decided that this makes no sense, and I was going to get a bundle (no need to infer who this is; I just picked the generic bundle word, as this is what I want to get) of all three services, as this seems to save me money. I also want to not talk to people; I just want to click on a website when I feel like it and get it all sorted. I do think that is reality. I want to just do my shopping at 9.30pm while watching silly reruns on TV.

    Problem 1 - Bad links

    So, I'm an existing customer of the company I want to buy my bundle from. I go to the website and click on offers. Turns out they are offers for new customers. After grumbling about how good they are, I click on offers for existing customers. Bummer, it goes to offers for new customers, so I click again on the link for offers for existing customers. No cigar... it just does not work.

    Big data solutions:

    1) Do not show an existing customer the offers for new customers unless they are the same. This is only partially doable without login, but if a customer logs in, the application should always know that this is an existing customer. And in general, imagine I do this from my home, going through the internet service of this vendor to their domain... an instant filter should move me into the "existing customer route".

    2) Flag dead or incorrect links. I clicked the link for "existing customer offers" at least 3 times in under 5 seconds... Identifying patterns like this is easy in Hadoop and can very quickly produce a list of potentially incorrect links. No need for realtime fixing; just the fact that this link can be pro-actively fixed across my entire web domain is a good thing. Preventative maintenance!

    Problem 2 - Purchase cannot be completed

    Apart from the fact that the browsing pattern to actually get to what I want is poorly designed, my purchase never gets past a specific point. In other words, I put something into my shopping cart, and when I want to move on, the application either crashes (sending me to an error page), hangs, or goes into something like chat. So I try again, and again, and again. I think I tried this entire path (while being logged in!!) at least 10 times over the course of 20 minutes. I also clicked on the feedback button and, frustrated as I was, tried to explain that this did not work...

    Big data solutions:

    1) This web site does shopping cart analysis. I got an email the next day stating I have things in my shopping cart; just click here to complete my purchase. After the above experience, this just added insult to my pain...

    2) What should have happened is a Hadoop job going over all logged-in customers that are on the buy flow.
    It should flag anyone who is trying repeatedly (multiple attempts from the same user to do the same thing), analyze the shopping cart and the clicks to identify what the customer wants, fold in the feedback provided (note: always own your own website feedback, never just farm this out!!), and, within a short turnaround time (30 minutes to 2 hours or so), email me a link to complete my purchase. Not a link to my shopping cart 12 hours later, but a link to actually achieve what I wanted (see the sketch at the end of this post).

    Why should this company go through the big data effort? I do believe this is relatively easy to do using our Oracle Event Processing and Big Data Appliance solutions combined. It is almost so simple (to my mind) that it makes no sense that this is not in place. But now I am ranting...

    Why is this interesting? It is because of $$$$. After trying really hard (I mean, I did this all in the evening, and again in the morning before going to work), I kept on failing. But I really wanted this to work... so an email that said, "Sorry, we noticed you tried to get a bundle (the log knows what I wanted and where I failed, so this is easy to generate); here is the link to click and complete your purchase. And here are 2 movies on us as an apology," would have kept me as a customer and got the company the additional $$$$ per month for the next couple of years. It would also lead to upsell on my phone package, etc. Instead, I went to a completely different company and bought service from them. Lost money for company A, negative sentiment for company A, and me telling this story at the water cooler, influencing more people to think negatively about company A. All in all, a loss of easy money and a ding in sentiment and image, where a relatively simple solution exists and can be put in place with the software I describe routinely in this blog...

    For those who are coming to OpenWorld and see value in solving the above, or are thinking of how to solve this, come visit us in Moscone North - Oracle Red Lounge or in the Engineered Systems Showcase.
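    As a concrete illustration of the repeated-attempt flagging described above, here is a minimal Hive-style sketch over a hypothetical clickstream table (the table and column names are assumptions, not a reference to any real system):

        -- web_clicks(user_id, url, http_status, event_time): hypothetical click log in Hadoop
        select user_id, url, count(*) as attempts
          from web_clicks
         where url like '%/buyflow/%'                 -- steps in the purchase flow
           and event_time >= date_sub(current_date, 1)
         group by user_id, url
        having count(*) >= 3;                         -- same user retrying the same step

    Each (user_id, url) pair returned is a customer stuck on one step of the buy flow: exactly the list you would feed into a follow-up email job carrying a link that completes the purchase.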

    Read the article

  • Who could ask for more with LESS CSS? (Part 2 of 3 – Setup)

    - by ToStringTheory
    Welcome to part two in my series covering the LESS CSS language. In the first post, I covered the two major CSS precompiled languages, LESS and SASS, to a small extent, iterating over some of the features that you could expect to find in them. In this post, I will go a little further in depth into the setup and execution of using the LESS framework.

    Introduction

    It really doesn't take too much to get LESS working in your project. The basic workflow is: include the necessary translator in your project, define bundles for the LESS files, add the necessary code to your _Layout.cshtml file, and finally add all your styles to the LESS files! Let's get started…

    New Project

    Just like all great experiments in Visual Studio, start up a File > New Project, and create a new MVC 4 Web Application.

    The Base Package

    After you have the new project spun up, use the NuGet Package Manager to install the Bundle Transformer: LESS package. This will take care of installing the main translator that we will be using for LESS code (dotless, which is another NuGet package), as well as the core framework for the Bundle Transformer library. The installation will come up with some instructions in a readme file on how to modify your web.config to handle all your *.less requests through the Bundle Transformer, which passes the translating on to dotless.

    Where To Put These LESS Files?!

    This step isn't really a requirement; however, I find that I don't like how ASP.NET MVC just has a Content directory where CSS, content images, and CSS images are all stored together. In my project, I went ahead and created a new directory just for styles: LESS files, CSS files, and images that are only referenced in LESS or CSS. Ignore the MVC directory, as this was my testbed for another project I was working on at the same time. As you can see here, I have:

    - A top-level directory for images, which contains only images used in a page
    - A top-level directory for scripts
    - A top-level directory for styles
    - A few directories for plugins I am using (Colrizr, JQueryUI, Farbtastic)
    - Multiple *.less files for different functions (I'll go over these in a minute)

    I find that this layout offers the best separation of content types.

    Bring Out Your Bundles!

    The next thing that we need to do is add in the necessary code for bundling these LESS files. Go ahead and open your BundleConfig.cs file, usually located in the /App_Start/ folder of the project. As you will see in a minute, instead of using the method Microsoft does in the base MVC 4 project, I change things up a bit.

    Define Constants

    The first thing I do is define constants for each of the virtual paths that will be used in the bundler. The main reason is that I hate magic strings in my program, so the fact that you first define a virtual path in the BundleConfig file and then use that path in the _Layout.cshtml file really irked me.

    Add Bundles to the BundleCollection

    Next, I define the bundles for my styles in my AddStyleBundles method. That is all it takes to get all of my styles in play with LESS. The CssTransformer and NullOrderer types come from the Bundle Transformer we grabbed earlier. If we didn't use that package, we would have to write our own function (not too hard, but why do it if it's been done). I use the site.less file as my main hub for LESS; I will cover that more in the next section.
    Add Bundles to the _Layout.cshtml File

    With the constants in the BundleConfig file, instead of having to use the same magic string I defined for the bundle virtual path, I am able to reference the constant. Notice here that besides the RenderSection magic strings (something I am working on in another side project), all of the bundles are now based on const strings. If I need to change the virtual path, I only have to do it in one place. Nifty!

    Get Started!

    We are now ready to roll! As I said in the previous section, I use the site.less file as a central hub for my styles. As seen here, I have a reset.css file, which is a simple CSS reset. Next, I have created a file for managing all my color variables: colors.less. Here you can see some of the standards I started to use, in this case for color variables: I define all color variables with the @col prefix, and currently I am going for verbose variable names.

    The next file imported is my font.less file, which defines the typeface information for the site. Simple enough: a couple of imports for fonts from Google, and then variable declarations for use throughout LESS. I also set up the heading sizes, margins, etc. You can also see my current standardization for font declaration strings: @font.

    Next, I pull in a mixins.less file that I grabbed from the Twitter Bootstrap library, which provides some useful parameterized mixins for things such as border-radius, gradient, box-shadow, etc…

    The common.less file just contains items that I define for use across all my LESS files, kind of like my own mixins or font helpers.

    Finally, I have my layout.less file, which contains all of my definitions for general site layout: width, main/sidebar widths, footer layout, etc. That's it! For the rest of my one-off definitions/corrections, I am currently putting them into the site.less file beneath my original imports.

    Note

    Probably my favorite side effect of using the LESS handler/translator while bundling is that it also does a CSS checkup when rendering… See, when your web.config is set to debug, bundling will output the url to the direct .less file, not the bundle, and the http handler intercepts the call, compiles the LESS, and returns the result. If there is an error in your LESS code, the CSS file can be returned empty, or may have the error output as a comment on the first couple of lines. If you have the web.config set to not debug, then if there is an error in your code, you will end up with the usual ASP.NET exception page (unless you catch the exception, of course), with information regarding the failure of the conversion, such as a brace mismatch, undefined variable, etc… I find it nifty.

    Conclusion

    This is really just the beginning. LESS is very powerful and exciting! My next post will show an actual working example of why LESS is so powerful with its functions and variables… at least I hope it will! As for now, if you have any questions, comments, or suggestions on my current practice, I would love to hear them! Feel free to drop a comment or shoot me an email using the contact page. In the meantime, I plan on posting the final post in this series tomorrow or the day after, with my side project, as well as a whole base ASP.NET MVC4 templated project with LESS added to it, so that you can check out the layout I have in this post. Until next time…

    Read the article

  • FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

    - by Sudip Datta
    During my ongoing tenure at Oracle, I have met all types of DBAs: happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation; they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled as "Database as a Service". In other cases, it would be existing brute-force automation simply exposed in a portal. As much as I think that there is more to DBaaS than those approaches, and as often as I get tempted to propose Enterprise Manager 12c, I try to be objective. I neither want to dampen the spirit of the happy ones, nor stoke the pain of the unhappy ones. As I mentioned in my previous post, I don't deny that vanilla automation can be useful. I like virtualization too, for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully. For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder:

    1. Does it make life easier for your end users?

    Database-as-a-Service can have several types of end users - junior DBAs, QA engineers, developers - each having their own skill set. The objective of DBaaS is to make their lives simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a developer using Oracle Application Express (APEX), you want to deal with schemas, objects, and PL/SQL code, and not with datafiles or listener configuration. If you are a QA engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues. The question to ask, therefore, is whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with, a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system? Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor.

    2. Does it make life easier for your administrators?

    Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end-user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a Service Catalog exposes DBaaS of specific database versions and configurations, it, by its very nature, enforces a certain discipline and standardization within the IT environment. What if, instead of specific database configurations, the cloud allowed each end user to create databases of their liking, resulting in hundreds of version and patch levels and thousands of individual databases? The right question to ask, therefore, is whether the unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking them, backing them up, administering them? Studies have shown that these administrative overheads increase exponentially with new targets, and it could result in a management nightmare.
    That leads us to our next question.

    3. Does it satisfy your security officers and compliance auditors?

    Compliance auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom to tamper with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with VMs as they keep getting reconfigured and moved around. This leads to the proverbial needle-in-the-haystack problem, and all it takes is one needle to cause a serious compliance issue in the enterprise. The bottom line is that flexibility and agility should not come at the expense of compliance, and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a one-size-fits-all approach, i.e. OS-level isolation, can we think smartly about database isolation or schema-based isolation? This is where appropriate resource modeling needs to be applied. The usual systems management vendors out there, with their heterogeneous common-denominator approach, have compromised on these semantics. If you follow Enterprise Manager's DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies versus databases on physical RAC versus Schema-as-a-Service in a single database should be governed by the needs of the applications, not by putting compliance considerations on the back burner.

    4. Does it satisfy your CIO?

    Finally, does it satisfy your higher-ups? As the sponsor of the cloud initiative, the CIO is expected to lead an IT transformation project, not merely run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation flattens out at 20-25%. The question would be: what next? As we go higher up in the stack, the need to virtualize, segregate, and optimize shifts to those layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, "the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization; they care about business agility and service-level assurance. Last but not least, a lot of CIOs get miffed if we ask them to throw away their existing hardware investments to implement DBaaS. At Oracle, we always emphasize freedom in choosing a platform; hence, Enterprise Manager's DBaaS solution is platform-neutral: it can work on any operating system (that the agent is certified on), on Oracle's hardware as well as third-party hardware.

    As a parting note, I urge you to remember these 4 questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

    Read the article

  • Data Source Connection Pool Sizing

    - by Steve Felts
    One of the most time-consuming procedures of a database application is establishing a connection. The connection pooling of the data source can be used to minimize this overhead. That argues for using the data source instead of accessing the database driver directly.

    Configuring the size of the pool in the data source is somewhere between an art and a science; this article will try to move it closer to science. From the beginning, the WLS data source has had initial capacity and maximum capacity configuration values. When the system starts up and when it shrinks, initial capacity is used. The pool can grow to maximum capacity. Customers found that they might want to set the initial capacity to 0 (more on that later) but didn't want the pool to shrink to 0. In WLS 10.3.6, we added minimum capacity to specify the lower limit to which a pool will shrink. If minimum capacity is not set, it defaults to the initial capacity, for upward compatibility. We also did some work on shrinking in release 10.3.4 to reduce thrashing; the algorithm that used to shrink to the maximum of the currently used connections or the initial capacity (basically, all unused connections were released) was changed to shrink by half of the unused connections.

    The simple approach to sizing the pool is to set the initial/minimum capacity to the maximum capacity. Doing this creates all connections at startup, avoiding creating connections on demand, and the pool is stable. However, there are a number of reasons not to take this simple approach.

    When WLS is booted, the deployment of the data source includes synchronously creating the connections. The more connections that are configured in initial capacity, the longer the boot time for WLS (there have been several projects for parallel boot in WLS, but none that are available). Related to creating a lot of connections at boot time is the problem of logon storms (the database gets too much work at one time). WLS has a solution for that in the login delay seconds setting on the pool, but that also increases the boot time.

    There are a number of cases where it is desirable to set the initial capacity to 0. By doing that, the overhead of creating connections is deferred out of the boot, and the database doesn't need to be available. An application may not want WLS to automatically connect to the database until it is actually needed, such as for some cold/warm failover configurations.

    There are a number of cases where minimum capacity should be less than maximum capacity. Connections are generally expensive to keep around. They cause state to be kept on both the client and the server, and the state on the backend may be heavy (for example, a process). Depending on the vendor, connection usage may cost money. If the workload is not constant, then database connections can be freed up by shrinking the pool when connections are not in use.
    When using Active GridLink, connections can be created as needed according to runtime load balancing (RLB) percentages instead of by connection load balancing (CLB) during data source deployment.

    Shrinking is an effective technique for clearing the pool when connections are not in use. In addition to the obvious case where the workload is lighter, there are some configurations where the database and/or firewall conspire to make long-unused or too-old connections no longer viable. There are also some data source features where the connection has state and cannot be used again unless the state matches the request. Examples of this are identity-based pooling, where the connection has a particular owner, and XA affinity, where the connection is associated with a particular RAC node. At this point, WLS does not re-purpose (discard/replace) connections, and shrinking is a way to get rid of an unused existing connection and get a new one with the correct state when needed.

    So far, the discussion has focused on the relationship of initial, minimum, and maximum capacity. Computing the maximum size requires some knowledge about the application and the current number of simultaneously active users, web sessions, batch programs, or whatever access patterns are common. Applications should be written to reserve and close connections only as needed, but multiple statements, if needed, should be done in one reservation (don't get/close more often than necessary). This means that the size of the pool is likely to be significantly smaller than the number of users. If possible, you can pick a size and see how it performs under simulated or real load. There is a high-water-mark statistic (ActiveConnectionsHighCount) that tracks the maximum number of connections used concurrently. In general, you want the size to be big enough that you never run out of connections, but no bigger. It will need to deal with spikes in usage, which is where shrinking after the spike is important. Of course, the database capacity also has a big influence on the decision, since it's important not to overload the database machine. Planning also needs to happen if you are running in a Multi Data Source or Active GridLink configuration and expect that the remaining nodes will take over the connections when one of the nodes in the cluster goes down. For XA affinity, additional headroom is also recommended.

    In summary, setting initial and maximum capacity to be the same may be simple, but there are many other factors that may be important in making the decision about sizing.
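    One simple cross-check on the sizing numbers above: alongside the ActiveConnectionsHighCount statistic on the WLS side, you can watch how many sessions the pool actually holds from the database side. Below is an illustrative sketch against an Oracle backend, assuming the pool connects as a dedicated schema (the WLS_POOL name is hypothetical):

        -- Count sessions opened by the data source's database user
        select username, count(*) as pool_sessions
          from v$session
         where username = 'WLS_POOL'   -- hypothetical pool schema
         group by username;

    Comparing this count before and after a load spike shows whether shrinking is actually returning the pool toward the configured minimum capacity.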

    Read the article

  • Maintenance plans love story

    - by Maria Zakourdaev
    There are about 200 QA and DEV SQL Servers out there. There is a maintenance plan on many of them that performs a backup of all databases and removes the backup history files.

    First of all, I must admit that I'm no big fan of maintenance plans in particular, or of SSIS packages in general. In this specific case, if I ever need to change anything in the way the backup is performed, such as the compression feature, or make some other change, I have to open each plan one by one. This is quite a pain. Therefore, I have decided to replace the maintenance plans with a stored procedure that will perform exactly the same thing. Having such a procedure will allow me to open multiple server connections and just execute an ALTER PROCEDURE whenever I need to change anything in it. There is nothing like good ole T-SQL.

    The first challenge was to remove the unneeded maintenance plans. Of course, I didn't want to do it server by server. I found the procedure msdb.dbo.sp_maintplan_delete_plan, but it only has a parameter for the maintenance plan id and no other parameters, like plan name, which would have been much more useful. Now I needed to find the table that holds all maintenance plans on the server. You would think that it would be msdb.dbo.sysdbmaintplans but, unfortunately, regardless of the number of maintenance plans on the instance, it contains just one row. After a while I found another table: msdb.dbo.sysmaintplan_subplans. It contains the plan id that I was looking for, in the plan_id column, as well as the id of the agent job which executes the plan's package.

    That was all I needed, and the rest turned out to be quite easy. Here is a script that can be executed against hundreds of servers from a multi-server query window to drop the specific maintenance plans:

        DECLARE @PlanID uniqueidentifier

        SELECT @PlanID = plan_id
        FROM msdb.dbo.sysmaintplan_subplans
        WHERE name like 'BackupPlan%'

        EXECUTE msdb.dbo.sp_maintplan_delete_plan @plan_id = @PlanID

    The second step was to create a procedure that performs all of the old maintenance plan tasks: create a folder for each database, back up all databases on the server, and clean up the old files. The script is below. Enjoy.

        ALTER PROCEDURE BackupAllDatabases
            @PrintMode BIT = 1
        AS
        BEGIN
            DECLARE @BackupLocation VARCHAR(500)
            DECLARE @PurgeAferDays INT
            DECLARE @PurgingDate VARCHAR(30)
            DECLARE @SQLCmd VARCHAR(MAX)
            DECLARE @FileName VARCHAR(100)

            SET @PurgeAferDays = -14
            SET @BackupLocation = '\\central_storage_servername\BACKUPS\' + @@servername

            SET @PurgingDate = CONVERT(VARCHAR(19), DATEADD(dd, @PurgeAferDays, GETDATE()), 126)

            SET @FileName = '?_full_'
                          + REPLACE(CONVERT(VARCHAR(19), GETDATE(), 126), ':', '-')
                          + '.bak';

            SET @SQLCmd = '
                IF ''?'' <> ''tempdb'' BEGIN
                    EXECUTE master.dbo.xp_create_subdir N''' + @BackupLocation + '\?\'' ;

                    BACKUP DATABASE ? TO DISK = N''' + @BackupLocation + '\?\' + @FileName + '''
                    WITH NOFORMAT, NOINIT, SKIP, REWIND, NOUNLOAD, COMPRESSION, STATS = 10 ;

                    EXECUTE master.dbo.xp_delete_file 0, N''' + @BackupLocation + '\?\'', N''bak'', N''' + @PurgingDate + ''', 1;
                END'

            IF @PrintMode = 1 BEGIN
                PRINT @SQLCmd
            END

            EXEC sp_MSforeachdb @SQLCmd
        END
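    A minimal usage sketch, assuming the procedure has been deployed to each instance (for example, from the same multi-server query window used above):

        -- @PrintMode = 1 also prints each generated BACKUP command to the
        -- Messages tab for review; the backups are executed either way.
        EXEC dbo.BackupAllDatabases @PrintMode = 1;

    Since sp_MSforeachdb substitutes each database name for the ? placeholder, a single call backs up every database on the instance except tempdb.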

    Read the article

  • Complex event system for DungeonKeeper like game

    - by paul424
    I am working on an open-source GPL3 game, http://opendungeons.sourceforge.net/ ; new coders would be welcome. Now there's a design question regarding the event system: we want to improve the game logic, that is, program a new event system. I will just repost what's settled already at http://forum.freegamedev.net/viewtopic.php?f=45&t=3033. From the discussion came the idea of the Publisher / Subscriber pattern + "domains":

    My current idea is to use the subscribers / publishers model. It's similar to the Observer pattern, but instead one subscribes to Event types, not to an Object's events. For each Event I would like to have both a static and a dynamic type. The static type would be resolved by belonging to the proper class inherited from Event. That is, from Event we would have EventTile, EventCreature, EventMapLoader, EventGameMap, etc. From those there are of course subtypes; for example, under EventCreature there would be EventKobold, EventKnight, EventTentacle, etc.

    The listeners would collect the events from publishers and send them to subscribers; each listener would be a global singleton. The Listener type hierarchy would exactly mirror the type hierarchy of Events. Each constructor of an Event type would notify the proper listeners. That is, when constructing an EventKnight, the ctor would notify the listeners EventListener, CreatureListener, and KnightListener. The default action for a listener would be to notify all subscribers, but there would be some exceptions, like EventAttack, which would notify AttackListener, which would dispatch the event by its dynamic part (that is, the Creature pointer or hash). Any comments?

        #include <vector>

        class Subscriber;
        class AttackSubscriber;

        class Event{
        private:
            int foo;
            int bar;
        protected:
            // static std::vector<Publisher*> publishersList;
            static std::vector<Subscriber*> subscribersList;
            static std::vector<Event*> eventQueue;
        public:
            Event(){
                eventQueue.push_back(this);
            }
            static int subscribe(Subscriber* ss);
            static int unsubscribe(Subscriber* ss);
            //static int reg_publisher(Publisher* pp);
            //static int unreg_publisher(Publisher* pp);
        };

        // class Publisher{
        // };

        class Subscriber{
        public:
            int (*newEvent) (Event* ee);              // callback invoked on each new Event
            Subscriber() : newEvent(0) {
                Event::subscribe(this);
            }
            Subscriber( int (*fp) (Event* ee) ) : newEvent(fp) {
                Event::subscribe(this);               // register with the Event listener
            }
            ~Subscriber(){
                Event::unsubscribe(this);
            }
        };

        class EventAttack : public Event{
        private:
            int foo;
            int bar;
        protected:
            // static std::vector<Publisher*> publishersList;
            static std::vector<AttackSubscriber*> subscribersList;
            static std::vector<EventAttack*> eventQueue;
        public:
            EventAttack(){
                eventQueue.push_back(this);
            }
            static int subscribe(AttackSubscriber* ss);
            static int unsubscribe(AttackSubscriber* ss);
            //static int reg_publisher(Publisher* pp);
            //static int unreg_publisher(Publisher* pp);
        };

        class AttackSubscriber : public Subscriber{
        public:
            int (*newEvent) (EventAttack* ee);        // note: shadows Subscriber::newEvent
            AttackSubscriber() : newEvent(0) {
                EventAttack::subscribe(this);
            }
            AttackSubscriber( int (*fp) (EventAttack* ee) ) : newEvent(fp) {
                EventAttack::subscribe(this);         // register with the EventAttack listener
            }
            ~AttackSubscriber(){
                EventAttack::unsubscribe(this);
            }
        };
    From that point, others wanted the Subject-Observer pattern, where one would subscribe to all event types produced by a particular object. That is how the idea came up to add the domain system:

    To meet the ability to listen to a particular game object's events, I thought of introducing entity domains. Domains are trees whose nodes are labeled with unique names at each level (like www addresses). Each Entity wanting to participate in our event system (that is, be able to publish / produce events) should at least know its domain name. That would end up with Player1/Room/Treasury/#24 or Player1/Creature/Kobold/#3 producing events. The subscriber picks some part of the tree. For example, specifying the subtree rooted at one of the nodes, like Player1/Room/*, would subscribe us to all of Player1's room events, and Player1/Creature/Kobold/#3 would subscribe us to Player1's third kobold's events. Does such an event system make sense to you? I have many implementation details to ask about as well, but first let's start some general discussion.

    Note 1: Notice that in the case of a fight between two creatures, the creature being attacked would have to throw the event, because it is HE/SHE/IT who has the domain address. So that would be BeingAttackedEvent(), etc. I will edit this post if some other reflections on this come up.

    Note 2: The existing class hierarchy might be used to build the domain addresses in the constructors. In each ctor you would just append "/className" to the domain address. In a leaf constructor of the class hierarchy, one might use nextID, a hash, or any other characteristic, just to make the addresses distinguishable.

    Note 3: Subscribing to all of an entity's events would require knowledge of all possible events produced by that entity. This could be done in one function call, but information on the events produced would have to be maintained for every Entity.

    Smart Note 4: Finding the proper subscribers in a tree would be easy. One would start at a particular leaf, for example Player1/Creature/Kobold/#3, and go up one parent at a time, notifying each subscriber at a node, i.e.: Player1/Creature/Kobold/*, Player1/Creature/*, Player1/*, etc., up to the root, that is /*.

    Note 5: The event system was needed to have some way of incorporating AngelScript code into the application. So the event dispatcher was to be a gate to A-script functions. But it came out to this one.

    Read the article

  • Cisco ASA 5505 - L2TP over IPsec

    - by xraminx
    I have followed this document on the Cisco site to set up the L2TP over IPsec connection. When I try to establish a VPN to the ASA 5505 from my Windows XP machine, after I click on the "connect" button, the "Connecting ...." dialog box appears, and after a while I get this error message:

        Error 800: Unable to establish VPN connection. The VPN server may be
        unreachable, or security parameters may not be configured properly
        for this connection.

    ASA version 7.2(4)
    ASDM version 5.2(4)
    Windows XP SP3

    Windows XP and the ASA 5505 are on the same LAN for test purposes.

    Edit 1: There are two VLANs defined on the Cisco device (the standard setup on the Cisco ASA 5505): port 0 is on VLAN2 (outside), and ports 1 to 7 are on VLAN1 (inside). I run a cable from my Linksys home router (10.50.10.1) to the Cisco ASA 5505 on port 0 (outside). Port 0 has IP 192.168.1.1 used internally by Cisco, and I have also assigned the external IP 10.50.10.206 to port 0 (outside). I run a cable from the Windows XP machine to the Cisco router on port 1 (inside). Port 1 is assigned the IP 192.168.1.2 from the Cisco router. The Windows XP machine is also connected to my Linksys home router via wireless (10.50.10.141).

    Edit 2: When I try to establish the VPN, the Cisco device's real-time log viewer shows 7 entries like this:

        Severity: 5  Date: Sep 15 2009  Time: 14:51:29
        SyslogID: 713904  Destination IP = 10.50.10.141
        Description: No crypto map bound to interface... dropping pkt

    Edit 3: This is the setup on the router right now.

    Result of the command: "show run"

        : Saved
        :
        ASA Version 7.2(4)
        !
        hostname ciscoasa
        domain-name default.domain.invalid
        enable password HGFHGFGHFHGHGFHGF encrypted
        passwd NMMNMNMNMNMNMN encrypted
        names
        name 192.168.1.200 WebServer1
        name 10.50.10.206 external-ip-address
        !
        interface Vlan1
         nameif inside
         security-level 100
         ip address 192.168.1.1 255.255.255.0
        !
        interface Vlan2
         nameif outside
         security-level 0
         ip address external-ip-address 255.0.0.0
        !
        interface Vlan3
         no nameif
         security-level 50
         no ip address
        !
        interface Ethernet0/0
         switchport access vlan 2
        !
        interface Ethernet0/1
        !
        interface Ethernet0/2
        !
        interface Ethernet0/3
        !
        interface Ethernet0/4
        !
        interface Ethernet0/5
        !
        interface Ethernet0/6
        !
        interface Ethernet0/7
        !
        ftp mode passive
        dns server-group DefaultDNS
         domain-name default.domain.invalid
        object-group service l2tp udp
         port-object eq 1701
        access-list outside_access_in remark Allow incoming tcp/http
        access-list outside_access_in extended permit tcp any host WebServer1 eq www
        access-list outside_access_in extended permit udp any any eq 1701
        access-list inside_nat0_outbound extended permit ip any 192.168.1.208 255.255.255.240
        access-list inside_cryptomap_1 extended permit ip interface outside interface inside
        pager lines 24
        logging enable
        logging asdm informational
        mtu inside 1500
        mtu outside 1500
        ip local pool PPTP-VPN 192.168.1.210-192.168.1.220 mask 255.255.255.0
        icmp unreachable rate-limit 1 burst-size 1
        asdm image disk0:/asdm-524.bin
        no asdm history enable
        arp timeout 14400
        global (outside) 1 interface
        nat (inside) 0 access-list inside_nat0_outbound
        nat (inside) 1 0.0.0.0 0.0.0.0
        static (inside,outside) tcp interface www WebServer1 www netmask 255.255.255.255
        access-group outside_access_in in interface outside
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
        http server enable
        http 192.168.1.0 255.255.255.0 inside
        no snmp-server location
        no snmp-server contact
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac
        crypto ipsec transform-set TRANS_ESP_3DES_SHA mode transport
        crypto ipsec transform-set TRANS_ESP_3DES_MD5 esp-3des esp-md5-hmac
        crypto ipsec transform-set TRANS_ESP_3DES_MD5 mode transport
        crypto map outside_map 1 match address inside_cryptomap_1
        crypto map outside_map 1 set transform-set TRANS_ESP_3DES_MD5
        crypto map outside_map interface inside
        crypto isakmp enable outside
        crypto isakmp policy 10
         authentication pre-share
         encryption 3des
         hash md5
         group 2
         lifetime 86400
        telnet timeout 5
        ssh timeout 5
        console timeout 0
        dhcpd auto_config outside
        !
        dhcpd address 192.168.1.2-192.168.1.33 inside
        dhcpd enable inside
        !
        group-policy DefaultRAGroup internal
        group-policy DefaultRAGroup attributes
         dns-server value 192.168.1.1
         vpn-tunnel-protocol IPSec l2tp-ipsec
        username myusername password FGHFGHFHGFHGFGFHF nt-encrypted
        tunnel-group DefaultRAGroup general-attributes
         address-pool PPTP-VPN
         default-group-policy DefaultRAGroup
        tunnel-group DefaultRAGroup ipsec-attributes
         pre-shared-key *
        tunnel-group DefaultRAGroup ppp-attributes
         no authentication chap
         authentication ms-chap-v2
        !
        !
        prompt hostname context
        Cryptochecksum:a9331e84064f27e6220a8667bf5076c1
        : end

    Read the article

  • How to create a multiboot flash drive

    - by Nrew
    I've found a guide here: http://www.pendrivelinux.com/boot-multiple-iso-from-usb-multiboot-usb/ And found this menu.lst in my flash drive, which seems to be the one that I'm seeing when I boot using my flash drive: # This Menu Created by Lance http://www.pendrivelinux.com # Ongoing Suggested Menu Entries and the Suggestor are noted! default 0 timeout 30 color NORMAL HIGHLIGHT HELPTEXT HEADING splashimage=(hd0,0)/splash.xpm.gz foreground=FFFFFF background=0066FF title Memtest86+ find --set-root /memtest86+-4.00.iso map --mem /memtest86+-4.00.iso (hd32) map --hook root (hd32) chainloader (hd32) # Suggested by madprofessor title Boot Clonezilla root (hd0,0) kernel /clonezilla/live/vmlinuz live-media-path=clonezilla/live bootfrom=/dev/sd boot=live union=aufs noprompt ocs_live_run="ocs-live-general" ocs_live_extra_param="" ocs_live_keymap="" ocs_live_batch="no" ocs_lang="" vga=791 ip=frommedia initrd /clonezilla/live/initrd.img title Parted Magic 4.9 (Partition Tools) find --set-root /pmagic-4.9.iso map /pmagic-4.9.iso (hd32) map --hook root (hd32) chainloader (hd32) # Suggested by Deb title Partition Wizard 4.2 (Partition Tools) find --set-root /pwhe42.iso map /pwhe42.iso (hd32) map --hook root (hd32) chainloader (hd32) title Balder DOS image (FreeDOS) map --unsafe-boot /balder10.img (fd0) map --hook chainloader --force (fd0)+1 rootnoverify (fd0) # Suggested by Szymon Silski title Linux Mint 8 find --set-root /LinuxMint-8.iso map /LinuxMint-8.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/mint.seed boot=casper persistent iso-scan/filename=/LinuxMint-8.iso splash initrd /casper/initrd.lz title Ubuntu 10.04 find --set-root /ubuntu-10.04-desktop-i386.iso map /ubuntu-10.04-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper persistent iso-scan/filename=/ubuntu-10.04-desktop-i386.iso splash initrd /casper/initrd.lz title Xubuntu 10.04 (XFCE Desktop) find --set-root /xubuntu-10.04-desktop-i386.iso map /xubuntu-10.04-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/xubuntu.seed boot=casper persistent iso-scan/filename=/xubuntu-10.04-desktop-i386.iso splash initrd /casper/initrd.lz title Kubuntu 10.04 (KDE Desktop) find --set-root /kubuntu-10.04-desktop-i386.iso map /kubuntu-10.04-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/kubuntu.seed boot=casper persistent iso-scan/filename=/kubuntu-10.04-desktop-i386.iso splash initrd /casper/initrd.lz # Suggested by Ambriel title Lubuntu 10.04 (LXDE Lightweight Desktop) find --set-root /lubuntu-10.04.iso map /lubuntu-10.04.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/lubuntu.seed boot=casper persistent iso-scan/filename=/lubuntu-10.04.iso splash initrd /casper/initrd.lz title Ubuntu 10.04 Netbook Remix (NetBook Distro) find --set-root /ubuntu-10.04-netbook-i386.iso map /ubuntu-10.04-netbook-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/netbook-remix.seed boot=casper persistent iso-scan/filename=/ubuntu-10.04-netbook-i386.iso splash initrd /casper/initrd.lz title Ubuntu 10.04 Server Edition Installer (32 bit Installer Only) find --set-root /ubuntu-10.04-server-i386.iso map /ubuntu-10.04-server-i386.iso (0xff) map --hook root (0xff) kernel /install/vmlinuz file=/cdrom/preseed/ubuntu-server.seed boot=install iso-scan/filename=/ubuntu-10.04-server-i386.iso splash initrd /install/initrd.gz title Ubuntu 9.10 find 
--set-root /ubuntu-9.10-desktop-i386.iso map /ubuntu-9.10-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper persistent iso-scan/filename=/ubuntu-9.10-desktop-i386.iso splash initrd /casper/initrd.lz title Xubuntu 9.10 find --set-root /xubuntu-9.10-desktop-i386.iso map /xubuntu-9.10-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/xubuntu.seed boot=casper persistent iso-scan/filename=/xubuntu-9.10-desktop-i386.iso splash initrd /casper/initrd.lz title Kubuntu 9.10 find --set-root /kubuntu-9.10-desktop-i386.iso map /kubuntu-9.10-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/kubuntu.seed boot=casper persistent iso-scan/filename=/kubuntu-9.10-desktop-i386.iso splash initrd /casper/initrd.lz # Ubuntu Server and Netbook Remix suggested by Wojciech Holek title Ubuntu 9.10 Server Edition Installer (Installer Only) find --set-root /ubuntu-9.10-server-i386.iso map /ubuntu-9.10-server-i386.iso (0xff) map --hook root (0xff) kernel /install/vmlinuz file=/cdrom/preseed/ubuntu-server.seed boot=install iso-scan/filename=/ubuntu-9.10-server-i386.iso splash initrd /install/initrd.gz title Ubuntu 9.10 Netbook Remix (NetBook Distro) find --set-root /ubuntu-9.10-netbook-remix-i386.iso map /ubuntu-9.10-netbook-remix-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/netbook-remix.seed boot=casper persistent iso-scan/filename=/ubuntu-9.10-netbook-remix-i386.iso splash initrd /casper/initrd.lz title Ubuntu 9.10 Rescue Remix (Recovery Tools) find --set-root /ubuntu-rescue-remix-9-10-revision1.iso map /ubuntu-rescue-remix-9-10-revision1.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper iso-scan/filename=/ubuntu-rescue-remix-9-10-revision1.iso splash initrd /casper/initrd.lz title DSL 4.4.10 find --set-root /dsl-4.4.10-initrd.iso map --mem /dsl-4.4.10-initrd.iso (hd32) map --hook root (hd32) chainloader (hd32) title AVG Rescue CD (Anti-Virus + Anti-Spyware) find --set-root /avg_arl_en_90_100114.iso map /avg_arl_en_90_100114.iso (hd32) map --hook chainloader (hd32) title Ultimate Boot CD 4.11 find --set-root /ubcd411.iso map /ubcd411.iso (hd32) map --hook chainloader (hd32) title OphCrack XP 2.3.1 (XP Password Cracker) find --set-root /ophcrack-xp-livecd-2.3.1.iso map /ophcrack-xp-livecd-2.3.1.iso (0xff) map --hook root (0xff) kernel /boot/bzImage rw root=/dev/null vga=normal lang=C kmap=us screen=1024x768x16 autologin initrd /boot/rootfs.gz title OphCrack Vista 2.3.1 (Vista Password Cracker) find --set-root /ophcrack-vista-livecd-2.3.1.iso map /ophcrack-vista-livecd-2.3.1.iso (0xff) map --hook root (0xff) kernel /boot/bzImage rw root=/dev/null vga=normal lang=C kmap=us screen=1024x768x16 autologin initrd /boot/rootfs.gz # Suggested by Greg Steer title Offline NT Password & Registy Editor find --set-root /cd080802.iso map /cd080802.iso (hd32) map --hook chainloader (hd32) title SliTaz 2.0 find --set-root /slitaz-2.0.iso map --mem /slitaz-2.0.iso (hd32) map --hook chainloader (hd32) title Riplinux 9.3 find --set-root /RIPLinuX-9.3.iso map --heads=0 --sectors-per-track=0 /RIPLinuX-9.3.iso (0xff) || map --heads=0 --sectors-per-track=0 --mem /RIPLinuX-9.3.iso (0xff) map --hook chainloader (0xff) # Suggested by Sunny title YlmF (Windows Like OS) find --set-root /YlmF_OS_EN_v1.0.iso map /YlmF_OS_EN_v1.0.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed 
boot=casper persistent iso-scan/filename=/YlmF_OS_EN_v1.0.iso splash initrd /casper/initrd.lz # Suggested by Martin Andersson title DBAN 1.0.7 (Drive Nuker) find --set-root /dban-1.0.7_i386.iso map --mem /dban-1.0.7_i386.iso (hd32) map --hook root (hd32) chainloader (hd32) # Suggested by Robin McGough title xPUD 0.9.2 (NetBook Distro) find --set-root --ignore-floppies --ignore-cd /xpud-0.9.2.iso map --heads=0 --sectors-per-track=0 /xpud-0.9.2.iso (hd32) map --hook chainloader (hd32) title Puppy 4.3.1 find --set-root /puppy/pup-431.sfs kernel /puppy/vmlinuz initrd /puppy/initrd.gz # Suggested by Relst title Run a Linux OS from the Internet kernel /gpxe.lkrn

    I also put some .iso files for OS installers (Windows XP SP2 and Ubuntu 10.04) on the drive, but they didn't show up in the list when I booted. Do I need to extract the .iso files and put them in their respective folders? Do I need to add the new OSes to menu.lst? How do I add an ISO image (OS) to menu.lst? Before adding the .iso files I first made a folder named "Windows xp sp2" and placed the .iso files in there. Please help; I think I need to add the folder name or the file name to menu.lst, but I don't know how.
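    Each ISO in the list above got its own title stanza in menu.lst, so a newly copied ISO has to be added the same way; nothing is picked up automatically. A sketch of an entry for an Ubuntu-style (casper-based) live ISO dropped in the root of the drive — the filename is whatever the actual .iso is called, and grub4dos paths cannot contain spaces, so a folder named "Windows xp sp2" will not work (and Windows XP installer ISOs generally cannot be booted via this map trick at all):

        title My Extra Live ISO
        find --set-root /my-extra-distro.iso
        map /my-extra-distro.iso (0xff)
        map --hook
        root (0xff)
        kernel /casper/vmlinuz boot=casper iso-scan/filename=/my-extra-distro.iso splash
        initrd /casper/initrd.lz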

    Read the article

  • ISA 2006 refuses VPN DHCP requests as spoofing

    - by Daniel
    I'm running ISA 2006 with PPTP VPN for my AD-controlled network. DHCP is located on the ISA server itself and authentication is done by RADIUS (NPS) located on the DC. Right now my VPN clients can connect, access local DNS, and can ping ISA, the DC, and other clients. Here's where it gets weird. I noticed that despite all this, ipconfig shows the following:

        PPP adapter North Horizon VPN:
           Connection-specific DNS Suffix . :
           Description . . . . . . . . . . . : North Horizon VPN
           Physical Address. . . . . . . . . :
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes
           IPv4 Address. . . . . . . . . . . : 10.42.4.7(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.255
           Default Gateway . . . . . . . . . : 0.0.0.0
           DNS Servers . . . . . . . . . . . : 10.42.1.10
           NetBIOS over Tcpip. . . . . . . . : Enabled

    So I went over and checked my ISA logs for both DHCP requests and replies, only to find out that my VPN clients are being denied because ISA thinks it's a spoof. Here's some relevant information from the log (the VPN subnet is 10.42.4.0/24):

        Client IP: 10.42.4.6
        Destination: 255.255.255.255:67
        Client Username: (blank)
        Protocol: DHCP (request)
        Action: Denied
        Connection Rule: (blank)
        Source Network: VPN Clients
        Destination Network: Local Host
        Result Code: 0xc0040014 FWX_E_FWE_SPOOFING_PACKET_DROPPED
        Network Interface: 10.42.4.11
        ---------------------------------------------------------
        Original Client IP: 10.42.4.6
        Destination: 10.42.1.1
        Client Username: (valid user)
        Protocol: PING
        Action: Initiated
        Connection Rule: Allow PING to ISA
        Source Network: VPN Clients
        Destination Network: Local Host
        Result Code: 0x0 ERROR_SUCCESS
        Network Interface: (blank)

    I wasn't sure what this 10.42.4.11 network interface was - it certainly wasn't something I had set up - until I saw it in Routing and Remote Access, under IP Routing > General, as an interface called "Internal" bound to the same IP address. I also noticed that since ISA takes blocks of 10 IP addresses from DHCP for VPN, it had reserved 10.42.4.2-11. I'm not sure if it means anything, though. Thanks for your help.
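    For reference, a quick way to confirm what RRAS auto-created and bound on the ISA box (a sketch; assumes the Windows Server 2003-era netsh routing context is available there):

        rem list RRAS IP interfaces, including the auto-created "Internal" one
        netsh routing ip show interface
        rem show addresses in use, including the block RRAS reserved from DHCP
        ipconfig /all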

    Read the article

  • Making Thunar the default file browser without hiding the desktop icons

    - by Manu
    I really dislike Ubuntu's default file browser, nautilus, and decided to opt for a lighter alternative (Thunar or Xfe). I've used the following script to change the default to Thunar, but now all my icons are gone from the desktop ! The files are still there, in /home/myid/Desktop, but they do not appear. Is there a way to show them, or is this a consequence of removing nautilus as the default file browser ? Can I modify the following script* in order to keep the icons ? *copied from https://help.ubuntu.com/...: ## Originally written by aysiu from the Ubuntu Forums ## This is GPL'ed code ## So improve it and re-release it ## Define portion to make Thunar the default if that appears to be the appropriate action makethunardefault() { ## I went with --no-install-recommends because ## I didn't want to bring in a whole lot of junk, ## and Jaunty installs recommended packages by default. echo -e "\nMaking sure Thunar is installed\n" sudo apt-get update && sudo apt-get install thunar --no-install-recommends ## Does it make sense to change to the directory? ## Or should all the individual commands just reference the full path? echo -e "\nChanging to application launcher directory\n" cd /usr/share/applications echo -e "\nMaking backup directory\n" ## Does it make sense to create an entire backup directory? ## Should each file just be backed up in place? sudo mkdir nonautilusplease echo -e "\nModifying folder handler launcher\n" sudo cp nautilus-folder-handler.desktop nonautilusplease/ ## Here I'm using two separate sed commands ## Is there a way to string them together to have one ## sed command make two replacements in a single file? sudo sed -i -n 's/nautilus --no-desktop/thunar/g' nautilus-folder-handler.desktop sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-folder-handler.desktop echo -e "\nModifying browser launcher\n" sudo cp nautilus-browser.desktop nonautilusplease/ sudo sed -i -n 's/nautilus --no-desktop --browser/thunar/g' nautilus-browser.desktop sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-browser.desktop echo -e "\nModifying computer icon launcher\n" sudo cp nautilus-computer.desktop nonautilusplease/ sudo sed -i -n 's/nautilus --no-desktop/thunar/g' nautilus-computer.desktop sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-computer.desktop echo -e "\nModifying home icon launcher\n" sudo cp nautilus-home.desktop nonautilusplease/ sudo sed -i -n 's/nautilus --no-desktop/thunar/g' nautilus-home.desktop sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-home.desktop echo -e "\nModifying general Nautilus launcher\n" sudo cp nautilus.desktop nonautilusplease/ sudo sed -i -n 's/Exec=nautilus/Exec=thunar/g' nautilus.desktop ## This last bit I'm not sure should be included ## See, the only thing that doesn't change to the ## new Thunar default is clicking the files on the desktop, ## because Nautilus is managing the desktop (so technically ## it's not launching a new process when you double-click ## an icon there). ## So this kills the desktop management of icons completely ## Making the desktop pretty useless... would it be better ## to keep Nautilus there instead of nothing? Or go so far ## as to have Xfce manage the desktop in Gnome? echo -e "\nChanging base Nautilus launcher\n" sudo dpkg-divert --divert /usr/bin/nautilus.old --rename /usr/bin/nautilus && sudo ln -s /usr/bin/thunar /usr/bin/nautilus echo -e "\nRemoving Nautilus as desktop manager\n" killall nautilus echo -e "\nThunar is now the default file manager. 
To return Nautilus to the default, run this script again.\n" } restorenautilusdefault() { echo -e "\nChanging to application launcher directory\n" cd /usr/share/applications echo -e "\nRestoring backup files\n" sudo cp nonautilusplease/nautilus-folder-handler.desktop . sudo cp nonautilusplease/nautilus-browser.desktop . sudo cp nonautilusplease/nautilus-computer.desktop . sudo cp nonautilusplease/nautilus-home.desktop . sudo cp nonautilusplease/nautilus.desktop . echo -e "\nRemoving backup folder\n" sudo rm -r nonautilusplease echo -e "\nRestoring Nautilus launcher\n" sudo rm /usr/bin/nautilus && sudo dpkg-divert --rename --remove /usr/bin/nautilus echo -e "\nMaking Nautilus manage the desktop again\n" nautilus --no-default-window & ## The only change that isn't undone is the installation of Thunar ## Should Thunar be removed? Or just kept in? ## Don't want to load the script with too many questions? } ## Make sure that we exit if any commands do not complete successfully. ## Thanks to nanotube for this little snippet of code from the early ## versions of UbuntuZilla set -o errexit trap 'echo "Previous command did not complete successfully. Exiting."' ERR ## This is the main code ## Is it necessary to put an elseif in here? Or is ## redundant, since the directory pretty much ## either exists or it doesn't? ## Is there a better way to keep track of whether ## the script has been run before? if [[ -e /usr/share/applications/nonautilusplease ]]; then restorenautilusdefault else makethunardefault fi;
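    Two observations worth adding. First, sed -i -n 's/.../g' with no explicit p command suppresses all output, so each of those sed lines will actually truncate its .desktop file to empty (GNU sed's -n disables the automatic print that -i relies on). Second, a lighter-touch sketch that makes Thunar the default folder handler while leaving Nautilus in charge of the desktop icons, so nothing has to be diverted or killed (assumes a GNOME 2 era setup where Thunar's launcher is named Thunar.desktop):

        # register Thunar as the handler for folders
        sudo apt-get install thunar --no-install-recommends
        xdg-mime default Thunar.desktop inode/directory

        # if the desktop icons have already vanished, restart Nautilus as desktop manager
        nautilus --no-default-window &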

    Read the article

  • Postfix configuration - Using Virtualmin, but the server is bouncing back my mail.

    - by brodiebrodie
    I have no experience in setting up Postfix, and thought Virtualmin might do the legwork for me. Appears not. When I try to send mail to the domain (either [email protected], [email protected] or [email protected]) I get the following message returned: This is the mail system at host dedq239.localdomain. I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below. For further assistance, please send mail to <postmaster> If you do so, please include this problem report. You can delete your own text from the attached returned message. The mail system <[email protected]> (expanded from <[email protected]>): User unknown in virtual alias table Final-Recipient: rfc822; [email protected] Original-Recipient: rfc822;[email protected] Action: failed Status: 5.0.0 Diagnostic-Code: X-Postfix; User unknown in virtual alias table How can I diagnose the problem here? It seems that the mail gets to my server, but the server fails to deliver the message locally to the correct user. (This is a guess; truthfully I have no idea what is happening.) I have checked my virtual alias table and it seems to be set up correctly (I can post it if that would be helpful). Can anyone give me a clue as to the next step? Thanks. alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases broken_sasl_auth_clients = yes command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix debug_peer_level = 2 html_directory = no local_recipient_maps = $virtual_mailbox_maps mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain myorigin = $mydomain newaliases_path = /usr/bin/newaliases.postfix readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES sample_directory = /usr/share/doc/postfix-2.3.3/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop smtpd_recipient_restrictions = permit_mynetworks reject_unauth_destination smtpd_sasl_auth_enable = yes soft_bounce = no unknown_local_recipient_reject_code = 550 virtual_alias_maps = hash:/etc/postfix/virtual My mail log file (the last entry): Sep 30 15:13:47 dedq239 postfix/cleanup[7237]: 207C6B18158: message-id=<[email protected]> Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 207C6B18158: from=<[email protected]>, size=1805, nrcpt=1 (queue active) Sep 30 15:13:47 dedq239 postfix/error[7238]: 207C6B18158: to=<[email protected]>, orig_to=<[email protected]>, relay=none, delay=0.64, delays=0.61/0.01/0/0.02, dsn=5.0.0, status=bounced (User unknown in virtual alias table) Sep 30 15:13:47 dedq239 postfix/cleanup[7237]: 8DC13B18169: message-id=<[email protected]> Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 8DC13B18169: from=<>, size=3691, nrcpt=1 (queue active) Sep 30 15:13:47 dedq239 postfix/bounce[7239]: 207C6B18158: sender non-delivery notification: 8DC13B18169 Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 207C6B18158: removed Sep 30 15:13:48 dedq239 postfix/smtp[7240]: 8DC13B18169: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[209.85.216.55]:25, delay=1.3, delays=0.02/0.01/0.58/0.75, dsn=2.0.0, status=sent (250 2.0.0 OK 1254348828 36si15082901pxi.91) Sep 30 15:13:48 dedq239 postfix/qmgr[7177]: 8DC13B18169: removed Sep 30 15:14:17 dedq239 postfix/smtpd[7233]: disconnect from mail-bw0-f228.google.com[209.85.218.228] My etc/aliases file is below. I have not touched this file - myvirtualdomain is a replacement for my real domain name. # Aliases in this file will NOT be expanded in the header from # Mail, but WILL be visible over networks or from /bin/mail. # # >>>>>>>>>> The program "newaliases" must be run after # >> NOTE >> this file is updated for any changes to # >>>>>>>>>> show through to sendmail. # # Basic system aliases -- these MUST be present. mailer-daemon: postmaster postmaster: root # General redirections for pseudo accounts. bin: root daemon: root adm: root lp: root sync: root shutdown: root halt: root mail: root news: root uucp: root operator: root games: root gopher: root ftp: root nobody: root radiusd: root nut: root dbus: root vcsa: root canna: root wnn: root rpm: root nscd: root pcap: root apache: root webalizer: root dovecot: root fax: root quagga: root radvd: root pvm: root amanda: root privoxy: root ident: root named: root xfs: root gdm: root mailnull: root postgres: root sshd: root smmsp: root postfix: root netdump: root ldap: root squid: root ntp: root mysql: root desktop: root rpcuser: root rpc: root nfsnobody: root ingres: root system: root toor: root manager: root dumper: root abuse: root newsadm: news newsadmin: news usenet: news ftpadm: ftp ftpadmin: ftp ftp-adm: ftp ftp-admin: ftp www: webmaster webmaster: root noc: root security: root hostmaster: root info: postmaster marketing: postmaster sales: postmaster support: postmaster # trap decode to catch security attacks decode: root # Person who should get root's mail #root: marc abuse-myvirtualdomain.com: [email protected] My etc/postfix/virtual file is below - again, myvirtualdomain is a replacement. I think this file was generated by Virtualmin and I have tried messing around with it with no success... This is the version without my changes. myunixusername@myvirtualdomain .com myunixusername myvirtualdomain .com myvirtualdomain.com [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected]
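    "User unknown in virtual alias table" means the lookup found the domain but no left-hand entry matching the recipient. A minimal sketch of what /etc/postfix/virtual needs for one domain (the names are placeholders, and note there must be no space inside the domain name, unlike the "myvirtualdomain .com" lines quoted above):

        # /etc/postfix/virtual -- left column: incoming address, right column: destination
        # the first line's right-hand side is ignored; it marks the domain as a virtual alias domain
        myvirtualdomain.com               anything
        info@myvirtualdomain.com          myunixusername
        postmaster@myvirtualdomain.com    myunixusername

    After every edit the hash map has to be rebuilt and Postfix reloaded:

        postmap /etc/postfix/virtual
        postfix reload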

    Read the article

  • Why is the i7-860 two times slower than the E3-1220?

    - by javapowered
    I have financical trading software. It decodes fast/fix messages. I'm running same binaries on two different machines on very similar set of data. Software receive "messages" and decodes them. The general rule - longer message takes more time to decode: i7-860, Windows 7: Debug 18:23:48.8047325 count=51 decoding take microseconds = 300 Debug 18:23:49.7287854 count=53 decoding take microseconds = 349 Debug 18:23:49.7397860 count=110 decoding take microseconds = 516 Debug 18:23:49.7497866 count=92 decoding take microseconds = 512 Debug 18:23:49.7597872 count=49 decoding take microseconds = 267 Debug 18:23:49.7717878 count=194 decoding take microseconds = 823 Debug 18:23:49.7797883 count=49 decoding take microseconds = 296 Debug 18:23:49.7997894 count=50 decoding take microseconds = 299 Debug 18:23:50.7328428 count=101 decoding take microseconds = 583 Debug 18:23:50.7418433 count=42 decoding take microseconds = 281 Debug 18:23:50.7538440 count=151 decoding take microseconds = 764 Debug 18:23:50.7618445 count=57 decoding take microseconds = 279 Debug 18:23:50.7738452 count=122 decoding take microseconds = 712 Debug 18:23:50.8028468 count=52 decoding take microseconds = 281 Debug 18:23:51.7389004 count=137 decoding take microseconds = 696 Debug 18:23:51.7499010 count=100 decoding take microseconds = 485 Debug 18:23:51.7689021 count=185 decoding take microseconds = 872 Debug 18:23:51.8079043 count=49 decoding take microseconds = 315 Debug 18:23:52.7349573 count=90 decoding take microseconds = 532 Debug 18:23:52.7439578 count=53 decoding take microseconds = 277 Debug 18:23:52.7539584 count=134 decoding take microseconds = 623 Debug 18:23:52.7629589 count=47 decoding take microseconds = 294 Debug 18:23:52.7749596 count=198 decoding take microseconds = 868 Debug 18:23:52.8039613 count=52 decoding take microseconds = 291 Debug 18:23:53.7400148 count=132 decoding take microseconds = 666 Debug 18:23:53.7480153 count=81 decoding take microseconds = 430 Debug 18:23:53.7570158 count=49 decoding take microseconds = 301 Debug 18:23:53.7710166 count=156 decoding take microseconds = 752 Debug 18:23:53.7770169 count=45 decoding take microseconds = 270 Debug 18:23:54.7350717 count=108 decoding take microseconds = 578 Debug 18:23:54.7430722 count=52 decoding take microseconds = 286 Debug 18:23:54.7540728 count=138 decoding take microseconds = 567 Debug 18:23:54.7760741 count=160 decoding take microseconds = 753 Debug 18:23:54.8030756 count=53 decoding take microseconds = 292 Debug 18:23:55.7411293 count=110 decoding take microseconds = 629 Debug 18:23:55.7481297 count=48 decoding take microseconds = 294 Debug 18:23:55.7591303 count=84 decoding take microseconds = 386 Debug 18:23:55.7701309 count=90 decoding take microseconds = 484 Debug 18:23:55.7801315 count=120 decoding take microseconds = 527 Debug 18:23:55.8101332 count=53 decoding take microseconds = 290 Debug 18:23:56.7341861 count=121 decoding take microseconds = 667 Debug 18:23:56.7421865 count=53 decoding take microseconds = 293 Debug 18:23:56.7531872 count=127 decoding take microseconds = 586 Debug 18:23:56.7621877 count=58 decoding take microseconds = 306 Debug 18:23:56.7751884 count=138 decoding take microseconds = 649 Debug 18:23:56.8021900 count=53 decoding take microseconds = 288 Debug 18:23:57.7392436 count=139 decoding take microseconds = 699 Debug 18:23:57.7502442 count=121 decoding take microseconds = 548 Debug 18:23:57.7582446 count=61 decoding take microseconds = 301 Debug 18:23:57.7692453 count=98 decoding take microseconds = 500 Debug 
18:23:57.7792458 count=94 decoding take microseconds = 460 Debug 18:23:57.8092476 count=41 decoding take microseconds = 274 Xeon E3-1220, Windows Server 2008 R2 foundation: Debug 18:28:57.5087967 count=117 decoding take microseconds = 255 Debug 18:28:57.5087967 count=85 decoding take microseconds = 187 Debug 18:28:57.5087967 count=55 decoding take microseconds = 155 Debug 18:28:57.5243967 count=86 decoding take microseconds = 189 Debug 18:28:57.5243967 count=53 decoding take microseconds = 139 Debug 18:28:57.5243967 count=52 decoding take microseconds = 153 Debug 18:28:57.5243967 count=55 decoding take microseconds = 146 Debug 18:28:57.5243967 count=103 decoding take microseconds = 239 Debug 18:28:57.5243967 count=83 decoding take microseconds = 182 Debug 18:28:57.5243967 count=85 decoding take microseconds = 180 Debug 18:28:57.5243967 count=80 decoding take microseconds = 202 Debug 18:28:57.5243967 count=58 decoding take microseconds = 135 Debug 18:28:57.5243967 count=55 decoding take microseconds = 140 Debug 18:28:57.5243967 count=81 decoding take microseconds = 183 Debug 18:28:57.5243967 count=74 decoding take microseconds = 172 Debug 18:28:57.5243967 count=80 decoding take microseconds = 174 Debug 18:28:57.5243967 count=88 decoding take microseconds = 175 Debug 18:28:57.5243967 count=55 decoding take microseconds = 131 Debug 18:28:57.5243967 count=80 decoding take microseconds = 182 Debug 18:28:57.5243967 count=80 decoding take microseconds = 183 Debug 18:28:57.5243967 count=101 decoding take microseconds = 231 Debug 18:28:57.5243967 count=58 decoding take microseconds = 134 Debug 18:28:57.5243967 count=57 decoding take microseconds = 126 Debug 18:28:57.5243967 count=57 decoding take microseconds = 134 Debug 18:28:57.5399967 count=115 decoding take microseconds = 234 Debug 18:28:57.5399967 count=106 decoding take microseconds = 225 Debug 18:28:57.5399967 count=108 decoding take microseconds = 241 Debug 18:28:57.5399967 count=84 decoding take microseconds = 177 Debug 18:28:57.5399967 count=54 decoding take microseconds = 141 Debug 18:28:57.5399967 count=84 decoding take microseconds = 186 Debug 18:28:57.5399967 count=82 decoding take microseconds = 184 Debug 18:28:57.5399967 count=82 decoding take microseconds = 179 Debug 18:28:57.5399967 count=56 decoding take microseconds = 133 Debug 18:28:57.5399967 count=57 decoding take microseconds = 127 Debug 18:28:57.5399967 count=82 decoding take microseconds = 185 Debug 18:28:57.5399967 count=76 decoding take microseconds = 178 Debug 18:28:57.5399967 count=82 decoding take microseconds = 184 Debug 18:28:57.5399967 count=54 decoding take microseconds = 139 Debug 18:28:57.5399967 count=54 decoding take microseconds = 137 Debug 18:28:57.5399967 count=81 decoding take microseconds = 184 Debug 18:28:57.5399967 count=136 decoding take microseconds = 275 Debug 18:28:57.5399967 count=55 decoding take microseconds = 138 Debug 18:28:57.5555968 count=52 decoding take microseconds = 140 Debug 18:28:57.5555968 count=53 decoding take microseconds = 136 Debug 18:28:57.5555968 count=54 decoding take microseconds = 139 Debug 18:28:57.5555968 count=55 decoding take microseconds = 138 Debug 18:28:57.5555968 count=57 decoding take microseconds = 134 Debug 18:28:57.5555968 count=53 decoding take microseconds = 136 Debug 18:28:57.5555968 count=80 decoding take microseconds = 174 Debug 18:28:57.5555968 count=74 decoding take microseconds = 175 Debug 18:28:57.5555968 count=57 decoding take microseconds = 133 Debug 18:28:57.5555968 count=57 decoding take microseconds 
= 149 Debug 18:28:57.5555968 count=100 decoding take microseconds = 262 Debug 18:28:57.5555968 count=56 decoding take microseconds = 156 Debug 18:28:57.5555968 count=55 decoding take microseconds = 165 From this test I see that the E3-1220 is almost two times faster than the i7-860. Is that possible? In the processor ratings these two chips come out about the same. Could this be because of cache or something? And if so, which processor should I buy to decode messages another two times faster?
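    Before comparing silicon, two software differences are worth ruling out: the log lines are tagged Debug, so make sure both machines are running the same Release build, and Windows 7's default Balanced power plan lets a desktop CPU downclock between message bursts, while server boxes are often left on High performance. A quick check-and-fix sketch for the i7-860 machine:

        rem show the active power plan, then switch to High performance
        powercfg /getactivescheme
        powercfg /setactive SCHEME_MIN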

    Read the article

  • Exposing the AnyConnect HTTPS service to outside network

    - by Maciej Swic
    We have a Cisco ASA 5505 with firmware ASA9.0(1) and ASDM 7.0(2). It is configured with a public ip address, and when trying to reach it from the outside by HTTPS for AnyConnect VPN, we get the following log output: 6 Nov 12 2012 07:01:40 <client-ip> 51000 <asa-ip> 443 Built inbound TCP connection 2889 for outside:<client-ip>/51000 (<client-ip>/51000) to identity:<asa-ip>/443 (<asa-ip>/443) 6 Nov 12 2012 07:01:40 <client-ip> 50999 <asa-ip> 443 Built inbound TCP connection 2890 for outside:<client-ip>/50999 (<client-ip>/50999) to identity:<asa-ip>/443 (<asa-ip>/443) 6 Nov 12 2012 07:01:40 <client-ip> 51000 <asa-ip> 443 Teardown TCP connection 2889 for outside:<client-ip>/51000 to identity:<asa-ip>/443 duration 0:00:00 bytes 0 No valid adjacency 6 Nov 12 2012 07:01:40 <client-ip> 50999 <asa-ip> 443 Teardown TCP connection 2890 for outside:<client-ip>/50999 to identity:<asa-ip>/443 duration 0:00:00 bytes 0 No valid adjacency We finished the startup wizard and the anyconnect vpn wizard and here is the resulting configuration: Cryptochecksum: 12262d68 23b0d136 bb55644a 9c08f86b : Saved : Written by enable_15 at 07:08:30.519 UTC Mon Nov 12 2012 ! ASA Version 9.0(1) ! hostname vpn domain-name office.<redacted>.com enable password <redacted> encrypted passwd <redacted> encrypted names ip local pool vpn-pool 192.168.67.2-192.168.67.253 mask 255.255.255.0 ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! interface Vlan1 nameif inside security-level 100 ip address 192.168.68.250 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address <redacted> 255.255.255.248 ! ftp mode passive dns server-group DefaultDNS domain-name office.<redacted>.com object network obj_any subnet 0.0.0.0 0.0.0.0 pager lines 24 logging enable logging asdm informational mtu outside 1500 mtu inside 1500 icmp unreachable rate-limit 1 burst-size 1 no asdm history enable arp timeout 14400 no arp permit-nonconnected ! 
object network obj_any nat (inside,outside) dynamic interface timeout xlate 3:00:00 timeout pat-xlate 0:00:30 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 timeout floating-conn 0:00:00 dynamic-access-policy-record DfltAccessPolicy user-identity default-domain LOCAL http server enable http 192.168.68.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart warmstart crypto ipsec ikev2 ipsec-proposal DES protocol esp encryption des protocol esp integrity sha-1 md5 crypto ipsec ikev2 ipsec-proposal 3DES protocol esp encryption 3des protocol esp integrity sha-1 md5 crypto ipsec ikev2 ipsec-proposal AES protocol esp encryption aes protocol esp integrity sha-1 md5 crypto ipsec ikev2 ipsec-proposal AES192 protocol esp encryption aes-192 protocol esp integrity sha-1 md5 crypto ipsec ikev2 ipsec-proposal AES256 protocol esp encryption aes-256 protocol esp integrity sha-1 md5 crypto ipsec security-association pmtu-aging infinite crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set ikev2 ipsec-proposal AES256 AES192 AES 3DES DES crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP crypto map outside_map interface outside crypto map inside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP crypto map inside_map interface inside crypto ca trustpoint _SmartCallHome_ServerCA crl configure crypto ca trustpoint ASDM_TrustPoint0 enrollment self subject-name CN=vpn proxy-ldc-issuer crl configure crypto ca trustpool policy crypto ca certificate chain _SmartCallHome_ServerCA certificate ca 6ecc7aa5a7032009b8cebcf4e952d491 <redacted> quit crypto ca certificate chain ASDM_TrustPoint0 certificate f678a050 <redacted> quit crypto ikev2 policy 1 encryption aes-256 integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 policy 10 encryption aes-192 integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 policy 20 encryption aes integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 policy 30 encryption 3des integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 policy 40 encryption des integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 enable outside client-services port 443 crypto ikev2 remote-access trustpoint ASDM_TrustPoint0 telnet timeout 5 ssh 192.168.68.0 255.255.255.0 inside ssh timeout 5 console timeout 0 vpn-addr-assign local reuse-delay 60 dhcpd auto_config outside ! dhcpd address 192.168.68.254-192.168.68.254 inside ! 
threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept ssl trust-point ASDM_TrustPoint0 inside ssl trust-point ASDM_TrustPoint0 outside webvpn enable outside enable inside anyconnect image disk0:/anyconnect-win-3.1.01065-k9.pkg 1 anyconnect image disk0:/anyconnect-linux-3.1.01065-k9.pkg 2 anyconnect image disk0:/anyconnect-macosx-i386-3.1.01065-k9.pkg 3 anyconnect profiles GM-AnyConnect_client_profile disk0:/GM-AnyConnect_client_profile.xml anyconnect enable tunnel-group-list enable group-policy GroupPolicy_GM-AnyConnect internal group-policy GroupPolicy_GM-AnyConnect attributes wins-server none dns-server value 192.168.68.254 vpn-tunnel-protocol ikev2 ssl-client default-domain value office.<redacted>.com webvpn anyconnect profiles value GM-AnyConnect_client_profile type user username <redacted> password <redacted> encrypted tunnel-group GM-AnyConnect type remote-access tunnel-group GM-AnyConnect general-attributes address-pool vpn-pool default-group-policy GroupPolicy_GM-AnyConnect tunnel-group GM-AnyConnect webvpn-attributes group-alias GM-AnyConnect enable ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum client auto message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect rsh inspect rtsp inspect esmtp inspect sqlnet inspect skinny inspect sunrpc inspect xdmcp inspect sip inspect netbios inspect tftp inspect ip-options ! service-policy global_policy global prompt hostname context call-home reporting anonymous Cryptochecksum:12262d6823b0d136bb55644a9c08f86b : end Clearly we are missing something, but the question is, what?
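    One thing visibly absent from this running config is a default route on the outside interface, and "No valid adjacency" is the teardown reason the ASA logs when it has no usable next hop for the reply traffic. A sketch of the likely missing line (the gateway address is hypothetical; it should be the upstream router on the public /29):

        route outside 0.0.0.0 0.0.0.0 <upstream-gateway-ip> 1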

    Read the article

  • Cisco ASA - Enable communication between same security level

    - by Conor
    I have recently inherited a network with a Cisco ASA (running version 8.2). I am trying to configure it to allow communication between two interfaces configured with the same security level (DMZ-DMZ) "same-security-traffic permit inter-interface" has been set, but hosts are unable to communicate between the interfaces. I am assuming that some NAT settings are causing my issue. Below is my running config: ASA Version 8.2(3) ! hostname asa enable password XXXXXXXX encrypted passwd XXXXXXXX encrypted names ! interface Ethernet0/0 switchport access vlan 400 ! interface Ethernet0/1 switchport access vlan 400 ! interface Ethernet0/2 switchport access vlan 420 ! interface Ethernet0/3 switchport access vlan 420 ! interface Ethernet0/4 switchport access vlan 450 ! interface Ethernet0/5 switchport access vlan 450 ! interface Ethernet0/6 switchport access vlan 500 ! interface Ethernet0/7 switchport access vlan 500 ! interface Vlan400 nameif outside security-level 0 ip address XX.XX.XX.10 255.255.255.248 ! interface Vlan420 nameif public security-level 20 ip address 192.168.20.1 255.255.255.0 ! interface Vlan450 nameif dmz security-level 50 ip address 192.168.10.1 255.255.255.0 ! interface Vlan500 nameif inside security-level 100 ip address 192.168.0.1 255.255.255.0 ! ftp mode passive clock timezone JST 9 same-security-traffic permit inter-interface same-security-traffic permit intra-interface object-group network DM_INLINE_NETWORK_1 network-object host XX.XX.XX.11 network-object host XX.XX.XX.13 object-group service ssh_2220 tcp port-object eq 2220 object-group service ssh_2251 tcp port-object eq 2251 object-group service ssh_2229 tcp port-object eq 2229 object-group service ssh_2210 tcp port-object eq 2210 object-group service DM_INLINE_TCP_1 tcp group-object ssh_2210 group-object ssh_2220 object-group service zabbix tcp port-object range 10050 10051 object-group service DM_INLINE_TCP_2 tcp port-object eq www group-object zabbix object-group protocol TCPUDP protocol-object udp protocol-object tcp object-group service http_8029 tcp port-object eq 8029 object-group network DM_INLINE_NETWORK_2 network-object host 192.168.20.10 network-object host 192.168.20.30 network-object host 192.168.20.60 object-group service imaps_993 tcp description Secure IMAP port-object eq 993 object-group service public_wifi_group description Service allowed on the Public Wifi Group. Allows Web and Email. 
service-object tcp-udp eq domain service-object tcp-udp eq www service-object tcp eq https service-object tcp-udp eq 993 service-object tcp eq imap4 service-object tcp eq 587 service-object tcp eq pop3 service-object tcp eq smtp access-list outside_access_in remark http traffic from outside access-list outside_access_in extended permit tcp any object-group DM_INLINE_NETWORK_1 eq www access-list outside_access_in remark ssh from outside to web1 access-list outside_access_in extended permit tcp any host XX.XX.XX.11 object-group ssh_2251 access-list outside_access_in remark ssh from outside to penguin access-list outside_access_in extended permit tcp any host XX.XX.XX.10 object-group ssh_2229 access-list outside_access_in remark http from outside to penguin access-list outside_access_in extended permit tcp any host XX.XX.XX.10 object-group http_8029 access-list outside_access_in remark ssh from outside to internal hosts access-list outside_access_in extended permit tcp any host XX.XX.XX.13 object-group DM_INLINE_TCP_1 access-list outside_access_in remark dns service to internal host access-list outside_access_in extended permit object-group TCPUDP any host XX.XX.XX.13 eq domain access-list dmz_access_in extended permit ip 192.168.10.0 255.255.255.0 any access-list dmz_access_in extended permit tcp any host 192.168.10.29 object-group DM_INLINE_TCP_2 access-list public_access_in remark Web access to DMZ websites access-list public_access_in extended permit object-group TCPUDP any object-group DM_INLINE_NETWORK_2 eq www access-list public_access_in remark General web access. (HTTP, DNS & ICMP and Email) access-list public_access_in extended permit object-group public_wifi_group any any pager lines 24 logging enable logging asdm informational mtu outside 1500 mtu public 1500 mtu dmz 1500 mtu inside 1500 no failover icmp unreachable rate-limit 1 burst-size 1 no asdm history enable arp timeout 60 global (outside) 1 interface global (dmz) 2 interface nat (public) 1 0.0.0.0 0.0.0.0 nat (dmz) 1 0.0.0.0 0.0.0.0 nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) tcp interface 2229 192.168.0.29 2229 netmask 255.255.255.255 static (inside,outside) tcp interface 8029 192.168.0.29 www netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.13 192.168.10.10 netmask 255.255.255.255 dns static (dmz,outside) XX.XX.XX.11 192.168.10.30 netmask 255.255.255.255 dns static (dmz,inside) 192.168.0.29 192.168.10.29 netmask 255.255.255.255 static (dmz,public) 192.168.20.30 192.168.10.30 netmask 255.255.255.255 dns static (dmz,public) 192.168.20.10 192.168.10.10 netmask 255.255.255.255 dns static (inside,dmz) 192.168.10.0 192.168.0.0 netmask 255.255.255.0 dns access-group outside_access_in in interface outside access-group public_access_in in interface public access-group dmz_access_in in interface dmz route outside 0.0.0.0 0.0.0.0 XX.XX.XX.9 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy http server enable http 192.168.0.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 
telnet timeout 5 ssh 192.168.0.0 255.255.255.0 inside ssh timeout 20 console timeout 0 dhcpd dns 61.122.112.97 61.122.112.1 dhcpd auto_config outside ! dhcpd address 192.168.20.200-192.168.20.254 public dhcpd enable public ! dhcpd address 192.168.0.200-192.168.0.254 inside dhcpd enable inside ! threat-detection basic-threat threat-detection statistics host threat-detection statistics access-list no threat-detection statistics tcp-intercept ntp server 130.54.208.201 source public webvpn ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum client auto message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect ip-options inspect netbios inspect rsh inspect rtsp inspect skinny inspect esmtp inspect sqlnet inspect sunrpc inspect tftp inspect sip inspect xdmcp !
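    Assuming the plan is to run the public and dmz interfaces at the same security-level, note that on 8.2 traffic between them must still find a NAT path once nat/global rules exist, and the config above only defines host statics from dmz into public. A sketch of bidirectional identity NAT so the two subnets reach each other untranslated (subnets taken from the interface config; verify against the existing static entries before applying):

        static (dmz,public) 192.168.10.0 192.168.10.0 netmask 255.255.255.0
        static (public,dmz) 192.168.20.0 192.168.20.0 netmask 255.255.255.0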

    Read the article

  • Performance required to improve Windows Experience Index?

    - by Ian Boyd
    Is there a guide on the metrics required to obtain a certain Windows Experience Index? A Microsoft guy said in January 2009: On the matter of transparency, it is indeed our plan to disclose in great detail how the scores are calculated, what the tests attempt to measure, why, and how they map to realistic scenarios and usage patterns. Has that amount of transparency happened? Is there a TechNet article somewhere? Suppose my score was limited by my Memory subscore of 5.9. A naive person would suggest: buy faster RAM. Which is wrong, of course. From the Windows help: If your computer has a 64-bit central processing unit (CPU) and 4 gigabytes (GB) or less random access memory (RAM), then the Memory (RAM) subscore for your computer will have a maximum of 5.9. You can buy the fastest, overclocked, liquid-cooled, DDR5 RAM on the planet; you'll still have a maximum Memory subscore of 5.9. So in general the knee-jerk advice "buy better stuff" is not helpful. What I am looking for is the attributes required to achieve a certain score, or to move beyond a current limitation. The information I've been able to compile so far, chiefly from 3 Windows blog entries and an article:

        Memory subscore
        Score    Conditions
        =======  ================================
        1.0      < 256 MB
        2.0      < 500 MB
        2.9      <= 512 MB
        3.5      < 704 MB
        3.9      < 944 MB
        4.5      <= 1.5 GB
        5.9      < 4.0GB-64MB on a 64-bit OS; Windows Vista highest score
        7.9      Windows 7 highest score

        Graphics subscore
        Score    Conditions
        =======  ======================
        1.0      doesn't support DX9
        1.9      doesn't support WDDM
        4.9      does not support Pixel Shader 3.0
        5.9      doesn't support DX10 or WDDM1.1; Windows Vista highest score
        7.9      Windows 7 highest score

        Gaming graphics subscore
        Score    Result
        =======  =============================
        1.0      doesn't support D3D
        2.0      supports D3D9, DX9 and WDDM
        5.9      doesn't support DX10 or WDDM1.1; Windows Vista highest score
        6.0-6.9  good framerates (e.g. 40-50fps) at normal resolutions (e.g. 1280x1024)
        7.0-7.9  even higher framerates at even higher resolutions
        7.9      Windows 7 highest score

        Processor subscore
        Score    Conditions
        =======  ==========================================================================
        5.9      Windows Vista highest score
        6.0-6.9  many quad core processors will be able to score in the high 6 - low 7 ranges
        7.0+     many quad core processors will be able to score in the high 6 - low 7 ranges
        7.9      8-core systems will be able to approach 8.9; Windows 7 highest score

        Primary hard disk subscore (note)
        Score    Conditions
        =======  ========================================
        1.9      Limit for pathological drives that stop responding when pending writes
        2.0      Limit for pathological drives that stop responding when pending writes
        2.9      Limit for pathological drives that stop responding when pending writes
        3.0      Limit for pathological drives that stop responding when pending writes
        5.9      highest you're likely to see without SSD; Windows Vista highest score
        7.9      Windows 7 highest score

    Bonus Chatter: You can find your WEI detailed test results in C:\Windows\Performance\WinSAT\DataStore, e.g. 
2011-11-06 01.00.19.482 Disk.Assessment (Recent).WinSAT.xml:

        <WinSAT>
          <WinSPR>
            <DiskScore>5.9</DiskScore>
          </WinSPR>
          <Metrics>
            <DiskMetrics>
              <AvgThroughput units="MB/s" score="6.4" ioSize="65536" kind="Sequential Read">89.95188</AvgThroughput>
              <AvgThroughput units="MB/s" score="4.0" ioSize="16384" kind="Random Read">1.58000</AvgThroughput>
              <Responsiveness Reason="UnableToAssess" Kind="Cap">TRUE</Responsiveness>
            </DiskMetrics>
          </Metrics>
        </WinSAT>

    Pre-emptive snarky comment: "WEI is useless, it has no relation to reality." Fine, then: how do I increase my hard drive's random I/O throughput?

    Update - Amount of memory limits rating. Some people don't believe Microsoft's statement that having less than 4GB of RAM on a 64-bit edition of Windows limits the rating to 5.9. And yet, from xxx.Formal.Assessment (Recent).WinSAT.xml:

        <WinSPR>
          <LimitsApplied>
            <MemoryScore>
              <LimitApplied Friendly="Physical memory available to the OS is less than 4.0GB-64MB on a 64-bit OS : limit mem score to 5.9" Relation="LT">4227858432</LimitApplied>
            </MemoryScore>
          </LimitsApplied>
        </WinSPR>

    References:
    Windows Vista Team Blog: Windows Experience Index: An In-Depth Look
    Understand and improve your computer's performance in Windows Vista
    Engineering Windows 7 Blog: Engineering the Windows 7 "Windows Experience Index"
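    Incidentally, the assessment and the XML files under the DataStore path quoted above can be regenerated from an elevated command prompt:

        rem re-run the full Windows Experience Index assessment
        winsat formal
        rem or just the disk test, random reads on drive C:
        winsat disk -ran -read -drive c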

    Read the article

  • VFS: file-max limit 1231582 reached

    - by Rick Koshi
    I'm running a Linux 2.6.36 kernel, and I'm seeing some random errors. Things like ls: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Error 23 Yes, my system can't consistently run an 'ls' command. :( I note several errors in my dmesg output: # dmesg | tail [2808967.543203] EXT4-fs (sda3): re-mounted. Opts: (null) [2837776.220605] xv[14450] general protection ip:7f20c20c6ac6 sp:7fff3641b368 error:0 in libpng14.so.14.4.0[7f20c20a9000+29000] [4931344.685302] EXT4-fs (md16): re-mounted. Opts: (null) [4982666.631444] VFS: file-max limit 1231582 reached [4982666.764240] VFS: file-max limit 1231582 reached [4982767.360574] VFS: file-max limit 1231582 reached [4982901.904628] VFS: file-max limit 1231582 reached [4982964.930556] VFS: file-max limit 1231582 reached [4982966.352170] VFS: file-max limit 1231582 reached [4982966.649195] top[31095]: segfault at 14 ip 00007fd6ace42700 sp 00007fff20746530 error 6 in libproc-3.2.8.so[7fd6ace3b000+e000] Obviously, the file-max errors look suspicious, being clustered together and recent. # cat /proc/sys/fs/file-max 1231582 # cat /proc/sys/fs/file-nr 1231712 0 1231582 That also looks a bit odd to me, but the thing is, there's no way I have 1.2 million files open on this system. I'm the only one using it, and it's not visible to anyone outside the local network. # lsof | wc 16046 148253 1882901 # ps -ef | wc 574 6104 44260 I saw some documentation saying: file-max & file-nr: The kernel allocates file handles dynamically, but as yet it doesn't free them again. The value in file-max denotes the maximum number of file- handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit. Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles -- this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles. Attempts to allocate more file descriptors than file-max are reported with printk, look for "VFS: file-max limit reached". My first reading of this is that the kernel basically has a built-in file descriptor leak, but I find that very hard to believe. It would imply that any system in active use needs to be rebooted every so often to free up the file descriptors. As I said, I can't believe this would be true, since it's normal to me to have Linux systems stay up for months (even years) at a time. On the other hand, I also can't believe that my nearly-idle system is holding over a million files open. Does anyone have any ideas, either for fixes or further diagnosis? I could, of course, just reboot the system, but I don't want this to be a recurring problem every few weeks. As a stopgap measure, I've quit Firefox, which was accounting for almost 2000 lines of lsof output (!) even though I only had one window open, and now I can run 'ls' again, but I doubt that will fix the problem for long. (edit: Oops, spoke too soon. By the time I finished typing out this question, the symptom was/is back) Thanks in advance for any help. And another update: My system was basically unusable, so I decided I had no option but to reboot. But before I did, I carefully quit one process at a time, checking /proc/sys/fs/file-nr after each termination. 
I found that, predictably, the number of open files gradually went down as I closed things down. Unfortunately, it wasn't a large effect. Yes, I was able to clear up 5000-10000 open files, but there were still over 1.2 million left. I shut down just about everything. All interactive shells, except for the one ssh I left open to finish closing down, httpd, even nfs service. Basically everything in the process table that wasn't a kernel process, and there were still an appalling number of files apparently left open. After the reboot, I found that /proc/sys/fs/file-nr showed about 2000 files open, which is much more reasonable. Starting up 2 Xvnc sessions as usual, along with the dozen or so monitoring windows I like to keep open, brought the total up to about 4000 files. I can see nothing wrong with that, of course, but I've obviously failed to identify the root cause. I'm still looking for ideas, since I definitely expect it to happen again. And another update, the next day: I watched the system carefully, and discovered that /proc/sys/fs/file-nr showed a growth of about 900 open files per hour. I shut down the system's only NFS client for the night, and the growth stopped. Mind you, it didn't free up the resources, but it did at least stop consuming more. Is this a known bug with NFS? I'll be bringing the NFS client back online today, and I'll narrow it down further. If anyone is familiar with this behavior, feel free to jump in with "Yeah, NFS4 has this problem, go back to NFS3" or something like that.
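    While the NFS angle is narrowed down, a small stopgap-and-diagnosis sketch: raise the ceiling so the box stays usable, and log the handle counts so the roughly 900-per-hour growth can be lined up against client activity (the numbers are examples):

        # raise the ceiling temporarily (does not fix the leak)
        sysctl -w fs.file-max=2500000

        # append a timestamped sample of allocated/free/max handles every 10 minutes
        while true; do
            echo "$(date '+%F %T') $(cat /proc/sys/fs/file-nr)" >> /var/log/file-nr.log
            sleep 600
        done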

    Read the article

  • Cisco ASA 5505 - L2TP over IPsec

    - by xraminx
    I have followed this document on cisco site to set up the L2TP over IPsec connection. When I try to establish a VPN to ASA 5505 from my Windows XP, after I click on "connect" button, the "Connecting ...." dialog box appears and after a while I get this error message: Error 800: Unable to establish VPN connection. The VPN server may be unreachable, or security parameters may not be configured properly for this connection. ASA version 7.2(4) ASDM version 5.2(4) Windows XP SP3 Windows XP and ASA 5505 are on the same LAN for test purposes. Edit 1: There are two VLANs defined on the cisco device (the standard setup on cisco ASA5505). - port 0 is on VLAN2, outside; - and ports 1 to 7 on VLAN1, inside. I run a cable from my linksys home router (10.50.10.1) to the cisco ASA5505 router on port 0 (outside). Port 0 have IP 192.168.1.1 used internally by cisco and I have also assigned the external IP 10.50.10.206 to port 0 (outside). I run a cable from Windows XP to Cisco router on port 1 (inside). Port 1 is assigned an IP from Cisco router 192.168.1.2. The Windows XP is also connected to my linksys home router via wireless (10.50.10.141). Edit 2: When I try to establish vpn, the Cisco device real time Log viewer shows 7 entries like this: Severity:5 Date:Sep 15 2009 Time: 14:51:29 SyslogID: 713904 Destination IP = 10.50.10.141, Decription: No crypto map bound to interface... dropping pkt Edit 3: This is the setup on the router right now. Result of the command: "show run" : Saved : ASA Version 7.2(4) ! hostname ciscoasa domain-name default.domain.invalid enable password HGFHGFGHFHGHGFHGF encrypted passwd NMMNMNMNMNMNMN encrypted names name 192.168.1.200 WebServer1 name 10.50.10.206 external-ip-address ! interface Vlan1 nameif inside security-level 100 ip address 192.168.1.1 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address external-ip-address 255.0.0.0 ! interface Vlan3 no nameif security-level 50 no ip address ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! 
ftp mode passive dns server-group DefaultDNS domain-name default.domain.invalid object-group service l2tp udp port-object eq 1701 access-list outside_access_in remark Allow incoming tcp/http access-list outside_access_in extended permit tcp any host WebServer1 eq www access-list outside_access_in extended permit udp any any eq 1701 access-list inside_nat0_outbound extended permit ip any 192.168.1.208 255.255.255.240 access-list inside_cryptomap_1 extended permit ip interface outside interface inside pager lines 24 logging enable logging asdm informational mtu inside 1500 mtu outside 1500 ip local pool PPTP-VPN 192.168.1.210-192.168.1.220 mask 255.255.255.0 icmp unreachable rate-limit 1 burst-size 1 asdm image disk0:/asdm-524.bin no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list inside_nat0_outbound nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) tcp interface www WebServer1 www netmask 255.255.255.255 access-group outside_access_in in interface outside timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute http server enable http 192.168.1.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac crypto ipsec transform-set TRANS_ESP_3DES_SHA mode transport crypto ipsec transform-set TRANS_ESP_3DES_MD5 esp-3des esp-md5-hmac crypto ipsec transform-set TRANS_ESP_3DES_MD5 mode transport crypto map outside_map 1 match address inside_cryptomap_1 crypto map outside_map 1 set transform-set TRANS_ESP_3DES_MD5 crypto map outside_map interface inside crypto isakmp enable outside crypto isakmp policy 10 authentication pre-share encryption 3des hash md5 group 2 lifetime 86400 telnet timeout 5 ssh timeout 5 console timeout 0 dhcpd auto_config outside ! dhcpd address 192.168.1.2-192.168.1.33 inside dhcpd enable inside ! group-policy DefaultRAGroup internal group-policy DefaultRAGroup attributes dns-server value 192.168.1.1 vpn-tunnel-protocol IPSec l2tp-ipsec username myusername password FGHFGHFHGFHGFGFHF nt-encrypted tunnel-group DefaultRAGroup general-attributes address-pool PPTP-VPN default-group-policy DefaultRAGroup tunnel-group DefaultRAGroup ipsec-attributes pre-shared-key * tunnel-group DefaultRAGroup ppp-attributes no authentication chap authentication ms-chap-v2 ! ! prompt hostname context Cryptochecksum:a9331e84064f27e6220a8667bf5076c1 : end

  • Setup routing and iptables for new VPN connection to redirect **only** ports 80 and 443

    - by Steve
    I have a new VPN connection (using OpenVPN) to allow me to route around some ISP restrictions. While it is working fine, it is taking all traffic over the VPN. This causes me problems for downloading (my internet connection is a lot faster than the VPN allows) and for remote access: I run an SSH server, and have a daemon running that allows me to schedule downloads via my phone. I have my existing ethernet connection on eth0 and the new VPN connection on tun0. I believe I need to set up the default route to use my existing eth0 connection on the 192.168.0.0/24 network, with the default gateway 192.168.0.1 (my knowledge is shaky, as I haven't done this for a number of years). If that is correct, I'm not exactly sure how to do it! My current routing table is:

        Kernel IP routing table
        Destination     Gateway         Genmask          Flags Metric Ref Use Iface MSS Window irtt
        0.0.0.0         10.51.0.169     0.0.0.0          UG    0      0   0   tun0  0   0      0
        10.51.0.1       10.51.0.169     255.255.255.255  UGH   0      0   0   tun0  0   0      0
        10.51.0.169     0.0.0.0         255.255.255.255  UH    0      0   0   tun0  0   0      0
        85.25.147.49    192.168.0.1     255.255.255.255  UGH   0      0   0   eth0  0   0      0
        169.254.0.0     0.0.0.0         255.255.0.0      U     1000   0   0   eth0  0   0      0
        192.168.0.0     0.0.0.0         255.255.255.0    U     1      0   0   eth0  0   0      0

    After fixing the routing, I believe I need to use iptables to configure prerouting or masquerading to force everything with destination port 80 or 443 over tun0. Again, I'm not exactly sure how to do this! Everything I've found on the internet is trying to do something far more complicated, and sorting the wood from the trees is proving difficult. Any help would be much appreciated.

    UPDATE: So far, from various sources, I've cobbled together the following:

        #!/bin/sh
        DEV1=eth0
        IP1=`ifconfig|perl -nE'/dr:(\S+)/&&say$1'|grep 192.`
        GW1=192.168.0.1
        TABLE1=internet
        TABLE2=vpn
        DEV2=tun0
        IP2=`ifconfig|perl -nE'/dr:(\S+)/&&say$1'|grep 10.`
        GW2=`route -n | grep 'UG[ \t]' | awk '{print $2}'`

        ip route flush table $TABLE1
        ip route flush table $TABLE2

        ip route show table main | grep -Ev ^default | while read ROUTE ; do
          ip route add table $TABLE1 $ROUTE
          ip route add table $TABLE2 $ROUTE
        done

        ip route add table $TABLE1 $GW1 dev $DEV1 src $IP1
        ip route add table $TABLE2 $GW2 dev $DEV2 src $IP2
        ip route add table $TABLE1 default via $GW1
        ip route add table $TABLE2 default via $GW2

        echo "1" > /proc/sys/net/ipv4/ip_forward
        echo "1" > /proc/sys/net/ipv4/ip_dynaddr

        ip rule add from $IP1 lookup $TABLE1
        ip rule add from $IP2 lookup $TABLE2
        ip rule add fwmark 1 lookup $TABLE1
        ip rule add fwmark 2 lookup $TABLE2

        iptables -t nat -A POSTROUTING -o $DEV1 -j SNAT --to-source $IP1
        iptables -t nat -A POSTROUTING -o $DEV2 -j SNAT --to-source $IP2
        iptables -t nat -A PREROUTING -m state --state ESTABLISHED,RELATED -j CONNMARK --restore-mark
        iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j CONNMARK --restore-mark
        iptables -t nat -A PREROUTING -i $DEV1 -m state --state NEW -j CONNMARK --set-mark 1
        iptables -t nat -A PREROUTING -i $DEV2 -m state --state NEW -j CONNMARK --set-mark 2
        iptables -t nat -A PREROUTING -m connmark --mark 1 -j MARK --set-mark 1
        iptables -t nat -A PREROUTING -m connmark --mark 2 -j MARK --set-mark 2
        iptables -t nat -A PREROUTING -m state --state NEW -m connmark ! --mark 0 -j CONNMARK --save-mark
        iptables -t mangle -A PREROUTING -i $DEV2 -m state --state NEW -p tcp --dport 80 -j CONNMARK --set-mark 2
        iptables -t mangle -A PREROUTING -i $DEV2 -m state --state NEW -p tcp --dport 443 -j CONNMARK --set-mark 2

        route del default
        route add default gw 192.168.0.1 eth0

    Now this seems to be working.
Except it isn't! Connections to the blocked websites are going through, and connections not on ports 80 and 443 are using the non-VPN connection. However, port 80 and 443 connections that aren't to the blocked websites are using the non-VPN connection too! As the general goal has been reached, I'm relatively happy, but it would be nice to know why it isn't working exactly right. Any ideas? For reference, I now have 3 routing tables: main, internet, and vpn. Their listings are as follows.

    Main:
        default via 192.168.0.1 dev eth0
        10.38.0.1 via 10.38.0.205 dev tun0
        10.38.0.205 dev tun0  proto kernel  scope link  src 10.38.0.206
        85.removed via 192.168.0.1 dev eth0
        169.254.0.0/16 dev eth0  scope link  metric 1000
        192.168.0.0/24 dev eth0  proto kernel  scope link  src 192.168.0.73  metric 1

    Internet:
        default via 192.168.0.1 dev eth0
        10.38.0.1 via 10.38.0.205 dev tun0
        10.38.0.205 dev tun0  proto kernel  scope link  src 10.38.0.206
        85.removed via 192.168.0.1 dev eth0
        169.254.0.0/16 dev eth0  scope link  metric 1000
        192.168.0.0/24 dev eth0  proto kernel  scope link  src 192.168.0.73  metric 1
        192.168.0.1 dev eth0  scope link  src 192.168.0.73

    VPN:
        default via 10.38.0.205 dev tun0
        10.38.0.1 via 10.38.0.205 dev tun0
        10.38.0.205 dev tun0  proto kernel  scope link  src 10.38.0.206
        85.removed via 192.168.0.1 dev eth0
        169.254.0.0/16 dev eth0  scope link  metric 1000
        192.168.0.0/24 dev eth0  proto kernel  scope link  src 192.168.0.73  metric 1
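    A plausible reason the port 80/443 marking never takes effect, offered as a hedged reading of the script above rather than a confirmed diagnosis: the two mangle rules that set mark 2 match -i $DEV2, i.e. packets arriving on tun0, but traffic this machine originates never traverses PREROUTING at all. Locally generated packets are classified in the mangle OUTPUT chain, so outbound web connections never pick up mark 2 and fall through to the eth0 default route. A minimal sketch of marking in OUTPUT instead (the table name vpn and mark value 2 are taken from the script):

        # mark locally generated web traffic in mangle OUTPUT; PREROUTING never sees it
        iptables -t mangle -A OUTPUT -p tcp --dport 80  -j MARK --set-mark 2
        iptables -t mangle -A OUTPUT -p tcp --dport 443 -j MARK --set-mark 2

        # the fwmark rule from the script (ip rule add fwmark 2 lookup vpn)
        # then routes the marked packets via the vpn table
        ip route flush cache

    The existing SNAT rule on $DEV2 should then rewrite the source address so replies come back over the tunnel.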

  • Bad disks in ancient server

    - by Joel Coel
    I have a 1998-era NetWare 3.12 server that runs everything on our campus: general ledger, purchasing, payroll, student information, grades, you name it. The server has an Adaptec RAID controller with two volumes:

        RAID 1: two 17GB SCSI disks (Seagate ST318417W)
        RAID 5: three 4GB SCSI disks (two Seagate ST34573W and one ST34572W)

    We are currently in the early stages of a project to replace this system, but you don't just jump into a new system like that, and so I need to keep this server running until at least November 2011.

    This week we had not one but two hard drives fail. Thankfully they are from different volumes and we're able to keep running for the moment, but given how close together these failures came, I have serious doubts that I'll be able to avoid catastrophic failure before the November target unless I restore the RAID redundancy: one more drive failure anywhere and I'm completely hosed. We are fortunate enough to have exact-match "spares" lying around for both drives, but the spares are in unknown condition. I tried simply swapping them in, but the RAID controller isn't smart enough to handle this and it renders the system unbootable.

    As for the RAID controller itself, there is a utility I can get into during POST via a Ctrl-A shortcut, but I can't do much useful from there. To actually manage volumes I must first boot into NetWare, at which point I can use CI/O Array Management Software Version 2.0 to look at volume information. I suspect that the normal way to manage things is to boot from a special floppy with the controller software on it, but that floppy is long gone.

    Going through the options in the RAID software, I think the only supported way to replace a disk in an existing RAID volume is to physically add the disk, boot up and configure it as a "spare" for a volume, force the volume to use the spare to replace an existing down disk (and at this point I'm only guessing) so that the down disk becomes the spare, repair the volume, remove the spare from the volume, and then shut down and remove the disk. Then start all over for the other failed disk. All this amounts to a lot of downtime, assuming I can even make it work and that my spares are any good.

    As for finding reliable spares, I have no clue where to even begin looking for a new 4GB SCSI drive, or even which exact SCSI variant I'm looking for, as the standard has gone through a few different iterations over time.

    Another option is to migrate this to a virtual machine (Hyper-V), but all previous attempts we've made in this area have failed to get very far. When this machine was installed I was just graduating from high school, and so it requires lower-level knowledge of NetWare and DOS than I ever developed, or have since forgotten (I'm not exactly a DOS neophyte, either). Part of my problem is that this is a high-use server, and taking it down for a few days to figure things out isn't gonna fly very well.

    As for the question, I'm looking for anything that might be helpful in this situation: a recommendation on a place to find good spares from this era, personal experience repairing RAID volumes using a similar controller or building a Hyper-V VM from an old NetWare server, a line on a floppy with better software for the RAID controller, a recommendation for a good Novell consultant in Nebraska who would be able to put things right, a whole other option I haven't considered yet, etc. 
Update: For backups, we have good (recently verified via restore) backups of the data only -- nothing for the software that actually runs things.

    Update 2: Just a progress report: I currently have a working NetWare 3.12 install in VMware Server 2.0, thanks largely to the guide I found here: http://cerbulescubogdan.blogspot.com/2010/11/novell-netware-312-on-vmware.html The next steps are preparing empty NetWare volumes to match the additional volumes on my existing server, taking a dump of everything on the C:\ drive and NetWare volumes on the existing server, figuring out from that information what modules need to be added to NetWare, installing my licenses (we do still have that disk, if it's any good), and moving data over. I have approval to bring the server down for a week after the first of the year (sadly not before), so, aside from creating the empty volumes, the rest of the work will have to wait until then.

    Final Update (Jan 5, 2011): I was able to get spares working in both RAID arrays without data loss this week. Both are now listed by the controller as "FAULT TOLERANT" (yay!). I was also able to build on the progress from my last update and now have a functional "spare" server in VMware Server 2.0. The spare can run and use our ERP software, but I can't put it into production because I can't (yet) print from that box (and I have no idea why). Even so, this VM will do in a pinch if I have no other choice, and between it and the repaired RAID arrays I'm comfortable pushing on until I can junk the machine in November.

  • Can't access shared drive when connecting over VPN

    - by evolvd
    I can ping all network devices, but it doesn't seem that DNS is resolving their hostnames. ipconfig /all shows that I am pointing to the correct DNS server. I can ping the DNS server by name and that resolves, but it won't resolve any other names. Split tunnel is set up, so outside DNS is resolving fine.

    So one issue might be DNS, but I have the IP address of the server share, so I figured I could just get to it that way, for example: \\10.0.0.1\ -- well, I can't get to it that way either; I get "the specified network name is no longer available". I can ping it, but I can't open the share.

    Below is the ASA config:

        ASA Version 8.2(1)
        !
        hostname KG-ASA
        domain-name example.com
        names
        !
        interface Vlan1
         nameif inside
         security-level 100
         ip address 10.0.0.253 255.255.255.0
        !
        interface Vlan2
         nameif outside
         security-level 0
         ip address dhcp setroute
        !
        interface Ethernet0/0
         switchport access vlan 2
        !
        interface Ethernet0/1
        !
        interface Ethernet0/2
        !
        interface Ethernet0/3
        !
        interface Ethernet0/4
        !
        interface Ethernet0/5
        !
        interface Ethernet0/6
        !
        interface Ethernet0/7
        !
        ftp mode passive
        clock timezone EST -5
        clock summer-time EDT recurring
        dns domain-lookup outside
        dns server-group DefaultDNS
         name-server 10.0.0.101
         domain-name blah.com
        access-list OUTSIDE_IN extended permit tcp any host 10.0.0.253 eq 10000
        access-list OUTSIDE_IN extended permit tcp any host 10.0.0.253 eq 8333
        access-list OUTSIDE_IN extended permit tcp any host 10.0.0.253 eq 902
        access-list SPLIT-TUNNEL-VPN standard permit 10.0.0.0 255.0.0.0
        access-list NONAT extended permit ip 10.0.0.0 255.255.255.0 10.0.1.0 255.255.255.0
        pager lines 24
        logging asdm informational
        mtu inside 1500
        mtu outside 1500
        ip local pool IPSECVPN-POOL 10.0.1.2-10.0.1.50 mask 255.255.255.0
        icmp unreachable rate-limit 1 burst-size 1
        asdm image disk0:/asdm-621.bin
        no asdm history enable
        arp timeout 14400
        global (outside) 1 interface
        nat (inside) 0 access-list NONAT
        nat (inside) 1 0.0.0.0 0.0.0.0
        static (inside,outside) tcp interface 10000 10.0.0.101 10000 netmask 255.255.255.255
        static (inside,outside) tcp interface 8333 10.0.0.101 8333 netmask 255.255.255.255
        static (inside,outside) tcp interface 902 10.0.0.101 902 netmask 255.255.255.255
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
        timeout tcp-proxy-reassembly 0:01:00
        dynamic-access-policy-record DfltAccessPolicy
        aaa authentication enable console LOCAL
        aaa authentication http console LOCAL
        aaa authentication serial console LOCAL
        aaa authentication ssh console LOCAL
        aaa authentication telnet console LOCAL
        http server enable
        http 10.0.0.0 255.255.0.0 inside
        http 0.0.0.0 0.0.0.0 outside
        no snmp-server location
        no snmp-server contact
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec transform-set myset esp-aes esp-sha-hmac
        crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto ipsec security-association lifetime seconds 28800
        crypto ipsec security-association lifetime kilobytes 4608000
        crypto dynamic-map dynmap 1 set transform-set myset
        crypto dynamic-map dynmap 1 set reverse-route
        crypto map IPSEC-MAP 65535 ipsec-isakmp dynamic dynmap
        crypto map IPSEC-MAP interface outside
        crypto isakmp enable outside
        crypto isakmp policy 10
         authentication pre-share
         encryption 3des
         hash sha
         group 2
         lifetime 86400
        crypto isakmp policy 65535
         authentication pre-share
         encryption aes
         hash sha
         group 2
         lifetime 86400
        telnet 0.0.0.0 0.0.0.0 inside
        telnet timeout 5
        ssh 0.0.0.0 0.0.0.0 inside
        ssh 70.60.228.0 255.255.255.0 outside
        ssh 74.102.150.0 255.255.254.0 outside
        ssh 74.122.164.0 255.255.252.0 outside
        ssh timeout 5
        console timeout 0
        dhcpd dns 10.0.0.101
        dhcpd lease 7200
        dhcpd domain blah.com
        !
        dhcpd address 10.0.0.110-10.0.0.170 inside
        dhcpd enable inside
        !
        threat-detection basic-threat
        threat-detection statistics access-list
        no threat-detection statistics tcp-intercept
        ntp server 63.111.165.21
        webvpn
         enable outside
         svc image disk0:/anyconnect-win-2.4.1012-k9.pkg 1
         svc enable
        group-policy EASYVPN internal
        group-policy EASYVPN attributes
         dns-server value 10.0.0.101
         vpn-tunnel-protocol IPSec l2tp-ipsec svc webvpn
         split-tunnel-policy tunnelspecified
         split-tunnel-network-list value SPLIT-TUNNEL-VPN
        !
        tunnel-group client type remote-access
        tunnel-group client general-attributes
         address-pool (inside) IPSECVPN-POOL
         address-pool IPSECVPN-POOL
         default-group-policy EASYVPN
         dhcp-server 10.0.0.253
        tunnel-group client ipsec-attributes
         pre-shared-key *
        tunnel-group CLIENTVPN type ipsec-l2l
        tunnel-group CLIENTVPN ipsec-attributes
         pre-shared-key *
        !
        class-map inspection_default
         match default-inspection-traffic
        !
        !
        policy-map global_policy
         class inspection_default
          inspect icmp
        !
        service-policy global_policy global
        prompt hostname context

    I'm not sure where I should go next with troubleshooting. nslookup result:

        Default Server: blahname.blah.lan
        Address: 10.0.0.101
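    Since ICMP works but SMB fails with "the specified network name is no longer available", two classic culprits worth ruling out are MTU/fragmentation over the IPsec tunnel (small pings pass, full-size SMB frames don't) and NetBIOS name resolution. A few hedged checks from the XP client; the drive letter and share name below are placeholders, not taken from the question, and note the question's UNC example used 10.0.0.1 while the DNS and static entries in the config point at 10.0.0.101, so checking both can't hurt:

        rem find the largest payload that crosses the tunnel unfragmented;
        rem if this fails well below ~1400 bytes, MTU is a likely suspect
        ping -f -l 1400 10.0.0.101

        rem map the share by IP to take DNS/NetBIOS naming out of the picture
        net use Z: \\10.0.0.101\share

        rem check what the server has registered for NetBIOS
        nbtstat -A 10.0.0.101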

  • Web browsing is fast, but downloads are slow

    - by Ricket
    I work for a company on my university's campus, helping with general IT problems and some web development. But lately there has been a problem that has me and my boss completely stumped. We, plus one contractor, make up the entire IT department, so I'm reaching out to you for help.

    All around the office, we have wall jacks. These collect in a closet down the hall and all plug into a switch. This switch, along with our individual server jacks, plugs into another switch, and that switch plugs into our firewall hardware. Then the firewall is connected out to our campus network.

    Our campus internet is, well, very fast. I don't know the exact terms, tiers, etc., but we have thousands of students and downloads can run as fast as 10 MB/s at night; uploads are sometimes even faster. I think we're practically ISP level. In short, I have a lot of faith that it is not the campus side of things that is causing the problem, combined with other evidence I'll mention in a moment.

    So, our symptoms: web browsing is fast. Web pages, images, etc. load instantly. No problems there. But when I go to download something, the download starts fast but very quickly (in a matter of seconds) drops to nearly 0. Often it will actually drop to 0 and time out. This happens with even very small files, 1 MB or less. It smells to me like a QoS sort of thing. I'm not entirely sure, and I wanted to get your opinions first. My boss is hesitant to touch our firewall, much less let me touch it, and it was set up and is managed remotely by a consultant.

    These problems don't seem tied to a time of day; I've tried downloads after 5:00 and the same thing happens. From my desk, I can turn on my wireless adapter and pick up the campus wireless access point. If I unplug ethernet and connect to it, downloads are fast. This adds to my suspicion that the problem is limited to our company network.

    Also, a number of weeks ago the consultant upgraded our firewall firmware. Suddenly everything was very fast. I tested with downloads from Sun and speedtest.net and things were blazing fast, as they should be with our campus internet! It was wonderful, and I figured the slow speeds had been an old firmware bug. In a matter of days, things steadily declined until they were back to the old symptoms.

    Oh, and we have antivirus installed on every computer, and we keep it up to date. Though I suppose the possibility is still there that someone could have spyware which is bogging down our internet, in which case what is the easiest/best way to find this out? (Maybe this should go in a separate question.)

    Thank you for your patience in reading all of this. Do you have any ideas as to what I can try? Is this something that you've experienced before? What sort of tools or methods can I use to try and diagnose the problem? P.S. everything here is Windows: Windows Server 2003 and 2008 on our servers, and Windows XP on employees' machines.

    Update: We are submitting a ticket to the university to take a look and see if they notice anything unusual and/or can suggest methods for us to try to pinpoint the problem. Hopefully they'll be helpful! I'll update this to let you know what goes on.

    Update again: We found a hub (yes, a HUB) right between our campus connection and our firewall. It had only those two ethernet cables plugged into it, nothing else. After removing the hub, our speeds jumped up to several Mbps. In talking with the campus, we also got them to run a gigabit line to our firewall in place of the 100 Mbps line. As of Friday, we are at about 65 Mbps up and down (according to speedtest.net at 8am)!! Go NC State!!
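    For anyone hitting similar symptoms, a hedged sketch of how to bisect a path like this with iperf (the hostnames are placeholders; any host you control on each side of the firewall will do). Throughput that is healthy on one segment but collapses across another points at that segment, which is exactly how a stray hub or a duplex mismatch shows up:

        # on a test box just inside the firewall
        iperf -s

        # from a desk machine: tests the office LAN only (should be near wire speed)
        iperf -c inside-host -t 30

        # from the desk machine to a host beyond the firewall: tests the full path
        iperf -c campus-host -t 30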

  • Moving a Drupal between linux servers, best practice to avoid file-ownership problems

    - by zero
    I want to port a Drupal Commons 6x24 site from a local LAMP stack to a production web server. Both systems run openSUSE Linux. How do I do this, and what are the most important steps? How should I handle file ownership? It's important for me to have full control of the file ownership. If I use the wwwrun account, I frequently run into problems due to a very strict webserver admin. For the long history of looking for fixes and solutions, see this thread, and, even more interesting, this very long and impressive thread here. All the trouble I run into has to do with file ownership and permissions.

    This is my current setup. Note: this was just a quick hacked installation -- quick and dirty. My real interest is in the general options I have for porting a Drupal site from one Linux machine to another.

        linux-vi17:/srv/www/htdocs/com624 # ls -l
        total 224
        -rwxrwxrwx  1 root www 45285 19. Jan 00:54 CHANGELOG.txt
        -rwxrwxrwx  1 root www   925 19. Jan 00:54 COPYRIGHT.txt
        -rwxrwxrwx  1 root www   206 19. Jan 00:54 cron.php
        drwxrwxrwx  2 root www  4096 19. Jan 00:54 includes
        -rwxrwxrwx  1 root www   923 19. Jan 00:54 index.php
        -rwxrwxrwx  1 root www  1244 19. Jan 00:54 INSTALL.mysql.txt
        -rwxrwxrwx  1 root www  1011 19. Jan 00:54 INSTALL.pgsql.txt
        -rwxrwxrwx  1 root www 47073 19. Jan 00:54 install.php
        -rwxrwxrwx  1 root www 15572 19. Jan 00:54 INSTALL.txt
        -rwxrwxrwx  1 root www 14940 19. Jan 00:54 LICENSE.txt
        -rwxrwxrwx  1 root www  1858 19. Jan 00:54 MAINTAINERS.txt
        drwxrwxrwx  3 root www  4096 19. Jan 00:54 misc
        drwxrwxrwx 35 root www  4096 19. Jan 00:54 modules
        drwxrwxrwx  4 root www  4096 19. Jan 00:54 profiles
        -rwxrwxrwx  1 root www  1470 19. Jan 00:54 robots.txt
        drwxrwxrwx  2 root www  4096 19. Jan 00:54 scripts
        drwxrwxrwx  4 root www  4096 19. Jan 00:54 sites
        drwxrwxrwx  7 root www  4096 19. Jan 00:54 themes
        -rwxrwxrwx  1 root www 26250 19. Jan 00:54 update.php
        -rwxrwxrwx  1 root www  4864 19. Jan 00:54 UPGRADE.txt
        -rwxrwxrwx  1 root www   294 19. Jan 00:54 xmlrpc.php
        linux-vi17:/srv/www/htdocs/com624 #

    Thanks to BetaRide's answer, here is a quick overview of drush's rsync functionality (http://drush.ws/):

        core-rsync
          Rsync the Drupal tree to/from another server using ssh.

        Examples:
          drush rsync @dev @stage           Rsync Drupal root from dev to stage (one of which must be local).
          drush rsync ./ @stage:%files/img  Rsync all files in the current directory to the 'img' directory
                                            in the file storage folder on stage.

        Arguments:
          source       May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php.
          destination  May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php.

        Options:
          --mode                 The unary flags to pass to rsync; --mode=rultz implies rsync -rultz. Default is -az.
          --RSYNC-FLAG           Most rsync flags passed to drush rsync will be passed on to rsync. See rsync documentation.
          --exclude-conf         Excludes settings.php from being rsynced. Default.
          --include-conf         Allow settings.php to be rsynced.
          --exclude-files        Exclude the files directory.
          --exclude-sites        Exclude all directories in "sites/" except for "sites/all".
          --exclude-other-sites  Exclude all directories in "sites/" except for "sites/all" and the site directory
                                 for the site being synced. Note: if the site directory differs between the source
                                 and destination, use --exclude-sites followed by "drush rsync @from:%site @to:%site".
          --exclude-paths        List of paths to exclude, separated by : (Unix-based systems) or ; (Windows).
          --include-paths        List of paths to include, separated by : (Unix-based systems) or ; (Windows).

        Topics:
          docs-aliases  Site aliases overview with examples

        Aliases: rsync
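    Building on that help text, a minimal sketch of one way to do the move and then take ownership back explicitly. The alias names @local and @prod are assumptions (they would live in aliases.drushrc.php); the wwwrun:www owner and the /srv/www/htdocs/com624 docroot are taken from the listing above, and drush sql-sync is the companion command for the database:

        # push the code tree; settings.php is excluded by default (--exclude-conf)
        drush rsync @local @prod

        # push the database as well (requires working aliases with db credentials)
        drush sql-sync @local @prod

        # on the production box: set ownership and permissions deliberately,
        # rather than keeping whatever rsync carried over
        chown -R wwwrun:www /srv/www/htdocs/com624
        find /srv/www/htdocs/com624 -type d -exec chmod 755 {} \;
        find /srv/www/htdocs/com624 -type f -exec chmod 644 {} \;
        # Drupal only needs write access under the files directory
        chmod -R 775 /srv/www/htdocs/com624/sites/default/files

    If the admin objects to wwwrun owning the whole tree, an alternative is to own the files as a deploy user and give the www group read-only access, confining write access to the files directory.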
