Search Results

Search found 45843 results on 1834 pages for 'network access'.


  • Integrating Flickr with an ASP.NET application

    - by sreejukg
    Flickr is the popular photo management and sharing application offered by Yahoo. Its services let you store and share photos and videos online, and Flickr offers strong API support for almost every service it provides. Using this API, developers can integrate Flickr photos into their own public websites. Since 2005, developers have collaborated on top of Flickr's APIs to build fun, creative, and gorgeous experiences around photos that extend beyond Flickr. In this article I am going to demonstrate how easily you can bring photos stored on Flickr to your website.

    Let me explain the scenario this article addresses. I have a Flickr account where I upload photos and share them in the many ways Flickr offers. I also have a public website, and instead of re-uploading the photos to that website, I want to display them from Flickr, with complete control over which photos are shown. The Flickr documentation confirms there is API support ready to address this scenario (and more).

    Flickr API for ASP.NET. To integrate Flickr with ASP.NET applications, there is a library available on CodePlex: http://flickrnet.codeplex.com/ Visit the URL and download the latest version. The download is a Zip file which unpacks to a number of DLLs; since I am building an ASP.NET application, I need FlickrNet.dll. The download also contains a help file (.chm) for your reference. Once you have the DLL, you can use the Flickr API from your website. I assume you have a Flickr account and are familiar with Flickr's services.

    Arrange your photos using Sets in Flickr. In Flickr you can define sets and add your uploaded photos to them. A set is comparable to a photo album: a logical collection of photos, and an excellent option for categorizing them. Typically you will have a number of sets, each holding a few photos, and you can write an application that brings photos from specific sets to your website. For the purpose of this article I created some sets and added photos to them. Once you are logged in to Flickr, you can find Sets under the menu; the Sets page lists all the sets you have created. As you will notice, I uploaded some sample images just to test the functionality - I wish I could take better photos, so please bear with me. I created two photo sets named Blue Album and Red Album. Clicking a set's image takes you to the corresponding set page. The set "Red Album" contains 4 photos, and the set has a unique ID (highlighted in the URL); you can retrieve the photos using that set ID from your application. In this article I am going to retrieve the images from Red Album in my ASP.NET page. For that I first need to set up the Flickr API for my usage.

    Configure the Flickr API key. As mentioned, we are going to use the Flickr API to retrieve the photos stored in Flickr, and to get access to the API you need an API key. To create one, navigate to http://www.flickr.com/services/apps/create/ and click on the Request an API key link. You need to tell Flickr whether your application is commercial or non-commercial; I selected a non-commercial key. Enter the requested information about your application and click the Submit button, and Flickr will create the API key for your application. Generating a non-commercial API key is very easy: in a couple of steps the key is generated and you can use it in your application immediately.

    An ASP.NET application for retrieving photos. Now we need to write an ASP.NET application that displays pictures from Flickr. Create an empty web application (I named it FlickerIntegration) and add a reference to FlickrNet.dll. Add a web form page where you will retrieve and display the photos (I named it Gallery.aspx). I am going to explain the Gallery.aspx code line by line here.

    First, add a reference to the FlickrNet namespace:

    using FlickrNet;

    Then create a Flickr object using your API key:

    Flickr f = new Flickr("<yourAPIKey>");

    When you retrieve photos, you can decide which fields to fetch from Flickr. Every photo in Flickr carries a lot of information, and retrieving everything will affect performance. For demonstration purposes I retrieved all the available fields:

    PhotoSearchExtras.All

    If you want to specify individual fields, you can combine them with the logical OR operator (|). For example, the following statement retrieves the owner name and the date taken:

    PhotoSearchExtras extraInfo = PhotoSearchExtras.OwnerName | PhotoSearchExtras.DateTaken;

    Then retrieve all the photos from a photo set using the PhotosetsGetPhotos method, passing the PhotoSearchExtras value created earlier:

    PhotosetPhotoCollection photos = f.PhotosetsGetPhotos("72157629872940852", extraInfo);

    The PhotosetsGetPhotos method returns a collection of Photo objects, which you can walk with a foreach statement:

    foreach (Photo p in photos)
    {
        //access each photo's properties
    }

    The Photo class has many properties that map to the photo's properties in Flickr; the .chm documentation that comes with the CodePlex download is a great asset for understanding the fields. In the code above I used the following:

    p.LargeUrl - retrieves the large image URL for the photo
    p.ThumbnailUrl - retrieves the thumbnail URL for the photo
    p.Title - retrieves the title of the photo
    p.DateUploaded - retrieves the date of upload

    Visual Studio IntelliSense will show you all the properties, so it is easy to find the ones you are looking for, and most of them are self-explanatory. In the code above I simply pushed the photos onto the page; in a real application you can combine the retrieved photos with jQuery libraries to create animated photo galleries, slideshows and so on.

    Configuration and troubleshooting. If you get an access denied error while executing the code, you need to disable caching in the Flickr API. FlickrNet caches the retrieved photos to your local disk, and you can specify a cache folder to which the application needs write permission:

    Flickr.CacheLocation = Server.MapPath("./FlickerCache/");

    If the application doesn't have write permission to the cache folder, it will throw the access denied error. If you cannot give write permission to the cache folder, then you must disable caching from code:

    Flickr.CacheDisabled = true;

    Disabling the cache will have an impact on performance, so take care! You can also define the Flickr settings in the web.config file; you can find the documentation here: http://flickrnet.codeplex.com/wikipage?title=ExampleConfigFile&ProjectName=flickrnet

    Flickr is a great place for storing and sharing photos, and the API access allows developers to integrate seamlessly with the photos uploaded on Flickr.
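
    As a recap, the fragments above can be assembled into a minimal Gallery.aspx code-behind. This is a sketch built only from the calls quoted in the article; the placeholder API key, the set ID from the Red Album example, and the Literal control named litGallery are illustrative assumptions, not code from the original post:

        using System;
        using System.Web.UI;
        using FlickrNet;

        public partial class Gallery : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Illustrative placeholder; substitute the key Flickr issued to you.
                Flickr f = new Flickr("<yourAPIKey>");

                // Avoids the access denied error when no writable cache folder exists.
                Flickr.CacheDisabled = true;

                // Fetch only the fields we actually display.
                PhotoSearchExtras extraInfo =
                    PhotoSearchExtras.OwnerName | PhotoSearchExtras.DateTaken;

                // Set ID taken from the article's "Red Album" example.
                PhotosetPhotoCollection photos =
                    f.PhotosetsGetPhotos("72157629872940852", extraInfo);

                foreach (Photo p in photos)
                {
                    // litGallery is an assumed <asp:Literal> control on the page.
                    litGallery.Text += string.Format(
                        "<a href='{0}'><img src='{1}' alt='{2}' /></a>",
                        p.LargeUrl, p.ThumbnailUrl, p.Title);
                }
            }
        }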


  • It's not just “Single Sign-on” by Steve Knott (aurionPro SENA)

    - by Greg Jensen
    It is true that Oracle Enterprise Single Sign-on (Oracle ESSO) started out as purely an application single sign-on tool, but as we have seen in the previous articles in this series the product has matured into a suite of tools that can do more than just automated single sign-on: it can also provide a rapidly deployed, cost-effective solution to many demanding password management problems. In this last article of the series I would like to discuss three cases where customers faced password scenarios that required more than just single sign-on, and how some of the less well known tools in the Oracle ESSO suite "kitbag" helped solve these challenges.

    Case #1. One of the issues often faced by our customers is how to keep their applications compliant. I had a client who liked the idea of automated single sign-on for most of his applications but had a key requirement to actually increase the security for one specific SOX application. For the SOX application he wanted to secure access by using two-factor authentication with a smartcard. The problem was that the application did not support two-factor authentication. The solution was to use a feature of the Oracle ESSO suite called Authentication Manager. This feature enables you to have multiple authentication methods for the same user, which in this case were a smartcard and the Windows password. Within Authentication Manager each authenticator can be configured with a security grade, so we gave the smartcard a high grade and the Windows password a normal grade. Security grading in Oracle ESSO can be configured on a per-application basis, so we set the SOX application to require the higher-grade smartcard authenticator. The end result was that the user enjoyed automated single sign-on for most applications, but when the SOX application was launched, ESSO required the user to present their smartcard before being given access.

    Case #2. Another example of solving compliance issues was the case of a large energy company with a number of core billing applications. New regulations required users to change their password regularly and to use a complex password. The problem facing the customer was that the core billing applications did not have any native password change functionality, and the applications could not be replaced because of the cost and time required to re-develop them. With a reputation for innovation, aurionPro SENA was approached to solve this problem using Oracle ESSO. Oracle ESSO has a password expiry feature that can be triggered periodically based on the timestamp of the user's last password creation, so our strategy was to leverage this feature to provide the password change experience. The trigger can launch an application's change password event; in this scenario, however, there was no native change password feature to launch, so a "dummy" change password screen was created to imitate the missing function and connect to the application database on behalf of the user. Oracle ESSO was configured to trigger a change password event every 60 days. After this period, if the user launched the application, Oracle ESSO would detect the logon screen and invoke the password expiry feature: it would trigger the "dummy" screen, detect it automatically as the application's change password screen, and insert a complex password on behalf of the user. After the password event had completed, the user was logged on to the application with their new password. All this was provided at a fraction of the cost of re-developing the core applications.

    Case #3. Recent popular initiatives such as BYOD and working-from-home schemes bring with them many challenges in administering "unmanaged machines" and sometimes "unmanageable users." In a recent case, a client had a dispersed community of casual contractors who worked for the business using their own laptops to access applications. To improve security around password management, the goal was to provision the passwords directly to these contractors. In a previous article we saw how Oracle ESSO can provision passwords through Provisioning Gateway, but the challenge in this scenario was how to get the Oracle ESSO agent to a casual contractor on an unmanaged machine. The answer was to use another tool in the suite, Oracle ESSO Anywhere. This component can compile the normal Oracle ESSO functionality into a deployment package that can be made available from a website, in a similar way to a streamed application. The ESSO Anywhere agent does not actually install into the registry or Program Files but runs in a folder within the user's profile, so no local administrator rights are required for installation. The package can also be configured to stay persistent or to disable itself at the end of the user's session. In this case the user just needed to be told where the website package was located and to download it. Once the download was complete the agent started automatically, and the user was provided with single sign-on to their applications without ever knowing the application passwords.

    Finally, as we have seen in this series, Oracle ESSO not only has great utilities in its own toolbox but also integrates directly with Oracle Privileged Account Manager, Oracle Identity Manager and Oracle Access Manager. Integrated with these tools, it provides a complete and complementary platform to address even the most complex identity and access management requirements. So what next for Oracle ESSO? "Agentless ESSO available in the cloud" - but that will be a subject for a future Oracle ESSO series!


  • Java FAQ: Everything you need to know

    - by Bruno.Borges
    I frequently receive e-mails from customers with questions like "when will the next version of Java be released?", "when will Java expire?" or "what changes are in the next version?". So I decided to write this FAQ, answering these questions and many others. This post will always be kept up to date, so if you have a question, send it to me on Twitter @brunoborges.

    What is the difference between the Oracle JDK and the OpenJDK?
    The OpenJDK project serves as the open source reference implementation of Java Standard Edition. Companies such as Oracle, IBM, and Azul Systems support and invest in the OpenJDK project to keep evolving the Java platform. The Oracle JDK is based on OpenJDK, but brings additional tools such as Mission Control, and its virtual machine has some advanced features such as the Flight Recorder. Up to version 6, Oracle offered two virtual machines: JRockit (BEA) and HotSpot (Sun). As of version 7, Oracle unified the virtual machines and brought the advanced JRockit features into the HotSpot VM. See also the OpenJDK FAQ.

    Where can I get beta Early Access binaries of JDK 7, JDK 8, and JDK 9 for testing?
    Under OpenJDK there is a dedicated project for each Java version. In these projects you can find beta Early Access binaries as well as the source code:
    JDK 6 - http://jdk6.java.net/
    JDK 7 - http://jdk7.java.net/
    JDK 8 - http://jdk8.java.net/
    JDK 9 - http://jdk9.java.net/

    When does Oracle support for Java SE 6, 7, and 8 end?
    Only products and versions with an official release are supported by Oracle (for example, there is no support for beta binaries of JDK 7, JDK 8, or JDK 9). There are two categories of dates a Java user should be aware of:
    EOPU - End of Public Updates: the moment when Oracle no longer makes updates publicly available.
    Oracle Support: Oracle's support policy for products, including Oracle Java SE.
    Oracle Java SE is a product, and its support periods are therefore governed by the Oracle Lifetime Support Policy; consult that document for up-to-date dates for each Java version. Oracle Java SE 6 has already reached EOPU (End of Public Updates) and is now maintained and updated only for customers with a commercial support contract. For more information, see the Oracle Java SE Support page. The most important thing is to be aware of the EOPU dates for Oracle's Java SE versions: see the Oracle Java SE Support Roadmap page and look for the table named Java SE Public Updates, where you will find the date on which a given Java version reaches EOPU.

    How does Java versioning work?
    In 2013, Oracle announced a new Java versioning scheme, both to make it easy to identify whether a release is a CPU or an LFR release and to simplify the planning and development of fixes and features for future versions.
    CPU - Critical Patch Update: updates with security fixes. The version number is a multiple of 5, or has 1 added to keep the number odd. Examples: 7u45, 7u51, 7u55.
    LFR - Limited Feature Release: updates with functional fixes, performance improvements, and new features. Version numbers are even multiples of 20, ending in 0. Examples: 7u40, 7u60, 8u20.

    When is the next Java SE security update (CPU)?
    CPU releases are controlled and pre-scheduled by Oracle and apply to all products, including Oracle Java SE. These releases happen every 3 months, always on the Tuesday closest to the 17th of January, April, July, and October. See the Critical Patch Updates, Security Alerts and Third Party Bulletin page for upcoming dates. If you are interested, you can receive these bulletins directly by e-mail; see how to subscribe to the Oracle Security Bulletin.

    When is the next Java SE feature update (LFR)?
    Oracle reserves the right not to announce these dates, as it does for all its products. However, you can follow the development of the next version on the OpenJDK project sites. The next version of JDK 7 will be update 60, and beta Early Access binaries are already available for testing. The next version of JDK 8 will be update 20, and beta Early Access binaries are already available for testing.

    Where can I see the changes and fixes in the next version of Java?
    Oracle publishes a changelog for each beta Early Access binary released on the Java.net portal:
    JDK 7 update 60 changelogs
    JDK 8 update 20 changelogs

    When will the Java on my machine (or my user's machine) expire?
    Knowing Java's versioning scheme and the CPU release cadence, a user can determine when a given Java update will expire. In any case, with each new update Oracle states directly in that version's release notes when the update is due to expire. For example, the release notes of Oracle Java SE 7 update 55 say the following in the JRE Expiration Date section: "The JRE expires whenever a new release with security vulnerability fixes becomes available. Critical patch updates, which contain security vulnerability fixes, are announced one year in advance on Critical Patch Updates, Security Alerts and Third Party Bulletin. This JRE (version 7u55) will expire with the release of the next critical patch update scheduled for July 15, 2014. For systems unable to reach the Oracle Servers, a secondary mechanism expires this JRE (version 7u55) on August 15, 2014. After either condition is met (new release becoming available or expiration date reached), the JRE will provide additional warnings and reminders to users to update to the newer version. For more information, see JRE Expiration Date."
    In other words, version 7u55 will expire with the release of the next CPU, pre-scheduled for July 15, 2014; and if the user's computer cannot communicate with Oracle's servers, this version will forcibly expire on August 15, 2014 (through a mechanism built into 7u55). Users are not required to update to LFR versions, so even with the release of 7u60, the current 7u55 will not expire. See also the release notes of Oracle Java SE 8 update 5.

    I found a bug. How do I report bugs or problems in Java SE to Oracle?
    Whenever possible, test with the beta binaries before the final version is released. If you find any problem with these beta binaries, please describe it in the JDK Project Feedback forum. If you find a problem in a final version of Java, use the Bug Report form. Important: bugs reported through these systems are not considered Support and therefore have no response SLA. Oracle reserves the right to keep a bug public or private, and to inform the user, or not, about the progress of the fix.

    I have a question that was not answered here. What should I do?
    If you have a question that was not answered here, send it to bruno.borges_at_oracle.com and, if it is relevant, I will try to answer it in this article. For other questions, contact me on my Twitter @brunoborges.


  • Easier ASP.NET MVC Routing

    - by Steve Wilkes
    I've recently refactored the way Routes are declared in an ASP.NET MVC application I'm working on, and I wanted to share part of the system I came up with: a really easy way to declare and keep track of ASP.NET MVC Routes, which then allows you to find the name of the Route which has been selected for the current request.

    Traditional MVC Route Declaration. Traditionally, ASP.NET MVC Routes are added to the application's RouteCollection using overloads of the RouteCollection.MapRoute() method; for example, this is the standard way the default Route which matches /controller/action URLs is created:

    routes.MapRoute(
        "Default",
        "{controller}/{action}/{id}",
        new { controller = "Home", action = "Index", id = UrlParameter.Optional });

    The first argument declares that this Route is to be named 'Default', the second specifies the Route's URL pattern, and the third contains the URL pattern segments' default values. To then write a link to a URL which matches the default Route in a View, you can use the HtmlHelper.RouteLink() method, like this:

    @this.Html.RouteLink("Default", new { controller = "Orders", action = "Index" })

    ...which substitutes 'Orders' into the {controller} segment of the default Route's URL pattern, and 'Index' into the {action} segment. The {id} segment was declared optional and isn't specified here. That's about the most basic thing you can do with MVC routing, and I already have reservations:

    - I've duplicated the magic string "Default" between the Route declaration and the use of RouteLink(). This isn't likely to cause a problem for the default Route, but once you get to dozens of Routes the duplication is a pain.
    - There's no easy way to get from the RouteLink() method call to the declaration of the Route itself, so getting the names of the Route's URL parameters correct requires some effort.
    - The call to MapRoute() is quite verbose; with dozens of Routes this gets pretty ugly.
    - If at some point during a request I want to find out the name of the Route which has been matched... I can't.

    To get around these issues, I wanted to achieve the following:

    - Make declaring a Route very easy, using as little code as possible.
    - Introduce a direct link between where a Route is declared, where the Route is defined and where the Route's name is used, so I can use Visual Studio's Go To Definition to get from a call to RouteLink() to the declaration of the Route I'm using, making it easier to make sure I use the correct URL parameters.
    - Create a way to access the currently-selected Route's name during the execution of a request.

    My first step was to come up with a quick and easy syntax for declaring Routes.

    1. An Easy Route Declaration Syntax. I figured the easiest way of declaring a route was to put all the information in a single string with a special syntax. For example, the default MVC route would be declared like this:

    "{controller:Home}/{action:Index}/{Id}*"

    This contains the same information as the regular way of defining a Route, but is far more compact:

    - The default values for each URL segment are specified in a colon-separated section after the segment name.
    - The {Id} segment is declared as optional simply by placing a * after it.

    That's the default route - a pretty simple example - so how about this?

    routes.MapRoute(
        "CustomerOrderList",
        "Orders/{customerRef}/{pageNo}",
        new { controller = "Orders", action = "List", pageNo = UrlParameter.Optional },
        new { customerRef = "^[a-zA-Z0-9]+$", pageNo = "^[0-9]+$" });

    This maps to the List action on the Orders controller URLs which:

    - Start with the string Orders/
    - Then have a {customerRef} set of characters and numbers
    - Then optionally a numeric {pageNo}.

    And again, it's quite verbose. Here's my alternative:

    "Orders/{customerRef:^[a-zA-Z0-9]+$}/{pageNo:^[0-9]+$}*->Orders/List"

    Quite a bit more brief and, again, containing the same information as the regular way of declaring Routes:

    - Regular expression constraints are declared after the colon separator, the same as default values.
    - The target controller and action are specified after the ->.
    - The {pageNo} is defined as optional by placing a * after it.

    With an appropriate parser that gave me a nice, compact and clear way to declare routes. Next I wanted to have a single place where Routes were declared and accessed.

    2. A Central Place to Declare and Access Routes. I wanted all my Routes declared in one, dedicated place, which I would also use for Route names when calling RouteLink(). With this in mind I made a single class named Routes with a series of public, constant fields, each one relating to a particular Route. With this done, I figured a good place to actually declare each Route was in an attribute on the field defining the Route's name; the attribute would parse the Route definition string and make the resulting Route object available as a property. I then made the Routes class examine its own fields during its static setup, and cache all the attribute-created Route objects in an internal Dictionary. Finally I made Routes use that cache to register the Routes when requested, and to access them later when required. So the Routes class declares its named Routes like this:

    public static class Routes
    {
        [RouteDefinition("Orders/{customerName}->Orders/Index")]
        public const string OrdersCustomerIndex = "OrdersCustomerIndex";

        [RouteDefinition("Orders/{customerName}/{orderId:^([0-9]+)$}->Orders/Details")]
        public const string OrdersDetails = "OrdersDetails";

        [RouteDefinition("{controller:Home}*/{action:Index}*")]
        public const string Default = "Default";
    }

    ...which are then used like this:

    @this.Html.RouteLink(Routes.Default, new { controller = "Orders", action = "Index" })

    Now that using Go To Definition on the Routes.Default constant takes me to where the Route is actually defined, it's nice and easy to quickly check on the parameter names when using RouteLink(). Finally, I wanted to be able to access the name of the current Route during a request.

    3. Recovering the Route Name. The RouteDefinitionAttribute creates a NamedRoute class; a simple derivative of Route, but with a Name property. When the Routes class examines its fields and caches all the defined Routes, it has access to the name of the Route through the name of the field against which it is defined. It was therefore a pretty easy matter to have Routes give NamedRoute its name when it creates its cache of Routes. This means that the Route which is found in RequestContext.RouteData.Route is now a NamedRoute, and I can recover the Route's name during a request. For visibility, I made NamedRoute.ToString() return the Route name and URL pattern.

    An example project containing all the named route classes and an MVC 3 application demonstrating their use is available on bitbucket. I've found this way of defining and using Routes much tidier than the default MVC system, and I hope you find it useful too.
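
    As a footnote: the author's full implementation lives in the bitbucket project mentioned above, but as a rough, illustrative sketch (not the article's actual code, and with the parsing of the compact definition string omitted), the named-Route mechanism could be shaped like this:

        using System.Web.Mvc;
        using System.Web.Routing;

        // A Route that knows its own name, so the route matched for the
        // current request can be identified via RouteData.Route.
        public class NamedRoute : Route
        {
            public NamedRoute(string name, string url, RouteValueDictionary defaults)
                : base(url, defaults, new MvcRouteHandler())
            {
                Name = name;
            }

            public string Name { get; private set; }

            // Shows "Name: url-pattern", as the article describes.
            public override string ToString()
            {
                return Name + ": " + Url;
            }
        }

        public static class NamedRouteExtensions
        {
            // Registers a NamedRoute under its name in one call.
            public static void MapNamedRoute(this RouteCollection routes,
                string name, string url, object defaults)
            {
                routes.Add(name, new NamedRoute(name, url, new RouteValueDictionary(defaults)));
            }
        }

    During a request, a controller can then recover the matched Route's name with a simple cast:

        var named = RouteData.Route as NamedRoute;
        string currentRouteName = (named != null) ? named.Name : null;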


  • Portal And Content - Content Integration - Best Practices

    - by Stefan Krantz
    Lately we have seen an increase in projects that fail to achieve either user-friendly content integration or satisfactory performance. Our intention is to mitigate any knowledge gap that our previous posts might have left you with, so this post will repeat some recommendations and refer back to older, useful posts. Moreover, this post will help you understand from the ground up how to design, architect and implement business-enabled, responsive and performing portals with complex requirements for business-centric information publishing.

    Design the Information Model. The key to successful portal deployments is information modeling; it is a key task to understand the use case you are designing for. I have therefore designed a set of questions you need to ask yourself or your customer:

    Question: Who will own the content, IT or Business? Answer: Business
    Question: Who will publish the content, IT or Business? Answer: Business
    Question: Will there be multiple publishers? Answer: Yes
    Question: Are the publishers computer scientists? Answer: No
    Question: How often does the information change - daily, weekly, monthly? Answer: Daily, weekly

    If at least two of your answers match the above, we strongly recommend you design your content along the following principles:

    - Divide your pages into logical sections, where each section is marked with its purpose.
    - Assign capabilities to each section: does it contain text, images, formatting, and/or is it static and populated through other contextual information?
    - Select an editor/design element type for each section:
      WYSIWYG - rich text
      Plain Text - unformatted text
      Image - image object
      Static List - static list of formatted information
      Dynamic Data List - assembled information from multiple data files through a CMIS query

    Based on the required elements in such a design map, you then simply design a data model in WebCenter Content - Site Studio by creating a Region Definition structure matching your design requirements. For more information on how to create a Region Definition see the Region Definition Post (see instruction 7 there for details). Each Region Definition can now be used to instantiate data files; a data file holds the actual data for each element in the Region Definition. Another way to look at this is to treat the Region Definition as an extension of the metadata model in WebCenter Content for each data file item.

    Design Content Templates. With a solid, dependable information model we can now proceed to template creation and page design; this phase focuses on how to place the content sections from the Region Definition on the page via a Content Presenter template. Remember, by creating Content Presenter templates you leverage the latest and most integrated technology WebCenter has to offer. This phase is much easier since you already have the information model and design wireframes to base the logic on; however, there are still a few considerations to pay attention to:

    - Base the template on ADF and make only necessary exceptions to the markup when required.
    - Leverage ADF design components for tabs, accordions and similar components; this way the design in the content-published areas will comply with other design areas based on custom ADF task flows.
    - There is no performance impact whether you use metadata or Region Definition based data.
    - All data, regardless of type (metadata or XML data), can be accessed via the Content Presenter node. See below for applied examples of how to access data.

    Access a metadata property of the document: #{node.propertyMap['myProp'].value}
    (myProp in this example can be, for instance, dDocName, dDocTitle, xComments or any other available metadata field)

    Access element data from the data file XML: #{node.propertyMap['[Region Definition Name]:[Element name]'].asTextHtml}
    (Region Definition Name is the region definition that the current data file instantiates; Element name is the element whose value you want to read from the data file)

    I recommend you read the following useful posts on the content template topic: CMIS queries and template creation (see instruction 9 there for details) and Static List template rendering. For more information on templates: Single Item Content Template, Multi Item Content Template, Expression Language.

    Internationalization Considerations. When integrating content assets via the Content Presenter, you by now probably understand that the content item/data file is wired to the page. What is also pretty common at this stage is that the content item/data file supports only one language, since it is not practical or business friendly to mix languages in a complex structure. You are therefore left with a very common dilemma: build a complete new portal for each locale - which is not a good option! However, with a little bit of information modeling and a clear naming convention this can be addressed. You simply make sure that all content items/data files are named with a predictable naming convention like "Content1_EN" for the English rendition and "Content1_ES" for the Spanish rendition. This way, through simple, non-complex customizations, you will be able to dynamically switch the actual content item/data file just before rendering. By following the proposed approach you not only enable a simple mechanism for internationalized content, you also preserve the Content Presenter functionality that supports business-accessible run-time publishing of information on existing and new pages. I recommend you read the following useful post on internationalization topics: Internationalize with Content Presenter.

    Integrate with Review & Approval Processes. Today the review and approval functionality and configuration is based on WebCenter Content Criteria Workflows. Criteria Workflows use the metadata of the checked-in document to evaluate whether the document falls under any review/approval process. So, for instance, if a Criteria Workflow is configured to catch any document with Version "2" or higher and Content Type "Instructions", any matching content item version will, on check-in, enter the workflow before being released for general access. A few things to consider when configuring Criteria Workflows:

    - Make sure not to trigger on version 1 for content items that are data files: if you trigger on version 1 you will not only approve an empty document, you will also have a Content Presenter pointing to a non-existing document, since the document only becomes available after successful completion of the workflow.
    - Approval workflows sometimes require more complex criteria; if that is the case, the recommendation is that the metadata triggering such criteria is populated automatically, which can be achieved through several approaches including Content Profiles.
    - Criteria Workflows are configured and managed in the WebCenter Content Administration applets, where you can configure one or more workflows.

    Once Criteria Workflows are configured, the Content Presenter supports editors with the approval process directly inline in the portal's "Contribution mode". In addition to approve/reject actions and task details, the Content Presenter natively lets the user view the current and future versions of the change he/she is approving.

    Architectural Recommendations.

    - To support review & approval processes, minimize the number of data files per page.
    - Each CMIS query can consume significant time depending on its complexity: minimize the number of CMIS queries per page.
    - Use Content Presenter templates based on ADF: this minimizes the design considerations and optimizes the use of caching.
    - Implement the page with as few data files as possible: this simplifies the publishing process, increases performance and simplifies the release process.
    - Integrating a named data file (node), or a list of named nodes, into pages performs better than querying for data.
    - Integrating a named data file (node), or a list of named nodes, into pages enables business-centric page creation and publishing and reduces the need for IT department interaction.

    Summary. Just because one architectural decision solves a business problem doesn't mean it is the right one; when designing portals, all architectural decisions have to be in harmony and not impact each other. For instance, the most technically complex solution is not always the best, since it will most likely defeat business accessibility, performance, or both. The best approach is to first design for a simplicity that even a non-technical user can operate, then consider the performance impact, and finally look at the technology challenges these choices bring: work around them first with out-of-the-box features, and only then design and develop functions to complement the shortcomings.


  • General monitoring for SQL Server Analysis Services using Performance Monitor

    - by Testas
    A recent customer engagement required the setup of a monitoring solution for SSAS. Due to the time restrictions placed upon this, the native Windows Performance Monitor (Perfmon) and SQL Server Profiler monitoring tools were used, since a third-party tool would have meant the customer providing an additional monitoring server that was not available. I wanted to outline the performance monitoring counters that were used to monitor the system on which SSAS was running. Because slow query performance was occurring in certain scenarios, Perfmon was used to establish whether any pressure was being placed on the disk, CPU or memory subsystems when concurrent connections ran the same query, and Profiler was used to pinpoint how the query was being managed within SSAS (Profiler I will leave for another blog).

    This guide is not designed to provide a definitive list of what should be used when monitoring SSAS; different situations may require the addition or removal of counters. However, I hope it serves as a good basis for starting your monitoring of SSAS. I would also like to acknowledge Chris Webb's awesome chapters from "Expert Cube Development" that also helped shape my monitoring strategy: http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!6657.entry

    Simulating Connections. To simulate additional connections to the SSAS server whilst monitoring, I used ascmd to run multiple concurrent executions of the typical and worst-performing queries identified by the customer. A similar script can be downloaded from CodePlex at http://www.codeplex.com/SQLSrvAnalysisSrvcs. File name: ASCMD_StressTestingScripts.zip.

    Performance Monitor. Within Performance Monitor, a counter log was created containing the list of counters below. The important point to note when running the counter log is that the RUN AS property within the counter log properties should be changed to an account that has rights to the SSAS instance when monitoring MSAS counters. Failure to do so means the counter log runs under the system account; no errors or warnings are given while the counter log runs, and it is only when you need to view the MSAS counters that you discover they were not captured. If your connection simulation takes hours, this could prove quite frustrating!

    The counters used (Object \ Counter (Instance): Justification):

    - System \ Processor Queue Length (N/A): Indicates how many threads are waiting for execution against the processor. If this counter is consistently higher than around 5 when processor utilization approaches 100%, it is a good indication that there is more work (active threads ready for execution) than the machine's processors are able to handle.
    - System \ Context Switches/sec (N/A): Measures how frequently the processor has to switch from user- to kernel-mode to handle a request from a thread running in user mode. The heavier the workload running on your machine, the higher this counter will generally be, but over the long term the value of this counter should remain fairly constant. If this counter suddenly starts increasing, however, it may be an indication of a malfunctioning device, especially if the Processor \ Interrupts/sec (_Total) counter on your machine shows a similar unexplained increase.
    - Process \ % Processor Time (sqlservr): Should definitely be used if Processor \ % Processor Time (_Total) is maxing out at 100%, to assess the effect of the SQL Server process on the processor.
    - Process \ % Processor Time (msmdsrv): Should definitely be used if Processor \ % Processor Time (_Total) is maxing out at 100%, to assess the effect of the Analysis Services process on the processor.
    - Process \ Working Set (sqlservr): If the Memory \ Available MBytes counter is decreasing, this counter can be run to indicate whether the process is consuming larger and larger amounts of RAM. Process(instance) \ Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault.
    - Process \ Working Set (msmdsrv): As above, for the Analysis Services process.
    - Processor \ % Processor Time (_Total and individual cores): Measures the total utilization of your processor by all running processes. Be mindful that on a multi-processor machine _Total is only an average.
    - Processor \ % Privileged Time (_Total): Shows how the OS is handling basic IO requests. If kernel-mode utilization is high, your machine is likely underpowered: it is too busy handling basic OS housekeeping functions to be able to effectively run other applications.
    - Processor \ % User Time (_Total): Shows how the applications are interacting from a processor perspective; a consistently high percentage utilization suggests the server is dealing with too many applications and may require increased hardware or scaling out.
    - Processor \ Interrupts/sec (_Total): The average rate, in incidents per second, at which the processor received and serviced hardware interrupts. Should be consistent over time; a sudden unexplained increase could indicate a device malfunction, which can be confirmed using the System \ Context Switches/sec counter.
    - Memory \ Pages/sec (N/A): Indicates the rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays, and the primary counter to watch for indications of possible insufficient RAM to meet your server's needs. A good idea here is to configure a Perfmon alert that triggers when the number of pages per second exceeds 50 per paging disk on your system. You may also want to review the page file configuration on the server.
    - Memory \ Available MBytes (N/A): The amount of physical memory available to processes running on the computer. If this counter is greater than 10% of the actual RAM in your machine then you probably have more than enough RAM. Monitor it regularly to see if any downward trend develops, and set an alert to trigger if it drops below 2% of the installed RAM.
    - Physical Disk \ Disk Transfers/sec (each physical disk): If it goes above 10 disk I/Os per second then you have poor response time for your disk.
    - Physical Disk \ % Idle Time (_Total): If Disk Transfers/sec is above 25 disk I/Os per second, use this counter, which measures the percentage of time that your hard disk is idle during the measurement interval. If you see this counter fall below 20% then you likely have read/write requests queuing up for your disk, which is unable to service them in a timely fashion.
    - Physical Disk \ Disk Queue Length (the OLAP and SQL physical disks): A value that is consistently less than 2 means that the disk system is handling the IO requests against the physical disk.
    - Network Interface \ Bytes Total/sec (the NIC): Should be monitored over a period of time to see if there is an increase or decrease in network utilization.
    - Network Interface \ Current Bandwidth (the NIC): An estimate of the current bandwidth of the network interface in bits per second (bps).
    - MSAS 2005: Memory \ Memory Limit High KB (N/A): Shows (as a percentage) the high memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.
    - MSAS 2005: Memory \ Memory Limit Low KB (N/A): Shows (as a percentage) the low memory limit configured for SSAS in the same msmdsrv.ini.
    - MSAS 2005: Memory \ Memory Usage KB (N/A): Displays the memory usage of the server process.
    - MSAS 2005: Memory \ File Store KB (N/A): Displays the amount of memory reserved for the cache. Note that if the total memory limit in msmdsrv.ini is set to 0, no memory is reserved for the cache.
    - MSAS 2005: Storage Engine Query \ Queries from Cache Direct/sec (N/A): The rate of queries answered directly from the cache.
    - MSAS 2005: Storage Engine Query \ Queries from Cache Filtered/sec (N/A): The rate of queries answered by filtering an existing cache entry.
    - MSAS 2005: Storage Engine Query \ Queries from File/sec (N/A): The rate of queries answered from files.
    - MSAS 2005: Storage Engine Query \ Average time/query (N/A): The average time of a query.
    - MSAS 2005: Connection \ Current connections (N/A): The number of connections against the SSAS instance.
    - MSAS 2005: Connection \ Requests/sec (N/A): The rate of query requests per second.
    - MSAS 2005: Locks \ Current Lock Waits (N/A): The number of connections waiting on a lock.
    - MSAS 2005: Threads \ Query Pool Job Queue Length (N/A): The number of queries in the job queue.
    - MSAS 2005: Proc Aggregations \ Temp File Bytes Written/sec (N/A): The number of bytes of data processed in a temporary file.
    - MSAS 2005: Proc Aggregations \ Temp File Rows Written/sec (N/A): The number of rows of data processed in a temporary file.
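
    As a side note, the same counters can also be sampled programmatically. Below is a minimal C# sketch (my illustration, not part of the original engagement) using System.Diagnostics.PerformanceCounter; the category, counter and instance names follow the list above, and the MSAS category and instance names may differ depending on your SSAS instance name and version:

        using System;
        using System.Diagnostics;
        using System.Threading;

        class CounterSampler
        {
            static void Main()
            {
                var counters = new[]
                {
                    new PerformanceCounter("System", "Processor Queue Length"),
                    new PerformanceCounter("Processor", "% Processor Time", "_Total"),
                    new PerformanceCounter("Process", "% Processor Time", "msmdsrv"),
                    new PerformanceCounter("Memory", "Pages/sec")
                };

                // The first NextValue() call returns 0 for rate counters,
                // so prime each counter and sample again after a delay.
                foreach (var c in counters) c.NextValue();
                Thread.Sleep(1000);

                foreach (var c in counters)
                {
                    Console.WriteLine("{0}\\{1}: {2:0.00}",
                        c.CategoryName, c.CounterName, c.NextValue());
                }
            }
        }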


  • MySQL Cluster 7.2: Over 8x Higher Performance than Cluster 7.1

    - by Mat Keep
    Summary. The scalability enhancements delivered by extensions to multi-threaded data nodes enable MySQL Cluster 7.2 to deliver over 8x higher performance than the previous MySQL Cluster 7.1 release on a recent benchmark.

    What's New in MySQL Cluster 7.2. MySQL Cluster 7.2 was released as GA (Generally Available) in February 2012, delivering many enhancements to performance on complex queries, a new NoSQL Key/Value API, cross-data center replication and ease-of-use. These enhancements are summarized in the figure below, and detailed in the MySQL Cluster New Features whitepaper.

    Figure 1: Next Generation Web Services, Cross Data Center Replication and Ease-of-Use

    One of the key enhancements delivered in MySQL Cluster 7.2 is the set of extensions made to the multi-threading processes of the data nodes.

    Multi-Threaded Data Node Extensions. The MySQL Cluster 7.2 data node is now functionally divided into seven thread types:

    1) Local Data Manager threads (ldm). Note - these are sometimes also called LQH threads.
    2) Transaction Coordinator threads (tc)
    3) Asynchronous Replication threads (rep)
    4) Schema Management threads (main)
    5) Network receiver threads (recv)
    6) Network send threads (send)
    7) IO threads

    Each of these thread types is discussed in more detail below; a sample thread configuration sketch appears at the end of this post.

    MySQL Cluster 7.2 increases the maximum number of LDM threads from 4 to 16. The LDM contains the actual data, which means that when using 16 threads the data is more heavily partitioned (this is automatic in MySQL Cluster). Each LDM thread maintains its own set of data partitions, index partitions and REDO log. The number of LDM partitions per data node is not dynamically configurable; it is possible, however, to map more than one partition onto each LDM thread, providing flexibility in modifying the number of LDM threads.

    The TC domain stores the state of in-flight transactions. This means that every new transaction can easily be assigned to a new TC thread. Testing has shown that in most cases 1 TC thread per 2 LDM threads is sufficient, and in many cases even 1 TC thread per 4 LDM threads is acceptable. Testing also demonstrated that in some instances, where the workload needed to sustain very high update loads, it was necessary to configure 3 to 4 TC threads per 4 LDM threads. In the previous MySQL Cluster 7.1 release, only one TC thread was available; this limit has been increased to 16 TC threads in MySQL Cluster 7.2. The TC domain also manages the Adaptive Query Localization functionality introduced in MySQL Cluster 7.2, which significantly enhanced complex query performance by pushing JOIN operations down to the data nodes.

    Asynchronous Replication was separated into its own thread with the release of MySQL Cluster 7.1, and has not been modified in the latest 7.2 release.

    To scale the number of TC threads, it was necessary to separate the Schema Management domain from the TC domain. The schema management thread has little load, so it is implemented with a single thread.

    The Network receiver domain was bound to 1 thread in MySQL Cluster 7.1. With the increase of threads in MySQL Cluster 7.2 it was also necessary to increase the number of recv threads to 8. This enables each receive thread to service one or more sockets used to communicate with other nodes in the cluster.

    The Network send thread is a new thread type introduced in MySQL Cluster 7.2. Previously other threads handled the sending operations themselves, which can provide for lower latency. To achieve the highest throughput, however, it has been necessary to create dedicated send threads, of which 8 can be configured. It is still possible to configure MySQL Cluster 7.2 in a legacy mode that does not use any send threads - useful for those workloads that are most sensitive to latency.

    The IO thread is the final thread type, and there have been no changes to this domain in MySQL Cluster 7.2. Multiple IO threads were already available, which could be configured either to one thread per open file, or to a fixed number of IO threads that handle the IO traffic. Except when using compression on disk, the IO threads typically have a very light load.

    Benchmarking the Scalability Enhancements. The scalability enhancements discussed above have made it possible to scale CPU usage of each data node to more than 5x of that possible in MySQL Cluster 7.1. In addition, a number of bottlenecks have been removed, making it possible to scale data node performance by even more than 5x.

    Figure 2: MySQL Cluster 7.2 Delivers 8.4x Higher Performance than 7.1

    The flexAsynch benchmark was used to compare MySQL Cluster 7.2 performance to 7.1 across an 8-node Intel Xeon x5670-based cluster of dual-socket commodity servers (6 cores each). As the results demonstrate, MySQL Cluster 7.2 delivers over 8x higher performance per data node than MySQL Cluster 7.1. More details of this and other benchmarks will be published in a new whitepaper - coming soon, so stay tuned! In a following blog post, I'll provide recommendations on optimum thread configurations for different types of server processor. You can also learn more from the Best Practices Guide to Optimizing Performance of MySQL Cluster.

    Conclusion. MySQL Cluster has achieved a range of impressive benchmark results, and set in context with the previous 7.1 release, is able to deliver over 8x higher performance per node. As a result, the multi-threaded data node extensions not only serve to increase performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-threaded processor designs.
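
    As promised above, here is a rough illustration of how the thread domains map onto data node configuration. This is a sketch only: the thread counts are illustrative, and the exact ThreadConfig syntax and the thread types it supports should be verified against the MySQL Cluster reference manual for your version.

        # config.ini fragment for a multi-threaded data node (ndbmtd) - illustrative only
        [ndbd default]
        NoOfReplicas=2
        # One entry per thread domain discussed above (IO threads are configured separately)
        ThreadConfig=ldm={count=16},tc={count=8},send={count=8},recv={count=8},main={count=1},rep={count=1}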


  • Scripting Language Sessions at Oracle OpenWorld and MySQL Connect, 2012

    - by cj
    This posts highlights some great scripting language sessions coming up at the Oracle OpenWorld and MySQL Connect conferences. These events are happening in San Francisco from the end of September. You can search for other interesting conference sessions in the Content Catalog. Also check out what is happening at JavaOne in that event's Content Catalog (I haven't included sessions from it in this post.) To find the timeslots and locations of each session, click their respective link and check the "Session Schedule" box on the top right. GEN8431 - General Session: What’s New in Oracle Database Application Development This general session takes a look at what’s been new in the last year in Oracle Database application development tools using the latest generation of database technology. Topics range from Oracle SQL Developer and Oracle Application Express to Java and PHP. (Thomas Kyte - Architect, Oracle) BOF9858 - Meet the Developers of Database Access Services (OCI, ODBC, DRCP, PHP, Python) This session is your opportunity to meet in person the Oracle developers who have built Oracle Database access tools and products such as the Oracle Call Interface (OCI), Oracle C++ Call Interface (OCCI), and Open Database Connectivity (ODBC) drivers; Transparent Application Failover (TAF); Oracle Database Instant Client; Database Resident Connection Pool (DRCP); Oracle Net Services, and so on. The team also works with those who develop the PHP, Ruby, Python, and Perl adapters for Oracle Database. Come discuss with them the features you like, your pains, and new product enhancements in the latest database technology. CON8506 - Syndication and Consolidation: Oracle Database Driver for MySQL Applications This technical session presents a new Oracle Database driver that enables you to run MySQL applications (written in PHP, Perl, C, C++, and so on) against Oracle Database with almost no code change. Use cases for such a driver include application syndication such as interoperability across a relationship database management system, application migration, and database consolidation. In addition, the session covers enhancements in database technology that enable and simplify the migration of third-party databases and applications to and consolidation with Oracle Database. Attend this session to learn more and see a live demo. (Srinath Krishnaswamy - Director, Software Development, Oracle. Kuassi Mensah - Director Product Management, Oracle. Mohammad Lari - Principal Technical Staff, Oracle ) CON9167 - Current State of PHP and MySQL Together, PHP and MySQL power large parts of the Web. The developers of both technologies continue to enhance their software to ensure that developers can be satisfied despite all their changing and growing needs. This session presents an overview of changes in PHP 5.4, which was released earlier this year and shows you various new MySQL-related features available for PHP, from transparent client-side caching to direct support for scaling and high-availability needs. (Johannes Schlüter - SoftwareDeveloper, Oracle) CON8983 - Sharding with PHP and MySQL In deploying MySQL, scale-out techniques can be used to scale out reads, but for scaling out writes, other techniques have to be used. To distribute writes over a cluster, it is necessary to shard the database and store the shards on separate servers. 
This session provides a brief introduction to traditional MySQL scale-out techniques in preparation for a discussion on the different sharding techniques that can be used with MySQL server and how they can be implemented with PHP. You will learn about static and dynamic sharding schemes, their advantages and drawbacks, techniques for locating and moving shards, and techniques for resharding. (Mats Kindahl - Senior Principal Software Developer, Oracle) CON9268 - Developing Python Applications with MySQL Utilities and MySQL Connector/Python This session discusses MySQL Connector/Python and the MySQL Utilities component of MySQL Workbench and explains how to write MySQL applications in Python. It includes in-depth explanations of the features of MySQL Connector/Python and the MySQL Utilities library, along with example code to illustrate the concepts. Those interested in learning how to expand or build their own utilities and connector features will benefit from the tips and tricks from the experts. This session also provides an opportunity to meet directly with the engineers and provide feedback on your issues and priorities. You can learn what exists today and influence future developments. (Geert Vanderkelen - Software Developer, Oracle) BOF9141 - MySQL Utilities and MySQL Connector/Python: Python Developers, Unite! Come to this lively discussion of the MySQL Utilities component of MySQL Workbench and MySQL Connector/Python. It includes in-depth explanations of the features and dives into the code for those interested in learning how to expand or build their own utilities and connector features. This is an audience-driven session, so put on your best Python shirt and let’s talk about MySQL Utilities and MySQL Connector/Python. (Geert Vanderkelen - Software Developer, Oracle. Charles Bell - Senior Software Developer, Oracle) CON3290 - Integrating Oracle Database with a Social Network Facebook, Flickr, YouTube, Google Maps. There are many social network sites, each with their own APIs for sharing data with them. Most developers do not realize that Oracle Database has base tools for communicating with these sites, enabling all manner of information, including multimedia, to be passed back and forth between the sites. This technical presentation goes through the methods in PL/SQL for connecting to, and then sending and retrieving, all types of data between these sites. (Marcelle Kratochvil - CTO, Piction) CON3291 - Storing and Tuning Unstructured Data and Multimedia in Oracle Database Database administrators need to learn new skills and techniques when the decision is made in their organization to let Oracle Database manage its unstructured data. They will face new scalability challenges. A single row in a table can become larger than a whole database. This presentation covers the techniques a DBA needs for managing the large volume of data in a standard Oracle Database instance. (Marcelle Kratochvil - CTO, Piction) CON3292 - Using PHP, Perl, Visual Basic, Ruby, and Python for Multimedia in Oracle Database These five programming languages are just some of the most popular ones in use at the moment in the marketplace. This presentation details how you can use them to access and retrieve multimedia from Oracle Database. It covers programming techniques and methods for achieving faster development against Oracle Database. 
(Marcelle Kratochvil - CTO, Piction)

UGF5181 - Building Real-World Oracle DBA Tools in Perl
Perl is not normally associated with building mission-critical applications or DBA tools. Learn why Perl could be a good choice for building your next killer DBA app. This session draws on real-world experience of building DBA tools in Perl, showing the framework and architecture needed to deal with portability, efficiency, and maintainability. Topics include Perl frameworks; which Comprehensive Perl Archive Network (CPAN) modules are good to use; Perl and CPAN module licensing; Perl and Oracle connectivity; compiling and deploying your app; and an example of what is possible with Perl. (Arjen Visser - CEO & CTO, Dbvisit Software Limited)

CON3153 - Perl: A DBA’s and Developer’s Best (Forgotten) Friend
This session reintroduces Perl as a language of choice for many solutions for DBAs and developers. Discover what makes Perl so successful and why it is so versatile in our day-to-day lives. Perl can automate all those manual tasks and is truly platform-independent. Perl may not be in the limelight the way other languages are, but it is a remarkable language, it is still very current with ongoing development, and it has amazing online resources. Learn what makes Perl so great (including CPAN), get an introduction to Perl language syntax, find out what you can use Perl for, hear how Oracle uses Perl, discover the best way to learn Perl, and take away a small Perl project challenge. (Arjen Visser - CEO & CTO, Dbvisit Software Limited)

CON10332 - Oracle RightNow CX Cloud Service’s Connect PHP API: Intro, What’s New, and Roadmap
Connect PHP is a public API that enables developers to build solutions with the Oracle RightNow CX Cloud Service platform. This API is used primarily by developers working within the Oracle RightNow Customer Portal Cloud Service framework who are looking to gain access to data and services hosted by the Oracle RightNow CX Cloud Service platform through a backward-compatible API. Connect for PHP leverages the same data model and services as the Connect Web Services for SOAP API. Come to this session to get an introduction and learn what’s new and what’s coming up. (Mark Rhoads - Senior Principal Applications Engineer, Oracle. Mark Ericson - Sr. Principal Product Manager, Oracle)

CON10330 - Oracle RightNow CX Cloud Service APIs and Frameworks Overview
Oracle RightNow CX Cloud Service APIs are available in the following areas: desktop UI, Web services, customer portal, PHP, and knowledge. These frameworks provide access to Oracle RightNow CX Cloud Service’s Connect Common Object Model and custom objects. This session provides a broad overview of capabilities in all these areas. (Mark Ericson - Sr. Principal Product Manager, Oracle)
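If you want a feel for MySQL Connector/Python before attending the sessions above, here is a minimal sketch of what a Connector/Python application looks like. The connection parameters and the employees table are illustrative assumptions only, not anything taken from the sessions:

import mysql.connector

# Open a connection; connect() raises mysql.connector.Error on failure.
cnx = mysql.connector.connect(user='app', password='secret',
                              host='127.0.0.1', database='hr')
cursor = cnx.cursor()

# Parameterized query: the connector handles quoting and escaping.
cursor.execute("SELECT first_name, last_name FROM employees "
               "WHERE hire_date > %s", ("2012-01-01",))
for first_name, last_name in cursor:
    print(first_name, last_name)

cursor.close()
cnx.close()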

    Read the article

  • Accessing SharePoint 2010 Data with REST/OData on Windows Phone 7

    - by Jan Tielens
    Consuming SharePoint 2010 data in Windows Phone 7 applications using the CTP version of the developer tools is quite a challenge. The issue is that the SharePoint 2010 data is not anonymously available; users need to authenticate to be able to access the data. When I first tried to access SharePoint 2010 data from my first Hello-World-type Windows Phone 7 application I thought “Hey, this should be easy!” because Windows Phone 7 development is based on Silverlight and SharePoint 2010 has a Client Object Model for Silverlight. Unfortunately you can’t use the Client Object Model of SharePoint 2010 on the Windows Phone platform; there’s a reference to an assembly that’s not available (System.Windows.Browser). My second thought was “OK, no problem!” because SharePoint 2010 also exposes a REST/OData API to access SharePoint data. Using the REST API in SharePoint 2010 is as easy as making a web request for a URL (in which you specify the data you’d like to retrieve), e.g. http://yoursiteurl/_vti_bin/listdata.svc/Announcements. This is very easy to accomplish in a Silverlight application that’s running in the context of a page in a SharePoint site, because the credentials of the currently logged-on user are automatically picked up and passed to the WCF service. But a Windows Phone application is of course running outside of the SharePoint site’s page, so the application has to build credentials that are passed to SharePoint’s WCF service. This turns out to be a small challenge in Silverlight 3: the WebClient class doesn’t support authentication. There is a Credentials property, but when you set it and make the request you get a NotImplementedException. This issue will probably be solved in the very near future, since Silverlight 4 does support authentication, and there’s already a WCF Data Services download that uses this new platform feature of Silverlight 4. So when the Windows Phone platform switches to Silverlight 4, you can just use the WebClient to get the data. Even more, if the OData Client Library for Windows Phone 7 gets updated after that, things should get even easier! By the way: the things I’m writing in this paragraph are just assumptions on my part that make a lot of sense IMHO; I don’t have any information that all of this will happen, but I really hope so. So are SharePoint developers out of the Windows Phone development game until they get this fixed? Luckily not: when the HttpWebRequest class is used instead, you can pass credentials! Using the HttpWebRequest class is slightly more complex than using the WebClient class, but the end result is that you have access to your precious SharePoint 2010 data.
The following code snippet gets all the announcements of an Announcements list in a SharePoint site:

HttpWebRequest webReq =
    (HttpWebRequest)HttpWebRequest.Create("http://yoursite/_vti_bin/listdata.svc/Announcements");
webReq.Credentials = new NetworkCredential("username", "password");

webReq.BeginGetResponse(
    (result) =>
    {
        HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;

        XDocument xdoc = XDocument.Load(
            ((HttpWebResponse)asyncReq.EndGetResponse(result)).GetResponseStream());

        XNamespace ns = "http://www.w3.org/2005/Atom";
        var items = from item in xdoc.Root.Elements(ns + "entry")
                    select new { Title = item.Element(ns + "title").Value };

        this.Dispatcher.BeginInvoke(() =>
        {
            foreach (var item in items)
                MessageBox.Show(item.Title);
        });
    }, webReq);

When you try this in a Windows Phone 7 application, make sure you add a reference to the System.Xml.Linq assembly; the code uses LINQ to XML to parse the resulting Atom feed, and the Title of every announcement is displayed in a MessageBox. Check out my previous post if you’d like to see a more polished sample Windows Phone 7 application that displays SharePoint 2010 data. When you plan to use this technique, it’s of course a good idea to encapsulate the code doing the request, so it becomes really easy to get the data that you need. In the following code snippet you can find the GetAtomFeed method that gets the contents of any Atom feed, even if you need to authenticate to get access to the feed.

delegate void GetAtomFeedCallback(Stream responseStream);

public MainPage()
{
    InitializeComponent();

    SupportedOrientations = SupportedPageOrientation.Portrait |
        SupportedPageOrientation.Landscape;

    string url = "http://yoursite/_vti_bin/listdata.svc/Announcements";
    string username = "username";
    string password = "password";
    string domain = "";

    GetAtomFeed(url, username, password, domain, (s) =>
    {
        XNamespace ns = "http://www.w3.org/2005/Atom";
        XDocument xdoc = XDocument.Load(s);

        var items = from item in xdoc.Root.Elements(ns + "entry")
                    select new { Title = item.Element(ns + "title").Value };

        this.Dispatcher.BeginInvoke(() =>
        {
            foreach (var item in items)
            {
                MessageBox.Show(item.Title);
            }
        });
    });
}

private static void GetAtomFeed(string url, string username,
    string password, string domain, GetAtomFeedCallback cb)
{
    HttpWebRequest webReq = (HttpWebRequest)HttpWebRequest.Create(url);
    webReq.Credentials = new NetworkCredential(username, password, domain);

    webReq.BeginGetResponse(
        (result) =>
        {
            HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;
            HttpWebResponse resp = (HttpWebResponse)asyncReq.EndGetResponse(result);
            cb(resp.GetResponseStream());
        }, webReq);
}

    Read the article

  • Partner Blog Series: PwC Perspectives - "Is It Time for an Upgrade?"

    - by Tanu Sood
    Is your organization debating its next step with regard to Identity Management? While all the stakeholders are well aware that one-size-fits-all doesn’t apply to identity management, it is just as true that no two identity management implementations are alike. Oracle’s recent release of Identity Governance Suite 11g Release 2 has innovative features such as a customizable user interface, a shopping-cart-style request catalog, and more. However, only a close look at the use cases can help you determine if and when an upgrade to the latest R2 release makes sense for your organization. This post will describe a few of the situations that PwC has helped our clients work through.

“Should I be considering an upgrade?”
If your organization has an existing identity management implementation, the questions below are a good start to assessing your current solution to see if you need to begin planning for an upgrade:
Does the current solution scale and meet your projected identity management needs?
Does the current solution have a customer-friendly user interface?
Are you completely meeting your compliance objectives?
Are you still using spreadsheets?
Does the current solution have the features you need?
Is your total cost of ownership in line with well-performing, similar-sized companies in your industry?
Can your organization support your existing Identity solution?
Is your current product-based solution well positioned to support your organization's tactical and strategic direction?

Existing Oracle IDM Customers: Several existing Oracle clients are looking to move to R2 in 2013. If your organization is on Sun Identity Manager (SIM) or Oracle Identity Manager (OIM) and if your current assessment suggests that you need to upgrade, you should strongly consider OIM 11gR2. Oracle provides upgrade paths to Oracle Identity Manager 11gR2 from SIM 7.x / 8.x as well as Oracle Identity Manager 10g / 11gR1. The following are some of the considerations for migration:
Check the end-of-product-support schedule (for Sun or legacy OIM)
There are several new features available in R2 (including common Helpdesk scenarios, profiling of disconnected applications, increased scalability, custom connectors, browser-based UI configurations, portability of configurations during future upgrades, etc.)
Cost of ownership (for SIM customers)
Customizations that need to be maintained during the upgrade
Time/cost to migrate now vs. waiting for the next version
If you are already on an older version of Oracle Identity Manager and actively maintaining your support contract with Oracle, you might be eligible for a free upgrade to OIM 11gR2. Check with your Oracle sales rep for more details.

Existing IDM infrastructure in place: In the past year and a half, we have seen a surge in IDM upgrades from non-Oracle infrastructure to Oracle. If your organization is looking to improve the end-user experience related to identity management functions, the shopping-cart-style access request model and browser-based personalization features may come in handy. Additionally, organizations that have a large number of applications that include ecommerce, LDAP stores, databases, UNIX systems, and mainframes, as well as a high frequency of user identity changes and access requests, will value the high scalability of the OIM reconciliation and provisioning engine. Furthermore, we have seen that our clients like OIM's out-of-the-box (OOB) support for multiple authoritative sources.
For organizations looking to integrate applications that do not have an exposed API, the Generic Technology Connector framework supported by OIM will be helpful in quickly generating a custom connector using the OOB wizard. Similarly, organizations in need of not only flexible on-boarding of disconnected applications but also strict access management to these applications using approval flows will find the flexible disconnected application profiling feature an extremely useful tool that provides a high degree of time savings. Organizations looking to develop custom connectors for home-grown or industry-specific applications will likewise find that the Identity Connector Framework support in OIM allows them to build and test a custom connector independently before integrating it with OIM. Lastly, most of our clients considering an upgrade to OIM 11gR2 have also expressed interest in the browser-based configuration feature that allows an administrator to quickly customize the user interface without adding any custom code. Better yet, code customizations, if any, made to the product are portable across future upgrades, which is viewed as a big time and money saver by most of our clients.

Below are some upgrade methodologies we adopt based on client priorities and the scale of implementation. For illustration purposes, we have assumed that the client is currently on Oracle Waveset (formerly Sun Identity Manager).

Integrated Deployment: An integrated deployment is typically used when a client wants to split the implementation so that their current IDM continues to handle the front-end workflows while OIM takes over the back-office operations incrementally. Once all the back-office operations have moved completely to OIM, the front-end workflows are migrated to OIM.

Parallel Deployment: This deployment is typically done where a distinct line can be drawn between which functionality each platform supports. For example, the current IDM implementation handles the password reset functionality while OIM takes over the access provisioning and RBAC functions.

Cutover Deployment: A cutover deployment is typically recommended where a client has smaller, less complex implementations and it makes sense to leverage the migration tools to move them over immediately.

What does this mean for YOU? There are many variables to consider when making upgrade decisions. For most customers, there is no ‘easy’ button. Organizations looking to upgrade or considering a new vendor should start by mapping their requirements to product features. The recommended approach is to take stock of both the short-term and long-term objectives, understand product features, the future roadmap, maturity, and the level of commitment from R&D, and build the implementation plan accordingly. As we said in the beginning, there is no one-size-fits-all with Identity Management. So, arm yourself with the knowledge, engage in industry discussions, bring in business stakeholders, and start building your implementation roadmap. In the next post we will discuss best practices on R2 implementations. We will be covering the dos and don'ts and share our thoughts on making implementations successful.

Meet the Writers: Dharma Padala is a Director in the Advisory Security practice within PwC.  He has been implementing medium to large scale Identity Management solutions across multiple industries including utility, health care, entertainment, retail and financial sectors.
Dharma has 14 years of experience delivering IT solutions, including the past 8 years implementing Identity Management solutions. Scott MacDonald is a Director in the Advisory Security practice within PwC.  He has consulted for several clients across multiple industries including financial services, health care, automotive and retail.  Scott has 10 years of experience in delivering Identity Management solutions. John Misczak is a member of the Advisory Security practice within PwC.  He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL). Praveen Krishna is a Manager in the Advisory Security practice within PwC.  Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries.  His experience includes delivering security across diverse topics like network, infrastructure, application and data, where he brings a holistic point of view to problem solving. Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC.  She has consulted across multiple industries including financial services, entertainment and retail. Jenny has three years of experience delivering IT solutions, including the past year and a half implementing Identity Management solutions.

    Read the article

  • Curing the Database-Application mismatch

    - by Phil Factor
    If an application requires access to a database, then you have to be able to deploy it so as to be version-compatible with the database, in phase. If you can deploy both together, then the application and database must normally be deployed at the same version in which they, together, passed integration and functional testing.  When a single database supports more than one application, then the problem gets more interesting. I’ll need to be more precise here. It is actually the application-interface definition of the database that needs to be in a compatible ‘version’.  Most databases that get into production have no separate application-interface; in other words they are ‘close-coupled’.  For this vast majority, the whole database is the application-interface, and applications are free to wander through the bowels of the database scot-free.  If you’ve spurned the perceived wisdom of application architects to have a defined application-interface within the database that is based on views and stored procedures, any version-mismatch will be as sensitive as a kitten.  A team that creates an application that makes direct access to base tables in a database will have to put a lot of energy into keeping Database and Application in sync, to say nothing of having to tackle issues such as security and audit. It is not the obvious route to development nirvana. I’ve been in countless tense meetings with application developers who initially bridle instinctively at the apparent restrictions of being ‘banned’ from the base tables or routines of a database.  There is no good technical reason for needing that sort of access that I’ve ever come across.  Everything that the application wants can be delivered via a set of views and procedures, and with far less pain for all concerned: This is the application-interface.  If more than zero developers are creating a database-driven application, then the project will benefit from the loose-coupling that an application interface brings. What is important here is that the database development role is separated from the application development role, even if it is the same developer performing both roles. The idea of an application-interface with a database is as old as I can remember. The big corporate or government databases generally supported several applications, and there was little option. When a new application wanted access to an existing corporate database, the developers, and myself as technical architect, would have to meet with hatchet-faced DBAs and production staff to work out an interface. Sure, they would talk up the effort involved for budgetary reasons, but it was routine work, because it decoupled the database from its supporting applications. We’d be given our own stored procedures. One of them, I still remember, had ninety-two parameters. All database access was encapsulated in one application-module. If you have a stable defined application-interface with the database (Yes, one for each application usually) you need to keep the external definitions of the components of this interface in version control, linked with the application source,  and carefully track and negotiate any changes between database developers and application developers.  Essentially, the application development team owns the interface definition, and the onus is on the Database developers to implement it and maintain it, in conformance.  Internally, the database can then make all sorts of changes and refactoring, as long as source control is maintained.  
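To make this concrete, here is a rough sketch, from the application's side, of what working through such an interface looks like. The customer_api package, its procedure, and the connection details are hypothetical, and cx_Oracle (the Python adapter for Oracle Database) simply stands in for whatever driver the application uses:

import cx_Oracle  # Python adapter for Oracle Database

conn = cx_Oracle.connect("app_user/app_pw@dbhost/orcl")
cur = conn.cursor()

# Allowed: call a routine that belongs to the agreed application-interface.
status = cur.var(cx_Oracle.STRING)
cur.callproc("customer_api.close_account", [42, status])
print(status.getvalue())

# Not allowed (and ideally not even granted): direct base-table access like
#   cur.execute("UPDATE accounts SET status = 'CLOSED' WHERE id = 42")
# The database team is free to refactor the tables behind customer_api
# without breaking this code.

The point is that the published routine, not the table, is the contract between the two teams.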
If the application interface passes all the comprehensive integration and functional tests for the particular version they were designed for, nothing is broken. Your performance-testing can ‘hang’ on the same interface, since databases are judged on the performance of the application, not an ‘internal’ database process. The database developers have responsibility for maintaining the application-interface, but not its definition, as they refactor the database. This is easily tested on a daily basis since the tests are normally automated. In this setting, the deployment can proceed if the more stable application-interface, rather than the continuously-changing database, passes all tests for the version of the application. Normally, if all goes well, a database with a well-designed application interface can evolve gracefully without changing the external appearance of the interface, and this is confirmed by integration tests that check the interface, and which hopefully don’t need to be altered often.  If the application is rapidly changing its ‘domain model’ in the light of an increased understanding of the application domain, then it can change the interface definitions and the database developers need only implement the interface rather than refactor the underlying database.  The test team will also have to redo the functional and integration tests which are, of course, ‘written to’ the definition.  The database developers will find it easier if these tests are done before their re-wiring job to implement the new interface. If, at the other extreme, an application receives no further development work but survives unchanged, the database can continue to change and develop to keep pace with the requirements of the other applications it supports, and needs only to take care that the application interface is never broken. Testing is easy since your automated scripts to test the interface do not need to change. The database developers will, of course, maintain their own source control for the database, and will be likely to maintain versions for all major releases. However, this will not need to be shared with the applications that the database serves. On the other hand, the definition of the application interfaces should be within the application source. Changes in it have to be subject to change-control procedures, as they will require a chain of tests. Once you allow, instead of an application-interface, an intimate relationship between application and database, we are in the realms of impedance mismatch, over and above the obvious security problems.  Part of this impedance problem is a difference in development practices. Whereas the application has to be regularly built and integrated, this isn’t necessarily the case with the database.  An RDBMS is inherently multi-user and self-integrating. If the developers work together on the database, then a subsequent integration of the database on a staging server doesn’t often bring nasty surprises. A separate database-integration process is only needed if the database is deliberately built in a way that mimics the application development process, but which hampers the normal database-development techniques.  This process is like demanding an official walking with a red flag in front of a motor car.  In order to closely coordinate databases with applications, entire databases have to be ‘versioned’, so that an application version can be matched with a database version to produce a working build without errors.
There is no natural process to ‘version’ databases.  Each development project will have to define a system for maintaining the version level. A curious paradox occurs in development when there is no formal application-interface. When the strains and cracks happen, the extra meetings, bureaucracy, and activity required to maintain accurate deployments look to IT management like work. They see activity, and it looks good. Work means progress.  Management then smile on the design choices made. In IT, good design work doesn’t necessarily look good, and vice versa.

    Read the article

  • Nashorn, the rhino in the room

    - by costlow
    Nashorn is a new runtime within JDK 8 that allows developers to run code written in JavaScript and call back and forth with Java. One advantage of the Nashorn scripting engine is that it allows for quick prototyping of functionality or basic shell scripts that use Java libraries. The previous JavaScript runtime, named Rhino, was introduced in JDK 6 (released 2006, end of public updates Feb 2013). Keeping tradition amongst the global developer community, "Nashorn" is the German word for rhino.

The Java platform and runtime is an intentional home to many languages beyond the Java language itself. OpenJDK’s Da Vinci Machine helps coordinate work amongst language developers and tool designers and has helped different languages by introducing the Invoke Dynamic instruction in Java 7 (2011), which resulted in two major benefits: speeding up execution of dynamic code, and providing the groundwork for Java 8’s lambda executions. Many of these improvements are discussed at the JVM Language Summit, where language and tool designers get together to discuss experiences and issues related to building these complex components.

There are a number of benefits to running JavaScript applications on JDK 8’s Nashorn technology beyond writing scripts quickly:
Interoperability with Java and JavaScript libraries.
Scripts do not need to be compiled.
Fast execution and multi-threading of JavaScript running in Java’s JRE.
The ability to remotely debug applications using an IDE like NetBeans, Eclipse, or IntelliJ (instructions on the Nashorn blog).
Automatic integration with Java monitoring tools, such as performance, health, and SIEM.
In the remainder of this blog post, I will explain how to use Nashorn and benefit from those features.

Nashorn execution environment
The Nashorn scripting engine is included in all versions of Java SE 8, both the JDK and the JRE. Unlike Java code, scripts run by Nashorn are interpreted and do not need to be compiled before execution. Developers and users can access it in two ways:
Users running JavaScript applications can call the binary directly: jre8/bin/jjs. This mechanism can also be used in shell scripts by specifying a shebang like #!/usr/bin/jjs
Developers can use the API and obtain a ScriptEngine through: ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
When using a ScriptEngine, please understand that they execute code. Avoid running untrusted scripts or passing in untrusted/unvalidated inputs. During compilation, consider isolating access to the ScriptEngine and using Type Annotations to only allow @Untainted String arguments. One noteworthy difference between JavaScript executed in or outside of a web browser is that certain objects will not be available. For example, when run outside a browser, there is no access to a document object or DOM tree. Other than that, all syntax, semantics, and capabilities are present.

Examples of Java and JavaScript
The Nashorn script engine allows developers of all experience levels the ability to write and run code that takes advantage of both languages. The specific dialect is ECMAScript 5.1 as identified by the User Guide and its standards definition through ECMA International. In addition to the example below, Benjamin Winterberg has a very well written Java 8 Nashorn Tutorial that provides a large number of code samples in both languages.
Basic Operations
A basic Hello World application written to run on Nashorn would look like this:

#!/usr/bin/jjs
print("Hello World");

The first line is a standard script indication, so that Linux or Unix systems can run the script through Nashorn. On Windows, where scripts are not as common, you would run the script like: jjs helloWorld.js.

Receiving Arguments
In order to receive program arguments, your jjs invocation needs to use the -scripting flag and a double-dash to separate which arguments are for jjs and which are for the script itself:

jjs -scripting print.js -- "This will print"

#!/usr/bin/jjs
var whatYouSaid = $ARG.length==0 ? "You did not say anything" : $ARG[0]
print(whatYouSaid);

Interoperability with Java libraries (including 3rd party dependencies)
Another goal of Nashorn was to allow for quick scriptable prototypes, allowing access into Java types and any libraries. Resources operate in the context of the script (either in-line with the script or as separate threads) so if you open network sockets and your script terminates, those sockets will be released and available for your next run. Your code can access Java types the same as regular Java classes. The “import statements” are written somewhat differently to accommodate for language. There is a choice of two styles:
For standard classes, just name the class: var ServerSocket = java.net.ServerSocket
For arrays or other items, use Java.type: var ByteArray = Java.type("byte[]")
You could technically do this for all. The same technique will allow your script to use Java types from any library or 3rd party component and quickly prototype items.

Building a user interface
One major difference between JavaScript inside and outside of a web browser is the availability of a DOM object for rendering views. When run outside of the browser, JavaScript has full control to construct the entire user interface with pre-fabricated UI controls, charts, or components. The example below is a variation from the Nashorn and JavaFX guide to show how items work together. Nashorn has a -fx flag to make the user interface components available. With the example script below, just specify: jjs -fx -scripting fx.js -- "My title"

#!/usr/bin/jjs -fx
var Button = javafx.scene.control.Button;
var StackPane = javafx.scene.layout.StackPane;
var Scene = javafx.scene.Scene;

var clickCounter = 0;
$STAGE.title = $ARG.length > 0 ? $ARG[0] : "You didn't provide a title";

var button = new Button();
button.text = "Say 'Hello World'";
button.onAction = myFunctionForButtonClicking;

var root = new StackPane();
root.children.add(button);
$STAGE.scene = new Scene(root, 300, 250);
$STAGE.show();

function myFunctionForButtonClicking() {
  var text = "Click Counter: " + clickCounter;
  button.setText(text);
  clickCounter++;
  print(text);
}

For a more advanced post on using Nashorn to build a high-performing UI, see the JavaFX with Nashorn Canvas example.

Interoperable with frameworks like Node, Backbone, or Facebook React
The major benefit of any language is the interoperability gained by people and systems that can read, write, and use it for interactions. Because Nashorn is built for the ECMAScript specification, developers familiar with JavaScript frameworks can write their code and then have system administrators deploy and monitor the applications the same as any other Java application. A number of projects are also running Node applications on Nashorn through Project Avatar and the supported modules.
In addition to the previously mentioned Nashorn tutorial, Benjamin has also written a post about Using Backbone.js with Nashorn. To show the multi-language power of the Java Runtime, there is another interesting example that unites Facebook React and Clojure on JDK 8’s Nashorn.

Summary
Nashorn provides a simple and fast way of executing JavaScript applications and bridging between the best of each language. By making the full range of Java libraries available to JavaScript applications, and the quick prototyping style of JavaScript available to Java applications, developers are free to work as they see fit. Software Architects and System Administrators can take advantage of one runtime and leverage any work that they have done to tune, monitor, and certify their systems. Additional information is available within:
The Nashorn Users’ Guide
Java Magazine’s article "Next Generation JavaScript Engine for the JVM."
The Nashorn team’s primary blog or a very helpful collection of Nashorn links.

    Read the article

  • Disk Drive not working

    - by user287681
    The CD/DVD drive on my sister's system isn't working (I'm helping her shift from Windows XP, now officially deprecated by Microsoft, to Ubuntu). Now, it may end up being a failed attempt altogether (for almost the whole of the last year on XP, the disk drive hasn't been working, not even powering on), but I just want to make sure I've explored every remote possibility. Because I figure, "Huh, now that I've got Ubuntu running instead of XP, that just might make a difference." I have tried using the sudo lshw command in the terminal, to (seemingly) no avail, but, who knows, you might be able to make something out of it. Here's the output:

kyra@kyra-Satellite-P105:~$ sudo lshw [sudo] password for kyra: kyra-satellite-p105 description: Notebook product: Satellite P105 () vendor: TOSHIBA version: PSPA0U-0TN01M serial: 96084354W width: 64 bits capabilities: smbios-2.4 dmi-2.4 vsyscall32 configuration: administrator_password=disabled boot=oem-specific chassis=notebook frontpanel_password=unknown keyboard_password=unknown power-on_password=disabled uuid=00900559-F88E-D811-82E0-00163680E992 *-core description: Motherboard product: Satellite P105 vendor: TOSHIBA physical id: 0 version: Not Applicable serial: 1234567890 *-firmware description: BIOS vendor: TOSHIBA physical id: 0 version: V4.70 date: 01/19/20092 size: 92KiB capabilities: isa pci pcmcia pnp upgrade shadowing escd cdboot acpi usb biosbootspecification *-cpu description: CPU product: Intel(R) Core(TM)2 CPU T5500 @ 1.66GHz vendor: Intel Corp. physical id: 4 bus info: cpu@0 version: Intel(R) Core(TM)2 CPU T5 slot: U2E1 size: 1667MHz capacity: 1667MHz width: 64 bits clock: 166MHz capabilities: fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx x86-64 constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm lahf_lm dtherm cpufreq *-cache:0 description: L1 cache physical id: 5 slot: L1 Cache size: 16KiB capacity: 16KiB capabilities: asynchronous internal write-back *-cache:1 description: L2 cache physical id: 6 slot: L2 Cache size: 2MiB capabilities: burst external write-back *-memory description: System Memory physical id: c slot: System board or motherboard size: 2GiB capacity: 3GiB *-bank:0 description: SODIMM DDR2 Synchronous physical id: 0 slot: M1 size: 1GiB width: 64 bits *-bank:1 description: SODIMM DDR2 Synchronous physical id: 1 slot: M2 size: 1GiB width: 64 bits *-pci description: Host bridge product: Mobile 945GM/PM/GMS, 943/940GML and 945GT Express Memory Controller Hub vendor: Intel Corporation physical id: 100 bus info: pci@0000:00:00.0 version: 03 width: 32 bits clock: 33MHz configuration: driver=agpgart-intel resources: irq:0 *-display:0 description: VGA compatible controller product: Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 03 width: 32 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:16 memory:d0200000-d027ffff ioport:1800(size=8) memory:c0000000-cfffffff memory:d0300000-d033ffff *-display:1 UNCLAIMED description: Display controller product: Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller vendor: Intel Corporation physical id: 2.1 bus info: pci@0000:00:02.1 version: 03 width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list configuration: latency=0 resources:
memory:d0280000-d02fffff *-multimedia description: Audio device product: NM10/ICH7 Family High Definition Audio Controller vendor: Intel Corporation physical id: 1b bus info: pci@0000:00:1b.0 version: 02 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=snd_hda_intel latency=0 resources: irq:44 memory:d0340000-d0343fff *-pci:0 description: PCI bridge product: NM10/ICH7 Family PCI Express Port 1 vendor: Intel Corporation physical id: 1c bus info: pci@0000:00:1c.0 version: 02 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:40 ioport:3000(size=4096) memory:84000000-841fffff ioport:84200000(size=2097152) *-pci:1 description: PCI bridge product: NM10/ICH7 Family PCI Express Port 2 vendor: Intel Corporation physical id: 1c.1 bus info: pci@0000:00:1c.1 version: 02 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:41 ioport:4000(size=4096) memory:84400000-846fffff ioport:84700000(size=2097152) *-network description: Wireless interface product: PRO/Wireless 3945ABG [Golan] Network Connection vendor: Intel Corporation physical id: 0 bus info: pci@0000:03:00.0 logical name: wlan0 version: 02 serial: 00:13:02:d6:d2:35 width: 32 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwl3945 driverversion=3.13.0-29-generic firmware=15.32.2.9 ip=10.110.20.157 latency=0 link=yes multicast=yes wireless=IEEE 802.11abg resources: irq:43 memory:84400000-84400fff *-pci:2 description: PCI bridge product: NM10/ICH7 Family PCI Express Port 3 vendor: Intel Corporation physical id: 1c.2 bus info: pci@0000:00:1c.2 version: 02 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:42 ioport:5000(size=4096) memory:84900000-84afffff ioport:84b00000(size=2097152) *-usb:0 description: USB controller product: NM10/ICH7 Family USB UHCI Controller #1 vendor: Intel Corporation physical id: 1d bus info: pci@0000:00:1d.0 version: 02 width: 32 bits clock: 33MHz capabilities: uhci bus_master configuration: driver=uhci_hcd latency=0 resources: irq:23 ioport:1820(size=32) *-usb:1 description: USB controller product: NM10/ICH7 Family USB UHCI Controller #2 vendor: Intel Corporation physical id: 1d.1 bus info: pci@0000:00:1d.1 version: 02 width: 32 bits clock: 33MHz capabilities: uhci bus_master configuration: driver=uhci_hcd latency=0 resources: irq:19 ioport:1840(size=32) *-usb:2 description: USB controller product: NM10/ICH7 Family USB UHCI Controller #3 vendor: Intel Corporation physical id: 1d.2 bus info: pci@0000:00:1d.2 version: 02 width: 32 bits clock: 33MHz capabilities: uhci bus_master configuration: driver=uhci_hcd latency=0 resources: irq:18 ioport:1860(size=32) *-usb:3 description: USB controller product: NM10/ICH7 Family USB UHCI Controller #4 vendor: Intel Corporation physical id: 1d.3 bus info: pci@0000:00:1d.3 version: 02 width: 32 bits clock: 33MHz capabilities: uhci bus_master configuration: driver=uhci_hcd latency=0 resources: irq:16 ioport:1880(size=32) *-usb:4 description: USB controller product: NM10/ICH7 Family USB2 EHCI Controller vendor: Intel Corporation physical id: 1d.7 bus info: pci@0000:00:1d.7 version: 02 width: 32 bits clock: 33MHz capabilities: pm debug ehci bus_master cap_list configuration: 
driver=ehci-pci latency=0 resources: irq:23 memory:d0544000-d05443ff *-pci:3 description: PCI bridge product: 82801 Mobile PCI Bridge vendor: Intel Corporation physical id: 1e bus info: pci@0000:00:1e.0 version: e2 width: 32 bits clock: 33MHz capabilities: pci subtractive_decode bus_master cap_list resources: ioport:2000(size=4096) memory:d0000000-d00fffff ioport:80000000(size=67108864) *-pcmcia description: CardBus bridge product: PCIxx12 Cardbus Controller vendor: Texas Instruments physical id: 4 bus info: pci@0000:0a:04.0 version: 00 width: 32 bits clock: 33MHz capabilities: pcmcia bus_master cap_list configuration: driver=yenta_cardbus latency=176 maxlatency=5 mingnt=192 resources: irq:17 memory:d0004000-d0004fff ioport:2400(size=256) ioport:2800(size=256) memory:80000000-83ffffff memory:88000000-8bffffff *-firewire description: FireWire (IEEE 1394) product: PCIxx12 OHCI Compliant IEEE 1394 Host Controller vendor: Texas Instruments physical id: 4.1 bus info: pci@0000:0a:04.1 version: 00 width: 32 bits clock: 33MHz capabilities: pm ohci bus_master cap_list configuration: driver=firewire_ohci latency=64 maxlatency=4 mingnt=3 resources: irq:17 memory:d0007000-d00077ff memory:d0000000-d0003fff *-storage description: Mass storage controller product: 5-in-1 Multimedia Card Reader (SD/MMC/MS/MS PRO/xD) vendor: Texas Instruments physical id: 4.2 bus info: pci@0000:0a:04.2 version: 00 width: 32 bits clock: 33MHz capabilities: storage pm bus_master cap_list configuration: driver=tifm_7xx1 latency=64 maxlatency=4 mingnt=7 resources: irq:17 memory:d0005000-d0005fff *-generic description: SD Host controller product: PCIxx12 SDA Standard Compliant SD Host Controller vendor: Texas Instruments physical id: 4.3 bus info: pci@0000:0a:04.3 version: 00 width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list configuration: driver=sdhci-pci latency=64 maxlatency=4 mingnt=7 resources: irq:17 memory:d0007800-d00078ff *-network description: Ethernet interface product: PRO/100 VE Network Connection vendor: Intel Corporation physical id: 8 bus info: pci@0000:0a:08.0 logical name: eth0 version: 02 serial: 00:16:36:80:e9:92 size: 10Mbit/s capacity: 100Mbit/s width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=e100 driverversion=3.5.24-k2-NAPI duplex=half latency=64 link=no maxlatency=56 mingnt=8 multicast=yes port=MII speed=10Mbit/s resources: irq:20 memory:d0006000-d0006fff ioport:2000(size=64) *-isa description: ISA bridge product: 82801GBM (ICH7-M) LPC Interface Bridge vendor: Intel Corporation physical id: 1f bus info: pci@0000:00:1f.0 version: 02 width: 32 bits clock: 33MHz capabilities: isa bus_master cap_list configuration: driver=lpc_ich latency=0 resources: irq:0 *-ide description: IDE interface product: 82801GBM/GHM (ICH7-M Family) SATA Controller [IDE mode] vendor: Intel Corporation physical id: 1f.2 bus info: pci@0000:00:1f.2 version: 02 width: 32 bits clock: 66MHz capabilities: ide pm bus_master cap_list configuration: driver=ata_piix latency=0 resources: irq:19 ioport:1f0(size=8) ioport:3f6 ioport:170(size=8) ioport:376 ioport:18b0(size=16) *-serial UNCLAIMED description: SMBus product: NM10/ICH7 Family SMBus Controller vendor: Intel Corporation physical id: 1f.3 bus info: pci@0000:00:1f.3 version: 02 width: 32 bits clock: 33MHz configuration: latency=0 resources: ioport:18c0(size=32) *-scsi physical id: 1 logical name: scsi0 capabilities: emulated *-disk 
description: ATA Disk product: ST9250421AS vendor: Seagate physical id: 0.0.0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: SD13 serial: 5TH0B2HB size: 232GiB (250GB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 sectorsize=512 signature=000d7fd5 *-volume:0 description: EXT4 volume vendor: Linux physical id: 1 bus info: scsi@0:0.0.0,1 logical name: /dev/sda1 logical name: / version: 1.0 serial: 13bb4bdd-8cc9-40e2-a490-dbe436c2a02d size: 230GiB capacity: 230GiB capabilities: primary bootable journaled extended_attributes large_files huge_files dir_nlink recover extents ext4 ext2 initialized configuration: created=2014-06-01 17:37:01 filesystem=ext4 lastmountpoint=/ modified=2014-06-01 21:15:21 mount.fstype=ext4 mount.options=rw,relatime,errors=remount-ro,data=ordered mounted=2014-06-01 21:15:21 state=mounted *-volume:1 description: Extended partition physical id: 2 bus info: scsi@0:0.0.0,2 logical name: /dev/sda2 size: 2037MiB capacity: 2037MiB capabilities: primary extended partitioned partitioned:extended *-logicalvolume description: Linux swap / Solaris partition physical id: 5 logical name: /dev/sda5 capacity: 2037MiB capabilities: nofs *-remoteaccess UNCLAIMED vendor: Intel physical id: 1 capabilities: inbound kyra@kyra-Satellite-P105:~$

    Read the article

  • How-to delete a tree node using the context menu

    - by frank.nimphius
    Hierarchical trees in Oracle ADF make use of View Accessors, which means that only the top-level node needs to be exposed as a View Object instance on the ADF Business Components Data Model. This also means that only the top-level node has a representation in the PageDef file as a tree binding and iterator binding reference. Detail nodes are accessed through tree rule definitions that use the accessor mentioned above (or nested collections in the case of POJO or EJB business services). The tree component is configured for single node selection, which however can be declaratively changed so that users can press the ctrl key and select multiple nodes. In the following, I explain how to create a context menu on the tree for users to delete the selected tree nodes. For this, the context menu item will access a managed bean, which then determines the selected node(s), the internal ADF node bindings and the rows they represent. As mentioned, the ADF Business Components Data Model only needs to expose the top-level node data sources, which in this example is an instance of the Locations View Object. For the tree to work, you need to have associations defined between entities, which usually is done for you by Oracle JDeveloper if the database tables have foreign keys defined.

Note: As a general hint of best practices and to simplify your life: Make sure your database schema is well defined and designed before starting your development project. Don't treat the database as something organic that grows and changes with the requirements as you proceed in your project. Business service refactoring in response to database changes is possible, but should be treated as an exception, not the rule. Good database design is a necessity – even for application developers – and nothing evil.

To create the tree component, expand the Data Controls panel and drag the View Object collection to the view. From the context menu, select the tree component entry and continue with defining the tree rules that make up the hierarchical structure. As you see, when pressing the green plus icon in the Edit Tree Binding dialog, the data structure, Locations - Departments - Employees in my sample, shows without you having created a View Object instance for each of the nodes in the ADF Business Components Data Model. After you have configured the tree structure in the Edit Tree Binding dialog, you press OK and the tree is created. Select the tree in the page editor and open the Structure Window (ctrl+shift+S). In the Structure window, expand the tree node to access the contextMenu facet. Use the right mouse button to insert a Popup into the facet. Repeat the same steps to insert a Menu and a Menu Item into the Popup you created. The Menu Item text should be changed to something meaningful like "Delete". Note that the custom menu item later is added to the context menu together with the default context menu options like expand and expand all. To define the action that is executed when the menu item is clicked on, you select the Action Listener property in the Property Inspector and click the arrow icon followed by the Edit menu option. Create or select a managed bean and define a method name for the action handler. Next, select the tree component and browse to its binding property in the Property Inspector. Again, use the arrow icon | Edit option to create a component binding in the same managed bean that has the action listener defined.
The tree handle is used in the action listener code, which is shown below:

public void onTreeNodeDelete(ActionEvent actionEvent) {
  //access the tree from the JSF component reference created
  //using the af:tree "binding" property. The "binding" property
  //creates a pair of set/get methods to access the RichTree instance
  RichTree tree = this.getTreeHandler();

  //get the list of selected row keys
  RowKeySet rks = tree.getSelectedRowKeys();

  //access the iterator to loop over selected nodes
  Iterator rksIterator = rks.iterator();

  //The CollectionModel represents the tree model and is
  //accessed from the tree "value" property
  CollectionModel model = (CollectionModel) tree.getValue();

  //The CollectionModel is a wrapper for the ADF tree binding
  //class, which is JUCtrlHierBinding
  JUCtrlHierBinding treeBinding =
      (JUCtrlHierBinding) model.getWrappedData();

  //loop over the selected nodes and delete the rows they
  //represent
  while (rksIterator.hasNext()) {
    List nodeKey = (List) rksIterator.next();
    //find the ADF node binding using the node key
    JUCtrlHierNodeBinding node =
        treeBinding.findNodeByKeyPath(nodeKey);
    //delete the row
    Row rw = node.getRow();
    rw.remove();
  }

  //only refresh the tree if tree nodes have been selected
  if (rks.size() > 0) {
    AdfFacesContext adfFacesContext =
        AdfFacesContext.getCurrentInstance();
    adfFacesContext.addPartialTarget(tree);
  }
}

Note: To enable multi-node selection for a tree, select the tree and change the row selection setting from "single" to "multiple".
Note: a fully pictured version of this post will become available at the end of the month in a PDF summary on ADF Code Corner: http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html

    Read the article

  • How to configure TATA Photon+ EC1261 HUAWEI

    - by user3215
    I'm running Ubuntu 10.04. I have a newly purchased TATA Photon+ Internet connection which supports Windows and Mac. On the Internet I found an article saying that it could be configured on Linux. I followed the steps to install it on Ubuntu from this link. I am still not able to get online, and need some help. Also, it is very slow, but I was told that I would see speeds up to 3.1 Mbps. I don't have wvdial installed and cannot install it from apt as I'm not connected to the Internet. Booting into Windows, I downloaded the wvdial .deb package and tried to install it on Ubuntu, but it ended with a dependency problem. Somehow (I don't know how) I got connected to the Internet once, automatically. Immediately I installed the wvdial package, and after this I followed the tutorials (I could not browse and upload the files here). Since then it shows that the device is connected in the network connections, but there is no Internet connection. Once I disable the device, it won't show as connected again and I'll have to restart my system. Sometimes the device itself is not detected (wondering if there is any command to re-read all devices).

Output of wvdialconf /etc/wvdial.conf:

#wvdialconf /etc/wvdial.conf
Editing `/etc/wvdial.conf'.
Scanning your serial ports for a modem.
ttyS0<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud
ttyS0<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 115200 baud
ttyS0<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up.
Modem Port Scan<*1>: S1 S2 S3
WvModem<*1>: Cannot get information for serial port.
ttyUSB0<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud
ttyUSB0<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 9600 baud
ttyUSB0<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up.
WvModem<*1>: Cannot get information for serial port.
ttyUSB1<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud
ttyUSB1<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 9600 baud
ttyUSB1<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up.
WvModem<*1>: Cannot get information for serial port.
ttyUSB2<*1>: ATQ0 V1 E1 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 Z -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK
ttyUSB2<*1>: Modem Identifier: ATI -- Manufacturer: +GMI: HUAWEI TECHNOLOGIES CO., LTD
ttyUSB2<*1>: Speed 9600: AT -- OK
ttyUSB2<*1>: Max speed is 9600; that should be safe.
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK
Found a modem on /dev/ttyUSB2.
Modem configuration written to /etc/wvdial.conf.
ttyUSB2<Info>: Speed 9600; init "ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0"

Output of wvdial:

#wvdial
--> WvDial: Internet dialer version 1.60
--> Cannot get information for serial port.
--> Initializing modem.
--> Sending: ATZ
ATZ
OK
--> Sending: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
OK
--> Sending: AT+CRM=1
AT+CRM=1
OK
--> Modem initialized.
--> Sending: ATDT#777
--> Waiting for carrier.
ATDT#777
CONNECT
--> Carrier detected. Starting PPP immediately.
--> Starting pppd at Sat Oct 16 15:30:47 2010 --> Pid of pppd: 5681 --> Using interface ppp0 --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> local IP address 14.96.147.104 --> pppd: (u;[08]@s;[08]`{;[08] --> remote IP address 172.29.161.223 --> pppd: (u;[08]@s;[08]`{;[08] --> primary DNS address 121.40.152.90 --> pppd: (u;[08]@s;[08]`{;[08] --> secondary DNS address 121.40.152.100 --> pppd: (u;[08]@s;[08]`{;[08] Output of log message /var/log/messages: Oct 16 15:29:44 avyakta-desktop pppd[5119]: secondary DNS address 121.242.190.180 Oct 16 15:29:58 desktop pppd[5119]: Terminating on signal 15 Oct 16 15:29:58 desktop pppd[5119]: Connect time 0.3 minutes. Oct 16 15:29:58 desktop pppd[5119]: Sent 0 bytes, received 177 bytes. Oct 16 15:29:58 desktop pppd[5119]: Connection terminated. Oct 16 15:30:47 desktop pppd[5681]: pppd 2.4.5 started by root, uid 0 Oct 16 15:30:47 desktop pppd[5681]: Using interface ppp0 Oct 16 15:30:47 desktop pppd[5681]: Connect: ppp0 <--> /dev/ttyUSB2 Oct 16 15:30:47 desktop pppd[5681]: CHAP authentication succeeded Oct 16 15:30:47 desktop pppd[5681]: CHAP authentication succeeded Oct 16 15:30:48 desktop pppd[5681]: local IP address 14.96.147.104 Oct 16 15:30:48 desktop pppd[5681]: remote IP address 172.29.161.223 Oct 16 15:30:48 desktop pppd[5681]: primary DNS address 121.40.152.90 Oct 16 15:30:48 desktop pppd[5681]: secondary DNS address 121.40.152.100 EDIT 1 : I tried the following sudo stop network-manager sudo killall modem-manager sudo /usr/sbin/modem-manager --debug > ~/mm.log 2>&1 & sudo /usr/sbin/NetworkManager --no-daemon > ~/nm.log 2>&1 & Output of mm.log: #vim ~/mm.log: ** Message: Loaded plugin Option High-Speed ** Message: Loaded plugin Option ** Message: Loaded plugin Huawei ** Message: Loaded plugin Longcheer ** Message: Loaded plugin AnyData ** Message: Loaded plugin ZTE ** Message: Loaded plugin Ericsson MBM ** Message: Loaded plugin Sierra ** Message: Loaded plugin Generic ** Message: Loaded plugin Gobi ** Message: Loaded plugin Novatel ** Message: Loaded plugin Nokia ** Message: Loaded plugin MotoC Output of nm.log: #vim ~/nm.log: NetworkManager: <info> starting... NetworkManager: <info> modem-manager is now available NetworkManager: SCPlugin-Ifupdown: init! NetworkManager: SCPlugin-Ifupdown: update_system_hostname NetworkManager: SCPluginIfupdown: guessed connection type (eth0) = 802-3-ethernet NetworkManager: SCPlugin-Ifupdown: update_connection_setting_from_if_block: name:eth0, type:802-3-ethernet, id:Ifupdown (eth0), uuid: 681b428f-beaf-8932-dce4-678ed5bae28e NetworkManager: SCPlugin-Ifupdown: addresses count: 1 NetworkManager: SCPlugin-Ifupdown: No dns-nameserver configured in /etc/network/interfaces NetworkManager: nm-ifupdown-connection.c.119 - invalid connection read from /etc/network/interfaces: (1) addresses NetworkManager: SCPluginIfupdown: management mode: unmanaged NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/pci0000:00/0000:00:14.4/0000:02:02.0/net/eth1, iface: eth1) NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/pci0000:00/0000:00:14.4/0000:02:02.0/net/eth1, iface: eth1): no ifupdown configuration found. NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/lo, iface: lo) @
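For reference, a working setup for this kind of CDMA EVDO device usually comes down to a few lines in /etc/wvdial.conf on top of what wvdialconf detected. The sketch below is an assumption based on the output above, not a verified configuration; Tata Photon+ commonly dials #777, and the username/password shown are frequently used defaults that may differ for your account:

[Dialer Defaults]
Modem = /dev/ttyUSB2
Baud = 9600
Init1 = ATZ
Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
Phone = #777
Username = internet
Password = internet
Stupid Mode = 1
ISDN = 0
Modem Type = Analog Modem

With something like that in place, running sudo wvdial should negotiate PPP and bring up ppp0, as in the log above.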

    Read the article

  • DTracing TCP congestion control

    - by user12820842
    In a previous post, I showed how we can use DTrace to probe TCP receive and send window events. TCP receive and send windows are in effect both about flow-controlling how much data can be received - the receive window reflects how much data the local TCP is prepared to receive, while the send window simply reflects the size of the receive window of the peer TCP. Both then represent flow control as imposed by the receiver. However, consider that without the sender imposing flow control, and a slow link to a peer, TCP will simply fill up its window with sent segments. Dealing with multiple TCP implementations filling their peer TCP's receive windows in this manner, busy intermediate routers may drop some of these segments, leading to timeout and retransmission, which may again lead to drops. This is termed congestion, and TCP has multiple congestion control strategies. We can see that in this example, we need to have some way of adjusting how much data we send depending on how quickly we receive acknowledgement - if we get ACKs quickly, we can safely send more segments, but if acknowledgements come slowly, we should proceed with more caution. More generally, we need to implement flow control on the send side also.

Slow Start and Congestion Avoidance
From RFC2581, let's examine the relevant variables: "The congestion window (cwnd) is a sender-side limit on the amount of data the sender can transmit into the network before receiving an acknowledgment (ACK). Another state variable, the slow start threshold (ssthresh), is used to determine whether the slow start or congestion avoidance algorithm is used to control data transmission."

Slow start is used to probe the network's ability to handle transmission bursts both when a connection is first created and when retransmission timers fire. The latter case is important, as the fact that we have effectively lost TCP data acts as a motivator for re-probing how much data the network can handle from the sending TCP. The congestion window (cwnd) is initialized to a relatively small value, generally a low multiple of the sending maximum segment size. When slow start kicks in, we will only send that number of bytes before waiting for acknowledgement. When acknowledgements are received, the congestion window is increased in size until cwnd reaches the slow start threshold (ssthresh) value. For most congestion control algorithms the window increases exponentially under slow start, assuming we receive acknowledgements. We send 1 segment, receive an ACK, increase the cwnd by 1 MSS to 2*MSS, send 2 segments, receive 2 ACKs, increase the cwnd by 2*MSS to 4*MSS, send 4 segments, etc. When the congestion window exceeds the slow start threshold, congestion avoidance is used instead of slow start. During congestion avoidance, the congestion window is generally updated by one MSS for each round-trip time as opposed to each ACK, and so cwnd growth is linear instead of exponential (we may receive multiple ACKs within a single RTT). This continues until congestion is detected. If a retransmit timer fires, congestion is assumed and the ssthresh value is reset. It is reset to a fraction of the number of bytes outstanding (unacknowledged) in the network. At the same time the congestion window is reset to a single max segment size. Thus, we initiate slow start until we start receiving acknowledgements again, at which point we can eventually flip over to congestion avoidance when cwnd > ssthresh.
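As a rough illustration of the difference in growth rates, here is a toy model only - not the Solaris implementation - where the initial window, the ssthresh value, and doubling-per-RTT under slow start are all simplifying assumptions:

# Toy model of cwnd growth: exponential under slow start, linear under
# congestion avoidance. All values are illustrative assumptions.
MSS = 1460
cwnd = 2 * MSS           # assumed initial window: a low multiple of MSS
ssthresh = 64 * MSS      # assumed slow start threshold

for rtt in range(1, 11):
    if cwnd < ssthresh:
        phase = "slow start"
        cwnd *= 2        # ~1 MSS per ACK works out to doubling per RTT
    else:
        phase = "congestion avoidance"
        cwnd += MSS      # 1 MSS per RTT
    print("RTT %2d: %-20s cwnd = %3d MSS" % (rtt, phase, cwnd // MSS))

Running it shows cwnd doubling each round trip until it crosses ssthresh, then creeping up one MSS per round trip after that.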
    Congestion control algorithms differ most in how they handle the other indication of congestion - duplicate ACKs. A duplicate ACK is a strong indication that data has been lost, since it often comes from a receiver explicitly asking for a retransmission. In some cases, a duplicate ACK may be generated at the receiver as a result of packets arriving out-of-order, so it is sensible to wait for multiple duplicate ACKs before assuming packet loss rather than out-of-order delivery. Retransmitting at that point - without waiting for the retransmission timer to expire - is termed fast retransmit. Note that on Oracle Solaris 11, the congestion control method used can be customized. See here for more details. In general, 3 or more duplicate ACKs indicate packet loss and should trigger fast retransmit. It's best not to revert to slow start in this case, as the fact that the receiver knew it was missing data suggests it has received data with a higher sequence number, so we know traffic is still flowing. Falling back to slow start would therefore be excessive, so fast recovery is used instead.

    Observing slow start and congestion avoidance

    The following script counts TCP segments sent under slow start (cwnd < ssthresh) and under congestion avoidance (cwnd >= ssthresh):

    #!/usr/sbin/dtrace -s

    #pragma D option quiet

    tcp:::connect-request
    / start[args[1]->cs_cid] == 0 /
    {
            start[args[1]->cs_cid] = 1;
    }

    tcp:::send
    / start[args[1]->cs_cid] == 1 &&
      args[3]->tcps_cwnd < args[3]->tcps_cwnd_ssthresh /
    {
            @c["Slow start", args[2]->ip_daddr, args[4]->tcp_dport] = count();
    }

    tcp:::send
    / start[args[1]->cs_cid] == 1 &&
      args[3]->tcps_cwnd >= args[3]->tcps_cwnd_ssthresh /
    {
            @c["Congestion avoidance", args[2]->ip_daddr, args[4]->tcp_dport] = count();
    }

    As we can see, the script only works on connections initiated after it is started, using the start[] associative array - indexed by connection ID - to record that a connection is new (start[cid] = 1). From there we simply differentiate send events where cwnd < ssthresh (slow start) from those where cwnd >= ssthresh (congestion avoidance). Here's the output taken when I accessed a YouTube video (where rport is 80) and from an FTP session where I put a large file onto a remote system.

    # dtrace -s tcp_slow_start.d
    ^C
    ALGORITHM              RADDR            RPORT   #SEG
    Slow start             10.153.125.222      20      6
    Slow start             138.3.237.7         80     14
    Slow start             10.153.125.222      21     18
    Congestion avoidance   10.153.125.222      20   1164

    We see that in the case of the YouTube video, slow start was exclusively used; most of the segments we sent in that case were likely ACKs. Compare this case - where 14 segments were sent using slow start - to the FTP data transfer, where only 6 segments were sent before we switched to congestion avoidance for 1164 segments. In other words, the FTP data on port 20 was predominantly sent with congestion avoidance in operation, while the FTP control session on port 21 relied exclusively on slow start. For the default congestion control algorithm - "newreno" - on Solaris 11, slow start will increase the cwnd by 1 MSS for every acknowledgement received, and by 1 MSS for each RTT in congestion avoidance mode. Different pluggable congestion control algorithms operate slightly differently. For example, "highspeed" will update the slow start cwnd by the number of bytes ACKed rather than the MSS. And to finish, here's a neat one-liner to visually display the distribution of congestion window values for all TCP connections to a given remote port using a quantization. In this example, only port 80 is in use and we see the majority of cwnd values for that port are in the 4096-8191 range.
    # dtrace -n 'tcp:::send { @q[args[4]->tcp_dport] = quantize(args[3]->tcps_cwnd); }'
    dtrace: description 'tcp:::send ' matched 10 probes
    ^C

           80
               value  ------------- Distribution ------------- count
                  -1 |                                         0
                   0 |@@@@@@                                   5
                   1 |                                         0
                   2 |                                         0
                   4 |                                         0
                   8 |                                         0
                  16 |                                         0
                  32 |                                         0
                  64 |                                         0
                 128 |                                         0
                 256 |                                         0
                 512 |                                         0
                1024 |                                         0
                2048 |@@@@@@@@@                                8
                4096 |@@@@@@@@@@@@@@@@@@@@@@@@@@              23
                8192 |                                         0

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
      This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

      Morning Coffee

      When I was a DBA, the first thing I did when I sat down at my desk at work was check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so, to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products that would track all these things for you. But at that moment, we had no recourse but to write our own PowerShell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

      "But, we have a cluster... we don't need backups"

      Sadly I've heard this line more than I would have liked to. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also from an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

      Backup, fine. How often do I take a backup?

      The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective, or RTO.
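      (An aside on the morning check above: here's a minimal T-SQL sketch of the kind of interrogation my PowerShell script ran against each server - the 24-hour threshold and the restriction to full backups are illustrative choices, not the exact production script:

      -- Databases whose most recent FULL backup (backupset.type = 'D')
      -- is missing or older than 24 hours
      SELECT d.name,
             MAX(b.backup_finish_date) AS last_full_backup
      FROM sys.databases AS d
      LEFT JOIN msdb.dbo.backupset AS b
             ON b.database_name = d.name
            AND b.type = 'D'
      WHERE d.name <> 'tempdb'
      GROUP BY d.name
      HAVING MAX(b.backup_finish_date) IS NULL
          OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());

      Run from a central server against each instance, anything this returns is a backup you need to chase before finishing that coffee.)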
      Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost-effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

      A backup is nothing more than an untested restore

      Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 or above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further - you can read more about this enhancement here. Back to the "How Often" question for a second. If you have the disk, the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time, knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

      Where to back up to? Network share? Locally? SAN volume?

      This is another topic where everybody has a favorite choice. So I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
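      To make "a backup is nothing more than an untested restore" concrete, here's a minimal sketch of the offload pattern described above - the database name, file paths and share are made-up placeholders:

      -- On the scratch server: restore the latest full backup there,
      -- then run the expensive consistency check off the production box
      RESTORE VERIFYONLY FROM DISK = N'\\backups\SalesDB_full.bak';   -- quick sanity pass

      RESTORE DATABASE SalesDB_Check
          FROM DISK = N'\\backups\SalesDB_full.bak'
          WITH MOVE 'SalesDB'     TO 'D:\Data\SalesDB_Check.mdf',    -- logical names are placeholders
               MOVE 'SalesDB_log' TO 'E:\Log\SalesDB_Check.ldf',
               REPLACE;

      DBCC CHECKDB (SalesDB_Check) WITH NO_INFOMSGS, ALL_ERRORMSGS;

      -- Meanwhile, on production, only the lighter-weight variant:
      -- DBCC CHECKDB (SalesDB) WITH PHYSICAL_ONLY;

      A restore that completes plus a clean CHECKDB is the only real proof that the backup file is usable.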

    Read the article

  • To SYNC or not to SYNC – Part 3

    - by AshishRay
    I can't believe it has been almost a year since my last blog post. I know, that's an absolute no-no in the blogosphere. And I know that "I have been busy" is not a good excuse. So - without trying to come up with an excuse - let me state this: my apologies for taking such a long time to write the next part. Without further ado, here goes. This is Part 3 of a multi-part blog article in which we are discussing various aspects of setting up Data Guard synchronous redo transport (SYNC). In Part 1 of this article, I debunked the myth that Data Guard SYNC is similar to a two-phase commit operation. In Part 2, I discussed the various ways that network latency may or may not impact a Data Guard SYNC configuration. In this article, I will talk in detail about why Data Guard SYNC is a good thing. I will also talk about distance implications for setting up such a configuration.

    So, Why Good?

    Why is Data Guard SYNC a good thing? Because, at the end of the day, this gives you the assurance of zero data loss - it doesn't matter what outage may befall your primary system. Befall! Boy, that sounds theatrical. But seriously - think about this - it minimizes your data risks. That's a big deal. Whether you have an outage due to bad disks, faulty hardware components, hardware/software bugs, physical data corruption, power failures, lightning that takes out a significant part of your data center, fire that melts your assets, water leakage from the cooling system, or human errors such as accidental deletion of online redo log files - it doesn't matter - you can have that "Om - peace" look on your face and then you can fail over to the standby system, without losing a single bit of data in your Oracle database. You will be a hero, as shown in this not-so-imaginary conversation:

    IT Manager: Well, what's the status?
    You: John is doing the trace analysis on the storage array.
    IT Manager: So? How long is that gonna take?
    You: Well, he is stuck, waiting for a response from <insert your not-so-favorite storage vendor here>.
    IT Manager: So, no root cause yet?
    You: I told you, he is stuck. We have escalated with their Support, but you know how long these things take.
    IT Manager: Darn it - the site is down!
    You: Not really …
    IT Manager: What do you mean?
    You: John is stuck, but Sreeni has already done a failover to the Data Guard standby.
    IT Manager: Whoa, whoa - wait! Failover means we lost some data, why did you do this without letting the Business group know?
    You: We didn't lose any data. Remember, we had set up Data Guard with SYNC? So now, any problems on production - we just fail over. No data loss, and we are up and running in minutes. The Business guys don't need to know.
    IT Manager: Wow! Are we great or what!!
    You: I guess …

    Ok, so you get it - SYNC is good. But as my dear friend Larry Carpenter says, "TANSTAAFL", or "There ain't no such thing as a free lunch". Yes, of course - investing in Data Guard SYNC means that you have to invest in a low-latency network, you have to monitor your applications and database especially in peak load conditions, and you cannot under-provision your standby systems. But all these are good and necessary things if you are supporting mission-critical apps that are supposed to be running 24x7. The peace of mind that this investment will give you is priceless, especially if you are serious about HA.

    How Far Can We Go?

    Someone may say at this point - well, I can't use Data Guard SYNC over my coast-to-coast deployment. Most likely - true. So how far can you go?
    Well, we have customers who have deployed Data Guard SYNC over 300+ miles! Does this mean that you can also deploy over similar distances? Duh - no! I am going to say something here that most IT managers don't like to hear - "It depends!" It depends on your application design, application response time / throughput requirements, network topology, etc. However, because of the optimal way we do SYNC, customers have been able to stretch Data Guard SYNC deployments over longer distances compared to traditional, storage-centric ways of doing this. The MAA Database 10.2 best practices paper Data Guard Redo Transport & Network Configuration and the Oracle Database 11.2 High Availability Best Practices manual talk about some of these SYNC-related metrics. For example, a test deployment of Data Guard SYNC over 330 miles with 10 ms latency showed an impact of less than 5% for a busy OLTP application. Even if you can't deploy Data Guard SYNC over your WAN distance, or if you already have an ASYNC standby located thousands of miles away, here's another nifty way to boost your HA: have a local standby, configured with SYNC. How local is "local"? Again - it depends. One customer runs a local SYNC standby across the campus. Another customer runs it across 15 miles in another data center. Both of these customers are running Data Guard SYNC as their HA standard. If a localized outage affects their primary system, no problem! They have all the data available on the standby, to which they can fail over. Very fast. In seconds. Wait - did I say "seconds"? Yes, Virginia, there is a Santa Claus. But you have to wait till the next blog article to find out more. I assure you tho' that this time you won't have to wait for another year for it.
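    For reference, here's a minimal sketch of what enabling SYNC transport looks like on the primary - the service name (standby_tns) and DB_UNIQUE_NAME (boston) are placeholders, not from any particular deployment:

    -- Primary database: ship redo synchronously (SYNC) and wait for the
    -- standby's acknowledgement (AFFIRM) before a commit is acknowledged
    ALTER SYSTEM SET log_archive_dest_2 =
      'SERVICE=standby_tns SYNC AFFIRM
       VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
       DB_UNIQUE_NAME=boston'
      SCOPE=BOTH;

    Switching the same destination back to ASYNC is the corresponding one-word change, which is part of what makes experimenting with a local SYNC standby so approachable.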

    Read the article

  • Webcast Q&A: ING on How to Scale Role Management and Compliance

    - by Tanu Sood
    Thanks to all who attended the live webcast we hosted on ING: Scaling Role Management and Access Certifications to Thousands of Applications on Wed, April 11th. For those of you who couldn't join us, the webcast replay is now available. Many thanks to our guest speaker, Mark Robison, Enterprise Architect at ING, for walking us through ING's drivers and rationale for the platform approach, the phased implementation strategy, results & metrics, roadmap and recommendations. We greatly appreciate the insight he shared with us all on the deployment synergies between Oracle Identity Manager (OIM) and Oracle Identity Analytics (OIA) to enforce streamlined user and role management and scalable compliance. Mark was also kind enough to walk us through specific solution features that helped ING manage the problem of role explosion and implement closed-loop remediation. Our host speaker, Neil Gandhi, Principal Product Manager, Oracle, rounded off the presentation by discussing common use cases and deployment scenarios we see organizations implement to automate user/identity administration and enforce closed-loop scalable compliance. Neil also called out the specific features in Oracle Identity Analytics 11gR1 that cater to expediting and streamlining compliance processes such as access certifications. While we tackled a few questions during the webcast, we have captured the responses to those that we weren't able to get to here; our sincere thanks to Mark Robison for taking the time to respond to questions specific to ING's implementation and strategy.

    Q. Did you include business-friendly entitlement descriptions, or is the business seeing application descriptors?
    A. We include very business-friendly descriptions. The OIA tool has the facility to allow this.

    Q. When doing attestation on job change, who is in the workflow to review and confirm that the employee should continue to have access? Is that a best practice?
    A. The new and old managers are in the workflow. The tool can check for any Separation of Duties (SOD) violations with both having similar accesses. It may not be a best practice, but it is a reality of doing your old and new job for a transition period on a transfer.

    Q. What versions of OIM and OIA are being used at ING?
    A. OIM 11gR1 and OIA 11gR1; the very latest versions available.

    Q. Are you using an entitlements / role catalog?
    A. Yes. We use both roles and entitlements.

    Q. What specific unexpected benefits did the Identity Warehouse provide ING?
    A. The most unanticipated was helping Legal Hold identify user IDs in the various applications. Other benefits included providing a one-stop shop for all aggregated ID information.

    Q. How fine-grained are your applications and entitlements? Did OIA and OIM support that level of granularity?
    A. We have some very fine-grained entitlements, but we roll these up into approved roles to allow for easier management. For managing very fine-grained entitlements, Oracle offers Oracle Entitlements Server. We currently do not own this software but are considering it.

    Q. Do you allow any individual access, or is everything truly role-based?
    A. We are a hybrid environment with roles and individual positive and negative entitlements.

    Q. Did you use an Agile methodology like Scrum to deliver functionality during your project?
    A. We started with waterfall, but used an agile approach to provide benefits after the initial implementation.

    Q. How did you handle rolling out the standard ID format to existing users?
    A. We just used the standard IDs for new users. We have not taken on a project to address the existing nonstandard IDs.

    Q. To avoid role explosion, how do you deal with apps that require more than a couple of entitlement TYPES? For example, an app may have different levels of access, and it may need to know the user's country/state to associate them with particular customers.
    A. We focus on the functional user and craft the role around their daily job requirements. The role captures the required application entitlements. To keep role explosion down, we use role mining in OIA and also meet with and interview the business. It is an iterative process to reach role consensus.

    Q. Great presentation! How many rounds of certifications has ING performed so far?
    A. Around 7 quarters, plus constant certifications on transfer.

    Q. Did you have executive support from the top down?
    A. Yes. The executive support was key to our success.

    Q. For your cloud instances, are you using OIA or OIM as SaaS?
    A. No. We are just provisioning and deprovisioning to various cloud providers. (ServiceNow is an example.)

    Q. How do you ensure a role owner does not get more privileges than are intended and thus violate another role? E.g., a DBA role should not get the right to run something as root, as this would affect the root role.
    A. We have SOD checks. Also, all roles are initially approved by external audit, and the role owners have to certify the roles and any changes.

    Q. What is your ratio of employees to roles?
    A. We are still in the process of going through our various lines of business, so I do not have a final ratio. From what we have seen, the ratio varies greatly depending on the line of business and the diversity of job functions. For standardized lines of business such as call centers, the ratio is very good, where we can have a single role that covers many employees. For specialized lines of business like treasury, it can be one or two people per role.

    Q. Is ING using the Oracle On Demand service?
    A. No.

    Q. Do you have to implement or migrate to OIM in order to get the Identity Warehouse, or can OIA provide the Identity Warehouse as well if you haven't reached OIM yet?
    A. No, an OIM deployment is not required to implement OIA's Identity Warehouse, but as you heard during the webcast, there are tremendous deployment synergies in deploying OIA and OIM together.

    Q. When is the Security Governor product coming out?
    A. Oracle Security Governor for Healthcare is available today.

    Hope you enjoyed the webcast, and we look forward to having you join us for the next webcast in the Customers Talk: Identity as a Platform webcast series: Toyota: Putting Customers First - Identity Platform as a Business Enabler, Wednesday, May 16th at 10 am PST / 1 pm EST. Register Today. You can also register for a live event at a city near you, where Aberdeen's Derek Brink will discuss the survey results from the recently published report "Analyzing Platform vs. Point Solution Approach in Identity". And you can do a quick (& free) online assessment of your identity programs by benchmarking them against the 160 organizations surveyed in the Aberdeen report, compliments of Oracle. Here's the slide deck from our ING webcast: ING webcast platform

    Read the article

  • Oracle: when a SELECT doesn't need UNDO

    - by Liu Maclean
    [Translated from the Chinese original.] It is commonly held that Oracle never performs dirty reads: as Tom Kyte has explained many times on AskTom, a query uses the before images stored in UNDO to build a read-consistent view of the data. But Oracle is, in the end, just another piece of RDBMS software, and this behaviour can be changed. Two hidden parameters are relevant here:

    _offline_rollback_segments and _corrupted_rollback_segments

    These two parameters are normally used, under the guidance of Oracle Support, to work around ORA-600 [4XXX]-type errors caused by undo corruption - typically to force a database open. Used incorrectly they can do real damage, which is part of why Oracle has never documented them publicly. Their typical uses are:

    - forcing a database open (FORCE OPEN DATABASE)
    - working around consistent read and delayed block cleanout problems
    - dealing with corrupted rollback segments

    Warning: do not set either parameter on a system you care about without Oracle Support's direction!

    What _offline_rollback_segments and _corrupted_rollback_segments have in common:

    - the listed undo segments (system or non-system) are not brought online, and are marked OFFLINE in UNDO$
    - the instance will not bind new transactions to those undo segments
    - active transactions in them are treated as dead and left to SMON to recover (see: Recover Dead transaction)

    What _OFFLINE_ROLLBACK_SEGMENTS (the offline undo segment list) does specifically:

    - at startup/open, the listed undo segments are still read; problems with them are recorded in the alert log and trace files, but do not prevent startup
    - when a block's ITL points into one of these segments, the segment's transaction table is still consulted to decide whether the transaction committed, so a consistent-read (CR) copy can still be produced
    - segments in the list that are corrupted or missing are reported in the alert log
    - DML that has to clean out or roll back such transactions can consume considerable CPU and may raise errors

    What _CORRUPTED_ROLLBACK_SEGMENTS (the corrupted undo segment list) does:

    - at startup/open, the listed undo segments are not read at all
    - any transaction whose undo lives in them is simply assumed to be committed; no undo is applied, much as if the segment had been dropped
    - this can leave the data logically corrupt: uncommitted changes become visible, and if bootstrap (dictionary) objects are affected, the database may fail with ORA-00704: bootstrap process failure (see: ORA-00600 [4000] / ORA-00704)
    - Oracle recommends running the TXChecker scripts afterwards to look for problem transactions

    Of the two, _corrupted_rollback_segments is the one that lets a SELECT bypass UNDO entirely, which the following demo shows:

    SQL> alter system set event= '10513 trace name context forever, level 2' scope=spfile;

    System altered.

    SQL> alter system set "_in_memory_undo"=false scope=spfile;

    System altered.

    Event 10513 at level 2 stops SMON from rolling back dead transactions; _in_memory_undo=false disables in-memory undo.

    SQL> startup force;
    ORACLE instance started.

    Total System Global Area 3140026368 bytes
    Fixed Size                  2232472 bytes
    Variable Size            1795166056 bytes
    Database Buffers         1325400064 bytes
    Redo Buffers               17227776 bytes
    Database mounted.
    Database opened.

    Session A:

    SQL> conn maclean/maclean
    Connected.

    SQL> create table maclean tablespace users as select 1 t1 from dual connect by level <= 501;

    Table created.

    SQL> exec dbms_stats.gather_table_stats('','MACLEAN');

    PL/SQL procedure successfully completed.
    SQL> set autotrace on;
    SQL> select sum(t1) from maclean;

       SUM(T1)
    ----------
           501

    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 1679547536

    ------------------------------------------------------------------------------
    | Id  | Operation          | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT   |         |     1 |     3 |     3   (0)| 00:00:01 |
    |   1 |  SORT AGGREGATE    |         |     1 |     3 |            |          |
    |   2 |   TABLE ACCESS FULL| MACLEAN |   501 |  1503 |     3   (0)| 00:00:01 |
    ------------------------------------------------------------------------------

    Statistics
    ----------------------------------------------------------
              1  recursive calls
              0  db block gets
              3  consistent gets
              0  physical reads
              0  redo size
            515  bytes sent via SQL*Net to client
            492  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed

    With no other transaction in flight, the query reads current blocks only, and consistent gets is just 3. (The execution plan stays the same throughout the demo.)

    SQL> update maclean set t1=0;

    501 rows updated.

    SQL> alter system checkpoint;

    System altered.

    Do not commit in session A. Now, from a second session:

    SQL> conn maclean/maclean
    Connected.

    SQL> set autotrace on;
    SQL> select sum(t1) from maclean;

       SUM(T1)
    ----------
           501

    Statistics
    ----------------------------------------------------------
              0  recursive calls
              0  db block gets
            505  consistent gets
              0  physical reads
            108  redo size
            515  bytes sent via SQL*Net to client
            492  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed

    This session must not see the uncommitted update, so it applies undo to roll the blocks back to their before images - consistent gets jumps to 505. Now kill session A's dedicated server process, turning its open transaction into a dead one:

    [oracle@vrh8 ~]$ ps -ef|grep LOCAL=YES |grep -v grep
    oracle    5841  5839  0 09:17 ?  00:00:00 oracleG10R25 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
    [oracle@vrh8 ~]$ kill -9 5841

    Because event 10513 stops SMON from recovering it, the dead transaction stays put:

    select ktuxeusn,
           to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS') "Time",
           ktuxesiz,
           ktuxesta
    from   x$ktuxe
    where  ktuxecfl = 'DEAD';

      KTUXEUSN Time                   KTUXESIZ KTUXESTA
    ---------- -------------------- ---------- ----------------
             2 06-AUG-2012 09:20:45          7 ACTIVE

    There is one active (dead) transaction, in undo segment 2.

    SQL> conn maclean/maclean
    Connected.

    SQL> set autotrace on;
    SQL> select sum(t1) from maclean;

       SUM(T1)
    ----------
           501

    Statistics
    ----------------------------------------------------------
              0  recursive calls
              0  db block gets
            411  consistent gets
              0  physical reads
            108  redo size
            515  bytes sent via SQL*Net to client
            492  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed

    Even after the kill, the query still returns 501 and still pays for consistent reads (411 consistent gets): the dead transaction's undo is still available and is applied as before. Now put that undo segment on the corrupted list:

    SQL> select segment_name from dba_rollback_segs where segment_id=2;

    SEGMENT_NAME
    ------------------------------
    _SYSSMU2$

    SQL> alter system set "_corrupted_rollback_segments"='_SYSSMU2$' scope=spfile;

    System altered.

    SQL> startup force;
    ORACLE instance started.

    Total System Global Area 3140026368 bytes
    Fixed Size                  2232472 bytes
    Variable Size            1795166056 bytes
    Database Buffers         1325400064 bytes
    Redo Buffers               17227776 bytes
    Database mounted.
    Database opened.

    SQL> conn maclean/maclean
    Connected.

    SQL> set autotrace on;
    SQL> select sum(t1) from maclean;

       SUM(T1)
    ----------
            94

    Statistics
    ----------------------------------------------------------
            228  recursive calls
              0  db block gets
             29  consistent gets
              5  physical reads
            116  redo size
            514  bytes sent via SQL*Net to client
            492  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              4  sorts (memory)
              0  sorts (disk)
              1  rows processed

    SQL> /

       SUM(T1)
    ----------
            94

    Statistics
    ----------------------------------------------------------
              0  recursive calls
              0  db block gets
              3  consistent gets
              0  physical reads
              0  redo size
            514  bytes sent via SQL*Net to client
            492  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed

    Consistent gets is back down to 3: because the blocks' ITLs point into an undo segment named in _corrupted_rollback_segments, the dead transaction is simply assumed to have committed, and no undo is applied at all. The price is correctness - sum(t1) is now 94, which is neither the pre-update 501 nor the post-update 0. The uncommitted update has become partially visible and the data is logically corrupt. This is exactly why these parameters are a last resort, to be used only under Oracle Support's guidance.
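    Not part of the original demo, but worth adding: once an experiment like this is finished, the event and both hidden parameters should be removed again before the database is trusted with anything. A sketch, assuming an spfile is in use:

    SQL> alter system reset "_corrupted_rollback_segments" scope=spfile sid='*';
    SQL> alter system reset "_in_memory_undo" scope=spfile sid='*';
    SQL> alter system reset event scope=spfile sid='*';
    SQL> startup force;

    Even then, the maclean table itself remains logically corrupt and would need to be rebuilt from a known-good source.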

    Read the article

  • DHCP settings out of range - Internet shuts off after a few minutes

    - by user263115
    I recently upgraded from Windows 8 to Windows 8.1 - I don't know if this has anything to do with anything; I have a 64-bit OS. My Internet goes off by itself every 5 minutes, even though the wireless icon at the lower right of the screen still shows connected. The last event in Event Viewer contained an error saying that my DHCP settings were out of range. I get my internet at my house through a portable wireless hotspot on my smartphone, but I haven't ever had any problems before, and I only have this problem on this network. If I turn airplane mode on and reset my network card, the internet will come back to life but soon die again. I don't experience this problem while on a different network or different Wi-Fi. This is really annoying - please help.

    Windows IP Configuration

       Host Name . . . . . . . . . . . . : NastyMcnastyJr
       Primary Dns Suffix  . . . . . . . :
       Node Type . . . . . . . . . . . . : Hybrid
       IP Routing Enabled. . . . . . . . : No
       WINS Proxy Enabled. . . . . . . . : No

    Ethernet adapter Local Area Connection:

       Media State . . . . . . . . . . . : Media disconnected
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : TeamViewer VPN Adapter
       Physical Address. . . . . . . . . : 00-FF-5D-13-26-21
       DHCP Enabled. . . . . . . . . . . : Yes
       Autoconfiguration Enabled . . . . : Yes

    Wireless LAN adapter Local Area Connection* 11:

       Media State . . . . . . . . . . . : Media disconnected
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Microsoft Wi-Fi Direct Virtual Adapter
       Physical Address. . . . . . . . . : F6-B7-E2-50-09-38
       DHCP Enabled. . . . . . . . . . . : Yes
       Autoconfiguration Enabled . . . . : Yes

    Wireless LAN adapter SAMMY McNASTY:

       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Broadcom 802.11n Network Adapter
       Physical Address. . . . . . . . . : F4-B7-E2-50-09-38
       DHCP Enabled. . . . . . . . . . . : Yes
       Autoconfiguration Enabled . . . . : Yes
       Link-local IPv6 Address . . . . . : fe80::3107:66bc:cf1f:c776%4(Preferred)
       IPv4 Address. . . . . . . . . . . : 192.168.43.3(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Lease Obtained. . . . . . . . . . : Friday, November 1, 2013 9:50:20 PM
       Lease Expires . . . . . . . . . . : Saturday, November 2, 2013 12:56:46 AM
       Default Gateway . . . . . . . . . : 192.168.43.1
       DHCP Server . . . . . . . . . . . : 192.168.43.1
       DHCPv6 IAID . . . . . . . . . . . : 83146722
       DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-19-F1-98-B4-20-89-84-84-61-BB
       DNS Servers . . . . . . . . . . . : 192.168.43.1
       NetBIOS over Tcpip. . . . . . . . : Enabled

    Ethernet adapter Ethernet:

       Media State . . . . . . . . . . . : Media disconnected
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Broadcom NetLink (TM) Gigabit Ethernet
       Physical Address. . . . . . . . . : 20-89-84-84-61-BB
       DHCP Enabled. . . . . . . . . . . : Yes
       Autoconfiguration Enabled . . . . : Yes

    Log Name:      System
    Source:        Microsoft-Windows-UserPnp
    Date:          10/26/2013 7:52:23 PM
    Event ID:      20003
    Task Category: (7005)
    Level:         Information
    User:          SYSTEM
    Computer:      NastyMcnastyJr
    Description:   Driver Management has concluded the process to add Service vwifibus for Device Instance ID PCI\VEN_14E4&DEV_4727&SUBSYS_E042105B&REV_01\4&3265ADAB&0&00E1 with the following status: 0.

    Log Name:      System
    Source:        Microsoft-Windows-UserPnp
    Date:          10/19/2013 3:29:12 PM
    Event ID:      20001
    Task Category: (7005)
    Level:         Information
    User:          SYSTEM
    Computer:      NastyMcnastyJr
    Description:   Driver Management concluded the process to install driver netbc64.inf_amd64_0df63b5297d0f820\netbc64.inf for Device Instance ID PCI\VEN_14E4&DEV_4727&SUBSYS_E042105B&REV_01\4&3265ADAB&0&00E1 with the following status: 0x0.

    Log Name:      System
    Source:        Microsoft-Windows-DNS-Client
    Date:          11/2/2013 12:24:59 AM
    Event ID:      1014
    Task Category: (1014)
    Level:         Warning
    Keywords:      (268435456)
    User:          NETWORK SERVICE
    Computer:      NastyMcnastyJr
    Description:   Name resolution for the name www.google.com timed out after none of the configured DNS servers responded.
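    Not in the original post, but for reference: the command-line equivalent of the airplane-mode workaround described above is to release and renew the DHCP lease on the hotspot adapter (named "SAMMY McNASTY" in the ipconfig output) and flush the DNS resolver cache - a reasonable first thing to try while troubleshooting:

    C:\> ipconfig /release "SAMMY McNASTY"
    C:\> ipconfig /renew "SAMMY McNASTY"
    C:\> ipconfig /flushdns

    Comparing the lease times before and after a renew would also show whether the phone's DHCP server is handing out unusually short leases.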

    Read the article

  • DirectX works for 64-bit but not 32-bit

    - by dtbarne
    I'm trying to play a game (Civilization 5) which was previously working but no longer does. I believe I've narrowed it down to a DirectX issue, because I get an error running dxdiag.exe in 32-bit mode. My goal (at least I believe) is to get Direct3D Acceleration "Enabled" in dxdiag (as it is in the 64-bit dxdiag). A very similar issue is here: http://answers.microsoft.com/en-us/windows/forum/windows_7-gaming/direct3d-acceleration-is-not-available-in-windows/4c345e6e-dc68-e011-8dfc-68b599b31bf5?page=1 The proposed answer, which looks very promising, doesn't work for me. Like other users in that thread, HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Direct3D\Drivers does not have a SoftwareOnly key to change. I even tried manually adding it as a string and as a DWORD, to no avail (see the reg.exe sketch after the dxdiag dump at the end of this post). I have an NVIDIA GeForce GT 525M, and before you ask, yes, I've tried updating (also uninstalling and reinstalling) my drivers. I've also tried doing the same with DirectX (and Civilization 5, for that matter). I've been debugging for some 4+ hours now after a full day of work and I've run out of ideas. I'm hoping somebody knows the solution here! :)

    Here's what I see when I open dxdiag:

    DxDiag has detected that there might have been a problem accessing Direct3D the last time this program was used. Would you like to bypass Direct3D this time?

    No - Crash.
    Yes - Works, but in the Display tab:
    DirectDraw Acceleration: Disabled
    Direct3D Acceleration: Not Available
    AGP Texture Acceleration: Not Available

    If I click "Run 64-bit DxDiag", all three are "Enabled". I should also note that I've tried the following steps as Microsoft suggests, but I'm not able to, as the "Change Settings" button is disabled:

    Some programs run very slowly - or not at all - unless Microsoft DirectDraw or Direct3D hardware acceleration is turned on. To determine this, click the Display tab, and then under DirectX Features, check to see whether DirectDraw, Direct3D, and AGP Texture Acceleration appear as Enabled. If not, try turning on hardware acceleration:
    Click to open Screen Resolution.
    Click Advanced settings.
    Click the Troubleshoot tab, and then click Change settings. If you're prompted for an administrator password or confirmation, type the password or provide confirmation.
    Move the Hardware Acceleration slider to Full.

    Full dxdiag dump:

    ------------------
    System Information
    ------------------
          Time of this report: 11/8/2012, 23:13:24
                 Machine name: DTBARNE
             Operating System: Windows 7 Professional 64-bit (6.1, Build 7601) Service Pack 1 (7601.win7sp1_gdr.120830-0333)
                     Language: English (Regional Setting: English)
          System Manufacturer: Dell Inc.
                 System Model: Dell System XPS L502X
                         BIOS: Default System BIOS
                    Processor: Intel(R) Core(TM) i5-2450M CPU @ 2.50GHz (4 CPUs), ~2.5GHz
                       Memory: 8192MB RAM
          Available OS Memory: 8086MB RAM
                    Page File: 2466MB used, 13704MB available
                  Windows Dir: C:\Windows
              DirectX Version: DirectX 11
          DX Setup Parameters: Not found
             User DPI Setting: Using System DPI
           System DPI Setting: 96 DPI (100 percent)
              DWM DPI Scaling: Disabled
               DxDiag Version: 6.01.7601.17514 32bit Unicode
             DxDiag Previously: Crashed in Direct3D (stage 2). Re-running DxDiag with "dontskip" command line parameter or choosing not to bypass information gathering when prompted might result in DxDiag successfully obtaining this information

    ------------
    DxDiag Notes
    ------------
      Display Tab 1: No problems found.
        Sound Tab 1: No problems found.
        Sound Tab 2: No problems found.
          Input Tab: No problems found.
    --------------------
    DirectX Debug Levels
    --------------------
    Direct3D:    0/4 (retail)
    DirectDraw:  0/4 (retail)
    DirectInput: 0/5 (retail)
    DirectMusic: 0/5 (retail)
    DirectPlay:  0/9 (retail)
    DirectSound: 0/5 (retail)
    DirectShow:  0/6 (retail)

    ---------------
    Display Devices
    ---------------
              Card name: Intel(R) HD Graphics 3000
           Manufacturer:
              Chip type:
               DAC type:
             Device Key: Enum\PCI\VEN_8086&DEV_0126&SUBSYS_04B61028&REV_09
         Display Memory:
       Dedicated Memory: n/a
          Shared Memory: n/a
           Current Mode: 1920 x 1080 (32 bit) (60Hz)
           Monitor Name: Generic PnP Monitor
          Monitor Model:
             Monitor Id:
            Native Mode:
            Output Type:
            Driver Name:
    Driver File Version: ()
         Driver Version:
            DDI Version:
           Driver Model: WDDM 1.1
      Driver Attributes: Final Retail
       Driver Date/Size: , 0 bytes
            WHQL Logo'd: n/a
        WHQL Date Stamp: n/a
      Device Identifier:
              Vendor ID:
              Device ID:
              SubSys ID:
            Revision ID:
     Driver Strong Name: oem11.inf:IntelGfx.NTamd64.6.0:iSNBM0:8.15.10.2696:pci\ven_8086&dev_0126&subsys_04b61028
         Rank Of Driver: 00E60001
            Video Accel:
       Deinterlace Caps: n/a
           D3D9 Overlay:
                DXVA-HD:
           DDraw Status: Disabled
             D3D Status: Not Available
             AGP Status: Not Available

    -------------
    Sound Devices
    -------------
                Description: Speakers (High Definition Audio Device)
     Default Sound Playback: Yes
     Default Voice Playback: Yes
                Hardware ID: HDAUDIO\FUNC_01&VEN_10EC&DEV_0665&SUBSYS_102804B6&REV_1000
            Manufacturer ID: 1
                 Product ID: 65535
                       Type: WDM
                Driver Name: HdAudio.sys
             Driver Version: 6.01.7601.17514 (English)
          Driver Attributes: Final Retail
                WHQL Logo'd: Yes
              Date and Size: 11/20/2010 22:23:47, 350208 bytes
                Other Files:
            Driver Provider: Microsoft
             HW Accel Level: Basic
                  Cap Flags: 0xF1F
        Min/Max Sample Rate: 100, 200000
    Static/Strm HW Mix Bufs: 1, 0
     Static/Strm HW 3D Bufs: 0, 0
                  HW Memory: 0
           Voice Management: No
     EAX(tm) 2.0 Listen/Src: No, No
       I3DL2(tm) Listen/Src: No, No
    Sensaura(tm) ZoomFX(tm): No

                Description: Digital Audio (S/PDIF) (High Definition Audio Device)
     Default Sound Playback: No
     Default Voice Playback: No
                Hardware ID: HDAUDIO\FUNC_01&VEN_10EC&DEV_0665&SUBSYS_102804B6&REV_1000
            Manufacturer ID: 1
                 Product ID: 65535
                       Type: WDM
                Driver Name: HdAudio.sys
             Driver Version: 6.01.7601.17514 (English)
          Driver Attributes: Final Retail
                WHQL Logo'd: Yes
              Date and Size: 11/20/2010 22:23:47, 350208 bytes
                Other Files:
            Driver Provider: Microsoft
             HW Accel Level: Basic
                  Cap Flags: 0xF1F
        Min/Max Sample Rate: 100, 200000
    Static/Strm HW Mix Bufs: 1, 0
     Static/Strm HW 3D Bufs: 0, 0
                  HW Memory: 0
           Voice Management: No
     EAX(tm) 2.0 Listen/Src: No, No
       I3DL2(tm) Listen/Src: No, No
    Sensaura(tm) ZoomFX(tm): No

    ---------------------
    Sound Capture Devices
    ---------------------
                Description: Microphone (High Definition Audio Device)
      Default Sound Capture: Yes
      Default Voice Capture: Yes
                Driver Name: HdAudio.sys
             Driver Version: 6.01.7601.17514 (English)
          Driver Attributes: Final Retail
              Date and Size: 11/20/2010 22:23:47, 350208 bytes
                  Cap Flags: 0x1
               Format Flags: 0xFFFFF

    -------------------
    DirectInput Devices
    -------------------
          Device Name: Mouse
             Attached: 1
        Controller ID: n/a
    Vendor/Product ID: n/a
            FF Driver: n/a

          Device Name: Keyboard
             Attached: 1
        Controller ID: n/a
    Vendor/Product ID: n/a
            FF Driver: n/a
    Poll w/ Interrupt: No

    -----------
    USB Devices
    -----------
    + USB Root Hub
    | Vendor/Product ID: 0x8086, 0x1C26
    | Matching Device ID: usb\root_hub20
    | Service: usbhub
    |
    +-+ Generic USB Hub
    | | Vendor/Product ID: 0x8087, 0x0024
    | | Location: Port_#0001.Hub_#0002
    | | Matching Device ID: usb\class_09
    | | Service: usbhub

    ----------------
    Gameport Devices
    ----------------

    ------------
    PS/2 Devices
------------
+ Standard PS/2 Keyboard
| Matching Device ID: *pnp0303
| Service: i8042prt
|
+ Terminal Server Keyboard Driver
| Matching Device ID: root\rdp_kbd
| Upper Filters: kbdclass
| Service: TermDD
|
+ Synaptics PS/2 Port TouchPad
| Matching Device ID: *dll04b6
| Upper Filters: SynTP
| Service: i8042prt
|
+ Terminal Server Mouse Driver
| Matching Device ID: root\rdp_mou
| Upper Filters: mouclass
| Service: TermDD

------------------------
Disk & DVD/CD-ROM Drives
------------------------
Drive: C:
Free Space: 26.2 GB
Total Space: 122.0 GB
File System: NTFS
Model: M4-CT128M4SSD2 ATA Device

Drive: D:
Model: Optiarc DVDRWBD BC-5540H ATA Device
Driver: c:\windows\system32\drivers\cdrom.sys, 6.01.7601.17514 (English), , 0 bytes

--------------
System Devices
--------------
Name: High Definition Audio Controller
Device ID: PCI\VEN_8086&DEV_1C20&SUBSYS_04B61028&REV_05\3&11583659&0&D8
Driver: n/a

Name: PCI standard host CPU bridge
Device ID: PCI\VEN_8086&DEV_0104&SUBSYS_04B61028&REV_09\3&11583659&0&00
Driver: n/a

Name: PCI standard PCI-to-PCI bridge
Device ID: PCI\VEN_8086&DEV_1C1A&SUBSYS_04B61028&REV_B5\3&11583659&0&E5
Driver: n/a

Name: PCI standard PCI-to-PCI bridge
Device ID: PCI\VEN_8086&DEV_0101&SUBSYS_20108086&REV_09\3&11583659&0&08
Driver: n/a

Name: PCI standard PCI-to-PCI bridge
Device ID: PCI\VEN_8086&DEV_1C18&SUBSYS_04B61028&REV_B5\3&11583659&0&E4
Driver: n/a

Name: Intel(R) Centrino(R) Advanced-N 6230
Device ID: PCI\VEN_8086&DEV_0091&SUBSYS_52218086&REV_34\4&2634DE8D&0&00E1
Driver: n/a

Name: PCI standard ISA bridge
Device ID: PCI\VEN_8086&DEV_1C4B&SUBSYS_04B61028&REV_05\3&11583659&0&F8
Driver: n/a

Name: PCI standard PCI-to-PCI bridge
Device ID: PCI\VEN_8086&DEV_1C16&SUBSYS_04B61028&REV_B5\3&11583659&0&E3
Driver: n/a

Name: Realtek PCIe GBE Family Controller
Device ID: PCI\VEN_10EC&DEV_8168&SUBSYS_04B61028&REV_06\4&109EAB2F&0&00E5
Driver: n/a

Name: Intel(R) Management Engine Interface
Device ID: PCI\VEN_8086&DEV_1C3A&SUBSYS_04B61028&REV_04\3&11583659&0&B0
Driver: n/a

Name: PCI standard PCI-to-PCI bridge
Device ID: PCI\VEN_8086&DEV_1C12&SUBSYS_04B61028&REV_B5\3&11583659&0&E1
Driver: n/a

Name: NVIDIA GeForce GT 525M
Device ID: PCI\VEN_10DE&DEV_0DF5&SUBSYS_04B61028&REV_A1\4&4DCA75F&0&0008
Driver: n/a

Name: Standard Enhanced PCI to USB Host Controller
Device ID: PCI\VEN_8086&DEV_1C2D&SUBSYS_04B61028&REV_05\3&11583659&0&D0
Driver: n/a

Name: PCI standard PCI-to-PCI bridge
Device ID: PCI\VEN_8086&DEV_1C10&SUBSYS_04B61028&REV_B5\3&11583659&0&E0
Driver: n/a

Name: Standard Enhanced PCI to USB Host Controller
Device ID: PCI\VEN_8086&DEV_1C26&SUBSYS_04B61028&REV_05\3&11583659&0&E8
Driver: n/a

Name: Standard AHCI 1.0 Serial ATA Controller
Device ID: PCI\VEN_8086&DEV_1C03&SUBSYS_04B61028&REV_05\3&11583659&0&FA
Driver: n/a

Name: SM Bus Controller
Device ID: PCI\VEN_8086&DEV_1C22&SUBSYS_04B61028&REV_05\3&11583659&0&FB
Driver: n/a

Name: Intel(R) HD Graphics 3000
Device ID: PCI\VEN_8086&DEV_0126&SUBSYS_04B61028&REV_09\3&11583659&0&10
Driver: n/a

Name: Renesas Electronics USB 3.0 Host Controller
Device ID: PCI\VEN_1033&DEV_0194&SUBSYS_04B61028&REV_04\4&3494AC3A&0&00E3
Driver: n/a

------------------
DirectShow Filters
------------------

DirectShow Filters:
WMAudio Decoder DMO,0x00800800,1,1,WMADMOD.DLL,6.01.7601.17514
WMAPro over S/PDIF DMO,0x00600800,1,1,WMADMOD.DLL,6.01.7601.17514
WMSpeech Decoder DMO,0x00600800,1,1,WMSPDMOD.DLL,6.01.7601.17514
MP3 Decoder DMO,0x00600800,1,1,mp3dmod.dll,6.01.7600.16385
Mpeg4s Decoder DMO,0x00800001,1,1,mp4sdecd.dll,6.01.7600.16385
WMV Screen decoder DMO,0x00600800,1,1,wmvsdecd.dll,6.01.7601.17514
WMVideo Decoder DMO,0x00800001,1,1,wmvdecod.dll,6.01.7601.17514
Mpeg43 Decoder DMO,0x00800001,1,1,mp43decd.dll,6.01.7600.16385
Mpeg4 Decoder DMO,0x00800001,1,1,mpg4decd.dll,6.01.7600.16385
DV Muxer,0x00400000,0,0,qdv.dll,6.06.7601.17514
Color Space Converter,0x00400001,1,1,quartz.dll,6.06.7601.17713
WM ASF Reader,0x00400000,0,0,qasf.dll,12.00.7601.17514
Screen Capture filter,0x00200000,0,1,wmpsrcwp.dll,12.00.7601.17514
AVI Splitter,0x00600000,1,1,quartz.dll,6.06.7601.17713
VGA 16 Color Ditherer,0x00400000,1,1,quartz.dll,6.06.7601.17713
SBE2MediaTypeProfile,0x00200000,0,0,sbe.dll,6.06.7601.17528
Microsoft DTV-DVD Video Decoder,0x005fffff,2,4,msmpeg2vdec.dll,6.01.7140.0000
AC3 Parser Filter,0x00600000,1,1,mpg2splt.ax,6.06.7601.17528
StreamBufferSink,0x00200000,0,0,sbe.dll,6.06.7601.17528
MJPEG Decompressor,0x00600000,1,1,quartz.dll,6.06.7601.17713
MPEG-I Stream Splitter,0x00600000,1,2,quartz.dll,6.06.7601.17713
SAMI (CC) Parser,0x00400000,1,1,quartz.dll,6.06.7601.17713
VBI Codec,0x00600000,1,4,VBICodec.ax,6.06.7601.17514
MPEG-2 Splitter,0x005fffff,1,0,mpg2splt.ax,6.06.7601.17528
Closed Captions Analysis Filter,0x00200000,2,5,cca.dll,6.06.7601.17514
SBE2FileScan,0x00200000,0,0,sbe.dll,6.06.7601.17528
Microsoft MPEG-2 Video Encoder,0x00200000,1,1,msmpeg2enc.dll,6.01.7601.17514
Internal Script Command Renderer,0x00800001,1,0,quartz.dll,6.06.7601.17713
MPEG Audio Decoder,0x03680001,1,1,quartz.dll,6.06.7601.17713
DV Splitter,0x00600000,1,2,qdv.dll,6.06.7601.17514
Video Mixing Renderer 9,0x00200000,1,0,quartz.dll,6.06.7601.17713
Microsoft MPEG-2 Encoder,0x00200000,2,1,msmpeg2enc.dll,6.01.7601.17514
ACM Wrapper,0x00600000,1,1,quartz.dll,6.06.7601.17713
Video Renderer,0x00800001,1,0,quartz.dll,6.06.7601.17713
MPEG-2 Video Stream Analyzer,0x00200000,0,0,sbe.dll,6.06.7601.17528
Line 21 Decoder,0x00600000,1,1,qdvd.dll,6.06.7601.17835
Video Port Manager,0x00600000,2,1,quartz.dll,6.06.7601.17713
Video Renderer,0x00400000,1,0,quartz.dll,6.06.7601.17713
VPS Decoder,0x00200000,0,0,WSTPager.ax,6.06.7601.17514
WM ASF Writer,0x00400000,0,0,qasf.dll,12.00.7601.17514
VBI Surface Allocator,0x00600000,1,1,vbisurf.ax,6.01.7601.17514
File writer,0x00200000,1,0,qcap.dll,6.06.7601.17514
iTV Data Sink,0x00600000,1,0,itvdata.dll,6.06.7601.17514
iTV Data Capture filter,0x00600000,1,1,itvdata.dll,6.06.7601.17514
DVD Navigator,0x00200000,0,3,qdvd.dll,6.06.7601.17835
Overlay Mixer2,0x00200000,1,1,qdvd.dll,6.06.7601.17835
AVI Draw,0x00600064,9,1,quartz.dll,6.06.7601.17713
RDP DShow Redirection Filter,0xffffffff,1,0,DShowRdpFilter.dll,
Microsoft MPEG-2 Audio Encoder,0x00200000,1,1,msmpeg2enc.dll,6.01.7601.17514
WST Pager,0x00200000,1,1,WSTPager.ax,6.06.7601.17514
MPEG-2 Demultiplexer,0x00600000,1,1,mpg2splt.ax,6.06.7601.17528
DV Video Decoder,0x00800000,1,1,qdv.dll,6.06.7601.17514
SampleGrabber,0x00200000,1,1,qedit.dll,6.06.7601.17514
Null Renderer,0x00200000,1,0,qedit.dll,6.06.7601.17514
MPEG-2 Sections and Tables,0x005fffff,1,0,Mpeg2Data.ax,6.06.7601.17514
Microsoft AC3 Encoder,0x00200000,1,1,msac3enc.dll,6.01.7601.17514
StreamBufferSource,0x00200000,0,0,sbe.dll,6.06.7601.17528
Smart Tee,0x00200000,1,2,qcap.dll,6.06.7601.17514
Overlay Mixer,0x00200000,0,0,qdvd.dll,6.06.7601.17835
AVI Decompressor,0x00600000,1,1,quartz.dll,6.06.7601.17713
AVI/WAV File Source,0x00400000,0,2,quartz.dll,6.06.7601.17713
Wave Parser,0x00400000,1,1,quartz.dll,6.06.7601.17713
MIDI Parser,0x00400000,1,1,quartz.dll,6.06.7601.17713
Multi-file Parser,0x00400000,1,1,quartz.dll,6.06.7601.17713
File stream renderer,0x00400000,1,1,quartz.dll,6.06.7601.17713
Microsoft DTV-DVD Audio Decoder,0x005fffff,1,1,msmpeg2adec.dll,6.01.7140.0000
StreamBufferSink2,0x00200000,0,0,sbe.dll,6.06.7601.17528
AVI Mux,0x00200000,1,0,qcap.dll,6.06.7601.17514
Line 21 Decoder 2,0x00600002,1,1,quartz.dll,6.06.7601.17713
File Source (Async.),0x00400000,0,1,quartz.dll,6.06.7601.17713
File Source (URL),0x00400000,0,1,quartz.dll,6.06.7601.17713
Infinite Pin Tee Filter,0x00200000,1,1,qcap.dll,6.06.7601.17514
Enhanced Video Renderer,0x00200000,1,0,evr.dll,6.01.7601.17514
BDA MPEG2 Transport Information Filter,0x00200000,2,0,psisrndr.ax,6.06.7601.17669
MPEG Video Decoder,0x40000001,1,1,quartz.dll,6.06.7601.17713

WDM Streaming Tee/Splitter Devices:
Tee/Sink-to-Sink Converter,0x00200000,1,1,ksproxy.ax,6.01.7601.17514

Video Compressors:
WMVideo8 Encoder DMO,0x00600800,1,1,wmvxencd.dll,6.01.7600.16385
WMVideo9 Encoder DMO,0x00600800,1,1,wmvencod.dll,6.01.7600.16385
MSScreen 9 encoder DMO,0x00600800,1,1,wmvsencd.dll,6.01.7600.16385
DV Video Encoder,0x00200000,0,0,qdv.dll,6.06.7601.17514
MJPEG Compressor,0x00200000,0,0,quartz.dll,6.06.7601.17713
Cinepak Codec by Radius,0x00200000,1,1,qcap.dll,6.06.7601.17514
Intel IYUV codec,0x00200000,1,1,qcap.dll,6.06.7601.17514
Intel IYUV codec,0x00200000,1,1,qcap.dll,6.06.7601.17514
Microsoft RLE,0x00200000,1,1,qcap.dll,6.06.7601.17514
Microsoft Video 1,0x00200000,1,1,qcap.dll,6.06.7601.17514

Audio Compressors:
WM Speech Encoder DMO,0x00600800,1,1,WMSPDMOE.DLL,6.01.7600.16385
WMAudio Encoder DMO,0x00600800,1,1,WMADMOE.DLL,6.01.7600.16385
IMA ADPCM,0x00200000,1,1,quartz.dll,6.06.7601.17713
PCM,0x00200000,1,1,quartz.dll,6.06.7601.17713
Microsoft ADPCM,0x00200000,1,1,quartz.dll,6.06.7601.17713
GSM 6.10,0x00200000,1,1,quartz.dll,6.06.7601.17713
CCITT A-Law,0x00200000,1,1,quartz.dll,6.06.7601.17713
CCITT u-Law,0x00200000,1,1,quartz.dll,6.06.7601.17713
MPEG Layer-3,0x00200000,1,1,quartz.dll,6.06.7601.17713

Audio Capture Sources:
Microphone (High Definition Aud,0x00200000,0,0,qcap.dll,6.06.7601.17514

PBDA CP Filters:
PBDA DTFilter,0x00600000,1,1,CPFilters.dll,6.06.7601.17528
PBDA ETFilter,0x00200000,0,0,CPFilters.dll,6.06.7601.17528
PBDA PTFilter,0x00200000,0,0,CPFilters.dll,6.06.7601.17528

Midi Renderers:
Default MidiOut Device,0x00800000,1,0,quartz.dll,6.06.7601.17713
Microsoft GS Wavetable Synth,0x00200000,1,0,quartz.dll,6.06.7601.17713

WDM Streaming Capture Devices:
HD Audio Microphone 2,0x00200000,1,1,ksproxy.ax,6.01.7601.17514
Integrated Webcam,0x00200000,1,2,ksproxy.ax,6.01.7601.17514

WDM Streaming Rendering Devices:
HD Audio Headphone/Speakers,0x00200000,1,1,ksproxy.ax,6.01.7601.17514
HD Audio SPDIF out,0x00200000,1,1,ksproxy.ax,6.01.7601.17514

BDA Network Providers:
Microsoft ATSC Network Provider,0x00200000,0,1,MSDvbNP.ax,6.06.7601.17514
Microsoft DVBC Network Provider,0x00200000,0,1,MSDvbNP.ax,6.06.7601.17514
Microsoft DVBS Network Provider,0x00200000,0,1,MSDvbNP.ax,6.06.7601.17514
Microsoft DVBT Network Provider,0x00200000,0,1,MSDvbNP.ax,6.06.7601.17514
Microsoft Network Provider,0x00200000,0,1,MSNP.ax,6.06.7601.17514

Video Capture Sources:
Integrated Webcam,0x00200000,1,2,ksproxy.ax,6.01.7601.17514

Multi-Instance Capable VBI Codecs:
VBI Codec,0x00600000,1,4,VBICodec.ax,6.06.7601.17514

BDA Transport Information Renderers:
BDA MPEG2 Transport Information Filter,0x00600000,2,0,psisrndr.ax,6.06.7601.17669
MPEG-2 Sections and Tables,0x00600000,1,0,Mpeg2Data.ax,6.06.7601.17514

BDA CP/CA Filters:
Decrypt/Tag,0x00600000,1,1,EncDec.dll,6.06.7601.17708
Encrypt/Tag,0x00200000,0,0,EncDec.dll,6.06.7601.17708
PTFilter,0x00200000,0,0,EncDec.dll,6.06.7601.17708
XDS Codec,0x00200000,0,0,EncDec.dll,6.06.7601.17708

WDM Streaming Communication Transforms:
Tee/Sink-to-Sink Converter,0x00200000,1,1,ksproxy.ax,6.01.7601.17514

Audio Renderers:
Speakers (High Definition Audio,0x00200000,1,0,quartz.dll,6.06.7601.17713
Default DirectSound Device,0x00800000,1,0,quartz.dll,6.06.7601.17713
Default WaveOut Device,0x00200000,1,0,quartz.dll,6.06.7601.17713
Digital Audio (S/PDIF) (High De,0x00200000,1,0,quartz.dll,6.06.7601.17713
DirectSound: Digital Audio (S/PDIF) (High Definition Audio Device),0x00200000,1,0,quartz.dll,6.06.7601.17713
DirectSound: Speakers (High Definition Audio Device),0x00200000,1,0,quartz.dll,6.06.7601.17713

---------------------
EVR Power Information
---------------------
Current Setting: {651288E5-A7ED-4076-A96B-6CC62D848FE1} (Balanced)
Quality Flags: 2576
  Enabled:
    Force throttling
    Allow half deinterlace
    Allow scaling
  Decode Power Usage: 100
Balanced Flags: 1424
  Enabled:
    Force throttling
    Allow batching
    Force half deinterlace
    Force scaling
  Decode Power Usage: 50
PowerFlags: 1424
  Enabled:
    Force throttling
    Allow batching
    Force half deinterlace
    Force scaling
  Decode Power Usage: 0

    Read the article

  • OpenVPN - Windows 8 to Windows 2008 Server, not connecting

    - by niico
    I have followed this tutorial about setting up an OpenVPN server on Windows Server, and a client on Windows (in this case Windows 8). The server appears to be running fine, but the client fails to connect with this error:

Mon Jul 22 19:09:04 2013 Warning: cannot open --log file: C:\Program Files\OpenVPN\log\my-laptop.log: Access is denied. (errno=5)
Mon Jul 22 19:09:04 2013 OpenVPN 2.3.2 x86_64-w64-mingw32 [SSL (OpenSSL)] [LZO] [PKCS11] [eurephia] [IPv6] built on Jun 3 2013
Mon Jul 22 19:09:04 2013 MANAGEMENT: TCP Socket listening on [AF_INET]127.0.0.1:25340
Mon Jul 22 19:09:04 2013 Need hold release from management interface, waiting...
Mon Jul 22 19:09:05 2013 MANAGEMENT: Client connected from [AF_INET]127.0.0.1:25340
Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'state on'
Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'log all on'
Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'hold off'
Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'hold release'
Mon Jul 22 19:09:05 2013 Socket Buffers: R=[65536->65536] S=[65536->65536]
Mon Jul 22 19:09:05 2013 UDPv4 link local: [undef]
Mon Jul 22 19:09:05 2013 UDPv4 link remote: [AF_INET]66.666.66.666:9999
Mon Jul 22 19:09:05 2013 MANAGEMENT: >STATE:1374494945,WAIT,,,
Mon Jul 22 19:10:05 2013 TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity)
Mon Jul 22 19:10:05 2013 TLS Error: TLS handshake failed
Mon Jul 22 19:10:05 2013 SIGUSR1[soft,tls-error] received, process restarting
Mon Jul 22 19:10:05 2013 MANAGEMENT: >STATE:1374495005,RECONNECTING,tls-error,,
Mon Jul 22 19:10:05 2013 Restart pause, 2 second(s)

    Note that I have changed the IP and port number above (it uses a non-standard port for security reasons). That port is open on the hardware firewall. The server logs show a connection attempt from my client:

TLS: Initial packet from [AF_INET]118.68.xx.xx:65011, sid=081af4ed xxxxxxxx
Mon Jul 22 14:19:15 2013 118.68.xx.xx:65011 TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity)

    How can I troubleshoot this and find the problem? Thanks.

    Update - Client config file:

##############################################
# Sample client-side OpenVPN 2.0 config file #
# for connecting to multi-client server.     #
#                                            #
# This configuration can be used by multiple #
# clients, however each client should have   #
# its own cert and key files.                #
#                                            #
# On Windows, you might want to rename this  #
# file so it has a .ovpn extension           #
##############################################

# Specify that we are a client and that we
# will be pulling certain config file directives
# from the server.
client

# Use the same setting as you are using on
# the server.
# On most systems, the VPN will not function
# unless you partially or fully disable
# the firewall for the TUN/TAP interface.
;dev tap
dev tun

# Windows needs the TAP-Win32 adapter name
# from the Network Connections panel
# if you have more than one. On XP SP2,
# you may need to disable the firewall
# for the TAP adapter.
;dev-node MyTap

# Are we connecting to a TCP or
# UDP server? Use the same setting as
# on the server.
;proto tcp
proto udp

# The hostname/IP and port of the server.
# You can have multiple remote entries
# to load balance between the servers.
remote 00.00.00.00 1194
;remote 00.00.00.00 9999
;remote my-server-2 1194

# Choose a random host from the remote
# list for load-balancing. Otherwise
# try hosts in the order specified.
;remote-random

# Keep trying indefinitely to resolve the
# host name of the OpenVPN server. Very useful
# on machines which are not permanently connected
# to the internet such as laptops.
resolv-retry infinite

# Most clients don't need to bind to
# a specific local port number.
nobind

# Downgrade privileges after initialization (non-Windows only)
;user nobody
;group nobody

# Try to preserve some state across restarts.
persist-key
persist-tun

# If you are connecting through an
# HTTP proxy to reach the actual OpenVPN
# server, put the proxy server/IP and
# port number here. See the man page
# if your proxy server requires
# authentication.
;http-proxy-retry # retry on connection failures
;http-proxy [proxy server] [proxy port #]

# Wireless networks often produce a lot
# of duplicate packets. Set this flag
# to silence duplicate packet warnings.
;mute-replay-warnings

# SSL/TLS parms.
# See the server config file for more
# description. It's best to use
# a separate .crt/.key file pair
# for each client. A single ca
# file can be used for all clients.
ca "C:\\Program Files\\OpenVPN\\config\\ca.crt"
cert "C:\\Program Files\\OpenVPN\\config\\my-laptop.crt"
key "C:\\Program Files\\OpenVPN\\config\\my-laptop.key"

# Verify server certificate by checking
# that the certificate has the nsCertType
# field set to "server". This is an
# important precaution to protect against
# a potential attack discussed here:
# http://openvpn.net/howto.html#mitm
#
# To use this feature, you will need to generate
# your server certificates with the nsCertType
# field set to "server". The build-key-server
# script in the easy-rsa folder will do this.
ns-cert-type server

# If a tls-auth key is used on the server
# then every client must also have the key.
;tls-auth ta.key 1

# Select a cryptographic cipher.
# If the cipher option is used on the server
# then you must also specify it here.
;cipher x

# Enable compression on the VPN link.
# Don't enable this unless it is also
# enabled in the server config file.
comp-lzo

# Set log file verbosity.
verb 3

# Silence repeating messages
;mute 20

    Server config file:

#################################################
# Sample OpenVPN 2.0 config file for            #
# multi-client server.                          #
#                                               #
# This file is for the server side              #
# of a many-clients <-> one-server              #
# OpenVPN configuration.                        #
#                                               #
# OpenVPN also supports                         #
# single-machine <-> single-machine             #
# configurations (See the Examples page         #
# on the web site for more info).               #
#                                               #
# This config should work on Windows            #
# or Linux/BSD systems. Remember on             #
# Windows to quote pathnames and use            #
# double backslashes, e.g.:                     #
# "C:\\Program Files\\OpenVPN\\config\\foo.key" #
#                                               #
# Comments are preceded with '#' or ';'         #
#################################################

# Which local IP address should OpenVPN
# listen on? (optional)
;local 00.00.00.00

# Which TCP/UDP port should OpenVPN listen on?
# If you want to run multiple OpenVPN instances
# on the same machine, use a different port
# number for each one. You will need to
# open up this port on your firewall. std 1194
port 1194

# TCP or UDP server?
;proto tcp
proto udp

# "dev tun" will create a routed IP tunnel,
# "dev tap" will create an ethernet tunnel.
# Use "dev tap0" if you are ethernet bridging
# and have precreated a tap0 virtual interface
# and bridged it with your ethernet interface.
# If you want to control access policies
# over the VPN, you must create firewall
# rules for the TUN/TAP interface.
# On non-Windows systems, you can give
# an explicit unit number, such as tun0.
# On Windows, use "dev-node" for this.
# On most systems, the VPN will not function
# unless you partially or fully disable
# the firewall for the TUN/TAP interface.
;dev tap
dev tun

# Windows needs the TAP-Win32 adapter name
# from the Network Connections panel if you
# have more than one. On XP SP2 or higher,
# you may need to selectively disable the
# Windows firewall for the TAP adapter.
# Non-Windows systems usually don't need this.
;dev-node MyTap

# SSL/TLS root certificate (ca), certificate
# (cert), and private key (key). Each client
# and the server must have their own cert and
# key file. The server and all clients will
# use the same ca file.
#
# See the "easy-rsa" directory for a series
# of scripts for generating RSA certificates
# and private keys. Remember to use
# a unique Common Name for the server
# and each of the client certificates.
#
# Any X509 key management system can be used.
# OpenVPN can also use a PKCS #12 formatted key file
# (see "pkcs12" directive in man page).
ca "C:\\Program Files\\OpenVPN\\config\\ca.crt"
cert "C:\\Program Files\\OpenVPN\\config\\server.crt"
key "C:\\Program Files\\OpenVPN\\config\\server.key"

# Diffie-Hellman parameters.
# Generate your own with:
#   openssl dhparam -out dh1024.pem 1024
# Substitute 2048 for 1024 if you are using
# 2048 bit keys.
dh "C:\\Program Files\\OpenVPN\\config\\dh2048.pem"

# Configure server mode and supply a VPN subnet
# for OpenVPN to draw client addresses from.
# The server will take 10.8.0.1 for itself,
# the rest will be made available to clients.
# Each client will be able to reach the server
# on 10.8.0.1. Comment this line out if you are
# ethernet bridging. See the man page for more info.
server 10.8.0.0 255.255.255.0

# Maintain a record of client <-> virtual IP address
# associations in this file. If OpenVPN goes down or
# is restarted, reconnecting clients can be assigned
# the same virtual IP address from the pool that was
# previously assigned.
ifconfig-pool-persist ipp.txt

# Configure server mode for ethernet bridging.
# You must first use your OS's bridging capability
# to bridge the TAP interface with the ethernet
# NIC interface. Then you must manually set the
# IP/netmask on the bridge interface, here we
# assume 10.8.0.4/255.255.255.0. Finally we
# must set aside an IP range in this subnet
# (start=10.8.0.50 end=10.8.0.100) to allocate
# to connecting clients. Leave this line commented
# out unless you are ethernet bridging.
;server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100

# Configure server mode for ethernet bridging
# using a DHCP-proxy, where clients talk
# to the OpenVPN server-side DHCP server
# to receive their IP address allocation
# and DNS server addresses. You must first use
# your OS's bridging capability to bridge the TAP
# interface with the ethernet NIC interface.
# Note: this mode only works on clients (such as
# Windows), where the client-side TAP adapter is
# bound to a DHCP client.
;server-bridge

# Push routes to the client to allow it
# to reach other private subnets behind
# the server. Remember that these
# private subnets will also need
# to know to route the OpenVPN client
# address pool (10.8.0.0/255.255.255.0)
# back to the OpenVPN server.
;push "route 192.168.10.0 255.255.255.0"
;push "route 192.168.20.0 255.255.255.0"

# To assign specific IP addresses to specific
# clients or if a connecting client has a private
# subnet behind it that should also have VPN access,
# use the subdirectory "ccd" for client-specific
# configuration files (see man page for more info).

# EXAMPLE: Suppose the client
# having the certificate common name "Thelonious"
# also has a small subnet behind his connecting
# machine, such as 192.168.40.128/255.255.255.248.
# First, uncomment out these lines:
;client-config-dir ccd
;route 192.168.40.128 255.255.255.248
# Then create a file ccd/Thelonious with this line:
#   iroute 192.168.40.128 255.255.255.248
# This will allow Thelonious' private subnet to
# access the VPN. This example will only work
# if you are routing, not bridging, i.e. you are
# using "dev tun" and "server" directives.

# EXAMPLE: Suppose you want to give
# Thelonious a fixed VPN IP address of 10.9.0.1.
# First uncomment out these lines:
;client-config-dir ccd
;route 10.9.0.0 255.255.255.252
# Then add this line to ccd/Thelonious:
#   ifconfig-push 10.9.0.1 10.9.0.2

# Suppose that you want to enable different
# firewall access policies for different groups
# of clients. There are two methods:
# (1) Run multiple OpenVPN daemons, one for each
#     group, and firewall the TUN/TAP interface
#     for each group/daemon appropriately.
# (2) (Advanced) Create a script to dynamically
#     modify the firewall in response to access
#     from different clients. See man
#     page for more info on learn-address script.
;learn-address ./script

# If enabled, this directive will configure
# all clients to redirect their default
# network gateway through the VPN, causing
# all IP traffic such as web browsing and
# DNS lookups to go through the VPN
# (The OpenVPN server machine may need to NAT
# or bridge the TUN/TAP interface to the internet
# in order for this to work properly).
;push "redirect-gateway def1 bypass-dhcp"

# Certain Windows-specific network settings
# can be pushed to clients, such as DNS
# or WINS server addresses. CAVEAT:
# http://openvpn.net/faq.html#dhcpcaveats
# The addresses below refer to the public
# DNS servers provided by opendns.com.
;push "dhcp-option DNS 208.67.222.222"
;push "dhcp-option DNS 208.67.220.220"

# Uncomment this directive to allow different
# clients to be able to "see" each other.
# By default, clients will only see the server.
# To force clients to only see the server, you
# will also need to appropriately firewall the
# server's TUN/TAP interface.
;client-to-client

# Uncomment this directive if multiple clients
# might connect with the same certificate/key
# files or common names. This is recommended
# only for testing purposes. For production use,
# each client should have its own certificate/key
# pair.
#
# IF YOU HAVE NOT GENERATED INDIVIDUAL
# CERTIFICATE/KEY PAIRS FOR EACH CLIENT,
# EACH HAVING ITS OWN UNIQUE "COMMON NAME",
# UNCOMMENT THIS LINE OUT.
;duplicate-cn

# The keepalive directive causes ping-like
# messages to be sent back and forth over
# the link so that each side knows when
# the other side has gone down.
# Ping every 10 seconds, assume that remote
# peer is down if no ping received during
# a 120 second time period.
keepalive 10 120

# For extra security beyond that provided
# by SSL/TLS, create an "HMAC firewall"
# to help block DoS attacks and UDP port flooding.
#
# Generate with:
#   openvpn --genkey --secret ta.key
#
# The server and each client must have
# a copy of this key.
# The second parameter should be '0'
# on the server and '1' on the clients.
;tls-auth ta.key 0 # This file is secret

# Select a cryptographic cipher.
# This config item must be copied to
# the client config file as well.
;cipher BF-CBC        # Blowfish (default)
;cipher AES-128-CBC   # AES
;cipher DES-EDE3-CBC  # Triple-DES

# Enable compression on the VPN link.
# If you enable it here, you must also
# enable it in the client config file.
comp-lzo

# The maximum number of concurrently connected
# clients we want to allow.
;max-clients 100

# It's a good idea to reduce the OpenVPN
# daemon's privileges after initialization.
#
# You can uncomment this out on
# non-Windows systems.
;user nobody
;group nobody

# The persist options will try to avoid
# accessing certain resources on restart
# that may no longer be accessible because
# of the privilege downgrade.
persist-key
persist-tun

# Output a short status file showing
# current connections, truncated
# and rewritten every minute.
status openvpn-status.log

# By default, log messages will go to the syslog (or
# on Windows, if running as a service, they will go to
# the "\Program Files\OpenVPN\log" directory).
# Use log or log-append to override this default.
# "log" will truncate the log file on OpenVPN startup,
# while "log-append" will append to it. Use one
# or the other (but not both).
;log openvpn.log
;log-append openvpn.log

# Set the appropriate level of log
# file verbosity.
#
# 0 is silent, except for fatal errors
# 4 is reasonable for general usage
# 5 and 6 can help to debug connection problems
# 9 is extremely verbose
verb 3

# Silence repeating messages. At most 20
# sequential messages of the same message
# category will be output to the log.
;mute 20

    I have changed the IPs for security.
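    The TLS handshake timeout means the UDP exchange is failing in at least one direction, independently of certificates. One way to narrow this down is to hand-craft the first handshake packet and see whether the server answers at all; since tls-auth is commented out in both configs above, the server should reply to any well-formed client reset. Below is a minimal Python sketch of such a probe (not from the original post; the host and port are placeholders for the real, obfuscated values, and the packet layout follows the published OpenVPN wire protocol):

        import os
        import socket
        import struct

        SERVER = ("vpn.example.com", 9999)  # placeholders - use the real host/port

        # P_CONTROL_HARD_RESET_CLIENT_V2: opcode 7 in the top 5 bits, key id 0,
        # followed by an 8-byte random session id, an empty ACK packet-id
        # array (one zero byte) and a 4-byte message packet id of 0.
        probe = bytes([7 << 3]) + os.urandom(8) + b"\x00" + struct.pack(">I", 0)

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(5.0)
        sock.sendto(probe, SERVER)
        try:
            data, peer = sock.recvfrom(1024)
            # A first byte of 0x40 would be P_CONTROL_HARD_RESET_SERVER_V2
            # (opcode 8, key id 0): the server saw us and answered.
            print("reply from %s:%d, first byte 0x%02x" % (peer[0], peer[1], data[0]))
        except socket.timeout:
            print("no reply: the datagram (or the answer) is dropped en route")

    A reply proves the port-forward and firewall path works in both directions, pointing the investigation at the client instead; no reply points at the NAT/firewall path. Separately, the "Access is denied (errno=5)" warning at the top of the client log suggests the client is not running elevated, which is worth fixing first.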

    Read the article

  • W7 routing - traffic not going to default gateway

    - by Ian Macintosh
    I have a really strange Windows 7 IPv4 routing issue that I can't get to the bottom of. In summary: the default gateway is set to 192.168.254.253, but the machine is actually sending traffic via 192.168.254.254. Here's a network diagram:

[Network diagram, summarised: the internet feeds a 10 Mb fibre line and two ADSL lines. The fibre terminates on a Juniper box (public IP address), which feeds the Untangle GW (192.168.254.253); the two ADSL lines terminate on Draytek DSL routers (172.16.0.x), which feed the Draytek Dual WAN Router (192.168.254.254). Both gateways sit on the LAN together with the Windows 7 workstations 192.168.254.38 and 192.168.254.77.]

    This is a recently (a few weeks ago) converted fibre site, with the original two DSL lines still attached and running. An Untangle (firewall) was installed with the fibre line. Here is the affected PC's network configuration:

C:\>ipconfig /allcompartments /all

Windows IP Configuration

==============================================================================
Network Information for Compartment 1 (ACTIVE)
==============================================================================
Host Name . . . . . . . . . . . . : COMP36
Primary Dns Suffix  . . . . . . . : XXXXXX.local
Node Type . . . . . . . . . . . . : Broadcast
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
DNS Suffix Search List. . . . . . : XXXXXX.local

Ethernet adapter Local Area Connection 2:

Connection-specific DNS Suffix  . : XXXXXX.local
Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller #2
Physical Address. . . . . . . . . : C8-9C-DC-33-F1-65
DHCP Enabled. . . . . . . . . . . : Yes
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::3925:86a5:7066:ab92%15(Preferred)
IPv4 Address. . . . . . . . . . . : 192.168.254.38(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Lease Obtained. . . . . . . . . . : 22 August 2012 10:20:32
Lease Expires . . . . . . . . . . : 30 August 2012 10:20:31
Default Gateway . . . . . . . . . : 192.168.254.253
DHCP Server . . . . . . . . . . . : 192.168.254.200
DHCPv6 IAID . . . . . . . . . . . : 315137244
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-14-4A-17-8D-10-78-D2-74-2F-8A
DNS Servers . . . . . . . . . . . : 192.168.254.200
Primary WINS Server . . . . . . . : 192.168.254.200
NetBIOS over Tcpip. . . . . . . . : Enabled

Tunnel adapter isatap.XXXXXX.local:

Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix  . : XXXXXX.local
Description . . . . . . . . . . . : Microsoft ISATAP Adapter
Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes

Tunnel adapter Teredo Tunneling Pseudo-Interface:

Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix  . :
Description . . . . . . . . . . . : Teredo Tunneling Pseudo-Interface
Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes

    The routing table:

C:\>route print
===========================================================================
Interface List
 15...c8 9c dc 33 f1 65 ......Realtek PCIe GBE Family Controller #2
  1...........................Software Loopback Interface 1
 10...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter
 11...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface
===========================================================================

IPv4 Route Table
===========================================================================
Active Routes:
Network Destination        Netmask          Gateway       Interface  Metric
          0.0.0.0          0.0.0.0  192.168.254.253   192.168.254.38     10
        127.0.0.0        255.0.0.0          On-link         127.0.0.1    306
        127.0.0.1  255.255.255.255          On-link         127.0.0.1    306
  127.255.255.255  255.255.255.255          On-link         127.0.0.1    306
    192.168.254.0    255.255.255.0          On-link    192.168.254.38    266
   192.168.254.38  255.255.255.255          On-link    192.168.254.38    266
  192.168.254.255  255.255.255.255          On-link    192.168.254.38    266
        224.0.0.0        240.0.0.0          On-link         127.0.0.1    306
        224.0.0.0        240.0.0.0          On-link    192.168.254.38    266
  255.255.255.255  255.255.255.255          On-link         127.0.0.1    306
  255.255.255.255  255.255.255.255          On-link    192.168.254.38    266
===========================================================================
Persistent Routes:
  None

IPv6 Route Table
===========================================================================
Active Routes:
 If Metric Network Destination      Gateway
  1    306 ::1/128                  On-link
 15    266 fe80::/64                On-link
 15    266 fe80::3925:86a5:7066:ab92/128
                                    On-link
  1    306 ff00::/8                 On-link
 15    266 ff00::/8                 On-link
===========================================================================
Persistent Routes:
  None

    And the strange routing as demonstrated by tracert:

C:\>tracert -d www.bbc.co.uk

Tracing route to www.bbc.net.uk [212.58.246.95]
over a maximum of 30 hops:

  1     1 ms     1 ms    <1 ms  192.168.254.254
  2     1 ms     1 ms     1 ms  172.16.0.254
  3    17 ms    18 ms    16 ms  XXXXXXXXXXXXXXX
  4    18 ms    19 ms    19 ms  XXXXXXXXXXXXXXX
  5    22 ms    22 ms    22 ms  XXXXXXXXXXXXXXX
  6    22 ms    21 ms    22 ms  XXXXXXXXXXXXXXX
  7    21 ms    21 ms    22 ms  217.41.169.109
  8    30 ms    32 ms    57 ms  109.159.251.227
  9    46 ms    39 ms    35 ms  109.159.251.137
 10    27 ms    66 ms    30 ms  109.159.254.116
^C

    However, when done from another Windows 7 workstation:

C:\Users\administrator>ipconfig /allcompartments /all

Windows IP Configuration

==============================================================================
Network Information for Compartment 1 (ACTIVE)
==============================================================================
Host Name . . . . . . . . . . . . : PABX-BACKUP
Primary Dns Suffix  . . . . . . . : XXXXXX.local
Node Type . . . . . . . . . . . . : Broadcast
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
DNS Suffix Search List. . . . . . : XXXXXX.local

Ethernet adapter Local Area Connection:

Connection-specific DNS Suffix  . : XXXXXX.local
Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller
Physical Address. . . . . . . . . : 8C-89-A5-94-43-84
DHCP Enabled. . . . . . . . . . . : Yes
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::9479:1c11:6f9f:ae0b%11(Preferred)
IPv4 Address. . . . . . . . . . . : 192.168.254.77(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Lease Obtained. . . . . . . . . . : 15 August 2012 08:27:18
Lease Expires . . . . . . . . . . : 27 August 2012 08:27:31
Default Gateway . . . . . . . . . : 192.168.254.253
DHCP Server . . . . . . . . . . . : 192.168.254.200
DHCPv6 IAID . . . . . . . . . . . : 244091301
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-16-C2-79-BE-8C-89-A5-94-43-84
DNS Servers . . . . . . . . . . . : 192.168.254.200
Primary WINS Server . . . . . . . : 192.168.254.200
NetBIOS over Tcpip. . . . . . . . : Enabled

Tunnel adapter isatap.XXXXXX.local:

Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix  . : XXXXXX.local
Description . . . . . . . . . . . : Microsoft ISATAP Adapter
Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes

Tunnel adapter Local Area Connection* 9:

Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix  . :
Description . . . . . . . . . . . : Microsoft 6to4 Adapter
Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes

Tunnel adapter Teredo Tunneling Pseudo-Interface:

Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix  . :
Description . . . . . . . . . . . : Teredo Tunneling Pseudo-Interface
Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes

C:\Users\administrator>

    And finally, doing a tracert from the 2nd workstation yields the expected results:

C:\Users\administrator>tracert -d www.bbc.co.uk

Tracing route to www.bbc.net.uk [212.58.244.67]
over a maximum of 30 hops:

  1    <1 ms    <1 ms    <1 ms  192.168.254.253
  2     1 ms     1 ms     1 ms  141.0.xxx.xxx
  3     2 ms     2 ms     2 ms  141.0.xxx.xxx
  4     7 ms     2 ms     2 ms  109.204.xxx.xxx
  5     2 ms     2 ms     2 ms  95.177.0.7
  6     3 ms     2 ms     2 ms  95.177.0.9
  7    30 ms     2 ms     2 ms  95.177.0.2
  8     2 ms     2 ms     2 ms  195.66.224.103
  9 ^C

    As expected, it routes via .253, and the 2nd hop is the inside interface of the Juniper NTU. I've not inspected the traffic yet; in particular, I was going to look for ICMP redirects, though it is not clear why an ICMP redirect would be issued at all. .254 used to be the default gateway before the fibre was installed. Any ideas? It doesn't make sense to me why there should be this routing issue :( The Draytek Dual WAN Router was rebooted, the PC was rebooted, and the PC's network adapter was disabled and re-enabled - all the standard stuff for when Windows loses the plot. Hopefully somebody recognises the symptoms! PS: Sorry for the long post, but I didn't want to leave out anything potentially relevant. PPS: No iSCSI is involved on this or any other workstation, so Windows 7 routing traffic through the gateway for local addresses isn't the issue.
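    For reference, the route table above can only explain traffic leaving via .253: Windows selects a route by longest prefix match, using the metric as a tie-breaker. A toy Python sketch of that selection (not from the original post; the rows are copied from the route print output above, with on-link routes represented by the interface address) confirms the default route should win for any internet destination:

        import ipaddress

        # (destination, netmask, gateway, metric) rows from "route print".
        routes = [
            ("0.0.0.0",       "0.0.0.0",       "192.168.254.253", 10),
            ("192.168.254.0", "255.255.255.0", "192.168.254.38",  266),
            ("224.0.0.0",     "240.0.0.0",     "192.168.254.38",  266),
        ]

        def next_hop(destination, table):
            """Longest prefix match, lowest metric as tie-breaker --
            roughly how the Windows IPv4 stack picks a route."""
            dest = ipaddress.ip_address(destination)
            best = None
            for net, mask, gateway, metric in table:
                network = ipaddress.ip_network("%s/%s" % (net, mask))
                if dest in network:
                    key = (network.prefixlen, -metric)
                    if best is None or key > best[0]:
                        best = (key, gateway)
            return best[1] if best else None

        print(next_hop("212.58.246.95", routes))  # -> 192.168.254.253

    Since the table says .253 yet the packets leave via .254, the forwarding decision is being overridden after route lookup; a cached ICMP redirect is the usual culprit, and on Windows 7 the cache can be inspected with netsh interface ipv4 show destinationcache (worth verifying on the affected machine).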

    Read the article

< Previous Page | 577 578 579 580 581 582 583 584 585 586 587 588  | Next Page >