Search Results

Search found 14713 results on 589 pages for 'release upgrade'.


  • Maximize Performance and Availability with Oracle Data Integration

    - by Tanu Sood
    Alert: Oracle is hosting the 12c Launch Webcast for Oracle Data Integration and Oracle GoldenGate on Tuesday, November 12 (tomorrow) to discuss the new capabilities in detail and share customer perspectives. Hear directly from customer experts and executives from SolarWorld Industries America, British Telecom and Rittman Mead and get your questions answered live by product experts. Register for this complimentary webcast today and join in the discussion tomorrow. Author: Irem Radzik, Senior Principal Product Director, Oracle. Organizations that want to use IT as a strategic point of differentiation prefer Oracle’s complete application offering to drive better business performance and optimize their IT investments. These enterprise applications are at the center of business operations, and they contain critical data that needs to be accessed continuously, as well as analyzed and acted upon in a timely manner. These systems also need to operate with high performance and availability, which means analytical functions should not degrade application performance, and even system maintenance and upgrades should not interrupt availability. Oracle’s data integration products, Oracle Data Integrator, Oracle GoldenGate, and Oracle Enterprise Data Quality, provide the core foundation for bringing data from various business-critical systems together to gain a broader, unified view. Going beyond third-party offerings, Oracle’s data integration products facilitate real-time reporting for Oracle Applications without impacting application performance, and provide the ability to upgrade and maintain the system without taking downtime. Oracle GoldenGate is certified for Oracle Applications, including E-Business Suite, Siebel CRM, PeopleSoft, and JD Edwards, for moving transactional data in real time to a dedicated operational reporting environment. This solution allows application users to offload resource-heavy queries to the reporting instance(s), reducing CPU utilization, improving OLTP performance, and extending the lifetime of existing IT assets. In addition, having a dedicated reporting instance with up-to-the-second transactional data allows optimizing the reporting environment and even decreasing costs, as GoldenGate can move only the required data from expensive mainframe environments to cost-efficient open-system platforms.  With its real-time data replication capabilities, GoldenGate is also certified to enable application upgrades and database/hardware/OS migrations without impacting business operations. GoldenGate is certified for Siebel CRM, Communications Billing and Revenue Management, and JD Edwards for supporting zero-downtime upgrades to the latest application version. GoldenGate synchronizes a parallel, upgraded system with the old version in real time, thus enabling continuous operations during the process.
    Oracle GoldenGate is also certified for minimal-downtime database migrations for Oracle E-Business Suite and other key applications. GoldenGate’s solution also minimizes risk by offering a failback option after the switchover to the new environment. Furthermore, Oracle GoldenGate’s bidirectional active-active data replication is certified for Oracle ATG Web Commerce to enable geographic load balancing and high availability for ATG customers. To enable better business insight, Oracle Data Integration products power Oracle BI Applications with high-performance bulk and real-time data integration. Oracle Data Integrator (ODI) is embedded in Oracle BI Applications version 11.1.1.7.1 and helps to integrate data end-to-end across the full BI Applications architecture, supporting capabilities such as data lineage, which helps business users trace reports back to their sources. ODI is integrated with Oracle GoldenGate and gives Oracle BI Applications customers the option to use real-time transactional data in analytics, and to do so non-intrusively. By using Oracle GoldenGate with the latest release of Oracle BI Applications, organizations not only leverage fresh data in analytics, but also eliminate the need for an ETL batch window and minimize the impact on OLTP systems. You can learn more about the latest 12c release of the Oracle Data Integration products in our upcoming launch webcast, and access the application-specific free resources in the new Data Integration for Oracle Applications Resource Center.

    Read the article

  • Windows for IoT, continued

    - by Valter Minute
    Originally posted on: http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2014/08/05/windows-for-iot-continued.aspx I received a lot of interesting feedback on my previous blog post and I tried to find some time to do some additional tests. Bert Kleinschmidt pointed out that pins 2, 3 and 10 of the Galileo are connected directly to the SoC, while pin 13, the one used by the sample sketch, is controlled via an I2C I/O expander. I changed my code to use pin 2 instead of 13 (just changing the variable assignment at the beginning of the code) and latency was greatly reduced. Now each pulse lasts for 1.44ms, 44% more than the expected time, but way better than the result we got using pin 13. I also used SetThreadPriority to increase the priority of the thread that was running the sketch to THREAD_PRIORITY_HIGHEST, but that didn't change the results. When I was using the I2C-controlled pin I tried the same and the timings got much worse (increasing more than 10 times), so I did not comment on that part, wanting to investigate the issue in a bit more detail. It seems that increasing the priority of the application thread negatively impacts the I2C communication. I also tried the Linux-based implementation (using a different Galileo board, since the one provided by MS seems to use a different firmware), and the results of running the sample blink sketch, modified to use pin 2 and blink the led for 1ms, are similar to those we got on the same board running Windows. Here the difference between expected time and measured time is worse, getting around 3.2ms instead of 1ms (320% compared to 150% using Windows, but far from the 100.1% we got with the 8-bit Arduino). Neither system was under load during the test; loading some applications that use part of the CPU time would probably make those timings even less reliable, but I think that those numbers are enough to draw some conclusions. It may not be worth running a full OS if what you need is Arduino compatibility. The Arduino UNO is probably the best Arduino you can find to perform this kind of development. The Galileo, running the Linux-based stack or running Windows for IoT, is targeted to be a platform for "Internet of Things" devices, whatever that means. At the moment I don't see the "I" part of IoT. We have low-level interfaces (SPI, I2C, the GPIO pins) that can be used to connect sensors, but the support for connectivity is limited, and the amount of work required to deliver some data to the cloud (using a secure HTTP request or a message queuing system like AMQP or MQTT) is still big, and the rich OS underneath does not seem to provide any help doing that. Why should I have to use sockets and not be able to access all the high-level connectivity features we have on "full" Windows? I know that it's possible to use some third-party libraries, try to build them using the Windows For IoT SDK etc., but this means re-inventing the wheel every time and can also lead to some IP concerns if used for products meant to be closed-source. I hope that MS and Intel (and others) will focus less on the "coolness" of running (some) Arduino sketches and more on providing a better platform to people that really want to design devices that leverage internet connectivity and the cloud processing power to deliver better products and services. Providing a reliable set of connectivity services would be a great start. Providing support for .NET would be even better, leaving native code available for hardware access etc.
    I know that those components may require additional storage and memory etc., so making the OS componentizable (or, at least, providing a way to install additional components) would be a great way to let developers pick the parts of the system they need to develop their solution, knowing that they will integrate well together. I can understand that the success of the Arduino and the Raspberry Pi* may have attracted the attention of marketing departments worldwide, and almost any new development board these days is promoted as "XXX response to Arduino" or "YYYY alternative to Raspberry Pi", but this is misleading and prevents companies from focusing on how to deliver good products and how to integrate "IoT" features with their existing offering to provide, in the end, a better product or service to their customers. Marketing is important, but can't decide the key features of a product (the OS) that is going to be used to develop full products for end customers, integrating it with hardware and application software. I really like the "hackable" nature of open-source devices and like to see that companies are getting more and more open in releasing information, providing "hackable" devices and supporting developers with documentation, good samples etc. On the other hand, being able to run a sketch designed for an 8-bit microcontroller on a full-featured application processor may sound cool and like an easy upgrade path for people that just experimented with sensors etc. on Arduino, but it's not, in my humble opinion, the main path to follow for people who want to deliver real products.   *Shameless self-promotion: if you are looking for a good book in Italian about the Raspberry Pi, try mine: http://www.amazon.it/Raspberry-Pi-alluso-Digital-LifeStyle-ebook/dp/B00GYY3OKO

    Read the article

  • UppercuT v1.0 and 1.1 – Linux (Mono), Multi-targeting, SemVer, Nitriq and Obfuscation, oh my!

    - by Robz / Fervent Coder
    Recently UppercuT (UC) quietly released version 1 (in August). I’m pretty happy with where we are, although I think it’s a few months later than I originally planned. I’m glad I held it back; it gave me some more time to think about some things a little more and also the opportunity to receive a patch for running builds with UC on Linux. We also released v1.1 very recently (December). UppercuT v1 Builds On Linux Perhaps the most significant change to UC going to v1 is that it now supports builds on Linux using Mono! This is thanks mostly to Svein Ackenhausen for the patches and for working with me on getting it all working while not breaking the Windows builds!  This means you can use Mono on Windows or Linux. Notice the shell files to execute with Linux that come as part of UC now. Multi-Targeting Perhaps one of the hardest things to do that requires an automated build is multi-targeting. At v1 this is early, and possibly prone to some issues, but available.  We believe in making everything stupid simple, so it’s as simple as adding a comma to the microsoft.framework property, i.e. “net-3.5, net-4.0”, to suddenly produce both framework builds. When you build, this is what you get (if you meet each framework’s requirements): At this time you have to let UC override the build location (as it does by default) or this will not work.  Semantic Versioning By now many of you have been using UppercuT for a while and have watched how we have done versioning. Many of you who use git already know we put the revision hash in the informational/product version as the last octet. At v1, UppercuT has adopted the semantic versioning scheme. What does that mean? This is a short read, but a good one: http://SemVer.org SemVer (Semantic Versioning) really uses versioning for what it was meant for. You have three octets, Major.Minor.Patch, as in 1.1.0.  UC will use three different versioning concepts, one for the assembly version, one for the file version, and one for the product version. All versions - The first three octets of the version are owned by SemVer. Major.Minor.Patch i.e.: 1.1.0 Assembly Version - The assembly version follows SemVer most closely; the last digit is always 0. Major.Minor.Patch.0 i.e.: 1.1.0.0 File Version - The file version uses the build number as the last digit. Major.Minor.Patch.Build i.e.: 1.1.0.2650 Product/Informational Version - The last octet of your product/informational version is the source control revision/hash. Major.Minor.Patch.RevisionOrHash i.e. (TFS/SVN): 1.1.0.235 i.e. (Git/HG): 1.1.0.a45ace4346adef0 SemVer is not on by default; the passive versioning scheme is still in effect (see the short sketch at the end of this post for how the three version strings are composed). Notice that version.use_semanticversioning has been added to the UppercuT.config file (and version.patch in support of the third octet): Gems Support Gems support was added at v1. This will probably be deprecated at some point once there is an announced sunset for Nu v1. Application gems may keep it around, since there is no alternative for that yet though (CoApp would be a possible replacement). Nitriq Support Nitriq is a code analysis tool like NDepend. It’s built by Mr. Jon von Gillern. It uses the LINQ query language, so you can use a familiar idiom when analyzing your code base. It’s a pretty awesome tool that has a free version for those looking to do code analysis! To use Nitriq with UC, you are going to need the console edition.  To take advantage of Nitriq, you just need to update the location of Nitriq in the config: Then add the Nitriq project files at the root of your source.
Please refer to the Nitriq documentation on how these are created. UppercuT v1.1 Obfuscation One thing I started looking into was an easy way to obfuscate my code. I came across EazFuscator, which is both free and awesome. Plus the GUI for it is super simple to use. How do you make obfuscation even easier? Make it a convention and a configurable property in the UC config file! And the code gets obfuscated! Closing Definitely get out and look at the new release. It contains lots of chocolaty (sp?) goodness. And remember, the upgrade path is almost as simple as drag and drop!
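    To make the versioning scheme described earlier in this post concrete, here is a small stand-alone C# sketch that composes the three version strings the way the post lays them out. It is purely illustrative and not UppercuT's actual implementation; the build number and revision hash are hypothetical values that would normally come from your CI server and your source control system.
    using System;

    // Illustrative only -- not UppercuT's implementation of versioning.
    class SemVerDemo
    {
        static void Main()
        {
            // SemVer-owned octets: Major.Minor.Patch
            int major = 1, minor = 1, patch = 0;

            // Hypothetical inputs normally supplied by the CI server and source control.
            int buildNumber = 2650;
            string revisionHash = "a45ace4346adef0";

            // Assembly version: last digit is always 0.
            string assemblyVersion = string.Format("{0}.{1}.{2}.0", major, minor, patch);

            // File version: the build number occupies the last digit.
            string fileVersion = string.Format("{0}.{1}.{2}.{3}", major, minor, patch, buildNumber);

            // Product/informational version: source control revision or hash as the last octet.
            string productVersion = string.Format("{0}.{1}.{2}.{3}", major, minor, patch, revisionHash);

            Console.WriteLine(assemblyVersion);  // 1.1.0.0
            Console.WriteLine(fileVersion);      // 1.1.0.2650
            Console.WriteLine(productVersion);   // 1.1.0.a45ace4346adef0
        }
    }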

    Read the article

  • #OOW 2012 @PARIS...talking Oracle and Clouds, and Optimized Datacenter

    - by Eric Bezille
    For those of you who want to get the most out of Oracle technologies to evolve your IT to the Next Wave, I encourage you to register for the upcoming Oracle Optimized Datacenter event that will take place in Paris on November 28th. You will get the opportunity to exchange with Oracle experts and customers who have successfully evolved their IT by leveraging Oracle technologies. You will also get the latest news on some of the Oracle systems announcements made during OOW 2012. During this event we will give an update on Oracle and Clouds, from private to public and hybrid models. So in preparing this session, I thought it was a good starting point to take stock of Cloud Computing in France, and of CIO requirements in particular. Starting in 2009 with the first Cloud Camp in Paris, the market has evolved, but the basics are still the same: think hybrid. From Traditional IT to Clouds One size doesn't fit all, and for big companies that already have an IT estate in place, there will be parts eligible for external (public) cloud, and parts that will be required to stay inside the firewalls, so the ability to integrate both sides is key.  Nonetheless, one of the major impacts of the Cloud Computing trend on IT, reported by Forrester, is the pressure it puts on CIOs to evolve towards the same model that end-users are now used to in their day-to-day life, where self-service and flexibility are paramount. This is what is driving IT to transform itself toward "a Global Service Provider", or for some toward "IT "is" the Business" (see: Gartner Identifies Four Futures for IT and CIO), and for both models toward a Private Cloud Service Provider. In this journey, there is still a big difference between most existing external Clouds and a firm's IT: the number of applications that a CIO has to manage. Most cloud providers today are overly specialized, but at the end of the day, there are really few business processes that rely on only one application. So CIOs have to combine everything together, external and internal. And for the internal parts that they will have to evolve to a Private Cloud, the scope can be very large. This will often require CIOs to evolve from their traditional approach to more disruptive ones; the time has come to introduce new standards and processes if they want to succeed. So let's have a look at the different Cloud models, what type of users they are addressing, what value they bring and, most importantly, what needs to be done by the Cloud Provider, and what is left to the user. IaaS, PaaS, SaaS: what's provided and what needs to be done First of all the Cloud Provider will have to provide all the infrastructure needed to deliver the service. And the more value IT will want to provide, the more IT will have to deliver and integrate: from disks to applications. As we can see in the above picture, providing pure IaaS leaves a lot for the end-user to cover, which is why the end-users targeted by this Cloud Service are IT people. If you want to bring more value to developers, you need to provide them a development platform ready to use, which is what PaaS stands for, by providing not only the processing power, storage and OS, but also the Database and Middleware platform. SaaS being the last mile of the Cloud, providing an application ready to use by business users, the remaining part for the end-users being configuring and specifying the application for their specific usage.
    In addition to that, there are common challenges encompassing all types of Cloud Services: Security: covering all aspects, not only user management but also data flows and data privacy Chargeback: measuring what is used and by whom Application management: providing capabilities not only to deploy, but also to upgrade, from the OS for IaaS, the Database and Middleware for PaaS, to a full Business Application for SaaS Scalability: ability to evolve ALL the components of the Cloud Provider stack as needed Availability: ability to cover “always on” requirements Efficiency: providing an infrastructure that leverages shared resources in an efficient way and still complies with SLAs (performance, availability, scalability, and ability to evolve) Automation: providing the orchestration of ALL the components across the whole service life-cycle (deployment, growth & shrink (elasticity), upgrades,...) Management: providing monitoring, configuration and self-service up to the end-users Oracle Strategy and Clouds For CIOs, succeeding in their Private Cloud implementation means encompassing all those aspects for each component life-cycle that they have selected to build their Cloud. That’s where a multi-vendor layered approach comes up short in terms of efficiency. That’s the reason why Oracle focuses on taking care of all those aspects directly at the Engineering level, to truly provide efficient Cloud Services solutions for IaaS, PaaS and SaaS. We are going as far as embedding software functions in hardware (storage, processor level,...) to ensure the best SLA with the highest efficiency. The beauty of it, as we rely on standards, is that the Oracle components that you are running today in-house are exactly the same ones that we are using to build Clouds, bringing you flexibility, reversibility and a fast path to adoption. With Oracle Engineered Systems (Exadata, Exalogic & SPARC SuperCluster, more specifically, when talking about Cloud), we are delivering all those components, hardware and software, already engineered together at the Oracle factory, with a single pane of glass for the management of ALL the components through Oracle Enterprise Manager, and with high availability, scalability and the ability to evolve by design. To give you a feel for what that brings just in terms of implementation project timeline, for example with Oracle SPARC SuperCluster, we have a consistent track record of having the system plugged into an existing Datacenter and ready in a week. This includes Oracle Database, OS, virtualization, Database Storage (Exadata Storage Cells in this case), Application Storage, and all network configuration. This strategy enables CIOs to very quickly build Cloud Services, taking out not only the complexity of integrating everything together but also the automation and evolution complexity and cost. I invite you to discuss all those aspects in regard to your particular context face to face on November 28th.

    Read the article

  • Upgrading to Code Based Migrations EF 4.3.1 with Connector/Net 6.6

    - by GABMARTINEZ
    Entity Framework 4.3.1 includes a new feature called Code First Migrations.  We are adding support for this feature in our upcoming 6.6 release of Connector/Net.  In this walk-through we'll see the workflow of code-based migrations when you have an existing application and you would like to upgrade to this EF 4.3.1 version and use this approach, so you can keep track of the changes that you make to your database.   The first thing we need to do is add the new Entity Framework 4.3.1 package to our application. This should be done via the NuGet package manager.  You can read more about why EF is not part of the .NET framework here. Adding EF 4.3.1 to our existing application  Inside VS 2010 go to Tools -> Library Package Manager -> Package Manager Console; this will open the PowerShell host window where we can work with all the EF commands. In order to install this library into your existing application you should type Install-Package EntityFramework. This will make some changes to your application, so let's check them. In your .config file you'll see a <configSections> element which contains the EntityFramework version you have, and an <entityFramework> section was also added, as shown below. This section is by default configured to use SQL Express, which won't be necessary in this case, so you can comment it out or leave it empty. Also please make sure you're using the Connector/Net 6.6.x version, which is the one that has this support, as shown in the previous image. At this point we face one issue: in order to be able to work with migrations we need the __MigrationHistory table, which we don't have yet since our database was created with an older version. This table is used to keep track of the changes in our model, so we need to get it into our existing database. Getting a Migration-History table into an existing database The first thing we need to do to enable migrations in our existing application is to create our configuration class, which will set up the MySqlClient provider as our SQL generator. So we have to add it with the following code:
    using System.Data.Entity.Migrations;   // add this at the top of your cs file
    public class Configuration : DbMigrationsConfiguration<NameOfYourDbContext>   // make sure to use the name of your existing DbContext
    {
        public Configuration()
        {
            // Set automatic migrations to false since we'll be applying the migrations manually in this case.
            this.AutomaticMigrationsEnabled = false;
            SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());
        }
    }
    This code will set up the configuration that we'll be using when executing all the migrations for our application. Once we have done this we can build our application to check that everything is fine. Creating our Initial Migration Now let's add our initial migration. In the Package Manager Console, execute "add-migration InitialCreate"; you can use any other name, but I like to set this as our initial create for future reference. After we run this command, some changes are made in our application: A new Migrations folder is created. A new migration class called InitialCreate is added, which in most cases should have empty Up and Down methods as long as your database is up to date with your model. Since all your entities already exist, delete any duplicated code that creates an entity which already exists in your database, if there is any. I find this easier when you don't have any pending updates to make to your database.
    Now we have our empty migration that will make no changes to our database and represents how things are at the beginning of our migrations.  Finally, let's create our __MigrationHistory table. Optionally you can add SQL code to delete the EdmMetadata table, which is not needed anymore.
    public override void Up()
    {
        // Just make sure that you used a 4.1 or later version
        Sql("DROP TABLE EdmMetadata");
    }
    From our Package Manager Console let's type: Update-Database. If you would like to see the operations made by each Update-Database command, you can add the -Verbose flag after Update-Database. This will make two important changes.  First, it will execute the Up method in the initial migration, which makes no changes to the database. Second, and very important, it will create the __MigrationHistory table necessary to keep track of your changes. The next time you make a change to your database it will compare the current model to the one stored in the Model column of this table. Conclusion The important point of this walk-through is that we must create our initial migration before we start making any changes to our model. This way we'll be adding the necessary __MigrationHistory table to our existing database, so we can keep our database up to date with all the changes we make in our context model using migrations. Hope you have found this information useful. Please let us know if you have any questions or comments; also please check our forums here, where we keep answering questions in general for the community.  Happy MySQL/Net Coding!
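    For reference, here is a sketch of how the two pieces of code from this walk-through fit together as one compilable listing. It makes a couple of assumptions: "MyContext" is a stand-in for the name of your existing DbContext class, and the InitialCreate class is roughly what "add-migration InitialCreate" generates once any duplicated entity-creation code has been removed.
    using System.Data.Entity;
    using System.Data.Entity.Migrations;
    using MySql.Data.Entity;

    // Stand-in for your existing context class; use your real DbContext instead.
    public class MyContext : DbContext { }

    // Migrations configuration: points EF at the MySQL SQL generator from Connector/Net 6.6.
    public class Configuration : DbMigrationsConfiguration<MyContext>
    {
        public Configuration()
        {
            AutomaticMigrationsEnabled = false;   // migrations are applied manually in this walk-through
            SetSqlGenerator("MySql.Data.MySqlClient", new MySqlMigrationSqlGenerator());
        }
    }

    // Skeleton of the migration generated by "add-migration InitialCreate".
    // Up and Down stay empty because the database already matches the model;
    // the optional Sql() call drops the EdmMetadata table that is no longer needed.
    public partial class InitialCreate : DbMigration
    {
        public override void Up()
        {
            Sql("DROP TABLE EdmMetadata");
        }

        public override void Down()
        {
        }
    }
    From here, running Update-Database executes the (otherwise empty) Up method and creates the __MigrationHistory table, exactly as described above.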

    Read the article

  • Hiring New IT Employees versus Promoting Internally for IT Positions

    Recently I was asked my opinion regarding the hiring of IT professionals, specifically the option of hiring new IT employees versus promoting internally for IT positions. After thinking a little more about this staffing question, I think my answer is that it truly depends on the situation. However, in most cases I would side with promoting internally. The key factors in this decision should be based on a company/department’s current values, culture, attitude, and existing priorities.  For example, if a company values retaining all of its hard-earned business knowledge, then it would tend to promote existing employees internally over hiring a new employee. Moreover, the company will have to pay to train an existing employee to learn a new technology, and the learning curve for some technologies can be very steep. Conversely, if a company values new technologies and technical proficiency over business knowledge, then it would tend to hire new employees because they may already have experience with a technology that the company is planning on using. In this scenario, the company would have to take on the additional overhead of allowing a new employee to learn how the business operates prior to them being fully effective. To illustrate my points above, let us look at a contractor that builds in-ground pools, for example.  He has the option to hire employees that are very strong but use small shovels to dig, or employees weak in physical strength but who use large shovels to dig. Which employee should the contractor use to dig a hole for a new in-ground pool? If we compare the possible candidates for this job we will find that they are very similar to promoting someone internally versus a new hire. The first example represents the existing workers, who are very strong in their understanding of how the business operates and why things are done in a specific manner. However, this employee could potentially be weaker than an outsider pertaining to specific technologies and would need some time to build their technical prowess for a new position, much like the strong worker upgrading their shovel in order to remove more dirt at once when digging. The other employee is very similar to hiring a new person who may already have the large shovel but will need to increase their strength in order to use the shovel properly and efficiently, so that they can move a maximum amount of dirt in a minimal amount of time. This can be compared to a new employee learning how a business operates before they can be fully functional and integrated into the company/department. Another key factor in this dilemma pertains to existing employees and their passion for their work, their ability to accept new responsibility when given, and their willingness to take on responsibilities when they see a need in the business. As much as possible should be considered in this decision, down to the mood of the team, the quality of existing staff, the learning curve for both technology and business, and the potential side effects on the existing staff.  In addition, there are many more considerations based on the current team/department/company culture and mood. There are several factors that need to be considered when promoting an individual or hiring new blood for a team. They both can provide great benefits as well as create controversy within a group.
    Personally, I think staffing, especially in the IT world, is like building a large-scale system in that all of the components and modules must fit together and perform as one cohesive system, in the same way a team must come together using their individually acquired skills so that they can work as one team.  If a module is out of place or is nonexistent, then the rest of the team will suffer until all of its issues are addressed and resolved. Benefits of Promoting Internally Internal promotions give employees a reason to constantly upgrade their technology, business, and communication skills if they want to further their career Employees can control their own destiny based on personal desires Employees already know how the business operates Companies can save money by promoting internally because the initial overhead of allowing new hires to learn how a company operates is very expensive Newly promoted employees can assist in training their replacements while transitioning to their new role within a company Existing employees already have a proven track record in regards to fitting in with the business culture; this is always an unknown with new hires Benefits of a New Hire New employees can energize and excite existing employees New employees can bring new ideas and advancements in technology New employees can offer a different perspective on existing issues based on their past experience As you can see, the decision to promote an existing employee from within a company versus hiring a new person should be based on several factors that should ultimately place the business in the best possible situation for the immediate and long-term future. How would you handle this situation? Would you hire a new employee or promote from within?

    Read the article

  • Top Questions and Answers for Plugging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted a comprehensive online forum that shared a path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions.  Below are highlights of the most informative: DBaaS requires a lengthy and careful design effort. What are the minimum requirements for setting up a scaled-down environment to test it out? You should have an OEM 12c environment for DBaaS administration and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server, or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust / real-world configuration with Zones and Pools or DR capabilities, for example. How does this benefit companies having their own data center? This allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility and reduced risk. The benefits to the (internal) consumers of services are much faster provisioning and faster response to changes in business requirements. From a deployment perspective, is DBaaS solely the DBA's job? The best deployment model enables the DBA (or end-user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, or sysadmin teams). The service is created either via a self-service portal or by the DBA. The purpose of self-service seems to be that the end user does not rely on the DBA. I just need to give him a template. He decides how much AMM he needs. Why should I set it one by one? That doesn't seem to be the purpose of self-service. Most customers we have worked with define a standardized service catalog, with a few (2 to 5) different classes of service. For each of these classes, there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once. Each user then simply selects from the options offered in the catalog.  Looking at the DBaaS service definition, it seems to be no different from a service definition provided by a well-defined DBA team. Why do you attribute it to DBaaS? There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter and their Service Definition will evolve less, but they still might need to add capabilities such as Self Service and Metering/Chargeback. Other organizations are still operating in highly siloed environments with little automation, and their formal Service Definition (if they have one) will be a lot less mature today. Therefore their future-state DBaaS will look a lot different from their current state, as will their Service Definition. How does Database as a Service impact or help with "Click to Compute" or deploying a "Database in cloud infrastructure"? DBaaS enables Click to Compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database, and consolidation using virtualization in infrastructure cloud.
    As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant and higher isolation using infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models. How exactly is DBaaS different from a traditional database? Storage/OS/DB all together to 'transparently' provide service to applications? Will there be cross-database access by application/user? Some key differences are: 1) The services run on a shared platform. 2) The services can be rapidly provisioned (< 15 minutes). 3) The services are dynamic and can be relocated, grown, or shrunk rapidly and without disruption, as needed to meet business needs. 4) The user is able to provision the services directly from a standardized service catalog. With 24x7x365 databases it's difficult to find off-peak hours to do basic admin tasks such as gathering stats, running backups, and batch jobs. How does pluggable database handle this and the different needs/patching downtime of the applications the databases might be serving? You can gather stats in Oracle Multitenant the same way you had been in regular databases. Regarding patching/upgrading, Oracle Multitenant makes patching/upgrades very efficient in that you can pre-provision a new-version/patched multitenant database in a different ORACLE_HOME and then unplug a PDB from its CDB and plug it into the newer/patched CDB in seconds.  Thanks for all the great questions!  If you'd like to learn more and missed the online forum, you can watch it on demand here.

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm the most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007 while I was still FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Besides this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... still not very helpful. In general when people don't fully understand a technology or a concept, they tend to find some shortcuts, either correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. And the shortcut was "Oracle Coherence Cache helps to improve Performance". Excellent marketing slogan... but still not very meaningful. By chance I was able to get away quickly from that group in July 2007* at Thames Valley Park (UK), after I attended one of the most interesting workshops in my 10-year career at Oracle, delivered by Brian Oliver. The biggest mistake I made was to assume that performance improvement with Coherence was related to the response time. Which can be considered legitimate at that time, because after all caches help to reduce latency on cached data access, and hence reduce the response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time you have to partially re-write your application in order to work with the cache. As a result, the expected benefit vanishes... so, not very useful then? The key mistake I made was my perception or obsession about how performance improvement should be driven, but I strongly believe this is still a common problem for most developers. In fact we all know that the performance of a system is generally presented by the Capacity (or Throughput), with the 2 important dimensions Speed (response-time) and Volume (load): Capacity (TPS) = Volume (T) / Speed (S) To increase the Capacity, we can either reduce the Speed (in terms of response-time), or increase the Volume. However we tend to only focus on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management because there's a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, the reality proves that IT is never as ideal as we assume... The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever because... the Speed can be influenced by the Volume. For every system, we have a performance illustration as follows: In all traditional systems, the increase in Volume (transactions) will also increase the Speed (response-time) at some point. The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to conceive that parsing 200 entries will require double the execution time compared to 100 entries.
    If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce network latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is designing applications which can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is actually not just about having an application running as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is in general combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from the programmatic angle (fewer network hops to complete a task) and/or from a hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the Distributed Cache, because it can guarantee: - Constant data object access time, independently of the number of objects and the Coherence cluster size - Data object distribution by affinity for in-memory data grouping - In-place data processing for parallel execution To summarize, Oracle Coherence is indeed useful to improve your application performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity with Extreme Load while keeping consistent Speed. In the future I will keep adding new blog entries around this topic, with some sample code and experience sharing that I have captured over the last few years. In the meantime, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence first, then you can start playing with the product through our tutorial. Have fun!
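    As a rough single-process analogy of the scaling idea above (and explicitly not Coherence code), the following C# sketch contrasts a sequential loop, whose elapsed time grows with the data volume, with the same per-entry work partitioned across workers; the latter is the effect that Coherence's in-place, parallel processing aims for across cluster members.
    using System;
    using System.Diagnostics;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;

    // Single-process analogy of linear vs. partitioned processing -- not Coherence code.
    class ScalingDemo
    {
        // Stand-in for the per-entry business logic inside the while-loop example.
        static void Process(int entry)
        {
            Thread.SpinWait(200000);
        }

        static void Main()
        {
            int[] entries = Enumerable.Range(0, 200).ToArray();
            var watch = Stopwatch.StartNew();

            // Sequential: execution time grows with the Volume (200 entries take ~2x the time of 100).
            foreach (int e in entries) Process(e);
            Console.WriteLine("Sequential : " + watch.ElapsedMilliseconds + " ms");

            watch.Restart();

            // Partitioned: the same work is spread across the available cores, so adding
            // workers keeps the elapsed time roughly flat as the Volume grows.
            Parallel.ForEach(entries, Process);
            Console.WriteLine("Partitioned: " + watch.ElapsedMilliseconds + " ms");
        }
    }
    In a real XTP design the partitioning happens across the data grid itself, with data affinity keeping related entries on the same member so the processing stays in place next to the data.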

    Read the article

  • Content in Context: The right medicine for your business applications

    - by Lance Shaw
    For many of you, your companies have already invested in a number of applications that are critical to the way your business is run. HR, Payroll, Legal, Accounts Payable, and while they might need an upgrade in some cases, they are all there and handling the lifeblood of your business. But are they really running as efficiently as they could be? For many companies, the answer is no. The problem has to do with the important information caught up within documents and paper. It’s everywhere except where it truly needs to be – readily available right within the context of the application itself. When the right information cannot be easily found, business processes suffer significantly. The importance of this struck me recently when I went to meet my new doctor and get a routine physical. Walking into the office lobby, I couldn't help but notice rows and rows of manila folders in racks from floor to ceiling, filled with documents and sensitive, personal information about various patients like myself.  As I looked at all that paper and all that history, two things immediately popped into my head.  “How do they find anything?” and then the even more alarming, “So much for information security!” It sure looked to me like all those documents could be accessed by anyone with a key to the building. Now the truth is that the offices of many general practitioners look like this all over the United States and the world.  But it had me thinking, is the same thing going on in just about any company around the world, involving a wide variety of important business processes? Probably so. Think about all the various processes going on in your company right now. Invoice payments are being processed through Accounts Payable, contracts are being reviewed by Procurement, and Human Resources is reviewing job candidate submissions and doing background checks. All of these processes and many more like them rely on access to forms and documents, whether they are paper or digital. Now consider that it is estimated that employees spend nearly 9 hours a week searching for information and not finding it. That is a lot of very well paid employees, spending more than one day per week not doing their regular job while they search for or re-create what already exists. Back in the doctor’s office, I saw this trend exemplified as well. First, I had to fill out a new patient form, even though my previous doctor had transferred my records over months earlier. After filling out the form, I was later introduced to my new doctor who then interviewed me and asked me the exact same questions that I had answered on the form. I understand that there is value in the interview process and it was great to meet my new doctor, but this simple process could have been so much more efficient if the information already on file could have been brought directly together with the new patient information I had provided. Instead of having a highly paid medical professional re-enter the same information into the records database, the form I filled out could have been immediately scanned into the system, associated with my previous information, discrepancies identified, and the entire process streamlined significantly.
We won’t solve the health records management issues that exist in the United States in this blog post, but this example illustrates how the automation of information capture and classification can eliminate a lot of repetitive and costly human entry and re-creation, even in a simple process like new patient on-boarding. In a similar fashion, by taking a fresh look at the various processes in place today in your organization, you can likely spot points along the way where automating the capture and access to the right information could be significantly improved. As you evaluate how content-process flows through your organization, take a look at how departments and regions share information between the applications they are using. Business applications are often implemented on an individual department basis to solve specific problems but a holistic approach to overall information management is not taken at the same time. The end result over the years is disparate applications with separate information repositories and in many cases these contain duplicate information, or worse, slightly different versions of the same information. This is where Oracle WebCenter Content comes into the story. More and more companies are realizing that they can significantly improve their existing application processes by automating the capture of paper, forms and other content. This makes the right information immediately accessible in the context of the business process and making the same information accessible across departmental systems which has helped many organizations realize significant cost savings. Here on the Oracle WebCenter team, one of our primary goals is to help customers find new ways to be more effective, more cost-efficient and manage information as effectively as possible. We have a series of three webcasts occurring over the next few weeks that are focused on the integration of enterprise content management within the context of business applications. We hope you will join us for one or all three and that you will find them informative. Click here to learn more about these sessions and to register for them. There are many aspects of information management to consider as you look at integrating content management within your business applications. We've barely scratched the surface here but look for upcoming blog posts where we will discuss more specifics on the value of delivering documents, forms and images directly within applications like Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards Enterprise One, Siebel CRM and many others. What do you think?  Are your important business processes as healthy as they can be?  Do you have any insights to share on the value of delivering content directly within critical business processes? Please post a comment and let us know the value you have realized, the lessons learned and what specific areas you are interested in.

    Read the article

  • Tech Talk: Managing Cloud Integration

    - by Tanu Sood
    Cloud computing solutions are widely hailed as a way to reduce capital expenditures, yet organizations are realizing they also need to consider all of the nuances of integrating cloud applications with existing information systems. Cloud integration, after all, has a direct impact on your costs, maintenance and upgrade efforts. Catch this conversation on Tech Talk with Oracle Vice President, Amit Zavery, to understand how Oracle Fusion Middleware provides a simple and consistent method for maintaining integration interfaces across disparate systems, spanning cloud and on-premise applications. Simplify your IT infrastructure and seamlessly manage data and application integration across your applications with Oracle solutions. For other Fusion Middleware talks, subscribe to Fusion Middleware Radio today and visit us on oracle.com. Photo courtesy: www.cloudtweaks.com

    Read the article

  • 2012 – The End Of The World Review

    - by Tim Murphy
    The end of the world must be coming.  Not because the Mayan calendar says so, but because Microsoft is innovating more than Apple.  It has been a crazy year, with pundits declaring not that the end of the world is coming, but that the end of Microsoft is coming.  Let’s take a look at what 2012 has brought us. The beginning of the year is a blur.  I managed to get to TechEd in June, which was the first time that I got to take a deep dive into Windows 8 and many other things that had been announced in 2011.  The promise I saw in these products was really encouraging.  The thought of being able to run Windows 8 from a thumb drive or have Hyper-V native to the OS told me that at least for developers good things were coming. I finally got my feet wet with Windows 8 with the developer preview just prior to the RTM.  While the initial experience was a bit of a culture shock I quickly grew to love it.  The media still seems to hold little love for the “reimagined” platform, but I think that once people spend some time with it they will enjoy the experience, and what the FUD mongers say will fade into the background.  With the launch of the OS we finally got a look at the Surface.  I think this is a bold entry into the tablet market.  While I wish it was a little more affordable, I am already starting to see them in the wild being used by non-techies. I was waiting for Windows Phone 8 at least as much as Windows 8, probably more.  The new hardware, better marketing and new OS features I think are going to finally push us to the point of having a real presence in the smartphone market.  I am seeing a number of iPhone users picking up a Nokia Lumia 920 and getting rid of their brand new iPhone 5.  The only real debacle that I saw around the launch was when they held back the SDK from general developers. Shortly after the launch events came Build 2012.  I was extremely disappointed that I didn’t make it to this year’s Build.  Even if they weren’t handing out Surface and Lumia devices, I think the atmosphere and content were something that really needed to be experienced in person.  Hopefully there will be a Build next year and its schedule will be announced soon.  As you would expect, Windows 8 and Windows Phone 8 development were the mainstay of the conference, but improvements in Azure also played a key role.  This movement of services to the cloud will continue and we need to understand where it best fits into the solutions we build. Lower on the radar this year were Office 2013, SQL Server 2012, and Windows Server 2012.  Their glory stolen by the consumer OS and hardware announcements, these new releases are no less important.  Companies will see significant improvements in performance and capabilities if they upgrade.  At TechEd they had shown some of the new features of Windows Server 2012 around hardware integration and Hyper-V performance which absolutely blew me away.  It is our job to bring these important improvements to our company’s attention so that they can be leveraged. Personally, the consulting business in 2012 was the busiest it has been in a long time.  More companies were ready to attack new projects after several years of putting them on the back burner.  I also worked to bring back momentum to the Chicago Information Technology Architects Group.  Both the community and clients are excited about the new technologies that have come out in 2012 and now it is time to deliver. What does 2013 have in store?  I don’t see it being quite as exciting as 2012.  
    Microsoft will be releasing the Surface Pro in January and it seems that we will see more frequent OS updates for Windows.  There are rumors that we may see a Surface phone in 2013.  It has also been announced that there will finally be a rework of the Xbox next fall.  The new year will also be a time for us in the development community to take advantage of these new tools and devices.  After all, it is what we build on top of these platforms that will attract more consumers and corporations to using them. Just as I am 99.999% sure that the world is not going to end this year, I am also sure that Microsoft will move on and that most of this negative backlash from the media is actually fear and jealousy.  In the end I think we have a promising year ahead of us. del.icio.us Tags: Microsoft,Pundits,Mayans,Windows 8,Windows Phone 8,Surface

    Read the article

  • "domain crashed" when creating new Xen instance

    - by user47650
    I have downloaded a Xen virtual machine image from the appscale project, and I am trying to start it up. However once I run the command; xm create -c -f xen.conf The instance immediately crashes and provides no console output. however it produces logs that I have posted below. but this is the error; [2011-03-01 12:34:03 xend.XendDomainInfo 3580] WARNING (XendDomainInfo:1178) Domain has crashed: name=appscale-1.4b id=10. I have managed to mount the root.img file locally and verify that it is actually an ext3 file system. I am running Xen 3.0.3 that is a stock RPM from the CentOS 5 repos; # rpm -qa | grep -i xen xen-libs-3.0.3-105.el5_5.5 xen-3.0.3-105.el5_5.5 xen-libs-3.0.3-105.el5_5.5 kernel-xen-2.6.18-194.32.1.el5 any suggestions on how to proceed with troubleshooting? (i am a newbie to Xen) so far I have enabled console logging, but the log file is empty. ==> domain-builder-ng.log <== xc_dom_allocate: cmdline=" ip=:1.2.3.4::::eth0:dhcp root=/dev/sda1 ro xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console", features="" xc_dom_kernel_file: filename="/boot/vmlinuz-2.6.27-7-server" xc_dom_malloc_filemap : 2284 kB xc_dom_ramdisk_file: filename="/boot/initrd.img-2.6.27-7-server" xc_dom_malloc_filemap : 9005 kB xc_dom_boot_xen_init: ver 3.1, caps xen-3.0-x86_64 xen-3.0-x86_32p xc_dom_parse_image: called xc_dom_find_loader: trying ELF-generic loader ... failed xc_dom_find_loader: trying Linux bzImage loader ... xc_dom_malloc : 9875 kB xc_dom_do_gunzip: unzip ok, 0x234bb2 -> 0x9a4de0 OK elf_parse_binary: phdr: paddr=0x200000 memsz=0x447000 elf_parse_binary: phdr: paddr=0x647000 memsz=0xab888 elf_parse_binary: phdr: paddr=0x6f3000 memsz=0x908 elf_parse_binary: phdr: paddr=0x6f4000 memsz=0x1c2f9c elf_parse_binary: memory: 0x200000 -> 0x8b6f9c elf_xen_parse_note: GUEST_OS = "linux" elf_xen_parse_note: GUEST_VERSION = "2.6" elf_xen_parse_note: XEN_VERSION = "xen-3.0" elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000 elf_xen_parse_note: ENTRY = 0xffffffff8071e200 elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff80209000 elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb" elf_xen_parse_note: PAE_MODE = "yes" elf_xen_parse_note: LOADER = "generic" elf_xen_parse_note: unknown xen elf note (0xd) elf_xen_parse_note: SUSPEND_CANCEL = 0x1 elf_xen_parse_note: HV_START_LOW = 0xffff800000000000 elf_xen_parse_note: PADDR_OFFSET = 0x0 elf_xen_addr_calc_check: addresses: virt_base = 0xffffffff80000000 elf_paddr_offset = 0x0 virt_offset = 0xffffffff80000000 virt_kstart = 0xffffffff80200000 virt_kend = 0xffffffff808b6f9c virt_entry = 0xffffffff8071e200 xc_dom_parse_elf_kernel: xen-3.0-x86_64: 0xffffffff80200000 -> 0xffffffff808b6f9c xc_dom_mem_init: mem 1024 MB, pages 0x40000 pages, 4k each xc_dom_mem_init: 0x40000 pages xc_dom_boot_mem_init: called x86_compat: guest xen-3.0-x86_64, address size 64 xc_dom_malloc : 2048 kB ==> xend.log <== [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:957) Dev 0 still active, looping... [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:957) Dev 0 still active, looping... 
[2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:957) Dev 0 still active, looping... [2011-03-01 12:34:02 xend.XendDomainInfo 3580] INFO (XendDomainInfo:957) Dev 0 still active, looping... [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2114) UUID Created: True [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2115) Devices to release: [], domid = 9 [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2127) Releasing PVFB backend devices ... [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:207) XendDomainInfo.create(['domain', ['domid', 9], ['uuid', 'd5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0'], ['vcpus', 1], ['vcpu_avail', 1], ['cpu_cap', 0], ['cpu_weight', 256], ['memory', 1024], ['shadow_memory', 0], ['maxmem', 1024], ['features', ''], ['name', 'appscale-1.4b'], ['on_poweroff', 'destroy'], ['on_reboot', 'restart'], ['on_crash', 'restart'], ['image', ['linux', ['kernel', '/boot/vmlinuz-2.6.27-7-server'], ['ramdisk', '/boot/initrd.img-2.6.27-7-server'], ['ip', ':1.2.3.4::::eth0:dhcp'], ['root', '/dev/sda1 ro'], ['args', 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console']]], ['cpus', []], ['device', ['vif', ['backend', 0], ['script', 'vif-bridge'], ['mac', '00:16:3B:72:10:E4']]], ['device', ['vbd', ['backend', 0], ['dev', 'sda1:disk'], ['uname', 'file:/local/xen/domains/appscale1.4/root.img'], ['mode', 'w']]], ['state', '----c-'], ['shutdown_reason', 'crash'], ['cpu_time', 0.000339131], ['online_vcpus', 1], ['up_time', '0.952092885971'], ['start_time', '1299011639.92'], ['store_mfn', 1169289], ['console_mfn', 1169288]]) [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:329) parseConfig: config is ['domain', ['domid', 9], ['uuid', 'd5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0'], ['vcpus', 1], ['vcpu_avail', 1], ['cpu_cap', 0], ['cpu_weight', 256], ['memory', 1024], ['shadow_memory', 0], ['maxmem', 1024], ['features', ''], ['name', 'appscale-1.4b'], ['on_poweroff', 'destroy'], ['on_reboot', 'restart'], ['on_crash', 'restart'], ['image', ['linux', ['kernel', '/boot/vmlinuz-2.6.27-7-server'], ['ramdisk', '/boot/initrd.img-2.6.27-7-server'], ['ip', ':1.2.3.4::::eth0:dhcp'], ['root', '/dev/sda1 ro'], ['args', 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console']]], ['cpus', []], ['device', ['vif', ['backend', 0], ['script', 'vif-bridge'], ['mac', '00:16:3B:72:10:E4']]], ['device', ['vbd', ['backend', 0], ['dev', 'sda1:disk'], ['uname', 'file:/local/xen/domains/appscale1.4/root.img'], ['mode', 'w']]], ['state', '----c-'], ['shutdown_reason', 'crash'], ['cpu_time', 0.000339131], ['online_vcpus', 1], ['up_time', '0.952092885971'], ['start_time', '1299011639.92'], ['store_mfn', 1169289], ['console_mfn', 1169288]] [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:446) parseConfig: result is {'features': '', 'image': ['linux', ['kernel', '/boot/vmlinuz-2.6.27-7-server'], ['ramdisk', '/boot/initrd.img-2.6.27-7-server'], ['ip', ':1.2.3.4::::eth0:dhcp'], ['root', '/dev/sda1 ro'], ['args', 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console']], 'cpus': [], 'vcpu_avail': 1, 'backend': [], 'uuid': 'd5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0', 'on_reboot': 'restart', 'cpu_weight': 256.0, 'memory': 1024, 'cpu_cap': 0, 'localtime': None, 'timer_mode': None, 'start_time': 1299011639.9200001, 'on_poweroff': 'destroy', 'on_crash': 'restart', 'device': [('vif', ['vif', ['backend', 0], ['script', 'vif-bridge'], ['mac', '00:16:3B:72:10:E4']]), 
('vbd', ['vbd', ['backend', 0], ['dev', 'sda1:disk'], ['uname', 'file:/local/xen/domains/appscale1.4/root.img'], ['mode', 'w']])], 'bootloader': None, 'maxmem': 1024, 'shadow_memory': 0, 'name': 'appscale-1.4b', 'bootloader_args': None, 'vcpus': 1, 'cpu': None} [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1784) XendDomainInfo.construct: None [2011-03-01 12:34:02 xend 3580] DEBUG (balloon:145) Balloon: 3034420 KiB free; need 4096; done. [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1953) XendDomainInfo.initDomain: 10 256.0 [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1994) _initDomain:shadow_memory=0x0, maxmem=0x400, memory=0x400. [2011-03-01 12:34:02 xend 3580] DEBUG (balloon:145) Balloon: 3034412 KiB free; need 1048576; done. [2011-03-01 12:34:02 xend 3580] INFO (image:139) buildDomain os=linux dom=10 vcpus=1 [2011-03-01 12:34:02 xend 3580] DEBUG (image:208) domid = 10 [2011-03-01 12:34:02 xend 3580] DEBUG (image:209) memsize = 1024 [2011-03-01 12:34:02 xend 3580] DEBUG (image:210) image = /boot/vmlinuz-2.6.27-7-server [2011-03-01 12:34:02 xend 3580] DEBUG (image:211) store_evtchn = 1 [2011-03-01 12:34:02 xend 3580] DEBUG (image:212) console_evtchn = 2 [2011-03-01 12:34:02 xend 3580] DEBUG (image:213) cmdline = ip=:1.2.3.4::::eth0:dhcp root=/dev/sda1 ro xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console [2011-03-01 12:34:02 xend 3580] DEBUG (image:214) ramdisk = /boot/initrd.img-2.6.27-7-server [2011-03-01 12:34:02 xend 3580] DEBUG (image:215) vcpus = 1 [2011-03-01 12:34:02 xend 3580] DEBUG (image:216) features = ==> domain-builder-ng.log <== xc_dom_build_image: called xc_dom_alloc_segment: kernel : 0xffffffff80200000 -> 0xffffffff808b7000 (pfn 0x200 + 0x6b7 pages) xc_dom_pfn_to_ptr: domU mapping: pfn 0x200+0x6b7 at 0x2aaaab5f6000 elf_load_binary: phdr 0 at 0x0x2aaaab5f6000 -> 0x0x2aaaaba3d000 elf_load_binary: phdr 1 at 0x0x2aaaaba3d000 -> 0x0x2aaaabae8888 elf_load_binary: phdr 2 at 0x0x2aaaabae9000 -> 0x0x2aaaabae9908 elf_load_binary: phdr 3 at 0x0x2aaaabaea000 -> 0x0x2aaaabb9a004 xc_dom_alloc_segment: ramdisk : 0xffffffff808b7000 -> 0xffffffff82382000 (pfn 0x8b7 + 0x1acb pages) xc_dom_malloc : 160 kB xc_dom_pfn_to_ptr: domU mapping: pfn 0x8b7+0x1acb at 0x2aaab0000000 xc_dom_do_gunzip: unzip ok, 0x8cb5e7 -> 0x1aca210 xc_dom_alloc_segment: phys2mach : 0xffffffff82382000 -> 0xffffffff82582000 (pfn 0x2382 + 0x200 pages) xc_dom_pfn_to_ptr: domU mapping: pfn 0x2382+0x200 at 0x2aaab1acb000 xc_dom_alloc_page : start info : 0xffffffff82582000 (pfn 0x2582) xc_dom_alloc_page : xenstore : 0xffffffff82583000 (pfn 0x2583) xc_dom_alloc_page : console : 0xffffffff82584000 (pfn 0x2584) nr_page_tables: 0x0000ffffffffffff/48: 0xffff000000000000 -> 0xffffffffffffffff, 1 table(s) nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 -> 0xffffffffffffffff, 1 table(s) nr_page_tables: 0x000000003fffffff/30: 0xffffffff80000000 -> 0xffffffffbfffffff, 1 table(s) nr_page_tables: 0x00000000001fffff/21: 0xffffffff80000000 -> 0xffffffff827fffff, 20 table(s) xc_dom_alloc_segment: page tables : 0xffffffff82585000 -> 0xffffffff8259c000 (pfn 0x2585 + 0x17 pages) xc_dom_pfn_to_ptr: domU mapping: pfn 0x2585+0x17 at 0x2aaab1ccb000 xc_dom_alloc_page : boot stack : 0xffffffff8259c000 (pfn 0x259c) xc_dom_build_image : virt_alloc_end : 0xffffffff8259d000 xc_dom_build_image : virt_pgtab_end : 0xffffffff82800000 xc_dom_boot_image: called arch_setup_bootearly: doing nothing xc_dom_compat_check: supported guest type: xen-3.0-x86_64 <= 
matches xc_dom_compat_check: supported guest type: xen-3.0-x86_32p xc_dom_update_guest_p2m: dst 64bit, pages 0x40000 clear_page: pfn 0x2584, mfn 0x11d788 clear_page: pfn 0x2583, mfn 0x11d789 xc_dom_pfn_to_ptr: domU mapping: pfn 0x2582+0x1 at 0x2aaab1ce2000 start_info_x86_64: called setup_hypercall_page: vaddr=0xffffffff80209000 pfn=0x209 domain builder memory footprint allocated malloc : 12139 kB anon mmap : 0 bytes mapped file mmap : 11289 kB domU mmap : 35 MB arch_setup_bootlate: shared_info: pfn 0x0, mfn 0xd6fe1 shared_info_x86_64: called vcpu_x86_64: called vcpu_x86_64: cr3: pfn 0x2585 mfn 0x11d787 launch_vm: called, ctxt=0x97b21f8 xc_dom_release: called ==> xend.log <== [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:114) DevController: writing {'mac': '00:16:3B:72:10:E4', 'handle': '0', 'protocol': 'x86_64-abi', 'backend-id': '0', 'state': '1', 'backend': '/local/domain/0/backend/vif/10/0'} to /local/domain/10/device/vif/0. [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:116) DevController: writing {'domain': 'appscale-1.4b', 'handle': '0', 'script': '/etc/xen/scripts/vif-bridge', 'state': '1', 'frontend': '/local/domain/10/device/vif/0', 'mac': '00:16:3B:72:10:E4', 'online': '1', 'frontend-id': '10'} to /local/domain/0/backend/vif/10/0. [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:634) Checking for duplicate for uname: /local/xen/domains/appscale1.4/root.img [file:/local/xen/domains/appscale1.4/root.img], dev: sda1:disk, mode: w [2011-03-01 12:34:02 xend 3580] DEBUG (blkif:27) exception looking up device number for sda1:disk: [Errno 2] No such file or directory: '/dev/sda1:disk' [2011-03-01 12:34:02 xend 3580] DEBUG (blkif:27) exception looking up device number for sda1: [Errno 2] No such file or directory: '/dev/sda1' [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:114) DevController: writing {'virtual-device': '2049', 'device-type': 'disk', 'protocol': 'x86_64-abi', 'backend-id': '0', 'state': '1', 'backend': '/local/domain/0/backend/vbd/10/2049'} to /local/domain/10/device/vbd/2049. [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:116) DevController: writing {'domain': 'appscale-1.4b', 'frontend': '/local/domain/10/device/vbd/2049', 'format': 'raw', 'dev': 'sda1', 'state': '1', 'params': '/local/xen/domains/appscale1.4/root.img', 'mode': 'w', 'online': '1', 'frontend-id': '10', 'type': 'file'} to /local/domain/0/backend/vbd/10/2049. [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:993) Storing VM details: {'shadow_memory': '0', 'uuid': 'd5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0', 'on_reboot': 'restart', 'start_time': '1299011642.74', 'on_poweroff': 'destroy', 'name': 'appscale-1.4b', 'xend/restart_count': '0', 'vcpus': '1', 'vcpu_avail': '1', 'memory': '1024', 'on_crash': 'restart', 'image': "(linux (kernel /boot/vmlinuz-2.6.27-7-server) (ramdisk /boot/initrd.img-2.6.27-7-server) (ip :1.2.3.4::::eth0:dhcp) (root '/dev/sda1 ro') (args 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console'))", 'maxmem': '1024'} [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1028) Storing domain details: {'console/ring-ref': '1169288', 'console/port': '2', 'name': 'appscale-1.4b', 'console/limit': '1048576', 'vm': '/vm/d5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0', 'domid': '10', 'cpu/0/availability': 'online', 'memory/target': '1048576', 'store/ring-ref': '1169289', 'store/port': '1'} [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:158) Waiting for devices vif. 
[2011-03-01 12:34:02 xend 3580] DEBUG (DevController:164) Waiting for 0. [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1250) XendDomainInfo.handleShutdownWatch [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:509) hotplugStatusCallback /local/domain/0/backend/vif/10/0/hotplug-status. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:509) hotplugStatusCallback /local/domain/0/backend/vif/10/0/hotplug-status. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:523) hotplugStatusCallback 1. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices usb. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices vbd. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:164) Waiting for 2049. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:509) hotplugStatusCallback /local/domain/0/backend/vbd/10/2049/hotplug-status. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:509) hotplugStatusCallback /local/domain/0/backend/vbd/10/2049/hotplug-status. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:523) hotplugStatusCallback 1. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices irq. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices vkbd. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices vfb. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices pci. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices ioports. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices tap. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices vtpm. [2011-03-01 12:34:03 xend.XendDomainInfo 3580] WARNING (XendDomainInfo:1178) Domain has crashed: name=appscale-1.4b id=10. [2011-03-01 12:34:03 xend.XendDomainInfo 3580] ERROR (XendDomainInfo:2654) VM appscale-1.4b restarting too fast (2.275545 seconds since the last restart). Refusing to restart to avoid loops. [2011-03-01 12:34:03 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2189) XendDomainInfo.destroy: domid=10 ==> xen-hotplug.log <== Nothing to flush. ==> xend.log <== [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2114) UUID Created: True [2011-03-01 12:34:03 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2115) Devices to release: [], domid = 10 [2011-03-01 12:34:03 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2127) Releasing PVFB backend devices ... 
And this is the xen.conf file that I am using:
    # cat xen.conf
    # Configuration file for the Xen instance AppScale, created
    # by VMBuilder
    kernel  = '/boot/vmlinuz-2.6.27-7-server'
    ramdisk = '/boot/initrd.img-2.6.27-7-server'
    memory  = 1024
    vcpus   = 1
    root    = '/dev/sda1 ro'
    disk    = [ 'file:/local/xen/domains/appscale1.4/root.img,sda1,w', ]
    name    = 'appscale-1.4b'
    dhcp    = 'dhcp'
    vif     = [ 'mac=00:16:3B:72:10:E4' ]
    on_poweroff = 'destroy'
    on_reboot   = 'restart'
    on_crash    = 'restart'
    extra   = 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console'
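A hedged first step for a crash with no console output is to look at the hypervisor's own message buffer and to keep the crashed domain around long enough to inspect it. The commands below are a sketch against stock Xen 3.0.3 tooling; the domain name matches the config above, everything else is generic:

    # show the hypervisor log, which often contains the real fault
    # (e.g. an unsupported kernel feature or a bad root device)
    xm dmesg | tail -50

    # keep the domain in a crashed-but-present state instead of restarting it:
    # set on_crash = 'preserve' in xen.conf, then start the domain again
    xm create -c -f xen.conf

    # with the crashed domain preserved, dump its core for inspection
    xm list
    xm dump-core appscale-1.4b /tmp/appscale-1.4b.core

Also note that the config boots the guest with a kernel and initrd whose names look like Ubuntu 8.10 artifacts (vmlinuz-2.6.27-7-server) installed into the dom0's /boot on a CentOS 5 / Xen 3.0.3 host; a mismatch between that kernel and what root.img expects (missing /lib/modules for that version, wrong console or root device) is a common cause of an immediate, silent crash and is worth checking while the image is mounted.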

    Read the article

  • OpenVPN - Windows 8 to Windows 2008 Server, not connecting

    - by niico
    I have followed this tutorial about setting up an OpenVPN Server on Windows Server - and a client on Windows (in this case Windows 8). The server appears to be running fine - but it is not connecting with this error: Mon Jul 22 19:09:04 2013 Warning: cannot open --log file: C:\Program Files\OpenVPN\log\my-laptop.log: Access is denied. (errno=5) Mon Jul 22 19:09:04 2013 OpenVPN 2.3.2 x86_64-w64-mingw32 [SSL (OpenSSL)] [LZO] [PKCS11] [eurephia] [IPv6] built on Jun 3 2013 Mon Jul 22 19:09:04 2013 MANAGEMENT: TCP Socket listening on [AF_INET]127.0.0.1:25340 Mon Jul 22 19:09:04 2013 Need hold release from management interface, waiting... Mon Jul 22 19:09:05 2013 MANAGEMENT: Client connected from [AF_INET]127.0.0.1:25340 Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'state on' Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'log all on' Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'hold off' Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'hold release' Mon Jul 22 19:09:05 2013 Socket Buffers: R=[65536->65536] S=[65536->65536] Mon Jul 22 19:09:05 2013 UDPv4 link local: [undef] Mon Jul 22 19:09:05 2013 UDPv4 link remote: [AF_INET]66.666.66.666:9999 Mon Jul 22 19:09:05 2013 MANAGEMENT: >STATE:1374494945,WAIT,,, Mon Jul 22 19:10:05 2013 TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity) Mon Jul 22 19:10:05 2013 TLS Error: TLS handshake failed Mon Jul 22 19:10:05 2013 SIGUSR1[soft,tls-error] received, process restarting Mon Jul 22 19:10:05 2013 MANAGEMENT: >STATE:1374495005,RECONNECTING,tls-error,, Mon Jul 22 19:10:05 2013 Restart pause, 2 second(s) Note I have changed the IP and port no (it uses a non-standard port for security reasons). That port is open on the hardware firewall. The server logs are showing a connection attempt from my client: TLS: Initial packet from [AF_INET]118.68.xx.xx:65011, sid=081af4ed xxxxxxxx Mon Jul 22 14:19:15 2013 118.68.xx.xx:65011 TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity) How can I problem solve this & find the problem? Thx Update - Client config file: ############################################## # Sample client-side OpenVPN 2.0 config file # # for connecting to multi-client server. # # # # This configuration can be used by multiple # # clients, however each client should have # # its own cert and key files. # # # # On Windows, you might want to rename this # # file so it has a .ovpn extension # ############################################## # Specify that we are a client and that we # will be pulling certain config file directives # from the server. client # Use the same setting as you are using on # the server. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. ;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel # if you have more than one. On XP SP2, # you may need to disable the firewall # for the TAP adapter. ;dev-node MyTap # Are we connecting to a TCP or # UDP server? Use the same setting as # on the server. ;proto tcp proto udp # The hostname/IP and port of the server. # You can have multiple remote entries # to load balance between the servers. remote 00.00.00.00 1194 ;remote 00.00.00.00 9999 ;remote my-server-2 1194 # Choose a random host from the remote # list for load-balancing. Otherwise # try hosts in the order specified. ;remote-random # Keep trying indefinitely to resolve the # host name of the OpenVPN server. 
Very useful # on machines which are not permanently connected # to the internet such as laptops. resolv-retry infinite # Most clients don't need to bind to # a specific local port number. nobind # Downgrade privileges after initialization (non-Windows only) ;user nobody ;group nobody # Try to preserve some state across restarts. persist-key persist-tun # If you are connecting through an # HTTP proxy to reach the actual OpenVPN # server, put the proxy server/IP and # port number here. See the man page # if your proxy server requires # authentication. ;http-proxy-retry # retry on connection failures ;http-proxy [proxy server] [proxy port #] # Wireless networks often produce a lot # of duplicate packets. Set this flag # to silence duplicate packet warnings. ;mute-replay-warnings # SSL/TLS parms. # See the server config file for more # description. It's best to use # a separate .crt/.key file pair # for each client. A single ca # file can be used for all clients. ca "C:\\Program Files\\OpenVPN\\config\\ca.crt" cert "C:\\Program Files\\OpenVPN\\config\\my-laptop.crt" key "C:\\Program Files\\OpenVPN\\config\\my-laptop.key" # Verify server certificate by checking # that the certicate has the nsCertType # field set to "server". This is an # important precaution to protect against # a potential attack discussed here: # http://openvpn.net/howto.html#mitm # # To use this feature, you will need to generate # your server certificates with the nsCertType # field set to "server". The build-key-server # script in the easy-rsa folder will do this. ns-cert-type server # If a tls-auth key is used on the server # then every client must also have the key. ;tls-auth ta.key 1 # Select a cryptographic cipher. # If the cipher option is used on the server # then you must also specify it here. ;cipher x # Enable compression on the VPN link. # Don't enable this unless it is also # enabled in the server config file. comp-lzo # Set log file verbosity. verb 3 # Silence repeating messages ;mute 20 Server config file: ################################################# # Sample OpenVPN 2.0 config file for # # multi-client server. # # # # This file is for the server side # # of a many-clients <-> one-server # # OpenVPN configuration. # # # # OpenVPN also supports # # single-machine <-> single-machine # # configurations (See the Examples page # # on the web site for more info). # # # # This config should work on Windows # # or Linux/BSD systems. Remember on # # Windows to quote pathnames and use # # double backslashes, e.g.: # # "C:\\Program Files\\OpenVPN\\config\\foo.key" # # # # Comments are preceded with '#' or ';' # ################################################# # Which local IP address should OpenVPN # listen on? (optional) ;local 00.00.00.00 # Which TCP/UDP port should OpenVPN listen on? # If you want to run multiple OpenVPN instances # on the same machine, use a different port # number for each one. You will need to # open up this port on your firewall. std 1194 port 1194 # TCP or UDP server? ;proto tcp proto udp # "dev tun" will create a routed IP tunnel, # "dev tap" will create an ethernet tunnel. # Use "dev tap0" if you are ethernet bridging # and have precreated a tap0 virtual interface # and bridged it with your ethernet interface. # If you want to control access policies # over the VPN, you must create firewall # rules for the the TUN/TAP interface. # On non-Windows systems, you can give # an explicit unit number, such as tun0. # On Windows, use "dev-node" for this. 
# On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. ;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel if you # have more than one. On XP SP2 or higher, # you may need to selectively disable the # Windows firewall for the TAP adapter. # Non-Windows systems usually don't need this. ;dev-node MyTap # SSL/TLS root certificate (ca), certificate # (cert), and private key (key). Each client # and the server must have their own cert and # key file. The server and all clients will # use the same ca file. # # See the "easy-rsa" directory for a series # of scripts for generating RSA certificates # and private keys. Remember to use # a unique Common Name for the server # and each of the client certificates. # # Any X509 key management system can be used. # OpenVPN can also use a PKCS #12 formatted key file # (see "pkcs12" directive in man page). ca "C:\\Program Files\\OpenVPN\\config\\ca.crt" cert "C:\\Program Files\\OpenVPN\\config\\server.crt" key "C:\\Program Files\\OpenVPN\\config\\server.key" # Diffie hellman parameters. # Generate your own with: # openssl dhparam -out dh1024.pem 1024 # Substitute 2048 for 1024 if you are using # 2048 bit keys. dh "C:\\Program Files\\OpenVPN\\config\\dh2048.pem" # Configure server mode and supply a VPN subnet # for OpenVPN to draw client addresses from. # The server will take 10.8.0.1 for itself, # the rest will be made available to clients. # Each client will be able to reach the server # on 10.8.0.1. Comment this line out if you are # ethernet bridging. See the man page for more info. server 10.8.0.0 255.255.255.0 # Maintain a record of client <-> virtual IP address # associations in this file. If OpenVPN goes down or # is restarted, reconnecting clients can be assigned # the same virtual IP address from the pool that was # previously assigned. ifconfig-pool-persist ipp.txt # Configure server mode for ethernet bridging. # You must first use your OS's bridging capability # to bridge the TAP interface with the ethernet # NIC interface. Then you must manually set the # IP/netmask on the bridge interface, here we # assume 10.8.0.4/255.255.255.0. Finally we # must set aside an IP range in this subnet # (start=10.8.0.50 end=10.8.0.100) to allocate # to connecting clients. Leave this line commented # out unless you are ethernet bridging. ;server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100 # Configure server mode for ethernet bridging # using a DHCP-proxy, where clients talk # to the OpenVPN server-side DHCP server # to receive their IP address allocation # and DNS server addresses. You must first use # your OS's bridging capability to bridge the TAP # interface with the ethernet NIC interface. # Note: this mode only works on clients (such as # Windows), where the client-side TAP adapter is # bound to a DHCP client. ;server-bridge # Push routes to the client to allow it # to reach other private subnets behind # the server. Remember that these # private subnets will also need # to know to route the OpenVPN client # address pool (10.8.0.0/255.255.255.0) # back to the OpenVPN server. ;push "route 192.168.10.0 255.255.255.0" ;push "route 192.168.20.0 255.255.255.0" # To assign specific IP addresses to specific # clients or if a connecting client has a private # subnet behind it that should also have VPN access, # use the subdirectory "ccd" for client-specific # configuration files (see man page for more info). 
# EXAMPLE: Suppose the client # having the certificate common name "Thelonious" # also has a small subnet behind his connecting # machine, such as 192.168.40.128/255.255.255.248. # First, uncomment out these lines: ;client-config-dir ccd ;route 192.168.40.128 255.255.255.248 # Then create a file ccd/Thelonious with this line: # iroute 192.168.40.128 255.255.255.248 # This will allow Thelonious' private subnet to # access the VPN. This example will only work # if you are routing, not bridging, i.e. you are # using "dev tun" and "server" directives. # EXAMPLE: Suppose you want to give # Thelonious a fixed VPN IP address of 10.9.0.1. # First uncomment out these lines: ;client-config-dir ccd ;route 10.9.0.0 255.255.255.252 # Then add this line to ccd/Thelonious: # ifconfig-push 10.9.0.1 10.9.0.2 # Suppose that you want to enable different # firewall access policies for different groups # of clients. There are two methods: # (1) Run multiple OpenVPN daemons, one for each # group, and firewall the TUN/TAP interface # for each group/daemon appropriately. # (2) (Advanced) Create a script to dynamically # modify the firewall in response to access # from different clients. See man # page for more info on learn-address script. ;learn-address ./script # If enabled, this directive will configure # all clients to redirect their default # network gateway through the VPN, causing # all IP traffic such as web browsing and # and DNS lookups to go through the VPN # (The OpenVPN server machine may need to NAT # or bridge the TUN/TAP interface to the internet # in order for this to work properly). ;push "redirect-gateway def1 bypass-dhcp" # Certain Windows-specific network settings # can be pushed to clients, such as DNS # or WINS server addresses. CAVEAT: # http://openvpn.net/faq.html#dhcpcaveats # The addresses below refer to the public # DNS servers provided by opendns.com. ;push "dhcp-option DNS 208.67.222.222" ;push "dhcp-option DNS 208.67.220.220" # Uncomment this directive to allow differenta # clients to be able to "see" each other. # By default, clients will only see the server. # To force clients to only see the server, you # will also need to appropriately firewall the # server's TUN/TAP interface. ;client-to-client # Uncomment this directive if multiple clients # might connect with the same certificate/key # files or common names. This is recommended # only for testing purposes. For production use, # each client should have its own certificate/key # pair. # # IF YOU HAVE NOT GENERATED INDIVIDUAL # CERTIFICATE/KEY PAIRS FOR EACH CLIENT, # EACH HAVING ITS OWN UNIQUE "COMMON NAME", # UNCOMMENT THIS LINE OUT. ;duplicate-cn # The keepalive directive causes ping-like # messages to be sent back and forth over # the link so that each side knows when # the other side has gone down. # Ping every 10 seconds, assume that remote # peer is down if no ping received during # a 120 second time period. keepalive 10 120 # For extra security beyond that provided # by SSL/TLS, create an "HMAC firewall" # to help block DoS attacks and UDP port flooding. # # Generate with: # openvpn --genkey --secret ta.key # # The server and each client must have # a copy of this key. # The second parameter should be '0' # on the server and '1' on the clients. ;tls-auth ta.key 0 # This file is secret # Select a cryptographic cipher. # This config item must be copied to # the client config file as well. ;cipher BF-CBC # Blowfish (default) ;cipher AES-128-CBC # AES ;cipher DES-EDE3-CBC # Triple-DES # Enable compression on the VPN link. 
# If you enable it here, you must also # enable it in the client config file. comp-lzo # The maximum number of concurrently connected # clients we want to allow. ;max-clients 100 # It's a good idea to reduce the OpenVPN # daemon's privileges after initialization. # # You can uncomment this out on # non-Windows systems. ;user nobody ;group nobody # The persist options will try to avoid # accessing certain resources on restart # that may no longer be accessible because # of the privilege downgrade. persist-key persist-tun # Output a short status file showing # current connections, truncated # and rewritten every minute. status openvpn-status.log # By default, log messages will go to the syslog (or # on Windows, if running as a service, they will go to # the "\Program Files\OpenVPN\log" directory). # Use log or log-append to override this default. # "log" will truncate the log file on OpenVPN startup, # while "log-append" will append to it. Use one # or the other (but not both). ;log openvpn.log ;log-append openvpn.log # Set the appropriate level of log # file verbosity. # # 0 is silent, except for fatal errors # 4 is reasonable for general usage # 5 and 6 can help to debug connection problems # 9 is extremely verbose verb 3 # Silence repeating messages. At most 20 # sequential messages of the same message # category will be output to the log. ;mute 20 I have changed IP's for security
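A hedged reading of the logs above: the server does see the client's initial packet, but the TLS handshake never completes. That usually comes down to the reply path being blocked (UDP answers dropped by a firewall or NAT) or the two sides disagreeing on port, protocol, or tls-auth. As a minimal sketch, the relevant lines of the two files need to line up like this (the hostname and port are placeholders, and tls-auth must be enabled on both sides or on neither):

    # client side (must match the server's proto and port)
    proto udp
    remote vpn.example.com 9999
    ;tls-auth ta.key 1

    # server side
    proto udp
    port 9999
    ;tls-auth ta.key 0

Separately, the "cannot open --log file ... Access is denied" warning only means the client was started without write access to C:\Program Files\OpenVPN\log; running the OpenVPN GUI elevated, or pointing the log at a user-writable directory, clears it.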

    Read the article

  • Where is the documentation for MSBUILD arguments to run MSDEPLOY?

    - by Simon_Weaver
    There is an excellent PDC talk available here which describes the new MSDEPLOY features in Visual Studio 2010 - as well as how to deploy an application within TFS. The talk explains some of the command line parameters, such as:
        /p:DeployOnBuild
        /p:DeployTarget=MsDeployPublish
        /p:CreatePackageOnPublish=True
        /p:MSDeployPublishMethod=InProc
        /p:MSDeployServiceURL=localhost
        /p:DeployIISAppPath="Default Web Site"
    But where is the documentation for this - explaining how these work and which ones I should use? Most of these turn up very few or zero results when searching. Isn't there some actual documentation for these parameters somewhere? I'd rather use these to deploy than try to add a command-line exec task to run the package. I've managed to create a web deployment package, which TFS is copying to the output, but I'm ending up with all kinds of errors trying to actually deploy the package. Currently, in my build configuration in TFS, I have the following MSBuild arguments:
        /p:DeployOnBuild=True /p:DeployTarget=MsDeployPublish /p:Configuration=Release /p:CreatePackageOnPublish=True /p:DeployIisAppPath=staging.example.com /p:MsDeployServiceUrl=localhost
    This, however, gives me this error: Is there any actual documentation for these arguments? It would probably take me about five minutes to get the package running from the command line, but I want to deploy like this because it will simplify multiple configurations later.
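    The closest thing to documentation is the publishing targets file itself, Microsoft.Web.Publishing.targets under C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\Web, which defines these properties and their defaults. As a hedged sketch of a full command line that has been seen to work (the project name, server URL, site path and credentials are placeholders):

        msbuild MyWebApp.csproj /p:Configuration=Release ^
            /p:DeployOnBuild=True ^
            /p:DeployTarget=MsDeployPublish ^
            /p:CreatePackageOnPublish=True ^
            /p:MSDeployPublishMethod=WMSVC ^
            /p:MSDeployServiceURL=https://staging.example.com:8172/msdeploy.axd ^
            /p:DeployIisAppPath="staging.example.com" ^
            /p:UserName=DOMAIN\deployuser /p:Password=secret ^
            /p:AllowUntrustedCertificate=True

    MSDeployPublishMethod=WMSVC goes through the IIS Web Management Service on port 8172; RemoteAgent targets the Web Deployment Agent Service instead, in which case the service URL is just the server name.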

    Read the article

  • How can I get TFS2010 to run MSDEPLOY for me through MSBUILD?

    - by Simon_Weaver
    There is an excellent PDC talk available here which describes the new MSDEPLOY features in Visual Studio 2010 - as well as how to deploy an application within TFS. You can use MSBUILD within TFS2010 to call through to MSDEPLOY to deploy your package to IIS. This is done by means of parameters to MSBUILD. The talk explains some of the command line parameters such as : /p:DeployOnBuild /p:DeployTarget=MsDeployPublish /p:CreatePackageOnPublish=True /p:MSDeployPublishMethod=InProc /p:MSDeployServiceURL=localhost /p:DeployIISAppPath="Default Web Site" But where is the documentation for this - I can't find any? I've been spending all day trying to get this to work and can't quite get it right and keep ending up with various errors. If I run the package's cmd file it deploys perfectly. But I want to get the whole deployment running through msbuild using these arguments and not a separate call to msdeploy or running the package .cmd file. How can I do this? PS. Yes I do have the Web Deployment Agent Service running. I also have the management service running under IIS. I've tried using both. Args I'm using : /p:DeployOnBuild=True /p:DeployTarget=MsDeployPublish /p:Configuration=Release /p:CreatePackageOnPublish=True /p:DeployIisAppPath=staging.example.com /p:MsDeployServiceUrl=https://staging.example.com:8172/msdeploy.axd /p:AllowUntrustedCertificate=True giving me : C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets (2660): VsMsdeploy failed.(Remote agent (URL https://staging.example.com:8172/msdeploy.axd?site=staging.example.com) could not be contacted. Make sure the remote agent service is installed and started on the target computer.) Error detail: Remote agent (URL https://staging.example.com:8172/msdeploy.axd?site=staging.example.com) could not be contacted. Make sure the remote agent service is installed and started on the target computer. An unsupported response was received. The response header 'MSDeploy.Response' was '' but 'v1' was expected. The remote server returned an error: (401) Unauthorized.
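    The 401 together with the empty 'MSDeploy.Response' header suggests the request reaches port 8172 but is not authenticated as a Web Deploy request. A hedged adjustment that often gets past this is to state the publish method and pass credentials explicitly (in the build definition these go on a single line; the user name and password are placeholders):

        /p:DeployOnBuild=True /p:DeployTarget=MsDeployPublish
        /p:Configuration=Release /p:CreatePackageOnPublish=True
        /p:DeployIisAppPath=staging.example.com
        /p:MSDeployPublishMethod=WMSVC
        /p:MsDeployServiceUrl=https://staging.example.com:8172/msdeploy.axd
        /p:AllowUntrustedCertificate=True
        /p:UserName=DOMAIN\deployuser /p:Password=secret

    It is also worth confirming that the Web Deploy IIS 7 deployment handler (not only the remote agent service) is installed on the target, since the :8172/msdeploy.axd endpoint belongs to the Web Management Service integration.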

    Read the article

  • fatal error C1084: Cannot read type library file: 'Smegui.tlb': Error loading type library/DLL.

    - by Steiny
    Hi, I am trying to build an old version of an application which consists of VC++ projects that were written in Visual Studio 2003. My OS is Windows 7 Enterprise (64-bit). When I try and build the solution I get the following errors: error C4772: #import referenced a type from a missing type library; '__missing_type__' used as a placeholder fatal error C1084: Cannot read type library file: 'Smegui.tlb': Error loading type library/DLL. They both complain about the following import statement: #import "Smegui.tlb" no_implementation This is not a case of the file path being incorrect as renaming the Smegui.tlb file causes the compiler to throw another error saying it cannot find the library. Smegui is from another application that this one depends on. I thought perhaps I was missing a dll but there is no such thing as Smegui.dll. All I know about .tlb files is that they are a type library and you can create them from an assembly using tlbexp.exe or regasm.exe (the later also registers the assembly with COM) There is also an Apache Ant build script which uses a custom task to invoke devenv.com to build the projects. This is the same script that the build server originally used to build the application. It gives me the same errors when I try and run it. The strangest thing about this is that I knew it ought to work seeing as it is all freshly checked out from subversion. I tried many different combinations of admin vs user elevation, VS vs Ant build, cleaning, release. I have got it to build successfully about 5 times but the build seems to be non-deterministic. If anyone can shed some light on how this tlb stuff even works or what this error might mean I would greatly appreciate it. Cheers, Steiny
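    The C4772 warning before the C1084 error usually means that a type library which Smegui.tlb itself references could not be found or loaded on this machine, often because that dependency was registered on the original build machines but not on this one. A hedged sketch of working around it from the source side, assuming the dependency can be located on disk (SmeguiDeps.tlb is a purely hypothetical name):

        // sketch: auto_search asks the compiler to implicitly #import any type
        // libraries that Smegui.tlb cross-references; importing the known
        // dependency first resolves its types before Smegui.tlb is read
        #import "SmeguiDeps.tlb" no_implementation
        #import "Smegui.tlb" no_implementation auto_search

    If the dependency cannot be identified, opening Smegui.tlb in the OLE/COM Object Viewer (oleview.exe) shows which external libraries it imports, which also helps explain why the build only succeeds intermittently depending on what happens to be registered.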

    Read the article

  • Server cannot set status after HTTP headers have been sent IIS7.5

    - by marcinn
    Hi, sometimes I get an exception in my production environment:
        Process information
        Process ID: 3832
        Process name: w3wp.exe
        Account name: NT AUTHORITY\NETWORK SERVICE
        Exception information
        Exception type: System.Web.HttpException
        Exception message: Server cannot set status after HTTP headers have been sent.
        Request information
        Request URL: http://www.myulr.pl/logon
        Request path: /logon
        User host address: 10.11.9.1
        User: user001
        Is authenticated: True
        Authentication Type: Forms
        Thread account name: NT AUTHORITY\NETWORK SERVICE
        Thread information
        Thread ID: 10
        Thread account name: NT AUTHORITY\NETWORK SERVICE
        Is impersonating: False
        Stack trace:
        at System.Web.HttpResponse.set_StatusCode(Int32 value)
        at System.Web.HttpResponseWrapper.set_StatusCode(Int32 value)
        at System.Web.Mvc.HandleErrorAttribute.OnException(ExceptionContext filterContext)
        at System.Web.Mvc.ControllerActionInvoker.InvokeExceptionFilters(ControllerContext controllerContext, IList`1 filters, Exception exception)
        at System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName)
        at System.Web.Mvc.Controller.ExecuteCore()
        at System.Web.Mvc.MvcHandler.<>c__DisplayClass8.<BeginProcessRequest>b__4()
        at System.Web.Mvc.Async.AsyncResultWrapper.<>c__DisplayClass1.<MakeVoidDelegate>b__0()
        at System.Web.Mvc.Async.AsyncResultWrapper.<>c__DisplayClass8`1.<BeginSynchronous>b__7(IAsyncResult _)
        at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResult`1.End()
        at System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult)
        at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
        at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
    I didn't notice this error in my test environment; what should I check? I am using ASP.NET MVC 2 (Release Candidate 2).
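    The stack trace shows HandleErrorAttribute trying to set the 500 status after part of the /logon response has already been flushed to the client (for example after a redirect or a partial write earlier in the request). The root cause to hunt for is whatever wrote output before the exception; as a hedged mitigation in the meantime, the stock filter can be wrapped so the secondary HttpException does not replace the original error. This is a sketch, and SafeHandleErrorAttribute is our own name, not a framework type; it would be applied to controllers in place of [HandleError]:

        using System.Web;
        using System.Web.Mvc;

        public class SafeHandleErrorAttribute : HandleErrorAttribute
        {
            public override void OnException(ExceptionContext filterContext)
            {
                try
                {
                    // let the stock filter render the error view and set the 500 status
                    base.OnException(filterContext);
                }
                catch (HttpException)
                {
                    // headers were already sent, so the status can no longer be
                    // changed; mark the original exception handled and log it
                    // instead of failing the request a second time
                    filterContext.ExceptionHandled = true;
                }
            }
        }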

    Read the article

  • How to use QSerialDevice in Qt?

    - by Tobias
    Hello, I am trying to use QSerialDevice in Qt to get a connection to my serial port. I also tried QextSerialPort before (which works on Windows Vista but unfortunately not on Windows XP), but I need an API which supports XP, Vista and Win7. I built the library and configured it this way:
        CONFIG += dll
        CONFIG += debug
    I used the current version from SVN (0.2.0 - 2010-04-05) and the 0.2.0 zip package. After building the library I copied it to my Qt lib dir (C:\Qt\2009.05\qt\lib) and also to C:\Windows\system32. Now I link against the lib in my project file:
        LIBS += -lqserialdevice
    I include the needed header (abstractserial.h) and use my own AbstractSerial like this:
        // Initialize
        this->serialPort->setDeviceName("COM1");
        if (!this->serialPort->open(QIODevice::ReadWrite | QIODevice::Unbuffered))
            qWarning() << "Error" << this->serialPort->errorString();
        // Configure SerialPort
        this->serialPort->setBaudRate(AbstractSerial::BaudRate4800);
        this->serialPort->setDataBits(AbstractSerial::DataBits8);
        this->serialPort->setFlowControl(AbstractSerial::FlowControlOff);
        this->serialPort->setParity(AbstractSerial::ParityNone);
        this->serialPort->setStopBits(AbstractSerial::StopBits1);
    The problem is that when I run my application, it crashes immediately with exit code -1073741515 (application failed to initialize properly). This is the same error I got using QextSerialPort under Windows XP (it worked under Windows Vista). If I build the QSerialDevice lib with the release config, and my program as well, it still crashes immediately, but with exit code -1073741819. Can someone help me with this program, or suggest another way of getting a serial port to work with Qt (maybe another API)? Otherwise I would have to use Windows API functions, which would mean that my program won't work on UNIX systems. If you have a solution for the problem with QextSerialPort under WinXP SP3, that is also welcome ;) Best Regards, Tobias
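    Exit code -1073741515 is 0xC0000135 (STATUS_DLL_NOT_FOUND): the process never starts because the loader cannot find a DLL, typically qserialdevice.dll itself or a mismatched debug/release Qt or C runtime, which fits a crash before any of your code runs. A hedged sketch of a .pro setup that links the matching build and keeps the include path explicit (the paths and the 'd' suffix on the debug library are assumptions about how the library was built):

        # sketch: select the matching debug/release build of QSerialDevice
        INCLUDEPATH += $$PWD/../qserialdevice/include
        CONFIG(debug, debug|release) {
            LIBS += -L$$PWD/../qserialdevice/lib/debug -lqserialdeviced
        } else {
            LIBS += -L$$PWD/../qserialdevice/lib/release -lqserialdevice
        }

    Copying the DLL that was built with the same compiler and the same configuration next to the application's .exe is usually enough for the loader; mixing a debug executable with a release build of the library (or the reverse) can also account for the immediate 0xC0000005 crash seen in the release configuration.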

    Read the article

  • "SignTool error: Access is denied" in TFS 2010 build process

    - by user351352
    I'm getting "SignTool Error: Access is Denied" when I attempt to sign a file. When I use an administrator cmd, all works fine. However, this process is going to be used in a TFS 2010 build process and using the InvokeProcess task with signtool gives the same access denied message as a non-administrator command prompt. More info: On a Win2008 R2 enterprise machine. User is machine admin and on the domain. The TFS Build service is also set to run as this user. Using a self signed certificate created using these instructions: How do I create a self-signed certificate for code signing on Windows? After following these instructions I have the following files: MyCA.cer MyCA.pvk MySPC.cer MySPC.pvk MySPC.pfx MyCA is in my Trusted Root Certification Authorities I imported MySPC.pfx into personal certificates, following the advice here: SignTool error: Access is denied To do the signing I'm using the thumbprint of the MySPC.pfx that was imported into the Personal section so my signtool command looks like: sign /sha1 1e9d7b5ad98552d9c58944e3f3903e6b929f4819 /t http://timestamp.verisign.com/scripts/timestamp.dll "FileName" Once again this works in Admin mode. This also works when running cmd as administrator: sign /f "C:\Code Signing Non-Release\MySPC.pfx" /t http://timestamp.verisign.com/scripts/timestamp.dll "FileName" New to code signing in general, so any help is welcome.

    Read the article

  • UIViewController presentModalViewController: animated: doing nothing?

    - by ryyst
    Hi, I recently started a project using Apple's Utility Application example project. In the example project, there's an info button that shows an instance of FlipSideView. If you know the Weather.app, you know how the button acts. I then changed the MainWindow.xib to contain a scrollview in the middle of the window and a page-control view at the bottom of the window (again, like the Weather.app). The scrollview gets filled with instances of MainView. When I then clicked the info button, the FlipSideView would show, but only in the area that was previously filled by the MainView instance – this means that the page-control view at the bottom of the page still showed when the FlipSideView instance got loaded. So I thought I would simply add a UIViewController for the top-most window, which is the one declared inside the AppDelegate created alongside the project. I created a subclass of UIViewController, put an instance of it inside MainWindow.xib and connected its view outlet to the UIWindow declared as window inside the app delegate. I also changed the button's action, so that it now sends a message to the MainWindowController instance. The message does get sent (I checked with NSLog() statements), but the FlipSideView doesn't get shown. Here's the relevant (?) code:
        FlipsideViewController *controller = [[FlipsideViewController alloc] initWithNibName:@"FlipsideView" bundle:nil];
        controller.delegate = self;
        controller.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal;
        [self presentModalViewController:controller animated:YES];
        [controller release];
    Why is this not working? I've uploaded the entire project here so you can see the whole thing. Thanks for the help! -- Ry
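    One hedged explanation that fits the symptom: presenting a modal view controller needs the presenting controller's view to actually be on screen, but wiring a UIViewController's view outlet to the UIWindow itself does not give the controller a real view of its own to flip from, so the call can quietly do nothing. A sketch of the usual iOS 3-era wiring, treating the outlet names as assumptions about the project:

        // in the app delegate (outlets 'window' and 'mainWindowController' are
        // assumed to be wired up in MainWindow.xib): give the controller its own
        // full-screen view inside the window, then present from that controller
        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            mainWindowController.view.frame = [[UIScreen mainScreen] applicationFrame];
            [window addSubview:mainWindowController.view];
            [window makeKeyAndVisible];
        }

    With the scroll view and page control living inside MainWindowController's view, presentModalViewController:animated: on that controller flips the whole screen, page control included.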

    Read the article

  • Silverlight nested RadGridView SelectedItem DataContext

    - by Ciaran
    Hi, I'm developing a Silverlight 4 app and am using the 2010 Q1 release 1 RadGridView. I'm developing this app using the MVVM pattern and trying to keep my codebehind to a minimum. On my View I have a RadGridView and this binds to a property on my ViewModel. I am setting a property via the SelectedItem. I have a nested RadGridView and I want to set a property on my ViewModel to the SelectedItem, but I cannot. I think the DataContext of my nested grid is the element in the parent's bound collection, rather than my ViewModel. I can easily use codebehind to set my ViewModel property from the SelectionChanged event on the nested grid, but I'd rather not do this. I have tried to use my viewModelName in the ElementName in my nested grid to specify that for SelectedItem the ViewModel is the DataContext, but I cannot get this to work. Any ideas? Here is my Xaml:
        <grid:RadGridView x:Name="master"
                          ItemsSource="{Binding EntityClassList, Mode=TwoWay}"
                          SelectedItem="{Binding SelectedEntityClass, Mode=TwoWay}"
                          AutoGenerateColumns="False">
          <grid:RadGridView.Columns>
            <grid:GridViewSelectColumn></grid:GridViewSelectColumn>
            <grid:GridViewDataColumn DataMemberBinding="{Binding Description}" Header="Description"/>
          </grid:RadGridView.Columns>
          <grid:RadGridView.RowDetailsTemplate>
            <DataTemplate>
              <grid:RadGridView x:Name="child"
                                ItemsSource="{Binding EntityDetails, Mode=TwoWay}"
                                SelectedItem="{Binding DataContext.SelectedEntityDetail, ElementName='RequestView', Mode=TwoWay}"
                                AutoGenerateColumns="False">
                <grid:RadGridView.Columns>
                  <grid:GridViewSelectColumn></grid:GridViewSelectColumn>
                  <grid:GridViewDataColumn DataMemberBinding="{Binding ServiceItem}" Header="Service Item" />
                  <grid:GridViewDataColumn DataMemberBinding="{Binding Comment}" Header="Comments" />
                </grid:RadGridView.Columns>
              </grid:RadGridView>
            </DataTemplate>
          </grid:RadGridView.RowDetailsTemplate>
        </grid:RadGridView>

    Read the article

  • Building a VS2010 solution from TFS2008

    - by slugster
    I have a TFS 2008 Build Agent that has been used to build .NET 3.5 applications. I now have a .NET 4.0 app which I want to compile on the same build agent. I have ensured that MSBuild 4.0 and all the required components are installed there, but I am getting the following MSB4062 error when building:
        [Any CPU/Release] C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets(244,5): error MSB4062: The "Microsoft.WebApplication.Build.Tasks.GetSilverlightItemsFromProperty" task could not be loaded from the assembly C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.Build.Tasks.dll. Could not load file or assembly 'file:///C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.Build.Tasks.dll' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded. Confirm that the declaration is correct, and that the assembly and all its dependencies are available.
    I am presuming that I get this because the TFSBuild.proj gets executed by MSBuild 3.5, which in turn means my solution is compiled with MSBuild 3.5. Am I correct with my diagnosis? Is there any way to ensure that TFS 2008 uses MSBuild 4.0 for my solution? Can it be done on a single team project so that it doesn't affect any other team projects being built on the same build agent? Note that I have checked the question "Build failing - VS2010 solution on TFS2008" and this is not a duplicate. Thanks :)
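    The diagnosis sounds right: the error text ("built by a runtime newer than the currently loaded runtime") is what MSBuild 3.5 reports when it hits the .NET 4 web application tasks. TFS 2008 decides which MSBuild.exe to launch from the build service configuration rather than from the team project, so the granularity available is per agent, not per project. One heavily hedged approach is to point the agent at the .NET 4 MSBuild via the MSBuildPath setting in its configuration file and restart the Team Foundation Build service (the framework path below assumes a default 64-bit .NET 4 install):

        <!-- %ProgramFiles%\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies\tfsbuildservice.exe.config -->
        <appSettings>
          <add key="MSBuildPath" value="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\" />
        </appSettings>

    Because that changes every build the agent runs, the usual way to keep the effect to a single team project is a second build agent configured this way and used only by the VS2010 build definition.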

    Read the article

  • HintPath vs ReferencePath in Visual Studio

    - by toasteroven
    What exactly is the difference between the HintPath in a .csproj file and the ReferencePath in a .csproj.user file? We're trying to commit to a convention where dependency DLLs are in a "releases" svn repo and all projects point to a particular release. Since different developers have different folder structures, relative references won't work, so we came up with a scheme to use an environment variable pointing to the particular developer's releases folder to create an absolute reference. So after a reference is added, we manually edit the project file to change the reference to an absolute path using the environment variable. I've noticed that this can be done with both the HintPath and the ReferencePath, but the only difference I could find between them is that HintPath is resolved at build-time and ReferencePath when the project is loaded into the IDE. I'm not really sure what the ramifications of that are though. I have noticed that VS sometimes rewrites the .csproj.user and I have to rewrite the ReferencePath, but I'm not sure what triggers that. I've heard that it's best not to check in the .csproj.user file since it's user-specific, so I'd like to aim for that, but I've also heard that the HintPath-specified DLL isn't "guaranteed" to be loaded if the same DLL is e.g. located in the project's output directory. Any thoughts on this?
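    For the convention described, the environment-variable trick can live entirely in HintPath inside the shared .csproj, since MSBuild expands environment variables as properties when it evaluates the project; that keeps .csproj.user out of source control altogether. A sketch, where the variable and assembly names are placeholders:

        <!-- each developer sets DEPENDENCY_RELEASES to their local releases working copy -->
        <Reference Include="Some.Library">
          <SpecificVersion>False</SpecificVersion>
          <HintPath>$(DEPENDENCY_RELEASES)\Some.Library\1.2.0\Some.Library.dll</HintPath>
        </Reference>

    HintPath then takes part in the normal reference-resolution search order at build time (which also probes the output directory, the GAC and a few other locations, which is why it is a hint rather than a guarantee), while ReferencePath in .csproj.user is a per-user IDE setting that Visual Studio is free to rewrite.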

    Read the article

  • Deploy ASP.NET MVC 2 to IIS 7.5 targeting .NET 3.5

    - by Agent_9191
    I created an ASP.NET MVC 2 application in Visual Studio 2008. I set the release build to go through the ASP.NET compiler to precompile all the views, minify Javascript and CSS, clean up the web.config, etc. Since the production deployment is going to an IIS6 server, I set up my pseudo-production deployment on my Windows 7 machine to have the application pool run in classic mode targeting the 2.0 runtime. I set up the extensionless handler in the web.config that's necessary and everything worked great. The problem came when I upgraded the solution to Visual Studio 2010. I'm still targeting the 3.5 framework, but now I'm using MSBuild 4.0 since that's what Visual Studio 2010 uses. Everything still compiles correctly because it runs fine under Cassini, but when I deploy it to the same location (same application pool, identity, etc) it now behaves differently. I still have the extensionless handler in the web.config, but now when I navigate to the root of the application it does directory browsing, and any routes that it had previously handled now come back as 404 errors being handled by the StaticFile handler in IIS. I'm at a loss for what changed and is causing the break. I have looked at this question, but I have already verified that all the prerequisite components are installed.
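    When extensionless URLs start falling through to the StaticFile handler and the site root shows directory browsing, the classic-mode extensionless mapping is no longer taking effect; since the upgrade also brought MSBuild 4.0's web.config handling into play, the Release configuration transform is a likely place for the handler entry to have been dropped. A hedged fragment of the kind of mapping that works for MVC on a classic-mode .NET 2.0 pool (the aspnet_isapi.dll path and the bitness precondition have to match the application pool; Framework instead of Framework64 for a 32-bit pool):

        <system.webServer>
          <handlers>
            <!-- sketch: route every extensionless request through ASP.NET in classic mode -->
            <add name="MvcWildcard" path="*" verb="*" modules="IsapiModule"
                 scriptProcessor="%windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll"
                 resourceType="Unspecified" requireAccess="Script"
                 preCondition="classicMode,runtimeVersionv2.0,bitness64" />
          </handlers>
        </system.webServer>

    Comparing the web.config that actually lands in the deployed folder against the one Cassini uses is the quickest way to confirm whether the entry survived the new build.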

    Read the article

  • XamlParseException using Silverlight Toolkit control in Expression Blend

    - by Dan Auclair
    I am having a strange issue opening up my UserControl in Expression Blend when using a Silverlight Toolkit control. My UserControl uses the toolkit's ListBoxDragDropTarget as follows: <controlsToolkit:ListBoxDragDropTarget mswindows:DragDrop.AllowDrop="True" HorizontalContentAlignment="Stretch" VerticalContentAlignment="Stretch"> <ListBox ItemsSource="{Binding MyItemControls}" ScrollViewer.HorizontalScrollBarVisibility="Disabled"> <ListBox.ItemsPanel> <ItemsPanelTemplate> <controlsToolkit:WrapPanel/> </ItemsPanelTemplate> </ListBox.ItemsPanel> </ListBox> </controlsToolkit:ListBoxDragDropTarget> Everything works as expected at runtime and looks fine in Visual Studio 2008. However, when I try to open my UserControl in Blend I get XamlParseException: [Line: 0 Position: 0] and I can not see anything in the design view. More specifically Blend complains: The element "ListBoxDragDropTarget" could not be displayed because of a problem with System.Windows.Controls.ListBoxDragDropTarget: TargetType mismatch. My silverlight application is referencing System.Windows.Controls.Toolkit from the Nov. 2009 toolkit release, and I've made sure to include these namespace declarations for the ListBoxDragDropTarget: xmlns:controlsToolkit="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Toolkit" xmlns:mswindows="clr-namespace:Microsoft.Windows;assembly=System.Windows.Controls.Toolkit" If I comment out the ListBoxDragDropTarget control wrapper and just leave the ListBox I can see everything fine in the design view without errors. Furthermore, I realized this is happening with a variety of Silverlight Toolkit controls because if I comment out ListBoxDragDropTarget and replace it with <controlsToolkit:BusyIndicator /> the same exact error occurs in Blend. What is even weirder is that if I start a brand new silverlight application in blend I can add these toolkit elements without any kind of error, so it seems like something dumb that is happening with my project references to the toolkit assemblies. I'm pretty sure this has something to do with loading the default styles for the toolkit controls from its generic.xaml, since the error has to do with the TargetType and Blend is probably trying to load up the default styles. Has anyone encountered this issue before or have any ideas as to what may be my problem?

    Read the article

< Previous Page | 509 510 511 512 513 514 515 516 517 518 519 520  | Next Page >