Search Results

Search found 913 results on 37 pages for 'targets'.

Page 25 of 37

  • Fix: SqlDeploy Task Fails with NullReferenceException at ExtractPassword

    Still working on getting a TeamCity build working (see my last post). The latest exception is:

        C:\Program Files\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(120, 5): error MSB4018: The "SqlDeployTask" task failed unexpectedly.
        System.NullReferenceException: Object reference not set to an instance of an object.
           at Microsoft.Data.Schema.Common.ConnectionStringPersistence.ExtractPassword(String partialConnection, String dbProvider)
           at Microsoft.Data.Schema.Common.ConnectionStringPersistence.RetrieveFullConnection(String partialConnection, String provider, Boolean presentUI, String password)
           at Microsoft.Data.Schema.Sql.Build.SqlDeployment.ConfigureConnectionString(String connectionString, String databaseName)
           at Microsoft.Data.Schema.Sql.Build.SqlDeployment.OnBuildConnectionString(String partialConnectionString, String databaseName)
           at Microsoft.Data.Schema.Build.Deployment.FinishInitialize(String targetConnectionString)
           at Microsoft.Data.Schema.Build.Deployment.Initialize(FileInfo sourceDbSchemaFile, ErrorManager errors, String targetConnectionString)
           at Microsoft.Data.Schema.Build.DeploymentConstructor.ConstructServiceImplementation()
           at Microsoft.Data.Schema.Extensibility.ServiceConstructor'1.ConstructService()
           at Microsoft.Data.Schema.Tasks.DBDeployTask.Execute()
           at Microsoft.Build.BuildEngine.TaskEngine.ExecuteInstantiatedTask(EngineProxy engineProxy, ItemBucket bucket, TaskExecutionMode howToExecuteTask, ITask task, Boolean& taskResult)

    This time searching yielded some good stuff, including this thread that talks about how to resolve the error via permissions. The short answer is that the account your build server runs under needs the necessary permissions in SQL Server. You'll need to create a Login and then ensure at least the minimum rights are configured as described here: Required Permissions in Database Edition. Alternatively, you can just make your build server account an admin on the database (which is probably running on the same machine anyway), and at that point it should be able to do whatever it needs to.

    If you're certain the account has the necessary permissions, but you're still getting the error, the problem may be that the account has never logged into the build server. In this case, there won't be any entry in the HKCU hive in the registry, which the system is checking for permissions (see this thread). The solution in this case is quite simple: log into the machine (once is enough) with the build server account, then open Visual Studio (thanks Brendan for the answer in this thread).

    Summary:
    - Make sure the build service account has the necessary database permissions
    - Make sure the account has logged into the server so it has the necessary registry hive info
    - Make sure the account has run Visual Studio at least once so its settings are established

    In my case I went through all 3 of these steps before I resolved the problem.
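    For reference, here is a minimal T-SQL sketch of the permissions fix. The names DOMAIN\BuildService and MyTargetDb are placeholders for your own build account and target database, and granting dbcreator/db_owner is the blunt "make it an admin" option mentioned above rather than the documented minimum rights for Database Edition:

        -- Placeholder names: replace DOMAIN\BuildService and MyTargetDb with your own.
        USE [master];
        CREATE LOGIN [DOMAIN\BuildService] FROM WINDOWS;

        -- Blunt option: let the build account create/drop databases during deployment.
        EXEC sp_addsrvrolemember @loginame = N'DOMAIN\BuildService', @rolename = N'dbcreator';

        -- Or scope it to the target database only.
        USE [MyTargetDb];
        CREATE USER [DOMAIN\BuildService] FOR LOGIN [DOMAIN\BuildService];
        EXEC sp_addrolemember N'db_owner', N'DOMAIN\BuildService';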

    Read the article

  • Role based access control in Oracle VM using Enterprise Manager 12c

    - by Ronen Kofman
    Enterprise Manager lets you control any element in the environment and define which users can do what on each element. Here is an example of how to set up RBAC (Role Based Access Control) for Oracle VM using Enterprise Manager; this is a very simplified explanation to help you get going. For more comprehensive explanations please refer to the Enterprise Manager User Guide.

    First, some basic Enterprise Manager terminology:
    - Target – any element in the environment is a target: server, pool, zone, VM, etc.
    - Administrators – the Enterprise Manager users who can log in to the platform.
    - Roles – privilege profiles which can be applied to Administrators.

    The first step is to discover the virtual environment and bring it into Enterprise Manager. This process is simple and can be done in two ways:
    - Work on your Oracle VM Manager, set it up until you feel comfortable, and then register it in Enterprise Manager.
    - Use Enterprise Manager and build it all from there.

    In both cases we will be able to see the same picture from Oracle VM and from Enterprise Manager; any change made in one will be reflected in the other. (Screenshots: Oracle VM Manager and Enterprise Manager.)

    Once you have your virtual environment set up in Enterprise Manager, it is time to start associating VMs with users (or Administrators, as they are called in Enterprise Manager). Enterprise Manager allows us to connect to multiple different identity services and import users from them, but the simplest way to add Administrators is to go to Setup -> Security -> Administrators and create a new Administrator. The creation wizard will walk you through several stages and allow you to assign role(s) to your newly created Administrator; using roles can really shorten the process if done multiple times. When you get to the "Target Privileges" stage, scroll down to the bottom to the "Target Privileges" section. In this section you can add targets (virtual machines in our case) and define the type of privileges you would like to assign to the Administrator you are creating. In this example I chose one of the VMs and granted full privileges to the newly created Administrator. (Screenshot: Administrator creation wizard, "Target Privileges".)

    Now when you log in as the newly created Administrator, you will only see the VM that was assigned to you and will have full control over it. That's it – simple and straightforward. Enterprise Manager offers many more things which I skipped here, but the point is that if you need role based access control, Enterprise Manager can give it to you in a very easy way. Oh, and one more thing: virtualization management in Enterprise Manager has no license cost. Sweet.

    Read the article

  • Web 2.0 Solutions with Oracle WebCenter 11g &ndash; Book Review

    - by juan.ruiz
    Recently I obtained a copy of the book Web 2.0 Solutions with Oracle WebCenter 11g from Packt Publishing. Right away I noticed that one of the authors is a good and long-time colleague of mine, Plinio Arbizu, whom I have joined for different developer events in Latin America in the past. In this entry you will find my review of the book.

    Chapter 1: What's Oracle WebCenter? Provides you with the basic knowledge to understand the pieces of WebCenter and the role these pieces play in the overall Oracle Fusion Middleware strategy.
    Chapters 2 and 3: Guide you through the installation process and set-up instructions required to start developing Web 2.0 applications. The screenshots are very helpful.
    Chapter 4: Guides you through a series of steps for creating a basic HelloWorld application that uses the ADF/Web services/WebCenter framework, to understand the relevant pieces of the architecture in large Web 2.0 solutions for WebCenter. One caveat on this chapter is that the use of HTML in combination with ADF Faces is not a recommended practice, because in some cases (not in this one) HTML code generated by the components can conflict with existing HTML code placed on the same page... so be careful.
    Chapter 5: Describes the basics of using ADF Faces Rich Client Components, with templates and ADF Business Components.
    Chapter 6: Explains how to encapsulate, deploy and consume ADF UIs as JSR 168 portlets in a declarative way.
    Chapter 7: Explains some of the WebCenter services and the different ways these services can be integrated within WebCenter applications.
    Chapter 8: Goes over how to include a series of out-of-the-box WebCenter services within applications. This chapter presents a simple and clear way to include RSS feeds, search capabilities, tagging and discussions, using practical samples that are easy to follow.
    Chapter 9: Presents an important component of Oracle WebCenter, the Composer. Through the Composer and Oracle Metadata Services, applications gain all the functionality needed to perform end-user personalizations, which is a very common use case when working with portals. The concept is self-explanatory when running through the practice developed in this chapter.
    Chapter 10: Provides an introduction to WebCenter Spaces, explaining common concepts about installation and administration (role creation, group creation, etc.); through a sample, readers can put everything into practice in their own environments.

    Summary: This book provides the reader with a fast start for working with Oracle WebCenter 11g and its different components. In my opinion the book targets the developer audience rather than the portal or content-generator audience. For the readers of this book, I recommend that to better understand the concepts discussed, you first understand the basics of Oracle Application Development Framework. Believe me, you can thank me later!

    Read the article

  • Web Experience Management: Segmentation & Targeting - Chalk Talk with John

    - by Michael Snow
    Today's post comes from our WebCenter friend, John Brunswick.

    Having trouble getting your arms around the differences between Web Content Management (WCM) and Web Experience Management (WEM)? Told through story, the video below outlines the differences in an easy-to-understand manner. By following the journey of Mr. and Mrs. Smith on their adventure to find the best amusement park in two neighboring towns, we can clearly see what an impact context and relevancy play in our decision making within online channels. Just as when we search to connect with the best products and services for our needs, the Smiths have their grandchildren coming to visit next week, and finding the best park is essential to guarantee a great family vacation. One town effectively Segments and Targets visitors to enhance their experience, reducing the effort needed to learn about its park. Have a look below to join the Smiths in their search.

    Learn MORE about how you might measure up: Deliver Engaging Digital Experiences | Drive Digital Marketing Success | Access Free Assessment Tool

    Read the article

  • Organizing Git repositories with common nested sub-modules

    - by André Caron
    I'm a big fan of Git sub-modules. I like to be able to track a dependency along with its version, so that you can roll back to a previous version of your project and have the corresponding version of the dependency to build safely and cleanly. Moreover, it's easier to release our libraries as open source projects because the history for the libraries is separate from that of the applications that depend on them (and which are not going to be open sourced).

    I'm setting up a workflow for multiple projects at work, and I was wondering how it would be if we took this approach to a bit of an extreme instead of having a single monolithic project. I quickly realized there is a potential can of worms in really using sub-modules. Suppose a pair of applications, studio and player, and dependent libraries core, graph and network, where the dependencies are as follows:
    - core is standalone
    - graph depends on core (sub-module at ./libs/core)
    - network depends on core (sub-module at ./libs/core)
    - studio depends on graph and network (sub-modules at ./libs/graph and ./libs/network)
    - player depends on graph and network (sub-modules at ./libs/graph and ./libs/network)

    Suppose that we're using CMake and that each of these projects has unit tests and all the works. Each project (including studio and player) must be able to be compiled standalone to perform code metrics, unit testing, etc. The thing is, after a recursive git submodule fetch you get the following directory structure:

        studio/
        studio/libs/                     (sub-module depth: 1)
        studio/libs/graph/
        studio/libs/graph/libs/          (sub-module depth: 2)
        studio/libs/graph/libs/core/
        studio/libs/network/
        studio/libs/network/libs/        (sub-module depth: 2)
        studio/libs/network/libs/core/

    Notice that core is cloned twice in the studio project. Aside from this wasting disk space, I have a build system problem because I'm building core twice and I potentially get two different versions of core.

    Question: How do I organize sub-modules so that I get the versioned dependency and standalone build without getting multiple copies of common nested sub-modules?

    Possible solution: If the library dependency is somewhat of a suggestion (i.e. in a "known to work with version X" or "only version X is officially supported" fashion) and potential dependent applications or libraries are responsible for building with whatever version they like, then I could imagine the following scenario:
    - Have the build system for graph and network tell them where to find core (e.g. via a compiler include path).
    - Define two build targets, "standalone" and "dependency", where "standalone" is based on "dependency" and adds the include path pointing to the local core sub-module.
    - Introduce an extra dependency: studio on core.

    Then studio builds core, sets the include path to its own copy of the core sub-module, then builds graph and network in "dependency" mode. The resulting folder structure looks like this (see the CMake sketch after this description for how the two modes might be wired up):

        studio/
        studio/libs/                     (sub-module depth: 1)
        studio/libs/core/
        studio/libs/graph/
        studio/libs/graph/libs/          (empty folder, sub-modules not fetched)
        studio/libs/network/
        studio/libs/network/libs/        (empty folder, sub-modules not fetched)

    However, this requires some build system magic (I'm pretty confident this can be done with CMake) and a bit of manual work on the part of version updates (updating graph might also require updating core and network to get a compatible version of core in all projects). Any thoughts on this?
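    As an illustration only, here is a minimal CMake sketch of the "standalone" vs. "dependency" idea. The option name GRAPH_STANDALONE, the CORE_INCLUDE_DIR variable and the source file names are hypothetical, not part of the original question:

        # graph/CMakeLists.txt (hypothetical sketch)
        cmake_minimum_required(VERSION 2.8)
        project(graph CXX)

        # "standalone" mode: use the locally fetched sub-module in ./libs/core.
        # "dependency" mode: the parent project tells us where core lives.
        option(GRAPH_STANDALONE "Build graph against its own core sub-module" ON)

        if(GRAPH_STANDALONE)
            set(CORE_INCLUDE_DIR "${CMAKE_CURRENT_SOURCE_DIR}/libs/core/include")
        endif()
        if(NOT CORE_INCLUDE_DIR)
            message(FATAL_ERROR "Set CORE_INCLUDE_DIR when GRAPH_STANDALONE is OFF")
        endif()

        include_directories(${CORE_INCLUDE_DIR})
        add_library(graph src/graph.cpp)

    A top-level studio CMakeLists.txt could then set GRAPH_STANDALONE to OFF (and CORE_INCLUDE_DIR to its own libs/core) before calling add_subdirectory(libs/graph) and add_subdirectory(libs/network), so both libraries compile against studio's single copy of core.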

    Read the article

  • Games at Work Part 2: Gamification and Enterprise Applications

    - by ultan o'broin
    Gamification and Enterprise Applications. In part 1 of this article, we explored why people are motivated to play games so much. Now, let's think about what that means for Oracle applications user experience. (Even the coffee is gamified. Acknowledgement @noelruane. Check out the Guardian article "Dublin's Frothing with Tech Fever" – game development is big business in Ireland too.)

    Applying game dynamics (gamification) effectively in the enterprise applications space to reflect business objectives is now a hot user experience topic. Consider, for example, how such dynamics could solve applications users' problems such as:
    - Becoming familiar or expert with an application or process
    - Building loyalty, customer satisfaction, and branding relationships
    - Collaborating effectively and populating content in the community
    - Completing tasks or solving problems on time
    - Encouraging teamwork to achieve goals
    - Improving data accuracy and completeness of entry
    - Locating and managing the correct resources or information
    - Managing changes and exceptions
    - Setting and reaching targets, quotas, or objectives

    Games' Incentives, Motivation, and Behavior. I asked Julian Orr, Senior Usability Engineer on the Oracle Fusion Applications CRM User Experience (UX) team, for his thoughts on what potential gamification might offer Oracle Fusion Applications. Julian pointed to the powerful incentives offered by games as the starting place: "The biggest potential for gamification in enterprise apps is as an intrinsic motivator. Mechanisms include fun, social interaction, teamwork, primal wiring, adrenaline, financial, closed-loop feedback, locus of control, flow state, and so on. But we need to know what works best for a given work situation." For example, in CRM service applications, we might look at the motivations of typical service applications users (figure 1) and then determine how we can 'gamify' these motivations with techniques to optimize the desired work behavior for the role (figure 2). (Figure 1: motivations of typical service applications users. Figure 2: gamification techniques for the role. Not reproduced here.)

    Involving Our Users. Online game players are skilled collaborators as well as problem solvers. Erika Webb (@erikanollwebb), Oracle Fusion Applications UX Manager, has run gamification events for Oracle, including one on collaboration and gamification in Oracle online communities that involved Oracle customers and partners. However, let's be clear: gamifying a user interface that's poorly designed is merely putting the lipstick of gamification on the pig of work. Gamification cannot replace good design and killer content based on understanding how applications users really work and what motivates them.

    So, Let the Games Begin! Gamification has tremendous potential for the enterprise application user experience. The Oracle Fusion Applications UX team is innovating fast and hard in this area, researching with our users how gamification can make work more satisfying and enterprises more productive. If you're interested in knowing more about our gamification research, sign up for more information or check out how your company can get involved through the Oracle Usability Advisory Board. Your thoughts? Find those comments.

    Read the article

  • Oracle went back to school !....

    - by Cristina Ciocoiu
    I am Georgiana, Contracts Manager for Oracle University and Advanced Customer Services in Romania. I started working for Oracle 4 years ago as a Contracts Specialist. Two years ago I became manager of a team of 9 Contracts Specialists. On a sunny day in March, some members of my team, accompanied by Recruitment colleagues, visited the students of the Academy of Economic Studies. This was part of a new initiative to raise awareness of career opportunities at Oracle. We spent approximately 2 hours illustrating and explaining different aspects of the day-to-day activities of an Oracle Contracts Specialist to the future graduates of the Academy.

    Role Play. Since a role play is worth 1000 job descriptions, the audience witnessed an entertaining performance on the contracting process, from the negotiation with the customer to the actual signing of the contract. The main focus was on the role of the Contracts Specialist liaising with all the groups involved and ensuring that the contract is compliant with Oracle policies while generating the expected revenue. However, the team took other roles as well, i.e. Sales Representative, Customer, Business Approver and Lawyer, to demonstrate their part in the process. As each of these roles only has a small slice of the big pie, it is vital to understand what happens before and after you come on stage as a Contracts Specialist.

    Contracts Specialist. Being a Contracts Specialist goes beyond simply knowing what policies apply; it means understanding Oracle's core business model, understanding customers' requests and addressing them in the most effective way. The job also involves connecting smaller teams that are often geographically dispersed across multiple regions so that they become a bigger, stronger and more successful team. You are the expert in this key position that can facilitate the closing of a deal, or stop it from happening if the risk is too high. The role play provided insights on both.

    Why I love this job. Events of this kind are sometimes just as useful for the "recruiters" as for the "recruits". For me, as a presenter, it was an excellent opportunity to think about the many reasons why I love what I do in the Contracts department every day, and to share this with the students. I wanted to explain to the audience, who are still considering education and career possibilities, that what we do in Contracts DOES make a difference. You have the power to achieve targets that you did not think reachable before. Working in the dynamic Oracle environment shapes you as a person, and there is a lot to take away from this experience. Looking back on my years in the Academy (I graduated from the Academy myself), I wish I could have listened to more people talking about their great jobs and about how I could get there. If those had been Oracle people, I might have been writing this article sooner.

    If you are interested in joining the Contracts team, please click here for more information or contact lavinia.protopopescu-AT-oracle-DOT-com. You can find all openings in Romania via http://campus.oracle.com

    Read the article

  • How Big Data and Social Won the Election

    - by Mike Stiles
    The story of big data's influence on the outcome of the US presidential election is worth a good look, because a) it's a harbinger of things to come, and b) it's an example of similar successes available to any enterprise seriously resourcing integrated big data, modeling, and data-driven execution on all assets, including social.

    Obama campaign manager Jim Messina fielded a data and analytics brain trust 5 times larger than in 2008. At that time, there were numerous databases from various sources, few of them talking to each other. This time, the mission was to be metrics-centered and measure everything measurable, in context with all the other data. Big data showed them exactly what they needed to know and told them what to do about it. It showed them women 40-49 on the west coast would donate big money if they got to eat with George Clooney. Women on the east coast would pony up to hang out with Sarah Jessica Parker. Extensive daily modeling showed them what kinds of email appeals, from whom, and to whom, would prove most successful in raising cash, recruiting volunteers, and getting out the vote. Swing state voters were profiled and approached with more customized targeting than at any time in history. Ads were purchased on specific shows watched by the targets, increasing efficiency 14% over traditional media buys. For all the criticism of the candidate's focus on appearing on comedy and entertainment shows, and local radio morning shows, that's where the data sent them to reach the voters most likely to turn out for them.

    And then there was social. Again, more than in any other election, Facebook was used for virtual, highly efficient door-to-door canvassing. Facebook fans got pictures of friends in swing states and were asked to encourage them to act. Using that approach, 1 in 5 peer-to-peer appeals led to the desired action. Assumptions, gut, intuition, campaign experience, all took a backseat to strategy shifts solidly backed up by data. Zeroing in on demographics likely to back the President, and tracking their mood daily, literally changed the voter landscape. The Romney team watched Obama voters appear seemingly out of thin air. One Obama campaign aide said, "We ran the election 66,000 times every night."

    Which brings us to your organization. If you're starting to feel like the battle cry of "but this is the way we've always done it" is starting to put you in an extremely vulnerable position, you're right. Social has become a key communication tool of the 21st century. Failing to use it, or failing to invest in a deep understanding of who your customers and prospects are so the content you post there will achieve desired actions and results, will leave you waking up one morning wondering, "What happened?" @mikestiles (Photo: stock.xchng)

    Read the article

  • Architects, Leadership, and Influence

    - by Bob Rhubart
    Technical expertise is a given for architects. In addition to solid development experience, extensive knowledge of technical trends, tools, standards, and methodologies (not to mention business acumen) provides the foundation for the decisions the architect must make in the effort to get all the pieces to work together. But even superior technical chops can't overcome a lack of leadership. Leadership is about influence: the ability to effectively communicate, to sell your ideas and defend your decisions in a manner that affects the decisions of the people around you. Leadership and influence are especially important in situations in which the architect may not have the authority to simply tell people what to do. And even when the architect has that kind of authority, influential leadership can mean the difference between gaining real buy-in and support from colleagues and stakeholders, and settling for their grudging acceptance (or worse). Guess which outcome is likely to produce the best results.

    In a previous post I presented some examples of the kind of criticism that is leveled at architects, a great deal of which can be attributed to a lack of leadership and influence on the part of the targets of that criticism. So it was serendipitous that I recently ran across a post on the Harvard Business Review blog written by Chris Musselwhite and Tammie Plouffe. That post, When Your Influence Is Ineffective, includes this: "[I]nfluence becomes ineffective when individuals become so focused on the desired outcome that they fail to fully consider the situation. While the influencer may still gain the short-term desired outcome, he or she can do long-term damage to personal effectiveness and the organization, as it creates an atmosphere of distrust where people stop listening, and the potential for innovation or progress is diminished."

    The need to "see the big picture" is a grossly reductive assessment of the architect's responsibilities, but that doesn't mean it's not true. That big-picture perspective must encompass both the technological elements of the architecture and the elements responsible for implementing those technologies in compliance with the prescribed architecture. Technologies may be temperamental, but they don't have personalities or egos, and they are unlikely to carry a grudge – not yet, anyway (Hello, Skynet!). Effective leadership and the ability to influence people can help to ensure that all the pieces fit and that they work together, today and tomorrow.

    Read the article

  • My Automated NuGet Workflow

    - by Wes McClure
    When we develop libraries (whether internal or public), it helps to have a rapid ability to make changes and test them in a consuming application.

    Building

    Set up the library with automatic versioning and a nuspec:
    - Set the library assembly version to auto-increment build and revision: in AssemblyInfo, [assembly: AssemblyVersion("1.0.*")]. This auto-increments build and revision based on the time of the build.
    - Major & minor: Major should be changed when you have breaking changes; Minor should be changed once you have a solid new release. During development I don't increment these.
    - Create a nuspec and version it with the code: in the nuspec, set the version to <version>$version$</version>. This uses the assembly's version, which is auto-incrementing.

    Make changes to the code, then run the automated build (ruby/rake): run "rake nuget". The nuget task builds the NuGet package and copies it to a local NuGet feed. I use an environment variable to point at this so I can change it on a machine level! The nuget command below assumes a nuspec called Library.nuspec is checked in next to the csproj file:

        $projectSolution = 'src\\Library.sln'
        $nugetFeedPath = ENV["NuGetDevFeed"]

        msbuild :build => [:clean] do |msb|
          msb.properties :configuration => :Release
          msb.targets :Build
          msb.solution = $projectSolution
        end

        task :nuget => [:build] do
          sh "nuget pack src\\Library\\Library.csproj /OutputDirectory " + $nugetFeedPath
        end

    Set up the local NuGet feed as a NuGet package source (this is only required once per machine). Then go to the consuming project and update the package (Update-Package Library) or install it (Install-Package).

    TLDR:
    - change library code
    - run "rake nuget"
    - run "Update-Package Library" in the consuming application
    - build/test!

    If you manually execute any of this process, especially copying files, you will find it a burden to develop the library and will find yourself dreading it, and even worse, making changes downstream instead of updating the shared library for everyone's sake.

    Publishing

    Once you have a set of changes that you want to release, consider versioning and possibly increment the minor version if needed. Pick the package out of your local feed and copy it to a public/shared feed. I have a script to do this where I can drop the package on a batch file. Replace apikey with your NuGet feed's API key, and take out the confirm prompts if you don't want them:

        @ECHO off
        echo Upload %1?
        set /P anykey="Hit enter to continue "
        nuget push %1 apikey
        set /P anykey="Done "

    Note: it helps to prune all the unnecessary versions from your local feed during testing, once you are done and ready to publish.

    TLDR:
    - consider the version number
    - run the command to copy to the public feed

    Read the article

  • UK Oracle User Group Event: Trends in Identity Management

    - by B Shashikumar
    As threat levels rise and new technologies such as cloud and mobile computing gain widespread acceptance, security is occupying more and more mindshare among IT executives. To help prepare for the rapidly changing security landscape, the Oracle UK User Group community and our partners at Enline/SENA have put together a User Group event in London on Apr 19 where you can learn more from your industry peers about upcoming trends in identity management. Here are some of the key trends in identity management and security that we predicted at the beginning of last year, and a look at how they have turned out so far. You have to admit that we have a pretty good track record when it comes to forecasting trends in identity management and security.

    Threat levels will grow – and there will be more serious breaches: We have since witnessed breaches of high value targets like RSA and Epsilon. Most organizations have not done enough to protect against insider threats. Organizations need to look for security solutions that stop user access to applications based on real-time patterns of fraud, and for situations in which employees change roles or employment status within a company.

    Cloud computing will continue to grow – and require new security solutions: Cloud computing has since exploded into a dominant secular trend in the industry. Cloud computing continues to present many opportunities, like low upfront costs, rapid deployment, etc. But cloud computing also increases policy fragmentation and reduces visibility and control. So organizations require solutions that bridge the security gap between the enterprise and cloud applications to reduce fragmentation and increase control.

    Mobile devices will challenge traditional security solutions: Since that time, we have witnessed a proliferation of mobile devices. Combined with increasing numbers of employees bringing their own devices to work (BYOD), these trends continue to dissolve the traditional boundaries of the enterprise. This, in turn, requires a holistic approach within an organization that combines strong authentication and fraud protection, externalization of entitlements, and centralized management across multiple applications – and open standards to make all that possible.

    Security platforms will continue to converge: As organizations move increasingly toward vendor consolidation, security solutions are also evolving. Next-generation identity management platforms have best-of-breed features, and must also remain open and flexible to remain viable. As a result, developers need products such as the Oracle Access Management Suite in order to efficiently and reliably build identity and access management into applications – without requiring security experts.

    Organizations will increasingly pursue "business-centric compliance": Privacy and security regulations have continued to increase, so businesses increasingly look for solutions that combine strong security and compliance management tools with a business-ready experience for faster, lower-cost implementations.

    If you'd like to hear more about the top trends in identity management and learn how to empower yourself, then join us for the Oracle UK User Group event on Thu Apr 19 in London, where Oracle and Enline/SENA product experts will come together to share security trends, best practices, and solutions for your business. Register Here.

    Read the article

  • Moms on Mobile: Are They Way Ahead of You?

    - by Mike Stiles
    You may have no idea how much, and how fast, moms are embracing mobile. Of all the demographics that can be targeted by marketers, moms have always been at or near the top of the list. And why not? They're running households, they're all over town, they're making buying decisions, and they're influencing family and friends. They, out of necessity, become masters of efficiency and time management. So when a technology tool, like mobile, comes along that assists with that efficiency and time management, we would obviously expect them to take advantage of it. So if it's obvious, why are so many big, sophisticated brands left choking on the dust of moms who have zoomed past them in the adoption of mobile, and social on mobile? Let's break down some hard truths as presented by a Mojiava report:
    - Moms spend 6.1 hours per day on average on their smartphones – more than magazines, TV or radio.
    - 46% took action after seeing a mobile ad.
    - 51% self-identify as "addicted" to their smartphone.
    - Households with an income of $25K-$50K have about the same mobile penetration among moms as those with incomes of $50K-$75K. So mobile is regarded as a necessity for middle-class moms.
    - Even moms without smartphones spend 2.5 hours on average per day on some connected mobile device.
    - Of moms with such devices, 9.8% have an iPad, 9.5% a Kindle and 5.7% an iPod Touch.
    - Of tablet-owning moms, 97% bought something using their tablet in the last month.
    - 31% spend over 10 hours per week on their tablet, but less than 2 hours per week on their PCs.
    - 62% of connected moms use shopping apps.
    - 46% want to get info on their mobile while in a store.
    - Half of connected moms use social on their mobile. And they're engaged: 81% are brand fans, 86% post updates, and 84% comment.

    If women and moms are one of your primary targets and you find yourself with no strong social channels where content is driving engagement and relationship-building, with sites not optimized for mobile, or with no tablet or smartphone apps, you have been solidly left behind by your customers and prospects. And their adoption of mobile and social on mobile is only exponentially speeding up, not slowing down. How much sense does it make when your customer is ready to act on your mobile ad, wants to use your iPad app to buy something from you, wants to be your fan on Facebook, wants to get messages and deals from you while they're in your store... but you're completely absent? I'll help you cheat on the test by giving you the answer: no sense at all. Catch up to momma.

    Read the article

  • Applying DDD principles in a RESTish web service

    - by Andy
    I am developing a RESTish web service. I think I got the idea of the difference between aggregation and composition. Aggregation does not enforce lifecycle/scope on the objects it references. Composition does enforce lifecycle/scope on the objects it contains/owns. If I delete a composite object then all the objects it contains/owns are deleted as well, while deleting an aggregate root does not delete referenced objects.

    1) If it is true that deleting aggregate roots does not necessarily delete referenced objects, what sense does it make to not have a repository for the referenced objects? Or is "aggregate root" as a term referring to what is known as a composite object?

    2) When you create a web service you will have multiple endpoints; in my case I have one entity Book and another named Comment. It does not make sense to keep the comments in my application if the book is deleted. Therefore, Book is a composite object. I guess I should not have a repository for comments, since that would break the enforcement of lifecycle and rules that the Book class may have. However, I have URLs such as (examples only):

        GET /books/1/comments
        POST /books/1/comments

    Now, if I do not have a repository for comments, does that mean I have to load the Book object and then return the referenced comments? Am I allowed to return a list of Comment entities from the BookRepository; does that make sense? The repository for Book may eventually become rather big with all sorts of methods. Am I allowed to write JPQL (JPA queries) that target comments and not books inside the repository? What about pagination and filtering of comments? When adding a new comment, triggered by the POST endpoint, do you need to load the book, add the comment to the book, and then update the whole book object? What I am currently doing is having my own CommentRepository, even though the comments are deleted with the book. I could use some direction on how to do this correctly. Since you are exposing not only root objects in RESTish services, I wonder how to handle this at the backend. I am using Hibernate and Spring.
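    For what it's worth, one way to keep the Book aggregate in charge while still serving the comment endpoints efficiently is to query comments by the owning book's id for reads, and route writes through the Book aggregate. A rough Spring Data JPA sketch; the class and method names below are mine, not taken from the question, and it assumes a cascading comments collection on Book:

        // Hypothetical sketch, modern Spring Data JPA / Spring MVC style.
        import org.springframework.data.domain.Page;
        import org.springframework.data.domain.Pageable;
        import org.springframework.data.jpa.repository.JpaRepository;
        import org.springframework.web.bind.annotation.*;

        // Reads are cheap and pageable; writes still go through the Book aggregate.
        interface CommentRepository extends JpaRepository<Comment, Long> {
            Page<Comment> findByBookId(Long bookId, Pageable pageable);
        }

        @RestController
        @RequestMapping("/books/{bookId}/comments")
        class CommentController {
            private final BookRepository books;
            private final CommentRepository comments;

            CommentController(BookRepository books, CommentRepository comments) {
                this.books = books;
                this.comments = comments;
            }

            @GetMapping
            Page<Comment> list(@PathVariable Long bookId, Pageable pageable) {
                return comments.findByBookId(bookId, pageable);   // read-only side query
            }

            @PostMapping
            Comment add(@PathVariable Long bookId, @RequestBody Comment comment) {
                Book book = books.findById(bookId)
                                 .orElseThrow(() -> new IllegalArgumentException("no such book"));
                book.addComment(comment);   // hypothetical aggregate method enforcing invariants
                books.save(book);           // assumes cascade on Book's comments collection
                return comment;
            }
        }

    Whether the derived query lives on a separate query-only CommentRepository or as a method on BookRepository is largely a matter of taste; the point is that creation and deletion stay routed through the Book aggregate (and cascade with it), so a read-side comment repository does not really break the composition.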

    Read the article

  • reference list for non-IT driven algorithmic patterns

    - by Quicker
    I am looking for a reference list of non-IT-driven algorithmic patterns (which can still be helped by IT implementations). An example list would be:

        name; short description; reference
        Travelling Salesman; find the shortest possible route on a multiple-target path; http://en.wikipedia.org/wiki/Travelling_salesman_problem
        Resource Disposition (aka Regulation); distribute a limited/exceeding input over a given number of output receivers based on distribution rules; http://database-programmer.blogspot.de/2010/12/critical-analysis-of-algorithm-sproc.html

    If there is no such list, but you instantly think of something specific, please 'put it on the desk'. Maybe I can compile something out of the input I get here (actually I am very frustrated, as I did not find any such list via research by myself).

    Details on scoping: I found it very hard to formulate what I want in a way that excludes everything I do not need (which may be the reason why I did not find anything via Google). There is a database-centric definition of what I am looking for in the 'Processes' section of the second example link. That somehow fits, but the database focus drifts away from the pattern thinking I have in mind. So here are my own thoughts around what's in and what's out:
    - I am NOT looking for a foundational algorithm reference list of the kind implemented as the basis of a programming language. E.g. the PHP reference describes substr and strlen; those implement algorithms, but are not what I am looking for.
    - The problem the algorithm addresses would exist even if there were no computers (or other IT components).
    - The main focus of the algorithm is NOT to help other algorithms.
    - Chances are high that there are implementations of the solution, or workarounds without any IT support, out there in the world.
    - However, the algorithm could be beneficially implemented/fully supported by a software application; this means the problem has to be addressed anyway, but running an algorithm implementation in software automates the process (that is why I posted it on Stack Overflow and not somewhere else).
    - Typically such algorithm implementations have more than one input field value and more than one output field value, which implies they could not be implemented as a simple function (which is fixed to produce not more than one output value).
    - In a normalized data model, such algorithm outputs often span multiple rows (sometimes multiple tables), whereby the number of output rows depends on the input parameters and the rows in the table(s) at start time; this implies that any implementation/procedure must interact with a database (read and/or write).

    I am mainly looking for patterns, not for specific implementations. Example: the Travelling Salesman assumes arbitrary coordinates; it does not say "you need a table targets with fields x and y." However, sometimes descriptions are very much focused on examples with specific implementations; no worries, as long as the pattern gets clear.

    Read the article

  • iScsiPrt error event ID 5

    - by AZee
    Event log: "Failed to setup initiator portal. Error status is given in the dump data." This is being recorded every 3/100ths of a second. We are using the MS iSCSI Initiator on Windows Server 2003, on a Dell 2970 with 4GB (PAE). I am sure this was configured by Dell initially; I have no idea what changes or mods were made since the company installed this machine until now. (I'm a new user, so the lovely and vibrant screen images had to be removed. They were quite pretty and I am sure you would have been very moved and appreciative of them.)

    It appears that everything is installed correctly and the 5TB bound volume is accessible, but I have never worked with iSCSI before, so I plead total ignorance. In searching, I have found this to be a fairly sparsely and blandly documented subject. I'd like two things.

    First, to get rid of the error message being logged. MS says it can be ignored if everything is working, but it chews up resources logging it and I don't feel comfortable about any errors on my servers. I want to correct whatever is causing this problem.

    Secondly, being totally green at this, I would like to confirm that the setup is optimized and we are taking advantage of all features available. Although there are 3 NICs in this machine, it appears that the initiator is only configured for the Broadcom BMC5708C NetXtreme II on our 10.90.1.# network; the other 2 NICs are 1GB on 192.168.0.#. Would additional targets improve performance?

    If someone who is experienced in configuring the Microsoft iSCSI Initiator can help, I would really appreciate it since, as I mentioned, everything I have come across has not been of any value at all. Thanks! ~AZ

    Read the article

  • multiple monitor, ATI EyeFinity vs NVidia Mosaic on HDMI

    - by user1531897
    I have been Googling a lot for an answer, but only someone with real experience can help here. My aim is to have 5 monitors connected to one computer: 4 monitors in a 2x2 full-screen video wall, all with the same resolution (4x full HD, for example), plus a 5th monitor for controlling. ATI has an example picture of this configuration. I have no gaming or 3D needs here – simple desktop applications plus video streams. Because graphics cards for this are expensive, before buying I need to find out the pros and cons of the 2 possible solutions: (1) ATI Eyefinity-capable card(s), and (2) Nvidia Quadro/NVS card(s) with Mosaic and/or Nvidia Surround technology.

    For example, current good cards for this seem to be:
    - ATI's 7870 Eyefinity 6 card. Pros: one card can handle all 5 displays. Cons: active DP adapters are needed (sometimes with the additional complication of USB powering).
    - Nvidia Quadro NVS 450.

    Both cards have DisplayPorts as outputs (my targets are HDMI displays), but as far as I have seen, Eyefinity needs "active DP-DVI/HDMI adapters" for its outputs, and they are a little expensive. Does Nvidia have this limitation (active adapters)? And the final question: is ATI Eyefinity still better for this purpose than Nvidia Mosaic, based on someone's real experience?

    Read the article

  • bonding module parameters are not shown in /sys/module/bonding/parameters/

    - by c4f4t0r
    I have a server running SUSE 11 SP1, kernel 2.6.32.54-0.3-default. With "modinfo bonding" I see all of the module parameters, but most of them do not appear under /sys/module/bonding/parameters/.

        modinfo bonding | grep ^parm
        parm: max_bonds:Max number of bonded devices (int)
        parm: num_grat_arp:Number of gratuitous ARP packets to send on failover event (int)
        parm: num_unsol_na:Number of unsolicited IPv6 Neighbor Advertisements packets to send on failover event (int)
        parm: miimon:Link check interval in milliseconds (int)
        parm: updelay:Delay before considering link up, in milliseconds (int)
        parm: downdelay:Delay before considering link down, in milliseconds (int)
        parm: use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)
        parm: mode:Mode of operation : 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
        parm: primary:Primary network device to use (charp)
        parm: lacp_rate:LACPDU tx rate to request from 802.3ad partner (slow/fast) (charp)
        parm: ad_select:803.ad aggregation selection logic: stable (0, default), bandwidth (1), count (2) (charp)
        parm: xmit_hash_policy:XOR hashing method: 0 for layer 2 (default), 1 for layer 3+4 (charp)
        parm: arp_interval:arp interval in milliseconds (int)
        parm: arp_ip_target:arp targets in n.n.n.n form (array of charp)
        parm: arp_validate:validate src/dst of ARP probes: none (default), active, backup or all (charp)
        parm: fail_over_mac:For active-backup, do not set all slaves to the same MAC. none (default), active or follow (charp)

    In /sys/module/bonding/parameters only two entries show up:

        ls -l /sys/module/bonding/parameters/
        total 0
        -rw-r--r-- 1 root root 4096 2013-10-17 11:22 num_grat_arp
        -rw-r--r-- 1 root root 4096 2013-10-17 11:22 num_unsol_na

    I found some of these parameters under /sys/class/net/bond0/bonding/, but when I try to change one I get the following error:

        echo layer2+3 > /sys/class/net/bond0/bonding/xmit_hash_policy
        -bash: echo: write error: Operation not permitted
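    A note from experience rather than from the post itself, so treat it as an assumption to verify: on kernels of this vintage the bonding driver refuses writes to xmit_hash_policy while the bond interface is up, which produces exactly this "Operation not permitted" error. The usual workaround is to take the bond down first, or to set the option persistently in the interface configuration. A rough sketch:

        # Assumption: xmit_hash_policy can only be changed while bond0 is down.
        ip link set bond0 down            # or: ifdown bond0
        echo layer2+3 > /sys/class/net/bond0/bonding/xmit_hash_policy
        ip link set bond0 up              # or: ifup bond0

        # Persistent alternative on SLES (placeholder options shown):
        #   /etc/sysconfig/network/ifcfg-bond0
        #   BONDING_MODULE_OPTS='mode=802.3ad miimon=100 xmit_hash_policy=layer2+3'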

    Read the article

  • iSCSI, failover and XenServer

    - by jemmille
    I have an iSCSI failover implementation set up so that if one of my storage units fails, the other takes over immediately (it also runs the NFS shares). When failover occurs, volumes are exported, the IP is switched to the other machine, and the targets are reconfigured. The failover of the storage system itself works just fine. I use NexentaStor for my filer.

    When I do a test (manual) failover of my storage, the following occurs (note: I run the admin VMs on NFS and customer-based VMs on iSCSI):
    1. All NFS-based VMs remain up and working perfectly through the failover and after.
    2. All VMs running on iSCSI eventually report an error about not being able to write to a particular block, then an error about journaling not working, and then the file system goes read-only.

    To get the VMs working again I have to do the following:
    1. Force shutdown of the "broken" VMs.
    2. Detach the iSCSI SR.
    3. Re-attach the iSCSI SR.
    4. Boot the VM on a different server (5 in my pool). If I don't boot on a different server I get this error: "Internal error: Failure("The VDI <uuid> is already attached in RW mode; it can't be attached in RO mode!")". The only way I have found to fix that error is to reboot the entire server it was running on previously, which is obviously a huge pain.

    Currently multipathing is NOT enabled (it can be, but the same thing still occurs). I have edited much of the /etc/iscsid.conf file to work with the timeout settings, but to no avail. In short, my storage fails over properly but XenServer does not keep the connection alive. As a thought, the error that shows up in step 4 above might be the ultimate cause, and fixing that would fix everything? Any help would be appreciated more than you know.
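    For reference only (the post says the timeouts were already tuned, so this is the kind of setting usually involved rather than a confirmed fix): the open-iscsi parameters that decide how long a session queues I/O during a path outage live in /etc/iscsid.conf, and they need to be large enough to cover the takeover window. A sketch with illustrative, untested values:

        # /etc/iscsid.conf (illustrative values, not tested against this setup)
        node.session.timeo.replacement_timeout = 120   # seconds to queue I/O before failing it upward
        node.conn[0].timeo.noop_out_interval = 10      # how often to ping the target
        node.conn[0].timeo.noop_out_timeout = 30       # how long to wait for the ping reply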

    Read the article

  • Debian and Multipath IO problem

    - by tearman
    Basically the situation is: I have a box running Debian. The box internally has an Intel SCSI RAID controller controlling 2 hard drives in RAID 1, which is where the OS is installed. Further, I have a QLogic fibre channel adapter that connects the unit to a fibre channel SAN. My installation process is to install Debian to the local drives, leaving the QLogic firmware out of it for the time being; then, once I get the unit online, I install the firmware drivers. This flops my internal drives from /dev/sda to /dev/sdc, which is a bit annoying but recoverable (I should probably address these by UUID anyway). Once I am back online, I have to install multipath-tools (the SAN framework is multipath). However, once I reboot the machine again, it fails on boot after discovering multipath targets, saying my local drives are busy and cannot be mounted to /root. Any help with what may be the problem here? Or at least how to disable multipath until after the unit boots, and then have it ignore the internal drives?
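    One common approach (an assumption on my part, not something stated in the post) is to blacklist the internal RAID volume in /etc/multipath.conf so multipathd never claims it, and then rebuild the initramfs so the early-boot multipath discovery honours the blacklist. A sketch, with a placeholder WWID:

        # /etc/multipath.conf - blacklist the local boot disk by WWID (placeholder WWID shown)
        blacklist {
            wwid "36001c230d080b700109a72ae0a1b2c3d"
            # or, more bluntly, by device node pattern:
            # devnode "^sd[ac]$"
        }

        # Find the real WWID of the local disk, then rebuild the initramfs:
        #   /lib/udev/scsi_id --whitelisted --device=/dev/sda
        #   (older releases: scsi_id -g -u -d /dev/sda)
        #   update-initramfs -u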

    Read the article

  • ISCSI Target Ubuntu

    - by erai
    I'm trying to set up iscsitarget on Ubuntu 12.04 but I can't connect to it. On the Windows machine it just says "Target Error", with no other output. My ietd.conf is:

        Target iqn.2012-06.com.org:virtual_machines.lun
            Lun 0 Type=fileio,Path=/media/volume0/storlun0.bin

    When I run "iscsiadm -m discovery -t st -p localhost" the output is:

        iscsiadm: Connection to Discovery Address 127.0.0.1 failed
        iscsiadm: Login I/O error, failed to receive a PDU
        iscsiadm: retrying discovery login to 127.0.0.1
        iscsiadm: Connection to Discovery Address 127.0.0.1 closed
        iscsiadm: Login I/O error, failed to receive a PDU
        iscsiadm: retrying discovery login to 127.0.0.1
        iscsiadm: Connection to Discovery Address 127.0.0.1 failed
        iscsiadm: Login I/O error, failed to receive a PDU
        iscsiadm: retrying discovery login to 127.0.0.1
        iscsiadm: Connection to Discovery Address 127.0.0.1 failed
        iscsiadm: Login I/O error, failed to receive a PDU
        iscsiadm: retrying discovery login to 127.0.0.1
        iscsiadm: Connection to Discovery Address 127.0.0.1 failed
        iscsiadm: Login I/O error, failed to receive a PDU
        iscsiadm: retrying discovery login to 127.0.0.1
        iscsiadm: connection login retries (reopen_max) 5 exceeded
        iscsiadm: Could not perform SendTargets discovery.

    dmesg output:

        [ 3324.804665] iscsi_trgt: Removing all connections, sessions and targets
        [ 3325.875343] iSCSI Enterprise Target Software - version 1.4.20.3
        [ 3325.875415] iscsi_trgt: Registered io type fileio
        [ 3325.875420] iscsi_trgt: Registered io type blockio
        [ 3325.875425] iscsi_trgt: Registered io type nullio
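    The dmesg lines show the kernel module loading, but the discovery failure on 127.0.0.1 suggests ietd itself isn't listening. On Ubuntu the iscsitarget package ships disabled by default, so, as an assumption worth checking first, make sure the service is enabled and restarted after editing ietd.conf:

        # /etc/default/iscsitarget
        ISCSITARGET_ENABLE=true

        # then
        sudo service iscsitarget restart
        sudo netstat -ltnp | grep 3260     # ietd should be listening on TCP port 3260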

    Read the article

  • What do I need in order to extract and combine text files from multiple ZIP files, via command line?

    - by Iszi
    I've got an interesting scripting challenge in front of me. I'm fairly certain there's a way to do it, but I feel like I'm probably lacking some particular tools and/or functional knowledge. There are some fifty-plus ZIP files that each contain, among other things, text files that need to be merged with one another. The structure is something like this:

        C:\Reports\FirstJob-1.zip
        |-MyName
          |-FirstJob
            |-1
              |-[Some other folders]
              |-TXTReports
                |-English
                  |-[Some other files]
                  |-Report.txt

        C:\Reports\FirstJob-2.zip
        |-MyName
          |-FirstJob
            |-1
              |-[Some other folders]
              |-TXTReports
                |-English
                  |-[Some other files]
                  |-Report.txt

        C:\Reports\SecondJob-1.zip
        |-MyName
          |-SecondJob
            |-1
              |-[Some other folders]
              |-TXTReports
                |-English
                  |-[Some other files]
                  |-Report.txt

    If I had all the Report.txt files in one regular folder, and uniquely named, I could probably just write a FOR statement that targets *.txt and runs something like "type filename.txt >> Consolidated.txt" on each. However, these all have the same file name and are embedded deep within separate ZIP files. The potentially useful tools I currently have at my disposal are Windows XP Professional SP3, PowerShell, and WinZip. I'd rather not download or install anything else, but I do understand that third-party tools (or additional tools from Microsoft or WinZip) may be necessary. Whatever tools I use should run natively in Windows; I really don't want to have to mess with Cygwin or other emulators on this system. At the very least, I need a tool that will allow me to analyze and manipulate ZIP files from the command line. Also, are there any other particular complications to this that I've not yet thought of?
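    Since PowerShell is already on the box, one route that needs no extra downloads is the Shell.Application COM object, which can read ZIP files on XP. The sketch below is untested against this exact layout (the paths are taken from the structure above); treat it as a starting point rather than a finished script:

        # Extract each ZIP to a temp folder, then append every Report.txt it contains.
        $reports = "C:\Reports"
        $outFile = "C:\Reports\Consolidated.txt"
        $shell   = New-Object -ComObject Shell.Application

        Get-ChildItem -Path $reports -Filter *.zip | ForEach-Object {
            $temp = Join-Path $env:TEMP ("unzip_" + $_.BaseName)
            New-Item -ItemType Directory -Path $temp -Force | Out-Null

            # 0x14 = overwrite existing files and suppress the progress dialog.
            # Note: CopyHere can return before extraction finishes on large archives;
            # add a wait loop or Start-Sleep if the next step finds nothing.
            $shell.NameSpace($temp).CopyHere($shell.NameSpace($_.FullName).Items(), 0x14)

            Get-ChildItem -Path $temp -Recurse -Filter Report.txt | ForEach-Object {
                # Tag each chunk with the ZIP it came from, then append it.
                "===== $($_.FullName) =====" | Out-File $outFile -Append
                Get-Content $_.FullName     | Out-File $outFile -Append
            }

            Remove-Item $temp -Recurse -Force
        }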

    Read the article

  • Error during configuring kerberos5 using macports

    - by ario
    While trying to install libmemcached via MacPorts, I hit the following issue:

        libmemcached @0.40 +universal
        ---> Computing dependencies for libmemcached
        ---> Dependencies to be installed: cyrus-sasl2 kerberos5
        ---> Configuring kerberos5
        Error: org.macports.configure for port kerberos5 returned: configure failure: command execution failed
        Error: Failed to install kerberos5

    It tells me to look in the log for details. Here's the last bit of the log file:

        :info:configure checking for setupterm in -lcurses... no
        :info:configure checking for setupterm in -lncurses... no
        :info:configure checking for tgetent... no
        :info:configure configure: error: Could not find tgetent; are you missing a curses/ncurses library?
        :info:configure configure: error: /bin/sh './configure' failed for appl/telnet
        :info:configure Command failed: cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_net_kerberos5/kerberos5/work/krb5-1.7.2/src" && ./configure --prefix=/opt/local --disable-dependency-tracking --mandir=/opt/local/share/man
        :info:configure Exit code: 1
        :error:configure org.macports.configure for port kerberos5 returned: configure failure: command execution failed
        :debug:configure Error code: NONE
        :debug:configure Backtrace: configure failure: command execution failed
            while executing
        "$procedure $targetname"
        :info:configure Warning: targets not executed for kerberos5: org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install
        :error:configure Failed to install kerberos5
        :debug:configure Registry error: kerberos5 not registered as installed & active.
            invoked from within
        "registry_active ${subport}"
            invoked from within
        "$workername eval registry_active \${subport}"
        :notice:configure Please see the log file for port kerberos5 for details:
            /opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_ports_net_kerberos5/kerberos5/main.log

    It seems to say it's missing ncurses. It looks like ncurses is there, though, since "port installed" shows:

        ncurses @5.7_0
        ncurses @5.9_1 (active)
        ncursesw @5.7_0

    Any ideas on how to get around this error?

    Read the article

  • Managing a test iSCSI target server

    - by dyasny
    Hi all, I am using a RHEL server with a few hard drives, and tgtd as the iSCSI target software. I am looking for a way to allocate and deallocate space, and targets with that space, without restarting my system or harming other LUNs. Currently, all my HDDs are PVs in a single VG, and I lvcreate/lvremove as required and then export the allocated LVs using a tgt script:

        /usr/sbin/tgtadm --lld iscsi --op new --mode target --tid=1 --targetname iqn.2001-04.com.lab.gss:300gb
        /usr/sbin/tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/mapper/iscsi_vg-iscsi_300Gb
        /usr/sbin/tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
        /usr/sbin/tgtadm --lld iscsi --op new --mode target --tid=2 --targetname iqn.2001-04.com.lab.gss:200gb
        /usr/sbin/tgtadm --lld iscsi --op new --mode logicalunit --tid 2 --lun 1 -b /dev/mapper/iscsi_vg-iscsi_200Gb
        /usr/sbin/tgtadm --lld iscsi --op bind --mode target --tid 2 -I ALL
        /usr/sbin/tgtadm --lld iscsi --op new --mode target --tid=3 --targetname iqn.2001-04.com.lab.gss:100gb
        /usr/sbin/tgtadm --lld iscsi --op new --mode logicalunit --tid 3 --lun 1 -b /dev/mapper/iscsi_vg-iscsi_100Gb
        /usr/sbin/tgtadm --lld iscsi --op bind --mode target --tid 3 -I ALL
        tgtadm --mode target --op show

    So in order to remove a LUN, I stop the tgtd service, lvremove the LV, and remove the entry from the iSCSI target script. When I add a LUN, I run lvcreate, then add an entry to the script and run it. This is not quite optimal, since restarting the service is a bad idea while other LUNs are busy, so I am looking for a more scalable and safer way. Thanks
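    One option, offered as a suggestion rather than something from the original post: tgtadm can also tear targets and LUNs down at runtime, so removal does not have to go through a tgtd restart and only the target being removed is touched. A sketch using tid 2 as the example:

        # 1. Make sure no initiator is logged in (check: tgtadm --lld iscsi --op show --mode target).
        # 2. Optionally block new logins to this target:
        /usr/sbin/tgtadm --lld iscsi --op unbind --mode target --tid 2 -I ALL
        # 3. Remove the LUN, then the target itself:
        /usr/sbin/tgtadm --lld iscsi --op delete --mode logicalunit --tid 2 --lun 1
        /usr/sbin/tgtadm --lld iscsi --op delete --mode target --tid 2
        # 4. Now the LV can be removed and the line dropped from the script:
        lvremove /dev/iscsi_vg/iscsi_200Gb

    Adding a LUN works the same way in reverse: lvcreate, then the existing "new target / new logicalunit / bind" commands for the next free tid, with no service restart needed.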

    Read the article

  • Network Misconfiguration when adding first host to new vSphere cluster

    - by dunxd
    I am building a new vSphere cluster from scratch. I have installed ESXi on the first host, and built a vCenter server on a VM residing on that host (storage is on the local hard drive, although we have iSCSI targets which I can reach from the host). The cluster is configured for HA. When I try to add the host to the cluster, I get an error at the point where HA is configured - "Cannot complete the ."

    I have stripped the network configuration of the host down to the most basic: a single NIC attached to a single vSwitch, running the VMkernel port on VLAN 8, which is our management VLAN. The vCenter server will have a network address on this VLAN, so I also set the initial Virtual Machine Port Group to this VLAN and connected the vCenter server NIC to this port group. I understand I can't connect the vCenter server to the VMkernel port group, but shouldn't I be able to connect the vCenter server to a port group in the same VLAN? If not, do I need to create a VLAN specifically for the VMkernel port group? I plan to set up another port group for vMotion with a dedicated and isolated VLAN (i.e. the VLAN isn't routed), so that wouldn't allow vCenter to communicate.

    Does anyone have any suggestions, or other ideas for what might be causing the problem? I've read through the documentation, but it isn't giving me any pointers, and the error message isn't helping me beyond telling me something is wrong with my network config.

    Read the article
