Search Results

Search found 24784 results on 992 pages for 'process integration packs'.

Page 318/992

  • How do I disable the calendar events section in GNOME Shell's clock applet?

    - by Victor
    I'm running gnome-shell 3.2.0, and when I click the clock applet in the middle of the top panel, a calendar pane shows up. I have no need for the entire right part (right of the dotted vertical line), which is dedicated to the "Online Accounts" integration with Evolution's calendar. Is there a way to remove/disable it, so I can just have the date part of the calendar applet (left of the dotted vertical line)? I just like to browse the dates to see how many days are left in the month and things like that; I use Google's web interface for my calendaring.

    Read the article

  • SQL Server 2000 DTS Package Failing with "The number of failing rows exceeds the maximum specified"

    - by Scott McCormick
    I have inherited a SQL Server 2000 DTS package that migrates data from SQL Server to Oracle. This package moves about 20 tables' data to Oracle every night with no transformations, and it is then transformed by a set of SPs and used by a GIS application. Twice this week, during the migration between SQL Server and Oracle, the package has failed with "The number of failing rows exceeds the maximum specified". It has failed on a different table each time, though. Each time it's failed, we've rerun the process the next morning and it has worked. Because the process works the second time it's run, it makes me think the data is being changed by someone or something between the initial failure and our successful second run. I would like to change the DTS package to log the failing rows in a text document so we can compare them later. Can someone help me with that? I can't seem to figure that part out. Scott

    Read the article

  • Reprocessing emails on error from a POP3 account

    - by Timmy O' Tool
    Hi! I have an application that reads emails from a POP3 account. When I connect to the account I download all new emails and process the body and attachments. If there is an error processing one of the emails, I would like to download it again the next time I connect to the account; but since I only fetch new emails, and the failed one was already downloaded, I don't get it again and can't retry processing it. Can I do this with some POP3 command, or do I have to store the failed emails locally?
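    POP3 has no server-side "failed" flag, but the UIDL command gives each message a stable unique ID, so the client can keep a small local record of successfully processed UIDs and re-fetch anything that is not in that record yet. Below is a minimal C# sketch of that idea over a raw POP3 session; the host, credentials, ProcessEmail step, and the processed-uids.txt state file are hypothetical placeholders, and real code would need proper error handling, dot-unstuffing, and MIME parsing.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Net.Security;
        using System.Net.Sockets;

        class Pop3UidlSketch
        {
            static void Main()
            {
                // Hypothetical server, credentials, and local state file.
                var host = "pop.example.com";
                var user = "user";
                var pass = "secret";
                var processedFile = "processed-uids.txt";

                var processed = new HashSet<string>(
                    File.Exists(processedFile) ? File.ReadAllLines(processedFile) : new string[0]);

                using (var client = new TcpClient(host, 995))
                using (var ssl = new SslStream(client.GetStream()))
                {
                    ssl.AuthenticateAsClient(host);
                    var reader = new StreamReader(ssl);
                    var writer = new StreamWriter(ssl) { AutoFlush = true, NewLine = "\r\n" };

                    reader.ReadLine();                       // +OK greeting
                    Send(writer, reader, "USER " + user);
                    Send(writer, reader, "PASS " + pass);

                    // UIDL returns one "msg-number unique-id" line per message, terminated by ".".
                    Send(writer, reader, "UIDL");
                    var uids = new Dictionary<int, string>();
                    string line;
                    while ((line = reader.ReadLine()) != null && line != ".")
                    {
                        var parts = line.Split(' ');
                        uids[int.Parse(parts[0])] = parts[1];
                    }

                    // Messages are left on the server; ones already handled are skipped by UID.
                    foreach (var msg in uids)
                    {
                        if (processed.Contains(msg.Value))
                            continue;                        // handled on a previous run

                        Send(writer, reader, "RETR " + msg.Key);
                        var body = new List<string>();
                        while ((line = reader.ReadLine()) != null && line != ".")
                            body.Add(line);

                        try
                        {
                            ProcessEmail(body);              // hypothetical processing step
                            File.AppendAllLines(processedFile, new[] { msg.Value });
                        }
                        catch (Exception)
                        {
                            // UID not recorded, so this message is retried on the next run.
                        }
                    }

                    Send(writer, reader, "QUIT");
                }
            }

            static void Send(StreamWriter w, StreamReader r, string cmd)
            {
                w.WriteLine(cmd);
                r.ReadLine();                                // read the +OK/-ERR status line
            }

            static void ProcessEmail(List<string> lines) { /* parse body/attachments here */ }
        }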

    Read the article

  • Does JavaFX still have a chance to establish itself against Flash, Silverlight and the emergence of HTML 5? Or

    Does JavaFX still have a chance to establish itself against Flash, Silverlight and the emergence of HTML 5? JavaFX was launched three years ago for developing thick-client applications. Very quickly, developers used it for multimedia applications and for Java on the web (notably Rich Internet Applications, or RIAs). The platform - which consists of the JavaFX scripting language, a rich-client platform and integration with the Java virtual machine - was thus intended to meet the needs of a market where competition now rages with, among others, players as significant as Adobe's Flash and Microsoft's Silverlight. According to the

    Read the article

  • Git branch strategy for small dev team

    - by Bilal Aslam
    We have a web app that we update and release almost daily. We use git as our VCS, and our current branching strategy is very simple and broken: we have a master branch and we check changes that we 'feel good about' into it. This works, but only until we check in a breaking change. Does anyone have a favorite git branch strategy for small teams which meets the following requirements:
    - Works well for teams of 2 to 3 developers
    - Lightweight, and not too much process
    - Allows devs to isolate work on bug fixes and larger features with ease
    - Allows us to keep a stable branch (for those 'oh crap' moments when we have to get our production servers working)
    Ideally, I'd love to see your step-by-step process for a dev working on a new bug.

    Read the article

  • SSL in overlay window for login

    - by Sourabh
    Hi, I have to implement login over SSL on my website. For example: loginForm - this is the form; https://www.myweb.com/loginProcess - this is the action which processes the form and authenticates the user. I am able to do this with a usual web form, but the problem is the overlay dialog box for login. For example, if I am on my website home page http://www.myweb.com (notice http) and I click a login link there, it shows a small HTML div with a login form (like a lightbox). Now, as I am on a non-SSL page (http), the data which I post is not encrypted when it is posted to the process action. How do I get around this so that my overlay login also becomes secure? Thanks for your help in advance. :)

    Read the article

  • Form-based authentication - login fails

    - by Sachin
    Hi All, I am using forms-based authentication on my site. I have a custom user control on the site which reads items from a SharePoint list and displays them in a grid. Everything works fine with Windows authentication, but when I change the authentication to forms-based, the login process fails. When I check the error log it gives me an error saying "An SPRequest object was not disposed before the end of this thread". I have since disposed all of the SPWeb and SPSite objects that I use in the user control, but the login process is still not working. Thanks in advance
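    For reference, the usual disposal pattern in SharePoint server-side code is to wrap only the SPSite/SPWeb objects you create yourself in using blocks, and to leave objects that come from SPContext alone. A minimal sketch of that pattern (the site URL and list name are hypothetical, not the asker's actual control):

        using System;
        using Microsoft.SharePoint;

        public partial class ListGridControl : System.Web.UI.UserControl
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Objects you open yourself must be disposed when you are done with them.
                using (SPSite site = new SPSite("http://server/sites/demo"))
                using (SPWeb web = site.OpenWeb())
                {
                    SPList list = web.Lists["Announcements"];
                    // ... bind list.Items to the grid here ...
                }

                // Objects handed to you by the framework must NOT be disposed.
                SPWeb contextWeb = SPContext.Current.Web;   // never wrap this in a using block
                // ... read from contextWeb as needed ...
            }
        }

    If the undisposed-object warnings persist after that, Microsoft's SPDisposeCheck utility can help pinpoint the allocation that is leaking.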

    Read the article

  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
    I’ve been an Agile Coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It’s important to realize that “Agile” is not a black/white, yes/no thing. Teams can be varying degrees of agile; I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the Definition of Done as a technique to continuously improve their agile maturity over time.
    We’re probably all familiar with the concept of “Done Done”, which represents what *actually* being done with a feature means - not just when a developer says he’s done right after he writes that last line of code that makes the feature kind-of work. Done Done means the coding is done, it’s been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, etc. To enable teams to internalize the concept of “Done Done”, they usually get together and come up with their Definition of Done (DoD) that defines all the activities that need to be completed before a feature is considered Done Done.
    The Done Done technique is typically applied only to features (aka User Stories). What I do is extend this to apply to several concepts such as User Stories, Sprints, Releases (and sometimes Check-Ins). During project kick-off I’ll usually sit down with the team and go through an exercise of creating DoD’s for each of these concepts (Stories/Sprints/Releases). We’ll usually start by just brainstorming a bunch of activities that could end up in these various DoD’s. Here are some examples:
    - Code Reviews
    - StyleCop
    - FxCop
    - User Manuals Updated
    - Architecture Docs Updated
    - Tested by QA
    - Tested by UAT
    - Installers Created
    - Support Knowledge Base Updated
    - Deployment Instructions (for Ops) Written
    - Automated Unit Tests Run
    - Automated Integration Tests Run
    Then we start by arranging these activities into the place they occur today (e.g. do you do UAT testing only once per release? Every sprint? Every feature?). If the team was previously Waterfall, most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the User Stories (because an extremely mature agile team will probably do continuous deployment and release every story). So what we need to do as a team is work to move these activities from their current home (Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that). We don’t have to move them all down to User Story immediately, but as a team we figure out what we think we’re capable of moving down to the Sprint cycle and Story cycle immediately, and that becomes our starting DoD’s. Over time the team makes an effort to continue moving activities down from Release->Sprint->Story as they become more agile and more mature. I try to encourage them to envision a world in which they deploy to production as each User Story is completed. They would need to be updating user manuals, creating installers, and doing UAT testing (typical Release cycle activities) on every single User Story. They may never actually reach that point, but they should envision it, and strive to keep driving the activities down closer to the User Story cycle as they mature. This is a great technique to give a team an easy-to-follow roadmap to mature their agile practices over time.
    Sure, there are other aspects to maturity outside of this, but it’s a great technique, and easy to visualize, to drive agility into the team. Just keep moving those activities (aka “gates”) down the board from Release->Sprint->Story. I’ll try to give an example of what a recent client of mine had for their DoD’s (this is from memory, so probably not 100% accurate):
    Release
    - Create/update deployment instructions for Ops
    - Instructional videos updated
    - Run manual regression test suite
    - UAT testing (in this case that meant deploying to an environment shared across the enterprise that mirrored production and asking other business groups to test their own apps to ensure we didn’t break anything outside our system)
    Sprint
    - Deploy to UAT environment (but not necessarily actually request that UAT testing occur)
    - User guides updated
    - Sprint features video created (in this case we decided to create a video each sprint showing off the progress - a video version of the Sprint Demo)
    User Story
    - Manual test scripts developed and run
    - Tested by BA
    - Deployed in shared QA environment (using automated deployment process)
    - Peer code review
    Check-In
    - Compiled (warning-free)
    - Passes StyleCop
    - Passes FxCop
    - Create installer packages
    - Run automated tests
    - Run automated integration tests
    PS - One of my clients had a great question when we went through this activity. They said that if a Sprint is by definition done when the end-date rolls around (time-boxed), isn’t a DoD on a sprint meaningless - it’s done on the end-date regardless of whether those other activities are complete or not? My answer is that while that statement is true - the sprint is done regardless when the end date rolls around - if the DoD activities haven’t been completed I would consider the Sprint a failure (similar to not completing what was committed/planned - failure may be too strong a word, but you get the idea). In the Retrospective that becomes an agenda item to discuss and understand why we weren’t able to complete the activities we agreed would need to be completed each Sprint.

    Read the article

  • Problems with PHP System_Daemon and IMAP connection.

    - by mike
    I'm trying to create a PHP daemon that connects to an IMAP server and processes emails as they come in. I have it close to working, but the daemon keeps grabbing the original emails that it found the first time the daemon was loaded. I believe the reason is that I'm opening the IMAP connection in the parent process. Example below:

        if ($imapConnection = imap_open($authhost, $user, $pass) or die()) {
            // start daemon
            while (true) {
                // grab email headers
                $imapHeaders = imap_headers($imapConnection);
                $count = sizeof($imapHeaders);
                // loop the emails
                for ($i = 1; $i <= $count; $i++) {
                    // process the email
                    // delete the email
                }
                System_Daemon::iterate(15);
            }
        }
        imap_close($imapConnection);

    I'd like to stay away from putting the IMAP connection within the loop. How can I keep the connection to the IMAP server outside of the loop and still get new emails?

    Read the article

  • Processing CSV File

    - by nettguy
    I am using Sebastien Lorion's CSV reader to process my CSV file in C# 3.0. For example:

        id|name|dob   (header)
        1|sss|19700101   (data)
        2|xx|19700201   (data)

    My business object is:

        class Employee
        {
            public string ID { get; set; }
            public string Name { get; set; }
            public string Dob { get; set; }
        }

    I read the CSV stream and stored it in a List<string[]>:

        List<string[]> col = new List<string[]>();
        using (CsvReader csv = new CsvReader(new StreamReader("D:\\sample.txt"), true, '|'))
        {
            col = csv.ToList();
        }

    How do I iterate over the list to get each Employee? Like this:

        foreach (var q in col)
        {
            foreach (var r in q)
            {
                Employee emp = new Employee();
                emp.ID = r[0];
                emp.Name = r[1];
                emp.Dob = r[2];
            }
        }

    If I call r[0], r[1], r[2] I get an "index out of range" exception. How do I process the list to avoid the error?
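    The exception comes from iterating one level too deep: each record already is the array of fields, so only a single loop is needed and the fields are read by index from that record. A minimal sketch of the idea, assuming the LumenWorks CsvReader API (ReadNextRecord plus the integer indexer) and a hypothetical file path:

        using System.Collections.Generic;
        using System.IO;
        using LumenWorks.Framework.IO.Csv;   // Sebastien Lorion's CsvReader

        class CsvToEmployees
        {
            static List<Employee> Load(string path)
            {
                var employees = new List<Employee>();

                using (var csv = new CsvReader(new StreamReader(path), true, '|'))
                {
                    // Each record exposes its fields directly; no nested loop required.
                    while (csv.ReadNextRecord())
                    {
                        employees.Add(new Employee
                        {
                            ID   = csv[0],
                            Name = csv[1],
                            Dob  = csv[2]
                        });
                    }
                }

                return employees;
            }
        }

    Equivalently, with the List<string[]> from the question, a single foreach (string[] r in col) and then r[0], r[1], r[2] maps each row without the inner loop.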

    Read the article

  • Inject Html Into a View Programmatically

    - by madcapnmckay
    Hi, I have a tricky problem and I'm not sure where in the view rendering process to attempt this. I am building a simple blog/CMS in MVC and I would like to inject some HTML (preferably a partial view) into the page if the user is logged in as an admin (and therefore has edit privileges). I obviously could add render-partial calls to master pages etc., but in my system the master pages/views are the "templates" of the CMS and therefore should not contain CMS-specific <% %> markup. I would like to hook into some part of the rendering process and inject the HTML myself. Does anyone have any idea how to do this in MVC? Where would be the best point - ViewPage, ViewEngine? Thanks, Ian
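    One lightweight option, if a single neutral call in the template is acceptable, is to hide the role check and the partial behind an HtmlHelper extension so the template itself carries no CMS-specific logic. A rough sketch (the role name, helper name, and partial view name are all hypothetical):

        using System.Web.Mvc;
        using System.Web.Mvc.Html;

        public static class AdminUiHelpers
        {
            // Renders the admin edit partial only for users in the "Admin" role;
            // for everyone else the call emits nothing.
            public static void AdminEditPanel(this HtmlHelper html)
            {
                if (html.ViewContext.HttpContext.User.IsInRole("Admin"))
                {
                    html.RenderPartial("AdminEditPanel");
                }
            }
        }

    The master page then needs only <% Html.AdminEditPanel(); %>. If even that single call is too much coupling, the alternative is to post-process the response (for example with an HTTP module or a response filter) and splice the rendered partial into the output stream, at the cost of more moving parts.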

    Read the article

  • Get Your Workshop Hands On!

    - by Justin Kestelyn
    Now that 2010 is behind us, that means a fresh set of Developer Day workshops (still free, always free) are ahead of us! Developer Day workshops are free, hands-on workshops that give you the software and skills to tame that learning curve and reach the next level in your technical knowledge. We have a range of entrees on the menu, including Java Development, Database Application Development, Fusion Development (Oracle ADF), and more. Most of these workshops let you walk away with a fully functional, VirtualBox-based software appliance that you can use for continued learning. Here's a short list of workshops for which you can register right now:
    - Java: Boston, March 8
    - Database App Development: Dallas, March 9
    - SOA Development: Reston, March 9
    - Data Integration: Seattle, March 15
    + others planned for Toronto, Philadelphia, Shanghai, Perth, Istanbul, and many other cities in 2011! See this URL for more workshop info as it becomes available.

    Read the article

  • Jersey 1.8 - Another GlassFish 3.1.1 component is ready

    - by alexismp
    We now have a new release of the JAX-RS 1.1 reference implementation - Jersey 1.8 is just out! This bug-fix release follows the EclipseLink 2.3 release from last week (as part of the Eclipse Indigo release train) and other components such as Woodstox 4.1.1 and Weld 1.1.1 which have already been released and integrated. To get started with Jersey 1.8, begin here and don't forget to visit the Jersey Wiki pages. You can also grab a nightly build of GlassFish 3.1.1 or wait for the next promoted build (#10) due out in a few days. As it currently stands for GlassFish 3.1.1, we have integration of the final bits for Metro 2.1.1 (currently at 2.1.1b7), Mojarra 2.1.3 (currently at 2.1.3b1), and MQ 4.5.1 (currently at 4.5.1b3) still ahead of us.

    Read the article

  • NetBeans 7.3 Beta2 is Out!

    - by Ondrej Brejla
    NetBeans 7.3 Beta2 was published today and is available for download. You can read about the PHP features added to the NetBeans 7.3 release here on the blog, but the main features added or improved are: Parsers for Namespaced Annotations (Symfony 2, Doctrine 2, etc.), Basic Composer Integration (the dependency manager for PHP), Twig Code Completion (with documentation), Smarty Braces Matching for Related Tags, and Smarty Parser Errors for Unmatched Tags. Of course, you can help us test the build: just try it, and if you find an issue or error, please report it. Thanks for your help.

    Read the article

  • node.js callback getting unexpected value for variable

    - by defrex
    I have a for loop, and inside it a variable is assigned with var. Also inside the loop a method is called which requires a callback. Inside the callback function I'm using the variable from the loop. I would expect that its value inside the callback function would be the same as it was outside the callback during that iteration of the loop. However, it always seems to be the value from the last iteration of the loop. Am I misunderstanding scope in JavaScript, or is there something else wrong? The program in question here is a node.js app that will monitor a working directory for changes and restart the server when it finds one. I'll include all of the code for the curious, but the important bit is the parse_file_list function.

        var posix = require('posix');
        var sys = require('sys');
        var server;

        var child_js_file = process.ARGV[2];
        var current_dir = __filename.split('/');
        current_dir = current_dir.slice(0, current_dir.length - 1).join('/');

        var start_server = function () {
            server = process.createChildProcess('node', [child_js_file]);
            server.addListener("output", function (data) { sys.puts(data); });
        };

        var restart_server = function () {
            sys.puts('change discovered, restarting server');
            server.close();
            start_server();
        };

        var parse_file_list = function (dir, files) {
            for (var i = 0; i < files.length; i++) {
                var file = dir + '/' + files[i];
                sys.puts('file assigned: ' + file);
                posix.stat(file).addCallback(function (stats) {
                    sys.puts('stats returned: ' + file);
                    if (stats.isDirectory())
                        posix.readdir(file).addCallback(function (files) {
                            parse_file_list(file, files);
                        });
                    else if (stats.isFile())
                        process.watchFile(file, restart_server);
                });
            }
        };

        posix.readdir(current_dir).addCallback(function (files) {
            parse_file_list(current_dir, files);
        });
        start_server();

    The output from this is:

        file assigned: /home/defrex/code/node/ejs.js
        file assigned: /home/defrex/code/node/templates
        file assigned: /home/defrex/code/node/web
        file assigned: /home/defrex/code/node/server.js
        file assigned: /home/defrex/code/node/settings.js
        file assigned: /home/defrex/code/node/apps
        file assigned: /home/defrex/code/node/dev_server.js
        file assigned: /home/defrex/code/node/main_urls.js
        stats returned: /home/defrex/code/node/main_urls.js
        stats returned: /home/defrex/code/node/main_urls.js
        stats returned: /home/defrex/code/node/main_urls.js
        stats returned: /home/defrex/code/node/main_urls.js
        stats returned: /home/defrex/code/node/main_urls.js
        stats returned: /home/defrex/code/node/main_urls.js
        stats returned: /home/defrex/code/node/main_urls.js
        stats returned: /home/defrex/code/node/main_urls.js

    For those from the future: node.devserver.js

    Read the article

  • How to Use KeePass In Your Browser, Across Your Computers, and On Your Phone

    - by Chris Hoffman
    If you’re using a password manager and it’s not the cloud-based LastPass, it’s probably KeePass. KeePass is a completely open-source password manager that stores all your sensitive data locally. However, this means that it isn’t quite as well-integrated as other solutions. Want LastPass-style browser integration, the ability to synchronize your passwords and have them everywhere, and an app to access your passwords on your phone? You’ll have to string together your own system.    

    Read the article

  • Is static universally "evil" for unit testing and if so why does resharper recommend it?

    - by Vaccano
    I have found that there are only three ways to unit test (mock/stub) dependencies that are static in C#/.NET: Moles, TypeMock, and JustMock. Given that two of these are not free and one has not hit release 1.0, mocking static stuff is not too easy. Does that make static methods and such "evil" (in the unit testing sense)? And if so, why does ReSharper want me to make anything that can be static, static? (Assuming ReSharper is not also "evil".) Clarification: I am talking about the scenario when you want to unit test a method and that method calls a static method in a different unit/class. By most definitions of unit testing, if you just let the method under test call the static method in the other unit/class then you are not unit testing, you are integration testing. (Useful, but not a unit test.)
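    A fourth option that needs no special mocking framework is to introduce a seam: wrap the static call behind a small interface, have the class under test depend on the interface, and let tests supply a stub. A minimal sketch (all of the type names here are hypothetical, standing in for whatever static dependency the real code calls):

        // The static dependency we want to isolate in tests.
        public static class TaxTables
        {
            public static decimal RateFor(string region)
            {
                // ... reads configuration or static lookup data ...
                return 0.08m;
            }
        }

        // The seam: a thin instance wrapper that forwards to the static member.
        public interface ITaxRates
        {
            decimal RateFor(string region);
        }

        public class StaticTaxRates : ITaxRates
        {
            public decimal RateFor(string region)
            {
                return TaxTables.RateFor(region);
            }
        }

        // The class under test depends only on the interface.
        public class InvoiceCalculator
        {
            private readonly ITaxRates _rates;

            public InvoiceCalculator(ITaxRates rates)
            {
                _rates = rates;
            }

            public decimal Total(decimal net, string region)
            {
                return net * (1 + _rates.RateFor(region));
            }
        }

    In a unit test, a hand-rolled stub (or an ordinary mocking library such as Moq or Rhino Mocks) implements ITaxRates and returns a fixed rate; production code passes in StaticTaxRates. The static method itself stays static - only the caller stops depending on it directly.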

    Read the article

  • Multiprocessing Bomb

    - by iKarampa
    I was working through the following example from Doug Hellmann's tutorial on multiprocessing:

        import multiprocessing

        def worker():
            """worker function"""
            print 'Worker'
            return

        if __name__ == '__main__':
            jobs = []
            for i in range(5):
                p = multiprocessing.Process(target=worker)
                jobs.append(p)
                p.start()

    When I tried to run it outside the if statement:

        import multiprocessing

        def worker():
            """worker function"""
            print 'Worker'

        jobs = []
        for i in range(5):
            p = multiprocessing.Process(target=worker)
            jobs.append(p)
            p.start()

    It started spawning processes non-stop, with no way to terminate it. Why would that happen? Why did it not generate 5 processes and exit? Why do I need the if statement?

    Read the article

  • ADF Business Components

    - by Arda Eralp
    ADF Business Components and JDeveloper simplify the development, delivery, and customization of business applications for the Java EE platform. With ADF Business Components, developers aren't required to write the application infrastructure code required by the typical Java EE application to:
    - Connect to the database
    - Retrieve data
    - Lock database records
    - Manage transactions
    ADF Business Components addresses these tasks through its library of reusable software components and through the supporting design-time facilities in JDeveloper. Most importantly, developers save time using ADF Business Components since the JDeveloper design time makes typical development tasks entirely declarative. In particular, JDeveloper supports declarative development with ADF Business Components to:
    - Author and test business logic in components which automatically integrate with databases
    - Reuse business logic through multiple SQL-based views of data, supporting different application tasks
    - Access and update the views from browser, desktop, mobile, and web service clients
    - Customize application functionality in layers without requiring modification of the delivered application
    The goal of ADF Business Components is to make the business services developer more productive.
    ADF Business Components provides a foundation of Java classes that allow your business-tier application components to leverage the functionality provided in the following areas:
    Simplifying Data Access
    - Design a data model for client displays, including only necessary data
    - Include master-detail hierarchies of any complexity as part of the data model
    - Implement end-user Query-by-Example data filtering without code
    - Automatically coordinate data model changes with the business services layer
    - Automatically validate and save any changes to the database
    Enforcing Business Domain Validation and Business Logic
    - Declaratively enforce required fields, primary key uniqueness, data precision-scale, and foreign key references
    - Easily capture and enforce both simple and complex business rules, programmatically or declaratively, with multilevel validation support
    - Navigate relationships between business domain objects and enforce constraints related to compound components
    Supporting Sophisticated UIs with Multipage Units of Work
    - Automatically reflect changes made by business service application logic in the user interface
    - Retrieve reference information from related tables, and automatically maintain the information when the user changes foreign-key values
    - Simplify multistep web-based business transactions with automatic web-tier state management
    - Handle images, video, sound, and documents without having to use code
    - Synchronize pending data changes across multiple views of data
    - Consistently apply prompts, tooltips, format masks, and error messages in any application
    - Define custom metadata for any business components to support metadata-driven user interface or application functionality
    - Add dynamic attributes at runtime to simplify per-row state management
    Implementing High-Performance Service-Oriented Architecture
    - Support highly functional web service interfaces for business integration without writing code
    - Enforce best-practice interface-based programming style
    - Simplify application security with automatic JAAS integration and audit maintenance
    - "Write once, run anywhere": use the same business service as a plain Java class, EJB session bean, or web service
    Streamlining Application Customization
    - Extend component functionality after delivery without modifying source code
    - Globally substitute delivered components with extended ones without modifying the application
    ADF Business Components implements the business service through the following set of cooperating components:
    Entity object: An entity object represents a row in a database table and simplifies modifying its data by handling all data manipulation language (DML) operations for you. Entity objects are basically your one-to-one representation of a database table: each table in the database has one and only one EO. The EO contains the mapping between columns and attributes. EOs also contain the business logic and validation. These are your core data services; they are responsible for updating, inserting, and deleting records. The Attributes tab displays the actual mapping between attributes and columns; the mapping has the following fields:
    - Name: contains the name of the attribute we expose in our data model
    - Type: defines the data type of the attribute in our application
    - Column: specifies the column to which we want to map the attribute
    - Column Type: contains the type of the column in the database
    View object: A view object represents a SQL query. You use the full power of the familiar SQL language to join, filter, sort, and aggregate data into exactly the shape required by the end-user task. The attributes in the view objects actually come from the entity objects. In the end the VO will generate a query, but you basically build a VO by selecting which EOs need to participate in the VO and which attributes of those EOs you want to use. That's why you have the Entity Usage column, so you can see the relation between VO and EO. In the Query tab you can clearly see the query that will be generated for the VO. At this stage we don't need it and just use it for information purposes; in later stages we might use it.
    Application module: An application module is the controller of your data layer. It is responsible for keeping hold of the transaction. It exposes the data model to the view layer; you expose the VOs through the application module. This is the abstraction of your data layer which you want to show to the outside world. It defines an updatable data model and top-level procedures and functions (called service methods) related to a logical unit of work tied to an end-user task. While the base components handle all the common cases through built-in behavior, customization is always possible, and the default behavior provided by the base components can be easily overridden or augmented.
    When you create EOs, a foreign key will be translated into an association in the model. It defines the type of relation and who is the master and child, as well as what the visibility of the association looks like. A similar concept exists to identify relations between view objects; these are called view links. They are almost identical to associations, except that a view link is based upon attributes defined in the view object. It can also be based upon an association. Here's a short summary:
    - Entity Objects: representations of tables
    - Associations: relations between EOs; representations of foreign keys
    - View Objects: the logical model
    - View Links: relationships between view objects
    - Application Module: the interface to your application

    Read the article

  • How to do steps of an API through CLI

    - by Dolphin
    I'm using the Audiveris API to generate an XML (MusicXML) file from an input piece of sheet music (e.g. a PDF or image file) - i.e., given the location of the sheet music (PDF file), generate the XML file from it (in another location). Audiveris has its own GUI to do this, but can I do this sheet-music-to-XML process without using the GUI, only from the CLI? If so, how should I approach it? And if it can be made to work from the CLI, is it also possible to drive it from Java code (say, to invoke the API's steps on the CLI using Java code)? I managed to open the GUI by launching the jar file from the CLI, but I need to know whether the sheet-music (say PDF) to XML process can be carried out without the GUI, through the CLI alone. Greatly appreciate any help or guidance. Thanks in advance

    Read the article

  • Getting the keyword as a parameter from Adwords using ValueTrack

    - by Stephen Ostermiller
    I set up an AdWords campaign for a website following the instructions for Google AdWords ValueTrack. One of the things it is supposed to be able to do is pass the keyword as a URL parameter using the code {keyword} in the URL. I set it up for integration with Google Analytics so that the landing URLs would look like: http://example.com/landing.html?utm_source=adwords&utm_medium=cpc&utm_term=%7Bkeyword%7D&utm_content=my_content&utm_campaign=my_page where {keyword} is in the utm_term parameter. However, this keyword substitution isn't happening. Why?

    Read the article

  • Choosing the right version control system for .NET projects [closed]

    - by madxpol
    I'm getting ready for my first "bigger" .NET project (ASP.NET MVC 3/4) on which I'm going to lead another two programmers, and right now I'm choosing the right version control system for the job (plus I'm going to use it for my future development too). My problem is that I didn't use any version control system before, so I would like it to have as fast a learning curve and as intuitive merging as possible. So far I have quickly looked at VisualSVN (I like the Visual Studio integration in it), but I'm reading everywhere how Git is awesome and don't know which one to choose (not limited to these two). Maybe I'm overthinking this, but I like it when everything goes smoothly :) I'd like to hear some opinions from people who have used multiple version control systems (preferably on VS projects): what do you think is the least complicated and most effective version control system for such use (one- to five-man projects)?

    Read the article

  • Get CruiseControl to talk to github with the correct public key.

    - by Danny Lister
    Hi All, Has anybody installed Git and CruiseControl.NET and got CruiseControl to pull from GitHub on a Windows 2003 server? I keep getting public key errors (access denied) - which is good, I suppose, as it confirms Git is talking to GitHub. However, what is not good is that I don't know where to install the RSA keys so they will be picked up by the running process (Git in the context of cc.net). Any help would save me a lot of hair! I have tried installing the keys into c:\Program Files\Git\.ssh, whereas running Git Bash and cd ~ takes me to c:\Program Files\Git. The current error from CC.net is:

        Error Message: ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed: Permission denied (publickey). fatal: The remote end hung up unexpectedly.
        Process command: C:\Program Files\Git\bin\git.exe fetch origin

    Thanks in advance

    Read the article

  • BizTalk 2010 - BAM Portal - No Views to Display

    - by Stuart Brierley
    Our latest BizTalk Server 2010 development project is utilising BizTalk as the integration ring around a new and sizable implementation of Dynamics AX 2012. With this project we have decided to use BAM to monitor the processes within our various new applications. Although I have been specialising in BizTalk for around 9 years, this is my first time using BAM, so it is an interesting process to be going through. Recently, when deploying a solution, I was attempting to check the BAM Portal to see that the View that I had created was properly deployed and that the Activity I was populating was being surfaced in the Portal as expected. Initially I was presented with the message "No view to display" in the "My Views" area of the BAM Portal landing page. This was because you need to set the permissions on the views that you want to see from the command line using the bm.exe tool:

        bm.exe add-account -AccountName:YourServerOrDomain\YourUsername -View:YourView

    This tool can be found in the BAM folder at the BizTalk installation location: C:\Program Files (x86)\Microsoft BizTalk Server 2010\Tracking

    Read the article
