Search Results

Search found 25908 results on 1037 pages for 'configuration management'.


  • Hibernate database connection configuration

    - by Alvin
    We have 2 different server environments using the same Hibernate configuration. One server has JNDI support for the datasource, but the other does not. Currently Hibernate is configured to use JNDI, which is causing problems on the server that does not support JNDI. I have also tried putting the direct JDBC configuration into the configuration file alongside the JNDI configuration, but it looks like Hibernate always favors JNDI over direct JDBC when both exist. My question is: will it be the same if both the JNDI and connection_provider configurations exist? Will Hibernate still use JNDI over the connection_provider? Or is there any way to change the precedence of the database connection properties? I do not have access to the server all the time, so I thought I would ask the question before my window of server time. Thanks in advance.
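
    For reference, a minimal sketch of the two alternatives as hibernate.properties entries (all values are illustrative, not from the question). As far as I know, Hibernate picks the JNDI datasource whenever hibernate.connection.datasource is set, but an explicit provider class can force the choice:

        # direct JDBC (values illustrative)
        hibernate.connection.driver_class=org.postgresql.Driver
        hibernate.connection.url=jdbc:postgresql://localhost:5432/appdb
        hibernate.connection.username=app
        hibernate.connection.password=secret

        # JNDI datasource -- when present, Hibernate prefers this over the
        # url/driver settings above
        hibernate.connection.datasource=java:comp/env/jdbc/AppDS

        # one way to force the decision explicitly (Hibernate 3.x class name, assumed here)
        hibernate.connection.provider_class=org.hibernate.connection.DriverManagerConnectionProvider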

    Read the article

  • One configuration per domain name on the same application. How to easily access config values from models?

    - by Aymeric
    Hi, I run a Ruby on Rails website that has multiple domain names. I have a "Website" table in the database that stores the configuration values related to each domain name: Website - domain - name - tagline - admin_email - etc... At the moment, I load the website object at the start of each request (before_filter) in my ApplicationController: @website = Website.find_by_domain(request.host) The problem is when I need to access the @website object from my model methods. I would like to avoid having to pass @website around everywhere. The best solution would be to have something similar to APP_CONFIG but per domain name:

        def sample_model_property
          "#{@website.name} is a great website!"
        end

    How would you do it?
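
    One common approach (a sketch only, not from the original post): stash the per-request website in a thread-local so models can reach it without it being passed around. Names here are illustrative:

        # expose the current website via a thread-local
        class Website < ActiveRecord::Base
          def self.current
            Thread.current[:website]
          end

          def self.current=(website)
            Thread.current[:website] = website
          end
        end

        # in ApplicationController, in place of the plain before_filter assignment
        def load_website
          @website = Website.current = Website.find_by_domain(request.host)
        end

        # any model can then do:
        def sample_model_property
          "#{Website.current.name} is a great website!"
        end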

    Read the article

  • Oracle at Gartner IAM Summit Next Week

    - by Tanu Sood
    Heading to the Gartner Identity and Access Management Summit next week? As you know, one of the premier conferences for identity management specialists and security experts, the Gartner IAM Summit this year is in Las Vegas, Nevada from December 3 – 5. As you pack your bags and plan your itinerary, do note that Oracle executives, including Amit Jasuja, Senior Vice President, Security and Identity Management, and Dave Profozich, Group Vice President, along with product management and implementation experts, will be in attendance. You are invited to meet with the Oracle team and mingle with our customers. We recommend you bookmark the following times and activities:

    Breakfast Keynote: Trends in Identity Management
    Tuesday, December 4, 2012, 7:30 a.m. – 8:00 a.m., Octavius 16
    Amit Jasuja, SVP, Security and Identity Management, Oracle
    Ranjan Jain, Enterprise Architect, Cisco

    Don't miss the opportunity to hear from Amit Jasuja as he discusses how mobile and social behavior are changing how organizations function, manage their workforce, and interact with their customers. Learn how these new trends are shaping the innovations in Oracle Identity Management solutions, and get a customer's take on the new trends and their impact on the organization.

    Visit the Oracle Booth
    Mingle with peers, customers, and product and implementation experts at the Oracle booth. While there, catch live demonstrations of the very latest best-in-class technologies and learn how Oracle Identity Management solutions are enabling Social, Mobile and Cloud (SoMoClo) environments. Arm yourself with industry resources from our Virtual Collateral Rack, and don't forget to enter for a chance to win a JAWBONE JAMBOX wireless speaker system while at our booth.

    So, see you there?
    Gartner Identity and Access Management Summit
    December 3 – 5, 2012
    Caesars Palace, 3570 Las Vegas Blvd South, Las Vegas, NV 89109

    Read the article

  • Adventures in Lab Management Configuration: Part 2 of 3

    - by Enrique Lima
    The first post was the high-level overview. Now it is time for the details of what was done to the existing CMMI project, which was based on the v4.2 process template. The first step was to go into Visual Studio, then to the Team Project Collection Settings, and then to the Process Template Manager. Once there, it was a matter of selecting the appropriate template (MSF for CMMI Process Improvement v5.0) and downloading it to a location I could reference later (for example, C:\Templates). Then on to the steps from the guidance post. Since I was using an x64 deployment, I will refer to the path as <toolpath>; the actual path to reference in a 64-bit environment is "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE". As I mentioned in the previous post, make sure to first perform a backup of the Configuration, Collection and Warehouse DBs. If you did not change the default names, you will find those as tfs_Configuration, tfs_DefaultCollection and tfs_Warehouse.

    Now, the work needed with the witadmin tool. This includes uploading the structures that differ from v4.2 to v5.0. There is likely going to be an issue with the naming of some fields. For example, TFS 2010 expects something along the lines of "Area ID", whereas TFS 2008 would have had it as "AreaID". This will need to be corrected. Some posts will have you go through this after the errors pop up; I would recommend doing it prior to executing the importwitd step:

        witadmin listfields /collection:<path to collection> > c:\ListFields.txt

    Review the following fields: for AreaID, check the Name property; if it states "AreaID", you will need to rename it to "Area ID". ExternalLinkCount, RelatedLinkCount, HyperLinkCount, AttachedFileCount and IterationID are the other fields to check. To correct the issue, execute the following:

        witadmin changefield /collection:<path to collection> /n:"System.ExternalLinkCount" /name:"External Link Count"

    Repeat for Area ID, Related Link Count, Hyperlink Count, Attached File Count and Iteration ID. Once this is done, proceed with the commands below:

        witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\TestCase.xml"
        witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\SharedStep.xml"
        witadmin importcategories /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\categories.xml"

    Modifications to the Bug definition. The first step is to export the existing definition:

        witadmin exportwitd /collection:<path to collection> /p:<project> /n:bug /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"

    Make the modifications to the freshly exported MyBug.xml file. Details for the modification are here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask. Once the changes are done, proceed with the import command:

        witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"

    Repeat the process for the Scenario or Requirement type definition:

        witadmin exportwitd /collection:<path to collection> /p:<project> /n:requirement /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"

    Make the modifications to the freshly exported MyRequirement.xml file (same MSDN reference as above). Once the changes are done, proceed with the import command:

        witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"

    Finally, provide the Bug Field Mapping definition, after creating the file as specified here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#TCMBugFieldMapping

        tcm bugfieldmapping /import /mappingfile:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\bugfieldmappings.xml" /collection:<path to collection> /teamproject:<project name>

    Read the article

  • Developing Schema Compare for Oracle (Part 4): Script Configuration

    - by Simon Cooper
    If you've had a chance to play around with the Schema Compare for Oracle beta, you may have come across a particular screen in the synchronization wizard. This screen is one of the few that, along with the project configuration form, doesn't come from SQL Compare. It was designed to solve a couple of issues that, although not specific to Oracle, are much more of a problem there than on SQL Server: datatype conversions and NOT NULL columns.

    1. Datatype conversions

    SQL Server is generally quite forgiving when it comes to datatype conversions using ALTER TABLE. For example, you can convert from a VARCHAR to INT using ALTER TABLE as long as all the character values are parsable as integers. Oracle, on the other hand, only allows ALTER TABLE conversions that don't change the internal data format. Essentially, every change that requires an actual datatype conversion has to be done using a rebuild with a conversion function. That's OK, as we can simply hard-code the various conversion functions for the valid datatype conversions and insert those into the rebuild SELECT list. However, as there always is with Oracle, there's a catch. Have a look at the NUMTODSINTERVAL function. As well as specifying the value (or column) to convert, you have to specify an interval_unit, which tells Oracle how to interpret the input number. We can't hard-code a default for this parameter, as it is entirely dependent on the user's data context! So, in order to convert NUMBER to INTERVAL DAY TO SECOND/INTERVAL YEAR TO MONTH, we need feedback from the user as to what to put in this parameter while we're generating the sync script. This requires a new step in the engine action/script generation to insert these values into the script, as well as new UI to allow the user to specify these values in a sensible fashion. In implementing the engine and UI infrastructure to allow this, it made much more sense to implement it for any rebuild datatype conversion, not just NUMBER to INTERVALs. For conversions which we can do, we pre-fill the 'value' box with the appropriate function from the documentation. The user can also type in arbitrary SQL expressions, which allows the user to specify optional format parameters for the relevant conversion functions, or indeed call their own functions to convert between values that don't have a built-in conversion defined. As the value gets inserted as-is into the rebuild SELECT list, any expression that is valid in that context can be specified as the conversion value.

    2. NOT NULL columns

    Another problem that is solved by the new step in the sync wizard is adding a NOT NULL column to a table. If the table contains data (as most database tables do), you can't just add a NOT NULL column, as Oracle doesn't know what value to put in the new column for existing rows - the DDL statement will fail. There are actually 3 separate scenarios for this problem that have separate solutions within the engine:

    Adding a NOT NULL column to a table without a rebuild. Here, the workaround is to add a column default with an appropriate value to the column you're adding:

        ALTER TABLE tbl1 ADD newcol NUMBER DEFAULT <value> NOT NULL;

    Note, however, there is something to bear in mind about this solution: once specified on a column, a default cannot be removed. To 'remove' a default from a column you change it to have a default of NULL, hence there's code in the engine to treat a NULL default the same as no default at all.

    Adding a NOT NULL column to a table where a separate change forced a table rebuild. Fortunately, in this case, a column default is not required - we can simply insert the default value into the rebuild SELECT clause.

    Changing an existing NULL column to NOT NULL. To implement this, we run an UPDATE command before the ALTER TABLE to change all the NULLs in the column to the required default value.

    For all three, we need some way of allowing the user to specify a default value to use instead of NULL; as this is essentially the same problem as datatype conversion (inserting values into the sync script), we can re-use the UI and engine implementation of datatype conversion values. We also provide the option to alter the new column to allow NULLs, or to ignore the problem completely. Note that there is the same (long-running) problem in SQL Compare, but it is much more of an issue in Oracle as you cannot easily roll back executed DDL statements if the script fails at some point during execution. Furthermore, the engine of SQL Compare is far less conducive to inserting user-supplied values into the generated script. As we're writing the Schema Compare engine from scratch, we used what we learnt from the SQL Compare engine and designed it to be far more modular, which makes inserting procedures like this much easier.
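
    To make the rebuild-with-conversion idea concrete, here is a hypothetical sketch (table and column names are invented, not from the post) of converting a NUMBER column holding day counts to INTERVAL DAY TO SECOND, with 'DAY' as the user-supplied interval_unit discussed above:

        -- rebuild the table, converting duration_days (NUMBER) to an interval
        CREATE TABLE tbl1_rebuild AS
          SELECT id,
                 NUMTODSINTERVAL(duration_days, 'DAY') AS duration
          FROM tbl1;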

    Read the article

  • Reduce ERP Consolidation Risks with Oracle Master Data Management

    - by Dain C. Hansen
    Reducing the risk of ERP consolidation starts first and foremost with your data. This is nothing new; companies with multiple misaligned ERP systems are often putting inordinate risk on their business. It can translate to too much inventory, long lead times, and shipping issues from poorly organized and specified goods. And don't forget the finance side! When goods are shipped and promises are kept or not kept, there's the issue of accounts. No single chart of accounts translates to no accountability. So - I've decided. I need to consolidate! Well, you can't consolidate ERP applications [for that matter, any of your applications] without first considering your data. This means looking at how your data is being integrated by these ERP systems, how it is being synchronized, and what information is being shared, or not being shared. Most importantly, it means making sure that the data is mastered. What is the best way to do this? In the recent webcast, Reduce ERP Consolidation Risks with Oracle Master Data Management, we outlined 3 key guidelines:

    #1: Consolidate your Product Data
    #2: Consolidate your Customer and Supplier (Party) Data
    #3: Consolidate your Financial Data

    Together these help customers achieve reduced risk, better customer intimacy, reduced inventory levels, elimination of product variations, and finally a single master chart of accounts. In the case of Oracle's customer Zebra Technologies, they were able to consolidate over 140 applications by mastering their data. Ultimately this gave them 60% cost savings for the year on IT spend.

    Oracle's Solution for ERP Consolidation: Master Data Management

    Oracle's enterprise master data management (MDM) can play a big role in ERP consolidation. It includes a set of products that consolidates and maintains complete, accurate, and authoritative master data across the enterprise and distributes this master information to all operational and analytical applications as a shared service. It's optimized to work with any application source (not just Oracle's) and can integrate using technology from Oracle Fusion Middleware (i.e. GoldenGate for data synchronization and real-time replication, or ODI with its E-LT optimized bulk data and transformation capability). In addition, especially for ERP consolidation use cases, it's important to leverage the AIA and SOA capabilities of Fusion Middleware to connect these multiple applications together and relay the data into the correct hub. Oracle's MDM strategy is a unique offering in the industry, one that has common elements across Middleware, BI/DW and Engineered Systems, combined with Enterprise Data Quality to enable comprehensive data governance at all levels. In addition, Oracle MDM provides best-in-class capabilities to master all variations of data, including customer, supplier, product and financial data.

    But ultimately at the center of Oracle MDM is your data: making it more trusted, making it secure and accessible as part of a role-based approach, and getting it to make sense to you in any situation, whether it's a specific ERP process like we talked about or something that is custom to your organization. To learn more about these techniques in ERP consolidation, watch our webcast or go to our Oracle MDM website at www.oracle.com/goto/mdm

    Read the article

  • SQL SERVER – Why Do We Need Master Data Management – Importance and Significance of Master Data Management (MDM)

    - by pinaldave
    Let me paint a picture of everyday life for you. Let's say you and your wife both have address books for your groups of friends. There is definitely overlap between them, so that you both have the addresses for your mutual friends, and there are addresses that only you know, and some only she knows. They also might be organized differently. You might list your friend under "J" for "Joe" or even under "W" for "Work," while she might list him under "S" for "Joe Smith" or under your name because he is your friend. If you happened to trade, neither of you would be able to find anything! This is where data management would be very important. If you were to consolidate into one address book, you would have to set rules about how to organize the book, and both of you would have to follow them. You would also make sure that poor Joe doesn't get entered twice, under "J" and under "S." This might be a familiar situation to you, whether you are thinking about address books, record collections, books, or even shopping lists. Wherever there is a lot of data to consolidate, you are going to run into problems unless everyone is following the same rules. I'm sure that my readers can figure out where I am going with this. What is SQL Server but a computerized way to organize data? And Microsoft is making it easier and easier to get all your "addresses" into one place. In the 2008 R2 version of SQL Server they introduced a new tool called Master Data Services (MDS) for Master Data Management, and they have improved it for the new 2012 version. MDM was hailed as a major improvement for business intelligence. You might not think that an organizational system is terribly exciting, but think about the kind of "address books" a company might have. Many companies have lots of important information, like addresses, credit card numbers, purchase history, and so much more. To organize all this efficiently so that customers are well cared for and properly billed (only once, not never or multiple times!) is a major part of business intelligence. MDM comes into play because it will comb through these mountains of data and make sure that all the information is consistent, accurate, and all placed in one database so that employees don't have to search high and low and waste their time. MDM also has operational MDM functions. This is not a redundancy. Operational MDM means that when one employee updates one bit of information in the database - for example, a new address for a customer - operational MDM ensures that this address is updated throughout the system so that all departments will have the correct information. Another cool thing about MDM is that it features the Master Data Services Configuration Manager, which is exactly what it sounds like: a built-in "helper" that lets you set up your database quickly, easily, and with the correct configurations. While talking about cool features, I can't skip over the add-in for Excel. This allows you to link certain data to Excel files for easier sharing and uploading. In summary, I want to emphasize that the scariest part of the database is slowly disappearing. Everyone knows that a database - one consolidated area for all your data - is a good idea, but the idea of setting one up is daunting. SQL Server is making data management easier and easier with features like Master Data Services (MDS).
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Master Data Services, MDM

    Read the article

  • Successfully Deliver on State and Local Capital Projects through Project Portfolio Management

    - by Sylvie MacKenzie, PMP
    While the debate continues on Capitol Hill about which federal programs to cut and which to keep, communities and towns across America are feeling the budget crunch closer to home. State and local governments are trying to save as many projects as they can without promising too much to constituents - and they, in turn, want to know where their tax dollars are going. Fortunately, with the right planning and management, you can deliver successful projects and portfolios on a limited budget. Watch the replay of our recent webcast with Oracle Primavera Industry and Product Manager Garrett Harley, which demonstrates how state and local governments can get the most out of their capital projects, and learn how two Oracle Primavera customers have implemented project portfolio management practices to:

    Predict the cost of long-term capital programs and projects
    Assess risk and mitigation strategies
    Collaborate and track performance across government agencies

    Speakers:
    Garrett Harley, Industry and Product Manager, Oracle Primavera
    Cory Davis, Director of Capital Renovation and New Construction, Chicago Public Schools
    Julie Owen, PSP™, CCC™, Sr. Project Controls Manager, LA Metro Transit Authority

    With the right planning and management, state and local governments can deliver successful projects on a limited budget.

    Read the article

  • Should core application configuration be stored in the database, and if so what should be done to secure it?

    - by Rl
    I'm writing an application around a lot of hierarchical data. Currently the hierarchy is fixed, but it's likely that new items will be added to the hierarchy in the future (please let them be leaves). My current application and database design is fairly generic, and nothing dealing with specific nodes in the hierarchy is hardcoded, with the exception of validation and lookup functions written to retrieve external data from each node's particular database. This pleases me from a design point of view, but I'm nervous at the realization that the entire application rests on a handful of records in the database. I'm also frustrated that I have to enforce certain aspects of data integrity with database triggers rather than with foreign key constraints (an example is where several different nodes in the hierarchy have their own proprietary IDs and I store them in a single column which, when coupled with the node ID, can be used to locate the foreign data). I'm starting to wonder whether it would have been more appropriate to simply hardcode these known nodes into the system so that it would be more "type safe" and less generic. How does one know when something should be hardcoded, and when it should be a configuration item? Is it just a cost-benefit analysis of clarity/safety now vs. less work later, or am I missing some metric I should be using to determine whether or not this is appropriate? The steps I'm taking to protect these valuable configurations are to add triggers that prevent updates/deletes, and the database user that this application uses will only have the ability to manipulate data through stored procedures. What else can I do?
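
    As one illustration of the protection step described above (a sketch only - T-SQL syntax is assumed, and the table/trigger names are invented):

        -- block deletes against the rows the application depends on; an
        -- INSTEAD OF trigger swallows the delete and raises an error instead
        CREATE TRIGGER trg_protect_config_nodes
        ON config_nodes
        INSTEAD OF DELETE
        AS
        BEGIN
            RAISERROR('config_nodes rows are protected from deletion', 16, 1);
        END;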

    Read the article

  • Configuration Error, finding assembly after I swapped referenced DLLs out (Visual Studio 2003)

    - by TampaRich
    Here is the situation. I had a clean build of my ASP.NET web application working. I then went into the bin folder under the web app and replaced two referenced DLLs with older versions of the same DLLs (same names, etc.). After testing, I swapped back to the new DLLs, and now my application keeps throwing this configuration error:

        === Pre-bind state information ===
        LOG: DisplayName = xxxxx.xxxx.Personalization (Partial)
        LOG: Appbase = file:///c:/inetpub/wwwroot/appname
        LOG: Initial PrivatePath = bin
        Calling assembly : (Unknown).
        LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind).

    I found this issue on the web and tried all the suggested solutions, but nothing worked. I then went into all the projects the solution references, cleared out the bin/debug folder in each, cleared out the obj folder under each, and also deleted the temporary files associated with the application. I rebuilt it and it still will not work due to this error. I'm not sure what is causing this or how to fix it. I have tried restarting IIS and stopping Indexing Services, which was said to be a known issue. This is a .NET Framework 1.1 app and Visual Studio 2003. Any suggestions would be great. Thanks.
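
    One avenue worth checking (a sketch, not a confirmed fix - the assembly name, token and versions below are placeholders, not from the question): if the old and new DLLs carry different version numbers, a bindingRedirect in web.config can point stale references at the new version:

        <!-- inside <configuration> in web.config; values are illustrative -->
        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
              <assemblyIdentity name="Xxxxx.Xxxx.Personalization"
                                publicKeyToken="0123456789abcdef" />
              <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" />
            </dependentAssembly>
          </assemblyBinding>
        </runtime>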

    Read the article

  • SO-Aware at the Atlanta Connected Systems User Group

    - by gsusx
    Today my colleague Don Demsak will be presenting a session about WCF management, testing and governance using SO-Aware and the SO-Aware Test Workbench at the Connected Systems User Group in Atlanta. Don is a very engaging speaker and has prepared some very cool demos based on lessons from real-world WCF solutions. If you are in the ATL area and interested in WCF, AppFabric or BizTalk, you should definitely swing by Don's session. Don't forget to heckle him a bit (you can blame me for it ;) )...(read more)

    Read the article

  • SO-Aware sessions in Dallas and Houston

    - by gsusx
    Our WCF registry, SO-Aware, keeps being evangelized throughout the world. This week Tellago Studios' Dwight Goins will be speaking at Microsoft events in Dallas and Houston ( https://msevents.microsoft.com/cui/EventDetail.aspx?culture=en-US&EventID=1032469800&IO=ycqB%2bGJQr78fJBMJTye1oA%3d%3d ) about WCF management best practices using SO-Aware. If you are in the area and passionate about WCF, you should definitely swing by and give Dwight a hard time ;)...(read more)

    Read the article

  • OpenWorld Approaching... A few opportunities to share your needs with Oracle

    - by RichMill
    At OpenWorld from Monday the 1st to Wednesday the 3rd, the My Oracle Support and Enterprise Manager user research team will be in action. If you are someone who does patching, edits configurations, or uses either MOS configuration management (the collector) OR Enterprise Manager configuration compare or search, we have a treat for you! Come give us your feedback on how you do your tasks, what needs you have, and how we can do better in this space. We will be doing this during OOW, but an OOW badge is not required to participate. OR, if you are someone who downloads large amounts of software (say, the entire EBS stack) and wants to understand how one customizes a "recommended" stack of software for yourself or your customers, let us know! We have a study looking at how to create, customize and download all of the software needed for an installation. This will be done after OOW via web conference, so customers from anywhere in the world can participate. We want to hear from you, so we can get this right! E-mail us directly at [email protected] - or leave a comment with your email - so we can get your feedback into one or both of these two discussions. Hope you can participate!

    Read the article

  • Xubuntu Nvidia settings not remembered

    - by hozza
    I have an Nvidia card, and I'm using NVIDIA driver version 304.51 and the NVIDIA X Server Settings GUI. Everything works fine, but when I reboot and log in again, both my screens are set to +0+0 so they mirror each other. I change the settings so that screen 2 (NEC LED) is left of screen 1 (DELL), click Apply, and then Save to X Configuration File. It all works, but when I log in again the settings are not remembered. This is my xorg config file; can anyone help out?

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 304.51 (buildd@komainu) Fri Oct 12 12:53:49 UTC 2012

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "DELL 2005FPW"
            HorizSync       30.0 - 83.0
            VertRefresh     56.0 - 75.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce GT 640"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "Stereo" "0"
            Option         "nvidiaXineramaInfoOrder" "DFP-0"
            Option         "metamodes" "DFP-0: nvidia-auto-select +1280+0, DFP-2: nvidia-auto-select +0+0"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection
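
    One thing that often bites here, offered as a guess rather than a confirmed diagnosis: per-user settings saved by the GUI go to ~/.nvidia-settings-rc and are only applied if nvidia-settings is re-run at login, e.g. from a session autostart entry:

        # possible workaround: reload the saved settings at session start
        nvidia-settings --load-config-only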

    Read the article

  • Kanban vs. Scrum

    - by Andrew Siemer
    Can someone with Kanban experience tell me how Kanban and Scrum differ? What are the pros and cons of each project management methodology? Kanban seems to be getting a lot of press these days, and I don't want to miss the hottest new way of tracking my team's failures (...and successes).

    Responses

    @S. Lott - "What part of this article wasn't clear enough? infoq.com/articles/hiranabe-lean-agile-kanban/... Do you have a more specific question?" That is a great article, but technically no, it is not clear enough. It gives a great amount of detail about Kanban (and thank you for it - good read), but it does not specifically contrast Kanban vs. Scrum. It will help someone like me make a decision, but it most certainly won't help someone like my boss, or in general someone less experienced! I was hoping for a quick overview of Kanban pros and cons contrasted with Scrum pros and cons. Thanks though!

    @S. Lott - "Why do you say kanban vs. scrum? What leads you to conclude they are conflicting approaches? Can you make your question more specific?" I don't think that they are necessarily conflicting, but they are different enough for a user to adhere to one over the other. Perhaps one fits a project or company better than the other? How would I sell one over the other when presenting a project management approach? Say I went to a company that was currently stuck in the rut that is "waterfall" - why would I sell one approach over the other?

    Read the article

  • IIS 7.5 saving configuration settings to web.config with IIS Manager

    - by Caroline Beltran
    I installed IIS 7.5 (Windows 7) on two different PCs; one PC saves configuration to applicationHost.config and the other to web.config! (Screenshot omitted.) The PC on the left does not contain the settings in web.config (they are stored in applicationHost.config), and the PC on the right does in fact store the settings in web.config. I have not modified the configuration files by hand and would prefer not to; I would like to do it using IIS Manager only. Does anyone know why this happens, or if there is some setting that forces configuration into the web.config file? Thank you. Edit: I am adding a screenshot of the applicationHost.config file (from the PC on the left) demonstrating how the configuration is stored in it instead of inside the web.config file.
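
    For what it's worth, the usual mechanism behind this kind of difference (an educated guess here, not confirmed from the screenshots) is feature delegation: each section's overrideModeDefault in applicationHost.config decides where IIS Manager is allowed to write it. A sketch, with an illustrative section name:

        <!-- in applicationHost.config, under <configSections>.
             "Allow" lets IIS Manager write the section to the site's
             web.config; "Deny" keeps it in applicationHost.config. -->
        <section name="defaultDocument" overrideModeDefault="Allow" />

    In IIS Manager this surfaces as the server-level Feature Delegation page, so comparing that page on the two PCs is a cheap first check.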

    Read the article

  • configuration transfer over scp on commit not working on Juniper EX-2200 switch

    - by liv2hak
    I am making a series of configuration changes on a Junos EX-2200 switch. I have the switch connected to another PC via an ethernet cable. The IP address of the switch is 192.168.1.1, and I am able to ping between the switch and the PC (192.168.1.10) in both directions. After the changes, I run the following commands:

        set system archival configuration transfer-on-commit
        set system archival configuration archive-sites "scp://[email protected]:/home/karthik/ws_karthik/sw1_config_1.txt" password godfather
        commit

    There is a user with username "karthik" and password "godfather", and the path shown above exists on the system. However, I don't see the configuration file sw1_config_1.txt created at the path specified. I have also verified that sshd is running on the PC (192.168.1.10). Am I doing something wrong here? It would be great if anyone could help me out.

    Read the article

  • Ubuntu 12.04 on VMware Player loses network configuration

    - by d4ryl3
    I've been having this issue for two weeks now with my VMware Player-hosted Ubuntu 12.04, which I only use for my LAMP stack. I had no issues with it before; then, about two weeks ago, it started losing its network configuration almost every day (at least once). On boot it shows:

        Waiting for network configuration...
        Waiting up to 60 more seconds for network configuration...
        Booting system without full network configuration...

    Then when I do ifconfig -a it doesn't show an IP address, and the guest can't get online. The only resolutions I've found so far are to either reinstall VMware Tools or run the VMware Player installer and choose Repair. This is frustrating because even when the issue is resolved by one of those steps, the IP address changes, and then I have to update the remote configuration of my IDE (NetBeans) and my database manager. What could possibly cause this? Please help. Thank you. Additional details: I'm using a laptop with Windows 7, connected to the office WiFi, which is unrestricted as far as I know. Thanks again.
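
    A side note on the changing address (a sketch under assumptions - the interface name eth0 and the VMware NAT subnet are guesses, not from the question): pinning a static IP in /etc/network/interfaces at least keeps the IDE and database manager settings stable across repairs:

        # /etc/network/interfaces -- illustrative values only
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.142.10
            netmask 255.255.255.0
            gateway 192.168.142.2   # the VMware NAT gateway is typically .2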

    Read the article

  • Windows CE restores default configuration after restart (Motorola MC3190)

    - by jarek
    Good morning, everyone! I have a problem with a Motorola MC3190 handheld scanner running Windows CE. I've got a few of these to write a new program for a warehouse. There is an already-installed program that the customer used before, so I deleted it and installed the new software I've just written. It runs very well, but when I take out the battery and leave the device without power for an entire night, it restores its whole configuration: the old program is back, the wireless configuration is back and... yeah, the scanner is restored to the configuration it was running when I received it a few weeks ago. What I want to do is set up the scanner's configuration so that after a long power-off my program and my configuration are restored. I truly believe someone knows how to do this. Time is running out, and I believe the customer will be kind of annoyed when he changes the battery and the program he bought is gone. ;-) Regards, Jarek

    Read the article

  • Dynamic virtual host configuration in Apache

    - by Kostas Andrianopoulos
    I want to make a virtual host in Apache with dynamic configuration for my websites. For example, something like this would be perfect:

        <VirtualHost *:80>
            AssignUserId $domain webspaces
            ServerName $subdomain.$domain.$tld
            ServerAdmin admin@$domain.$tld
            DocumentRoot "/home/webspaces/$domain.$tld/subdomains/$subdomain"
            <Directory "/home/webspaces/$domain.$tld/subdomains/$subdomain">
                ....
            </Directory>
            php_admin_value open_basedir "/tmp/:/usr/share/pear/:/home/webspaces/$domain.$tld/subdomains/$subdomain"
        </VirtualHost>

    $subdomain, $domain and $tld would be extracted from the HTTP_HOST variable using a regex at request time. No more loads of configuration, no more Apache reloading every x minutes, no more stupid logic. Notice that I use mpm-itk (the AssignUserId directive), so each virtual host runs as a different user; I do not intend to change this part. So far I have tried:

    - mod_vhost_alias, but this allows dynamic configuration of only the document root.
    - mod_macro, but this still requires the arguments of the vhost to be declared explicitly for each vhost.
    - I have read about mod_vhs and other modules which store configuration in a SQL or LDAP server, which is not acceptable, as there is no need for stored configuration! Those 3 necessary arguments can be generated at runtime.
    - I have seen some Perl suggestions like this, but as the author states, $s->add_config would add a directive after every request, thus leading to a memory leak, and $r->add_config seems not to be a feasible solution.
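
    For context on the first point, this is roughly all mod_vhost_alias can express (a sketch; the %-2.0/%-1.0 specifiers pick hostname parts from the end, and the .0 suffix keeps the literal dot unambiguous) - note there is no per-host AssignUserId or php_admin_value:

        UseCanonicalName Off
        # sub.example.com -> /home/webspaces/example.com/subdomains/sub
        VirtualDocumentRoot "/home/webspaces/%-2.0.%-1.0/subdomains/%1.0"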

    Read the article

  • Group Policy Task Schedule deployed to User Configuration not working, works when in Computer Configuration?

    - by user80130
    I added a Scheduled Task on my Windows 2008 R2 domain controller in the Group Policy Manager, under MyDomain Policy > User Configuration > Preferences > Control Panel Settings > Scheduled Tasks. It's a basic task, like starting Notepad when the user unlocks his workstation. This should show up in the client workstation's Task Scheduler, but it doesn't - no errors or anything like that. If I use "Computer Configuration" instead of "User Configuration", the task appears and I'm able to run it. I've tried gpupdate /force followed by gpresult and checked the report, but it doesn't contain the GPO Scheduled Task I created (again, it does show up when using "Computer Configuration"). The issue is that I have to run the application in the current user's context, and only on a specific Employee OU, thereby limiting the task to employee workstations and not applying it when the same employee logs on to internal servers and such. The primary domain controller is Windows 2008 R2; the workstations are Windows 7 Enterprise. What am I doing wrong?

    Read the article

  • Which is the best open-source IT infrastructure management software?

    - by karthick
    I am looking for open-source IT infrastructure management software. It should be able to monitor and manage servers and PCs, network devices, printers, etc., and it should have patch management, software inventory, and user-activity data. I am planning to run it on a Linux server, and it should be able to manage both Linux and Windows machines. I have found many while googling, but I don't know which is the best one. So can anyone please suggest the best one for what I'm looking for? Thanks... Your help is greatly appreciated.

    Read the article

  • Memory management (segmentation and paging) in 80286 and 80386: How does it work?

    - by Andrew J. Brehm
    I found lots of Web sites and books explaining how memory management worked on the 8086 and later x86 CPUs in Real Mode. I understand, I think, how two 16-bit values, segment address and offset, are combined to get a linear 20-bit physical address (shift the segment four bits to the left, then add the offset; segments are 64K and start every 16 bytes). But I couldn't find any good Web sites or books that explain how memory management works in Protected Mode, specifically the differences between the 80286 and the 80386. Can anyone point me to a good Web site or book (or explain it right here)? (For extra credit, i.e. an upvote: how does it work in Long Mode?)
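
    The real-mode rule described above as a quick worked sketch (the segment:offset values are illustrative):

        # real-mode translation: physical = (segment << 4) + offset
        segment, offset = 0x1234, 0x5678
        physical = (segment << 4) + offset
        print(hex(physical))  # 0x179b8 -- fits in 20 bits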

    Read the article

  • Mostly offsite asset management (laptops/smartphones) - what is a good SaaS based solution?

    - by Jack T
    Most of our company's assets are offsite; everyone works either at home or onsite at a customer. Most asset management/audit/remote control software concentrates on company-LAN-based assets. We don't need an NMS, as we use OpenNMS on the internal network. I was thinking of something like Altiris Client Management Suite, but since everything is connected to the internet, a SaaS-based solution sounds like the way to go. LogMeIn Central looks OK but not that comprehensive. What do you guys use?

    Read the article

  • Mirror DFS configuration data between 2 servers/ sites

    - by Retro69
    I have one Windows 2008 R2 server in Site A running domain-integrated DFS in 2008 mode, with a single namespace and a large number of DFS targets, all configured to point to shares on our NetApp SAN.

    Step 1: I want to copy this configuration data across to a 2012 server in Site A, preserving all the configuration data.

    Step 2: I need to mirror this configuration to a second server in Site B so we don't have a single point of failure for the DFS namespace. For example, a user in Site B would "connect" to the DFS server in Site B, but if that site was down, it would attempt to connect to the server in Site A, and vice versa.

    Note that I'm not interested in replicating actual data here, just the configuration; our NetApp SANs have mirroring, which takes care of that. Is this possible? Many thanks.

    Read the article
