Search Results

Search found 5611 results on 225 pages for 'contained databases'.


  • Would Using a PHP Framework Be Beneficial in My Context?

    - by Fractal
    I've just started work at a small start-up company that mainly uses PHP to develop its front-end apps. I had no PHP experience before joining, and this has led to my apps becoming large pieces of spaghetti code. I essentially started by adding code to implement an initial feature, and then continued to hack in more code to implement further features, without much thought for the overall design. The apps themselves output XML to render on small mobile devices. I recently started looking into frameworks that I could use. I reckon an advantage would be that they seem to force developers to modularise their programs using good-practice design patterns. This seems great for someone in my position. The extra functions they provide (for example, interfacing with databases in a way that makes SQL injection impossible) would be very useful too. The downside I can see is that there will be a lot of overhead for me in terms of the time taken to learn the framework itself (while still getting to grips with PHP). I'm also worried that it will be overkill for the scale of the apps we develop. They tend to be programs that interface with a fairly simple back-end DB and generate about 5 different XML screens, probably around 1 or 2 thousand lines of code. The time it takes just to configure the frameworks may not be worth it. The final problem I can see is that developers in the company, who have to go over my code and who do not know the PHP framework I may use, will have a much harder time understanding it. Given those pros and cons, I'm still not sure what the best course of action would be, so any advice would be greatly appreciated.

    Read the article

  • Controlling what data populates STAR

    - by user10747017
    Beginning with the Primavera Reporting Database 2.2 / P6 Analytics 1.2 release, the first release to support the P6 Extended Schema, a new ability was added to filter which projects are included during an ETL run. In previous releases, all projects were included in an ETL run. Additionally, all projects with the publication option enabled are included in the ETL run by default. Because the reporting needs of the P6 Extended Schema are different from those of STAR, you can define a filter that limits the data included in the STAR schema. For example, your STAR schema can be filtered to include only the projects in a specific portfolio, or all projects with a project code assignment of 'For Analytics.' Any criteria that can be defined in a WHERE clause and added to a view can be used to filter the projects included in the STAR schema. I highly suggest this approach when dealing with large databases: unnecessary projects can cause the Extract portion of the ETL process to take longer. A table in STAR called etl_projectlist is the key to which projects are targeted during the ETL process. To set up the filter, perform the following steps:

    1. Connect to your Primavera P6 Project Management database as Pxrptuser (extended schema owner) and create a new view:

        create or replace view star_project_view
        as
        select PROJECTOBJECTID objectid
        from projectportfolio pp, projectprojectportfolio ppp
        where pp.objectid = ppp.PROJECTPORTFOLIOOBJECTID
        and pp.name = 'STAR Projects'

    The main field that MUST be selected in the view is the projectobjectid. Selecting any other field besides the projectobjectid will cause the view to be invalid and it will not work. Any WHERE clause can be used, but projectobjectid is the key.

    2. In your STAR installation directory, go to the \res folder and edit the staretl.properties file. Here you define the view to be used. Add the following line, or update it if it exists:

        star.project.filter.ds1=star_project_view

    3. When running the staretl.cmd or staretl.sh process, the database link to Pxrptuser is accessed and this view is used to populate the etl_projectlist table with the appropriate projectobjectids, as defined in the view created in step 1 above.

    Read the article

  • Real world pitfalls of introducing F# into a large codebase and engineering team

    - by nganju
    I'm the CTO of a software firm with a large existing codebase (all C#) and a sizable engineering team. I can see how certain parts of the code would be far easier to write in F#, resulting in faster development time, fewer bugs, easier parallel implementations, etc.: basically overall productivity gains for my team. However, I can also see several productivity pitfalls of introducing F#, namely: 1) Everyone has to learn F#, and it's not as trivial as switching from, say, Java to C#. Team members who have not learned F# will be unable to work on F# parts of the codebase. 2) The pool of hireable F# programmers, as of now (Dec 2010), is practically non-existent; search various software engineer resume databases for "F#" and well under 1% of resumes contain the keyword. 3) Community support as of now (Dec 2010) is much thinner. You can google almost any problem in C# and find someone who has already dealt with it; not so with F#. Third-party tool support (NUnit, ReSharper, etc.) is also sketchy. I realize that this is a bit of a Catch-22, i.e. if people like me don't use F# then the community and tools will never materialize, etc. But I've got a company to run, and I can be cutting edge but not bleeding edge. Any other pitfalls I'm not considering? Or does anyone care to rebut the pitfalls I've mentioned? I think this is an important discussion and would love to hear your counter-arguments in this public forum, which may do a lot to increase F# adoption by industry. Thanks.

    Read the article

  • Spotlight on an ACE: Edwin Biemond

    - by jeckels
    Edwin Biemond is an active member of the ACE community, having worked with Oracle's development tooling and database technologies since 1997. Since then, Edwin has become an expert in many of Oracle's middleware technologies as well, including WebLogic and SOA. In fact, Edwin has become so prolific that he was named the Java Developer of the Year in 2009. Edwin hails from the Netherlands, where he is an architect at the company Amis, and is also a co-author of the OSB Development Cookbook. He's a proven expert in ADF, JSF, messaging (Edifact / ebXML), Enterprise Service Bus, web services, and tuning of application servers and databases. Recently, Edwin posted a blog on the road map of WebLogic 12c, going over salient features and what the future looks like for Fusion Middleware and the Application Server areas - it's well worth a read, so give it a look. A snippet: WebLogic 12.1.3 will be the first version for many FMW 12c products like Oracle SOA Suite 12c and will probably come in one big jar. 12.1.3 & 12.1.4 will add extra features and improvements to Elastic JMS & Dynamic Clusters. Elastic JMS in 12.1.3 will support Server Migration so you can't lose any JMS messages. In 12.1.4, Dynamic Clusters will have support for auto-scaling based on thresholds based on user-defined metrics. WebLogic 12.1.4 will also have an API to control the Dynamic Clusters; this way we can easily program when to stop, start, or remove nodes from a dynamic cluster. Further, Edwin is hosting a session on getting your FMW environment up and running in less than 10 minutes, using popular tooling to configure and manage the many FMW components you have in your technology stack. Register now for this virtual developer day to see more. We thank Edwin for his commitment to being an ACE, his work on his blog, his social media publishing, and his overall commitment to helping other technologists be even more successful with Oracle products. Follow Edwin on his blog, Twitter, Facebook, LinkedIn, or read his ACE Profile.

    Read the article

  • Authentication system brainstorm

    - by gansbrest
    Hi. We've got multiple small websites (microsites) and one main high-traffic one with a big user base. The requirement is to build an authentication system that allows users to log in with the same identity across the network. All websites run on different domains, are powered by the Drupal 6 CMS, and have separate databases (so sharing tables with a prefix is not an option, plus it creates a huge mess in the db). Here is the set of core requirements I came up with: users should be able to log in with the same credentials to all sites within the network; user data is shared between the main site (storage) and all microsites within the network; data is synchronized across the network when a user changes it (updating an email or password, for example); the login/registration process is seamless and consistent; users can register on any of the sites across the network and use that identity to log in later on. In the future there might be a need to add OpenID authentication options. Basically we are looking at something similar to what Stack Exchange does, but I'm not sure if they have a central user base or not. I was thinking about a custom solution with two parts (modules): one stored on the main site, storing user data and responding to requests from clients; the second placed on each microsite, sending requests to the master. Some kind of client-server setup. One of the complications I see right away is #3, data synchronization across the network. I just don't want to reinvent the wheel, and maybe some work has already been done in this direction. Looking forward to your ideas on how to approach this project. EDIT: We use a MySQL database
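    To make the master-storage idea concrete, here is a minimal sketch of what the central schema might look like in MySQL. All table and column names are assumptions for illustration, not anything Drupal provides:

        -- Central identity store on the main site (illustrative only)
        CREATE TABLE central_user (
            uid        INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            mail       VARCHAR(254) NOT NULL UNIQUE,
            pass_hash  VARCHAR(255) NOT NULL,  -- store a salted hash, never plaintext
            updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
                       ON UPDATE CURRENT_TIMESTAMP
        ) ENGINE=InnoDB;

        -- Map each microsite's local account to its central identity,
        -- so microsites can resolve logins against the master.
        CREATE TABLE site_user_map (
            site     VARCHAR(64)  NOT NULL,
            site_uid INT UNSIGNED NOT NULL,
            uid      INT UNSIGNED NOT NULL,
            PRIMARY KEY (site, site_uid),
            FOREIGN KEY (uid) REFERENCES central_user (uid)
        ) ENGINE=InnoDB;

    The updated_at column gives the microsites a cheap way to detect changes (requirement #3): each client can poll for rows newer than its last sync.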

    Read the article

  • Backup those keys, citizen

    - by BuckWoody
    Periodically I back up the keys within my servers and databases, and when I do, I blog a reminder here. This should be part of your standard backup rotation: back the keys up often enough to have a current copy at hand, and again whenever they change. The first key you need to back up is the Service Master Key, which each instance already has built in. You do that with the BACKUP SERVICE MASTER KEY command, which you can read more about here. The second set of keys is the Database Master Keys, stored per database, if you've created one. You can back those up with the BACKUP MASTER KEY command, which you can read more about here. Finally, you can use the keys to create certificates and other keys; those should also be backed up. Read more about those here. Anyway, the important part here is the backup. Make sure you keep those keys safe!
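    For reference, a minimal T-SQL sketch of all three backups. The file paths, database name, certificate name, and passwords are placeholders, not anything prescribed by SQL Server:

        -- 1. Service Master Key (one per instance)
        BACKUP SERVICE MASTER KEY
            TO FILE = 'E:\KeyBackups\ServiceMasterKey.smk'
            ENCRYPTION BY PASSWORD = '<StrongPassword1>';

        -- 2. Database Master Key (one per database, if created)
        USE MyDatabase;
        BACKUP MASTER KEY
            TO FILE = 'E:\KeyBackups\MyDatabase_MasterKey.dmk'
            ENCRYPTION BY PASSWORD = '<StrongPassword2>';

        -- 3. A certificate created from those keys, with its private key
        BACKUP CERTIFICATE MyCert
            TO FILE = 'E:\KeyBackups\MyCert.cer'
            WITH PRIVATE KEY (
                FILE = 'E:\KeyBackups\MyCert.pvk',
                ENCRYPTION BY PASSWORD = '<StrongPassword3>');

    Store the backup files and their passwords somewhere other than the server they protect.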

    Read the article

  • Is ASP.NET MVC too much overhead for smaller projects?

    - by Alexander Ryan Baggett
    I will be honest: I don't really know much about MVC other than the stuff you can read online in 5 minutes. Unfortunately that doesn't really tell me whether it's suited to smaller projects or not. I also read this related question and its chosen answer, but the business perspective is not a concern for me in this case, as I am the only one making it. The next answer proceeds to say why it is more flexible. Sure, that's great. But my question is, again, whether it's an ideal choice for a small project. For example, I would rather use WinForms to make a simple mockup of a small desktop program than do it in WPF, because of the overhead of custom styling. So I have a project that will essentially have about 6-8 pages that read Excel files and user input, use that to pull a bit of data from databases, and output resulting Excel files. I will be the only one working on this project. If I used WebForms I would expect it to take no more than 2-3 weeks. Now, I am 100% comfortable with WebForms, and I know it's easy to do a small project in WebForms. But I have only heard good things about MVC, so I am seriously considering it.

    Read the article

  • SQL – Download FREE Book – Data Access for Highly Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence

    - by Pinal Dave
    Recently I was preparing for Big Data and I ended up on a very interesting read for everybody. It was created by Microsoft, and it is indeed a fantastic read in my opinion. It took me some time to read this entire book, but it was worth it, as it tries to answer two very interesting questions related to NoSQL. Here is the abstract from the book: Organizations seeking to use a NoSQL database are therefore faced with a twofold challenge: • Which NoSQL database(s) best meet(s) the needs of the organization? • How does an organization integrate a NoSQL database into its solutions? As I kept reading the book, I found it very interesting and informative. I suggest that if you have time this weekend, you download the book and read it. This guide focuses on the most common types of NoSQL database currently available, describes the situations for which they are most suited, and shows examples of how you might incorporate them into a business application. The guide summarizes the experiences of a fictitious organization named Adventure Works, which implemented a solution that comprised an assortment of different databases. Download Data Access for Highly Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence. While we are talking about Big Data and NoSQL, do not forget to check out my blog tomorrow, as I am going to talk about the same subject and it will be very interesting. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, NoSQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Enabling support of EUS and Fusion Apps in OUD

    - by Sylvain Duloutre
    Since the 11gR2 release, OUD supports Enterprise User Security (EUS) for database authentication and also Fusion Apps. I plan to blog on that soon. Meanwhile, the R2 OUD graphical setup does not let you configure both EUS and Fusion Apps support at the same time. However, it can be done manually using the dsconfig command line. The simplest way to proceed is to select EUS from the setup tool, then manually add support for Fusion Apps using dsconfig with the commands below.

    First, create an FA workflow element with the EUS workflow element as its next element:

        dsconfig create-workflow-element \
            --set enabled:true \
            --set next-workflow-element:Eus0 \
            --type fa \
            --element-name faWfe

    Then modify the workflow so that it starts from your FA workflow element instead of Eus0:

        dsconfig set-workflow-prop \
            --workflow-name userRoot0 \
            --set workflow-element:faWfe

    Note: the configuration changes may differ slightly if multiple databases/suffixes are configured on OUD.

    Read the article

  • 50% off ASP.NET hosting

    - by Fabrice Marguerie
    I haven't blogged for a long time because I'm busy working on an exciting new project. It's too early to tell you more; I'll provide details in a few months. Meanwhile, I wanted to write a quick post to share an excellent offer with you. It's that time of the year when you can get deals on many things, including web hosting. I'd like to remind you about Arvixe, a great web hosting provider for Windows (for ASP.NET) and Linux. For 48 hours, between Thursday November 24th at 00:00 PST (08:00 GMT/UTC) and Friday November 25th, Arvixe will be offering 50% off all of their shared hosting products. This applies to all new accounts, for life (as long as you continue to renew the account)! I've been using Arvixe for my websites for more than a year and a half now, and I highly recommend them. Here is an overview of what I get for a very good price:

        Unlimited diskspace
        Unlimited data transfer
        Unlimited domains
        Unlimited POP3 and IMAP mailboxes
        Unlimited SQL Server 2008 databases
        Unlimited MySQL databases
        .NET 1.1, 2, 3.5 and 4
        Dedicated application pools
        Full trust
        IIS 7
        Daily backups
        and more...

    And now, you can get all that for half the price. Just go to Arvixe.com and secure your own hosting account now by using the coupon code "Black Friday" during checkout. Disclaimer: the links to Arvixe are affiliate links that may bring me some money home if you sign up. Still, I recommend Arvixe because I use them and I'm very happy with what they offer.

    Read the article

  • Connecting Clinical and Administrative Processes: Oracle SOA Suite for Healthcare Integration

    - by Mala Ramakrishnan
    One of the biggest IT challenges facing today’s health care industry is the difficulty finding reliable, secure, and cost-effective ways to exchange information. Payers and providers need versatile platforms for enterprise-wide information sharing. Clinicians require accurate information to provide quality care to patients while administrators need integrated information for all facets of the business operation. Both sides of the organization must be able to access information from research and development systems, practice management systems, claims systems, financial systems, and many others. Externally, these organizations must share claims data, patient records, pharmaceutical data, lab reports, and diagnostic information among third party entities—all while complying with emerging standards for formatting, processing, and storing electronic health records (EHR). Service-oriented architecture (SOA) enables developers to integrate many types of software applications, databases and computing platforms within a particular health network as well as with community, state, and national health information exchanges. The Oracle SOA Suite for healthcare integration is designed to provide healthcare organizations with comprehensive integration capabilities within a unified middleware platform, as well as with healthcare libraries and templates for streamlining healthcare IT projects. It reduces the need for specialized skills and enforces an enterprise-wide view of critical healthcare data.  Here is a new white paper that details more about this offering: Oracle SOA Suite for Healthcare Integration

    Read the article

  • High Performance SQL Views Using WITH(NOLOCK)

    - by gt0084e1
    Every now and then you find a simple way to make everything much faster. We often find customers creating data warehouses or OLAP cubes even though they have a relatively small amount of data (a few gigs) compared to their server memory. If you have more server memory than the size of your database or working set, nearly any aggregate query should run in a second or less. In some situations there may be high traffic from the transactional application, and SQL Server may wait for several other queries to run before giving you your results. The purpose of this is to make sure you don't get two versions of the truth. In an ATM system, you want to give the bank balance after the withdrawal, not before, or you may get a very unhappy customer. So by default, databases are rightly very conservative about this kind of thing. Unfortunately this split-second precision comes at a cost. The performance of the query may not be acceptable by today's standards because the database has to maintain locks on the server. Fortunately, SQL Server gives you a simple way to ask for the current version of the data without the pending transactions. To better facilitate reporting, you can create a view that includes these directives.

        -- Columns are listed explicitly because a view cannot expose two
        -- columns with the same name (CategoryID appears in both tables).
        CREATE VIEW CategoriesAndProducts AS
        SELECT dbo.Categories.CategoryID,
               dbo.Categories.CategoryName,
               dbo.Products.ProductID,
               dbo.Products.ProductName
        FROM dbo.Categories WITH(NOLOCK)
        INNER JOIN dbo.Products WITH(NOLOCK)
            ON dbo.Categories.CategoryID = dbo.Products.CategoryID

    In some cases queries that were taking minutes end up taking seconds. Much easier than moving the data to a separate database, and it's still pretty much real time, give or take a few milliseconds. You've been warned not to use this for bank balances though. More from Data Stream
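    As a rough sketch, the same effect is also available per session instead of per table hint; the isolation level below is standard SQL Server, though the query's column names are assumed from the Northwind-style schema above:

        -- Session-level equivalent of the per-table NOLOCK hints:
        -- all reads on this connection ignore locks (dirty reads possible).
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

        SELECT CategoryName, COUNT(*) AS ProductCount
        FROM CategoriesAndProducts
        GROUP BY CategoryName;

    The trade-off is the same as with NOLOCK: you may read rows from transactions that later roll back, which is exactly why the author warns against using this for bank balances.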

    Read the article

  • PHP - Making CMS (architecture, etc.)

    - by UnknownProgramer
    I'm at the stage of planning a new CMS. I previously used WordPress and other open-source CMSs for my clients, but I always had to write new modules and even mess with the code in order to do certain things, which, as you understand, is not the best thing to do. So I finally decided to make my own CMS that works the way I need. But before I start, I would like to think it through carefully to ensure that I won't need to rewrite it from the ground up just because I forgot to include some feature in the architecture or did it wrong. I would like to hear your thoughts, and most importantly I would like you to suggest some articles or books on the subject, especially on the architecture of such systems. I googled a few good books, but that is not enough. The way I'm planning to do it: PHP5, completely OOP, module architecture. You make a page and add any modules you need there, but modules are local to a page rather than global, so you can make two pages with the same module and the content will be different if you set different "content IDs" for the two entities; or it can be set the same, so two pages share the content of the modules placed there. I also plan to support an online storage web service (like Amazon S3) for images and files, so I would like to hear your thoughts on that too. I have not yet decided how to store language data; I don't want to use the DB for that, but I haven't decided yet. I also think I will support other databases with a global DB class and separate DB wrappers for MySQL and other databases. And, well, I would appreciate any other information you can provide on the subject.

    Read the article

  • From TFS to Git

    - by Saeed Neamati
    I'm a .NET developer and I've used TFS (Team Foundation Server) as my source control software many times. Good features of TFS are: good integration with Visual Studio (so I do almost everything visually, no console commands); an easy check-out, check-in process; easy merging and conflict resolution; easy automated builds; branching. Now I want to use Git as the backbone, repository, and source control of my open source projects. My projects are in C#, JavaScript, or PHP, with MySQL or SQL Server databases as the storage mechanism. I just used github.com's help for this purpose: I created a profile there and downloaded a GUI for Git. Up to this point it was easy, but now I'm almost stuck at going any further. I just want to do some simple (really simple) operations: creating a project on Git and mapping it to a folder on my laptop; checking out/checking in files and folders; resolving conflicts. That's all I need to do now. But it seems that the GUI is not that user-friendly. I expect the GUI to have a Connect To... button or something like that, and then I expect a list of projects to be shown. When I choose one, I expect to see the list of files and folders of that project, just like exploring your TFS project in Visual Studio. Then I want to be able to right-click a file and select check-in... or check-out, and stuff like that. Do I expect too much? What should I do to easily use Git just like TFS? What am I missing here?

    Read the article

  • Essbase Excel Add in - S.o.D.

    - by THE
    Sadly, another long-lasting friend is about to be buried in the wet, cold data void that holds past programs (... and AOL CDs). The Essbase Excel Add-in is about to be discontinued (see Doc ID 1466700.1) in January '13. The (already out) version 11.1.2.2.x of the Excel Add-in must be considered the last release of this particular program (unless the guys from Applied OLAP bring out their own version next to the OpenOffice add-in that they already sport). As expected, SmartView achieved parity in functionality with Release 11.1.2.1.102, and ever since then it was just a question of time until our old buddy got the shoe. For all users out there like me who have known and worked with the Excel Add-in for the last decade(s), this is a loss. SmartView may have functionality parity, and may altogether be the stronger, open technology, capable of Planning forms, connection to HFM, etc. But (from my personal point of view) it will not give the end user the same direct access to his databases, with nothing between him and his Essbase server. Of course it was to be expected that only one of the two could survive, and it was obvious that this would be SmartView, so this does not come as a surprise. Still. A minute for an old friend . . . . . . Thank you, and let us look forward! Unless you had other plans for the upcoming season, why not spend it investigating SmartView for your Essbase interaction needs. We hear that the days between Christmas and New Year hold unlimited potential to test out new things. Or take it as a New Year's resolution: "I will switch to SmartView at the earliest possible moment".

    Read the article

  • Today @ OOW: Identity Management for the SoMoClo world

    - by B Shashikumar
    Today at OpenWorld, we have a very interesting lineup of Identity Management sessions that discuss how to extend identity management securely to cloud, mobile, and social ecosystems. Here are 3 of the can't-miss identity management sessions today: Identity Management and the Cloud: Security is regularly identified as the #1 barrier to cloud service adoption. Oracle Identity Management is designed to help customers extend and connect core identity services to SaaS applications and systems. This session explores how organizations are using Oracle Identity Management with cloud services and how some customers are offering identity management as a cloud service. Real-time External Authorization for Applications, Middleware and Databases: Externalization of authorization is key to manageability and audit. This session covers enterprise-wide authorization solution deployment best practices and real-world examples of using Oracle Entitlements Server—the one-stop standards-compliant authorization solution—for middleware, applications, and data. Delivering Secure WiFi on the Tube as an Olympics Legacy from London 2012: In this session, Virgin Media, the U.K.’s first combined provider of broadband, TV, mobile, and home phone services, shares how it is providing free secure Wi-Fi services to the London Underground, using Oracle Virtual Directory and Oracle Entitlements Server, leveraging back-end legacy systems that were never designed to be externalized. As an Olympics 2012 legacy, the Oracle architecture will form a platform to be consumed by other Virgin Media services such as video on demand. Here is the complete lineup of Identity Management sessions today at OOW.

    Read the article

  • EPM Architecture: Reporting and Analysis

    - by Marc Schumacher
    Reporting and Analysis is the basis for all Oracle EPM reporting components. Through the Java-based Reporting and Analysis web application deployed on WebLogic, users can browse reports for all kinds of Oracle EPM reporting components. Typical users access the web application by browser through Oracle HTTP Server (OHS). The Reporting and Analysis web application talks to the Reporting and Analysis Agent using the CORBA protocol on various ports. All communication from the web and application layer to the repository databases (EPM System Registry and the Reporting and Analysis database) uses JDBC. As an additional data store, the Reporting and Analysis Agent uses the file system to lay down individual reports. While the reporting artifacts are stored on the file system, the folder structure and report-based security information are stored in the relational database. The file system can be either local or remote (e.g. a network share or network file system). If an external user directory is used, the Reporting and Analysis services also communicate with this directory. The next post will cover WebAnalysis.

    Read the article

  • Why is Javascript used in MongoDB and CouchDB instead of other languages such as Java, C++?

    - by startup007
    I asked this question on SO but was advised to try here, so here it goes: My understanding of JavaScript so far has been that it is a client-side language that captures events and makes a web page dynamic. But on reading the comparison between MongoDB and CouchDB I noticed that both use JavaScript, which makes me wonder about the reason for choosing JavaScript over other conventional languages. I guess I am trying to understand the role of JavaScript and its advantages over other languages. Update: I am not asking about the languages/drivers supported by the two databases. The comparison says: "Both CouchDB and MongoDB make use of Javascript. CouchDB uses Javascript extensively including in the building of views. MongoDB also supports running arbitrary javascript functions server-side and uses javascript for map/reduce operations." My lack of understanding pertains to why JavaScript is being used at all for the back-end work. Why is it preferred for building views in CouchDB, or for map/reduce operations? Why weren't C/C++ or Java used? What are the advantages of using JavaScript for such back-end work?

    Read the article

  • Do you store mysql exports in your version control tool for reverting to in event of error?

    - by Rob
    We run an internal web server with in-house software to run a manufacturing line. When new product features are to be added, either or both of the following occur: (1) changes to the in-house server software may be required to support them; these are for significant changes in functionality, being code-driven. (2) changes to the MySQL database, with new entries for the part numbers; these are for smaller changes, configurations, and changes to already existing values and parameters. Such changes don't require code changes, and ideally we'd want our changes to be here rather than in item 1. Item 1 is version controlled in Subversion, so previous revisions can be referred to for rolling back in the event of problems introduced in the latest revision. But what about changes to the MySQL database? We have quality processes to ensure that such changes are error-free, but there is always a chance that errors can slip through, e.g. a mistake in data entry, or faults in the code that uses MySQL corrupting the database, etc. We have an automated backup every 6 hours, but what if we want more manually defined checkpoints in between those intervals? We could use the same backup system, but I wondered if folks here use other methods to store previous states of databases, e.g. exporting the database as a plain-text SQL dump; at least with this method it would be possible to see diffs, e.g. in Beyond Compare, for troubleshooting. Thoughts?

    Read the article

  • Cleaning your BizTalk Build Server

    - by Michael Stephenson
    Just a little note for myself, this one. At one of my customers, where it is still BizTalk 2006, one of the build servers is intermittently getting issues, so I wanted to run a script periodically to clean things up a little. The below script is an example of how you can stop Cruise Control and all of the BizTalk services, then clean the BizTalk databases and reset the backup process, and then kick everything off again. This should keep the server a little cleaner and reduce the number of builds that occasionally fail for ad hoc environmental issues.

        REM Server Clean Script
        REM ===================
        REM This script is run to move the build server back to a clean state

        echo Stop Cruise Control
        net stop CCService

        echo Stop IIS
        iisreset /stop

        echo Stop BizTalk Services
        net stop BTSSvc$<Name of BizTalk Host>
        REM <Repeat for other BizTalk services>

        echo Stop SSO
        net stop ENTSSO

        echo Stop SQL Job Agent
        net stop SQLSERVERAGENT

        echo Clean Message Box
        sqlcmd -E -d BizTalkMsgBoxDB -Q "Exec bts_CleanupMsgbox"
        sqlcmd -E -d BizTalkMsgBoxDB -Q "Exec bts_PurgeSubscriptions"

        echo Clean Tracking Database
        sqlcmd -E -d BizTalkDTADb -Q "Exec dtasp_CleanHMData"

        echo Reset TDDS Stream Status
        sqlcmd -E -d BizTalkDTADb -Q "Update TDDS_StreamStatus Set lastSeqNum = 0"

        echo Force Full Backup
        sqlcmd -E -d BizTalkMgmtDB -Q "Exec sp_ForceFullBackup"

        echo Clean Backup Directory
        del E:\BtsBackups\*.* /q

        echo Start SSO
        net start ENTSSO

        echo Start SQL Job Agent
        net start SQLSERVERAGENT

        echo Start BizTalk Services
        net start BTSSvc$<Name of BizTalk Host>
        REM <Repeat for other BizTalk services>

        echo Start IIS
        iisreset /start

        echo Start Cruise Control
        net start CCService

    Read the article

  • In search of database delivery practitioners and enthusiasts

    - by Claire Brooking
    We know from speaking with many of you at trade shows and user groups that database delivery is not a factory production line. Because people have their say at every step, from planning and evaluation through quality control and disaster mitigation, successful database deployment is a carefully managed course of action. With so many factors involved at every stage, we would love to find a way for our software to help out by simplifying processes, speeding them up, or joining together the people and the steps that make it all happen. We're hoping our new research group for database delivery (SQL Server and Oracle) will help us understand the views and experiences of those of you out there in the trenches managing database changes. As part of our new group, we'll be running a variety of research sessions, including surveys and phone interviews, over the coming months. If you have opinions to share on Continuous Integration or Continuous Delivery for databases, we'd love to hear from you. Your feedback really will count as the product teams at Red Gate build their plans. For some of our more in-depth sessions, we'll also be offering participants an Amazon voucher as a thank-you for your time. If you're not yet practising automated database deployment processes, but are contemplating or planning it, please do consider joining our research group too. If you'd like to sign up to the group and find out more, please fill in a quick form online, and we'll be in touch to let you know about new research opportunities you might be interested in. We look forward to hearing your stories!

    Read the article

  • Database Backup History From MSDB in a pivot table

    - by steveh99999
    I knocked up a nice little query to display backup history for each database in a pivot-table format. I wanted to display the most recent full, differential, and transaction log backup for each database. Here's the SQL:

        WITH backupCTE AS
        (
            SELECT name,
                   recovery_model_desc,
                   d AS 'Last Full Backup',
                   i AS 'Last Differential Backup',
                   l AS 'Last Tlog Backup'
            FROM
            (
                SELECT db.name, db.recovery_model_desc, type, backup_finish_date
                FROM master.sys.databases db
                LEFT OUTER JOIN msdb.dbo.backupset a
                    ON a.database_name = db.name
                WHERE db.state_desc = 'ONLINE'
            ) AS Sourcetable
            PIVOT (MAX(backup_finish_date) FOR type IN (D, I, L)) AS MostRecentBackup
        )
        SELECT * FROM backupCTE

    This gives one row per database showing the most recent backup of each type. With this query, I can then build up some straightforward checks to ensure backups are scheduled and running as expected. For example, the following logic can be used:

        WHERE [Last Full Backup] IS NULL
        -- the database has never been backed up

        WHERE [Last Tlog Backup] < DATEADD(mi, -60, GETDATE())
          AND recovery_model_desc <> 'SIMPLE'
        -- transaction log not backed up in the last 60 minutes

        WHERE [Last Full Backup] < DATEADD(dd, -1, GETDATE())
          AND [Last Differential Backup] < [Last Full Backup]
        -- no backup in the last day

        WHERE [Last Differential Backup] < DATEADD(dd, -1, GETDATE())
          AND [Last Full Backup] < DATEADD(dd, -8, GETDATE())
        -- no differential backup in the last day when the last full backup is over 8 days old
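    For instance, to turn one of those checks into a runnable alert query, replace the final SELECT of the CTE above with something like this (a sketch using the stale-log condition from the list):

        SELECT name, [Last Tlog Backup]
        FROM backupCTE
        WHERE [Last Tlog Backup] < DATEADD(mi, -60, GETDATE())
          AND recovery_model_desc <> 'SIMPLE';
        -- returns every online database in FULL/BULK_LOGGED recovery
        -- whose log has not been backed up in the last hour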

    Read the article

  • Automated Acceptance tests under specific contraints

    - by HH_
    This is a follow-up to my previous question, which was a bit general, so I'll be asking about a more precise situation. I want to automate acceptance testing on a web application. Briefly, this application allows the user to create contracts for subscribers, with two constraints: you cannot create more than one contract for a subscriber, and once a contract is created, it cannot be deleted (from the UI). Let's say TestCreate is a test case with tests for the normal creation of a contract. The constraints have introduced complexities to the testing process, mainly dependencies between test cases and test executions. Before we run TestCreate we need to make sure that the application is in a suitable state (the subscriber has no contract). If we run TestCreate twice, the second run will fail, since the state of the application will have changed. So we need to revert back to the initial state (i.e. delete the contract), which is impossible to do from the UI. More generally, after each test case we should guarantee that the state is reverted. And since, in this case, that is impossible to do from the UI, how do you handle this? Possible solution: I thought about taking a backup of the database in the state that I desire and, after each test case, running a script which drops the database and restores the backup. However, I find that too heavy to do for every single test case. In addition, what if some information is stored in files, or in multiple or inaccessible databases? My question: in this situation, what would an experienced tester do to write automated and maintainable tests? Thank you. More info: I'm trying to integrate the tests into a BDD framework, which I find to be a neat solution for test documentation and communication, but it does not solve this particular problem (it even makes it harder).
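    To make the restore-per-test idea concrete, here is a minimal T-SQL sketch, assuming the application happens to use SQL Server and a clean baseline backup was taken once up front (the database name and path are placeholders):

        -- Revert the application database to the clean baseline between
        -- test cases; WITH REPLACE overwrites the current state.
        USE master;
        ALTER DATABASE ContractsDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        RESTORE DATABASE ContractsDb
            FROM DISK = 'E:\TestBaselines\ContractsDb_clean.bak'
            WITH REPLACE;
        ALTER DATABASE ContractsDb SET MULTI_USER;

    As the poster notes, this only covers state held in that one database; files and external systems would need their own reset step.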

    Read the article

  • Discovering Your Project

    - by Tim Murphy
    The discovery phase of any project is both exciting and critical to the project's success. There are several key points that you need to keep in mind as you navigate this process. The first thing you need to understand is who the players in the project are and what their motivations are. Leaving a key stakeholder out of the resulting product is one of the easiest ways to doom your project to fail. The better the quality of the input you have at this early phase, the better chance you will have of creating a well-accepted deliverable. The next task you should tackle is to gather the goals for the project: specifically, what does the company expect to get for the money they are about to lay out? This seems like a common-sense task, but you would be surprised how many teams go straight to building the system. Even if you are following an agile methodology, I believe this is critical. Inventorying the resources that already exist gives you an idea of what you are going to have to build and what you can leverage at lower risk. This list should include documentation, servers, code repositories, databases, languages, security systems, and supporting teams. All of these are "resources" that can affect the cost and delivery schedule of your project. Finally, you need to verify what you have found and documented with the stakeholders and subject matter experts. Documentation that has not been reviewed is actually a list of assumptions, and we all know that assumptions are the mother of all screw-ups. If you give the discovery phase of your project the attention it deserves, your project has a much better chance of success. I would love to hear what other people find important for this phase. Please leave comments on this post so we can share the knowledge. del.icio.us Tags: Project discovery, documentation, business analysis, architecture

    Read the article

  • starting up with VPS or cloud hosting? [closed]

    - by FlyOn
    Possible Duplicate: How to find web hosting that meets my requirements? Summary: I want to start hosting my product. I'd like to register domains (at some point). I'm a Linux beginner. Thinking about scalability and price, am I better off on a VPS to get started, or would some form of cloud hosting be better (not being familiar with either)? Full question: I'm creating a product where people can create their own 3D representations of whatever data/info they have and (re)organise that data. The product is coming along beautifully in my local environment, but it's about time I start getting some form of hosting ready, and I could really use some advice on where/how to get started: I'd like people to be able to move/register their own domains on my server. I could start without this just to demo the product, but it would be the very first thing on the todo list. I'd like to automatically copy some files / install databases etc. for each domain. I probably want to see if I can let users manage their own subdomains at some point, but for now I'd like to start as simple as possible. I've always been on a Windows machine, so my Linux experience is quite basic. I really don't mind getting into it, but I'm thinking it's better to get my product out first of all and see where to go from there. Although... I'd like things to be scalable. If I set up some reseller VPS now which only scales to 100 domains or so, that means I have to set up something else / move again when I pass that level, or it means that I'm in trouble if I suddenly get lots of new customers... hmm. Finally, I need to start cheap. I'm putting all I have into starting this company and live on very little. So before I have any customers, 50 dollars a month is a fair bit, and 100 dollars a month may be too much. If anyone has some tips to help get me started I'd be really grateful.

    Read the article
