Search Results


  • NEW CERTIFICATION: Oracle Certified Expert, Oracle Database 11g Release 2 SQL Tuning

    - by Brandye Barrington
    Oracle Certification announces the release of the new Oracle Certified Expert, Oracle Database 11g Release 2 SQL Tuning certification. This certification is designed for developers, database administrators and SQL developers who are proficient at tuning SQL statements for efficiency. It covers core topics such as identifying and tuning inefficient SQL statements, using automatic SQL tuning, managing optimizer statistics on database objects, implementing partitioning, and analyzing queries. Beta testing for the Oracle Database 11g Release 2: SQL Tuning exam (1Z1-117) is now underway, so the exam is available at the greatly discounted rate of $50 USD. Visit pearsonvue.com/oracle and register for exam 1Z1-117. You can get all preparation details on the Oracle Certification website, including exam objectives, number of questions, time allotments, and pricing. QUICK LINKS:
    - Certification Track: Oracle Certified Expert, Oracle Database 11g Release 2 SQL Tuning
    - Certification Exam: Oracle Database 11g Release 2: SQL Tuning (1Z0-117)
    - Certification Website: About Beta Exams
    - Register Now: Pearson VUE

    Read the article

  • LibreOffice: Open in current program by default?

    - by David Oneill
    I often need to open pipe-delimited .txt files in LibreOffice Calc. However, once I have Calc running, if I do File > Open and select a spreadsheet with the extension .txt, it opens in Writer instead. Is there a way to tell LibreOffice to open the file in the current program, instead of having it pick which one to use? Barring that, is there a way to tell it to always use Calc for .txt files (when I open them from the Open dialog in Calc)? I still want them to open in GEdit like they currently do when I double-click them in Thunar.

    Read the article

  • Database Security: The First Step in Pre-Emptive Data Leak Prevention

    - by roxana.bradescu
    With WikiLeaks raising awareness around information leaks and the harm they can cause, many organizations are taking stock of their own information leak protection (ILP) strategies in 2011. A report by IDC on data leak prevention stated: Increasing database security is one of the most efficient and cost-effective measures an organization can take to prevent data leaks. By utilizing the data protection, access control, account management, encryption, log management, and other security controls inherent in the database management system, entities can institute first-level control over the widest range of protected information. As a central repository for unstructured data, which is growing by leaps and bounds, the database should be the first layer providing information leakage protection. Unfortunately, most organizations are not taking sufficient steps to protect their databases, according to a survey of the Independent Oracle User Group. For example, in most organizations any operating system administrator or database administrator can access all the data stored in the database, without any kind of auditing or monitoring. And it's not just administrators: database users can typically access the database with ad-hoc query tools from their desktops and bypass any application-level controls. Despite numerous regulations calling for controls to limit the powers of insiders, most organizations still put too many privileges in the hands of their employees. Time and time again these excess privileges have backfired. Internal agents were implicated in almost half of data breaches according to the Verizon Data Breach Investigations Report, and the rate is rising. Hackers have also taken advantage of these excess privileges very successfully, using stolen credentials and SQL injection attacks. But back to the insiders. Who are these insiders and why do they do it? In 2002, U.S. Secret Service (USSS) behavioral psychologists and CERT information security experts formed the Insider Threat Study team to examine insider threat cases that occurred in US critical infrastructure sectors, from both a technical and a behavioral perspective. A series of fascinating reports has been published as a result of this work. You can learn more by watching the ISSA Insider Threat Web Conference. So as your organization starts to look at data leak prevention over the coming year, start off by protecting your data at the source: your databases. IDC went on to say: Any enterprise looking to improve its competitiveness, regulatory compliance, and overall data security should consider Oracle's offerings, not only because of their database management capabilities but also because they provide tools that are the first layer of information leak prevention. Learn more about Oracle Database Security solutions and get the whitepapers, demos, tutorials, and more that you need to protect data privacy from internal and external threats.

    Read the article

  • #OOW 2012 : IaaS, Private Cloud, Multitenant Database, and X3H2M2

    - by Eric Bezille
    The title of this post is a summary of the four announcements made by Larry Ellison today during the opening session of Oracle Open World 2012... To learn what's behind X3H2M2, you will have to wait a little, as I will go in order, beginning with the IaaS - Infrastructure as a Service - announcement.

    Oracle IaaS goes Public... and Private... Starting in 2004 with Fusion development, Oracle Cloud was launched last year to provide not only SaaS applications, based on standard development, but also the underlying PaaS required to build the specifics and the required interconnections between applications, in and outside of the Cloud. Still, to cover the end-to-end Cloud Services spectrum, we had to provide an Infrastructure as a Service, leveraging our servers, storage, OS, and virtualization technologies, all "Engineered Together". This Cloud infrastructure was already available for our customers to rapidly build their own Private Cloud, either on SPARC/Solaris or x86/Linux... The second announcement made today takes that proposition a big step further: for cautious customers (like banks, or sensitive industries) who would like to benefit from the Cloud value of "as a Service" but don't want their data out in the Cloud, we propose to operate the same systems that provide our Public Cloud infrastructure - Exadata, Exalogic & SuperCluster - behind their firewall, in a Private Cloud model.

    Oracle 12c Multitenant Database This was also a major announcement made today about what's coming in Oracle Database 12c: the ability to consolidate multiple databases with no additional cost, especially in terms of memory needed on the server node, which is often THE limiting factor for consolidation. The principle can be compared to Solaris Zones: you have a Database Container, which "owns" the memory and database background processes, and "Pluggable" Databases inside this Database Container. This feature alone is a strong compelling event to evaluate Oracle Database 12c as soon as it becomes available, as it is a major step forward into true database consolidation with multitenancy on a shared (optimized) infrastructure.

    X3H2M2, enabling the new Exadata X3 in-Memory Database Here we are: X3H2M2 stands for X3 (the new version of Exadata, also announced today) Heuristic Hierarchical Mass Memory, providing the capability to keep most if not all of the data in the memory cache hierarchy. Of course, this is the major software enhancement of the new X3 Exadata machine, but as it is software, our current customers will be able to benefit from it on their existing systems by upgrading to the new release. But that's not the only thing we did with X3; at the same time we upgraded everything: the CPUs, adding more cores per server node (16 vs. 12, with the arrival of Intel E5 / Sandy Bridge); the memory, now 512GB per node; and the new Flash Fire card, bringing up to 22 TB of Flash cache. All of this - 4TB of RAM plus 22TB of Flash - is used cleverly by the X3H2M2 algorithm, not only for reads but also for writes, making a very big difference compared to a traditional storage flash extension. And what do those extra performances bring to you on an already very efficient system? Double the performance of the fastest storage array on the market today (including flash), while dividing your storage price by 10 at the same time... Something to consider closely these days...
    Especially since we also announced the availability of a new Exadata X3-2 eighth rack: a good starting point. As you have seen, a major opening for this year again, with true innovation. But that was not the only thing we saw today: before Larry's talk, Fujitsu introduced in more depth the upcoming new SPARC processor that they are co-developing with us. And Andrew Mendelsohn - Senior Vice President, Database Server Technologies - came on stage to explain that the next step after I/O optimization for the database with Exadata is to accelerate the database at the execution level by bringing functions into the SPARC processor silicon. All in all, to process more and more data... The big theme of the day... and of the Oracle User Groups conferences that were also happening today, where I had the opportunity to attend some interesting sessions on practical use cases of Big Data - one in finance and fraud profiling, and another on practical deployment of Oracle Exalytics for data analytics. In conclusion, one picture to try to size Oracle Open World... and you can understand why, with such rich content... and this is only the first day!

    Read the article

  • Document Links about Database Features on Exadata

    - by Bandari Huang
    DBFS on Exadata
    - Exadata MAA Best Practices Series - Using DBFS on Exadata (Internal Only)
    - Oracle® Database SecureFiles and Large Objects Developer's Guide 11g Release 2 (11.2) E18294-01
    - Configuring a Database for DBFS on Oracle Database Machine [ID 1191144.1]
    - Configuring DBFS on Oracle Database Machine [ID 1054431.1]
    - Oracle Sun Database Machine Setup/Configuration Best Practices [ID 1274318.1] - Verify DBFS Instance Database Initialization Parameters

    DBRM on Exadata
    - Exadata MAA Best Practices Series - Benefits and use cases with Resource Manager, Instance Caging, IORM (Internal Only)
    - Oracle® Database Administrator's Guide 11g Release 2 (11.2) E25494-02

    Read the article

  • Oracle Database In-Memory

    - by Mike.Hallett(at)Oracle-BI&EPM
    Larry Ellison unveiled the next major milestone in database technology, Oracle Database In-Memory, on June 10, 2014. Oracle Database In-Memory will be generally available in July 2014 and can be used with all hardware platforms on which Oracle Database 12c is supported. This option will accelerate database performance by orders of magnitude for analytics, data warehousing, and reporting while also speeding up online transaction processing (OLTP). It allows any existing Oracle Database-compatible application to automatically and transparently take advantage of columnar in-memory processing, without additional programming or application changes. Benefits:
    - Fast ad-hoc analytics without the need to pre-create indexes
    - Completely transparent to existing applications
    - Faster mixed-workload OLTP
    - No database size limit
    - Industrial-strength availability and security
    - Robustness and maturity of Oracle Database 12c
    To find out more, see Oracle Database In-Memory and the comment from Rittman Mead on the Oracle In-Memory Option launch... and I will let you know how this unfolds in regard to advantages for OBI11g, Exalytics and Big Data over the coming months.

    Read the article

  • Entity Framework: Connecting to a mdf user database file via localDB during script execution

    - by Marko Apfel
    Problem: If you run the “Generate database from model” wizard and execute the generated script, the destination database can end up being the wrong one (for instance, the master database of the SQL Server).

    Solution: To use your own attachable mdf user database, some connection information must be specified during script execution. Executing the script opens the “Connect to Server” dialog. Press “Options” and go to the second tab, “Connection Properties”. Select “Browse server” in the “Connect to database” dropdown box, confirm the information dialog with “Yes”, and in the following dialog choose your user database. Now the schema is created in the user database.

    Read the article

  • OpenBSD has open ports in default installation

    - by celil
    I have been considering replacing Ubuntu with OpenBSD to improve the security of my local server. I need to have ssh access to it, and I also need it to serve static web content - so the only ports I need open are 22 and 80. However, I scanned my server for open ports after installing OpenBSD 4.8 and enabling ssh and http in /etc/rc.conf:

        httpd_flags=""
        sshd_flags=""

    and discovered that it has several other open ports:

        Port Scan has started…
        Port Scanning host: 192.168.56.102
        Open TCP Port: 13 daytime
        Open TCP Port: 22 ssh
        Open TCP Port: 37 time
        Open TCP Port: 80 http
        Open TCP Port: 113 ident

    ssh (22) and http (80) should be open, as I enabled httpd and sshd, but why are the other ports open? Should I worry about them creating additional security vulnerabilities, and should they be open in a default installation?

    Read the article

  • MySQL maintenance - how to clear the buffer?

    - by Dougal
    We have a server running our web app (PHP / MySQL) which is SLOW. My predecessor says: "We used to do database maintenance, which used to clear the buffer, cache, and unwanted variables." I wonder what on earth he means by that statement. Does he mean a simple OPTIMIZE of the tables? Or the query cache? I understand MySQL but don't really know what he is describing. I would appreciate any pointers. Thanks.
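
    For what it's worth, a hedged guess at what such routine "maintenance" might have involved on a MySQL 5.x server (the table names are hypothetical, and the query cache statements only matter if query_cache_size > 0):

        -- defragment tables and refresh index statistics (hypothetical table names)
        OPTIMIZE TABLE posts, comments;
        -- defragment the query cache without emptying it
        FLUSH QUERY CACHE;
        -- or throw away all cached query results entirely
        RESET QUERY CACHE;
        -- close all open tables and flush the table cache
        FLUSH TABLES;

    None of these are a substitute for finding the slow queries themselves (the slow query log and EXPLAIN are usually the better starting point).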

    Read the article

  • Is there an industry standard for systems registered user permissions in terms of database model?

    - by EASI
    I have developed many applications with registered-user access for my enterprise clients. Over the years I have changed my way of doing it, especially because I have used many programming languages and database types along the way. Some approaches were not as simple as view/create/edit permissions for each module in the application, or as light as can/can't access a certain module. But now that I am developing a very extensive application with many modules and many kinds of users accessing them, I was wondering if there is a standard model for doing it, because I can already see that the simple way or the light way won't be enough.
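
    There is no single formal industry standard for application-level permissions, but role-based access control (RBAC, standardized as ANSI/INCITS 359) is the closest thing to one. A minimal sketch of the usual relational shape, with all names hypothetical:

        CREATE TABLE users       (user_id INT PRIMARY KEY, login VARCHAR(64) NOT NULL UNIQUE);
        CREATE TABLE roles       (role_id INT PRIMARY KEY, name VARCHAR(64) NOT NULL UNIQUE);
        CREATE TABLE permissions (perm_id INT PRIMARY KEY, name VARCHAR(64) NOT NULL UNIQUE); -- e.g. 'invoices.edit'

        -- users get roles, roles bundle permissions (both many-to-many)
        CREATE TABLE user_roles (
            user_id INT NOT NULL REFERENCES users(user_id),
            role_id INT NOT NULL REFERENCES roles(role_id),
            PRIMARY KEY (user_id, role_id)
        );
        CREATE TABLE role_permissions (
            role_id INT NOT NULL REFERENCES roles(role_id),
            perm_id INT NOT NULL REFERENCES permissions(perm_id),
            PRIMARY KEY (role_id, perm_id)
        );

    Checking whether a user may do something is then a join from users through user_roles and role_permissions, which scales from "can/can't access a module" up to fine-grained per-module view/create/edit rights.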

    Read the article

  • Which database to use with Quickly and PyGTK

    - by usher
    I'm writing an application using Quickly, PyGTK and Glade. This application should have a database connection (such as MySQL) for reading and writing data on a local or remote machine/server. I have MySQL installed on my machine, but when the app is released it will be installed on other Ubuntu machines, which may not have MySQL, and certainly not the same database with the required database name and structure... So my questions are:
    1. Is it a good choice to use MySQL as the database?
       1.2 If not, what is?
    2. Is it possible to embed MySQL or another database program during the installation from the Ubuntu Software Center?
       2.2 If it's possible: how? (any tutorial?)
    3. Where do I store secure data outside MySQL (or whatever) for connecting to the database every time a user launches the application?

    Read the article

  • New technical whitepaper on Database-as-a-Service

    - by Javier Puerta
    High Availability Best Practices for Database Consolidation - The Foundation for Database-as-a-Service. An Oracle White Paper, April 2014. This paper provides MAA best practices for database consolidation using Oracle Multitenant. It describes standard HA architectures that are the foundation for DBaaS. It is most appropriate for a technical audience: architects, directors of IT, and database administrators responsible for the consolidation and migration of traditional database deployments to DBaaS. The recommended best practices are equally relevant to any platform supported by Oracle Database, except where explicitly noted as an optimization or an example that applies only to Oracle Engineered Systems.

    Read the article

  • Are there negative impacts of open source on a commercial environment?

    - by Lostsoul
    I know this is not a good fit for Stack Overflow, but I wasn't sure if it was good for this site either, so let me know if it's not and I'll delete it. I love programming for fun, but my role in my company is not technical. I have always loved the hacker culture and have been trying to drive that openness within my company from day one. My company has a very broad range of products, and there are a few that are not strategic to us, so I wanted to open source them (so we can focus on what makes us unique and open source the products that every firm has). Our industry does not open source (we would be the first firm to try this), and the feedback I'm getting from my management team is that either 1) we'll destroy the industry, or 2) all competitive commercial firms will unite against us and we'll be wiped out either way. I disagreed on both points, because I think transparency will only grow our industry and our firm (think of McDonald's/KFC sharing their recipes openly: people may copy you, competitors may target you, but customers may also feel more comfortable buying your product. The value add, I believe, is in the delivery and experience, not in hoarding the recipe). It's a big battle in my firm right now between the IT people, who have seen the positive effects of sharing, and the business people, who think we'll be giving up everything (they would prefer we sell the parts we want to open source, but in their defense this is standard when divesting something). Our industry is very secretive, and I don't want to put anyone (even my competitors' employees) out of a job, yet I don't want to protect inefficient people by not being open with everyone. And I've seen so many amazing technologies created in interesting ways just by giving people the freedom to take apart code and put it back together. I'm interested in hearing people's thoughts (they don't have to be about my specific situation; I'm looking for the general lessons). It's a very stressful decision (but one I feel I must make), because if we go the open source route then there will be no going back. So what are your thoughts? Does open sourcing apply generally, or is it only really applicable to software? Is it overall good for people in the industry and outside? I'm actually more interested in the negative effects (although positive ones are welcome as well). Update: Long story short, although code is involved, this is not so much about code as about the idea of open sourcing. We are a mid-sized quant hedge fund. We have some unique strategies, but also the standard long/short, arbitrage, global macro, etc. funds. We are keeping the unique funds we have, but the other stuff that everyone else has we are considering open sourcing (we have put years of work and millions of dollars into it; our fund is pretty popular and our performance is either in the first or second quartile, so I suspect there will be interest, but I don't know to what extent). The goal is not to get a community to work for us or anything; the goal is to let anyone who wants to tinker with it do so and create anything they want (it will not be part of our product line, although I may unofficially allocate some of our staff's time to assist any community that grows). Although the code base is quite large, the value in this is the industry knowledge and approaches we have acquired (there are many books on artificial intelligence and quant trading, but they are often years behind what's really going on, as most firms forbid their staff from discussing what they are doing).
    We are also considering, after we move our clients out, letting the software still run and output the resulting portfolios for free, so people can at least see the results (as long as we have avail. infrastructure). I think our main choices are: we can continue to fight for market share in products that are becoming commoditized; we can shut the funds/products down (and keep the code, but no one outside our firm will ever learn from it); or we can open source it and let people do what they want. By open sourcing it, my idea is that the talent pool in the industry will grow, because right now most of our hires have the same background (CFA, MBA, similar school, same experience, etc., because we can't spend time training people, so the industry 'standardizes' most people and thus the firms themselves start to look/act similar), but this may allow us to identify talent that has never been in the industry before (if we put a GPL license on it, then as people learn from what we did, we can learn from what they do as well, and maybe apply it to other areas of our firm). I see a lot of benefits but not many negatives, while my peers at the company see the opposite.

    Read the article

  • What are some ways people deploy relational database changes using Node.js? [closed]

    - by JamesEggers
    I've been diving more and more into Node.js, and hosting services like Heroku and Nodejitsu, and have been trying to figure out how best to deploy database changes for Postgres or MySQL. There are a few migration projects under npm that I can see; however, all of them seem to be really buggy or simply don't work. I currently manage the Monarch migration project on npm, but it's currently buggy itself, and my experience developing such utilities is in other, more procedural, languages. So what do people use to deploy changes to their databases in these environments? What has worked for people? I'm looking for a better understanding of what the current situation/process looks like.
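
    Whichever tool wins out, the mechanism underneath is usually the same: a table in the target database records which migration files have already been applied, so a deploy only runs the pending ones. A minimal sketch of that pattern (names hypothetical):

        CREATE TABLE schema_migrations (
            version    VARCHAR(64) NOT NULL PRIMARY KEY,  -- e.g. '0007_add_user_index'
            applied_at TIMESTAMP   NOT NULL DEFAULT CURRENT_TIMESTAMP
        );

        -- the runner executes each pending file in order, then records it:
        INSERT INTO schema_migrations (version) VALUES ('0007_add_user_index');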

    Read the article

  • Is it safe to set MySQL isolation to "Read Uncommitted" (dirty reads) for typical Web usage? Even with replication?

    - by Continuation
    I'm working on a website with a typical CRUD web usage pattern: similar to blogs or forums, where users create/update content and other users read it. It seems like it's OK to set the database's isolation level to "Read Uncommitted" (dirty reads) in this case. My understanding of the general drawback of "Read Uncommitted" is that a reader may read uncommitted data that will later be rolled back. In a CRUD blog/forum usage pattern, will there ever be any rollback? And even if there is, is there any major problem with reading uncommitted data? Right now I'm not using any replication, but if in the future I want to use replication (row-based, not statement-based), will a "Read Uncommitted" isolation level prevent me from doing so? What do you think? Has anyone tried using "Read Uncommitted" on their RDBMS?
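
    For reference, a sketch of how the level would be set in MySQL - per session for the web app's connections, or globally for connections opened afterwards (it can also be set as transaction-isolation in my.cnf):

        SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
        -- or, for all connections opened after this point:
        SET GLOBAL TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;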

    Read the article

  • Is it better to use a database or a data structure for a network stack?

    - by poly
    I've built a multi-threaded messaging application in C, and I'm currently using a MySQL MEMORY table to save session IDs, but I'm not sure whether this was a good decision. It works like this: the application sends a message and saves the source session ID in the MySQL table. When the application gets a success response, it removes the session ID from the MySQL table; if it receives an error response, it keeps the ID to be retried later. I built it this way so that I don't have to build a data structure myself, and the database provides flexibility when it comes to querying it. Do you think this is appropriate, or do I need to use something else? Please note that the application is expected to handle a large number of transactions/sec.
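
    A minimal sketch of the kind of MEMORY table described, with hypothetical column names; the MEMORY engine uses HASH indexes by default, so insert/lookup/delete by session ID behave much like a hand-rolled in-process hash table, only queryable:

        CREATE TABLE pending_sessions (
            session_id BIGINT UNSIGNED NOT NULL PRIMARY KEY,
            sent_at    INT UNSIGNED    NOT NULL   -- unix time, for retry sweeps
        ) ENGINE = MEMORY;

        -- on success response:
        DELETE FROM pending_sessions WHERE session_id = 42;
        -- periodic retry sweep:
        SELECT session_id FROM pending_sessions WHERE sent_at < UNIX_TIMESTAMP() - 30;

    The trade-off versus an in-process structure is the client/server round trip on every message, which is usually what dominates at high transaction rates.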

    Read the article

  • Designing a user-defined list to be stored in a relational database - Should I include user index?

    - by Zaemz
    By index, I mean that as the user creates the list, each item receives an integer index for its place in that particular list. Since there will be a table of ListItems, I'd prefer to avoid using the name "Index" for the field. Then I was thinking: should I even include the list index in the database? I figured I would, because then the list would be recreated in the same order every time. Or I could order the list for the user based on its actual primary key, since the list items are created in succession anyway... What should I do?
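
    A hedged sketch of the explicit-position variant (all names hypothetical); calling the column "position" sidesteps the "Index" naming worry, and the unique constraint documents the one-item-per-slot rule:

        CREATE TABLE list_items (
            item_id  INT NOT NULL PRIMARY KEY,
            list_id  INT NOT NULL,
            position INT NOT NULL,           -- 0, 1, 2, ... within this list
            content  VARCHAR(255) NOT NULL,
            UNIQUE (list_id, position)
        );

        SELECT content FROM list_items WHERE list_id = 7 ORDER BY position;

    Ordering by the primary key only works as long as items are never reordered or inserted in the middle; the explicit column keeps working after that assumption breaks.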

    Read the article

  • Is there any way I can create an intranet request form and have it be stored in a database? [on hold]

    - by eternalearth888
    I am trying to create a form for my company's intranet site. The idea is as follows:
    1. An employee wants to make a purchase, so they go to the appropriate page on the intranet.
    2. They fill out the form on the intranet page.
    3. They click the email button.
    4. The data in the form is saved in a database, and an email is sent to me stating that a purchase order request form has been filled out.
    I am not exactly sure how to go about this. Part of me wants to create it as a Data Access Page, but I am not sure that's correct. If there is no one here who can help, can anyone direct me to someone/something that can?
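
    Whichever technology the page itself ends up using (an Access Data Access Page, an ASP.NET form, or otherwise), the storage side reduces to one table the form inserts into; a hedged sketch in SQL Server syntax, with all names hypothetical:

        CREATE TABLE purchase_requests (
            request_id    INT IDENTITY(1,1) PRIMARY KEY,
            employee_name VARCHAR(100)  NOT NULL,
            item          VARCHAR(255)  NOT NULL,
            est_cost      DECIMAL(10,2) NULL,
            requested_at  DATETIME      NOT NULL DEFAULT GETDATE()
        );

    The email notification would then be sent by the page handler after the INSERT succeeds.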

    Read the article

  • 2012?6?20?(?):?????Oracle Database Appliance????Enterprise Edition???????

    - by Yusuke.Yamamoto
    A seminar on running Oracle Database Enterprise Edition on the Oracle Database Appliance, covering topics such as a two-node RAC configuration and Oracle Database 11g Release 2 RAC One Node. Date: June 20, 2012 (Wednesday), 13:30-16:30 (reception from 13:00).

    Read the article

  • Database Firewall: Protecting Databases from SQL Injection (Webcast)

    - by ???02
    SQL injection attacks inject malicious SQL into the requests a web application sends to the database, and countermeasures implemented only at the application layer are difficult to make complete. Oracle Database Firewall sits in front of the database, monitors SQL traffic, and enforces policies built from white lists and black lists of SQL statements, combining traffic monitoring, blocking of unauthorized SQL, and logging and reporting. This webcast introduces Oracle Database Firewall and how it provides a first line of defense for the database. Webcast schedule: Session 1: November 15, 2011, 13:00-14:30; Session 2: November 17, 2011, 15:00-16:30. Presented by Oracle Direct.

    Read the article

  • Best approach for Java/Maven/JPA/Hibernate build with multiple database vendor support?

    - by HDave
    I have an enterprise application that uses a single database, but the application needs to support MySQL, Oracle, and SQL*Server as installation options. To stay portable we are using JPA annotations with Hibernate as the implementation. We also have a test-bed instance of each database running for development. The app builds nicely in Maven, and I've played around with the hibernate3-maven-plugin and can auto-generate DDL for a given database dialect. What is the best way to approach this so that individual developers can easily test against all three databases and our Hudson-based CI server can build things properly? More specifically:
    1) I thought the hbm2ddl goal in the hibernate3-maven-plugin would just generate a schema file, but apparently it connects to a live database and attempts to create the schema. Is there a way to have it just create the schema file for each database dialect without connecting to a database?
    2) If the hibernate3-maven-plugin insists on actually creating the database schema, is there a way to have it drop the database and recreate it before creating the schema?
    3) I am thinking that each developer (and the Hudson build machine) should have their own separate database on each database server. Is this typical?
    4) Will developers have to run Maven three times... once for each database vendor? If so, how do I merge the results on the build machine?
    5) There is an hbm2doc goal within the hibernate3-maven-plugin. It seems overkill to run this three times... I've got to believe it'd be nearly identical for each database.

    Read the article

  • Stored Procedure with ALTER TABLE

    - by psayre23
    I need to sync auto_increment fields between two tables in different databases on the same MySQL server. The hope was to create a stored procedure where the permissions of the admin would let the web user run ALTER TABLE [db1].[table] AUTO_INCREMENT = [num]; without granting it that permission directly (that just smells of SQL injection). My problem is that I'm receiving errors when creating the stored procedure. Is this something that is not allowed by MySQL?

        DROP PROCEDURE IF EXISTS sync_auto_increment;
        CREATE PROCEDURE set_auto_increment (tableName VARCHAR(64), inc INT)
        BEGIN
            ALTER TABLE tableName AUTO_INCREMENT = inc;
        END;
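
    For reference: the CREATE fails for a more basic reason than permissions - MySQL does not allow a procedure parameter (or any variable) where ALTER TABLE expects an identifier. The usual workaround is dynamic SQL via a prepared statement; a sketch, assuming MySQL 5.1+ (where ALTER TABLE is allowed in prepared statements) - tableName should still be validated by the caller, since identifiers cannot be parameterized:

        DELIMITER //
        CREATE PROCEDURE set_auto_increment (tableName VARCHAR(64), inc INT)
        BEGIN
            -- build the statement text, because the identifier cannot be a parameter
            SET @ddl = CONCAT('ALTER TABLE `', tableName, '` AUTO_INCREMENT = ', inc);
            PREPARE stmt FROM @ddl;
            EXECUTE stmt;
            DEALLOCATE PREPARE stmt;
        END //
        DELIMITER ;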

    Read the article

  • Entity Framework 4 "Generate Database from Model" to SQLEXPRESS mdf results in "Could not locate entry in sysdatabases"

    - by InfinitiesLoop
    I'm using Visual Studio 2010 RTM. I want to do model-first, so I started a new MVC app and added a new blank edmx. Created a few entities. No problem. Then I ran "Generate Database from Model" and allowed the dialog to create a new database for me, which it did successfully as 'mydatabase.mdf' in the app's App_Data directory. Then I opened the generated SQL file (in Visual Studio). To run it, of course, I have to give it a connection. I am not sure if it's right, but I used '.\SQLEXPRESS' and Windows authentication. No idea how I'd tell it where the MDF is. Then the problem - upon executing it, I get:

        Msg 911, Level 16, State 1, Line 1
        Could not locate entry in sysdatabases for database 'mydatabase'. No entry found with that name. Make sure that the name is entered correctly.

    And indeed no tables were created in the MDF. So... what am I doing wrong, or am I off my rocker expecting this to work? :)
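
    In case it is useful: the generated script presumably begins by switching to the target database (USE [mydatabase]), and 'mydatabase' is unknown to the .\SQLEXPRESS instance because the mdf sitting in App_Data was never attached to it. A sketch of attaching it first (the file path is hypothetical; use FOR ATTACH_REBUILD_LOG instead if there is no matching ldf):

        CREATE DATABASE mydatabase
            ON (FILENAME = N'C:\path\to\App_Data\mydatabase.mdf')
            FOR ATTACH;
        GO
        USE mydatabase;
        GO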

    Read the article

  • Database Change Management - Setup for Initial Create Scripts, Subsequent Migration Scripts

    - by Martin Aatmaa
    I've got a database change management workflow in place. It's based on SQL scripts (so, it's not a managed code-based solution). The basic setup looks like this:

        Initial/
            Generate Initial Schema.sql
            Generate Initial Required Data.sql
            Generate Initial Test Data.sql
        Migration/
            0001_MigrationScriptForChangeOne.sql
            0002_MigrationScriptForChangeTwo.sql
            ...

    The process to spin up a database is to run all the Initial scripts and then run the sequential Migration scripts. A tool takes care of the versioning requirements, etc. My question is: in this kind of setup, is it useful to also maintain this:

        Current/
            Stored Procedures/
                dbo.MyStoredProcedureCreateScript.sql
                ...
            Tables/
                dbo.MyTableCreateScript.sql
                ...
            ...

    By "this" I mean a directory of scripts (separated by object type) that represents the create scripts for spinning up the current/latest version of the database. For some reason, I really like the idea, but I can't concretely justify its need. Am I missing something?

    The advantages would be:
    - For dev and source control, we would have the same object-per-file setup that we're used to
    - For deployment, we can spin up a new DB instance at the latest version either by running Initial + Migration, or by running the scripts from Current/
    - For dev, we do not need a DB instance running in order to do development. We can do "offline" development against the Current/ folder.

    The disadvantages would be:
    - For each change, we need to update the scripts in the Current/ folder as well as create a Migration script (in the Migration/ folder)

    Thanks in advance for any input!
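
    For concreteness, a hedged sketch of what one numbered migration script might contain (table and column names hypothetical), following the 000n_ naming convention above:

        -- Migration/0003_AddIsArchivedToOrders.sql
        ALTER TABLE dbo.Orders
            ADD IsArchived BIT NOT NULL
                CONSTRAINT DF_Orders_IsArchived DEFAULT (0);

    Under the Current/ proposal, the same change would also be folded into Tables/dbo.OrdersCreateScript.sql - which is exactly the double bookkeeping listed as the disadvantage.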

    Read the article
