Search Results

Search found 31483 results on 1260 pages for 'database migration'.


  • Django database - how to add this column in raw SQL.

    - by alex
    Suppose I have my models set up already:

        class books(models.Model):
            title = models.CharField...
            ISBN = models.Integer...

    What if I want to add this column to my table?

        user = models.ForeignKey(User, unique=True)

    How would I write the raw SQL in my database so that this column works?
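
    A hedged sketch of the matching DDL, assuming a PostgreSQL backend and an app named "myapp" (both assumptions; Django names tables appname_modelname and stores a ForeignKey in an _id column):

        -- "myapp" and table names are placeholders; adjust to your schema.
        -- unique=True on the model maps to a UNIQUE constraint, which also
        -- gives PostgreSQL an index on the column.
        ALTER TABLE myapp_books
            ADD COLUMN user_id integer UNIQUE REFERENCES auth_user (id);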

    Read the article

  • What's the quickest way to dump & load a MySQL InnoDB database using mysqldump?

    - by Josh Schwartzman
    I would like to create a copy of a database with approximately 40 InnoDB tables and around 1.5GB of data with mysqldump and MySQL 5.1. What are the best parameters (e.g., --single-transaction) that will result in the quickest dump and load of the data? Also, when loading the data into the second DB, is it quicker to 1) pipe the results directly to the second MySQL server instance and use the --compress option, or 2) load it from a text file (i.e., mysql < my_sql_dump.sql)?
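
    A hedged example of the piped approach (hostname and database name are placeholders): --single-transaction takes a consistent InnoDB snapshot without locking, --quick streams rows instead of buffering whole tables, and -C on the client enables the compressed protocol from option 1:

        # "mydb" and "second-server" are placeholders
        mysqldump --single-transaction --quick mydb \
          | mysql -h second-server -C mydb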

    Read the article

  • SQL Server Log File Won't Shrink because "log records are pending replication" on a non-replicated DB?

    - by user796466
    I have a non-mission-critical, 9am-5pm SQL Server database that I have set up to do nightly full backups and log backups every 30 minutes during business hours. The database is in full recovery, and normally I have no reason to truncate/shrink logs unless I do some heavy maintenance; log backups manage the size with no issue. However, I had not been at this client for several weeks, and upon inspection I noticed that the log had grown to about 10 times the size of the .mdf file. I poked around: backups had been running, and I had not gotten any severity error alerts (SQL mail). I attempted to put the DB in simple recovery and shrink the log; this was no good. I proceeded to try a log backup and I got:

        The log was not truncated because records at the beginning of the log
        are pending replication or Change Data Capture. Ensure the Log Reader
        Agent or capture job is running or use sp_repldone to mark
        transactions as distributed or captured.

    Restart SQL Server, rinse, repeat, same thing... I said ??? Replication is not, nor ever has been, set up on this DB or server. So the log backups have not been flushing the .ldf. I did a couple hours of research and I found:

    http://www.sqlmonster.com/Uwe/Forum.aspx/sql-server/5445/Log-file-is-not-truncated-inspite-of-regular-log-backup
    http://www.eggheadcafe.com/software/aspnet/30708322/the-log-was-not-truncated-because-records-at-the-beginning-of-the-log-are-pending-replication.aspx

    It seems to be some kind of poorly documented bug? The solution seems to have been to run exec sp_repldone, more precisely:

        EXEC sp_repldone @xactid = NULL, @xact_segno = NULL,
             @numtrans = 0, @time = 0, @reset = 1

    "This procedure can be used in emergency situations to allow truncation of the transaction log when transactions pending replication are present. Using this procedure prevents Microsoft SQL Server 2000 from replicating the database until the database is unpublished and republished." ~ MSDN

    When I do that, I get the following:

        Msg 18757, Level 16, State 1, Procedure sp_repldone, Line 1
        Unable to execute procedure. The database is not published. Execute
        the procedure in a database that is published for replication.

    Which makes sense, because the DB has never been published for replication. I have several questions:

    A) First and foremost: WTF is going on? What is causing this? I am interested in knowing the why here. Is this genuinely a bug, or is there some aspect of the backup that is not functioning properly that causes the DB to mimic a replicated state? Someone please edify me on this.

    B) Second: do I really have to publish/replicate this DB to exec this SP to fix this? Sounds crazy. Or is there some T-SQL that can put it in a published state, exec the proc, and be on my way?

    C) Third: if I do indeed have to publish this database to exec the SP to release this unneeded, mistakenly "replicated" log and get my .ldf file and backups back on track, how do I publish the database without the online host that it is asking for? I don't generally do this kind of database administration and need some guidance.

    Sorry if this is too verbose, but just voicing the question helps me clarify it. Thank you in advance for your help.
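
    For question B, a hedged sketch of the workaround discussed in the linked threads (the database name is a placeholder; the idea is to temporarily mark the database as published without creating a real publication, flush the stuck flag, then unpublish):

        -- N'MyDB' is a placeholder for the affected database
        -- mark the DB as published, bypassing the distributor check
        EXEC sp_replicationdboption @dbname = N'MyDB',
             @optname = N'publish', @value = N'true', @ignore_distributor = 1;
        -- mark all pending transactions as distributed
        EXEC sp_repldone @xactid = NULL, @xact_segno = NULL,
             @numtrans = 0, @time = 0, @reset = 1;
        -- unpublish again
        EXEC sp_replicationdboption @dbname = N'MyDB',
             @optname = N'publish', @value = N'false', @ignore_distributor = 1;
        -- then take a log backup; the .ldf should now truncate/shrink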

    Read the article

  • CSC folder data access AND roaming profiles issues (Vista with Server 2003, then 2008)

    - by Alex Jones
    I'm a junior sysadmin for an IT contractor that helps small, local government agencies, like little towns and the like. One of our clients, a public library with ~50 staff users, was recently migrated from Server 2003 Standard to Server 2008 R2 Standard in a very short timeframe; our senior employee, the only network engineer, had suddenly put in his two weeks' notice, so management pushed him to do this project before quitting. A bit hasty on management's part? Perhaps. Could we do anything about that? Nope. Do I have to fix this all by myself? Pretty much.

    The network is set up like this:

    a) 50ish staff workstations, all running Vista Business SP2. All staff use MS Outlook, which uses RPC-over-HTTPS ("Outlook Anywhere") for cached Exchange access to an offsite location.
    b) One new (virtualized) Server 2008 R2 Standard instance, running atop a Server 2008 R2 host via Hyper-V. The VM is the domain's DC, and also the site's one and only file server. Let's call that VM "NEWBOX".
    c) One old physical Server 2003 Standard server, running the same roles. Let's call it "OLDBOX". It's still on the network and accessible, but it's been demoted, and its shares have been disabled. No data has been deleted.
    d) Gigabit Ethernet everywhere. The organization has only one domain, and it did not change during the migration.
    e) Most users were set up for a combo of redirected folders + offline files, but some older employees who had been with the organization a long time are still on roaming profiles.

    To sum up: the servers in question handle user accounts and files, nothing else (e.g., no TS, no mail, no IIS, etc.). I have two major problems I'm hoping you can help me with:

    1) Even though all domain users have had their redirected folders moved to the new server, and logging in to their workstations and testing confirms that the Documents/Music/Whatever folders point to the new paths, it appears some users (not laptops or anything, either!) had been working offline from OLDBOX for a long time, and nobody realized it. Here's the ugly implication: a bunch of their data now lives only in their CSC folders, because they can't access the share on OLDBOX and finally sync with it. How do I get this data out of those CSC folders and onto NEWBOX?

    2) What's the best way to migrate roaming-profile users to non-roaming ones, without losing vital data like documents, any lingering PSTs, etc.?

    Things I've thought about trying for problem 1:

    a) Re-enable the documents share on OLDBOX, force an Offline Files sync for ALL domain users, then copy OLDBOX's share's data to the equivalent share on NEWBOX, and reinitialize the Offline Files cache for every user. With this: How do I safely force a domain-wide Offline Files sync? Could I lose data by re-enabling the share on OLDBOX and forcing the sync? Afterwards, how can I reinitialize the Offline Files cache for every user without doing it manually, workstation by workstation? (One possible approach is sketched just after this post.)

    b) Determine which users have unsynced changes to OLDBOX (again, how?), search each user's CSC folder domain-wide via workstation admin shares, and grab the unsynced data, then reinitialize the Offline Files cache for every user. With this: How can I detect which users have unsynced changes with a script? How can I search each user's CSC folder, when the ownership and permissions set for CSC folders are so restrictive?

    c) Manually visit each workstation, copy the contents of the CSC folder, and manually copy that data onto NEWBOX, then reinitialize the Offline Files cache for every user. With this: Again, how do I 'break into' the CSC folder and get to its data? As an experiment, I took one workstation's HD offsite, imaged it for safety, and then tried the following with one of our shop PCs after attaching the drive: grant myself full control of the folder (failed), grant myself ownership of the folder (failed), run chkdsk on the whole drive to make sure nothing's messed up (all OK), try to take full control of the entire drive (failed), try to take ownership of the entire drive (failed). MS KB articles and Googling around suggest there's a utility called CSCCMD that's meant for this exact scenario... but it looks like it's available for XP, not Vista, no?

    For problem 2:

    a) Figure out which users are on roaming profiles, and where their profiles 'live' on the server. Create new folders for them in the redirected-folders repository, migrate existing data, and disable the roaming. With this: Finding out who's roaming isn't hard. But what's the best way to disable the roaming itself, in AD Users and Computers or on each user's workstation? Doing it centrally on the server seems more efficient; that said, all of the KB research I've done turns up articles on how to go from local to roaming, not the other way around, so I don't have good documentation on this.

    In closing: we have good backups of NEWBOX and OLDBOX, but not of the workstations themselves, so anything drastic on the client side would need imaging and testing for safety. Thanks for reading along this far! Hopefully you can help me dig us out of this mess.
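
    On the recurring "reinitialize the Offline Files cache" question, a hedged sketch: this relies on the FormatDatabase registry flag documented for Vista's Offline Files service. It wipes the local cache, so it should only run after the CSC data has been recovered, and could be pushed out via a startup script or similar:

        :: flag the CSC database for re-initialization on next boot
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\CSC\Parameters" /v FormatDatabase /t REG_DWORD /d 1 /f
        :: a reboot is required for the flag to take effect
        shutdown /r /t 0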

    Read the article

  • Rails application keeps timing out when attempting to connect to Postgresql DB

    - by Corillian
    I'm hosting a PostgreSQL database on a small Windows Azure Ubuntu 13.04 VM with a default postgresql.conf. I have a Rails application running on a medium Windows Azure Ubuntu 13.04 VM. When accessing the database, the Rails application is constantly timing out. In its database.yml I have the connection pool size set to 120 and the timeout set to 15 seconds. Despite this, my Rails logs are full of the following error message:

        ActiveRecord::ConnectionTimeoutError: could not obtain a database
        connection within 5 seconds (waited 5.0023203 seconds). The max pool
        size is currently 120; consider increasing it.

    My postgresql.conf has a max connection limit of 120; making it any larger prevents the server from being able to successfully restart. I've also made sure that ssl was off in postgresql.conf per this article, but beyond that I have no idea what's going on. My PostgreSQL logs don't contain any info indicating something is going wrong. My website is getting ~1k hits per day, so perhaps a small VM instance just isn't powerful enough? I appreciate any assistance!

    [Edit1] The PostgreSQL database is in a separate cloud service within the same affinity group. For example:

        db small VM: mydatabase.cloudapp.net (Affinity Group US East)
        forums medium VM: myforums.cloudapp.net (Affinity Group US East)

    On the database server I have opened port 5432. The connection to the database server from the forums server uses its hostname. Is it possible that the DNS resolution is what's taking so long?
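
    One hedged observation plus a sketch of the relevant database.yml section (host, names and credentials are placeholders): the error still reports the 5-second ActiveRecord default even though the timeout is set to 15, which suggests the configured key may not be the one the connection pool actually reads; in Rails 4 the pool-wait key is checkout_timeout, while timeout is handed to the adapter instead:

        production:
          adapter: postgresql
          host: mydatabase.cloudapp.net   # placeholder
          database: forums_production     # placeholder
          username: forums                # placeholder
          password: secret                # placeholder
          pool: 120
          checkout_timeout: 15   # seconds to wait for a free connection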

    Read the article

  • Html.EditorFor, Html.DisplayFor not working on MVC1.0 -> MVC2.0 manual migration

    - by lawrence-chase
    Has anyone encountered this problem? I manually migrated an MVC 1.0 application to MVC 2.0, and everything so far seems to be working fine. Today I wanted to try out the Html.EditorFor helper, and it doesn't render the template. I set it up the same way in a fresh MVC 2.0 application, and there the template does render. Is there anything else needed when migrating to activate this behavior, other than placing the partial view (like DateTime.ascx) into Views/Shared/EditorTemplates and using the helper methods to render the model objects? I'm stumped.
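
    One thing worth checking after a manual upgrade (an assumption, not something stated in the question): the Web.config inside the Views folder may still reference the MVC 1.0 assembly, which can break template resolution. For reference, a minimal sketch of the convention Html.EditorFor expects, with hypothetical names:

        <%-- Views/Shared/EditorTemplates/DateTime.ascx --%>
        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<System.DateTime>" %>
        <%= Html.TextBox("", Model.ToString("yyyy-MM-dd")) %>

        <%-- in a strongly typed view whose model has a DateTime property (name is hypothetical): --%>
        <%= Html.EditorFor(m => m.CreatedOn) %>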

    Read the article

  • Consolidate Data in Private Clouds, But Consider Security and Regulatory Issues

    - by Troy Kitch
    The January 13 webcast Security and Compliance for Private Cloud Consolidation will provide attendees with an overview of private cloud computing based on Oracle's Maximum Availability Architecture, and of how security and regulatory compliance affect implementations. Many organizations are taking advantage of Oracle's Maximum Availability Architecture to drive down the cost of IT by deploying private cloud computing environments that can support downtime and utilization spikes without idle redundancy. With two-thirds of sensitive and regulated data residing in organizations' databases, private cloud database consolidation means organizations must be more concerned than ever about protecting their information and addressing new regulatory challenges. Join us for this webcast to learn about the greater risks and increased threats to private cloud data, and how Oracle Database Security Solutions can assist in securely consolidating data and meeting compliance requirements. Register Now.

    Read the article

  • msdeploy IIS 6 to 7 migration issue

    - by rboorgapally
    I am trying to view the dependencies of my website on IIS 6.0 running on Windows Server 2003. When I type the following command:

        msdeploy -verb:getDependencies -source:metakey=lm/w3svc/1

    I get the following error:

        C:\Program Files\IIS\Microsoft Web Deploy>msdeploy -verb:getDependencies -source:metakey=lm/w3svc/1
        Error: Object of type 'metaKey' and path 'lm/w3svc/1' cannot be created
        Error: The metabase key '/lm/w3svc/1' could not be found.
        Error: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
        Error count: 1

    Can anyone explain these to me?
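
    Two hedged checks, guesses rather than a confirmed fix, since the output mixes "could not be found" with "access is denied": run the command from an elevated/administrator prompt, and confirm that site ID 1 actually exists in the metabase using the IIS 6 admin script:

        rem lists the metabase paths (site IDs) under W3SVC
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs ENUM /P W3SVC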

    Read the article

  • SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database

    - by pinaldave
    While engaging in a performance tuning consultation for a client, a situation occurred where they were facing a lot of CXPACKET wait stats. The client asked me if I could help them reduce this huge number of wait stats. I usually receive this kind of request from other clients as well, but the important thing to understand is whether this question has any merit or benefit, or not.

    Before we continue to the resolution, let us understand what CXPACKET wait stats are. The official definition suggests that a CXPACKET wait occurs when trying to synchronize the query processor exchange iterator. You may consider lowering the degree of parallelism if a conflict concerning this wait type develops into a problem. (from BOL)

    In simpler words, when a parallel operation is created for a SQL query, there are multiple threads for a single query. Each thread deals with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET wait stat: threads which finished first have to wait for the slower thread to finish. The wait incurred by a specific completed thread is called a CXPACKET wait. Note that the CXPACKET wait is incurred by completed threads, not by the unfinished ones.

    Note that not all CXPACKET waits are bad. You might experience cases where they totally make sense, and cases where they are unavoidable. If you remove this particular wait type for a query, that query may run slower because parallel operations are disabled for it.

    Now let us see the best practices to reduce CXPACKET wait stats. The suggestions you will find if you search online mostly tell you to set 'maximum degree of parallelism' to 1. I do agree with these suggestions, too; however, I do not think this is the final resolution. As soon as you force every query to run on a single CPU, you will get very bad performance from the queries which were actually performing okay when using parallelism. The better approach is to set 'maximum degree of parallelism' to a lower number or 1 (be very careful with this – it can create more problems), but tune the queries which can benefit from multiple CPUs; such a query can use the hint OPTION (MAXDOP 0) to let the server use parallelism for it. Here are two quick scripts which help to resolve these issues.

    Change MAXDOP at the server level:

        EXEC sys.sp_configure N'max degree of parallelism', N'1'
        GO
        RECONFIGURE WITH OVERRIDE
        GO

    Run a query with all the CPUs (using parallelism):

        USE AdventureWorks
        GO
        SELECT *
        FROM Sales.SalesOrderDetail
        ORDER BY ProductID
        OPTION (MAXDOP 0)
        GO

    Below is the blog post which will help you find all the parallel queries in your server: SQL SERVER – Find Queries using Parallelism from Cached Plan.

    Please note: running queries on a single CPU may worsen your performance and is not recommended at all. In fact, this can be very bad advice. I strongly suggest that you identify the queries which are offending and tune them instead of following any other suggestions.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
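
    Not from the original post, but a small addition that may help measure whether tuning is working: snapshot the documented waits DMV before and after a change.

        -- cumulative since the last service restart (or stats clear)
        SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type = N'CXPACKET';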

    Read the article

  • UpdateModel() fails after migration from MVC 1.0 to MVC 2.0

    - by Alastair Pitts
    We are in the process of migrating our ASP.NET MVC 1.0 web app to MVC 2.0, but we have run into a small snag. In our report creation wizard, it is possible to leave the Title text box empty and have it be populated with a generic title (in the POST action). The code that does the update on the model of the Title is:

        if (TryUpdateModel(reportToEdit, new[] { "Title" }))
        {
            // all ok here, try to create (custom validation and attach to graph to follow)
            // if title is empty, get config subject
            if (reportToEdit.Title.Trim().Length <= 0)
                reportToEdit.Title = reportConfiguration.Subject;

            if (!_service.CreateReport(1, reportToEdit, SelectedUser.ID, reportConfigID,
                    reportCategoryID, reportTypeID, deviceUnitID))
                return RedirectToAction("Index");
        }

    In MVC 1.0 this works correctly: reportToEdit has an empty title if the text box is empty, which is then populated with the Subject property. In MVC 2.0 this fails/returns false. If I instead call:

        UpdateModel(reportToEdit, new[] { "Title" });

    it throws:

        System.InvalidOperationException was unhandled by user code
        Message="The model of type 'Footprint.Web.Models.Reports' could not be updated."
        Source="System.Web.Mvc"
        StackTrace:
          at System.Web.Mvc.Controller.UpdateModel[TModel](TModel model, String prefix, String[] includeProperties, String[] excludeProperties, IValueProvider valueProvider)
          at System.Web.Mvc.Controller.UpdateModel[TModel](TModel model, String[] includeProperties)
          at Footprint.Web.Controllers.ReportsController.Step1(FormCollection form) in C:\TFS Workspace\ExtBusiness_Footprint\Branches\apitts_uioverhaul\Footprint\Footprint.Web\Controllers\ReportsController.cs:line 398
          at lambda_method(ExecutionScope , ControllerBase , Object[] )
          at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters)
          at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters)
          at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters)
          at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClassd.<InvokeActionMethodWithFilters>b__a()
          at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation)

    Reading the MVC 2 release notes, I see this breaking change:

        Every property for model objects that use IDataErrorInfo to perform
        validation is validated, regardless of whether a new value was set.
        In ASP.NET MVC 1.0, only properties that had new values set would be
        validated. In ASP.NET MVC 2, the Error property of IDataErrorInfo is
        called only if all the property validators were successful.

    but I'm confused about how this affects me. I'm using the Entity Framework generated classes. Can anyone pinpoint why this is failing?
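
    A hedged diagnostic sketch, not a confirmed fix: given the breaking change above, it can help to dump ModelState after the failed TryUpdateModel to see exactly which property's validator is rejecting the model (assumes it runs inside the same controller action):

        if (!TryUpdateModel(reportToEdit, new[] { "Title" }))
        {
            // list every model-state error so the offending property is visible
            foreach (var entry in ModelState)
                foreach (var error in entry.Value.Errors)
                    System.Diagnostics.Debug.WriteLine(
                        entry.Key + ": " + error.ErrorMessage);
        }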

    Read the article

  • SQL SERVER – Enumerations in Relational Database – Best Practice

    This article has been submitted by Marko Parkkola, data systems designer at Saarionen Oy, Finland. Marko is an excellent developer and is always thinking at the next level. You can read his earlier comment, which created a very interesting discussion, here: SQL SERVER – IF EXISTS(Select null from table) vs IF EXISTS(Select 1 from table). I must express my special [...]

    Read the article

  • Any Flex 4 migration experience?

    - by Gok Demir
    My current development stack is MySQL + iBatis + Spring + Spring BlazeDS Integration 1.0.1 + BlazeDS 3.2, and Flex 3 with the Mate 0.8.9 framework. Now Flash Builder 4 beta 2 is out. There are cool features like Data Centric Development (DCD), form generation, etc. Do you know how Spring BlazeDS Integration works with BlazeDS 4? What about Mate? Are there any issues with Flex 4? How does DCD fit with Mate event maps? I know it is better to try it out myself, but I just want to check if somebody has ever tried to migrate to Flex 4. If so, what are the issues? Did you notice any productivity speed-up? Thanks.

    Read the article

  • Fusion Concepts: Fusion Database Schemas

    - by Vik Kumar
    You often read about FUSION and FUSION_RUNTIME users while dealing with Fusion Applications. There is one more called FUSION_DYNAMIC. Here are some details on the differences between these three and the purpose of each type of schema.

    FUSION: It can be considered the administrator of the Fusion Applications, with all the corresponding rights and powers, such as owning tables and objects and providing grants to FUSION_RUNTIME. It is used for patching and has grants to many internal DBMS functions.

    FUSION_RUNTIME: Used to run the applications. Contains no DB objects.

    FUSION_DYNAMIC: This schema owns the objects that are created dynamically through ADM_DDL. ADM_DDL is a package that acts as a wrapper around DDL statements; it supports operations like truncate table, create index, etc.

    As the above indicates, FUSION owns the tables and objects, including FND tables, so using FUSION to run the applications would be insecure. It would be possible to modify security policies and other key information in the base tables (like FND) to break Fusion Applications security via SQL injection, etc. Another possibility would be to write a logon DB trigger and steal credentials. Thus, to make Fusion Applications secure, FUSION_RUNTIME is granted privileges to execute DML only on APPS tables. Another benefit of having separate users is achieving Separation of Duties (SoD) at the schema level, which is required by auditors.

    Below are the roles and privileges assigned to the FUSION, FUSION_RUNTIME and FUSION_DYNAMIC schemas.

    FUSION has the following privileges:
      - CREATE SESSION
      - All types of DDL on objects owned by FUSION; additionally, some specific privileges on other schemas are also granted to FUSION
      - EXECUTE ON various EDN_PUBLISH_EVENT
    and the following roles:
      - CTXAPP for managing Oracle Text objects
      - AQ_SER_ROLE and AQ_ADMINISTRATOR_ROLE for managing Advanced Queues (AQ)

    FUSION_RUNTIME has the following privileges:
      - CREATE SESSION
      - CHANGE NOTIFICATION
      - EXECUTE ON various EDN_PUBLISH_EVENT
    and the following roles:
      - FUSION_APPS_READ_WRITE for performing DML (SELECT, INSERT, DELETE) on Fusion Apps tables
      - FUSION_APPS_EXECUTE for executing objects such as procedures, functions, packages, etc.
      - AQ_SER_ROLE and AQ_ADMINISTRATOR_ROLE for managing Advanced Queues (AQ)

    FUSION_DYNAMIC has the following privileges:
      - CREATE SESSION, PROCEDURE, TABLE, SEQUENCE, SYNONYM, VIEW
      - UNLIMITED TABLESPACE
      - ANALYZE ANY
      - CREATE MINING MODEL
      - EXECUTE on specific procedures, functions or packages, and SELECT on specific tables; this depends on the objects that product teams have identified ADM_DDL needs to access in order to perform dynamic DDL statements

    There is one more role, FUSION_APPS_READ_ONLY, which is not attached to any user and has only the SELECT privilege on all the Fusion objects. FUSION_RUNTIME does not have any synonyms defined to access objects owned by the FUSION schema; instead, a logon trigger defined in FUSION_RUNTIME sets the current schema to FUSION and eliminates the need for synonyms.

    What does this mean for developers? Fusion Applications developers should use FUSION_RUNTIME for testing and running the Fusion Applications UI and BC, and to connect to any SQL front end like SQL*Plus, SQL*Loader, etc.

    When testing ADF BC with the AM tester while using FUSION_RUNTIME, you may hit the following error:

        oracle.jbo.JboException: JBO-29000: Unexpected exception caught:
        java.sql.SQLException, msg=invalid name pattern: FUSION.FND_TABLE_OF_VARCHAR2_255

    The fix is to add the following JVM parameter to the Run/Debug client property in the Model project properties:

        -Doracle.jdbc.createDescriptorUseCurrentSchemaForSchemaName=true

    More details are discussed in this forum thread.

    Read the article

  • Configure Windows Firewall for SQL Server 2008 Database Engine in Windows Server 2008 R2

    I have installed SQL Server 2008 Developer Edition on Windows Server 2008 R2, and I am unable to connect to the SQL Server 2008 instance from SQL Server 2008 Management Studio installed on another remote server. As I am new to Windows Server 2008 R2, it would be great if you could let me know the step-by-step approach to enabling the default port of SQL Server 2008 in Windows Firewall for user connectivity.
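
    A hedged sketch of the usual firewall rules on Windows Server 2008 R2, assuming a default instance listening on TCP 1433 (a named instance would also need UDP 1434 for the SQL Server Browser service); rule names are arbitrary:

        netsh advfirewall firewall add rule name="SQL Server (TCP 1433)" dir=in action=allow protocol=TCP localport=1433
        netsh advfirewall firewall add rule name="SQL Server Browser (UDP 1434)" dir=in action=allow protocol=UDP localport=1434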

    Read the article

  • sqlalchemy date type in 0.6 migration using mssql

    - by nosklo
    I'm connecting to SQL Server through pyodbc, via the FreeTDS ODBC driver, on Ubuntu 10.04. SQLAlchemy 0.5 uses DATETIME for sqlalchemy.Date() fields. SQLAlchemy 0.6 now uses DATE, but SQL Server 2000 doesn't have a DATE type. How can I make DATETIME the default for sqlalchemy.Date() on the SQLAlchemy 0.6 mssql+pyodbc dialect? I'd like to keep it as clean as possible. Here's code to reproduce the issue:

        import sqlalchemy
        from sqlalchemy import Table, Column, MetaData, Date, Integer, create_engine

        engine = create_engine(
            'mssql+pyodbc://sa:sa@myserver/mydb?driver=FreeTDS')
        m = MetaData(bind=engine)
        tb = sqlalchemy.Table('test_date', m,
            Column('id', Integer, primary_key=True),
            Column('dt', Date())
        )
        tb.create()

    And here is the traceback I'm getting:

        Traceback (most recent call last):
          File "/tmp/teste.py", line 15, in <module>
            tb.create()
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/schema.py", line 428, in create
            bind.create(self, checkfirst=checkfirst)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1647, in create
            connection=connection, **kwargs)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1682, in _run_visitor
            **kwargs).traverse_single(element)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/sql/visitors.py", line 77, in traverse_single
            return meth(obj, **kw)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/ddl.py", line 58, in visit_table
            self.connection.execute(schema.CreateTable(table))
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1157, in execute
            params)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1210, in _execute_ddl
            return self.__execute_context(context)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1268, in __execute_context
            context.parameters[0], context=context)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1367, in _cursor_execute
            context)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1360, in _cursor_execute
            context)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/default.py", line 277, in do_execute
            cursor.execute(statement, parameters)
        sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('42000', '[42000] [FreeTDS][SQL Server]Column or parameter #2: Cannot find data type DATE. (2715) (SQLExecDirectW)') '\nCREATE TABLE test_date (\n\tid INTEGER NOT NULL IDENTITY(1,1), \n\tdt DATE NULL, \n\tPRIMARY KEY (id)\n)\n\n' ()
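
    A hedged sketch of one possible workaround (not a confirmed fix), using SQLAlchemy's public TypeDecorator API to store dates in a DATETIME column on this dialect; the class name is made up:

        import sqlalchemy.types as types

        class Date2000(types.TypeDecorator):
            """Date column stored as DATETIME, for SQL Server 2000."""
            impl = types.DateTime

            def process_bind_param(self, value, dialect):
                # a datetime.date binds fine to a DATETIME parameter
                return value

            def process_result_value(self, value, dialect):
                # strip the (always midnight) time part on the way back
                return value.date() if value is not None else None

        # then declare the column as: Column('dt', Date2000())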

    Read the article

  • Design Book– Third Section (Implementing the Database)

    - by drsql
    The third section is the primary section that a person who has some decent knowledge and experience doing design will likely find really exciting. Whereas the first half of the book is there for fundamentals, this section is more skills-based, and unless you are a walking encyclopedia of SQL Server syntax (and I am not), you have to use some form of reference to discover how to implement different sorts of problems using DDL, including triggers, constraints, etc.; security; source control; and so on... (read more)

    Read the article

  • Custom Workflow migration to Windows Workflow foundation

    - by ASV
    I have an application that runs workflows custom-developed in .NET 3.5. Now we want to build a case (to help the customer understand) for migrating these custom workflows to WF. What should the highlights of such a case be? Note: the customer has not asked for this, but we want to build the case and present it to the customer. Thanks in advance, ASV

    Read the article
