Search Results

Search found 993 results on 40 pages for 'audit ddl'.

Page 31/40 | < Previous Page | 27 28 29 30 31 32 33 34 35 36 37 38  | Next Page >

  • Problem with network policy rule in Network Policy Server

    - by Robert Moir
    Trying to configure RADIUS for a college network, and have run into the following frustration: I can't set an "AND" condition for group membership of authenticated objects in the network policy rules. E.g. I'm trying to create an NPS rule that says, essentially, "IF user is a member of [list of user groups] AND is authenticating from a computer in [wireless computer group] THEN allow access". The screenshot above is the rule I am having trouble with. It does not work as written. The rule underneath it, which is identical in every aspect except the conditions rule, does work. I've tried changing the non-working rule to define each set of groups as "Windows group" rather than specifically as machine and user groups, with no change. With the "faulty" rule enabled and the working one disabled, any attempt to log in with a valid account from a machine that is in the wireless computers group gives a 6273 audit event in the Windows event log: Reason code 66 - "the user attempted to use an authentication method that is not enabled on the matching network policy". Disabling the "faulty" rule, enabling the other rule and logging in with the same account and computer works just fine.

    Read the article

  • How can I parse/transform text log data before it gets captured in SCOM 2007 R2?

    - by Abs
    I'm pretty much a noob with System Center Operations Manager 2007, and I'm probably missing something pretty basic, but I'm stumped anyway. We're setting up monitoring on some of our servers, and we'd like to capture data from some plain text log files (e.g. DNS debug logs, DHCP logs). It looks to me like I can set up a generic text file monitoring rule and get events captured into the main Ops Manager database, but my understanding is that the whole line of text from the plain text log gets captured as one field. In an ideal world, we'd be able to parse or transform that log file data to make it easier to query later. Is this possible? Is it easy? Do I have to buy expensive 3rd-party software to do it? One more thing: it would be even better if there was a way to stuff this data into the Audit Collection Services (ACS) database instead of the main one, but I'll take what I can get. Any help would be greatly appreciated.

    Read the article

  • Alternatives to using email (in particular, Outlook) as a knowledge store?

    - by Umber Ferrule
    I suspect that, like many people, I use my work email account (accessed via Outlook 2007) to store information. I generally try to group similar things in folders and sub-folders, but with a multitude of folders this gets very unwieldy. In particular, it can be a bind to locate things using Outlook's tree structure. (As an aside: I've yet to come across a good free search add-on for Outlook.) I realise Outlook is not the best place to store all my information and I'd prefer not to. In an ideal world I'd like to be able to organise all of the information stored in Outlook in a MindMap (my software of choice being Freemind) or Wiki. To maintain an email audit-trail, I've considered saving individual emails as files using a MindMap or Wiki to link them. What do people think of this? (I can't say I relish the thought of the exporting process!) Whatever I do is going to involve some pain (i.e. setting up a Wiki/MindMap) or sticking with what Outlook provides currently. Has anyone been in the same position? Has anyone mass-migrated information from Outlook? If so, what was the best way? Any ideas or alternative proposals?

    Read the article

  • Suspected brute force attack

    - by HarveySaayman
    Recently I acquired a dedicated server from a local ISP to play around with. As the tags suggest, it's a Windows Server 2008 R2 machine. I've only had it for a few days, and no real traffic is going to it yet. I haven't even deployed a "real" website to it yet. Just a silly page so that I could check IIS, my host headers, DNS records, etc. are all configured correctly. While playing around, I noticed a ton of Audit Failure entries in the Event Viewer's security logs. It seems something is trying to access the administrator account, and failing. It smells like a brute force attack to me. My ISP gave me the account details of the administrator account and I used those to RDP into the box, which I've heard is not the securest of situations. I created myself another account and added myself to the administrator group, so I'm using that account to gain access to the machine now. In response to all of this I used http://strongpasswordgenerator.com/ to generate some 20-character strong passwords and changed all of my account passwords, even the SQL sa user. I also enabled the auto-ban feature of FileZilla Server (my FTP server). My questions: 1) How can I detect this kind of thing better? 2) How can I protect my server from unauthorized access better? PS: I'm a software dev, not a sysadmin, so please mind my server security idiot-ness-ness

    Read the article

  • Utility to LOGICALLY compare two xml files?

    - by Matthew
    Right now we are attempting to build golden configurations for our environment. One piece of software that we use relies on large XML files to contain the bulk of its configuration. We want to take our lab environment, catalog it as our "golden configuration" and then be able to audit against that configuration in the future. Since diff is a bytewise comparison and NOT a logical comparison, we can't use it to compare files in this case (XML is unordered, so it won't work). What I am looking for is something that can parse the two XML files and compare them element by element. So far we have yet to find any utilities that can do this. OS doesn't matter, I can do it on anything where it will work. The preference is something off the shelf. Any ideas? Edit: One issue we have run into is that one vendor's config files will occasionally mention the same element several times, each time with different attributes. Whatever diff utility we use would need to be able to identify either the set of attributes or identify them all as part of one element. Tall order :)

    Read the article

  • Can I get all active directory passwords in clear text using reversible encryption?

    - by christian123
    EDIT: Can anybody actually answer the question? Thanks. I don't need an audit trail, I WILL know all the passwords and users can't change them, and I will continue to do so. This is not for hacking! We recently migrated away from an old and rusty Linux/Samba domain to Active Directory. We had a custom little interface to manage accounts there. It always stored the passwords of all users and all service accounts in cleartext in a secure location (of course, many of you will certainly not think of this as being secure, but without real exploits nobody could read that) and disabled password changing on the Samba domain controller. In addition, no user can ever select his own passwords; we create them using pwgen. We don't change them every 40 days or so, but only every 2 years, to reward employees for really learning them and NOT writing them down. We need the passwords to e.g. go into user accounts and modify settings that are too complicated for group policies or to help users. These might certainly be controversial policies, but I want to continue them on AD. Now I save new accounts and their pwgen-generated passwords (pwgen creates nice-sounding random words with nice amounts of vowels, consonants and numbers) manually into the old text file that the old scripts used to maintain automatically. How can I get this functionality back in AD? I see that there is "reversible encryption" in AD accounts, probably for challenge-response authentication systems that need the cleartext password stored on the server. Is there a script that displays all these passwords? That would be great. (Again: I trust my DC not to be compromised.) Or can I have a plugin into AD Users & Computers that gets a notification of every new password and stores it into a file? On clients that is possible with GINA DLLs; they can get notified about passwords and get the cleartext.

    Read the article

  • Terminal Server 2008 Login: Access Denied

    - by user1236435
    When I try to RDP into a Server 2008 Terminal Server, I get a message that says "Access Denied" and an OK button. I set up the licensing mode correctly (per user) and have also set it up to allow all remote connections. I get the following in the security event log:
        Log Name: Security
        Source: Microsoft-Windows-Security-Auditing
        Date: 28/06/2012 12:01:16
        Event ID: 4656
        Task Category: File System
        Level: Information
        Keywords: Audit Failure
        User: N/A
        Computer: 0BraApps1.brenntagLA.hou
        Description: A handle to an object was requested.
        Subject:
            Security ID: BRENNTAGLA\jaadmin
            Account Name: jaadmin
            Account Domain: BRENNTAGLA
            Logon ID: 0xbbe3f
        Object:
            Object Server: Security
            Object Type: File
            Object Name: C:\Windows\System32\ServerManager.msc
            Handle ID: 0x0
        Process Information:
            Process ID: 0x60c
            Process Name: C:\Windows\System32\mmc.exe
        Access Request Information:
            Transaction ID: {00000000-0000-0000-0000-000000000000}
            Accesses: READ_CONTROL, SYNCHRONIZE, WriteData (or AddFile), AppendData (or AddSubdirectory or CreatePipeInstance), WriteEA, ReadAttributes, WriteAttributes
            Access Reasons:
                READ_CONTROL: Granted by D:(A;;0x1200a9;;;BA)
                SYNCHRONIZE: Granted by D:(A;;0x1200a9;;;BA)
                WriteData (or AddFile): Not granted
                AppendData (or AddSubdirectory or CreatePipeInstance): Not granted
                WriteEA: Not granted
                ReadAttributes: Granted by ACE on parent folder D:(A;;0x1301bf;;;BA)
                WriteAttributes: Not granted
            Access Mask: 0x120196
            Privileges Used for Access Check: -
            Restricted SID Count: 0
        Event Xml: (raw XML view of the same event; it repeats the fields above)
    Any ideas?

    Read the article

  • Backing up SQL NetApp Snapshots using TSM

    - by WerkkreW
    In our environment we have a 3-node SQL 2005 cluster which is on NetApp storage. We are currently using SMSQL (NetApp SnapManager for SQL) to take Snapshot backups of the data. This works great, but due to some audit requirements we are also forced to maintain some copies on tape. We have used NDMP in other places across the enterprise but we do not want to use it in this specific instance. Basically what I need to do is get the most recent snapshot copy of the databases onto tape, via Tivoli Storage Manager (TSM). What I have done is obtain a basic Windows Server 2003 VM with SnapDrive installed, which is SAN attached and zoned to the NetApp, and I have written a batch file to do the following:
        1. Mount the latest __RECENT snapshot LUN to the host, using a specific drive letter
        2. Perform a TSM-based incremental backup
        3. Dismount the LUN
    This seems to work fine, except sometimes the LUNs do not mount due to some sort of timeout. Also, due to my limited knowledge of Windows batch scripting, I have no way to monitor the success or failure of these backups since I do not know how to send a valid return code back to the TSM scheduling service. Is there a more efficient/elegant way to accomplish this without NDMP?

    Read the article

  • Can Windows logoff events be tracked?

    - by Massimo
    I'm working on an application to track network user logon/logoff events in an Active Directory domain; the application will work by auditing security logs on domain controllers. Auditing logon events can get somewhat tricky, but it can successfully be done. My problem: how can I track logoff events? Based on some research I've done, it looks like these events are only logged locally on workstations, but not on DCs; also, the "lastLogoff" attribute exists on AD user objects, but it's not actually used by anyone. This is a very specific question: is something logged on DCs when a user logs off from a domain workstation? To clarify: I'm not interested in other auditing methods, I can't deploy logon/logoff scripts and I can't install anything anywhere; I also know opened and closed network sessions are logged, but this is not what I'm looking for. I need to audit interactive logons and logoffs to domain workstations, and I can do this by only reading domain controllers' security logs; reading each workstation's local event logs is out of the question. If this can't be done, it's ok; but I need a clear answer on that. Can this be done? If yes, how?

    Read the article

  • Anonymous Login attempts from IPs all over Asia, how do I stop them from being able to do this?

    - by Ryan
    We had a successful hack attempt from Russia and one of our servers was used as a staging ground for further attacks; somehow they managed to get access to a Windows account called 'services'. I took that server offline as it was our SMTP server and we no longer need it (3rd party system in place now). Now some of our other servers are having these ANONYMOUS LOGON attempts in the Event Viewer that have IP addresses coming from China, Romania, Italy (I guess there's some Europe in there too)... I don't know what these people want but they just keep hitting the server. How can I prevent this? I don't want our servers compromised again; last time our host took our entire hardware node off of the network because it was attacking other systems, causing our services to go down, which is really bad. How can I prevent these strange IP addresses from trying to access my servers? They are Windows Server 2003 R2 Enterprise 'containers' (virtual machines) running on a Parallels Virtuozzo HW node, if that makes a difference. I can configure each machine individually as if it were its own server of course... UPDATE: New login attempts are still happening, now these ones are tracing back to Ukraine... WTF.. here is the Event:
        Successful Network Logon:
        User Name:
        Domain:
        Logon ID: (0x0,0xB4FEB30C)
        Logon Type: 3
        Logon Process: NtLmSsp
        Authentication Package: NTLM
        Workstation Name: REANIMAT-328817
        Logon GUID: -
        Caller User Name: -
        Caller Domain: -
        Caller Logon ID: -
        Caller Process ID: -
        Transited Services: -
        Source Network Address: 94.179.189.117
        Source Port: 0
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
    Here is one from France I found too:
        Event Type: Success Audit
        Event Source: Security
        Event Category: Logon/Logoff
        Event ID: 540
        Date: 1/20/2011
        Time: 11:09:50 AM
        User: NT AUTHORITY\ANONYMOUS LOGON
        Computer: QA
        Description: Successful Network Logon:
        User Name:
        Domain:
        Logon ID: (0x0,0xB35D8539)
        Logon Type: 3
        Logon Process: NtLmSsp
        Authentication Package: NTLM
        Workstation Name: COMPUTER
        Logon GUID: -
        Caller User Name: -
        Caller Domain: -
        Caller Logon ID: -
        Caller Process ID: -
        Transited Services: -
        Source Network Address: 82.238.39.154
        Source Port: 0
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Read the article

  • Win 7 crashes, PC reboots and says "Hard drive 0 not found" until I turn it off and on again

    - by Danny T.
    I recently made the move from Windows XP to Windows 7. Since then, when my computer is on for a few hours it always ends up rebooting without warning. Then the BIOS won't recognize my hard drive (hard drive 0 not found). If I turn off my computer and then on again, it boots normally. Some details:
        - Dell Dimension 9150
        - Windows 7
        - I updated the BIOS
        - I updated all system drivers with the latest version from Dell (SATA, Chipset, etc.)
        - Other drivers updated too (graphics card, sound, etc.)
    There is one thing that I tried after some Googling: I turned off the DMA access to the drives, but it's still rebooting after a few hours. Any clue? UPDATE 2010/12/13: Here are the events from the Event Log for today, from when I turned the computer on until it crashed:
        19:17 - Error - ID 10016 - DistributedCom
        20:06 - Error - ID 1008 - Customer Improvement Program (could not send data to Microsoft)
        21:48 - Critical - ID 41 - Kernel-Power (System was restarted without proper shutdown)
        21:48 - Error - ID 6008 - EventLog (Previous system down was not planned)
        21:48 - Error - ID 1101 - EventLog (Audit Event ignored)
        21:49 - Error - ID 10016 - DistributedCom
    Both DistributedCom events have a description along these lines (translated from French): The authorisation parameters specific to the application are not allowing Local Execution for the COM server application with the CLSID {C97FCC79-E628-407D-AE68-A06AD6D8B4D1} and the APPID {344ED43D-D086-4961-86A6-1106F4ACAD9B} to the SID AUTHORITY NT\User System (S-1-5-18) from the address LocalHost (LRPC usage). This security authorisation can be changed with the Component Services administration tool. UPDATE 2010/12/31: Here are the error messages I get on blue screens:
        STOP C000007xA - Kernel_Data_Inpage_Error "Unkown hard error"
        C00000135 - Can't start because &hs is missing

    Read the article

  • Drowning in documents - recommend doc management solutions?

    - by Martin Day
    I've been researching document management lately. I want to organise my docs at home and also at the office. Finding affordable solutions one can actually test drive is quite hard. Some that I've downloaded just don't seem to work (testing on brand new Vista PC). I've seen some software on Amazon like Paperport but not really sure what they're like. For home I'd like something to organise files, full text search, good scanner integration, nice interface etc. But for the office it seems harder. I need something that does proper workflow and keeps versions. It will have an audit trail. Documents can be approved, checked in/out etc. I know a few clients who would like something similar. It would be great just to import thousands of documents from a shared drive and get them indexed with dupes killed. I'd like to be super clear about how/where the documents are being stored so that maintenance and backups are clear. My Google/twitter searches lead back to the same tired and vague webpages pushing what look like expensive and custom made solutions. Some might be very good I suppose but it's darn hard to tell. I don't mind a hosted package but all in all I don't think something like Google Docs, as good as it is now, will work. There are too many quirks and missing features (as compared to Office). Being able to work directly with the common Office file formats is important. I've noted a similar sounding question asked here back in August but it didn't seem to turn up too many solutions that I could easily and quickly apply. Also there could have been some changes since then so I feel it's worth asking.

    Read the article

  • Configure X connections over TCP without using an X connection

    - by Darren Cook
    I want to run a GUI application on a remote machine I only have ssh access to. I don't need to, or want to, see the GUI window. (I know I could use something like ssh -C -X remote_server if I wanted the GUI to be on my client.) I know X is running on the remote machine, as ps shows this: root ... /usr/bin/Xorg :0 -br -audit 0 -auth /var/gdm/:0.Xauth -nolisten tcp vt7 I set DISPLAY=:0.0 but I then get "Xlib: connection to ":0.0" refused by server" when I try to use it. At Get remote x display working in linux without ssh tunneling and Xserver doesn't work unless DISPLAY=0.0 I see the advice to use gdmsetup to allow X to listen on TCP. But, gdmsetup is a GUI application! And trying to run it over ssh -X did not work ("X11 connection rejected because of wrong authentication"). So, is there a text file I can edit to remove -nolisten? And, after editing it, how do I safely restart X, remotely? (There is other stuff running on this machine, so requesting a reboot is possible, but undesirable.) If not, should gdmsetup be able to run over ssh and I should persevere in that direction? UPDATE: I had to do the ssh -X session as root (ssh as a normal user, then sudo or su, does not work.) So, I did the edit with gdmsetup. I then restarted X with gdm-restart. I've also done xhost + from that ssh -X session. The ps line no longer shows the -nolisten tcp part. But still no luck connecting to it, with either DISPLAY=:0 or DISPLAY=localhost:0

    Read the article

  • Security Token for Mac/Linux/Windows, self-managed, pref. open source?

    - by DevelopersDevelopersDevelopers
    I'm looking to buy an evaluation security token (combined smart card/usb reader) for my business that works on:
        - Windows 7 x64
        - OS X 10.6.x x64
        - Ubuntu Linux (64 or 32 bit, 10.04 or 10.10, I can bend based on possible tokens)
    Functionality I need is:
        - Login authentication
        - Authentication for whole-disk encryption (in Linux/Windows, Mac is flexible here)
        - Signing/encryption using PGP and x.509 certificates
        - RSA-2048 key-capable (1024 not good enough.)
        - I can manage the certificates myself
        - Open source middleware/drivers (not necessarily FOSS, just source available. Can flex on this, I just want to be able to audit the code. OpenSC-compatible on Linux would be great.)
    Is there any token that can do all of this? Or would I need multiple ones to accomplish this? Or do I need to look at smart cards and readers to get this? I have been researching this for a while and have had a heck of a time even getting accurate information about products. Also, I am in the USA, and it appears that EU export laws prevent me from buying from there, so those vendors are out. I was looking at Feitian tokens from Gooze, but since they are in France I can't buy.

    Read the article

  • How To Completely Move Users/Program Files/Program Files (x86)/ProgramData (Folders) To Another Partition(s) On Windows 8?

    - by Enigma83
    I am attempting to move folders Users Program Files Program Files (x86), ProgramData (at the root of the C drive) to at least 2 other partitions, preferably on a fresh install. I have read that there are methods for doing this post-install, but it seems like it would be a bit more tedious to do things that way. I want to move the 2 Program Files folders to another partition on the same HDD, and Users/ProgramData will go to yet another partition on same HDD. I have done a bit of research on this, read up on some things that involved booting into Audit Mode, using the RoboCopy command to copy folders via booting into my Windows 8 USB drive, creating NTFS junctions/symbolic links, Registry edits, as well as accomplishing this automatically by creating an auto-attend file which Windows Setup processes automatically before the user is ever booted in for the 1st time. I tried this morning and now have a basic installation in which programs like Internet Explorer fail to open, certain files can't be found/opened (even if I click on them directly), an example is Regedit. Also, I can't run the Command/DOS (CMD) prompt as Administrator (or otherwise, as any other user), can't activate the real Administrator account or open any of the Administrative Tools (despite having added them to my Start Screen). So far I have only tried RoboCopy-ing Program Files and Program Files (x86) so far, creating junction points for them, and editing the Registry in the relevant locations. This is what I'm left with now. I also found the following blog article which describes how to do this for Windows 7 So, where should I go from here and where can I find more information? And how can this be done without disabling the Metro apps, which I've read will stop working if you move ProgramData. Once I have everything moved, where do I install programs to? Do I tell them to install to C:\Program Files\Program Files (x86) or to the junctioned/symbolic-linked partition/drive? I plan to test in VMware virtual machines from here on until things are working correctly, while using a baseline default install for daily tasks.

    Read the article

  • Install Windows 8 on SSD and Program & Users on HDD

    - by Foe
    I have been dealing with a few problems while installing Windows 8 on my computer. In my old configuration, I had Windows 7 installed on my 60 GB SSD, and my programs and user data on my 1 TB HDD, thanks to relative links. Yet, while installing Windows 8 on my SSD, it made a small "System related" partition on my HDD. Plus, I'm afraid using only links is a bit cheap, and I saw lots of people messing with their registry when trying to put user data on another drive. I read a lot about optimizing Windows 8 for SSDs, putting Users on another drive, and very similar situations that didn't quite correspond to what I was trying to achieve. Here's what I tried: http://www.eightforums.com/tutorials/4275-user-profiles-relocate-another-partition-disk.html Booting in Audit mode and using an XML to relocate Users didn't work, as the version specified in the file is a test one, and I don't know what to enter if I'm using the final release. Booting with the install DVD in repair mode to make a copy of the User folder and create a relative link resulted in an error on the logon screen while entering my password saying that "The profile can't be loaded" (rough translation of my error from French to English). Does anyone know how to do a clean separated install of Windows 8, with the OS on one drive and the other data on a second one? Thanks.

    Read the article

  • How SQL Server 2014 impacts Red Gate’s SQL Compare

    - by Michelle Taylor
    SQL Compare 10.7 successfully connects to SQL Server 2014, but it doesn’t yet cover the SQL Server 2014 features which would require us to make major changes to SQL Compare to support. In this post I’m going to talk about the SQL Server 2014 features we’ve already begun supporting, and which ones we’re working on for the next release of SQL Compare (v11). From SQL Compare’s perspective, the new memory-optimized table functionality (some might know it as ‘Hekaton’) has been the most important change. It can’t be described as its own object type, but the new functionality is split across two existing object types (three if you count indexes), as it also comes with native stored procedures and inline indexes. Along with connectivity support, the SQL Compare team has already implemented the first part of the puzzle – inline specification of indexes. These are essential for memory-optimized tables because it’s not possible to alter the memory optimized table’s structure, and so indexes can’t be added after the fact without dropping the table. Books Online  shows this in more detail in the table_index and column_index clauses of http://msdn.microsoft.com/en-us/library/ms174979(v=sql.120).aspx. SQL Compare 10.7 currently supports reading the new inline index specification from script folders and source control repositories, and will write out inline indexes where it’s necessary to do so (i.e. in UDDTs or when attempting to write projects compatible with the SSDT database project format). However, memory-optimized tables themselves are not yet supported in 10.7. The team is actively working on making them available in the v11 release with full support later in the year, and in a beta version before that. Fortunately, SQL Compare already has some ways of handling tables that have to be dropped and created rather than altered, which are being adapted to handle this new kind of table. Because it’s one of the largest new database engine features, there’s an equally large Books Online section on memory-optimized tables, but for us the most important parts of the documentation are the normal table features that are changed or unsupported and the new syntax found in the T-SQL reference pages. We are treating SQL Compare’s support of Natively Compiled Stored Procedures as a separate unit of work, which will be available in a subsequent beta and also feed into the v11 release. This new type of stored procedure is designed to work with memory-optimized tables to maintain the performance improvements gained by them – but you can still also access memory-optimized tables from normal stored procedures and ad-hoc queries. To us, they’re essentially a limited-syntax stored procedure with a few extra options in the create statement, embodied in the updated CREATE PROCEDURE documentation and with the detailed limitations. They should be easier to handle than memory-optimized tables simply because the handling of stored procedures is less sensitive to dropping the object than the handling of tables. However, both share an incompatibility with DDL triggers and Event Notifications which mean we’ll need to temporarily disable these during the specific deployment operations that involve them – don’t worry, we’ll supply a warning if this is the case so that you can check your auditing arrangements can handle the situation. There are also a handful of other improvements in SQL Server 2014 which affect SQL Compare and SQL Data Compare that are not connected to memory optimized tables. 
The largest of these are the improvements to columnstore indexes, with the capability to create clustered columnstore indexes and update columnstore tables through them – for more detail, take a look at the new syntax reference. There’s also a new index option for better compression of columnstores (COLUMNSTORE_ARCHIVE) and a new statistics option for incremental per-partition statistics, plus the 90 compatibility level is being retired. We’re planning to finish up these small clean-up features last, and be ready to release SQL Compare 11 with full SQL 2014 support early in Q3 this year. For a more thorough overview of what’s new in SQL Server 2014, Books Online’s What’s New section is a good place to start (although almost all the changes in this version are in the Database Engine).
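    As a concrete illustration of the inline index syntax for memory-optimized tables described earlier in this entry (a sketch, not taken from the post), a minimal SQL Server 2014 table might look like the following; the table, column and index names are made up, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup:
        -- Hypothetical memory-optimized table with an inline hash PK and an inline range index
        CREATE TABLE dbo.SessionCache
        (
            SessionId INT NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            UserId    INT NOT NULL,
            LastTouch DATETIME2 NOT NULL,
            -- the index must be declared inline, because it cannot be added after the table exists
            INDEX ix_SessionCache_LastTouch NONCLUSTERED (LastTouch)
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);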

    Read the article

  • Database Mirroring on SQL Server Express Edition

    - by Most Valuable Yak (Rob Volk)
    Like most SQL Server users I'm rather frustrated by Microsoft's insistence on making the really cool features only available in Enterprise Edition.  And it really doesn't help that they changed the licensing for SQL 2012 to be core-based, so now it's like 4 times as expensive!  It almost makes you want to go with Oracle.  That, and a desire to have Larry Ellison do things to your orifices. And since they've introduced Availability Groups, and marked database mirroring as deprecated, you'd think they'd make make mirroring available in all editions.  Alas…they don't…officially anyway.  Thanks to my constant poking around in places I'm not "supposed" to, I've discovered the low-level code that implements database mirroring, and found that it's available in all editions! It turns out that the query processor in all SQL Server editions prepends a simple check before every edition-specific DDL statement: IF CAST(SERVERPROPERTY('Edition') as nvarchar(max)) NOT LIKE '%e%e%e% Edition%' print 'Lame' else print 'Cool' If that statement returns true, it fails. (the print statements are just placeholders)  Go ahead and test it on Standard, Workgroup, and Express editions compared to an Enterprise or Developer edition instance (which support everything). Once again thanks to Argenis Fernandez (b | t) and his awesome sessions on using Sysinternals, I was able to watch the exact process SQL Server performs when setting up a mirror.  Surprisingly, it's not actually implemented in SQL Server!  Some of it is, but that's something of a smokescreen, the real meat of it is simple filesystem primitives. The NTFS filesystem supports links, both hard links and symbolic, so that you can create two entries for the same file in different directories and/or different names.  You can create them using the MKLINK command in a command prompt: mklink /D D:\SkyDrive\Data D:\Data mklink /D D:\SkyDrive\Log D:\Log This creates a symbolic link from my data and log folders to my Skydrive folder.  Any file saved in either location will instantly appear in the other.  And since my Skydrive will be automatically synchronized with the cloud, any changes I make will be copied instantly (depending on my internet bandwidth of course). So what does this have to do with database mirroring?  Well, it seems that the mirroring endpoint that you have to create between mirror and principal servers is really nothing more than a Skydrive link.  Although it doesn't actually use Skydrive, it performs the same function.  So in effect, the following statement: ALTER DATABASE Mir SET PARTNER='TCP://MyOtherServer.domain.com:5022' Is turned into: mklink /D "D:\Data" "\\MyOtherServer.domain.com\5022$" The 5022$ "port" is actually a hidden system directory on the principal and mirror servers. I haven't quite figured out how the log files are included in this, or why you have to SET PARTNER on both principal and mirror servers, except maybe that mklink has to do something special when linking across servers.  I couldn't get the above statement to work correctly, but found that doing mklink to a local Skydrive folder gave me similar functionality. 
    To wrap this up, all you have to do is the following:
        1. Install Skydrive on both SQL Servers (principal and mirror) and set the local Skydrive folder (D:\SkyDrive in these examples)
        2. On the principal server, run mklink /D on the data and log folders to point to SkyDrive: mklink /D D:\SkyDrive\Data D:\Data
        3. On the mirror server, run the complementary linking: mklink /D D:\Data D:\SkyDrive\Data
        4. Create your database and make sure the files map to the principal data and log folders (D:\Data and D:\Log)
    Voila! Your databases are kept in sync on multiple servers! One wrinkle you will encounter is that the mirror server will show the data and log files, but you won't be able to attach them to the mirror SQL instance while they are attached to the principal. I think this is a bug in Skydrive, but as it turns out that's fine: you can't access a mirror while it's hosted on the principal either. So you don't quite get automatic failover, but you can attach the files to the mirror if the principal goes offline. It's also not exactly synchronous, but it's better than nothing, and easier than either replication or log shipping, with a lot less latency. I will end this with the obvious "not supported by Microsoft" and "Don't do this in production without an updated resume" spiel that you should by now assume with every one of my blog posts, especially considering the date.

    Read the article

  • T-SQL Tuesday #025 – CHECK Constraint Tricks

    - by Most Valuable Yak (Rob Volk)
    Allen White (blog | twitter), marathoner, SQL Server MVP and presenter, and all-around awesome author is hosting this month's T-SQL Tuesday on sharing SQL Server Tips and Tricks. And for those of you who have attended my Revenge: The SQL presentation, you know that I have 1 or 2 of them. You'll also know that I don't recommend using anything I talk about in a production system, and will continue that advice here…although you might be sorely tempted. Suffice it to say I'm not using these examples myself, but I think they're worth sharing anyway. Some of you have seen or read about SQL Server constraints and have applied them to your table designs…unless you're a vendor ;)…and may even use CHECK constraints to limit numeric values, or length of strings, allowable characters and such. CHECK constraints can, however, do more than that, and can even provide enhanced security and other restrictions. One tip or trick that I didn't cover very well in the presentation is using constraints to do unusual things; specifically, limiting or preventing inserts into tables. The idea was to use a CHECK constraint in a way that didn't depend on the actual data:
        -- create a table that cannot accept data
        CREATE TABLE dbo.JustTryIt(a BIT NOT NULL PRIMARY KEY,
            CONSTRAINT chk_no_insert CHECK (GETDATE()=GETDATE()+1))
        INSERT dbo.JustTryIt VALUES(1)
    I'll let you run that yourself, but I'm sure you'll see that this is a pretty stupid table to have, since the CHECK condition will always be false, and therefore will prevent any data from ever being inserted. I can't remember why I used this example but it was for some vague and esoteric purpose that applies to about, maybe, zero people. I come up with a lot of examples like that. However, if you realize that these CHECKs are not limited to column references, and if you explore the SQL Server function list, you could come up with a few that might be useful. I'll let the names describe what they do instead of explaining them all:
        CREATE TABLE NoSA(a int not null, CONSTRAINT CHK_No_sa CHECK (SUSER_SNAME()<>'sa'))
        CREATE TABLE NoSysAdmin(a int not null, CONSTRAINT CHK_No_sysadmin CHECK (IS_SRVROLEMEMBER('sysadmin')=0))
        CREATE TABLE NoAdHoc(a int not null, CONSTRAINT CHK_No_AdHoc CHECK (OBJECT_NAME(@@PROCID) IS NOT NULL))
        CREATE TABLE NoAdHoc2(a int not null, CONSTRAINT CHK_No_AdHoc2 CHECK (@@NESTLEVEL>0))
        CREATE TABLE NoCursors(a int not null, CONSTRAINT CHK_No_Cursors CHECK (@@CURSOR_ROWS=0))
        CREATE TABLE ANSI_PADDING_ON(a int not null, CONSTRAINT CHK_ANSI_PADDING_ON CHECK (@@OPTIONS & 16=16))
        CREATE TABLE TimeOfDay(a int not null, CONSTRAINT CHK_TimeOfDay CHECK (DATEPART(hour,GETDATE()) BETWEEN 0 AND 1))
        GO
        -- log in as sa or a sysadmin server role member, and try this:
        INSERT NoSA VALUES(1)
        INSERT NoSysAdmin VALUES(1)
        -- note the difference when using sa vs. non-sa
        -- then try it again with a non-sysadmin login
        -- see if this works:
        INSERT NoAdHoc VALUES(1)
        INSERT NoAdHoc2 VALUES(1)
        GO
        -- then try this:
        CREATE PROCEDURE NotAdHoc @val1 int, @val2 int AS
        SET NOCOUNT ON;
        INSERT NoAdHoc VALUES(@val1)
        INSERT NoAdHoc2 VALUES(@val2)
        GO
        EXEC NotAdHoc 2,2
        -- which values got inserted?
        SELECT * FROM NoAdHoc
        SELECT * FROM NoAdHoc2

        -- and this one just makes me happy :)
        INSERT NoCursors VALUES(1)
        DECLARE curs CURSOR FOR SELECT 1
        OPEN curs
        INSERT NoCursors VALUES(2)
        CLOSE curs
        DEALLOCATE curs
        INSERT NoCursors VALUES(3)
        SELECT * FROM NoCursors
    I'll leave the ANSI_PADDING_ON and TimeOfDay tables for you to test on your own, I think you get the idea.
(Also take a look at the NoCursors example, notice anything interesting?)  The real eye-opener, for me anyway, is the ability to limit bad coding practices like cursors, ad-hoc SQL, and sa use/abuse by using declarative SQL objects.  I'm sure you can see how and why this would come up when discussing Revenge: The SQL.;) And the best part IMHO is that these work on pretty much any version of SQL Server, without needing Policy Based Management, DDL/login triggers, or similar tools to enforce best practices. All seriousness aside, I highly recommend that you spend some time letting your mind go wild with the possibilities and see how far you can take things.  There are no rules! (Hmmmm, what can I do with rules?) #TSQL2sDay

    Read the article

  • SQL SERVER – Last Two Days to Get FREE Book – Joes 2 Pros Certification 70-433

    - by pinaldave
    Earlier this week we announced that we will be giving away a FREE SQL Wait Stats book to everybody who gets the SQL Server Joes 2 Pros Combo Kit. We had a fantastic response to the contest and an overwhelming response to the offer. We knew there would be a great response, but we want to honestly say thank you to all of you for making it happen. Rick and I want to make sure that we express our special thanks to all of you who are reading our books. The offer is still on and there are two more days to take advantage of it. We want to make sure that everybody who buys our best-selling combo kit receives our other most popular book, SQL Wait Stats. Please read all the details of the offer here. The books are great resources for anyone who wants to learn SQL Server from the fundamentals and eventually go on the certification path of 70-433. Exam 70-433 covers the following important subjects, and the book covers them from the fundamentals. Whether you are taking the exam or not, this book is for every SQL developer to learn the subject from the fundamentals.
        - Create and alter tables.
        - Create and alter views.
        - Create and alter indexes.
        - Create and modify constraints.
        - Implement data types.
        - Implement partitioning solutions.
        - Create and alter stored procedures.
        - Create and alter user-defined functions (UDFs).
        - Create and alter DML triggers.
        - Create and alter DDL triggers.
        - Create and deploy CLR-based objects.
        - Implement error handling.
        - Manage transactions.
        - Query data by using SELECT statements.
        - Modify data by using INSERT, UPDATE, and DELETE statements.
        - Return data by using the OUTPUT clause.
        - Modify data by using MERGE statements.
        - Implement aggregate queries.
        - Combine datasets (INTERSECT, EXCEPT).
        - Implement subqueries.
        - Implement CTE (common table expression) queries.
        - Apply ranking functions.
        - Control execution plans.
        - Manage international considerations.
        - Integrate Database Mail.
        - Implement full-text search.
        - Implement scripts by using Windows PowerShell and SQL Server Management Objects (SMOs).
        - Implement Service Broker solutions.
        - Track data changes (data capture).
        - Retrieve relational data as XML.
        - Transform XML data into relational data.
        - Manage XML data.
        - Capture execution plans.
        - Collect output from the Database Engine Tuning Advisor.
        - Collect information from system metadata.
    (A minimal sketch of the DDL trigger item appears after this entry.)
    Availability of Book: USA - Amazon | India - Flipkart | Indiaplaza
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
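    The sketch below illustrates the DDL trigger item from the topic list above. It is not taken from the book or the post; the table and trigger names are made up, and it simply logs CREATE/ALTER/DROP TABLE statements in the current database:
        -- Hypothetical audit table and DDL trigger (illustrative only)
        CREATE TABLE dbo.DdlAudit
        (
            AuditId   INT IDENTITY(1,1) PRIMARY KEY,
            EventTime DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
            EventData XML NOT NULL
        );
        GO
        CREATE TRIGGER trgAuditDdl ON DATABASE
        FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
        AS
        BEGIN
            -- EVENTDATA() returns the statement, login and affected object as XML
            INSERT INTO dbo.DdlAudit (EventData) VALUES (EVENTDATA());
        END;
        GO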

    Read the article

  • Find Rules and Defaults using the PowerShell for SQL Server 2008 Provider

    - by BuckWoody
    I ran into an issue the other day where I couldn't set up some features in SQL Server 2008 because they don't support the use of Rules or Defaults. Let me explain a little more about that. In older versions of SQL Server, you could declare a "Rule" or "Default" just like you do with a Table Constraint today. You would then "bind" these rules or defaults to the tables you wanted them to apply to. Sure, there are advantages and disadvantages to this approach, but it certainly isn't standard Data Definition Language (DDL), so they are deprecated and many features don't work with them any more. Honestly, it's been so long since I've seen them in use I had forgotten to even check for them. My suspicion is that this was a new database created with an older script. Nevertheless, the feature failed when it ran into one. Immediately I thought that I had better build some logic into my process to try and catch those - but how? Lots of choices here, but since I was using PowerShell to do the rest of the work, I thought I would investigate how easy it would be just to do it there. And using the SQL Server 2008 provider, this could not be simpler. I won't show all of the script here, because I was testing for these as a condition and then bailing out of the script and sending a notification, but all it is using is the DIR command! Here's an example on my "UNIVAC" computer for the "pubs" database. Find Rules and Defaults using PowerShell:
        dir SQLSERVER:\SQL\UNIVAC\DEFAULT\Databases\pubs\Rules
        dir SQLSERVER:\SQL\UNIVAC\DEFAULT\Databases\pubs\Defaults
    And this one will look in all databases:
        #All Databases:
        dir SQLSERVER:\SQL\UNIVAC\DEFAULT\Databases | select-object -property Name, Rules, Defaults
    Awesome. Love me some PowerShell. Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It's just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.
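    As a cross-check that is not part of the post above, the same stand-alone Rules and Defaults can also be listed with plain T-SQL against the current database's catalog views; this sketch assumes nothing beyond the standard sys.objects view and has to be run per database:
        -- Stand-alone rules are type 'R'; stand-alone defaults are type 'D' with no parent object
        SELECT name, type_desc, create_date
        FROM sys.objects
        WHERE type = 'R'
           OR (type = 'D' AND parent_object_id = 0);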

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (statistics based on past existence); this is not true if you have fewer than 6 updates to the temporary table. This might lead to poor performance of queries which are sensitive to the content of temporary tables. I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue but rather a temporary table statistics caching issue, which was part of the temporary object caching feature that was introduced in SQL Server 2005 and is also present in SQL Server 2008 and SQL Server 2012. In this customer case we implemented a workaround to avoid this issue (see below for example workarounds). When temporary tables are cached, the statistics are not newly created but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal. We can work around this issue by disabling temporary table caching by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around this is to create an index. I think there might be many customers in such a situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution is to have a more aggressive statistics update when the temporary table has a small number of rows and temporary table caching is used. I will open a Connect item to report this issue. Meanwhile you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter: SQL Server: General Statistics -> Active Temp Tables.
    The script to understand the issue and the workaround is listed below:
        set nocount on
        set statistics time off
        set statistics io off
        drop table tab7
        go
        create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
        go
        create index test on tab7(c2, c1, c3)
        go
        begin tran
        declare @i int
        set @i = 1
        while @i <= 50000
        begin
        insert into tab7 values (@i, 1, 'a')
        set @i = @i + 1
        end
        commit tran
        go
        insert into tab7 values (50001, 1, 'a')
        go
        checkpoint
        go
        drop proc test_slow
        go
        create proc test_slow @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_slow 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        exec test_slow 2
        go
        drop proc test_with_recompile
        go
        create proc test_with_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        --low reads on 3rd execution as expected for parameter '2'
        exec test_with_recompile 2
        go
        drop proc test_with_alter_table_recompile
        go
        create proc test_with_alter_table_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create a constraint
        --but this might lead to duplicate constraint name error on concurrent usage
        alter table #temp1 add constraint test123 unique(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_alter_table_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_alter_table_recompile 2
        go
        drop proc test_with_index_recompile
        go
        create proc test_with_index_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create an index
        create index test on #temp1(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        set statistics time on
        set statistics io on
        dbcc dropcleanbuffers
        go
        --high reads as expected for parameter '1'
        exec test_with_index_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_index_recompile 2
        go

    Read the article

  • MySQL Enterprise Backup 3.8.2 has been released!

    - by Hema Sridharan
    MySQL Enterprise Backup v3.8.2, a maintenance release of the online MySQL backup tool, is now available for download from the My Oracle Support (MOS) website as our latest GA release. It will also be available via the Oracle Software Delivery Cloud in approximately 1-2 weeks. A brief summary of the changes in MySQL Enterprise Backup version 3.8.2 is given below.
    A. Functionality Added or Changed:
        - MySQL Enterprise Backup has a new --on-disk-full command line option. mysqlbackup could hang when the disk became full, rather than detecting the low space condition. mysqlbackup now monitors disk space when running backup commands, and users can now specify the action to take at a disk-full condition with the --on-disk-full option. For more details, refer to this page.
        - MySQL Enterprise Backup has a new progress report feature, which periodically outputs short progress indicators on its operations to user-selected destinations (for example, stdout, stderr, a file, or other choices). For more details on progress report options, refer here.
    B. Bugs Fixed:
        - When --innodb-file-per-table=ON, if a table was renamed and backup-to-image was in progress, apply-log would fail when being run on the backup. (Bug #16903973)
        - MySQL Server failed to start after a backup was restored if there had been online DDL transactions on partitioned tables during the time of backup. (Bug #16924499)
        - apply-log failed if ALTER TABLE ... REORGANIZE PARTITION was applied to partitioned InnoDB tables during backup. (Bug #16721824, Bug #16903951)
        - apply-incremental-backup might fail with an assertion error if the InnoDB tables being backed up were created in Barracuda format and with their KEY_BLOCK_SIZE values different from the innodb_page_size. This fix ensures that different KEY_BLOCK_SIZE values are handled properly during incremental backup and apply-incremental-backup operations.
        - If a table was renamed following a full backup, a subsequent incremental backup could copy the .frm file with the new name, but not the associated .ibd file with the new name. After a restore, the InnoDB data dictionary could be in an inconsistent state. This issue primarily occurred if the table was not changed between the full backup and the subsequent incremental backup. (Bug #16262690)
        - After a full backup, if a table was renamed and modified, apply-incremental-backup would crash when run on the backup directory. (Bug #16262609)
        - The value of the binary log position in backup_variables.txt could be different from the output displayed during the backup-and-apply-log operation. (This issue did not occur if the backup and apply-log steps were done separately.) (Bug #16195529)
        - When using the --only-innodb-with-frm option, MySQL Enterprise Backup tried to create temporary files at unintended locations in the file system, which might cause a failure when, for example, the user had no write privilege for those locations. This fix makes sure the paths for the temporary files are correct. (Bug #14787324)
        - A backup process might hang when it ran into an LSN mismatch between a data file and the redo log. This fix makes sure the process does not hang and that it displays an error message showing the name of the problematic data file. (Bug #14791645)
    Please post your questions / comments about Backup in the forums. Thanks, MEB Team

    Read the article

  • What causes this org.hibernate.MappingException?

    - by stacker
    I'm trying to configure an EJB3 sample application; its entities were mapped to Postgres, and now I want the app to run on JBoss 4.3 and Informix using JPA. If the DDL creation <property name="hibernate.hbm2ddl.auto" value="create"/> is active, this error appears:
        WARN [ServiceController] Problem starting service persistence.units:ear=weblog.ear,jar=weblog.jar,unitName=weblog
        javax.persistence.PersistenceException: [PersistenceUnit: weblog] Unable to build EntityManagerFactory
            at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:677)
            at org.hibernate.ejb.HibernatePersistence.createContainerEntityManagerFactory(HibernatePersistence.java:132)
            at org.jboss.ejb3.entity.PersistenceUnitDeployment.start(PersistenceUnitDeployment.java:246)
    followed by
        Caused by: org.hibernate.MappingException: No Dialect mapping for JDBC type: 2005
            at org.hibernate.dialect.TypeNames.get(TypeNames.java:56)
            at org.hibernate.dialect.TypeNames.get(TypeNames.java:81)
            at org.hibernate.dialect.Dialect.getTypeName(Dialect.java:291)
            at org.hibernate.mapping.Column.getSqlType(Column.java:182)
            at org.hibernate.mapping.Table.sqlCreateString(Table.java:394)
            at org.hibernate.cfg.Configuration.generateSchemaCreationScript(Configuration.java:854)
            at org.hibernate.tool.hbm2ddl.SchemaExport.<init>(SchemaExport.java:74)
            at org.hibernate.impl.SessionFactoryImpl.<init>(SessionFactoryImpl.java:311)
            at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1300)
            at org.hibernate.cfg.AnnotationConfiguration.buildSessionFactory(AnnotationConfiguration.java:874)
            at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:669)
    What does JDBC type: 2005 mean? Any idea how I can track down the entity/column that causes the problem? Thanks

    Read the article

  • sqlalchemy date type in 0.6 migration using mssql

    - by nosklo
    I'm connecting to MS SQL Server through pyodbc, via the FreeTDS ODBC driver, on Ubuntu Linux 10.04. SQLAlchemy 0.5 uses DATETIME for sqlalchemy.Date() fields. Now SQLAlchemy 0.6 uses DATE, but SQL Server 2000 doesn't have a DATE type. How can I make DATETIME be the default for sqlalchemy.Date() on the SQLAlchemy 0.6 mssql+pyodbc dialect? I'd like to keep it as clean as possible. Here's code to reproduce the issue:
        import sqlalchemy
        from sqlalchemy import Table, Column, MetaData, Date, Integer, create_engine

        engine = create_engine(
            'mssql+pyodbc://sa:sa@myserver/mydb?driver=FreeTDS')
        m = MetaData(bind=engine)
        tb = sqlalchemy.Table('test_date', m,
            Column('id', Integer, primary_key=True),
            Column('dt', Date())
        )
        tb.create()
    And here is the traceback I'm getting:
        Traceback (most recent call last):
          File "/tmp/teste.py", line 15, in <module>
            tb.create()
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/schema.py", line 428, in create
            bind.create(self, checkfirst=checkfirst)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1647, in create
            connection=connection, **kwargs)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1682, in _run_visitor
            **kwargs).traverse_single(element)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/sql/visitors.py", line 77, in traverse_single
            return meth(obj, **kw)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/ddl.py", line 58, in visit_table
            self.connection.execute(schema.CreateTable(table))
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1157, in execute
            params)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1210, in _execute_ddl
            return self.__execute_context(context)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1268, in __execute_context
            context.parameters[0], context=context)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1367, in _cursor_execute
            context)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1360, in _cursor_execute
            context)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/default.py", line 277, in do_execute
            cursor.execute(statement, parameters)
        sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('42000', '[42000] [FreeTDS][SQL Server]Column or parameter #2: Cannot find data type DATE. (2715) (SQLExecDirectW)') '\nCREATE TABLE test_date (\n\tid INTEGER NOT NULL IDENTITY(1,1), \n\tdt DATE NULL, \n\tPRIMARY KEY (id)\n)\n\n' ()

    Read the article
