Search Results

Search found 15621 results on 625 pages for 'creating'.

  • MySQL equivalent to .pgpass, or automatic authentication in a cron job for mySQL

    - by Ibrahim
    I'm writing a bash script to back up my databases. Most are postgresql, and in postgres there's a way to avoid having to authenticate by creating a ~/.pgpass file which contains the postgres password. I put this in root's home directory and made it chmod 0600, so that root could dump the postgres databases without having to authenticate. Now I want to do something similar for mysql, although I only have one mysql database. How can I do this? I don't want to specify the password on the command line for mysqldump because this is part of a script that might be somewhat visible to other users. Is there a better way (i.e. built in to mysql) to do this than make a file that only root can read and then read that to get the mysql password, and then use that in the bash script as a variable?
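    One approach, roughly analogous to .pgpass, is a root-only MySQL option file read by mysqldump; a minimal sketch (file location and credentials are illustrative, not taken from the question):

      # /root/.my.cnf, owned by root, chmod 0600
      [client]
      user=root
      password=YourSecretPassword

      # in the backup script; --defaults-extra-file must be the first option,
      # and no password appears on the command line or in the process list
      mysqldump --defaults-extra-file=/root/.my.cnf --all-databases > /root/backup/all.sql

    mysqldump also reads ~/.my.cnf of the invoking user automatically, so the explicit --defaults-extra-file is only needed if the file lives elsewhere.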

    Read the article

  • Running a scheduled task as SYSTEM with console window open

    - by raoulsson
    I am auto-creating scheduled tasks with this line within a Windows batch script: schtasks /Create /RU SYSTEM /RP SYSTEM /TN startup-task-%%i /TR %SPEEDWAY_DIR%\%TARGET_DIR%%%i\%STARTUPFILE% /SC HOURLY /MO 1 /ST 17:%%i1:00 I wanted to avoid using specific user credentials and thus decided to use SYSTEM. Now, when checking the Task Manager process list or, even better, the C:\> schtasks command itself, all is working well and the tasks are running as intended. However, in this particular case I would like to have an open console window where I can see the log flying by. I know I could use C:\> tail -f thelogfile.log if I installed e.g. Cygwin (on all machines) or a proprietary tool like Baretail on Windows. But since I only switch to these machines in case of trouble, I would prefer to start the scheduled task in such a way that every user immediately sees the log. Any chance? Thanks!
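    For just watching the log without installing Cygwin or Baretail, PowerShell can follow a file much like tail -f; a minimal sketch (the log path is illustrative; the -Tail parameter exists only in PowerShell 3.0+, so it is omitted here):

      powershell -NoExit -Command "Get-Content 'C:\logs\thelogfile.log' -Wait"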

    Read the article

  • many partitions on a single filegroup - does it make sense?

    - by river0
    Hi, I'm designing a data warehouse solution and I'm a newbie in disk configuration issues, so let me explain. Our storage is spread over 6 storage enclosures, each of them holding 5 RAID-1 disk arrays, with 2 LUNs defined per disk array, which makes a total of 48 LUNs (this follows the Microsoft Fast Track recommendations for data warehouse architectures). I would like to partition my data; on other projects I have worked on before, we always followed a 1 partition - 1 filegroup rule. The Microsoft Fast Track recommendations advise creating a filegroup and then, for that filegroup, a data file per LUN... but I intend to have week-level partitioning; if I apply that rule I think I'll get too many files and a complex layout. I'm thinking of creating just one filegroup (with the 48 LUN data files), but still creating the partitions, since I want to keep some of the benefits of partitions like partition switching... Is this scenario not recommended? What would you suggest?
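    For reference, a single-filegroup layout of the kind described above is declared with ALL TO in the partition scheme, and partition switching still works as long as the source and target sit on the same filegroup; a minimal T-SQL sketch (names, boundary dates and the filegroup are illustrative):

      -- weekly partition boundaries, all partitions mapped to one filegroup
      CREATE PARTITION FUNCTION pfWeekly (datetime)
      AS RANGE RIGHT FOR VALUES ('2010-01-04', '2010-01-11', '2010-01-18');

      CREATE PARTITION SCHEME psWeekly
      AS PARTITION pfWeekly ALL TO ([FactData]);

      CREATE TABLE dbo.FactSales (SaleDate datetime NOT NULL, Amount money)
      ON psWeekly (SaleDate);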

    Read the article

  • Git - post-receive hook with git pull "Failed to find a valid git directory"

    - by ludicco
    It's very weird, but when setting up a git repository and creating a post-receive hook with:

      echo "--initializing hook--"
      cd ~/websites/testing
      echo "--prepare update--"
      git pull
      echo "--update completed--"

    the hook does run, but it never manages to run git pull properly:

      6bfa32c..71c3d2a  master -> master
      --initializing hook--
      --prepare update--
      fatal: Not a git repository: '.'
      Failed to find a valid git directory.
      --update completed--

    So I'm asking myself now: how is it possible to make the hook update the clone with post-receive? In this case the user running the processes is the same, and everything is inside the user's folder, so I really don't understand... because if I go manually into

      cd ~/websites/testing
      git pull

    it works without any problem... Any help on that would be pretty much appreciated. Thanks a lot
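    A common cause of this symptom is that git runs hooks with GIT_DIR already set, so the git pull in the target directory still points at the original repository; a hedged sketch of the hook with that variable cleared (assuming the working clone really lives in ~/websites/testing):

      #!/bin/sh
      echo "--initializing hook--"
      unset GIT_DIR                      # hooks inherit GIT_DIR, which breaks git commands run in another repo
      cd ~/websites/testing || exit 1
      echo "--prepare update--"
      git pull
      echo "--update completed--"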

    Read the article

  • CodePlex Daily Summary for Tuesday, June 15, 2010

    CodePlex Daily Summary for Tuesday, June 15, 2010

    New Projects
    - Backup on Build: Backup critical files on each Visual Studio build.
    - CDN Support for EPiServer CMS: This module adds CDN support for EPiServer CMS by modifying outgoing links.
    - Custom WCF Bindings: This project contains some custom WCF LOB SDK bindings I have created, including a SalesForce one. I will blog on updates as they occur. Please se...
    - DocIcon for SharePoint 2010: DocIcon for SharePoint re-enables links from document icons in SharePoint 2010. This feature was in previous versions of SharePoint, but was remove...
    - EEG Peak Detection: EEG Peak Detection
    - employeemanagement1: employeemanagement1
    - Enables map services on top of existing map providers like Google Maps: Services include map visualization services, map decoration services, spot registration services and spot naming services.
    - fbprivacy: Tool to assess your Facebook Privacy Settings
    - fMRI SVM Toolbox: fMRI SVM Toolbox
    - GCMS – using .Net for human CMS: GCMS makes only what you need to do with a CMS and nothing more, and it makes it with .NET
    - qjblog: My First Blog.
    - Send2Sharepoint: Office (Word, Excel, Outlook) and Windows Explorer addin to upload documents to a SharePoint document library.
    - SharePoint Find and Replace: SharePoint Find & Replace allows you to replace a specific string within a site collection with a different value. For example, when you change a l...
    - SharePoint Management Studio: This project is developed in Visual Studio 2008 and the C# language. The main aim is to manage your SharePoint 2007 FARM.
    - SharePoint PageController: A SharePoint solution which provides an extensible framework to perform actions on a per-page basis in SharePoint. OOTB functionality allows for f...
    - SilverNotePad: Simple notepad built using the MVVM pattern.
    - SolidWorks Addin Development: The SolidWorks Addin Development project is dedicated to helping developers and non-developers with creating fully functional addins.
    - Sunlit World Scheme: Sunlit World Scheme is a nearly R4RS-compliant Scheme implementation that supports threading, TCP, UDP, cryptography, and simple graphics and windo...
    - TimeBend: Time tracking gone wild.
    - TinyCMS: A simple CMS with support for entering news, links and competition notices. The CSS is only rough for now... The application was built for the dev4Fun competition.
    - Ujimanet Android: text categorization tool for Android
    - Visual Storm Engine: Visual Storm is an engine for trying out new technologies aimed at video game creation. For now it only supports Windows Vista/7 and uses Dire...

    New Releases
    - Ajax ASP.Net Forum: developer.insecla.com-forum_v0.1.4: *VERSION: 0.1.4* Features added: rating threads (through AjaxCTK, included MS .DLL in the BIN folder), empowered within AJAX, custom star ima...
    - AlphaGet: Alpha 3: Important: from this release WinGet changes its name to AlphaGet in order to identify it better and make it search-engine friendly. New features: N...
    - Backup on Build: Backup on Build v1.0.0 Initial Release: Initial release, version 1.0.0
    - Boleto.Net: BoletoNet: Latest stable version of BoletoNet.dll. The source code for this version can be found at http://boletonet.codeplex.com/SourceControl/list/change...
    - CDN Support for EPiServer CMS: CDN Support v1: See links on the start page for information on how to use and install this module.
    - Chargify.NET: Chargify.NET v0.750: Adding support for creating freemium subscription plans, adding preliminary JSON support, adding ISO 3166-1 Alpha 2 data embedded in the library ...
    - Community Forums NNTP bridge: Community Forums NNTP Bridge V38: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has ad...
    - ContainerOne - C# application server: V0.1.3.0: New minor release containing: Infrastructure - core service. An installer of a Windows service which provides the following: service registry, even...
    - DocIcon for SharePoint 2010: wwEsp.DocIcon Deployment Package Release 1.0.0: This package installs the DocIcon feature on a SharePoint 2010 server farm. The solution is deployed as a Farm-level feature that can be enabled or...
    - Ethical Hacking ASP.NET: Version 1.2.0.0: For the complete list of changes, new features and fixes in the new version, please view the Version History page. Read more about the available te...
    - Folder Bookmarks: Folder Bookmarks 1.6.3: The latest version of Folder Bookmarks (1.6.3), with new features and Mini-Menu UI changes (1.4). Once you have extracted the file, do not delete ...
    - Hades: Projet Hadès - Official Demo - Version 0.1.1 Beta: Second release correcting some bugs...
    - KooBoo Image Gallery: RC 1: This new version has these features: 1) refactoring to change the misspelled word galery to gallery, 2) change to use the plugin in the same page of ...
    - LibWowArmory: LibWowArmory 0.3 beta: This release of the LibWowArmory source code matches the WoW Armory as of version 3.3.3. Changes since version 0.2.3: Solution...
    - MailChimp4Umbraco: 0.90 stable: Can be used in production
    - MapWindow6: MapWindow 6.0 June 14: This version adds the WebMercator projection and fixes a bug that was causing some perfect spheres to be created as oblate WGS1984 spheroids.
    - MDownloader: MDownloader-0.15.18.59782: Supported FileServe. Supported SharingMatrix. Fixed minor bugs.
    - MGM - MyGroupManager: MyGroupManager v0.1.5 - Alpha: At this point the application appears feature complete and works pretty well. The code still needs some tweaks (error handling), and a general look...
    - MvcPager: MvcPager 1.4: MvcPager 1.4 source code and demo projects
    - Nito.LINQ: Beta (v0.6): Rx version: the "with Rx" versions of Nito.LINQ are built against Rx 1.0.2563.0, released 2010-06-09. Supported platforms: .NET 4.0 Client Profile, ...
    - open gaze and mouse analyzer: Ogama 3.3: This release was published on 14.06.2010 and is a bugfix release. For the list of changes please visit http://www.ogama.net. Only use this installe...
    - patterns & practices: Prism: Prism 4.0 Drop 2: Welcome to the second drop of Prism 4.0 (formerly known as the Composite Application Guidance for WPF and Silverlight). This drop ...
    - Prism Software Factory Light: 0.5 Beta: 4ward Prism Software Factory Light - 0.5 Beta release. This is the first public beta release of the 4ward Prism Software Factory Light that allows to...
    - PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT: PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT--3.3: Over the last several months, my primary research effort has been directed at producing strictly portable development methods between C and C#. T...
    - qjblog: v1.source: v1.source
    - qjblog: v1blog: V1 Blog
    - Quick Performance Monitor: Version 1.4.2: Added 'Move to new window' functionality.
    - Refix - .NET dependency management: Refix v0.1.0.90 ALPHA: Added console tree-style visualisation of solution dependencies, as well as some bug fixes. This version should work out of the box with the demons...
    - SEMICO Framework: Version Stable 1.0.0.3: Version Stable 1.0.0.3
    - SharePoint Find and Replace: 1.0.16: Version 1.0.16. This release is the first stable release of this project, including the Microsoft public license agreement. Fixes: added about dia...
    - SharePoint Management Studio: v1: v1
    - SharePoint PageController: SharePoint PageController: For SharePoint 2010 and 2007 running on IIS 7
    - Software Is Hardwork: Sw. Is Hw. Lib. 3.0.0.x+06: Sw. Is Hw. Lib. 3.0.0.x+06
    - SolidWorks Addin Development: GenericAddinFramework-06.14.2010: R1.
    - SourceGrid: SourceGrid 4.30: Sources are here. Note that SourceGrid sources are not hosted on CodePlex; the sources are hosted on bitbucket.org. Main changes: improved hidden ...
    - SSIS Expression Editor & Tester: Expression Editor and Tester v1.0.2.0: Corrected release of the expression editor tool, no changes to the control. Download and extract the files to get started, no install required. Changes Co...
    - Sunlit World Scheme: Sunlit World Scheme - 20100614 - source and binary: This is the result of building the current source code in Debug mode. The source code is included.
    - TinyCMS: TinyCMS: Source code
    - VCC: Latest build, v2.1.30614.0: Automatic drop of the latest build
    - VianaNET - Videoanalysis for physical motion: VianaNET 1.2 - beta: This is the VianaNET beta release with some bug fixes. Would like to have some comments on it. Regards, Adrian
    - WorkLogger: Worklogger Beta 1: Simple work logger for Windows in WPF

    Most Popular Projects
    - WBFS Manager
    - Rawr
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - Windows Presentation Foundation (WPF)
    - patterns & practices – Enterprise Library
    - PHPExcel
    - Microsoft SQL Server Community & Samples
    - ASP.NET

    Most Active Projects
    - patterns & practices – Enterprise Library
    - Rhyduino - Arduino and Managed Code
    - Cassandraemon
    - Community Forums NNTP bridge
    - dotSpatial
    - jQuery Library for SharePoint Web Services
    - BlogEngine.NET
    - Lightweight Fluent Workflow
    - NB_Store - Free DotNetNuke Ecommerce Catalog Module
    - Umbraco CMS

    Read the article

  • Radius Authorization against ActiveDirectory and the users file

    - by mohrphium
    I have a problem with my freeradius server configuration. I want to be able to authenticate users against Windows ActiveDirectory (2008 R2) and the users file, because some of my co-workers are not listed in AD. We use the freeradius server to authenticate WLAN users. (PEAP/MSCHAPv2) AD Authentication works great, but I still have problems with the /etc/freeradius/users file When I run freeradius -X -x I get the following: Mon Jul 2 09:15:58 2012 : Info: ++++[chap] returns noop Mon Jul 2 09:15:58 2012 : Info: ++++[mschap] returns noop Mon Jul 2 09:15:58 2012 : Info: [suffix] No '@' in User-Name = "testtest", looking up realm NULL Mon Jul 2 09:15:58 2012 : Info: [suffix] Found realm "NULL" Mon Jul 2 09:15:58 2012 : Info: [suffix] Adding Stripped-User-Name = "testtest" Mon Jul 2 09:15:58 2012 : Info: [suffix] Adding Realm = "NULL" Mon Jul 2 09:15:58 2012 : Info: [suffix] Authentication realm is LOCAL. Mon Jul 2 09:15:58 2012 : Info: ++++[suffix] returns ok Mon Jul 2 09:15:58 2012 : Info: [eap] EAP packet type response id 1 length 13 Mon Jul 2 09:15:58 2012 : Info: [eap] No EAP Start, assuming it's an on-going EAP conversation Mon Jul 2 09:15:58 2012 : Info: ++++[eap] returns updated Mon Jul 2 09:15:58 2012 : Info: [files] users: Matched entry testtest at line 1 Mon Jul 2 09:15:58 2012 : Info: ++++[files] returns ok Mon Jul 2 09:15:58 2012 : Info: ++++[expiration] returns noop Mon Jul 2 09:15:58 2012 : Info: ++++[logintime] returns noop Mon Jul 2 09:15:58 2012 : Info: [pap] WARNING: Auth-Type already set. Not setting to PAP Mon Jul 2 09:15:58 2012 : Info: ++++[pap] returns noop Mon Jul 2 09:15:58 2012 : Info: +++- else else returns updated Mon Jul 2 09:15:58 2012 : Info: ++- else else returns updated Mon Jul 2 09:15:58 2012 : Info: Found Auth-Type = EAP Mon Jul 2 09:15:58 2012 : Info: # Executing group from file /etc/freeradius/sites-enabled/default Mon Jul 2 09:15:58 2012 : Info: +- entering group authenticate {...} Mon Jul 2 09:15:58 2012 : Info: [eap] EAP Identity Mon Jul 2 09:15:58 2012 : Info: [eap] processing type tls Mon Jul 2 09:15:58 2012 : Info: [tls] Initiate Mon Jul 2 09:15:58 2012 : Info: [tls] Start returned 1 Mon Jul 2 09:15:58 2012 : Info: ++[eap] returns handled Sending Access-Challenge of id 199 to 192.168.61.11 port 3072 EAP-Message = 0x010200061920 Message-Authenticator = 0x00000000000000000000000000000000 State = 0x85469e2a854487589fb1196910cb8ae3 Mon Jul 2 09:15:58 2012 : Info: Finished request 125. Mon Jul 2 09:15:58 2012 : Debug: Going to the next request Mon Jul 2 09:15:58 2012 : Debug: Waking up in 2.4 seconds. After that it repeats the login attempt and at some point tries to authenticate against ActiveDirectory with ntlm, which doesn't work since the user exists only in the users file. Can someone help me out here? Thanks. PS: Hope this helps, freeradius trying to auth against AD: Mon Jul 2 09:15:58 2012 : Info: ++[chap] returns noop Mon Jul 2 09:15:58 2012 : Info: ++[mschap] returns noop Mon Jul 2 09:15:58 2012 : Info: [suffix] No '@' in User-Name = "testtest", looking up realm NULL Mon Jul 2 09:15:58 2012 : Info: [suffix] Found realm "NULL" Mon Jul 2 09:15:58 2012 : Info: [suffix] Adding Stripped-User-Name = "testtest" Mon Jul 2 09:15:58 2012 : Info: [suffix] Adding Realm = "NULL" Mon Jul 2 09:15:58 2012 : Info: [suffix] Authentication realm is LOCAL. 
Mon Jul 2 09:15:58 2012 : Info: ++[suffix] returns ok Mon Jul 2 09:15:58 2012 : Info: ++[control] returns ok Mon Jul 2 09:15:58 2012 : Info: [eap] EAP packet type response id 7 length 67 Mon Jul 2 09:15:58 2012 : Info: [eap] No EAP Start, assuming it's an on-going EAP conversation Mon Jul 2 09:15:58 2012 : Info: ++[eap] returns updated Mon Jul 2 09:15:58 2012 : Info: [files] users: Matched entry testtest at line 1 Mon Jul 2 09:15:58 2012 : Info: ++[files] returns ok Mon Jul 2 09:15:58 2012 : Info: ++[smbpasswd] returns notfound Mon Jul 2 09:15:58 2012 : Info: ++[expiration] returns noop Mon Jul 2 09:15:58 2012 : Info: ++[logintime] returns noop Mon Jul 2 09:15:58 2012 : Info: [pap] WARNING: Auth-Type already set. Not setting to PAP Mon Jul 2 09:15:58 2012 : Info: ++[pap] returns noop Mon Jul 2 09:15:58 2012 : Info: Found Auth-Type = EAP Mon Jul 2 09:15:58 2012 : Info: # Executing group from file /etc/freeradius/sites-enabled/inner-tunnel Mon Jul 2 09:15:58 2012 : Info: +- entering group authenticate {...} Mon Jul 2 09:15:58 2012 : Info: [eap] Request found, released from the list Mon Jul 2 09:15:58 2012 : Info: [eap] EAP/mschapv2 Mon Jul 2 09:15:58 2012 : Info: [eap] processing type mschapv2 Mon Jul 2 09:15:58 2012 : Info: [mschapv2] # Executing group from file /etc/freeradius/sites-enabled/inner-tunnel Mon Jul 2 09:15:58 2012 : Info: [mschapv2] +- entering group MS-CHAP {...} Mon Jul 2 09:15:58 2012 : Info: [mschap] Creating challenge hash with username: testtest Mon Jul 2 09:15:58 2012 : Info: [mschap] Told to do MS-CHAPv2 for testtest with NT-Password Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: --username=%{mschap:User-Name:-None} -> --username=testtest Mon Jul 2 09:15:58 2012 : Info: [mschap] No NT-Domain was found in the User-Name. Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: %{mschap:NT-Domain} -> Mon Jul 2 09:15:58 2012 : Info: [mschap] ... expanding second conditional Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: --domain=%{%{mschap:NT-Domain}:-AD.CXO.NAME} -> --domain=AD.CXO.NAME Mon Jul 2 09:15:58 2012 : Info: [mschap] mschap2: 82 Mon Jul 2 09:15:58 2012 : Info: [mschap] Creating challenge hash with username: testtest Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: --challenge=%{mschap:Challenge:-00} -> --challenge=dd441972f987d68b Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: --nt-response=%{mschap:NT-Response:-00} -> --nt-response=7e6c537cd5c26093789cf7831715d378e16ea3e6c5b1f579 Mon Jul 2 09:15:58 2012 : Debug: Exec-Program output: Logon failure (0xc000006d) Mon Jul 2 09:15:58 2012 : Debug: Exec-Program-Wait: plaintext: Logon failure (0xc000006d) Mon Jul 2 09:15:58 2012 : Debug: Exec-Program: returned: 1 Mon Jul 2 09:15:58 2012 : Info: [mschap] External script failed. Mon Jul 2 09:15:58 2012 : Info: [mschap] FAILED: MS-CHAP2-Response is incorrect Mon Jul 2 09:15:58 2012 : Info: ++[mschap] returns reject Mon Jul 2 09:15:58 2012 : Info: [eap] Freeing handler Mon Jul 2 09:15:58 2012 : Info: ++[eap] returns reject Mon Jul 2 09:15:58 2012 : Info: Failed to authenticate the user. Mon Jul 2 09:15:58 2012 : Auth: Login incorrect (mschap: External script says Logon failure (0xc000006d)): [testtest] (from client techap01 port 0 via TLS tunnel) PPS: Maybe the problem is located here: In /etc/freeradius/modules/ntlm_auth I have set ntlm to: program = "/usr/bin/ntlm_auth --request-nt-key --domain=AD.CXO.NAME --username=%{mschap:User-Name} --password=%{User-Password}" I need this, so users can login without adding @ad.cxo.name to their usernames. 
    But how can I tell freeradius to try both logins: testtest@ad.cxo.name (against AD - should fail) and testtest (against the users file - should work)?
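    For reference, a locally defined account in /etc/freeradius/users typically looks like the sketch below (the password is illustrative); whether the AD/ntlm_auth path is skipped for such entries still depends on how the mschap module is configured:

      testtest  Cleartext-Password := "SomeLocalPassword"
                Reply-Message = "Authenticated from the local users file"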

    Read the article

  • Resolving "PLS-00201: identifier 'DBMS_SYSTEM.XXXX' must be declared" Error

    - by Giri Mandalika
    Here is a failure sample.

      SQL> set serveroutput on
      SQL> alter package APPS.FND_TRACE compile body;
      Warning: Package Body altered with compilation errors.
      SQL> show errors
      Errors for PACKAGE BODY APPS.FND_TRACE:
      LINE/COL ERROR
      -------- -----------------------------------------------------------------
      235/6    PL/SQL: Statement ignored
      235/6    PLS-00201: identifier 'DBMS_SYSTEM.SET_EV' must be declared
      ..

    By default, the DBMS_SYSTEM package is accessible only from the SYS schema. Also there is no public synonym created for this package. So, the solution is to create the public synonym and grant "execute" privilege on the DBMS_SYSTEM package to all database users or a specific user. e.g.,

      SQL> CREATE PUBLIC SYNONYM dbms_system FOR dbms_system;
      Synonym created.
      SQL> GRANT EXECUTE ON dbms_system TO APPS;
      Grant succeeded.
      - OR -
      SQL> GRANT EXECUTE ON dbms_system TO PUBLIC;
      Grant succeeded.
      SQL> alter package APPS.FND_TRACE compile body;
      Package body altered.

    Note that merely granting execute privilege is not enough -- creating the public synonym is as important to resolve this issue.

    Read the article

  • Ionics Rewrite Filter setup on IIS 5.1

    - by Neil Aitken
    I'm trying to configure IIRF 2 on IIS 5.1 running on XP Pro, so that I can run the Zend Framework. I've managed to get the filter running on a second website that I set up using one of the IIS admin scripts. When I go to iirfStatus I get this: the problem is that the .ini path for the site is pointing to c:\windows\system32\Irif.ini rather than the site root. If I try creating an IIS application under IIS - Website Properties - Home Directory, then iirfStatus stops working entirely. Any ideas how I can set the ini path correctly, or will I only be able to get away with this on a proper server edition of IIS?

    Read the article

  • Profile of Scott L Newman

    - by Ratman21
    To: Whom It May Concern
    From: Scott L Newman
    Date: 4/23/2010
    Re: Profile

    Who is he, what can he do? Two very good questions.
    #1. I am a 20+ years experienced Information Technology Professional (hold on, don't hit delete yet!), who is not over the hill (I am on top of it) and still knows how to do (and can still do) that thing called work!
    #2. A can-do attitude that does not allow problems to sit unfixed.

    I have a broad range of skills, including:
    - Certified CompTIA A+, Security+ and Network+ Technician
    - 2.5 years (NOC) network experience on a large Cisco-based WAN - UK to Austria
    - 20 years experience in MIS/DP - yes, I can do IBM mainframes and Tandem NonStops too
    - 18 years experience as technical help desk support - panicking users, no problem
    - 18 years experience with PC/server based systems, intranet and internet systems
    - 10+ years experience with Microsoft Office, Windows XP and data network fundamentals (yes, I do Windows)
    - Strong troubleshooting skills for software, hardware and circuit issues (and I can tell you what kind of horrors I had to face on all of them)
    - Very experienced in working with customers on problems - again, panicking users, no problem
    - Working experience with remote access (VPN/SecurID) - I didn't just study them, I worked on/with them
    - Skilled in gathering info for and creating documentation for operation procedures (I do not just wait for them to give it to me, I go out and get it; waiting for info on working applications is, well, dumb)
    - Multiple software languages (hey, I have done some programming)
    - And much more experience in "IT" (mortgage, stocks and financial information systems experience, and I have worked "IT" in a hospital)

    Can multitask, and have the ability to adapt to change and learn quickly. (I was once put in charge of a system that I had not worked with for over two years. Talk about having to relearn and adapt to changes fast. But I did it.)

    The summary is that I know what to do, how to keep things going, and how to fix it when it breaks.

    Scott L. Newman
    Confidential

    Read the article

  • Where can I get an Adobe Flash Page Curl / Flip / Peel Effect tutorial?

    - by nutman
    I currently work on a brochure in InDesign. I export as an SWF and apply a page curl effect to it when publishing so I can put it online. It's great. However, I now want to take the brochure to the next level and make each page interactive. To do this I export from InDesign as an XFL (flash format). This gets the brochure into Flash, with each page on a separate frame. I can now make the page interactive; this is also great! All I need to do now is create the page flip effect again, because when you export to XFL you cannot embed a page flip effect on it. The trouble is I can't find a tutorial anywhere for creating this effect. Anyone know where I can find one?

    Read the article

  • Uploading and Importing CSV file to SQL Server in ASP.NET WebForms

    - by Vincent Maverick Durano
    A few weeks ago I was working with a small internal project that involved importing a CSV file into a SQL Server database, and I thought I'd share the simple implementation that I did on the project. In this post I will demonstrate how to upload and import a CSV file into a SQL Server database. As some may already know, importing a CSV file into SQL Server is easy and simple, but difficulties arise when the CSV file contains many columns with different data types. Basically, the provider cannot differentiate data types between the columns or the rows; it blindly assigns a data type based on the first few rows and drops all the data which does not match that data type. To overcome this problem, I used a schema.ini file to define the data types of the CSV file and allow the provider to read it and recognize the exact data type of each column. Now what is schema.ini? Taken from the documentation: the schema.ini is an information file, used to define the data structure and format of each column that contains data in the CSV file. If a schema.ini file exists in the directory, the Microsoft.Jet.OLEDB provider automatically reads it and recognizes the data type information of each column in the CSV file. Thus, the provider intelligently avoids the misinterpretation of data types before inserting the data into the database. For more information see: http://msdn.microsoft.com/en-us/library/ms709353%28VS.85%29.aspx

    Points to remember before creating schema.ini:
    1. The schema information file must always be named 'schema.ini'.
    2. The schema.ini file must be kept in the same directory where the CSV file exists.
    3. The schema.ini file must be created before reading the CSV file.
    4. The first line of the schema.ini must be the name of the CSV file, followed by the properties of the CSV file, and then the properties of each column in the CSV file.

    Here's an example of what the schema looks like:

      [Employee.csv]
      ColNameHeader=False
      Format=CSVDelimited
      DateTimeFormat=dd-MMM-yyyy
      Col1=EmployeeID Long
      Col2=EmployeeFirstName Text Width 100
      Col3=EmployeeLastName Text Width 50
      Col4=EmployeeEmailAddress Text Width 50

    To get started, let's go ahead and create a simple blank database. Just for the purpose of this demo I created a database called TestDB. After creating the database, let's fire up Visual Studio and create a new WebApplication project. Under the root application create a folder called UploadedCSVFiles and then place the schema.ini in that folder. The uploaded CSV files will be stored in this folder after the user imports the file. Now add a WebForm to the project, set up the HTML markup, and add one (1) FileUpload control, one (1) Button and three (3) Label controls. After that we can proceed with the code for uploading and importing the CSV file into the SQL Server database.
    Here are the full code blocks below:

      using System;
      using System.Data;
      using System.Data.SqlClient;
      using System.Data.OleDb;
      using System.IO;
      using System.Text;

      namespace WebApplication1
      {
          public partial class CSVToSQLImporting : System.Web.UI.Page
          {
              private string GetConnectionString()
              {
                  return System.Configuration.ConfigurationManager.ConnectionStrings["DBConnectionString"].ConnectionString;
              }
              private void CreateDatabaseTable(DataTable dt, string tableName)
              {
                  string sqlQuery = string.Empty;
                  string sqlDBType = string.Empty;
                  string dataType = string.Empty;
                  int maxLength = 0;
                  StringBuilder sb = new StringBuilder();

                  sb.AppendFormat(string.Format("CREATE TABLE {0} (", tableName));

                  for (int i = 0; i < dt.Columns.Count; i++)
                  {
                      dataType = dt.Columns[i].DataType.ToString();
                      if (dataType == "System.Int32")
                      {
                          sqlDBType = "INT";
                      }
                      else if (dataType == "System.String")
                      {
                          sqlDBType = "NVARCHAR";
                          maxLength = dt.Columns[i].MaxLength;
                      }

                      if (maxLength > 0)
                      {
                          sb.AppendFormat(string.Format(" {0} {1} ({2}), ", dt.Columns[i].ColumnName, sqlDBType, maxLength));
                      }
                      else
                      {
                          sb.AppendFormat(string.Format(" {0} {1}, ", dt.Columns[i].ColumnName, sqlDBType));
                      }
                  }

                  sqlQuery = sb.ToString();
                  sqlQuery = sqlQuery.Trim().TrimEnd(',');
                  sqlQuery = sqlQuery + " )";

                  using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
                  {
                      sqlConn.Open();
                      SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                      sqlCmd.ExecuteNonQuery();
                      sqlConn.Close();
                  }
              }
              private void LoadDataToDatabase(string tableName, string fileFullPath, string delimeter)
              {
                  string sqlQuery = string.Empty;
                  StringBuilder sb = new StringBuilder();

                  sb.AppendFormat(string.Format("BULK INSERT {0} ", tableName));
                  sb.AppendFormat(string.Format(" FROM '{0}'", fileFullPath));
                  sb.AppendFormat(string.Format(" WITH ( FIELDTERMINATOR = '{0}' , ROWTERMINATOR = '\n' )", delimeter));

                  sqlQuery = sb.ToString();

                  using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
                  {
                      sqlConn.Open();
                      SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                      sqlCmd.ExecuteNonQuery();
                      sqlConn.Close();
                  }
              }
              protected void Page_Load(object sender, EventArgs e)
              {
              }
              protected void BTNImport_Click(object sender, EventArgs e)
              {
                  if (FileUpload1.HasFile)
                  {
                      FileInfo fileInfo = new FileInfo(FileUpload1.PostedFile.FileName);
                      if (fileInfo.Name.Contains(".csv"))
                      {
                          string fileName = fileInfo.Name.Replace(".csv", "").ToString();
                          string csvFilePath = Server.MapPath("UploadedCSVFiles") + "\\" + fileInfo.Name;

                          //Save the CSV file on the server inside 'UploadedCSVFiles'
                          FileUpload1.SaveAs(csvFilePath);

                          //Fetch the location of the CSV file
                          string filePath = Server.MapPath("UploadedCSVFiles") + "\\";
                          string strSql = "SELECT * FROM [" + fileInfo.Name + "]";
                          string strCSVConnString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + filePath + ";" + "Extended Properties='text;HDR=YES;'";

                          // load the data from CSV to DataTable
                          OleDbDataAdapter adapter = new OleDbDataAdapter(strSql, strCSVConnString);
                          DataTable dtCSV = new DataTable();
                          DataTable dtSchema = new DataTable();

                          adapter.FillSchema(dtCSV, SchemaType.Mapped);
                          adapter.Fill(dtCSV);

                          if (dtCSV.Rows.Count > 0)
                          {
                              CreateDatabaseTable(dtCSV, fileName);
                              Label2.Text = string.Format("The table ({0}) has been successfully created to the database.", fileName);

                              string fileFullPath = filePath + fileInfo.Name;
                              LoadDataToDatabase(fileName, fileFullPath, ",");

                              Label1.Text = string.Format("({0}) records has been loaded to the table {1}.", dtCSV.Rows.Count, fileName);
                          }
                          else
                          {
                              LBLError.Text = "File is empty.";
                          }
                      }
                      else
                      {
                          LBLError.Text = "Unable to recognize file.";
                      }
                  }
              }
          }
      }

    The code above consists of three (3) private methods: GetConnectionString(), CreateDatabaseTable() and LoadDataToDatabase(). GetConnectionString() is a method that returns a string; it basically gets the connection string that is configured in the web.config file. CreateDatabaseTable() is a method that accepts two (2) parameters, the DataTable and the filename. As the method name suggests, it automatically creates a table in the database based on the source DataTable and the filename of the CSV file. LoadDataToDatabase() is a method that accepts three (3) parameters: the tableName, the fileFullPath and the delimiter value. This method is where the actual saving or importing of data from the CSV into SQL Server happens. The code in the BTNImport_Click event handles the uploading of the CSV file to the specified location, and at the same time this is where CreateDatabaseTable() and LoadDataToDatabase() are called. If you notice, I also added some basic error trapping and validation within that event.

    Now, to test the importing utility, let's create some simple data in CSV format. Just for the simplicity of this demo, let's create a CSV file, name it "Employee", and add some data to it. Here's an example below:

      1,VMS,Durano,[email protected]
      2,Jennifer,Cortes,[email protected]
      3,Xhaiden,Durano,[email protected]
      4,Angel,Santos,[email protected]
      5,Kier,Binks,[email protected]
      6,Erika,Bird,[email protected]
      7,Vianne,Durano,[email protected]
      8,Lilibeth,Tree,[email protected]
      9,Bon,Bolger,[email protected]
      10,Brian,Jones,[email protected]

    Now save the newly created CSV file somewhere on your hard drive. Okay, let's run the application and browse to the CSV file that we have just created. Take a look at the sample screen shots below: after browsing the CSV file; after clicking the Import button. Now if we look at the database that we created earlier, you'll notice that the Employee table was created with the imported data in it (see the screen shot). That's it! I hope someone finds this post useful!

    Technorati Tags: ASP.NET,CSV,SQL,C#,ADO.NET

    Read the article

  • build a Database from Ms Word list information...

    - by Jayron Soares
    Please, can someone advise me how to approach a given problem: I have a sequential list of metadata in an MS Word document. The basic idea is to create a Python algorithm to iterate over the information, retrieving just the name of each PROCESS so that it can be queued into a database. For example:

      Process: Process Walker (1965)
      Exact reference: Walker Process Equipment, Inc. v. Food Machinery Corp.
      Link: http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=US&vol=382&invol=
      Type of procedure: Certiorari to the United States Court of Appeals for the Seventh Circuit.
      Parties: Walker Process Equipment, Inc.
      Sector: Systems is …
      Start Date: argued October 12-13, 1965
      Summary: Food Machinery Company has initiated a process to stop or slow the entry of competitors through the use of a patent obtained by fraud. The case concerned a patent on "knee action swing diffusers" used in aeration equipment for sewage treatment systems, and the question was whether "the maintenance and enforcement of a patent obtained by fraud before the patent office" may be a basis for antitrust punishment.
      Report of the evolution process: petitioner, in answer to respond...
      Importance: a) First case which established an analysis for the diagnosis of dispute…

    There are about 200 pages containing the information above. I have in mind the idea of creating an algorithm in Python to be able to break up this sequenced information and try to store it in a web database [an open source application that I'm looking for] in order to allow free consultation...
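    As a rough starting point, a Python sketch of that iteration (it assumes the document is saved as .docx, the python-docx package is installed, and the records use the "Process:" label shown above; the filename is illustrative):

      import re
      from docx import Document

      doc = Document("metadata.docx")              # illustrative filename
      process_names = []
      for para in doc.paragraphs:
          match = re.match(r"^\s*Process:\s*(.+)$", para.text)
          if match:
              process_names.append(match.group(1).strip())

      for name in process_names:
          print(name)                              # or queue/INSERT into the database here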

    Read the article

  • Winnipeg Code Camp&ndash;Session Announcement

    - by D'Arcy Lussier
    I’ve been updating the Winnipeg Code Camp website over the last few weeks with sessions and speakers as we’ve added them, and I’m happy to announce the full set of sessions!* We have a very interesting mix this year with new speakers and varied technologies! Remember this is a *FREE* event, so head over to our website to find out how to register for what will be a fantastic code camp!
    *OK, so we still have one session that needs an official title, and one session that’s still TBA… but close enough. ;)

    - What's New in Entity Framework 4 - Aaron Kowall
    - Easy Automation Setup for Everyday Projects - Amir Barylko
    - Hackerspaces Everywhere! Winnipeg: Our Time is Now - Andrew Orr
    - C# Ninjitsu - Chris Eargle
    - Code like a Ninja: Enhance Your Productivity with VS.NET & JustCode - Chris Eargle
    - Scala Language Tour - Craig Tataryn
    - WP7 - Creating a Data Driven App - D'Arcy Lussier
    - TBA (WordPress Related) - Dan Bernardic
    - WP7 Development Foundation - D'Arcy Lussier
    - HTML5 for .NET Pros - Dave Wesst
    - Turbocharge Your Manual Testing Process with VS 2010 - Dylan Smith
    - Develop Visual Studio 2010 Extensions - Twitter Studio - George Chen
    - Functionality Driven Development with Asp .Net MVC - George Chen & Sean Bennett
    - Web Development for Mobile Devices - Kelly Cassidy
    - Intro to Nmap Security Scanner - Mak Kolybabi
    - My Personal Top 10 SQL Habits Good and Bad - Mike Diehl
    - Stupid Mistakes Made By Smart People - Ron Bowes
    - Intro to jQuery - Stefan Penner
    - Taking Your WP7 Application to the Next Level with Tombstoning - Tyler Doerksen
    - Coming Soon! - Tyler Doerksen

    Read the article

  • Free space not reclaimed after online resizing ext4 in Ubuntu 9.10

    - by TiansHUo
    My root partition was filling up, with only 500 MB left, so I wanted to resize it from 20 GB to 40 GB. I resized the partition using these steps:

    1. Used GParted to resize another partition to make space for the ext4 partition
    2. Used fdisk to delete the root partition (on /dev/sda2) and create it again with the new size
    3. Ran resize2fs /dev/sda2
    4. Updated GRUB 2

    But now the problem is that although I can boot into the new partition and it shows as 40 GB, the free space is still 500 MB. So I booted from a LiveCD and checked with e2fsck -p /dev/sda2; it reported clean. So I added the -f flag (force check); still, the drive is full.
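    As a quick sanity check on whether the filesystem actually grew to fill the new partition, something like the sketch below can help (same device names as above; run from the LiveCD if the filesystem cannot be taken offline otherwise):

      sudo e2fsck -f /dev/sda2
      sudo resize2fs /dev/sda2        # with no size argument it grows to fill the whole partition
      sudo tune2fs -l /dev/sda2 | grep 'Block count'
      df -h /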

    Read the article

  • Oracle Social Network -The Social Glue for Enterprise Applications

    - by kellsey.ruppel
    by Peter Reiser - Social Business Evangelist, Oracle WebCenter. Tom Petrocelli of Enterprise Strategy Group recently published a report, “Oracle Social Network: The Social Glue for Enterprise Applications”, on Oracle Social Network (OSN) and how traditional social products create social silos whereas OSN is the “social glue” for enterprise applications. This report supports the point of Oracle’s Social Business Strategy: to seamlessly integrate social capabilities into the main business processes. Quote from the report: “Oracle has adopted the correct approach to creating a social layer and socially enabled applications. Oracle Social Network is not simply another enterprise social network product; it is a complete social layer for the enterprise application stack. This approach will serve Oracle users well in the future.” OSN allows the related Conversations of a business process to be captured right where they happen – within the respective business application. Fusion CRM is an excellent example of this approach. Quote from the report: “Oracle’s new software, Oracle Social Network, is an example of a solution to the silo problem. While Oracle fields a typical enterprise social network application with microblogging, file sharing, shared documents or wikis, and activity streams, the front-end application is only a small part of what Oracle Social Network does. Instead, Oracle Social Network is a platform that provides social features as a service to other enterprise applications. In effect, Oracle Social Network socially enables all of Oracle’s enterprise applications—all enterprise applications really—with not only the same features, but also the same conversations. As a result, the social conversations act as a conduit for inter-application communication and collaboration.” Source: ESG Research Report, Oracle Social Network: The Social Glue for Enterprise Applications, August 2012. You can download the report here.

    Read the article

  • 'rman' cheat-sheet and rlwrap completion

    - by katsumii
    I started using 'rlwrap' some months ago, like one of my colleagues does: bash-like features in sqlplus, rman and other Oracle command line tools (Oracle Luxembourg Core Tech' Blog by Gilles Haro). One can find a specific Oracle extension for databases 9i, 10g and 11g (keyword textfile) over here. This saves you the need to create the .oracle_keywords file yourself. There is an 'rman' keyword file in the link above. I experimented a little and found some missing keywords, which are:

      MAXCORRUPTION PRIMARY NOCFAU VIRTUAL COMPRESSION FOREIGN

    With these words added, 'rman' works like this:

      $ rlwrap -f ~/rman $ORACLE_HOME/bin/rman
      Recovery Manager: Release 11.2.0.3.0 - Production on Mon Dec 3 02:56:04 2012
      Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
      RMAN> <-- Hit TAB
      Display all 211 possibilities? (y or n)

    As you can guess, this completion is not context aware. I found these missing words by creating a kind of 'cheat sheet' for rman with the script below. The sheet contains the list of verbs and first operands. I uploaded it here, so one can create a coffee cup with a lot of esoteric words printed on it :)

      validWords() {
        sed -n 's/^RMAN-01009: syntax error: found "identifier": expecting one of: //p' \
        | sed -r 's/double-quoted-string, single-quoted-string/Some String/;s/, /" "/g;s/""//'
      }
      echo "Bogus" | rman | validWords > /tmp/rman.$$
      for i in $(cat /tmp/rman.$$)
      do
        i=$(echo $i | tr -d '"')
        echo "#### $i ####"
        echo "$i Bogus" | rman | validWords
      done

    One can find more keywords in the document here.

    Read the article

  • General Overview of Design Pattern Types

    Typically, most software engineering design patterns fall into one of three categories:

    - Creational Type Patterns
    - Structural Type Patterns
    - Behavioral Type Patterns

    The Creational pattern type is geared toward defining the preferred methods for creating new instances of objects. An example of this type is the Singleton Pattern. The Singleton Pattern can be used if an application only needs one instance of a class; in addition, this singular instance also needs to be accessible across the application. The benefit of the Singleton Pattern is that you control both instantiation and access using this pattern. The Structural pattern type is a way to describe the hierarchy of objects and classes so that they can be consolidated into a larger structure. An example of this type is the Façade Pattern. The Façade Pattern is used to define a base interface so that all other interfaces inherit from the parent interface. This can be used to simplify a number of similar object interactions into one single standard interface. The Behavioral pattern type deals with communication between objects. An example of this type is the State Design Pattern. The State Design Pattern enables objects to alter functionality and processing based on the internal state of the object at a given time.
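    As a concrete illustration of the Creational category, a minimal C# Singleton might look like the sketch below (the class name is illustrative):

      // Minimal Singleton sketch: one instance, created by the type initializer,
      // exposed through a single access point.
      public sealed class AppSettings
      {
          private static readonly AppSettings instance = new AppSettings();

          // Explicit static constructor keeps the type initializer lazy and thread-safe.
          static AppSettings() { }

          // Private constructor prevents outside instantiation.
          private AppSettings() { }

          public static AppSettings Instance
          {
              get { return instance; }
          }
      }

    Callers simply reference AppSettings.Instance, so both instantiation and access stay under the class's control, which is exactly the benefit described above.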

    Read the article

  • Installation procedure RAC One Node

    - by rene.kundersma
    Okay, In order to test RAC One Node, on my Oracle VM Laptop, I just: - installed Oracle VM 2.2 - Created two OEL 5.3 images The two images are fully prepared for Oracle 11gr2 Grid Infrastructure and 11gr2 RAC including four shared disks for ASM and private nics. After installation of the Oracle 11gr2 Grid Infrastructure and a "software only installation" of 11gr2 RAC, I installed patch 9004119 as you can see with the opatch lsinv output: This patch has the scripts required to administer RAC One Node, you will see them later. At the moment we have them available for Linux and Solaris. After installation of the patch, I created a RAC database with an instance on one node. Please note that the "Global Database Name" has to be the same as the SID prefix and should be less then or equal to 8 characters: When the database creation is done, first I create a service. This is because RAC One Node needs to be "initialized" each time you add a service: The service configuration details are: After creating the service, a script called raconeinit needs to run from $RDBMS_HOME/bin. This is a script supplied by the patch. I can imagine the next major patch set of 11gr2 has this scripts available by default. The script will configure the database to run on other nodes: After initialization, when you would run raconeinit again, you would see: So, now the configuration is ready and we are ready to run 'Omotion' and move the service around from one node to the other (yes, vm competitor: this is service is available during the migration, nice right ?) . Omotion is started by running Omotion. With Omotion -v you get verbose output: So, during the migration you will see the two instance active: And, after the migration, there is only one instance left on the new node:

    Read the article

  • links for 2010-04-15

    - by Bob Rhubart
    - e-Energy 2010 in Passau : Franz Haberhauer's Weblog
      Fresh off his participation in a panel at the 1st Int'l Conf. on Energy-Efficient Computing and Networking at the University of Passau, Germany, Franz Haberhauer offers some background on the CoolThreads/Chip Multithreading Technology and its role in greener datacenters. (tags: oracle sun datacenter Multithreading)
    - Oracle Enterprise Manager Grid Control: New Recommended Bundle Patch (APR 2010) - 9405592 for Patch Automation on EM 10.2.0.5
      Notes and a short FAQ on the Recommended Bundle Patch 9405592 for Oracle Enterprise Manager Grid Control. (tags: architect entarch grid oracle otn)
    - Vijaykumar Yenne: Customizing Spaces UI
      Vijaykumar Yenne explains how to leverage the Extend Spaces Project on the Oracle Technology Network to customize Oracle WebCenter site templates. (tags: enterprise2.0 oracle otn webcenter)
    - Knut Vatsendvik: Catch Me If You Can
      "Suppose you have a Proxy based Web Service using Oracle Service Bus. In a stage in the request pipeline, you are using a Publish action to publish the incoming message to a JMS queue using a Business Service. What if the outbound transport provider throws an exception (outside of your pipeline)? Is your pipeline able to catch the error with an error handler?" -- Knut Vatsendvik (tags: oracle otn soa esb weblogic architect)
    - Pete Wang: Coherence Configuration For Multiple HA SOA Domains
      Quick tips from Pete Wang on the Oracle Coherence settings necessary for creating multiple SOA HA domains. (tags: architect coherence oracle otn soa)
    - Warren Baird: New Walkthrough Capability in AutoVue 20
      Warren Baird describes new features in Oracle AutoVue 20 that allow viewing a 3D model of a building from the inside. (tags: architect entarch oracle otn)
    - Peter Wang: How to implement multi-source XSLT mapping in Oracle SOA Suite 11g BPEL
      In SOA 11g, you can create an XSLT mapper that uses multiple sources as the input. Pete Wang shows you how. (tags: oracle otn soa bpel architect)

    Read the article

  • Resolving “ssl handshake failure” error in PostgresQL

    - by Mitch
    I would like to connect to my Postgres 8.3 database using SSL from my XP client using OpenSSL. This works fine without SSL. When I try it with SSL (no client certificate), I get the error:

      error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure

    I have followed the instructions in the Postgres manual for SSL, including creating a self-signed certificate. In my pg_hba.conf there is a line:

      host dbname loginname 123.45.67.89/32 md5

    The version of OpenSSL on the server is 0.9.8g and on the client it is 0.9.8j. I'd appreciate any suggestions for tracking down the problem. Edit: the uncommented lines from postgresql.conf are:

      data_directory = '/var/ebs0/postgres/main'
      hba_file = '/etc/postgresql/8.3/main/pg_hba.conf'
      ident_file = '/etc/postgresql/8.3/main/pg_ident.conf'
      external_pid_file = '/var/run/postgresql/8.3-main.pid'
      listen_addresses = '*'
      port = 5432
      max_connections = 100
      unix_socket_directory = '/var/run/postgresql'
      ssl = true
      shared_buffers = 24MB
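    Two things worth checking here, sketched with the same database, user and address as above (the hostname in the psql test is illustrative): the pg_hba.conf record can be made SSL-specific, and the client can be forced to negotiate SSL explicitly.

      # pg_hba.conf: require SSL for this client instead of plain 'host'
      hostssl  dbname  loginname  123.45.67.89/32  md5

      # server side: with ssl = true, Postgres 8.3 expects server.crt and server.key
      # in the data directory, with the key readable only by the postgres user

      # client-side test (forces SSL on the connection)
      psql "host=db.example.com dbname=dbname user=loginname sslmode=require"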

    Read the article

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog, I’ll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install, and play along. But first, let’s describe some disaster recovery scenarios where it’s useful. About SQL Server disaster recovery Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one’s immune. What you can do is take all actions needed to be ready for a disaster and go through it with minimal data loss and downtime. Besides creating a recovery plan, it’s necessary to have a list of steps that will be executed when a disaster occurs and to test them before a disaster. This way, you’ll know that the plan is good and viable. Testing can also be used as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don’t meet recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require a new recovery plan testing, as these changes most probably induce a recovery plan changing and tweaking. What is a good SQL Server disaster recovery plan? A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential is to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases in the full recovery model. In other words, a simple, but efficient backup strategy would be a full database backup every night, a transaction log backup every hour, or every 15 minutes. The frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and differential database backup every night until next Friday when you will create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans. Why are transaction logs important? Transaction log backups contain transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll back or forward the database to a point in time. In SQL Server disaster recovery situations, transaction logs enable to repair a SQL Server database and bring it to the state before the disaster. Be aware that even with regular backups, there will be some data missing. These are the transactions made between the last transaction log backup and the time of the disaster. In some situations, to repair your SQL Server database it’s not necessary to re-create the database from its last backup. The database might still be online and all you need to do is roll back several transactions, such as wrong update, insert, or delete. The restore to a point in time feature is available in SQL Server, but for large databases, it is very time-consuming, as SQL Server first restores a full database backup, and then restores transaction log backups, one after another, up to the recovery point. 
During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help. For optimal recovery, besides having a database in the full recovery model, it’s important that you haven’t manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log. All you have to do is read and replay them. How to read a SQL Server transaction log? SQL Server doesn’t provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge, return a large number of not easy to read and understand columns, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it’s necessary to match it to a value in the MDF file. When you finally read the transactions executed, you have to create a script for it. How to easily repair a SQL database? The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files, but also automatically create scripts for the read transactions. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost, but the LDF file is available. First, restore the last full database backup on SQL Server using SQL Server Management Studio. I’ll name it Restored_AW2014. Then, start ApexSQL Log It will automatically detect all local servers. If not, click the icon right to the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next. ApexSQL Log will show the online transaction log file. Now, click Add and add all transaction log backups created after the full database backup I used to restore the database. In case you don’t have transaction log backups, but the LDF file hasn’t been lost during the SQL Server disaster, add it using Add.   To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That’s why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of various filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transactions, operation type (e.g. to read only data inserts), table name, SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I’ll check all filters and make sure that all transactions are included. In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read so if there were any schema changes, such as a new function created, or an existing table modified, they will be ignored and database will not be properly repaired. The data repair for modified tables will fail. In the Tables tab, I’ll make sure all tables are selected. I will uncheck the Show operations on dropped tables option, to reduce the number of transactions. Click Next. ApexSQL Log offers three options. 
Select Open results in grid, to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and then create a redo script by clicking the Create redo script icon in the menu.   For a large number of transactions and in a critical situation, when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be directly scripted into a redo file, without showing them in the grid first. Select Generate reconstruction (REDO) script , change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary: The third option will create a command line statement for a batch file that you can use to schedule execution, which is not really applicable when you repair a SQL database, but quite useful in daily auditing scenarios. To repair your SQL database, all you have to do is execute the generated redo script using an integrated developer environment tool such as SQL Server Management Studio or any other, against the restored database. You can find more information about how to read SQL Server transaction logs and repair a SQL database on ApexSQL Solution center. There are solutions for various situations when data needs to be recovered, restored, or transactions rolled back. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
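    As a minimal T-SQL sketch of the backup schedule and the classic point-in-time restore described above (database name, file paths and the STOPAT time are illustrative):

      -- nightly full backup plus frequent log backups
      BACKUP DATABASE [AW2014] TO DISK = N'D:\Backup\AW2014_full.bak' WITH INIT, CHECKSUM;
      BACKUP LOG [AW2014] TO DISK = N'D:\Backup\AW2014_log_0915.trn' WITH CHECKSUM;

      -- restore the full backup, then roll the log forward to a point in time
      RESTORE DATABASE [AW2014] FROM DISK = N'D:\Backup\AW2014_full.bak' WITH NORECOVERY, REPLACE;
      RESTORE LOG [AW2014] FROM DISK = N'D:\Backup\AW2014_log_0915.trn'
          WITH STOPAT = '2014-05-01 09:05:00', RECOVERY;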

    Read the article

  • automysqlbackup not working weirdly on shared host

    - by KPL
    I'm on HostGator and I have non-root SSH access. I manually set up automysqlbackup in a /home/user/automysqlbackup folder, created a file called runbackup in the same folder, and chmod +x'ed it. The content:

      /home/user/automysqlbackup/automysqlbackup /home/user/automysqlbackup/myserver.conf

    Now, when I run ./runbackup from the shell, no email is sent. I've set up a daily cron job. The crontab line reads:

      24 06 * * * /home/user/automysqlbackup/runbackup

    When the crontab runs, I do get an email, but the subject is "WARNING: Error Reported - MySQL Backup Log and SQL Files for localhost - 012-03-19_06h24m". The email does have an attachment, but it just has the SQL for creating tables - no data in any of the rows at all. I don't know what I'm doing wrong and this is freaking me out! I tried writing a custom script as well, using this guide, but then mutt doesn't send email. The OS used by HostGator is CentOS.
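    One way to see why the cron run misbehaves is to temporarily capture everything it prints; a sketch of the modified crontab line (the log path is illustrative). Since cron's PATH is minimal on most hosts, it may also help to use full paths to mysqldump and mutt in the automysqlbackup config:

      24 06 * * * /home/user/automysqlbackup/runbackup >> /home/user/automysqlbackup/cron.log 2>&1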

    Read the article

  • Do all Mac OS X applications require Admin permissions to 'install'?

    - by Andy
    I'm new to the whole Mac OS X operating system. I'm trying to learn, and I've got myself a MacBook running Mac OS X 10.7.3. I've created a test user that cannot administrate so that I can test out permissions, and I've found that I cannot do anything in the Applications folder - which includes 'installing' applications (even the drag 'n' drop ones) and creating folders - without entering an admin name and password. However, I was under the impression that this wasn't the case and that you only needed admin permissions to write to somewhere like Preferences. Can somebody please clarify why it asks for admin credentials when I try to drag 'n' drop applications into the Applications folder?

    Read the article
