
  • Secure wipe of a hard drive using WinPE.

    - by Derek Meier
The wiping of a hard drive is typically seen as fairly trivial. There are tons of applications out there that will do it for you: point, click, Global Thermonuclear War. However, these applications are typically expensive or unreliable. Plus, if you have a laptop or lack a secondary computer to put the hard drive into, how on earth do you wipe it quickly and easily while still conforming to a 7-pass rule (meaning every possible bit on the hard drive is set to 0 and then to 1 seven times in a row)? Yes, one pass should be enough, as turning every bit from a 1 to a 0 wipes the data from existence. But we're dealing with tinfoil-hat-wearing types here, people. DoD standards dictate at least 3 passes, and typically 7 is the preferred amount. I'm not going to argue about data recovery. I have been told to use 7 passes, and so I will. So say we all!

Quite some time ago I used to make a BartPE XP-based boot CD for the original purpose of securely wiping data. I loved BartPE and integrated so many plugins into my builds that I could do pretty much anything directly from CD: reset passwords, uninstall security updates, wipe drives, chkdsk, remove spyware, install Windows, etc. However, with the newer multi-core systems and new chipsets coming out from vendors, I found that BartPE was rather difficult to keep up to date. I have since switched to WinPE 3.0 (Windows Preinstallation Environment): http://technet.microsoft.com/en-us/library/cc748933(WS.10).aspx. It is fairly simple to create your own CD, and I have made a few helpful scripts to easily integrate drivers and rebuild the ISO file for you. I'll cover making your own boot CD utilizing WinPE 3.0 in a later post; I can talk about WinPE forever and need to collect my thoughts! My wife loves talking about WinPE almost as much as talking about Doctor Who. Wait, did I say loves? Hmmm, I may have meant loathes.

The topic at hand? Right. Wiping a drive! I must have drunk too much coffee this morning. I like to use a simple batch script that calls a combination of diskpart.exe from Microsoft and sdelete.exe, created by our friend Mark Russinovich: http://technet.microsoft.com/en-us/sysinternals/bb897443.aspx. All of the following files are located in the same directory on my WinPE boot CD. Here are the contents of wipe_me.bat, script.txt and sdelete.reg.

wipe_me.bat:

    @echo off
    echo.
    echo     I will completely wipe the local hard drives using
    echo     7 individual wipes. The data will NOT
    echo     be recoverable.  I will begin after you pause
    echo.
    echo Preparing to partition and format disk.
    diskpart.exe /s "script.txt"
    REM I was annoyed by not having a completely automated script, and sdelete wants
    REM you to accept the license agreement, so I added a registry file to skip that.
    regedit /S sdelete.reg
    rem sdelete options selected are: -p (passes) -c (zero free space)
    rem -s (recurse through subdirectories, if any) -z (clean free space) [drive letter]
    sdelete.exe -p 7 -c -s -z c:
    echo.
    echo Pass seven complete.
    echo.
    echo Wiping complete.
    pause
    exit

script.txt:

    list disk
    select disk 0
    clean
    create partition primary
    select partition 1
    active
    format FS=NTFS LABEL="New Volume" QUICK
    assign letter=c
    exit

Notes: this script assumes one local hard drive; change the script as you see fit for your environment. The clean command will overwrite the master boot record and any hidden sector information, so be careful!

sdelete.reg:

    Windows Registry Editor Version 5.00

    [HKEY_CURRENT_USER\Software\Sysinternals\SDelete]
    "EulaAccepted"=dword:00000001

With a combination of WinPE, sdelete.exe and your friendly neighborhood text editor you can begin wiping drives as quickly and easily as possible! I hope this helps; I get asked this a lot in my line of work. Best of luck, Derek
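If the machine has more than one local disk, the same pattern repeats per disk. A minimal sketch, assuming a second disk shows up as disk 1 and gets letter d: (the script name script2.txt and the disk number are hypothetical; confirm them against diskpart's list disk output first):

    rem wipe_me2.bat -- hypothetical variant for a second local disk
    rem script2.txt is identical to script.txt except:
    rem   select disk 1 ... assign letter=d
    diskpart.exe /s "script2.txt"
    regedit /S sdelete.reg
    rem same sdelete switches as above, pointed at the second drive letter
    sdelete.exe -p 7 -c -s -z d: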


  • Using Web Services from an XNA 4.0 WP7 Game

    - by Michael Cummings
Now that the Windows Phone 7 development tools have been out for a while, let's talk about how you can use them. Windows Phone 7 (WP7) has two application types that you can create, either Silverlight or XNA, and you can't really mix the two together. The development environment for WP7 is a special edition of Visual Studio 2010 called Visual Studio 2010 Express for Windows Phone. This edition is installed with the WP7 tools, even if you have a full edition of VS2010 already installed. While you can use your full edition of VS2010 to do WP7 development, this astute developer has noticed that there are a few things you can only do in the Express for Windows Phone edition. So let's start by discussing WP7 networking. On the WP7 platform the only networking available is through web services using WCF or, if you're really masochistic, the WebClient to do HTTP. In Silverlight, it's fairly easy to wire up a WCF proxy to call a web service and get some data. In XNA projects, not so much.

Create the WCF Service
First, we'll create the service that will return some information that we need in our game. Open Visual Studio 2010 and create a new WCF Web Service project. We'll use the default implementation, as we only need to see how to use a service; we are not interested in creating a really cool service at this point. However, you may want to follow the instructions in the comments of Service1.svc.cs to change the name to something better; I used DataService, and IDataService for the interface. You should now be able to run the project, and the WCF Test Client will load and properly enumerate your service. At this point we have a functional service that can be consumed by our XNA game.

Consume the WCF Service
Open Visual Studio 2010 Express for Windows Phone and create a new XNA Game Studio 4.0 Windows Phone Game project. If you try to add a service reference to this project, you'll notice that the option is not available. However, if you add a Silverlight application to your solution, you'll notice that you can create a service reference there. So, using the Silverlight project, create the service reference. Unfortunately you can't reference the Silverlight project from the XNA Game project, so using Windows Explorer copy the Service References folder from the Silverlight project directory to the XNA Game project directory, then add the folder to your XNA Game project. You'll need to set the Build Action property to None for all the files except Reference.cs, which should be Build. Truly, we only need Reference.cs, but I find it easier to copy the whole folder. If you try to compile at this point, you'll notice that we are missing a couple of references: System.Runtime.Serialization, System.Net and System.ServiceModel. Add these to the XNA Game project and you should build successfully. You'll also need to copy the ServiceReferences.ClientConfig file and add it to your project. The WCF infrastructure looks for this file and will complain if it can't find it. Set its Copy to Output Directory property to Copy if Newer.

We now need to add the code to call the service and display the results on the screen. Go ahead and add a SpriteFont resource to the Content project and load it in the Game project. Nothing here has changed much from 3.1, other than your Content project now sitting under the Solution node rather than the Project node. While you're at it, add a string field to store the result of the service call, and initialize it to string.Empty. Then, in the Draw method, write the string out to the screen, but only if it does not equal string.Empty. Now, to wrap this up, let's create a new field of the type DataServiceClient. In the Initialize method, create a new instance of this type using its default constructor; then in LoadContent we can call the service. Since we can only call the GetData method of our service asynchronously, we need to set up a Completed event handler first. Thankfully, Visual Studio helps out a lot there: just create, using the Tab key, whatever VS says to. In the GetDataCompleted event handler, assign the service result (e.Result) to your string field. If you run your game, you should get something like this: Enjoy!
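To make the sequence above concrete, here is a hedged sketch of the Game1.cs wiring, assuming the default Game1 template fields (spriteBatch, Content) and the default WCF template operation GetData(int). The generated proxy names (DataServiceClient, GetDataAsync, GetDataCompleted) follow the usual "Add Service Reference" conventions and may differ in your project; "GameFont" is a hypothetical asset name.

    DataServiceClient client;            // proxy copied over from the Silverlight project
    SpriteFont font;                     // SpriteFont added to the Content project
    string serviceResult = string.Empty; // holds the service call result

    protected override void Initialize()
    {
        client = new DataServiceClient(); // default constructor, as described above
        base.Initialize();
    }

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        font = Content.Load<SpriteFont>("GameFont");

        // GetData can only be called asynchronously on WP7, so hook the
        // Completed event before starting the call.
        client.GetDataCompleted += (s, e) => serviceResult = e.Result;
        client.GetDataAsync(42);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);
        if (serviceResult != string.Empty)
        {
            spriteBatch.Begin();
            spriteBatch.DrawString(font, serviceResult, Vector2.Zero, Color.White);
            spriteBatch.End();
        }
        base.Draw(gameTime);
    }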


  • Replication Services in a BI environment

    - by jorg
In this blog post I will explain the principles of SQL Server Replication Services without too much detail, and I will take a look at the BI capabilities that Replication Services could offer in my opinion. SQL Server Replication Services provides tools to copy and distribute database objects from one database system to another and maintain consistency afterwards. These tools basically copy or synchronize data with little or no transformation; they do not offer capabilities to transform data or apply business rules the way ETL tools do. The only "transformation" Replication Services offers is filtering records or columns out of your data set. You can achieve this by selecting the desired columns of a table and/or by using a WHERE clause like this:

    SELECT <published_columns> FROM [Table] WHERE [DateTime] >= getdate() - 60

There are three types of replication:

Transactional Replication
This type replicates data on a transactional level. The Log Reader Agent reads directly from the transaction log of the source database (Publisher) and clones the transactions to the Distribution Database (Distributor); this database acts as a queue for the destination database (Subscriber). Next, the Distribution Agent moves the cloned transactions stored in the Distribution Database to the Subscriber. The Distribution Agent can run either at scheduled intervals or continuously, which offers near real-time replication of data! So, for example, when a user executes an UPDATE statement on one or multiple records in the publisher database, this transaction (not the data itself) is copied to the distribution database and is then also executed on the subscriber. When the Distribution Agent is set to run continuously this process runs all the time and transactions on the publisher are replicated in small batches (near real-time); when it runs at scheduled intervals it executes larger batches of transactions, but the idea is the same.

Snapshot Replication
This type of replication makes an initial copy of the database objects that need to be replicated; this includes the schemas and the data itself. All types of replication must start with a snapshot of the database objects from the Publisher to initialize the Subscriber. Transactional replication needs an initial snapshot of the replicated publisher tables/objects to run its cloned transactions against and maintain consistency. The Snapshot Agent copies the schemas of the tables that will be replicated to files stored in the Snapshot Folder, which is a normal folder on the file system. When all the schemas are ready, the data itself is copied from the Publisher to the snapshot folder. The snapshot is generated as a set of bulk copy program (BCP) files. Next, the Distribution Agent moves the snapshot to the Subscriber; if necessary it applies schema changes first and copies the data itself afterwards. The application of schema changes to the Subscriber is a nice feature: when you change the schema of the Publisher with, for example, an ALTER TABLE statement, that change is propagated by default to the Subscriber(s).

Merge Replication
Merge replication is typically used in server-to-client environments, for example when subscribers need to receive data, make changes offline, and later synchronize changes with the Publisher and other Subscribers, as with mobile devices that need to synchronize once in a while. Because I don't really see BI capabilities here, I will not explain this type of replication any further.
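To make the filtering above concrete, here is a hedged T-SQL sketch (not from the original post) of how a horizontally filtered article could be added to a transactional publication with the standard replication stored procedures; the database, publication and table names are hypothetical:

    -- enable the database for publishing (hypothetical names throughout)
    EXEC sp_replicationdboption
        @dbname  = N'SourceDB',
        @optname = N'publish',
        @value   = N'true';

    -- continuously running transactional publication (near real-time)
    EXEC sp_addpublication
        @publication = N'BIReporting',
        @repl_freq   = N'continuous',
        @status      = N'active';

    -- publish the table, replicating only the last 60 days of data,
    -- matching the WHERE clause shown earlier
    EXEC sp_addarticle
        @publication   = N'BIReporting',
        @article       = N'MyTable',
        @source_object = N'MyTable',
        @filter_clause = N'[DateTime] >= getdate() - 60';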
Replication Services in a BI environment
Transactional Replication can be very useful in BI environments. In my opinion you never want users running custom (SSRS) reports or PowerPivot solutions directly against your production database; it can slow down the system and cause deadlocks in the database, which can cause errors. Transactional Replication can offer a read-only, near real-time database for reporting purposes with minimal overhead on the source system. Snapshot Replication can also be useful in BI environments: if you don't need a near real-time copy of the database, you can choose this form of replication. Besides being an alternative to Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that serves as the source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records, and it can even apply schema changes automatically, so I think it offers enough features here. I don't know how the performance will be, or whether it really works as well for this purpose as I expect, but I want to try it out soon!


  • SharePoint 2010 Hosting :: How to Customize SharePoint 2010 Global Navigation

    - by mbridge
Requirements
- SharePoint Foundation or SharePoint Server 2010 site
- SharePoint Designer 2010

Steps
1. The first step in my process was to download a starter master page from CodePlex: http://startermasterpages.codeplex.com/.
2. Once you have downloaded the starter master page, open your SharePoint site in SharePoint Designer 2010 and, on the left in the "Site Objects" area, click on the folder "All Files" and drill down to catalogs >> masterpages. Once you are in the masterpage folder, copy and paste _starter.master into this folder.
3. The first step in the customization process is to create your custom style sheet. To create it, click on the "All Files" folder and click on "Style Library." Right click in the Style Library section and choose Style Sheet. Once the style sheet is created, rename it style.css. Now open the style sheet you created in SharePoint Designer.
4. In this next step you will copy the SharePoint core styles for the global navigation into your custom style sheet. Paste the CSS below into the style sheet and save the file.

    .s4-tn {
        padding: 0px;
        margin: 0px;
    }
    .s4-tn ul.static {
        white-space: nowrap;
    }
    .s4-tn li.static > .menu-item {
        /* [ReplaceColor(themeColor:"Dark2")] */
        color: #3b4f65;
        white-space: nowrap;
        border: 1px solid transparent;
        padding: 4px 10px;
        display: inline-block;
        height: 15px;
        vertical-align: middle;
    }
    .s4-tn ul.dynamic {
        /* [ReplaceColor(themeColor:"Light2")] */
        background-color: white;
        /* [ReplaceColor(themeColor:"Dark2-Lighter")] */
        border: 1px solid #D9D9D9;
    }
    .s4-tn li.dynamic > .menu-item {
        display: block;
        padding: 3px 10px;
        white-space: nowrap;
        font-weight: normal;
    }
    .s4-tn li.dynamic > a:hover {
        font-weight: normal;
        /* [ReplaceColor(themeColor:"Light2-Lighter")] */
        background-color: #D9D9D9;
    }
    .s4-tn li.static > a:hover {
        /* [ReplaceColor(themeColor:"Accent1")] */
        color: #44aff6;
        text-decoration: underline;
    }

5. Once you have created the style sheet, go back to the masterpage folder, open the _starter.master file, and in the Customization category click Edit File.
6. Next, when the file opens, make sure you view it in split view. Now you are going to search for the style sheet reference in the code. Make sure you are scrolled to the top of the code section and press Ctrl+F on the keyboard. This will pop up the Find and Replace tool. In the "Find what" field, paste the existing style sheet reference and click Find Next.
7. Now, in the code, replace that reference with one pointing to your custom style sheet (style.css). You have now referenced your custom style sheet in your master page.
8. The next step is to locate your global navigation control. Make sure you are scrolled to the top of the code section and press Ctrl+F on the keyboard to bring up the Find and Replace tool. In the "Find what" field, paste ID="TopNavigationMenuV4" and click Find Next. Once you find ID="TopNavigationMenuV4", you should see the following block of code, which is the global navigation control:

    ID="TopNavigationMenuV4"
    Runat="server"
    EnableViewState="false"
    DataSourceID="topSiteMap"
    AccessKey=""
    UseSimpleRendering="true"
    UseSeparateCss="false"
    Orientation="Horizontal"
    StaticDisplayLevels="1"
    MaximumDynamicDisplayLevels="1"
    SkipLinkText=""
    CssClass="s4-tn"

9. In the global navigation code above you should see CssClass="s4-tn". As an additional step you can replace "s4-tn" with your own custom name, like CssClass="MyNav". If you change the name of the CSS class, make sure you update your custom style sheet with the new name, for example:

    .MyNav {
        padding: 0px;
        margin: 0px;
    }
    .MyNav ul.static {
        white-space: nowrap;
    }

10. At this point you are ready to brand your global navigation. The next step is to modify your style.css with your customizations to the default SharePoint styles. Have fun styling, and make sure you save your work often. Hope it helps!!


  • Implement Tree/Details With Taskflow Regions Using EJB

    - by Deepak Siddappa
This article describes how to display a tree/details layout using task flow regions.

Use Case Description
Let us take a scenario where we need to display tree/details: the left region contains a category hierarchy with items listed in a tree structure (e.g. Region - Countries - Locations - Departments in tree format) and the right region contains the Employees list. In detail, the user drills down through categories using the tree until Employees are listed; clicking a tree node name displays the Employee list related to that particular tree node in the adjacent pane.

Implementation Steps
The script for creating the tables and inserting the data required for this application: CreateSchema.sql
1. Create a Java EE Web Application with Entities based on the Regions, Countries, Locations, Departments and Employees tables.
2. Create a Stateless Session Bean and a data control for the Stateless Session Bean. Add the code below to the session bean, expose the method in the local/remote interface, and generate a data control for it. Note: in the code below, "em" is an EntityManager.

    public List<Employees> empFilteredByTreeNode(String treeNodeType, String paramValue) {
        String queryString = null;
        try {
            if (treeNodeType == "null") {
                queryString = "select * from Employees emp ORDER BY emp.employee_id ASC";
            } else if (Pattern.matches("[a-zA-Z]+[_]+[a-zA-Z]+[_]+[[0-9]+]+", treeNodeType)) {
                queryString = "select * from employees emp INNER JOIN departments dept\n" +
                    "ON emp.department_id = dept.department_id JOIN locations loc\n" +
                    "ON dept.location_id = loc.location_id JOIN countries cont\n" +
                    "ON loc.country_id = cont.country_id JOIN regions reg\n" +
                    "ON cont.region_id = reg.region_id and reg.region_name = '" +
                    paramValue + "' ORDER BY emp.employee_id ASC";
            } else if (treeNodeType.contains("regionsFindAll_bc_countriesList_1")) {
                queryString = "select * from employees emp INNER JOIN departments dept \n" +
                    "ON emp.department_id = dept.department_id JOIN locations loc \n" +
                    "ON dept.location_id = loc.location_id JOIN countries cont \n" +
                    "ON loc.country_id = cont.country_id and cont.country_name = '" +
                    paramValue + "' ORDER BY emp.employee_id ASC";
            } else if (treeNodeType.contains("regionsFindAll_bc_locationsList_1")) {
                queryString = "select * from employees emp INNER JOIN departments dept " +
                    "ON emp.department_id = dept.department_id JOIN locations loc " +
                    "ON dept.location_id = loc.location_id and loc.city = '" +
                    paramValue + "' ORDER BY emp.employee_id ASC";
            } else if (treeNodeType.trim().contains("regionsFindAll_bc_departmentsList_1")) {
                queryString = "select * from Employees emp INNER JOIN Departments dept " +
                    "ON emp.DEPARTMENT_ID = dept.DEPARTMENT_ID and dept.DEPARTMENT_NAME = '" +
                    paramValue + "'";
            }
        } catch (NullPointerException e) {
            System.out.println(e.getMessage());
        }
        return em.createNativeQuery(queryString, Employees.class).getResultList();
    }

3. In the ViewController project, create two ADF task flows with page fragments and name them FirstTaskflow and SecondTaskflow respectively.
4. Open FirstTaskflow; from the component palette drop a view (page fragment) and name it TreeList.jsff.
5. Open SecondTaskflow; from the component palette drop a view (page fragment), name it EmpList.jsff, and create two parameters in its overview parameters tab as shown in the image below.
6. Open TreeList.jsff; from the data control palette drop regionsFindAll -> Tree as an ADF Tree. In the Edit Tree Binding dialog, for Tree Level Rules select the display attributes as follows:
   model.Regions - regionName
   model.Countries - countryName
   model.Locations - city
   model.Departments - departmentName
7. In the structure panel, click on af:Tree - t1, select selectionListener and edit the property. Create a "TreeBean" managed bean with scope "session" as shown in the image below. Create a new method getTreeNodeSelectedValue and click OK.
8. Open the TreeBean managed bean and add the code below:

    private String treeNodeType;
    private String paramValue;

    public void getTreeNodeSelectedValue(SelectionEvent selectionEvent) {
        RichTree tree = (RichTree) selectionEvent.getSource();
        RowKeySet addedSet = selectionEvent.getAddedSet();
        Iterator i = addedSet.iterator();
        TreeModel model = (TreeModel) tree.getValue();
        model.setRowKey(i.next());
        JUCtrlHierNodeBinding node = (JUCtrlHierNodeBinding) tree.getRowData();
        // oracle.jbo.Row
        Row rw = node.getRow();
        Object selectedTreeNode = node.getAttribute(0);
        Object treeListType = node.getBindings();
        String treeNodeType = treeListType.toString();
        this.setParamValue(selectedTreeNode.toString());
        this.setTreeNodeType(treeNodeType);
    }

    public void setTreeNodeType(String treeNodeType) {
        this.treeNodeType = treeNodeType;
    }

    public String getTreeNodeType() {
        return treeNodeType;
    }

    public void setParamValue(String paramValue) {
        this.paramValue = paramValue;
    }

    public String getParamValue() {
        return paramValue;
    }

9. Open EmpList.jsff; from the data control palette drop empFilteredByTreeNode -> Employees -> Table as an ADF Read-only Table. After selecting the Employees result set, in the Edit Action Binding dialog window pass the pageFlowScope parameters as shown in the image below.
10. In the EmpList.jsff page, click the Bindings tab, click Create Executable Binding, select Invoke Action and follow the image below. Edit the executeEmpFiltered invoke action properties and set Refresh to ifNeeded, so the method is executed whenever the page needs it.
11. Create a Main.jspx page with the Oracle Three Column Layout page template. Drop FirstTaskflow as a region in the start facet and SecondTaskflow as a region in the center facet; in the Edit Task Flow Binding dialog window pass the input parameters as shown in the image below.
12. Run Main.jspx: the tree is displayed in the left region and the employee details in the right region. Click on Americas in the tree and all employees related to Americas are displayed. Click Americas -> United States of America -> South San Francisco -> Accounting and only employees belonging to the Accounting department are displayed.
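As an aside that is not part of the original article: the concatenated native queries above also work with JPA positional parameters, which avoids quoting problems in values such as city or department names. A hedged sketch of the department branch rewritten that way:

    // Hypothetical variant of the department filter using a positional
    // parameter instead of string concatenation.
    public List<Employees> empFilteredByDepartment(String departmentName) {
        String queryString =
            "select emp.* from Employees emp INNER JOIN Departments dept " +
            "ON emp.DEPARTMENT_ID = dept.DEPARTMENT_ID " +
            "and dept.DEPARTMENT_NAME = ?1";
        return em.createNativeQuery(queryString, Employees.class)
                 .setParameter(1, departmentName)
                 .getResultList();
    }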


  • Norton Ghost EBAB03F1: The specified network name is no longer available.

    - by Breck Carter
After about 15 minutes, a Norton Ghost 14 backup fails with Error EBAB03F1: The specified network name is no longer available. The source computer is a P4 laptop running Windows XP SP3. The target computer is a Core2 Quad desktop running Windows Vista Ultimate 64-bit. It does not help to disable Norton 360 on the source computer or Norton Antivirus 2008 on the target computer. The Event Viewer consistently shows the same two VSS-related errors after Norton Ghost starts but before it fails. It makes no difference if the VSS service is started or stopped. The VSS errors do not appear elsewhere in the event log, only after Ghost starts. The MSS event messages, however, are quite common, appearing throughout the log, and they may have nothing to do with the problem.

Here is the Norton Ghost error display:

    - Errors exist.
      - Unable to write to file.
        - Error EBAB03F1: The specified network name is no longer available.
        - Unable to set file size.
          - Error EBAB03F1: The specified network name is no longer available.
          - Unable to write to file.
            - Error EBAB03F1: The specified network name is no longer available.
            - Unable to set file size.
              - Error EBAB03F1: The specified network name is no longer available.

Here are the source computer events, with the final error at the top and the "Ghost Starting" message at the bottom:

    =====
    Event Type: Error
    Event Source: Norton Ghost
    Event Category: High Priority
    Event ID: 100
    Date: 11/09/2009
    Time: 9:40:26 AM
    User: N/A
    Computer: PAVILION2
    Description:
    Error EC8F17B7: Cannot create recovery points for job: Drive Backup of (C:\) (3).
    Error E7D1001F: Unable to write to file.
    Error EBAB03F1: The specified network name is no longer available.
    Error E7D10046: Unable to set file size.
    Error EBAB03F1: The specified network name is no longer available.
    Error E7D1001F: Unable to write to file.
    Error EBAB03F1: The specified network name is no longer available.
    Error E7D10046: Unable to set file size.
    Error EBAB03F1: The specified network name is no longer available.
    Details: 0xEBAB0005
    Source: Norton Ghost

    =====
    Event Type: Information
    Event Source: MSSQL$SQLEXPRESS
    Event Category: Server
    Event ID: 3421
    Date: 11/09/2009
    Time: 9:34:06 AM
    User: NT AUTHORITY\NETWORK SERVICE
    Computer: PAVILION2
    Description: Recovery completed for database ReportServer$SQLEXPRESSTempDB (database ID 6) in 1 second(s) (analysis 205 ms, redo 0 ms, undo 376 ms.) This is an informational message only. No user action is required. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
    Data:
    0000: 5d 0d 00 00 0a 00 00 00   ].......
    0008: 15 00 00 00 50 00 41 00   ....P.A.
    0010: 56 00 49 00 4c 00 49 00   V.I.L.I.
    0018: 4f 00 4e 00 32 00 5c 00   O.N.2.\.
    0020: 53 00 51 00 4c 00 45 00   S.Q.L.E.
    0028: 58 00 50 00 52 00 45 00   X.P.R.E.
    0030: 53 00 53 00 00 00 18 00   S.S.....
    0038: 00 00 52 00 65 00 70 00   ..R.e.p.
    0040: 6f 00 72 00 74 00 53 00   o.r.t.S.
    0048: 65 00 72 00 76 00 65 00   e.r.v.e.
    0050: 72 00 24 00 53 00 51 00   r.$.S.Q.
    0058: 4c 00 45 00 58 00 50 00   L.E.X.P.
    0060: 52 00 45 00 53 00 53 00   R.E.S.S.
    0068: 00 00                     ..

    =====
    Event Type: Information
    Event Source: MSSQL$SQLEXPRESS
    Event Category: Server
    Event ID: 17137
    Date: 11/09/2009
    Time: 9:34:02 AM
    User: NT AUTHORITY\NETWORK SERVICE
    Computer: PAVILION2
    Description: Starting up database 'ReportServer$SQLEXPRESSTempDB'. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
    Data:
    0000: f1 42 00 00 0a 00 00 00   ñB......
    0008: 15 00 00 00 50 00 41 00   ....P.A.
    0010: 56 00 49 00 4c 00 49 00   V.I.L.I.
    0018: 4f 00 4e 00 32 00 5c 00   O.N.2.\.
    0020: 53 00 51 00 4c 00 45 00   S.Q.L.E.
    0028: 58 00 50 00 52 00 45 00   X.P.R.E.
    0030: 53 00 53 00 00 00 18 00   S.S.....
    0038: 00 00 52 00 65 00 70 00   ..R.e.p.
    0040: 6f 00 72 00 74 00 53 00   o.r.t.S.
    0048: 65 00 72 00 76 00 65 00   e.r.v.e.
    0050: 72 00 24 00 53 00 51 00   r.$.S.Q.
    0058: 4c 00 45 00 58 00 50 00   L.E.X.P.
    0060: 52 00 45 00 53 00 53 00   R.E.S.S.
    0068: 00 00                     ..

    =====
    Event Type: Error
    Event Source: VSS
    Event Category: None
    Event ID: 5013
    Date: 11/09/2009
    Time: 9:28:32 AM
    User: N/A
    Computer: PAVILION2
    Description: Volume Shadow Copy Service error: Shadow Copy writer ContentIndexingService called routine RegQueryValueExW which failed with status 0x80070002 (converted to 0x800423f4). For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
    Data:
    0000: 57 53 48 43 4f 4d 4e 43   WSHCOMNC
    0008: 32 32 39 32 00 00 00 00   2292....
    0010: 57 53 48 43 49 43 00 00   WSHCIC..
    0018: 32 38 37 00 00 00 00 00   287.....

    =====
    Event Type: Error
    Event Source: VSS
    Event Category: None
    Event ID: 5013
    Date: 11/09/2009
    Time: 9:28:32 AM
    User: N/A
    Computer: PAVILION2
    Description: Volume Shadow Copy Service error: Shadow Copy writer ContentIndexingService called routine RegQueryValueExW which failed with status 0x80070002 (converted to 0x800423f4). For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
    Data:
    0000: 57 53 48 43 4f 4d 4e 43   WSHCOMNC
    0008: 32 32 39 32 00 00 00 00   2292....
    0010: 57 53 48 43 49 43 00 00   WSHCIC..
    0018: 32 38 37 00 00 00 00 00   287.....

    =====
    Event Type: Error
    Event Source: VSS
    Event Category: None
    Event ID: 12302
    Date: 11/09/2009
    Time: 9:28:32 AM
    User: N/A
    Computer: PAVILION2
    Description: Volume Shadow Copy Service error: An internal inconsistency was detected in trying to contact shadow copy service writers. Please check to see that the Event Service and Volume Shadow Copy Service are operating properly. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
    Data:
    0000: 42 55 45 43 58 4d 4c 43   BUECXMLC
    0008: 33 36 33 37 00 00 00 00   3637....
    0010: 42 55 45 43 58 4d 4c 43   BUECXMLC
    0018: 33 36 30 37 00 00 00 00   3607....

    =====
    Event Type: Information
    Event Source: Norton Ghost
    Event Category: High Priority
    Event ID: 100
    Date: 11/09/2009
    Time: 9:27:57 AM
    User: N/A
    Computer: PAVILION2
    Description: Info 6C8F1F63: The drive-based backup job, Drive Backup of (C:\) (3), has been started manually.
    Details:
    Source: Norton Ghost


  • Node.js Adventure - When Node Flying in Wind

    - by Shaun
In the first post of this series I mentioned some popular modules in the community, such as underscore, async, etc. I also listed a module named "Wind (zh-CN)", which was created by a friend of mine, Jeff Zhao (zh-CN). Now I would like to use a separate post to introduce this module, since I feel it brings a new async programming style not only to Node.js but to the JavaScript world. If you know or have heard about the new feature in C# 5.0 called "async and await", or you have learnt F#, you will find that "Wind" brings a similar async programming experience to JavaScript. By using "Wind" we can write async code that looks like sync code. The callbacks, async states and exceptions are handled by "Wind" automatically and transparently.

What's the Problem: Dense "Callback" Phobia

Let's first go back to my second post in this series. As I mentioned in that post, when we want to read some records from SQL Server we need to open the database connection and then execute the query. In Node.js all IO operations are designed in the async callback pattern, which means that when the operation is done it invokes a function taken from the last parameter. For example, the database connection opening code would be like this.

    sql.open(connectionString, function(error, conn) {
        if(error) {
            // some error handling code
        }
        else {
            // connection opened successfully
        }
    });

And then if we need to query the database, the code would be like this, nested in the previous function.

    sql.open(connectionString, function(error, conn) {
        if(error) {
            // some error handling code
        }
        else {
            // connection opened successfully
            conn.queryRaw(command, function(error, results) {
                if(error) {
                    // failed to execute this command
                }
                else {
                    // records retrieved successfully
                }
            });
        }
    });

Assuming we need to copy some data from this database to another, we then need to open another connection and execute the command within the function under the query function.

    sql.open(connectionString, function(error, conn) {
        if(error) {
            // some error handling code
        }
        else {
            // connection opened successfully
            conn.queryRaw(command, function(error, results) {
                if(error) {
                    // failed to execute this command
                }
                else {
                    // records retrieved successfully
                    target.open(targetConnectionString, function(error, t_conn) {
                        if(error) {
                            // connect failed
                        }
                        else {
                            t_conn.queryRaw(copy_command, function(error, results) {
                                if(error) {
                                    // copy failed
                                }
                                else {
                                    // and then, what do you want to do now...
                                }
                            });
                        }
                    });
                }
            });
        }
    });

This is just an example; in a real project the logic would be more complicated. Our application can get messed up, and the business process gets fragmented across many callback functions. I would like to call this "Dense Callback Phobia". The challenge is to make the code straightforward and easy to read, something like below.
    try
    {
        // open source connection
        var s_conn = sqlConnect(s_connectionString);
        // retrieve data
        var results = sqlExecuteCommand(s_conn, s_command);

        // open target connection
        var t_conn = sqlConnect(t_connectionString);
        // prepare the copy command
        var t_command = getCopyCommand(results);
        // execute the copy command
        sqlExecuteCommand(t_conn, t_command);
    }
    catch (ex)
    {
        // error handling
    }

What's the Problem: Sync-styled Async Programming

Similar to the previous problem, the callback-styled async programming model makes the upcoming operation part of the current operation, mixed with the error handling code, so it's very hard to understand what on earth the code will do. And since Node.js utilizes non-blocking IO, we cannot simply invoke those operations one by one, as they will be executed concurrently. For example, in this post, when I tried to copy the records from Windows Azure SQL Database (a.k.a. WASD) to Windows Azure Table Storage, if I just inserted the data into table storage one by one and then printed the "Finished" message, I would see the message shown before the data had been copied. This is because all operations were executed at the same time. In order to make the copy operation and the print operation execute synchronously, I introduced a module named "async" and changed the code as below.

    async.forEach(results.rows,
        function (row, callback) {
            var resource = {
                "PartitionKey": row[1],
                "RowKey": row[0],
                "Value": row[2]
            };
            client.insertEntity(tableName, resource, function (error) {
                if (error) {
                    callback(error);
                }
                else {
                    console.log("entity inserted.");
                    callback(null);
                }
            });
        },
        function (error) {
            if (error) {
                error["target"] = "insertEntity";
                res.send(500, error);
            }
            else {
                console.log("all done.");
                res.send(200, "Done!");
            }
        });

This ensured that the "Finished" message is printed only after all table entities have been inserted, but it cannot promise that the records are inserted in sequence. It is another challenge to make the code look sync-styled, like this:

    try
    {
        forEach(row in rows) {
            var entity = { /* ... */ };
            tableClient.insert(tableName, entity);
        }

        console.log("Finished");
    }
    catch (ex) {
        console.log(ex);
    }

How "Wind" Helps

"Wind" is a JavaScript library which provides control flow in plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps. It's available on NPM, so we can install it through "npm install wind". Now let's create a very simple Node.js application as an example. This application will take some website URLs from the command arguments, try to retrieve the body length of each, print the lengths in the console and, at the end, print "Finished". I'm going to use the "request" module to keep the HTTP calls simple, so I also need to install it with the command "npm install request". The code would be like this.
    var request = require("request");

    // get the urls from the arguments; the first two arguments are `node.exe` and `fetch.js`
    var args = process.argv.splice(2);

    // main function
    var main = function() {
        for(var i = 0; i < args.length; i++) {
            // get the url
            var url = args[i];
            // send the http request and try to get the response and body
            request(url, function(error, response, body) {
                if(!error && response.statusCode == 200) {
                    // log the url and the body length
                    console.log(
                        "%s: %d.",
                        response.request.uri.href,
                        body.length);
                }
                else {
                    // log error
                    console.log(error);
                }
            });
        }

        // finished
        console.log("Finished");
    };

    // execute the main function
    main();

Let's execute this application. (I put the arguments on multiple lines for better reading.)

    node fetch.js
        "http://www.igt.com/us-en.aspx"
        "http://www.igt.com/us-en/games.aspx"
        "http://www.igt.com/us-en/cabinets.aspx"
        "http://www.igt.com/us-en/systems.aspx"
        "http://www.igt.com/us-en/interactive.aspx"
        "http://www.igt.com/us-en/social-gaming.aspx"
        "http://www.igt.com/support.aspx"

Below is the output. As you can see, the finish message was printed at the beginning, and the pages' lengths were retrieved in a different order than we specified. This is because the request command and the console logging command are executed asynchronously and concurrently. Now let's introduce "Wind" to make them execute in order, which means it will request the websites one by one and print the message at the end.

First of all we need to import the "Wind" package and make sure there's only one global variant named "Wind"; ensure it's "Wind" instead of "wind".

    var Wind = require("wind");

Next, we need to tell "Wind" which code will be executed asynchronously so that "Wind" can control the execution process. In this case the "request" operation is executed asynchronously, so we create a "Task" by using a built-in helper function in "Wind" named Wind.Async.Task.create.

    var requestBodyLengthAsync = function(url) {
        return Wind.Async.Task.create(function(t) {
            request(url, function(error, response, body) {
                if(error || response.statusCode != 200) {
                    t.complete("failure", error);
                }
                else {
                    var data =
                    {
                        uri: response.request.uri.href,
                        length: body.length
                    };
                    t.complete("success", data);
                }
            });
        });
    };

The code above creates a "Task" from the original request-calling code. In "Wind" a "Task" is an operation that will be finished at some point in the future. A "Task" can be started by invoking its start() method, but no one knows when it actually will be finished. Wind.Async.Task.create helps us create a task; its only parameter is a function in which we put the actual operation and then notify the task object that it finished successfully or failed by using the complete() method. In the code above I invoke the request method. If it retrieves the response successfully, I set the status of the task to "success" with the URL and body length; if it fails, I set the task to "failure" and pass the error out.

Next, we change the main() function. In "Wind", if we want a function to be controlled by Wind we need to mark it as "async". This is done with the code below.
    var main = eval(Wind.compile("async", function() {
    }));

When the application is running, Wind detects "eval(Wind.compile("async", function" and generates anonymous code from the body of the original function. The application then runs the anonymous code instead of the original one. In our example the main function will be like this.

    var main = eval(Wind.compile("async", function() {
        for(var i = 0; i < args.length; i++) {
            try
            {
                var result = $await(requestBodyLengthAsync(args[i]));
                console.log(
                    "%s: %d.",
                    result.uri,
                    result.length);
            }
            catch (ex) {
                console.log(ex);
            }
        }

        console.log("Finished");
    }));

As you can see, when I request the URL I use a new command named "$await". It tells Wind that the operation next to $await will be executed asynchronously, and that the main flow should be paused until it finishes (or fails). So in this case, my application pauses until the first response is received, prints its body length, then tries the next one. At the end, it prints the finish message.

Finally, execute the main function. The full code would be like this.

    var request = require("request");
    var Wind = require("wind");

    var args = process.argv.splice(2);

    var requestBodyLengthAsync = function(url) {
        return Wind.Async.Task.create(function(t) {
            request(url, function(error, response, body) {
                if(error || response.statusCode != 200) {
                    t.complete("failure", error);
                }
                else {
                    var data =
                    {
                        uri: response.request.uri.href,
                        length: body.length
                    };
                    t.complete("success", data);
                }
            });
        });
    };

    var main = eval(Wind.compile("async", function() {
        for(var i = 0; i < args.length; i++) {
            try
            {
                var result = $await(requestBodyLengthAsync(args[i]));
                console.log(
                    "%s: %d.",
                    result.uri,
                    result.length);
            }
            catch (ex) {
                console.log(ex);
            }
        }

        console.log("Finished");
    }));

    main().start();

Run our new application. At the beginning we see the code compiled and generated by Wind; then we see the pages requested one by one and, at the end, the finish message. Below is the code Wind generated for us, with the original code shown alongside in comments.
    // Original:
    function () {
        for(var i = 0; i < args.length; i++) {
            try
            {
                var result = $await(requestBodyLengthAsync(args[i]));
                console.log(
                    "%s: %d.",
                    result.uri,
                    result.length);
            }
            catch (ex) {
                console.log(ex);
            }
        }

        console.log("Finished");
    }

    // Compiled:
    /* async << function () { */ (function () {
        var _builder_$0 = Wind.builders["async"];
        return _builder_$0.Start(this,
            _builder_$0.Combine(
                _builder_$0.Delay(function () {
                    /* var i = 0; */ var i = 0;
                    /* for ( */ return _builder_$0.For(function () {
                        /* ; i < args.length */ return i < args.length;
                    }, function () {
                        /* ; i ++) { */ i ++;
                    },
                    /* try { */ _builder_$0.Try(
                        _builder_$0.Delay(function () {
                            /* var result = $await(requestBodyLengthAsync(args[i])); */ return _builder_$0.Bind(requestBodyLengthAsync(args[i]), function (result) {
                                /* console.log("%s: %d.", result.uri, result.length); */ console.log("%s: %d.", result.uri, result.length);
                                return _builder_$0.Normal();
                            });
                        }),
                        /* } catch (ex) { */ function (ex) {
                            /* console.log(ex); */ console.log(ex);
                            return _builder_$0.Normal();
                        /* } */ },
                        null
                    )
                    /* } */ );
                }),
                _builder_$0.Delay(function () {
                    /* console.log("Finished"); */ console.log("Finished");
                    return _builder_$0.Normal();
                })
            )
        );
    /* } */ })

How Wind Works

Some may raise a big concern when they see that I used "eval" in my code, assuming that Wind utilizes "eval" to execute code dynamically, which would perform very poorly. But Wind does NOT use "eval" to run the code; it only uses "eval" as a flag to know which code should be compiled at runtime. When the code is first executed, Wind finds "eval(Wind.compile("async", function", so it knows this function should be compiled. It then utilizes parse-js to analyze the inner JavaScript, generates the anonymous code in memory, and rewrites the original code so that the application uses the anonymous version instead. Since the code generation is done once, when the application starts, it does not matter how long our application runs or how many times the async function is invoked; it keeps using the generated code without regenerating it. So there is no significant performance hit when using Wind.

Wind in My Previous Demo

Let's adopt Wind in one of my previous demonstrations to see how it helps us make our code simple, straightforward and easy to read and understand. In this post, when I implemented the functionality that copied the records from my WASD to table storage, the logic was:
1. Open the database connection.
2. Execute a query to select all records from the table.
3. Recreate the table in Windows Azure table storage.
4. Create entities from each of the records retrieved previously, and insert them into table storage.
5. Finally, show a message as the HTTP response.
But as the image below shows, with so many callbacks and async operations it's very hard to understand the logic from the code. Now let's use Wind to rewrite it. First of all, of course, we need the Wind package. Include the package files in the project and mark them as "Copy always", then add the Wind package to the source code. Pay attention to the variant name: you must use "Wind" instead of "wind".
    var express = require("express");
    var async = require("async");
    var sql = require("node-sqlserver");
    var azure = require("azure");
    var Wind = require("wind");

Now we need to create some async functions with Wind. All async functions should be wrapped so that they can be controlled by Wind: open database, retrieve records, recreate table (delete and create) and insert entity into table. Below are these new functions; all of them are created using Wind.Async.Task.create.

    sql.openAsync = function (connectionString) {
        return Wind.Async.Task.create(function (t) {
            sql.open(connectionString, function (error, conn) {
                if (error) {
                    t.complete("failure", error);
                }
                else {
                    t.complete("success", conn);
                }
            });
        });
    };

    sql.queryAsync = function (conn, query) {
        return Wind.Async.Task.create(function (t) {
            conn.queryRaw(query, function (error, results) {
                if (error) {
                    t.complete("failure", error);
                }
                else {
                    t.complete("success", results);
                }
            });
        });
    };

    azure.recreateTableAsync = function (tableName) {
        return Wind.Async.Task.create(function (t) {
            client.deleteTable(tableName, function (error, successful, response) {
                console.log("delete table finished");
                client.createTableIfNotExists(tableName, function (error, successful, response) {
                    console.log("create table finished");
                    if (error) {
                        t.complete("failure", error);
                    }
                    else {
                        t.complete("success", null);
                    }
                });
            });
        });
    };

    azure.insertEntityAsync = function (tableName, entity) {
        return Wind.Async.Task.create(function (t) {
            client.insertEntity(tableName, entity, function (error, entity, response) {
                if (error) {
                    t.complete("failure", error);
                }
                else {
                    t.complete("success", null);
                }
            });
        });
    };

Then, in order to use these functions, we create a new function which contains all the steps for the data copying.

    var copyRecords = eval(Wind.compile("async", function (req, res) {
        try {
        }
        catch (ex) {
            console.log(ex);
            res.send(500, "Internal error.");
        }
    }));

Let's execute the steps one by one with the "$await" keyword introduced by Wind so that they are invoked in sequence. First, open the database connection.

    var copyRecords = eval(Wind.compile("async", function (req, res) {
        try {
            // connect to the windows azure sql database
            var conn = $await(sql.openAsync(connectionString));
            console.log("connection opened");
        }
        catch (ex) {
            console.log(ex);
            res.send(500, "Internal error.");
        }
    }));

Then retrieve all records through the database connection.

    var copyRecords = eval(Wind.compile("async", function (req, res) {
        try {
            // connect to the windows azure sql database
            var conn = $await(sql.openAsync(connectionString));
            console.log("connection opened");
            // retrieve all records from database
            var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
            console.log("records selected. count = %d", results.rows.length);
        }
        catch (ex) {
            console.log(ex);
            res.send(500, "Internal error.");
        }
    }));

After recreating the table, we create the entities and insert them into table storage.
    var copyRecords = eval(Wind.compile("async", function (req, res) {
        try {
            // connect to the windows azure sql database
            var conn = $await(sql.openAsync(connectionString));
            console.log("connection opened");
            // retrieve all records from database
            var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
            console.log("records selected. count = %d", results.rows.length);
            if (results.rows.length > 0) {
                // recreate the table
                $await(azure.recreateTableAsync(tableName));
                console.log("table created");
                // insert records in table storage one by one
                for (var i = 0; i < results.rows.length; i++) {
                    var entity = {
                        "PartitionKey": results.rows[i][1],
                        "RowKey": results.rows[i][0],
                        "Value": results.rows[i][2]
                    };
                    $await(azure.insertEntityAsync(tableName, entity));
                    console.log("entity inserted");
                }
            }
        }
        catch (ex) {
            console.log(ex);
            res.send(500, "Internal error.");
        }
    }));

Finally, send the response back to the browser.

    var copyRecords = eval(Wind.compile("async", function (req, res) {
        try {
            // connect to the windows azure sql database
            var conn = $await(sql.openAsync(connectionString));
            console.log("connection opened");
            // retrieve all records from database
            var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
            console.log("records selected. count = %d", results.rows.length);
            if (results.rows.length > 0) {
                // recreate the table
                $await(azure.recreateTableAsync(tableName));
                console.log("table created");
                // insert records in table storage one by one
                for (var i = 0; i < results.rows.length; i++) {
                    var entity = {
                        "PartitionKey": results.rows[i][1],
                        "RowKey": results.rows[i][0],
                        "Value": results.rows[i][2]
                    };
                    $await(azure.insertEntityAsync(tableName, entity));
                    console.log("entity inserted");
                }
                // send response
                console.log("all done");
                res.send(200, "All done!");
            }
        }
        catch (ex) {
            console.log(ex);
            res.send(500, "Internal error.");
        }
    }));

If we compare this with the previous code, we find it has become much more readable and easier to understand; it's easy to tell what this function does even without any comments. When a user goes to the URL "/was/copyRecords" we execute the function above. The code would be like this.

    app.get("/was/copyRecords", function (req, res) {
        copyRecords(req, res).start();
    });

And below are the logs printed in the local compute emulator console. As we can see, the functions executed one by one, and finally the response went back to the browser.

Scaffold Functions in Wind

Wind provides not only the async flow control and compile functions, but many scaffold methods as well, which make building async code even easier. I'm going to introduce some basic scaffold functions here. In the code above I created functions wrapping the original async functions, such as open database, create table, etc. All of them are very similar: create a task through Wind.Async.Task.create and return the error or result object through the task's complete() function. In fact, Wind provides functions to create task objects from the original async functions directly. If the original async function only has a callback parameter, we can use the Wind.Async.Binding.fromCallback method to get the task object. For example, the code below returns a task object wrapping the file-existence check function.
Scaffold Functions in Wind

Wind provides not only the async flow control and compile functions, but many scaffold methods as well; we can build our async code more easily by using them. I'm going to introduce some basic scaffold functions here.

In the code above I created some functions wrapped from the original async functions, such as open database, create table, etc. All of them are very similar: create a task with Wind.Async.Task.create, and return the error or result object through the Task.complete function. In fact, Wind provides some functions for us to create the task object from the original async function. If the original async function only has a callback parameter, we can use the Wind.Async.Binding.fromCallback method to get the task object directly. For example, the code below returns a task object which wraps the file existence check function.

var Wind = require("wind");
var fs = require("fs");

fs.existsAsync = Wind.Async.Binding.fromCallback(fs.exists);

A very popular async function pattern in Node.js is that the first parameter of the callback function represents the error object, and the other parameters are the return values. In this case we can use another built-in function in Wind named Wind.Async.Binding.fromStandard. For example, the open database function can be created from the code below.

sql.openAsync = Wind.Async.Binding.fromStandard(sql.open);

/*
sql.openAsync = function (connectionString) {
    return Wind.Async.Task.create(function (t) {
        sql.open(connectionString, function (error, conn) {
            if (error) {
                t.complete("failure", error);
            }
            else {
                t.complete("success", conn);
            }
        });
    });
};
*/

When I was testing the scaffold functions under Wind.Async.Binding, I found that some functions, such as the Azure SDK insert entity function, cannot be processed correctly, so I personally suggest writing the wrapped method manually.

Another scaffold method in Wind is parallel task coordination. In this example, the steps of opening the database, retrieving records and recreating the table should be invoked one by one, but the copying from database to table storage can be executed in parallel. Wind provides a scaffold function named Task.whenAll which can be used here. Task.whenAll accepts a list of tasks and creates a new task, which completes only when all inner tasks have completed, or as soon as any error occurs. For example, in the code below I used Task.whenAll to make all copy operations execute at the same time.

var copyRecordsInParallel = eval(Wind.compile("async", function (req, res) {
    try {
        // connect to the windows azure sql database
        var conn = $await(sql.openAsync(connectionString));
        console.log("connection opened");
        // retrieve all records from database
        var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
        console.log("records selected. count = %d", results.rows.length);
        if (results.rows.length > 0) {
            // recreate the table
            $await(azure.recreateTableAsync(tableName));
            console.log("table created");
            // insert records in table storage in parallel
            var tasks = new Array(results.rows.length);
            for (var i = 0; i < results.rows.length; i++) {
                var entity = {
                    "PartitionKey": results.rows[i][1],
                    "RowKey": results.rows[i][0],
                    "Value": results.rows[i][2]
                };
                tasks[i] = azure.insertEntityAsync(tableName, entity);
            }
            $await(Wind.Async.Task.whenAll(tasks));
            // send response
            console.log("all done");
            res.send(200, "All done!");
        }
    }
    catch (ex) {
        console.log(ex);
        res.send(500, "Internal error.");
    }
}));

app.get("/was/copyRecordsInParallel", function (req, res) {
    copyRecordsInParallel(req, res).start();
});

Besides task creation and coordination, Wind supports cancellation, so that we can send a cancellation signal to the tasks. It also includes an exception solution, which means any exceptions will be reported to the caller function.
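Before moving on, one more scaffold example of my own (not from the original post): fs.readFile follows the standard (error, result) callback pattern, so fromStandard can wrap it in one line. This sketch assumes the optional options argument of fs.readFile is omitted.

var Wind = require("wind");
var fs = require("fs");

// fs.readFile(path, callback) uses the standard (error, result) callback,
// so fromStandard can wrap it directly.
fs.readFileAsync = Wind.Async.Binding.fromStandard(fs.readFile);

var printFile = eval(Wind.compile("async", function (path) {
    var data = $await(fs.readFileAsync(path));
    console.log(data.toString());
}));

printFile("./server.js").start();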
Summary

In this post I introduced a Node.js module named Wind, created by my friend Jeff Zhao. As you can see, unlike other async libraries and frameworks, Wind adopts ideas from F# and C# and utilizes runtime code generation to make it easier to write async, callback-based functions in a sync style. By using Wind there are almost no callbacks, and the code is very easy to understand.

Currently Wind is still being developed and improved. There might be some problems, but the author, Jeff, will be very happy and enthusiastic to hear your problems, feedback, suggestions and comments. You can contact Jeff by:
- Email: [email protected]
- Group: https://groups.google.com/d/forum/windjs
- GitHub: https://github.com/JeffreyZhao/wind/issues

Source code can be downloaded here.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Automating Solaris 11 Zones Installation Using The Automated Install Server

    - by Orgad Kimchi
Introduction

How to use the Oracle Solaris 11 Automated Install server in order to automate the Solaris 11 Zones installation. In this document I will demonstrate how to set up the Automated Install server in order to provide a hands-off installation process for the Global Zone and two Non-Global Zones located on the same system.

Architecture layout: Figure 1. Architecture layout

Prerequisite

Set up the Automated Install server (AI) using the following instructions: "How to Set Up Automated Installation Services for Oracle Solaris 11". The first step in this setup will be creating two Solaris 11 Zones configuration files.

Step 1: Create the Solaris 11 Zones configuration files

The Solaris Zones configuration files should be in the format of the zonecfg export command.

# zonecfg -z zone1 export > /var/tmp/zone1
# cat /var/tmp/zone1
create -b
set brand=solaris
set zonepath=/rpool/zones/zone1
set autoboot=true
set ip-type=exclusive
add anet
set linkname=net0
set lower-link=auto
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
end

Create a backup copy of this file under a different name, for example, zone2.

# cp /var/tmp/zone1 /var/tmp/zone2

Modify the second configuration file with the zone2 configuration information. You should change the zonepath, for example:

set zonepath=/rpool/zones/zone2

Step 2: Copy and share the Zones configuration files

Create the NFS directory for the Zones configuration files:

# mkdir /export/zone_config

Share the directory for the Zones configuration files:

# share -o ro /export/zone_config

Copy the Zones configuration files into the NFS shared directory:

# cp /var/tmp/zone1 /var/tmp/zone2 /export/zone_config

Verify that the NFS share has been created using the following command:

# share
export_zone_config      /export/zone_config     nfs     sec=sys,ro

Step 3: Add the Global Zone as a client to the Install Service

Use the installadm create-client command to associate the client (Global Zone) with the install service. To find the MAC address of a system, use the dladm command as described in the dladm(1M) man page. The following command adds the client (Global Zone) with MAC address 0:14:4f:2:a:19 to the s11x86service install service.

# installadm create-client -e "0:14:4f:2:a:19" -n s11x86service

You can verify the client creation using the following command:

# installadm list -c
Service Name  Client Address     Arch   Image Path
------------  --------------     ----   ----------
s11x86service 00:14:4F:02:0A:19  i386   /export/auto_install/s11x86service

We can see the client install service name (s11x86service), MAC address (00:14:4F:02:0A:19) and architecture (i386).

Step 4: Global Zone manifest setup

First, get a list of the installation services and the manifests associated with them:

# installadm list -m
Service Name   Manifest        Status
------------   --------        ------
default-i386   orig_default   Default
s11x86service  orig_default   Default

Then probe the s11x86service and the default manifest associated with it. The -m switch reflects the name of the manifest associated with a service. Since we want to capture that output into a file, we redirect the output of the command as follows:

# installadm export -n s11x86service -m orig_default > /var/tmp/orig_default.xml

Create a backup copy of this file under a different name, for example, orig_default2.xml, and edit the copy.
# cp /var/tmp/orig_default.xml /var/tmp/orig_default2.xml

Use the configuration element in the AI manifest for the client system to specify non-global zones. Use the name attribute of the configuration element to specify the name of the zone. Use the source attribute to specify the location of the config file for the zone. The source location can be any http:// or file:// location that the client can access during installation. The following sample AI manifest specifies two Non-Global Zones: zone1 and zone2. You should replace server_ip with the IP address of the NFS server.

<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
  <ai_instance>
    <target>
      <logical>
        <zpool name="rpool" is_root="true">
          <filesystem name="export" mountpoint="/export"/>
          <filesystem name="export/home"/>
          <be name="solaris"/>
        </zpool>
      </logical>
    </target>
    <software type="IPS">
      <source>
        <publisher name="solaris">
          <origin name="http://pkg.oracle.com/solaris/release"/>
        </publisher>
      </source>
      <software_data action="install">
        <name>pkg:/entire@latest</name>
        <name>pkg:/group/system/solaris-large-server</name>
      </software_data>
    </software>
    <configuration type="zone" name="zone1" source="file:///net/server_ip/export/zone_config/zone1"/>
    <configuration type="zone" name="zone2" source="file:///net/server_ip/export/zone_config/zone2"/>
  </ai_instance>
</auto_install>

The following example adds the /var/tmp/orig_default2.xml AI manifest to the s11x86service install service:

# installadm create-manifest -n s11x86service -f /var/tmp/orig_default2.xml -m gzmanifest

You can verify the manifest creation using the following command:

# installadm list -n s11x86service -m
Service/Manifest Name  Status   Criteria
---------------------  ------   --------
s11x86service
   orig_default        Default  None
   gzmanifest          Inactive None

We can see from the command output that the new manifest named gzmanifest has been created and associated with the s11x86service install service.

Step 5: Non-Global Zone manifest setup

The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone. If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for Zones is used. The default AI manifest for Zones is available at /usr/share/auto_install/manifest/zone_default.xml, and in this example we will use it. The following is the default AI manifest for zones:

# cat /usr/share/auto_install/manifest/zone_default.xml
<?xml version="1.0" encoding="UTF-8"?>
<!--
 Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved.
-->
<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
    <ai_instance name="zone_default">
        <target>
            <logical>
                <zpool name="rpool">
                    <!--
                      Subsequent <filesystem> entries instruct an installer
                      to create following ZFS datasets:
                          <root_pool>/export         (mounted on /export)
                          <root_pool>/export/home    (mounted on /export/home)
                      Those datasets are part of standard environment
                      and should be always created.
                      In rare cases, if there is a need to deploy a zone
                      without these datasets, either comment out or remove
                      <filesystem> entries. In such scenario, it has to be also
                      assured that in case of non-interactive post-install
                      configuration, creation of initial user account is
                      disabled in related system configuration profile.
                      Otherwise the installed zone would fail to boot.
                    -->
                    <filesystem name="export" mountpoint="/export"/>
                    <filesystem name="export/home"/>
                    <be name="solaris">
                        <options>
                            <option name="compression" value="on"/>
                        </options>
                    </be>
                </zpool>
            </logical>
        </target>
        <software type="IPS">
            <destination>
                <image>
                    <!-- Specify locales to install -->
                    <facet set="false">facet.locale.*</facet>
                    <facet set="true">facet.locale.de</facet>
                    <facet set="true">facet.locale.de_DE</facet>
                    <facet set="true">facet.locale.en</facet>
                    <facet set="true">facet.locale.en_US</facet>
                    <facet set="true">facet.locale.es</facet>
                    <facet set="true">facet.locale.es_ES</facet>
                    <facet set="true">facet.locale.fr</facet>
                    <facet set="true">facet.locale.fr_FR</facet>
                    <facet set="true">facet.locale.it</facet>
                    <facet set="true">facet.locale.it_IT</facet>
                    <facet set="true">facet.locale.ja</facet>
                    <facet set="true">facet.locale.ja_*</facet>
                    <facet set="true">facet.locale.ko</facet>
                    <facet set="true">facet.locale.ko_*</facet>
                    <facet set="true">facet.locale.pt</facet>
                    <facet set="true">facet.locale.pt_BR</facet>
                    <facet set="true">facet.locale.zh</facet>
                    <facet set="true">facet.locale.zh_CN</facet>
                    <facet set="true">facet.locale.zh_TW</facet>
                </image>
            </destination>
            <software_data action="install">
                <name>pkg:/group/system/solaris-small-server</name>
            </software_data>
        </software>
    </ai_instance>
</auto_install>

(Optional) We can customize the default AI manifest for Zones. Create a backup copy of this file under a different name, for example, zone_default2.xml, and edit the copy:

# cp /usr/share/auto_install/manifest/zone_default.xml /var/tmp/zone_default2.xml

Edit the copy (/var/tmp/zone_default2.xml). The following example adds the /var/tmp/zone_default2.xml AI manifest to the s11x86service install service and specifies that zone1 and zone2 should use this manifest.
# installadm create-manifest -n s11x86service -f /var/tmp/zone_default2.xml -m zones_manifest -c zonename="zone1 zone2"

Note: Do not use the following elements or attributes in a non-global zone AI manifest:
    The auto_reboot attribute of the ai_instance element
    The http_proxy attribute of the ai_instance element
    The disk child element of the target element
    The noswap attribute of the logical element
    The nodump attribute of the logical element
    The configuration element

Step 6: Global Zone profile setup

We are going to create a global zone configuration profile which includes the host information, for example: host name, IP address, name services, etc.

# sysconfig create-profile -o /var/tmp/gz_profile.xml

You need to provide the host information, for example:
    Default router
    Root password
    DNS information

The output should eventually disappear and be replaced by the initial screen of the System Configuration Tool (see Figure 2), where you can do the final configuration.

Figure 2. Profile creation menu

You can validate the profile using the following command:

# installadm validate -n s11x86service -P /var/tmp/gz_profile.xml
Validating static profile gz_profile.xml...  Passed

Next, instantiate the profile with the install service. In our case, use the following syntax:

# installadm create-profile -n s11x86service -f /var/tmp/gz_profile.xml -p gz_profile

You can verify the profile creation using the following command:

# installadm list -n s11x86service -p
Service/Profile Name  Criteria
--------------------  --------
s11x86service
   gz_profile         None

We can see that the gz_profile has been created and associated with the s11x86service install service.

Step 7: Set up the Solaris Zones configuration profiles

This step is similar to the Global Zone profile creation in Step 6.

# sysconfig create-profile -o /var/tmp/zone1_profile.xml
# sysconfig create-profile -o /var/tmp/zone2_profile.xml

You can validate the profiles using the following commands:

# installadm validate -n s11x86service -P /var/tmp/zone1_profile.xml
Validating static profile zone1_profile.xml...  Passed
# installadm validate -n s11x86service -P /var/tmp/zone2_profile.xml
Validating static profile zone2_profile.xml...  Passed

Next, associate the profiles with the install service. The following example adds the zone1_profile.xml configuration profile to the s11x86service install service and specifies that zone1 should use this profile:

# installadm create-profile -n s11x86service -f /var/tmp/zone1_profile.xml -p zone1_profile -c zonename=zone1

The following example adds the zone2_profile.xml configuration profile to the s11x86service install service and specifies that zone2 should use this profile:

# installadm create-profile -n s11x86service -f /var/tmp/zone2_profile.xml -p zone2_profile -c zonename=zone2

You can verify the profile creation using the following command:

# installadm list -n s11x86service -p
Service/Profile Name  Criteria
--------------------  --------
s11x86service
   zone1_profile      zonename = zone1
   zone2_profile      zonename = zone2
   gz_profile         None

We can see that we have three profiles in the s11x86service install service:
    Global Zone: gz_profile
    zone1: zone1_profile
    zone2: zone2_profile
Step 8: Global Zone setup

Associate the global zone client with the manifest and the profile that we created in the previous steps. The following example adds the manifest and profile to the client (global zone), where:
    gzmanifest is the name of the manifest
    gz_profile is the name of the configuration profile
    mac="0:14:4f:2:a:19" is the client (global zone) MAC address
    s11x86service is the install service name

# installadm set-criteria -m gzmanifest -p gz_profile -c mac="0:14:4f:2:a:19" -n s11x86service

You can verify the manifest and profile association using the following command:

# installadm list -n s11x86service -p -m
Service/Manifest Name  Status   Criteria
---------------------  ------   --------
s11x86service
   gzmanifest                   mac  = 00:14:4F:02:0A:19
   orig_default        Default  None

Service/Profile Name  Criteria
--------------------  --------
s11x86service
   gz_profile         mac      = 00:14:4F:02:0A:19
   zone2_profile      zonename = zone2
   zone1_profile      zonename = zone1

Step 9: Provision the host with the Non-Global Zones

The next step is to boot the client system off the network and provision it using the Automated Install service that we just set up. First, boot the client system. Figure 3 shows the network boot attempt (when done on an x86 system):

Figure 3. Network Boot

Then you will be prompted by a GRUB menu, with a timer, as shown in Figure 4. The default selection (the "Text Installer and command line" option) is highlighted. Press the down arrow to highlight the second option, labeled Automated Install, and then press Enter. The reason we need to do this is that we want to prevent a system from being automatically re-installed if it were to be booted from the network accidentally.

Figure 4. GRUB Menu

What follows is the continuation of a networked boot from the Automated Install server. The client downloads a mini-root (a small set of files in which to successfully run the installer), identifies the location of the Automated Install manifest on the network, retrieves that manifest, and then processes it to identify the address of the IPS repository from which to obtain the desired software payload. Non-Global Zones are installed and configured on the first reboot after the Global Zone is installed.

You can list the status of all the Solaris Zones using the following command:

# zoneadm list -civ

Once the Zones are in the running state, you can log in to a Zone using the following command:

# zlogin zone1

Troubleshooting Automated Installations

If an installation to a client system fails, you can find the client log at /system/volatile/install_log.

NOTE: Zones are not installed if any of the following errors occurs:
    A zone config file is not syntactically correct.
    A collision exists among zone names, zone paths, or delegated ZFS datasets in the set of zones to be installed.
    Required datasets are not configured in the global zone.

For more troubleshooting information see "Installing Oracle Solaris 11 Systems".

Conclusion

This paper demonstrated the benefits of using the Automated Install server to simplify the Non-Global Zones setup, including the creation and configuration of the global zone manifest and the Solaris Zones profiles.


  • ASM programming, how to use loop?

    - by chris
Hello, this is my first time here. I am a college student. I've created a simple program in assembly language, and I'm wondering if I can use a loop to do almost the same thing the program below does. I'm also eager to find someone I can talk to through MSN Messenger so I can ask questions right away (if possible). OK, thank you.

.MODEL small
.STACK 400h
.data
prompt db 10,13,'Please enter a 3 digit number, example 100:',10,13,'$' ;10,13 cause it to go to the next line
first_digit db 0d
second_digit db 0d
third_digit db 0d
Not_prime db 10,13,'This number is not prime!',10,13,'$'
prime db 10,13,'This number is prime!',10,13,'$'
question db 10,13,'Do you want to continue Y/N $'
counter dw 0d
number dw 0d
half dw ?
.code
Start:
    mov ax, @data           ;establish access to the data segment
    mov ds, ax
    mov number, 0d
LetsRoll:
    mov dx, offset prompt   ;print the string (please enter a 3 digit...)
    mov ah, 9h
    int 21h                 ;execute

    ;read FIRST DIGIT
    mov ah, 1d              ;bios code for read a keystroke
    int 21h                 ;call bios, it is understood that the ascii code will be returned in al
    mov first_digit, al     ;may as well save a copy
    sub al, 30h             ;convert code to an actual integer
    cbw                     ;CONVERT BYTE TO WORD. This takes whatever number is in al and
                            ;extends it to ax, doubling its size from 8 bits to 16 bits.
                            ;The first digit now occupies all of ax as an integer
    mov cx, 100d            ;this is so we can calculate 100*1st digit + 10*2nd digit + 3rd digit
    mul cx                  ;start to accumulate the 3 digit number in the variable;
                            ;it is understood that the other operand is ax
                            ;AND that the result will use both dx::ax,
                            ;but we understand that dx will contain only leading zeros
    add number, ax          ;save
    ;variable <number> now contains 1st digit * 100
    ;----------------------------------------------------------------------
    ;read SECOND DIGIT, multiply by 10 and add in
    mov ah, 1d              ;bios code for read a keystroke
    int 21h                 ;call bios, it is understood that the ascii code will be returned in al
    mov second_digit, al    ;may as well save a copy
    sub al, 30h             ;convert code to an actual integer
    cbw                     ;convert byte to word, extending al into all of ax
    mov cx, 10d             ;continue to accumulate the 3 digit number in the variable
    mul cx                  ;it is understood that the other operand is ax, containing the first digit,
                            ;AND that the result will use both dx::ax,
                            ;but we understand that dx will contain only leading zeros. Ignore them
    add number, ax          ;save -- nearly finished
    ;variable <number> now contains 1st digit * 100 + second digit * 10
    ;----------------------------------------------------------------------
    ;read THIRD DIGIT, add it in (no multiplication this time)
    mov ah, 1d              ;bios code for read a keystroke
    int 21h                 ;call bios, it is understood that the ascii code will be returned in al
    mov third_digit, al     ;may as well save a copy
    sub al, 30h             ;convert code to an actual integer
    cbw                     ;convert byte to word, extending al into all of ax
    add number, ax          ;both my variable number and ax are 16 bits, so equal size

    mov ax, number          ;copy contents of number to ax
    mov cx, 2h
    div cx                  ;divide by cx
    mov half, ax            ;copy the contents of ax to half
    mov cx, 2h
    mov ax, number          ;copy number to ax
    xor dx, dx              ;flush dx
    jmp prime_check         ;jump to prime check

print_question:
    mov dx, offset question ;print string (do you want to continue Y/N?)
    mov ah, 9h
    int 21h                 ;execute
    mov ah, 1h
    int 21h                 ;execute
    cmp al, 4eh             ;compare with 'N'
    je Exit                 ;jump to exit
    cmp al, 6eh             ;compare with 'n'
    je Exit                 ;jump to exit
    cmp al, 59h             ;compare with 'Y'
    je Start                ;jump to start
    cmp al, 79h             ;compare with 'y'
    je Start                ;jump to start

prime_check:
    div cx                  ;divide by cx
    cmp dx, 0h              ;test the remainder left in dx
    je print_not_prime      ;jump to not prime
    xor dx, dx              ;flush dx
    mov ax, number          ;copy the contents of number to ax
    cmp cx, half            ;compare half with cx
    je print_prime          ;jump to the print prime section
    inc cx                  ;increment cx by one
    jmp prime_check         ;repeat the prime check

print_prime:
    mov dx, offset prime    ;print string (this number is prime!)
    mov ah, 9h
    int 21h                 ;execute
    jmp print_question      ;jump to question (do you want to continue Y/N?) for the repeat

print_not_prime:
    mov dx, offset Not_prime ;print string (this number is not prime!)
    mov ah, 9h
    int 21h                 ;execute
    jmp print_question      ;jump to question (do you want to continue Y/N?) for the repeat

Exit:
    mov ah, 4ch
    int 21h                 ;execute exit
END Start
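Not a full answer, but a hedged sketch (untested, using the same int 21h services as above) of how the LOOP instruction could replace the three copy-pasted digit-reading blocks: keep the count in cx, and multiply the running total by 10 on each pass instead of using 100 and 10 separately.

    mov number, 0d
    mov cx, 3d          ;three digits to read
read_digits:
    mov ah, 1d          ;DOS: read one keystroke, ascii returned in al
    int 21h
    sub al, 30h         ;ascii -> integer
    cbw                 ;extend al into ax
    push ax             ;save the new digit
    mov ax, number
    mov bx, 10d
    mul bx              ;number * 10 (dx gets leading zeros, ignored)
    pop bx              ;recover the digit
    add ax, bx          ;number = number * 10 + digit
    mov number, ax
    loop read_digits    ;dec cx, jump back while cx != 0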


  • eventcreate with multiline description

    - by Adam J.R. Erickson
I'd like to use eventcreate from a batch file to log the results of a file copy job (robocopy). What I'd really like to do is use the output of the file copy job as the description of the event (the /D argument of eventcreate). The trouble is, there are multiple lines in the file copy output, and I've only been able to get one line into a local variable or a pipe command. I've tried reading a local variable in from a file, like

set /P myVar=<temp.txt

but it only gets the first line. How can I write multiple lines to the description of an event from a batch file?
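One hedged workaround (a sketch, untested; the log file name and event IDs are made up): eventcreate's /D takes a single string, so the closest a plain batch file gets is folding every line of the log into one variable with delayed expansion, using a separator token in place of real line breaks.

@echo off
setlocal EnableDelayedExpansion
set "msg="
rem Fold each line of the log into one string; " ~ " stands in for a newline.
for /f "usebackq delims=" %%L in ("robocopy_log.txt") do set "msg=!msg!%%L ~ "
rem Write one Application-log event with the folded text as its description.
eventcreate /T INFORMATION /ID 100 /L APPLICATION /SO RobocopyJob /D "!msg!"
endlocal

Caveats: lines containing "!" will be mangled by delayed expansion, and a very large log will hit the command-line length limit, so this suits robocopy's summary more than its full output.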


  • Using AuthzSVNAccessFile for controlling SVN Access produces HTTP 400 Bad Request

    - by meeper
I have a new repository on an existing Subversion server that requires us to perform path-based authorization within the repository. I found that the AuthzSVNAccessFile directive in Apache is directly responsible for allowing this functionality. After fixing several other problems, such as AuthzSVNAccessFile preventing SVNListParentPath from operating properly, I am left with one single problem: I can check out, I can update, I can commit, BUT I cannot execute an SVN COPY for performing branch/tagging operations. The moment I comment out the AuthzSVNAccessFile line in the Apache config, everything works as expected except the obvious path authorizations.

Versions:
The server OS is Debian 6.0.7 (Squeeze)
Apache 2.2.16-6+squeeze11
Server Subversion 1.6.12dfsg-7
Clients are running Windows
Clients tried are:
TortoiseSVN 1.8.2 Build 24708 64bit
SVN CLI Client 1.8.3 (r1516576)

Authentication is performed via AD to a Windows 2003 domain and appears to be operating normally. I have stripped out all other configurations and repository setups to produce this single configuration that reproduces the problem.

Apache configuration:

<VirtualHost *:443>
    ServerName svn-test.company.com
    ServerAlias /svn-test
    ServerAdmin [email protected]
    SSLEngine On
    SSLCertificateFile /etc/apache2/apache.pem
    ErrorLog /var/log/apache2/svn-test_error.log
    LogLevel warn
    CustomLog /var/log/apache2/svn-test_access.log combined
    ServerSignature On

    # Repository Access to all Repositories
    <Location "/">
        DAV svn
        SVNParentPath /var/svn
        SVNListParentPath on
        AuthBasicProvider ldap
        AuthType Basic
        AuthzLDAPAuthoritative Off
        AuthName "Subversion Test Repository System"
        AuthLDAPURL "ldap://adserver.company.com:389/DC=corp,DC=company,DC=com?sAMAccountName?sub?(objectClass=*)" NONE
        AuthLDAPBindDN "CN=service_account,OU=ServiceIDs,OU=corp,OU=Delegated,DC=na,DC=corp,DC=company,DC=com"
        AuthLDAPBindPassword service_account_password
        Require valid-user
        SSLRequireSSL
    </Location>

    # <LocationMatch /.+> is a really dirty trick to make listing of repositories work
    # http://d.hatena.ne.jp/shimonoakio/20080130/1201686016
    <LocationMatch /.+>
        AuthzSVNAccessFile /etc/apache2/svn_path_auth
    </LocationMatch>
</VirtualHost>

SVN access file:

[/]
* = rw

The repository used (AuthTestBasic) consists of the following directory structure and contains no externals (this is a literal listing, not an example):

/
/branches/
/tags/
/trunk/
/trunk/somefile.txt

Tortoise produces the following error during a tag operation in its tag result window:

Adding directory failed: COPY on /authtestbasic/!svn/bc/2/trunk (400 Bad Request)

The svn.exe CLI client produces the following error:

C:\Users\e20epkt>svn copy https://servername/authtestbasic/trunk https://servername/authtestbasic/tags/tag1 -m "svn cli client"
svn: E175002: Adding directory failed: COPY on /authtestbasic/!svn/bc/2/trunk (400 Bad Request)

The Apache error log has nothing in it; however, the Apache access log has the following in it (IP addresses and usernames changed, obviously):

10.1.2.100 - - [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 401 2595 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 200 996 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 200 884 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/trunk HTTP/1.1" 207 692 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/vcc/default HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "REPORT /authtestbasic/!svn/bc/0/trunk HTTP/1.1" 404 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/vcc/default HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "REPORT /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 200 674 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 207 548 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/tags/tag1 HTTP/1.1" 404 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "MKACTIVITY /authtestbasic/!svn/act/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e HTTP/1.1" 201 708 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/tags HTTP/1.1" 207 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "CHECKOUT /authtestbasic/!svn/vcc/default HTTP/1.1" 201 708 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPPATCH /authtestbasic/!svn/wbl/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e/2 HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "CHECKOUT /authtestbasic/!svn/ver/1/tags HTTP/1.1" 201 724 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "COPY /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 400 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "DELETE /authtestbasic/!svn/act/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e HTTP/1.1" 204 1956 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"

You'll see that the second-to-last line contains the COPY command with the HTTP 400 response; however, there doesn't appear to be any indication as to why. Please note that, while yes, this is a test repository on a test server, I am experiencing this same issue in this test setup where I have eliminated all other possible causes (mixed repository configurations, externals, etc.). I have also confirmed that all files for the repository (/var/svn/authtestbasic) are owned by the Apache user www-data.


  • xcopy and timeouts

    - by acidzombie24
Related to this question and this answer: http://serverfault.com/questions/48839/backup-on-disc-using-truecrypt-corruption-problem/50829#50829

I want to copy the backup to my HD. However, the timeout seems to be a problem. I see many single files, then I see "File creation error - Data error (cyclic redundancy check)." and it moves on to trying to copy the next file. When I click My Computer I see nothing, and it also locks up for some seconds (30 maybe). So TrueCrypt is not a good solution if I must be able to copy files back from a volume with a corrupted sector? (Answer in the other thread, please.) Is there any way for me to change the timeout? I looked here for some flags and didn't see any: http://www.scriptlogic.com/support/CustomScripts/XCOPYCommandLineParameters.html
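I don't know of an xcopy switch for the timeout itself, but two things that may help here (a sketch with made-up paths): xcopy's /C keeps going past errors such as the CRC failure, and robocopy's /R and /W shrink the retry count and the wait between retries, which is usually what makes bad sectors feel like a "timeout".

xcopy D:\backup E:\restore /E /H /K /C
robocopy D:\backup E:\restore /E /R:0 /W:0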


  • Tune SQL Server Express using Profiler?

    - by Glen Little
I have a SQL Server 2005 database. A copy of it is running in development on a full version of SQL Server. Another copy is running in SQL Server 2005 Express on a web server. I've used SQL Profiler and saved a tuning trace log from activity on the SQL Express copy of the database. I want to use the saved trace log in the Database Engine Tuning Advisor. If I connect the Advisor to the Express database, I am told that Express is not supported. If I connect the Advisor to the full SQL Server database, I get empty results. Is there any way to do this?
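One hedged workaround, since the Advisor refuses Express as a target: back up the Express copy and restore it under a new name on the full development instance, then run the Tuning Advisor and the saved trace against that restored copy (database and logical file names below are made up):

BACKUP DATABASE WebDb TO DISK = 'C:\temp\WebDb.bak';

RESTORE DATABASE WebDb_Tune FROM DISK = 'C:\temp\WebDb.bak'
WITH MOVE 'WebDb'     TO 'C:\Data\WebDb_Tune.mdf',
     MOVE 'WebDb_log' TO 'C:\Data\WebDb_Tune.ldf';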


  • Immutable hard links on ext3/4?

    - by shovas
In my research on file versioning at the fs level, snapshotting, and related ideas, I took a look at hard links and exactly what they are and how they behave. Using rsync you can get a pretty slick poor man's snapshotting system up and running on file systems that don't natively support it. But can you get immutable hard links on ext3/4, or any other file systems for that matter? My definition of an immutable hard link is: a hard link which, when changed in one location, becomes a regular copy and is no longer a hard link. I would like this because it would let snapshots link against the source data itself instead of a copy of the data (in the case of the rsync snapshotting technique). I have gigabytes of data that can't be duplicated due to space restrictions, but I have enough room if I can intelligently snapshot individual changed files with the rest linked to the source, not a copy. Given all that, is there some other technique, feature or technology I'm really looking for?
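For reference, a minimal sketch of the rsync snapshot technique referred to above (paths are made up): files unchanged since the previous snapshot become hard links into it, so only changed files consume new space.

rsync -a --delete --link-dest=/backups/2010-05-01/ /data/ /backups/2010-05-02/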


  • local app opening instead of ssh forwarded app over x

    - by The Journeyman geek
I have a custom install of Ubuntu 9.10: xorg intel and its deps, icewm, xde and swiftfox from the swiftfox repos. I'm trying to start an ssh-forwarded session of swiftfox from another system (which has the plain vanilla Firefox version from the repos) with ssh -x [ipaddress] and then starting swiftfox from the command line. When I start it, though, it opens up the local copy of Firefox instead of the copy of swiftfox on the other box. I have NO idea what's wrong... swiftfox doesn't open on the remote box, I am definitely at the remote box's terminal, and there's no way whatsoever it should open a local copy. I'm wondering what's wrong.
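One hedged observation that matches the symptom: OpenSSH's lowercase -x explicitly disables X11 forwarding, while uppercase -X enables it. It may also help to stop swiftfox from handing the request off to an already-running local Firefox:

ssh -X user@remotebox       # capital X enables X11 forwarding (lowercase -x disables it)
swiftfox -no-remote         # don't hand off to an existing Firefox instance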


  • Excel hyperlink not redirecting properly (bug?)

    - by Andrej
I don't know if this is the right place to post this question, but I have an Excel hyperlink problem. Here's the thing: I click on, let's say, "A1", copy the link in it (http://www.godaddy.com/domains/searchresults.aspx?ci=54814), right-click on the hyperlink and copy that SAME URL as the link (if it is not automatically detected and changed). When I go to click on it, I am redirected to http://www.godaddy.com/domains/search.aspx?ci=53972. If I copy and paste the link directly into the browser, it works fine. Does somebody know what's going on? Thank you for your time. Andrej


  • Install system-wide PEAR on Debian Lenny

    - by artvolk
Good day! I've installed PEAR on Debian Lenny using apt-get install php-pear; it was installed in /usr/share/php. When I try to install anything using pear install <package>, a PEAR folder is created under the current user's home directory and a separate copy of pear is installed there. I ended up installing a local copy of PEAR for one of the users like this: http://kuziel.info/log/archives/2006/04/01/Installation-of-local-PEAR-repository. Is there any way to tell pear to install packages to the system-wide repository in /usr/share/php? What is the recommended way of using a system-wide PEAR copy? Thanks in advance!
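A hedged first check (the commands are standard pear CLI, but the fix depends on which config file wins): pear installs wherever the effective php_dir points, and a per-user configuration can silently override the system one, so run installs as root for the system-wide tree.

pear config-show | grep php_dir       # see which php_dir applies for this user
sudo pear config-set php_dir /usr/share/php
sudo pear install <package>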


  • Cannot read/access Apache2 access logs

    - by webworm
I have been asked to take a look at some access logs for an Apache2 web server running on Ubuntu. I have been told by the administrator of the machine that my login has "admin" access, yet I cannot seem to copy the access logs from Apache2 to my local machine via FTP for analysis. I figure one of two things is happening: I don't really have full admin access, or some other process (perhaps Apache2) has control of the log files and won't let me copy them. How can I tell if I truly have admin access? What type of access do I need to request? Root access? Something else? Should I be able to copy these log files with admin access?
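A hedged sketch of how to check which of the two it is, from a shell on the server (paths assume a stock Debian/Ubuntu Apache2):

id                                  # are you in the adm group? Apache logs there are usually root:adm
ls -l /var/log/apache2/access.log   # typically mode 640: root writes, group adm can read
sudo cp /var/log/apache2/*.log ~/ && sudo chown $USER ~/*.log   # works if you have sudo rights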


  • Boot drive is incorrect one.

    - by Dwayne
I have several hard drives installed. I normally use C: as my boot drive and a much larger drive (H:) for storing most of my files. I found a subfolder in my C:\windows folder named windows after a failed reinstall of Vista. Upon inspection I determined it to be older than the C:\windows folder, and therefore it must be the older, working version of the boot. I renamed the C:\windows folder to C:\windows.bad and moved the windows subfolder up to the C:\ root directory. I also copied it to the H: drive. Now MSCONFIG reports that the copy that is booting is the H: copy. How can I change it back to the C: copy, and can I delete the C:\windows.bad file set?


  • SQL Server Snapshot Replication Subscriber (Editable or Read-Only)

    - by NateReid
I need to create a copy of my SQL 2008 R2 Enterprise database and have it located on the same server as the original. I will be using this second copy of the database as the target of a mostly read-only website. I understand that if I create this copy of the database using snapshot replication, all data changes in the subscriber database will be overwritten in the event of the next replication. The web application will try to write to this database to record login attempts, etc., and will fail if its source database is read-only. In my case I do not need to keep these auditing records, and they can therefore be overwritten each time a new snapshot is applied. My question is whether SQL Server forces the subscriber database to be read-only, and is there any way around this? Thank you, Nate


  • Cancel table design change in SQL Server 2000

    - by Bryce Wagner
In SQL Server Enterprise Manager, if you change one of the columns of a table and save it, it will create a table with the new definition, copy all the data to that new table, and then delete the old table when it's done. But if your table is large (let's say on the order of 100GB), it can take a long time to do this. Even worse, if you don't have sufficient disk space, it doesn't notice ahead of time, and it will spend a long time trying to copy the table, run out of space, and then decide to abort the process. We have other ways to copy the data in smaller chunks, but those require significantly more manual intervention, so it's usually easier to just let Enterprise Manager figure it out, as long as there's enough disk space. So for a long-running "Design Table" save like this, is there any way to cancel once it's started? Or do you just have to wait for it to fail?


  • Copying 500GB Data to EC2 Instances Local Drive

    - by iCode
Please do not ask me why (they made me), but I have to copy 500GB of data to the local drive of each of the 200 nodes/instances that I am launching in EC2. For reasons beyond this post, this data must be on the local drive and not an EBS drive, so I cannot benefit from snapshots. What is the fastest way that I can manage this? Copying from S3 to each node takes a long time. I tried attaching an EBS volume containing the data to every node and then copying the data from EBS to the local drive, but that also takes a long time (several hours). Now I am also thinking of using BitTorrent, but I'm not sure how well it is going to work. What is the best way to copy 500GB of static data to each local drive of 200 EC2 instances? The 500GB of data is composed of several hundred files of varying size, but the biggest file is 20GB.


  • why use branches in svn?

    - by ajsie
I know that you can organize your files according to this structure in svn: trunk, branches, tags. You copy the trunk to a folder in branches if you want to have a separate development line, and later on you merge this branch back to trunk. But I wonder why me and my group should do this. Why should one copy the trunk to a branch and work with this copy just to merge it back to the trunk, while meanwhile the code is frequently updated/committed to stay in sync with the trunk? Why not just work on the trunk then? What are the benefits of creating a branch? It would be great if someone could shed some light on this topic. Thanks in advance.
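For what it's worth, the mechanics being asked about look roughly like this with an svn 1.6+ client (names are made up; a sketch, not a policy argument):

svn copy ^/trunk ^/branches/my-feature -m "start a separate development line"
svn switch ^/branches/my-feature                # commit risky work here without breaking trunk
svn merge ^/trunk                               # pull trunk changes into the branch now and then
svn merge --reintegrate ^/branches/my-feature   # finally, from a trunk working copy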


  • Cannot access files after trying to upgrade Ubuntu

    - by Ola
I tried to upgrade Ubuntu from 11.10 to 12.04. I left it for 24 hours, but the upgrade did not complete, so I cancelled it. I thought I would copy all my files to a DVD/CD and try downloading a fresh copy of Ubuntu. But now I cannot open or copy any files. I cannot even shut down my laptop. I have many important files on my laptop. Can someone help me retrieve my files from my laptop? Regards, Ola
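A hedged recovery sketch until the real cause is known (the device name is a guess; adjust to your disk layout): boot from an Ubuntu live CD/USB, mount the internal disk read-only, and copy the files out to external media.

sudo mkdir /mnt/old
sudo mount -o ro /dev/sda1 /mnt/old     # your root partition may differ
cp -a /mnt/old/home/ola /media/external/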

