Search Results

Search found 9676 results on 388 pages for 'soft delete'.


  • [asp.net-mvc]Handling Multiple Form Actions on One View?

    - by Pandiya Chendur
    I have multiple master pages in my ASP.NET MVC web application... Each of the pages has add, edit, view and delete functionality... Right now I have to create multiple views to handle add, edit, view and delete, i.e. the user has to navigate to another view to edit or view the details of a record... How can I handle multiple form actions (add, edit, view and delete) on one view?
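    One common approach (not from the original post; the controller, action and model names below are illustrative) is to render several small forms on the single view, each posting to its own controller action:

        // Hypothetical controller: each form on the view posts to a different action.
        public class PostingsController : Controller
        {
            [HttpPost]
            public ActionResult Create(Posting posting) { /* insert record */ return RedirectToAction("Index"); }

            [HttpPost]
            public ActionResult Edit(int id, Posting posting) { /* update record */ return RedirectToAction("Index"); }

            [HttpPost]
            public ActionResult Delete(int id) { /* delete record */ return RedirectToAction("Index"); }
        }

        <%-- On the single view, one form per action: --%>
        <% using (Html.BeginForm("Edit", "Postings", new { id = Model.Id })) { %>
            <%-- edit fields here --%>
            <input type="submit" value="Save" />
        <% } %>
        <% using (Html.BeginForm("Delete", "Postings", new { id = Model.Id })) { %>
            <input type="submit" value="Delete" />
        <% } %>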

    Read the article

  • Better variant of getting a dynamically-allocated output array from a function?

    - by Raigomaru
    Here are two variants. First:

        int n = 42;

        int* some_function(int* input)
        {
            int* result = new int[n];
            // some code
            return result;
        }

        int main()
        {
            int* input = new int[n];
            int* output = some_function(input);
            delete[] input;
            delete[] output;
        }

    Here the function returns the memory allocated inside the function. Second variant:

        int n = 42;

        void some_function(int* input, int* output)
        {
            // some code
        }

        int main()
        {
            int* input = new int[n];
            int* output = new int[n];
            some_function(input, output);
            delete[] input;
            delete[] output;
        }

    Here the memory is allocated outside the function. Right now I use the first variant, but I know that many built-in C++ functions use the second variant. The first variant is more convenient (in my opinion), but the second one also has the advantage that you allocate and delete memory in the same block. Maybe it's a silly question, but which variant is better and why?
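    For comparison (not part of the question), a third option many C++ programmers prefer is to return a std::vector, which removes the need for manual delete[] altogether. A minimal sketch:

        #include <vector>

        // Ownership is handled by std::vector, so the caller never calls delete[].
        std::vector<int> some_function(const std::vector<int>& input)
        {
            std::vector<int> result(input.size());
            // some code
            return result;
        }

        int main()
        {
            std::vector<int> input(42);
            std::vector<int> output = some_function(input);
            // memory is released automatically when input and output go out of scope
        }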

    Read the article

  • JavaScript: anchor tag problem

    - by Raam
        function main() {
            var delImage = document.createElement("img");
            delImage.setAttribute("alt", "Edit");
            delImage.setAttribute("src", "drop.png");
            var position = newRow.rowIndex;
            var typeElem = document.createElement("a");
            typeElem.setAttribute("name", "delete");
            typeElem.setAttribute("href", "#");
            typeElem.appendChild(delImage);
            typeElem.setAttribute('onclick', "delete(position)");
            newRow.insertCell(lastCell).appendChild(typeElem);
        }

        function delete(pos) {
            alert(pos);
        }

    I am not able to call the delete function when the anchor tag is clicked... What do I need to change to achieve this?
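    For reference: the snippet as posted cannot run, because delete is a reserved word in JavaScript (so it cannot be used as a function name), and the string passed to setAttribute('onclick', ...) is evaluated later in a scope where the local variable position does not exist. A minimal sketch of one way to fix it, renaming the function and attaching the handler as a closure:

        function deleteRow(pos) {                      // renamed from "delete", which is reserved
            alert(pos);
        }

        function main() {
            var delImage = document.createElement("img");
            delImage.setAttribute("alt", "Edit");
            delImage.setAttribute("src", "drop.png");
            var position = newRow.rowIndex;            // newRow and lastCell as in the original code
            var typeElem = document.createElement("a");
            typeElem.setAttribute("name", "delete");
            typeElem.setAttribute("href", "#");
            typeElem.appendChild(delImage);
            typeElem.onclick = function () {           // the closure captures position
                deleteRow(position);
                return false;                          // keep the '#' href from navigating
            };
            newRow.insertCell(lastCell).appendChild(typeElem);
        }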

    Read the article

  • MEF C# Service - DLL Updating

    - by connerb
    Currently, I have a C# service that runs off of many .dll's and has modules/plugins that it imports at startup. I would like to create an update system that basically stops the service, deletes any files it is told to delete (old versions), downloads new versions from a server, and starts the service. I believe I have coded this right except for the delete part, because as long as I am not overwriting anything, the file will download. If I try to overwrite something, it won't work, which is why I am trying to delete it beforehand. However, when I call File.Delete() on the path I want, I get "Access to the path is denied." Here is my code:

        new Thread(new ThreadStart(() =>
        {
            ServiceController controller = new ServiceController("client");
            controller.Stop();
            controller.WaitForStatus(ServiceControllerStatus.Stopped);
            try
            {
                if (um.FilesUpdated != null)
                {
                    foreach (FilesUpdated file in um.FilesUpdated)
                    {
                        if (file.OldFile != null)
                        {
                            File.Delete(Path.Combine(Utility.AssemblyDirectory, file.OldFile));
                        }
                        if (file.NewFile != null)
                        {
                            wc.DownloadFile(cs.UpdateUrl + "/updates/client/" + file.NewFile,
                                Path.Combine(Utility.AssemblyDirectory, file.NewFile));
                        }
                    }
                }
                if (um.ModulesUpdated != null)
                {
                    foreach (ModulesUpdated module in um.ModulesUpdated)
                    {
                        if (module.OldModule != null)
                        {
                            File.Delete(Path.Combine(cs.ModulePath, module.OldModule));
                        }
                        if (module.NewModule != null)
                        {
                            wc.DownloadFile(cs.UpdateUrl + "/updates/client/modules/" + module.NewModule,
                                Path.Combine(cs.ModulePath, module.NewModule));
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                Logger.log(ex);
            }
            controller.Start();
        })).Start();

    I believe it is because the files are in use, but I can't seem to unload them. I thought stopping the service would work, but apparently not. I have also checked the files and they are not read-only (but the folder is; it is located in Program Files, and I couldn't make it not read-only either programmatically or manually). The service is also being run as an administrator (NT AUTHORITY\SYSTEM). I've read about unloading the AppDomain, but AppDomain.Unload(AppDomain.CurrentDomain); returned an exception as well. I'm not sure whether this is a problem with MEF or with my program not having the correct permissions... I would assume it's mainly because the file is in use.
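    A couple of things worth checking (a sketch, not a confirmed fix for this setup): if the DLL being deleted is loaded into the updater's own process, the delete will fail because assemblies cannot be unloaded from the default AppDomain; otherwise, clearing the ReadOnly attribute and retrying briefly often helps while the old service process is still shutting down. TryDelete below is a hypothetical helper:

        using System;
        using System.IO;
        using System.Threading;

        internal static class UpdateHelpers
        {
            // Clear the read-only flag and retry the delete a few times.
            public static void TryDelete(string path)
            {
                for (int attempt = 0; attempt < 5; attempt++)
                {
                    try
                    {
                        if (!File.Exists(path)) return;
                        File.SetAttributes(path, FileAttributes.Normal); // remove ReadOnly
                        File.Delete(path);
                        return;
                    }
                    catch (UnauthorizedAccessException) { Thread.Sleep(500); } // permissions/read-only
                    catch (IOException) { Thread.Sleep(500); }                 // file still in use
                }
            }
        }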

    Read the article

  • How to create the following Java application? [on hold]

    - by Tushar Bichwe
    Write a Java program which performs the following operations:

    A. Create a package named MyEmpPackage which consists of the following classes (a sketch of the Employee class follows this description):
        - A class named Employee which stores information like the employee number, first name, middle name, last name, address, designation and salary. The class should also contain appropriate get and set methods. (05)
        - A class named AddEmployeeFrame which displays a frame with appropriate controls to enter the details of an Employee and store those details in an Employee object. The frame should also have three buttons with the captions "Add Record", "Delete Record" and "Exit". (10)
        - A class named MyCustomListener which should work as a user-defined event listener to handle the events mentioned in the following points. (05)

    B. When the "Add Record" button is clicked, a dialog box should appear asking the user "Do you really want to add the record to the file?". If the user selects Yes, the record should be saved to the file. (10) When the "Exit" button is clicked, the frame should be closed. (10) [Note: Use the MyCustomListener class only to handle the appropriate events]

    C. The "Delete Record" button should open a new frame which takes the delete criteria as input using radio buttons. The radio buttons should allow deleting on the basis of first name, middle name or last name. (10) The new frame should also have a text box to input the delete criteria value. (10) The record should be deleted from the file and a message dialog should appear with the message "Record is successfully deleted". (10) [Note: Use the MyCustomListener class only to handle the appropriate events]

    D. Provide proper error messages and handle exceptions appropriately wherever required in all the classes. (10)
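    A minimal sketch of the Employee class described in part A (only a couple of accessors shown; everything beyond the field names taken from the assignment text is illustrative):

        package MyEmpPackage;

        public class Employee {
            private int empNumber;
            private String firstName, middleName, lastName;
            private String address, designation;
            private double salary;

            public int getEmpNumber() { return empNumber; }
            public void setEmpNumber(int empNumber) { this.empNumber = empNumber; }
            public String getFirstName() { return firstName; }
            public void setFirstName(String firstName) { this.firstName = firstName; }
            // ...the remaining getters and setters follow the same pattern
        }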

    Read the article

  • What's the safest way to remove data from MySQL? (PHP/MySQL)

    - by ggfan
    I want to allow users as well as me (the admin) to delete data in MySQL. I used to have remove.php that would read $_GET parameters identifying whatever needed to be deleted, such as remove.php?action=post&posting_id=2. But I learned that anyone can simply abuse it and delete all my data. So what's the safest way for users and me to delete information without it getting all crazy and hard? I am only a beginner :) I'm not sure if I can use POSTs because there are no forms and the data isn't changing. Are sessions good? Or would there be too many with postings, user information, comments, etc.? Example: James wants to delete one of his postings (posting_id=5). So he clicks the remove link and that takes him to remove.php?action=post&posting_id=5.
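    One common pattern (a sketch with hypothetical table and column names, using PDO) is to replace the GET link with a small POST form, check that the logged-in user owns the row, and run the delete through a prepared statement:

        <?php
        // remove.php -- $pdo (a PDO connection) and the login session are assumed to exist already.
        session_start();

        if ($_SERVER['REQUEST_METHOD'] === 'POST' && isset($_POST['posting_id'], $_SESSION['user_id'])) {
            $stmt = $pdo->prepare(
                'DELETE FROM postings WHERE posting_id = :id AND user_id = :user_id'
            );
            $stmt->execute(array(
                ':id'      => (int) $_POST['posting_id'],
                ':user_id' => $_SESSION['user_id'],   // only rows owned by the current user can match
            ));
        }

    The remove link then becomes a small form, so a crafted URL alone cannot delete anything:

        <form method="post" action="remove.php">
            <input type="hidden" name="posting_id" value="5" />
            <input type="submit" value="Remove" />
        </form>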

    Read the article

  • Entity Framework Version 1 – Brief Synopsis and Tips – Part 1

    - by Rohit Gupta
    To Do Eager loading use Projections (for e.g. from c in context.Contacts select c, c.Addresses)  or use Include Query Builder Methods (Include(“Addresses”)) If there is multi-level hierarchical Data then to eager load all the relationships use Include Query Builder methods like customers.Include("Order.OrderDetail") to include Order and OrderDetail collections or use customers.Include("Order.OrderDetail.Location") to include all Order, OrderDetail and location collections with a single include statement =========================================================================== If the query uses Joins then Include() Query Builder method will be ignored, use Nested Queries instead If the query does projections then Include() Query Builder method will be ignored Use Address.ContactReference.Load() OR Contact.Addresses.Load() if you need to Deferred Load Specific Entity – This will result in extra round trips to the database ObjectQuery<> cannot return anonymous types... it will return a ObjectQuery<DBDataRecord> Only Include method can be added to Linq Query Methods Any Linq Query method can be added to Query Builder methods. If you need to append a Query Builder Method (other than Include) after a LINQ method  then cast the IQueryable<Contact> to ObjectQuery<Contact> and then append the Query Builder method to it =========================================================================== Query Builder methods are Select, Where, Include Methods which use Entity SQL as parameters e.g. "it.StartDate, it.EndDate" When Query Builder methods do projection then they return ObjectQuery<DBDataRecord>, thus to iterate over this collection use contact.Item[“Name”].ToString() When Linq To Entities methods do projection, they return collection of anonymous types --- thus the collection is strongly typed and supports Intellisense EF Object Context can track changes only on Entities, not on Anonymous types. If you use a Defining Query for a EntitySet then the EntitySet becomes readonly since a Defining Query is the same as a View (which is treated as a ReadOnly by default). However if you want to use this EntitySet for insert/update/deletes then we need to map stored procs (as created in the DB) to the insert/update/delete functions of the Entity in the Designer You can use either Execute method or ToList() method to bind data to datasources/bindingsources If you use the Execute Method then remember that you can traverse through the ObjectResult<> collection (returned by Execute) only ONCE. In WPF use ObservableCollection to bind to data sources , for keeping track of changes and letting EF send updates to the DB automatically. Use Extension Methods to add logic to Entities. For e.g. create extension methods for the EntityObject class. Create a method in ObjectContext Partial class and pass the entity as a parameter, then call this method as desired from within each entity. ================================================================ DefiningQueries and Stored Procedures: For Custom Entities, one can use DefiningQuery or Stored Procedures. Thus the Custom Entity Collection will be populated using the DefiningQuery (of the EntitySet) or the Sproc. If you use Sproc to populate the EntityCollection then the query execution is immediate and this execution happens on the Server side and any filters applied will be applied in the Client App. 
If we use a DefiningQuery then these queries are composable, meaning the filters (if applied to the entityset) will all be sent together as a single query to the DB, returning only filtered results. If the sproc returns results that cannot be mapped to existing entity, then we first create the Entity/EntitySet in the CSDL using Designer, then create a dummy Entity/EntitySet using XML in the SSDL. When creating a EntitySet in the SSDL for this dummy entity, use a TSQL that does not return any results, but does return the relevant columns e.g. select ContactID, FirstName, LastName from dbo.Contact where 1=2 Also insure that the Entity created in the SSDL uses the SQL DataTypes and not .NET DataTypes. If you are unable to open the EDMX file in the designer then note the Errors ... they will give precise info on what is wrong. The Thrid option is to simply create a Native Query in the SSDL using <Function Name="PaymentsforContact" IsComposable="false">   <CommandText>SELECT ActivityId, Activity AS ActivityName, ImagePath, Category FROM dbo.Activities </CommandText></FuncTion> Then map this Function to a existing Entity. This is a quick way to get a custom Entity which is regular Entity with renamed columns or additional columns (which are computed columns). The disadvantage to using this is that It will return all the rows from the Defining query and any filter (if defined) will be applied only at the Client side (after getting all the rows from DB). If you you DefiningQuery instead then we can use that as a Composable Query. The Fourth option (for mapping a READ stored proc results to a non-existent Entity) is to create a View in the Database which returns all the fields that the sproc also returns, then update the Model so that the model contains this View as a Entity. Then map the Read Sproc to this View Entity. The other option would be to simply create the View and remove the sproc altogether. ================================================================ To Execute a SProc that does not return a entity, use a EntityCommand to execute that proc. You cannot call a sproc FunctionImport that does not return Entities From Code, the only way is to use SSDL function calls using EntityCommand.  This changes with EntityFramework Version 4 where you can return Scalar Types, Complex Types, Entities or NonQuery ================================================================ UDF when created as a Function in SSDL, we need to set the Name & IsComposable properties for the Function element. IsComposable is always false for Sprocs, for UDF's set this to true. You cannot call UDF "Function" from within code since you cannot import a UDF Function into the CSDL Model (with Version 1 of EF). only stored procedures can be imported and then mapped to a entity ================================================================ Entity Framework requires properties that are involved in association mappings to be mapped in all of the function mappings for the entity (Insert, Update and Delete). Because Payment has an association to Reservation... hence we need to pass both the paymentId and reservationId to the Delete sproc even though just the paymentId is the PK on the Payment Table. ================================================================ When mapping insert, update and delete procs to a Entity, insure that all the three or none are mapped. 
Further if you have a base class and derived class in the CSDL, then you must map (ins, upd, del) sprocs to all parent and child entities in the inheritance relationship. Note that this limitation that base and derived entity methods must all must be mapped does not apply when you are mapping Read Stored Procedures.... ================================================================ You can write stored procedures SQL directly into the SSDL by creating a Function element in the SSDL and then once created, you can map this Function to a CSDL Entity directly in the designer during Function Import ================================================================ You can do Entity Splitting such that One Entity maps to multiple tables in the DB. For e.g. the Customer Entity currently derives from Contact Entity...in addition it also references the ContactPersonalInfo Entity. One can copy all properties from the ContactPersonalInfo Entity into the Customer Entity and then Delete the CustomerPersonalInfo entity, finall one needs to map the copied properties to the ContactPersonalInfo Table in Table Mapping (by adding another table (ContactPersonalInfo) to the Table Mapping... this is called Entity Splitting. Thus now when you insert a Customer record, it will automatically create SQL to insert records into the Contact, Customers and ContactPersonalInfo tables even though you have a Single Entity called Customer in the CSDL =================================================================== There is Table by Type Inheritance where another EDM Entity can derive from another EDM entity and absorb the inherted entities properties, for example in the Break Away Geek Adventures EDM, the Customer entity derives (inherits) from the Contact Entity and absorbs all the properties of Contact entity. Thus when you create a Customer Entity in Code and then call context.SaveChanges the Object Context will first create the TSQL to insert into the Contact Table followed by a TSQL to insert into the Customer table =================================================================== Then there is the Table per Hierarchy Inheritance..... where different types are created based on a condition (similar applying a condition to filter a Entity to contain filtered records)... the diference being that the filter condition populates a new Entity Type (derived from the base Entity). In the BreakAway sample the example is Lodging Entity which is a Abstract Entity and Then Resort and NonResort Entities which derive from Lodging Entity and records are filtered based on the value of the Resort Boolean field =================================================================== Then there is Table per Concrete Type Hierarchy where we create a concrete Entity for each table in the database. In the BreakAway sample there is a entity for the Reservation table and another Entity for the OldReservation table even though both the table contain the same number of fields. 
The OldReservation Entity can then inherit from the Reservation Entity and configure the OldReservation Entity to remove all Scalar Properties from the Entity (since it inherits the properties from Reservation and filters based on ReservationDate field) =================================================================== Complex Types (Complex Properties) Entities in EF can also contain Complex Properties (in addition to Scalar Properties) and these Complex Properties reference a ComplexType (not a EntityType) DropdownList, ListBox, RadioButtonList, CheckboxList, Bulletedlist are examples of List server controls (not data bound controls) these controls cannot use Complex properties during databinding, they need Scalar Properties. So if a Entity contains Complex properties and you need to bind those to list server controls then use projections to return Scalar properties and bind them to the control (the disadvantage is that projected collections are not tracked by the Object Context and hence cannot persist changes to the projected collections bound to controls) ObjectDataSource and EntityDataSource do account for Complex properties and one can bind entities with Complex Properties to Data Source controls and they will be tracked for changes... with no additional plumbing needed to persist changes to these collections bound to controls So DataBound controls like GridView, FormView need to use EntityDataSource or ObjectDataSource as a datasource for entities that contain Complex properties so that changes to the datasource done using the GridView can be persisted to the DB (enabling the controls for updates)....if you cannot use the EntityDataSource you need to flatten the ComplexType Properties using projections With EF Version 4 ComplexTypes are supported by the Designer and can add/remove/compose Complex Types directly using the Designer =================================================================== Conditional Mapping ... is like Table per Hierarchy Inheritance where Entities inherit from a base class and then used conditions to populate the EntitySet (called conditional Mapping). Conditional Mapping has limitations since you can only use =, is null and IS NOT NULL Conditions to do conditional mapping. If you need more operators for filtering/mapping conditionally then use QueryView(or possibly Defining Query) to create a readonly entity. QueryView are readonly by default... the EntitySet created by the QueryView is enabled for change tracking by the ObjectContext, however the ObjectContext cannot create insert/update/delete TSQL statements for these Entities when SaveChanges is called since it is QueryView. One way to get around this limitation is to map stored procedures for the insert/update/delete operations in the Designer. =================================================================== Difference between QueryView and Defining Query : QueryView is defined in the (MSL) Mapping File/section of the EDM XML, whereas the DefiningQuery is defined in the store schema (SSDL). QueryView is written using Entity SQL and is this database agnostic and can be used against any database/Data Layer. DefiningQuery is written using Database Lanaguage i.e. TSQL or PSQL thus you have more control =================================================================== Performance: Lazy loading is deferred loading done automatically. lazy loading is supported with EF version4 and is on by default. 
If you need to turn it off, use context.ContextOptions.LazyLoadingEnabled = false. To improve performance, consider precompiling the ObjectQuery using the CompiledQuery.Compile method.
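    For example, the eager-loading options mentioned at the top of this post look roughly like this (entity names follow the post's own Contact/Customer examples; this is a sketch, not code from the article):

        // Include (query builder method):
        var contacts = context.Contacts.Include("Addresses").ToList();

        // Multi-level include across the hierarchy:
        var customers = context.Customers.Include("Order.OrderDetail.Location").ToList();

        // Projection alternative:
        var pairs = from c in context.Contacts
                    select new { Contact = c, c.Addresses };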

    Read the article

  • Simple Branching and Merging with SVN

    Its a good idea not to do too much work without checking something into source control.  By too much work I mean typically on the order of a couple of hours at most, and certainly its a good practice to check in anything you have before you leave the office for the day.  But what if your changes break the build (on the build server you do have a build server dont you?) or would cause problems for others on your team if they get the latest code?  The solution with Subversion is branching and merging (incidentally, if youre using Microsoft Visual Studio Team System, you can shelve your changes and share shelvesets with others, which accomplishes many of the same things as branching and merging, but is a bit simpler to do). Getting Started Im going to assume you have Subversion installed along with the nearly ubiquitous client, TortoiseSVN.  See my previous post on installing SVN server if you want to get it set up real quick (you can put it on your workstation/laptop just to learn how it works easily enough). Overview When you know you are going to be working on something that you wont be able to check in quickly, its a good idea to start a branch.  Its also perfectly fine to create the branch after-the-fact (have you ever started something thinking it would be an hour and 4 hours later realized you were nowhere near done?).  In any event, the first thing you need to do is create a branch.  A branch is simply a copy of the current trunk (a typical subversion setup has root directories called trunk, tags, and branches its a good idea to keep this and to put your branches in the branches folder).  Once you have a new branch, you need to switch your working copy so that it is bound to your branch.  As you work,  you may want to merge in changes that are happening in the trunk to your branch, and ultimately when you are done youll want to merge your branch back into the trunk.  When done, you can delete your branch (or not, but it may add clutter).  To sum up: Create a new branch Switch your local working copy to the new branch Develop in the branch (commit changes, etc.) Merge changes from trunk into your branch Merge changes from branch into trunk Delete the branch Create a new branch From the root of your repository, right-click and select TortoiseSVN > Branch/tag as shown at right (click to enlarge).  This will bring up the Copy (Branch / Tag) interface.  By default the From WC at URL: should be pointing at the trunk of your repository.  I recommend (after ensuring that you have the latest version) that you choose to make the copy from the HEAD revision in the repository (the first radio button).  In the To URL: textbox, you should change the URL from /trunk to /branches/NAME_OF_BRANCH.  You can name the branch anything you like, but its often useful to give it your name (if its just for your use) or some useful information (such as a datestamp or a bug/issue ID from that it relates to, or perhaps just the name of the feature you are adding. When youre done with that, enter in a log message for your new branch.  If you want to immediately switch your local working copy to the new branch/tag, check the box at the bottom of the dialog (Switch working copy to new branch/tag).  You can see an example at right. Assuming everything works, you should very quickly see a window telling you the Copy finished, like the one shown below: Switch Local Working Copy to New Branch If you followed the instructions above and checked the box when you created your branch, you dont need to do this step.  
However, if you have a branch that already exists and you would like to switch over to working on it, you can do so by using the Switch command.  Youll find it in the explorer context menu under TortoiseSVN > Switch: This brings up a dialog that shows you your current binding, and lets you enter in a new URL to switch to: In the screenshot above, you can see that Im currently bound to a branch, and so I could switch back to the trunk or to another branch.  If youre not sure what to enter here, you can click the [] next to the URL textbox to explore your repository and find the appropriate root URL to use.  Also, the dropdown will show you URLs that might be a good fit (such as the trunk of the current repository). Develop in the Branch Once you have created a branch and switched your working copy to use it,  you can make changes and Commit them as usual.  Your commits are now going into the branch, so they wont impact other users or the build server that are working off of the trunk (or their own branches).  In theory you can keep on doing this forever, but practically its a good idea to periodically merge the trunk into your branch, and/or keep your branches short-lived and merge them back into the trunk before they get too far out of sync. Merge Changes from Trunk into your Branch Once you have been working in a branch for a little while, change to the trunk will have occurred that youll want to merge into your branch.  Its much safer and easier to integrate changes in small increments than to wait for weeks or months and then try to merge in two very different codebases.  To perform the merge, simply go to the root of your branch working copy and right click, select TortoiseSVN->Merge.  Youll be presented with this dialog: In this case you want to leave the default setting, Merge a range of revisions.  Click Next.  Now choose the URL to merge from.  You should select the trunk of your current repository (which should be in the dropdownlist, or you can click the [] to browse your repository for the correct URL).  You can leave everything else blank since you want to merge everything: Click Next.  Again you can leave the default settings.  If you want to do something more granular than everything in the trunk, you can select a different Merge depth, to include merging just one item in the tree.  You can also perform a Test merge to see what changes will take place before you click Merge (which is often a good idea).  Heres what the dialog should look like before you click Merge: After clicking Merge (or Test merge) you should see a confirmation like this (it will say Test Only in the title if you click Test merge): Now you should build your solution, run all of your tests, and verify that your branch still works the way it should, given the updates that youve just integrated from the trunk.  Once everything works, Commit your changes, and then continue with your work on the branch.  Note that until you commit, nothing has actually changed in your branch on the server.  Other team members who may also be working in this branch wont be impacted, etc.  The Merge is purely a client-side operation until you perform a Commit. In a more real-world scenario, you may have conflicts.  When you do, youll be presented with a dialog like this one: Its up to you which option you want to go with.  The more frequently you Merge, the fewer of these youll have to deal with.  Also, be very sure that youre merging the right folders together.  
If you try and merge your trunk with some subfolder in your branchs structure, youll end up with all kinds of conflicts and problems.  Fortunately, theyre only on your working copy (unless you commit them!) but if you see something like that, be sure to doublecheck your URL and your local file location. Merge Your Branch Back Into Trunk When youre done working in your branch, its time to pull it back into the trunk.  The first thing you should do is follow the previous steps instructions for merging the latest from the trunk into your branch.  This lets you ensure that what you have in your branch works correctly with the current trunk.  Once youve done that and committed your changes to your branch, youre ready to proceed with this step. Once youre confident your branch is good to go, you should go to its root folder and select TortoiseSVN->Merge (as above) from the explorer right-click menu.  This time, select Reintegrate a branch as shown below: Click Next.  Youll want it to merge with the trunk, which should be the default: Click Next. Leave the default settings: Click Test merge to see a test, and then if all looks good, click Merge.  Note that if you havent checked in your working copy changes, youll see something like this: If on the other hand things are successful: After this step, its likely you are finished working in your branch.  Dont forget to use the ToroiseSVN->Switch command to change your working copy back to the trunk. Delete the Branch You dont have to delete the branch, but over time your branches area of your repository will get cluttered, and in any event if theyre not actively being worked on the branches are just taking up space and adding to later confusion.  Keeping your branches limited to things youre actively working on is simply a good habit to get into, just like making sure your codebase itself remains tidy and not filled with old commented out bits of code. To delete the branch after youre finished with it, the simplest thing to do is choose TortoiseSVN->Repo Browser.  From there, assuming you did this from your branch, it should already be highlighted.  In any event, navigate to your branch in the treeview on the left, and then right-click and select Delete.  Enter a log message if youd like: Click OK, and its gone.  Dont be too afraid of this, though.  You can still get to the files by viewing the log for branches, and selecting a previous revision (anything before the delete action): If for some reason you needed something that was previously in this branch, you could easily get back to any changeset you checked in, so you should have absolutely no fear when it comes to deleting branches youre done with.   Resources If youre using Eclipse, theres a nice write-up of the steps required by Zach Cox that I found helpful here. Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.
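    For reference (the post itself uses TortoiseSVN), the same create/switch/merge/reintegrate/delete cycle with the Subversion command-line client looks roughly like this; the repository URL is illustrative:

        # Create the branch as a cheap server-side copy of trunk
        svn copy http://server/svn/repo/trunk http://server/svn/repo/branches/my-feature -m "Create feature branch"

        # Point the working copy at the branch
        svn switch http://server/svn/repo/branches/my-feature

        # Periodically pull trunk changes into the branch (run from the branch working copy)
        svn merge http://server/svn/repo/trunk

        # When finished: merge the latest trunk into the branch, commit, then reintegrate from a trunk working copy
        svn switch http://server/svn/repo/trunk
        svn merge --reintegrate http://server/svn/repo/branches/my-feature
        svn commit -m "Reintegrate my-feature branch"

        # Delete the branch once it has been merged
        svn delete http://server/svn/repo/branches/my-feature -m "Remove merged branch"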

    Read the article

  • Metro: Creating an IndexedDbDataSource for WinJS

    - by Stephen.Walther
    The goal of this blog entry is to describe how you can create custom data sources which you can use with the controls in the WinJS library. In particular, I explain how you can create an IndexedDbDataSource which you can use to store and retrieve data from an IndexedDB database. If you want to skip ahead, and ignore all of the fascinating content in-between, I’ve included the complete code for the IndexedDbDataSource at the very bottom of this blog entry. What is IndexedDB? IndexedDB is a database in the browser. You can use the IndexedDB API with all modern browsers including Firefox, Chrome, and Internet Explorer 10. And, of course, you can use IndexedDB with Metro style apps written with JavaScript. If you need to persist data in a Metro style app written with JavaScript then IndexedDB is a good option. Each Metro app can only interact with its own IndexedDB databases. And, IndexedDB provides you with transactions, indices, and cursors – the elements of any modern database. An IndexedDB database might be different than the type of database that you normally use. An IndexedDB database is an object-oriented database and not a relational database. Instead of storing data in tables, you store data in object stores. You store JavaScript objects in an IndexedDB object store. You create new IndexedDB object stores by handling the upgradeneeded event when you attempt to open a connection to an IndexedDB database. For example, here’s how you would both open a connection to an existing database named TasksDB and create the TasksDB database when it does not already exist: var reqOpen = window.indexedDB.open(“TasksDB”, 2); reqOpen.onupgradeneeded = function (evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true }); }; reqOpen.onsuccess = function () { var db = reqOpen.result; // Do something with db }; When you call window.indexedDB.open(), and the database does not already exist, then the upgradeneeded event is raised. In the code above, the upgradeneeded handler creates a new object store named tasks. The new object store has an auto-increment column named id which acts as the primary key column. If the database already exists with the right version, and you call window.indexedDB.open(), then the success event is raised. At that point, you have an open connection to the existing database and you can start doing something with the database. You use asynchronous methods to interact with an IndexedDB database. For example, the following code illustrates how you would add a new object to the tasks object store: var transaction = db.transaction(“tasks”, “readwrite”); var reqAdd = transaction.objectStore(“tasks”).add({ name: “Feed the dog” }); reqAdd.onsuccess = function() { // Tasks added successfully }; The code above creates a new database transaction, adds a new task to the tasks object store, and handles the success event. If the new task gets added successfully then the success event is raised. Creating a WinJS IndexedDbDataSource The most powerful control in the WinJS library is the ListView control. This is the control that you use to display a collection of items. If you want to display data with a ListView control, you need to bind the control to a data source. The WinJS library includes two objects which you can use as a data source: the List object and the StorageDataSource object. 
The List object enables you to represent a JavaScript array as a data source and the StorageDataSource enables you to represent the file system as a data source. If you want to bind an IndexedDB database to a ListView then you have a choice. You can either dump the items from the IndexedDB database into a List object or you can create a custom data source. I explored the first approach in a previous blog entry. In this blog entry, I explain how you can create a custom IndexedDB data source. Implementing the IListDataSource Interface You create a custom data source by implementing the IListDataSource interface. This interface contains the contract for the methods which the ListView needs to interact with a data source. The easiest way to implement the IListDataSource interface is to derive a new object from the base VirtualizedDataSource object. The VirtualizedDataSource object requires a data adapter which implements the IListDataAdapter interface. Yes, because of the number of objects involved, this is a little confusing. Your code ends up looking something like this: var IndexedDbDataSource = WinJS.Class.derive( WinJS.UI.VirtualizedDataSource, function (dbName, dbVersion, objectStoreName, upgrade, error) { this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error); this._baseDataSourceConstructor(this._adapter); }, { nuke: function () { this._adapter.nuke(); }, remove: function (key) { this._adapter.removeInternal(key); } } ); The code above is used to create a new class named IndexedDbDataSource which derives from the base VirtualizedDataSource class. In the constructor for the new class, the base class _baseDataSourceConstructor() method is called. A data adapter is passed to the _baseDataSourceConstructor() method. The code above creates a new method exposed by the IndexedDbDataSource named nuke(). The nuke() method deletes all of the objects from an object store. The code above also overrides a method named remove(). Our derived remove() method accepts any type of key and removes the matching item from the object store. Almost all of the work of creating a custom data source goes into building the data adapter class. The data adapter class implements the IListDataAdapter interface which contains the following methods: · change() · getCount() · insertAfter() · insertAtEnd() · insertAtStart() · insertBefore() · itemsFromDescription() · itemsFromEnd() · itemsFromIndex() · itemsFromKey() · itemsFromStart() · itemSignature() · moveAfter() · moveBefore() · moveToEnd() · moveToStart() · remove() · setNotificationHandler() · compareByIdentity Fortunately, you are not required to implement all of these methods. You only need to implement the methods that you actually need. In the case of the IndexedDbDataSource, I implemented the getCount(), itemsFromIndex(), insertAtEnd(), and remove() methods. If you are creating a read-only data source then you really only need to implement the getCount() and itemsFromIndex() methods. Implementing the getCount() Method The getCount() method returns the total number of items from the data source. So, if you are storing 10,000 items in an object store then this method would return the value 10,000. 
Here’s how I implemented the getCount() method: getCount: function () { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore().then(function (store) { var reqCount = store.count(); reqCount.onerror = that._error; reqCount.onsuccess = function (evt) { complete(evt.target.result); }; }); }); } The first thing that you should notice is that the getCount() method returns a WinJS promise. This is a requirement. The getCount() method is asynchronous which is a good thing because all of the IndexedDB methods (at least the methods implemented in current browsers) are also asynchronous. The code above retrieves an object store and then uses the IndexedDB count() method to get a count of the items in the object store. The value is returned from the promise by calling complete(). Implementing the itemsFromIndex method When a ListView displays its items, it calls the itemsFromIndex() method. By default, it calls this method multiple times to get different ranges of items. Three parameters are passed to the itemsFromIndex() method: the requestIndex, countBefore, and countAfter parameters. The requestIndex indicates the index of the item from the database to show. The countBefore and countAfter parameters represent hints. These are integer values which represent the number of items before and after the requestIndex to retrieve. Again, these are only hints and you can return as many items before and after the request index as you please. Here’s how I implemented the itemsFromIndex method: itemsFromIndex: function (requestIndex, countBefore, countAfter) { var that = this; return new WinJS.Promise(function (complete, error) { that.getCount().then(function (count) { if (requestIndex >= count) { return WinJS.Promise.wrapError(new WinJS.ErrorFromName(WinJS.UI.FetchError.doesNotExist)); } var startIndex = Math.max(0, requestIndex - countBefore); var endIndex = Math.min(count, requestIndex + countAfter + 1); that._getObjectStore().then(function (store) { var index = 0; var items = []; var req = store.openCursor(); req.onerror = that._error; req.onsuccess = function (evt) { var cursor = evt.target.result; if (index < startIndex) { index = startIndex; cursor.advance(startIndex); return; } if (cursor && index < endIndex) { index++; items.push({ key: cursor.value[store.keyPath].toString(), data: cursor.value }); cursor.continue(); return; } results = { items: items, offset: requestIndex - startIndex, totalCount: count }; complete(results); }; }); }); }); } In the code above, a cursor is used to iterate through the objects in an object store. You fetch the next item in the cursor by calling either the cursor.continue() or cursor.advance() method. The continue() method moves forward by one object and the advance() method moves forward a specified number of objects. Each time you call continue() or advance(), the success event is raised again. If the cursor is null then you know that you have reached the end of the cursor and you can return the results. Some things to be careful about here. First, the return value from the itemsFromIndex() method must implement the IFetchResult interface. In particular, you must return an object which has an items, offset, and totalCount property. Second, each item in the items array must implement the IListItem interface. Each item should have a key and a data property. 
Implementing the insertAtEnd() Method When creating the IndexedDbDataSource, I wanted to go beyond creating a simple read-only data source and support inserting and deleting objects. If you want to support adding new items with your data source then you need to implement the insertAtEnd() method. Here’s how I implemented the insertAtEnd() method for the IndexedDbDataSource: insertAtEnd:function(unused, data) { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function(store) { var reqAdd = store.add(data); reqAdd.onerror = that._error; reqAdd.onsuccess = function (evt) { var reqGet = store.get(evt.target.result); reqGet.onerror = that._error; reqGet.onsuccess = function (evt) { var newItem = { key:evt.target.result[store.keyPath].toString(), data:evt.target.result } complete(newItem); }; }; }); }); } When implementing the insertAtEnd() method, you need to be careful to return an object which implements the IItem interface. In particular, you should return an object that has a key and a data property. The key must be a string and it uniquely represents the new item added to the data source. The value of the data property represents the new item itself. Implementing the remove() Method Finally, you use the remove() method to remove an item from the data source. You call the remove() method with the key of the item which you want to remove. Implementing the remove() method in the case of the IndexedDbDataSource was a little tricky. The problem is that an IndexedDB object store uses an integer key and the VirtualizedDataSource requires a string key. For that reason, I needed to override the remove() method in the derived IndexedDbDataSource class like this: var IndexedDbDataSource = WinJS.Class.derive( WinJS.UI.VirtualizedDataSource, function (dbName, dbVersion, objectStoreName, upgrade, error) { this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error); this._baseDataSourceConstructor(this._adapter); }, { nuke: function () { this._adapter.nuke(); }, remove: function (key) { this._adapter.removeInternal(key); } } ); When you call remove(), you end up calling a method of the IndexedDbDataAdapter named removeInternal() . Here’s what the removeInternal() method looks like: setNotificationHandler: function (notificationHandler) { this._notificationHandler = notificationHandler; }, removeInternal: function(key) { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function (store) { var reqDelete = store.delete (key); reqDelete.onerror = that._error; reqDelete.onsuccess = function (evt) { that._notificationHandler.removed(key.toString()); complete(); }; }); }); } The removeInternal() method calls the IndexedDB delete() method to delete an item from the object store. If the item is deleted successfully then the _notificationHandler.remove() method is called. Because we are not implementing the standard IListDataAdapter remove() method, we need to notify the data source (and the ListView control bound to the data source) that an item has been removed. The way that you notify the data source is by calling the _notificationHandler.remove() method. Notice that we get the _notificationHandler in the code above by implementing another method in the IListDataAdapter interface: the setNotificationHandler() method. 
You can raise the following types of notifications using the _notificationHandler: · beginNotifications() · changed() · endNotifications() · inserted() · invalidateAll() · moved() · removed() · reload() These methods are all part of the IListDataNotificationHandler interface in the WinJS library. Implementing the nuke() Method I wanted to implement a method which would remove all of the items from an object store. Therefore, I created a method named nuke() which calls the IndexedDB clear() method: nuke: function () { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function (store) { var reqClear = store.clear(); reqClear.onerror = that._error; reqClear.onsuccess = function (evt) { that._notificationHandler.reload(); complete(); }; }); }); } Notice that the nuke() method calls the _notificationHandler.reload() method to notify the ListView to reload all of the items from its data source. Because we are implementing a custom method here, we need to use the _notificationHandler to send an update. Using the IndexedDbDataSource To illustrate how you can use the IndexedDbDataSource, I created a simple task list app. You can add new tasks, delete existing tasks, and nuke all of the tasks. You delete an item by selecting an item (swipe or right-click) and clicking the Delete button. Here’s the HTML page which contains the ListView, the form for adding new tasks, and the buttons for deleting and nuking tasks: <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <title>DataSources</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.1.0.RC/css/ui-dark.css" rel="stylesheet" /> <script src="//Microsoft.WinJS.1.0.RC/js/base.js"></script> <script src="//Microsoft.WinJS.1.0.RC/js/ui.js"></script> <!-- DataSources references --> <link href="indexedDb.css" rel="stylesheet" /> <script type="text/javascript" src="indexedDbDataSource.js"></script> <script src="indexedDb.js"></script> </head> <body> <div id="tmplTask" data-win-control="WinJS.Binding.Template"> <div class="taskItem"> Id: <span data-win-bind="innerText:id"></span> <br /><br /> Name: <span data-win-bind="innerText:name"></span> </div> </div> <div id="lvTasks" data-win-control="WinJS.UI.ListView" data-win-options="{ itemTemplate: select('#tmplTask'), selectionMode: 'single' }"></div> <form id="frmAdd"> <fieldset> <legend>Add Task</legend> <label>New Task</label> <input id="inputTaskName" required /> <button>Add</button> </fieldset> </form> <button id="btnNuke">Nuke</button> <button id="btnDelete">Delete</button> </body> </html> And here is the JavaScript code for the TaskList app: /// <reference path="//Microsoft.WinJS.1.0.RC/js/base.js" /> /// <reference path="//Microsoft.WinJS.1.0.RC/js/ui.js" /> function init() { WinJS.UI.processAll().done(function () { var lvTasks = document.getElementById("lvTasks").winControl; // Bind the ListView to its data source var tasksDataSource = new DataSources.IndexedDbDataSource("TasksDB", 1, "tasks", upgrade); lvTasks.itemDataSource = tasksDataSource; // Wire-up Add, Delete, Nuke buttons document.getElementById("frmAdd").addEventListener("submit", function (evt) { evt.preventDefault(); tasksDataSource.beginEdits(); tasksDataSource.insertAtEnd(null, { name: document.getElementById("inputTaskName").value }).done(function (newItem) { tasksDataSource.endEdits(); document.getElementById("frmAdd").reset(); lvTasks.ensureVisible(newItem.index); }); }); document.getElementById("btnDelete").addEventListener("click", function () { if 
(lvTasks.selection.count() == 1) { lvTasks.selection.getItems().done(function (items) { tasksDataSource.remove(items[0].data.id); }); } }); document.getElementById("btnNuke").addEventListener("click", function () { tasksDataSource.nuke(); }); // This method is called to initialize the IndexedDb database function upgrade(evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true }); } }); } document.addEventListener("DOMContentLoaded", init); The IndexedDbDataSource is created and bound to the ListView control with the following two lines of code: var tasksDataSource = new DataSources.IndexedDbDataSource("TasksDB", 1, "tasks", upgrade); lvTasks.itemDataSource = tasksDataSource; The IndexedDbDataSource is created with four parameters: the name of the database to create, the version of the database to create, the name of the object store to create, and a function which contains code to initialize the new database. The upgrade function creates a new object store named tasks with an auto-increment property named id: function upgrade(evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true }); } The Complete Code for the IndexedDbDataSource Here’s the complete code for the IndexedDbDataSource: (function () { /************************************************ * The IndexedDBDataAdapter enables you to work * with a HTML5 IndexedDB database. *************************************************/ var IndexedDbDataAdapter = WinJS.Class.define( function (dbName, dbVersion, objectStoreName, upgrade, error) { this._dbName = dbName; // database name this._dbVersion = dbVersion; // database version this._objectStoreName = objectStoreName; // object store name this._upgrade = upgrade; // database upgrade script this._error = error || function (evt) { console.log(evt.message); }; }, { /******************************************* * IListDataAdapter Interface Methods ********************************************/ getCount: function () { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore().then(function (store) { var reqCount = store.count(); reqCount.onerror = that._error; reqCount.onsuccess = function (evt) { complete(evt.target.result); }; }); }); }, itemsFromIndex: function (requestIndex, countBefore, countAfter) { var that = this; return new WinJS.Promise(function (complete, error) { that.getCount().then(function (count) { if (requestIndex >= count) { return WinJS.Promise.wrapError(new WinJS.ErrorFromName(WinJS.UI.FetchError.doesNotExist)); } var startIndex = Math.max(0, requestIndex - countBefore); var endIndex = Math.min(count, requestIndex + countAfter + 1); that._getObjectStore().then(function (store) { var index = 0; var items = []; var req = store.openCursor(); req.onerror = that._error; req.onsuccess = function (evt) { var cursor = evt.target.result; if (index < startIndex) { index = startIndex; cursor.advance(startIndex); return; } if (cursor && index < endIndex) { index++; items.push({ key: cursor.value[store.keyPath].toString(), data: cursor.value }); cursor.continue(); return; } results = { items: items, offset: requestIndex - startIndex, totalCount: count }; complete(results); }; }); }); }); }, insertAtEnd:function(unused, data) { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function(store) { var reqAdd = store.add(data); reqAdd.onerror = that._error; reqAdd.onsuccess = function (evt) { var reqGet 
= store.get(evt.target.result); reqGet.onerror = that._error; reqGet.onsuccess = function (evt) { var newItem = { key:evt.target.result[store.keyPath].toString(), data:evt.target.result } complete(newItem); }; }; }); }); }, setNotificationHandler: function (notificationHandler) { this._notificationHandler = notificationHandler; }, /***************************************** * IndexedDbDataSource Method ******************************************/ removeInternal: function(key) { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function (store) { var reqDelete = store.delete (key); reqDelete.onerror = that._error; reqDelete.onsuccess = function (evt) { that._notificationHandler.removed(key.toString()); complete(); }; }); }); }, nuke: function () { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function (store) { var reqClear = store.clear(); reqClear.onerror = that._error; reqClear.onsuccess = function (evt) { that._notificationHandler.reload(); complete(); }; }); }); }, /******************************************* * Private Methods ********************************************/ _ensureDbOpen: function () { var that = this; // Try to get cached Db if (that._cachedDb) { return WinJS.Promise.wrap(that._cachedDb); } // Otherwise, open the database return new WinJS.Promise(function (complete, error, progress) { var reqOpen = window.indexedDB.open(that._dbName, that._dbVersion); reqOpen.onerror = function (evt) { error(); }; reqOpen.onupgradeneeded = function (evt) { that._upgrade(evt); that._notificationHandler.invalidateAll(); }; reqOpen.onsuccess = function () { that._cachedDb = reqOpen.result; complete(that._cachedDb); }; }); }, _getObjectStore: function (type) { type = type || "readonly"; var that = this; return new WinJS.Promise(function (complete, error) { that._ensureDbOpen().then(function (db) { var transaction = db.transaction(that._objectStoreName, type); complete(transaction.objectStore(that._objectStoreName)); }); }); }, _get: function (key) { return new WinJS.Promise(function (complete, error) { that._getObjectStore().done(function (store) { var reqGet = store.get(key); reqGet.onerror = that._error; reqGet.onsuccess = function (item) { complete(item); }; }); }); } } ); var IndexedDbDataSource = WinJS.Class.derive( WinJS.UI.VirtualizedDataSource, function (dbName, dbVersion, objectStoreName, upgrade, error) { this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error); this._baseDataSourceConstructor(this._adapter); }, { nuke: function () { this._adapter.nuke(); }, remove: function (key) { this._adapter.removeInternal(key); } } ); WinJS.Namespace.define("DataSources", { IndexedDbDataSource: IndexedDbDataSource }); })(); Summary In this blog post, I provided an overview of how you can create a new data source which you can use with the WinJS library. I described how you can create an IndexedDbDataSource which you can use to bind a ListView control to an IndexedDB database. While describing how you can create a custom data source, I explained how you can implement the IListDataAdapter interface. You also learned how to raise notifications — such as a removed or invalidateAll notification — by taking advantage of the methods of the IListDataNotificationHandler interface.

    Read the article

  • Automatic Standby Recreation for Data Guard

    - by pablo.boixeda(at)oracle.com
    Hi,Unfortunately sometimes a Standby Instance needs to be recreated. This can happen for many reasons such as lost archive logs, standby data files, failover, among others.This is why we wanted to have one script to recreate standby instances in an easy way.This script recreates the standby considering some prereqs:-Database Version should be at least 11gR1-Dummy instance started on the standby node (Seeking to improve this so it won't be needed)-Broker configuration hasn't been removed-In our case we have two TNSNAMES files, one for the Standby creation (using SID) and the other one for production using service names (including broker service name)-Some environment variables set up by the environment db script (like ORACLE_HOME, PATH...)-The directory tree should not have been modified in the stanby hostWe are currently using it on our 11gR2 Data Guard tests.Any improvements will be welcome! Normal 0 21 false false false ES X-NONE X-NONE MicrosoftInternetExplorer4 /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0cm 5.4pt 0cm 5.4pt; mso-para-margin-top:0cm; mso-para-margin-right:0cm; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0cm; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} #!/bin/ksh ###    NOMBRE / VERSION ###       recrea_dg.sh   v.1.00 ### ###    DESCRIPCION ###       reacreacion de la Standby ### ###    DEVUELVE ###       0 Creacion de STANDBY correcta ###       1 Fallo ### ###    NOTAS ###       Este shell script NO DEBE MODIFICARSE. ###       Todas las variables y constantes necesarias se toman del entorno. ### ###    MODIFICADO POR:    FECHA:        COMENTARIOS: ###    ---------------    ----------    ------------------------------------- ###      Oracle           15/02/2011    Creacion. ### ### ### Cargar entorno ### V_ADMIN_DIR=`dirname $0` . ${V_ADMIN_DIR}/entorno_bd.sh 1>>/dev/null if [ $? -ne 0 ] then   echo "Error Loading the environment."   exit 1 fi V_RET=0 V_DATE=`/bin/date` V_DATE_F=`/bin/date +%Y%m%d_%H%M%S` V_LOGFILE=${V_TRAZAS}/recrea_dg_${V_DATE_F}.log exec 4>&1 tee ${V_FICH_LOG} >&4 |& exec 1>&p 2>&1 ### ### Variables para Recrear el Data Guard ### V_DB_BR=`echo ${V_DB_NAME}|tr '[:lower:]' '[:upper:]'` if [ "${ORACLE_SID}" = "${V_DB_NAME}01" ] then         V_LOCAL_BR=${V_DB_BR}'01'         V_REMOTE_BR=${V_DB_BR}'02' else         V_LOCAL_BR=${V_DB_BR}'02'         V_REMOTE_BR=${V_DB_BR}'01' fi echo " Getting local instance ROLE ${ORACLE_SID} ..." sqlplus -s /nolog 1>>/dev/null 2>&1 <<-! whenever sqlerror exit 1 connect / as sysdba variable salida number declare   v_database_role v\$database.database_role%type; begin   select database_role into v_database_role from v\$database;   :salida := case v_database_role        when 'PRIMARY' then 2        when 'PHYSICAL STANDBY' then 3        else 4      end; end; / exit :salida ! case $? in 1) echo " ERROR: Cannot get instance ROLE ." | tee -a ${V_LOGFILE}   2>&1    V_RET=1 ;; 2) echo " Local Instance with PRIMARY role." 
| tee -a ${V_LOGFILE}   2>&1    V_DB_ROLE_LCL=PRIMARY ;; 3) echo " Local Instance with PHYSICAL STANDBY role." | tee -a ${V_LOGFILE}   2>&1    V_DB_ROLE_LCL=STANDBY ;; *) echo " ERROR: UNKNOWN ROLE." | tee -a ${V_LOGFILE}   2>&1    V_RET=1 ;; esac if [ "${V_DB_ROLE_LCL}" = "PRIMARY" ] then         echo "####################################################################" | tee -a ${V_LOGFILE}   2>&1         echo "${V_DATE} - Reacreating  STANDBY Instance." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_REMOTE_BR} will be removed" | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         V_PRIMARY=${V_LOCAL_BR}         V_STANDBY=${V_REMOTE_BR} fi if [ "${V_DB_ROLE_LCL}" = "STANDBY" ] then         echo "####################################################################" | tee -a ${V_LOGFILE}   2>&1         echo "${V_DATE} - Reacreating  STANDBY Instance." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_LOCAL_BR} will be removed" | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         V_PRIMARY=${V_REMOTE_BR}         V_STANDBY=${V_LOCAL_BR} fi # Cargamos las variables de los hosts # Cargamos las variables de los hosts PRY_HOST=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_PRIMARY} as sysdba select 'KEEP',host_name from v\\$instance; EOF` SBY_HOST=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba select 'KEEP',host_name from v\\$instance; EOF` echo "el HOST primary es: ${PRY_HOST}" | tee -a ${V_LOGFILE}   2>&1 echo "el HOST standby es: ${SBY_HOST}" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 ## ## Paramos la instancia STANDBY ## V_DATE=`/bin/date` echo "${V_DATE} - Shutting down Standby instance" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ## ## Paramos la instancia STANDBY ## SBY_STATUS=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba select 'KEEP',status from v\\$instance; EOF` if [ ${SBY_STATUS} = 'STARTED' ] || [ ${SBY_STATUS} = 'MOUNTED' ] || [ ${SBY_STATUS} = 'OPEN' ] then         echo "${V_DATE} - Standby instance shutdown in progress..." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1         sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!         whenever sqlerror exit 1         connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba         shutdown abort         ! 
fi V_DATE=`/bin/date` echo "" echo "${V_DATE} - Standby instance stopped" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ## ## Eliminamos los ficheros de la base de datos ## V_SBY_SID=`echo ${V_STANDBY}|tr '[:upper:]' '[:lower:]'` V_PRY_SID=`echo ${V_PRIMARY}|tr '[:upper:]' '[:lower:]'` ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/data/*.dbf ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch/*.arc ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/*.ctl ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.ctl ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.rdo ## ## Startup nomount stby instance ## V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Starting  DUMMY Standby Instance " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ssh ${SBY_HOST} touch /home/oracle/init_dg.ora ssh ${SBY_HOST} 'echo "DB_NAME='${V_DB_NAME}'">>/home/oracle/init_dg.ora' ssh ${SBY_HOST} touch /home/oracle/start_dummy.sh ssh ${SBY_HOST} 'echo "ORACLE_HOME=/opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2 ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export ORACLE_HOME">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "PATH=\$ORACLE_HOME/bin:\$PATH">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export PATH">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "ORACLE_SID='${V_SBY_SID}'">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export ORACLE_SID">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "sqlplus -s /nolog <<-!" >>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      whenever sqlerror exit 1 ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      connect / as sysdba ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      startup nomount pfile='\''/home/oracle/init_dg.ora'\''">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "! ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'chmod 744 /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'sh /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'rm /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'rm /home/oracle/init_dg.ora' ## ## TNSNAMES change, specific for RMAN duplicate ## V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Setting up TNSNAMES in PRIMARY host " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.inst  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora' V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Starting STANDBY creation with RMAN.. " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 rman<<-! 
>>${V_LOGFILE} connect target sys/${V_DB_PWD}@${V_PRIMARY} connect auxiliary sys/${V_DB_PWD}@${V_STANDBY} run { allocate channel prmy1 type disk; allocate channel prmy2 type disk; allocate channel prmy3 type disk; allocate channel prmy4 type disk; allocate auxiliary channel stby type disk; duplicate target database for standby from active database dorecover spfile parameter_value_convert '${V_PRY_SID}','${V_SBY_SID}' set control_files='/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/control01.ctl','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/control02.ctl' set db_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/' set log_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/' set 'db_unique_name'='${V_SBY_SID}' set log_archive_config='DG_CONFIG=(${V_PRIMARY},${V_STANDBY})' set fal_client='${V_STANDBY}' set fal_server='${V_PRIMARY}' set log_archive_dest_1='LOCATION=/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch DB_UNIQUE_NAME=${V_SBY_SID} MANDATORY VALID_FOR=(ALL_LOGFILES,ALL_ROLES)' set log_archive_dest_2='SERVICE="${V_PRIMARY}"','SYNC AFFIRM DB_UNIQUE_NAME=${V_PRY_SID} DELAY=0 MAX_FAILURE=0 REOPEN=300 REGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)' nofilenamecheck ; } ! V_DATE=`/bin/date` if [ $? -ne 0 ] then         echo ""         echo "${V_DATE} - Error creating STANDBY instance"         echo ""         echo "********************************************************************************" else         echo ""         echo "${V_DATE} - STANDBY instance created SUCCESSFULLY "         echo ""         echo "********************************************************************************" fi sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!         whenever sqlerror exit 1         connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba         alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=${SBY_HOST})(PORT=1544))' scope=both;         alter system set service_names='${V_DB_NAME}.eu.roca.net,${V_SBY_SID}.eu.roca.net,${V_SBY_SID}_DGMGRL.eu.roca.net' scope=both;         alter database recover managed standby database using current logfile disconnect from session;         alter system set dg_broker_start=true scope=both; ! ## ## TNSNAMES change, back to Production Mode ## V_DATE=`/bin/date` echo " " | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Restoring TNSNAMES in PRIMARY "  | tee -a ${V_LOGFILE}   2>&1 echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************"  | tee -a ${V_LOGFILE}   2>&1 ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.prod  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora' echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} -  Waiting for media recovery before check the DATA GUARD Broker"  | tee -a ${V_LOGFILE}   2>&1 echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************"  | tee -a ${V_LOGFILE}   2>&1 sleep 200 dgmgrl <<-! | grep SUCCESS 1>/dev/null 2>&1     connect ${V_DB_USR}/${V_DB_PWD}@${V_STANDBY}     show configuration verbose; ! if [ $? 
-ne 0 ] ; then         echo "       ERROR: El status del Broker no es SUCCESS" | tee -a ${V_LOGFILE}   2>&1 ;         V_RET=1 else          echo "      DATA GUARD OK " | tee -a ${V_LOGFILE}   2>&1 ;         V_RET=0 fi Hope it helps.
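    A small addition that may be useful after a run: the sketch below is only an illustration, reusing the V_DB_USR, V_DB_PWD, V_STANDBY and V_SBY_SID variables that the script above already sets; the broker command and the v$dataguard_stats query are standard 11g, but adjust them to your own environment before relying on them.

      # Sketch: post-run sanity check of the recreated standby (assumes the
      # environment script above has been sourced, so V_DB_USR, V_DB_PWD,
      # V_STANDBY and V_SBY_SID are set).

      # Broker view of the configuration and of the standby itself
      dgmgrl -silent ${V_DB_USR}/${V_DB_PWD}@${V_STANDBY} "show configuration verbose"
      dgmgrl -silent ${V_DB_USR}/${V_DB_PWD}@${V_STANDBY} "show database '${V_SBY_SID}'"

      # Transport/apply lag as seen by the standby (v$dataguard_stats, 11g and later)
      echo "select name, value from v\$dataguard_stats where name in ('transport lag','apply lag');" | \
        sqlplus -s "sys/${V_DB_PWD}@${V_STANDBY} as sysdba"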

    Read the article

  • Permissions restoring from Time Machine - Finder copy vs "cp" copy

    - by Ben Challenor
    Note: this question was starting to sprawl so I rewrote it. I have a folder that I'm trying to restore from a Time Machine backup. Using cp -R works fine, but certain folders cannot be restored with either the Time Machine UI or Finder. Other users have reported similar errors and the cp -R workaround was suggested (e.g. Restoring from Time Machine - Permissions Error). But I wanted to understand: Why cp -R works when the Finder and the Time Machine UI do not. Whether I could prevent the errors by changing file permissions before the backup. There do indeed seem to be some permissions that Finder works with and some that it does not. I've narrowed the errors down to folders with the user ben (that's me) and the group wheel. Here's a simplified reproduction. I have four folders with the owner/group combinations I've seen so far: ben ~/Desktop/test $ ls -lea total 16 drwxr-xr-x 7 ben staff 238 27 Nov 14:31 . drwx------+ 17 ben staff 578 27 Nov 14:29 .. 0: group:everyone deny delete -rw-r--r--@ 1 ben staff 6148 27 Nov 14:31 .DS_Store drwxr-xr-x 3 ben staff 102 27 Nov 14:30 ben-staff drwxr-xr-x 3 ben wheel 102 27 Nov 14:30 ben-wheel drwxr-xr-x 3 root admin 102 27 Nov 14:31 root-admin drwxr-xr-x 3 root wheel 102 27 Nov 14:31 root-wheel Each contains a single file called file with the same owner/group: ben ~/Desktop/test $ cd ben-staff ben ~/Desktop/test/ben-staff $ ls -lea total 0 drwxr-xr-x 3 ben staff 102 27 Nov 14:30 . drwxr-xr-x 7 ben staff 238 27 Nov 14:31 .. -rw-r--r-- 1 ben staff 0 27 Nov 14:30 file In the backup, they look like this: ben /Volumes/Deimos/Backups.backupdb/Ben’s MacBook Air/Latest/Macintosh HD/Users/ben/Desktop/test $ ls -leA total 16 -rw-r--r--@ 1 ben staff 6148 27 Nov 14:34 .DS_Store 0: group:everyone deny write,delete,append,writeattr,writeextattr,chown drwxr-xr-x@ 3 ben staff 102 27 Nov 14:51 ben-staff 0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown drwxr-xr-x@ 3 ben wheel 102 27 Nov 14:51 ben-wheel 0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown drwxr-xr-x@ 3 root admin 102 27 Nov 14:52 root-admin 0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown drwxr-xr-x@ 3 root wheel 102 27 Nov 14:52 root-wheel 0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown Of these, ben-staff can be restored with Finder without errors. root-wheel and root-admin ask for my password and then restore without errors. But ben-wheel does not prompt for my password and gives the error: The operation can’t be completed because you don’t have permission to access “file”. Interestingly, I can restore the file from this folder by dragging it directly to my local drive (instead of dragging its parent folder), but when I do so its permissions are changed to ben/staff. Here are the permissions after the restore for the three folders that worked correctly, and the file from ben-wheel that was changed to ben/staff. ben ~/Desktop/test-restore $ ls -leA total 16 -rw-r--r--@ 1 ben staff 6148 27 Nov 14:46 .DS_Store drwxr-xr-x 3 ben staff 102 27 Nov 14:30 ben-staff -rw-r--r-- 1 ben staff 0 27 Nov 14:30 file drwxr-xr-x 3 root admin 102 27 Nov 14:31 root-admin drwxr-xr-x 3 root wheel 102 27 Nov 14:31 root-wheel Can anyone explain this behaviour? Why do Finder and the Time Machine UI break with the ben / wheel permissions? And why does cp -R work (even without sudo)?
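    For anyone who only needs the workaround, here is a rough sketch of the manual restore being discussed. The paths are placeholders, cp's -p flag is added here to keep ownership and timestamps, and the final chmod -N (which strips ACL entries such as the "everyone deny" ones Time Machine writes) is optional and not something the question itself prescribes.

      # Sketch: manual restore from a Time Machine backup, then inspect/strip ACLs.
      # SRC and DST are placeholders: point them at your own backup and target.
      SRC="/Volumes/<backup volume>/Backups.backupdb/<machine>/Latest/Macintosh HD/Users/ben/Desktop/test"
      DST="$HOME/Desktop/test-restore"

      cp -Rp "$SRC" "$DST"        # -R recursive, -p preserve mode/owner/times

      ls -le@ "$DST"              # -e shows ACL entries, -@ extended attributes

      # Optional: drop all ACL entries on the restored copy (needs ownership of
      # the files, otherwise prepend sudo)
      chmod -R -N "$DST"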

    Read the article

  • Replace dual-XP installs with single-XP install and repartition drive?

    - by caeious
    Hello. The Current Situation: I have a hard drive that is currently split up like so: Primary Partition C: 9.77 GB NTFS Healthy (System) with XP Pro (in Polish) installed; Extended Partition D: 39.82 GB NTFS Healthy (Boot) with XP Pro (in English) installed; 6.30 GB free space. When I start my computer I get a black and white Windows Boot Manager dual-boot screen with 2 choices, both being Microsoft Windows XP. The first choice is the English version of XP and the second choice is the Polish version of XP. (Images of my Computer Management window and Dual Boot screen.)
    The Mission: What I need to do is get rid of the entire extended partition (D: 39.82 GB & 6.30 GB free space) and just have the one primary C: drive, which I assume will be somewhere around 55 GB. So in the end I just want XP Pro in English running on my C: drive and no black and white boot screen showing up when starting my laptop.
    The Question: How do I go about successfully completing The Mission without making my computer a useless pile of silicon, plastic and metal?
    UPDATE: So I went ahead and tried to follow Neal's suggestion but hit a wall. I got to a Windows XP Pro install screen that had the 3 following options as well as my drive data: To set up Windows XP on the selected item, press Enter; To create a partition in the unpartitioned space, press C; To delete the selected partition, press D. 57232 MB Disk 0 at Id 0 on bus 0 on atapi [MBR]: C: Partition1 [NTFS] 10001 MB ( 4642 MB free ); Unpartitioned space 6448 MB; D: Partition2 [NTFS] 40774 MB ( 26225 MB free ); Unpartitioned space 8 MB. I figured I would go with the first choice (To set up Windows XP on the selected item, press Enter) because I just wanted to set up Windows XP on C: Partition1 (which was preselected), so I pressed Enter, which brought me to a screen displaying this message: You chose to install Windows XP on a partition that contains another operating system. Installing Windows XP on this partition might cause the other operating system to function improperly. CAUTION: Installing multiple operating systems on a single partition is not recommended. So this leads me to 2 new questions: How do I get rid of the Windows XP (Polish language) install on C: Partition 1 so that I can cleanly and safely install Windows XP (English language) on it? Neal, is this what you meant by me possibly having to delete the partition that the Windows XP (Polish language) install was located on? Since I have the option to delete partitions with the 3rd choice (To delete the selected partition, press D), should I do that on this screen or wait until I have Windows XP (English language) safely installed on C: Partition 1? I have to ask these questions because I have read that it is possibly dangerous to delete hard drive partitions. Just being cautious.

    Read the article

  • Wireless doesn't work on a Lenovo V570

    - by Stephen
    I've had Ubuntu installed on my HD for about 3 months but ever since I ran into this wireless issue I kinda lost my lust of Ubuntu. I have zero experience getting around with/ using the console command. I have a Lenovo V570. I got the driver update for the broadcom networking card via the Additional Drivers application but that did nothing. I love the look and feel of using Ubuntu but I have no technological experience for the matter. Any help would be awesome. When I scan for wireless connections while in Ubuntu, my computer picks up nothing, while on Win7 it will pick up the handful of wireless networks around my area. My wired connection is fine, but the use of not having wireless on a laptop is rather contradictory to it as a feature. Cheers! Also, I just installed 11.10, if that helps any. Yes, I used the search before I posted this, but again I have ZERO understanding of the command stuff and need a meat and potatoes answer(s). stephen@ubuntu:~$ sudo lshw -class network [sudo] password for stephen: *-network UNCLAIMED description: Network controller product: BCM4313 802.11b/g/n Wireless LAN Controller vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:03:00.0 version: 01 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: latency=0 resources: memory:f1900000-f1903fff *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:04:00.0 logical name: eth0 version: 06 serial: f0:de:f1:63:98:14 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw ip=192.168.1.78 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s resources: irq:41 ioport:2000(size=256) memory:f1804000-f1804fff memory:f1800000-f1803fff stephen@ubuntu:~$ rfkill list all 0: ideapad_wlan: Wireless LAN Soft blocked: yes Hard blocked: no 1: acer-wireless: Wireless LAN Soft blocked: yes Hard blocked: no
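    As a starting point only: the lshw output above shows the BCM4313 as UNCLAIMED and rfkill reports a soft block, so a sequence like the sketch below is a common first attempt on Ubuntu. The package name bcmwl-kernel-source assumes the standard Ubuntu repositories (a wired connection is needed to fetch it), and none of this is guaranteed for every BCM4313 revision.

      # Sketch: first-aid for an unclaimed Broadcom BCM4313 on Ubuntu 11.10.

      # 1. Clear the soft blocks reported by rfkill
      sudo rfkill unblock all

      # 2. Install the proprietary Broadcom driver package (builds the 'wl'
      #    module and blacklists the conflicting in-kernel drivers)
      sudo apt-get update
      sudo apt-get install --reinstall bcmwl-kernel-source

      # 3. Load the driver and check whether the interface shows up and scans
      sudo modprobe wl
      iwconfig
      sudo iwlist scanning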

    Read the article

  • CodePlex Daily Summary for Friday, March 05, 2010

    CodePlex Daily Summary for Friday, March 05, 2010New Projects.svn Folders Cleanup Tool: dotSVN Cleanup is a tool that allows you to remove the .svn folders . Just click, browse, say abracadabra ...and the magic is done. Have fun with...Accord: The Accord framework creates an easy we to integrate any Dependency Injection framework into your project, while abstracting the details of your im...Asp.net MVC Lab: Try asp.net mvc outASP.NET Themes management with Webforms: The provided source is an example for how to use themes in ASP.NET Webforms. this source is the "up to date" support for the article I wroteB&W Port Scanner: B&W Port Scanner (formerly Net Inspector) is a fast TCP Port scan utility. The main idea is support of customizable operations to be performed f...BizTalk SWAT - Simple Web Activity Tracker: This is a web based version of BizTalk HAT. The concept is designed to be able to share and enable sharing of orchestration info easily. Some of th...C# Linear Hash Table: A C# dictionary-like implementation of a linear hash table. It is more memory efficiant than the .NET dictionary, and also almost as fast. NOTE: On...DBF Import Export Wizard: DBF Import Export Wizard is a tool for anyone needing to import DBF files into SQL Server or to export SQL Server tables to a DBF file. This proje...Domain as XML - Driven Development: Visual Studio Code Samples: Domain as XML - Driven Development: Visual Studio Code SamplesEasyDownload: This application allows to manage downloads handling an stack of files and several useful configurationsEos2: .FlightTickets: This application allows to buy flight ticketsFotofly PhotoViewer: A Silverlight control that uses the Fotofly metadata library to show the people in a photo (using Windows Live Photo Gallery People metadata) and a...Fujiy source code: Source code examplesGameSet: This application allows to play games with distributed users.Injectivity (Dependency Injection): Injectivity is a dependency injection framework (written in C#) with a strong focus on the ease of configuration and performance. Having been writt...Inventory: Keep track of inputs, materias and salesLoanTin.Com Source Code: LoanTin.Com - a Social Networking Website as same as Tumblr.com, based on source code of Loantiner Project, allow anyone can share anything to anyo...mysln: my solutions.NumTextBox: TextBox控件重写 之NumTextBox,主要实现的功能是,只允许输入数字,或String,Numeric,Currency,Decimal,Float,Double,Short,Int,Long 修改自:http://www.codeproject.com/KB/edit/num...Quick Performance Monitor: This small utility helps to monitor performance counters without using the full blown perfmon tool from Windows. It supports a number of command li...Runo: Runo ResearchSales: This application allows to manage a hardware storeScrewWiki Form Auth Provider: Enables your ASP.NET site to use Forms Authentication to integrate with your ScrewWiki. User management is performed on a parent site, and cookie i...SDS: Scientific DataSet library and tools: The SDS library makes it easy for .Net developers to read, write and share scalars, vectors, matrices and multidimensional grids which are very com...ShapeSweeper: Minesweeper-like game for the Zune HD. 
Each hidden object has three properties to discover--location, color, and shape--and all three must be corre...SilverlightExcel: an Excel file viewer in Silverlight 4: SilverlightExcel is a Silverlight application allowing you to open and view Excel files and also create graphs.sPWadmin: pwAdmin is an Web Interface based on JSP that uses the PW-Java API to control an PW-Server.Video Player control in Silverlight: A control for playing video in Silverlight 4 with chapters on timeline control. This player will be easily skinnable and customizable. More Featur...XNA Light Pre Pass Renderer: A demo/sample that shows how to write a light pre pass renderer in XNA.Zimms: Collaboration Site for friends, a code depot, and scratch padNew Releases.svn Folders Cleanup Tool: dotSVN Cleanup Tool: dotSVN Cleanup Tool executableAccord: Alpha: Initial build of the Accord framework.AcPrac: AcPrac Ver 0.1: The first version of AcPrac. It is not fully functional, but rather a version to get the bugs out. Please report all bugs.ASP.NET: ASP.NET Browser Definition Files: This download contains: ASP.NET 4 Browser Definition Files -- You can use the new ASP.NET 4 browser definition files with earlier versions of ASP....B&W Port Scanner: Black`n`White Port Scanner 1.0: B&W Port Scanner 1.0 Final Release Date: 03.03.2010 Black`n`WhiteBizTalk SWAT - Simple Web Activity Tracker: BizTalk SWAT: This is a web based version of BizTalk HAT. The concept is designed to be able to share and enable sharing of orchestration info easily. It uses th...BTP Tools: CSB+CUV+HCSB dict files 2010-03-04: 5. is now missing a space between the Strong’s number and the Count: >CSB Translation: 圣所 7, 至圣所G39+G394 it should be: CSB Translation: 圣所 7, 至圣所G...C# Linear Hash Table: Linear Hash Table: First working version of the Linear Hash Table.Cassiopeia: WinTools 1.0 beta: First ReleaseComposure: Caliburn-44007-trunk-vs2010.net40: This is a very simple conversion of the Caliburn trunk (rev 44007) for use in Visual Studio 2010 RC1 built against .NET40. Because the conversion w...Cover Creator: CoverCreator 1.3.0: English and Polish version. Functionality to add image to the front page. Load / save covers.DBF Import Export Wizard: DBF Import Export Wizard Source Code: Version 0.1.0.3DBF Import Export Wizard: DBF_Import_Export_Wizard Setup 0.1.0.3: Zip file contains Setup.exeESB Toolkit Extensions: Tellago BizTalk ESB 2.0 Toolkit Extensions v0.2: Windows Installer file that installs Library on a BizTalk ESB 2.0 system. This Install automatically configures the esb.config to use the new compo...Fotofly PhotoViewer: Fotofly Photoview v0.1: The first public release. Based on a Silverlight application I have been using for over a year at www.tassography.com. This version uses Fotofly v0...HPC with GPUs applied to CG: Cuda Soft Bodies simulation: Cuda src for soft bodiesHPC with GPUs applied to CG: Full Soft Bodies src: full src code for soft bodies simulationInjectivity (Dependency Injection): 2.8.166.2135: Release 2.8.166.2135 of the Injectivity dependency injection framework.Line Counter: 1.5 (Code Outline Preview): This version contains preview of the code outline feature, you can now view C# code outline within Line Counter. 
Note that the code outline now onl...Micajah Mindtouch Deki Wiki Copier: MicajahWikiCopier: You should use the following line arguments: WikiCopier.exe "http://oldwikiwithdata.wik.is/@api/deki" "login" "password" "http://newwiki.somename.l...ncontrols: Alpha 0.4.0.1: Added some example on the Console Project.NumTextBox: NumTextBox初始版本: TextBox控件重写 之NumTextBox,主要实现的功能是,只允许输入数字,或String,Numeric,Currency,Decimal,Float,Double,Short,Int,Long 此为初始版本PSCodeplex: PS CodePlex 0.2: PS CodePlex 0.2 has some breaking changes to the parameters. A few of the parameters are renamed and a few are made as switch parameters. Add-Rele...Quick Performance Monitor: QPerfMon First release - Version 1.0.0: The first release of the utility.RapidWebDev - .NET Enterprise Software Development Infrastructure: ProductManagement Quick Sample 0.2: This is a sample product management application to demonstrate how to develop enterprise software in RapidWebDev. The glossary of the system are ro...ScrewWiki Form Auth Provider: ScrewWiki Forms Authentication: Initial ReleaseSee.Sharper: See.Sharper.Docs-1.10.3.4: HTML documentation (including Doxygen project)See.Sharper: See.Sharper-1.10.3.4: Solution (Source files, debug and release binaries)Solar.Generic: Solar.Generic 0.8.0.0 Beta (Revised, Renamed): Solar.Generic 0.8.0.0 (Revised & Renamed) Renamed project from Solar.Commons to Solar.Generic. Project solution file is now in format of Visual ...Solar.Security: Solar.Security 1.1.0.0: Performed several major refactorings of code base. Stripped In-Memory implementation of IConfiguration interface of transactional behavior due to...sPWadmin: pwAdmin v0.7: -Star System Simulator: Star System Simulator 2.3: Changes in this release: Fixed several localisation issues. Features in this release: Model star systems in 3D. Euler-Cromer method. Improved...SysI: sysi, stable and ready: This time for sure.TheWhiteAmbit: TheWhiteAmbit - Demo: Two little demos demonstrating: - fast realtime raytracing - generating bent normals for shading (CUDA capable GPU needed = nVidia GeForce >8x00)VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 22 Beta: Build 22 (beta) New: Visual Studio 2010 RC support (VsTortoise for Visual Studio 2010 RC screenshots) New: VsTortoise integrates in to Solution E...WinMergeFS: WinMergeFS 0.1.42128alpha: WinMergeFS provides AuFS functionality for windows. With WinMergeFS users can mount multiple directories into a virtual drive. Plugin based root se...WSDLGenerator: WSDLGenerator 0.0.0.2: - Bugs fixed - Code refactored - Added support for custom typesXNA Light Pre Pass Renderer: LightPrePassRendererXNA: Zipped source code for the light pre pass renderer made with XNA.Most Popular ProjectsMetaSharpRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)LiveUpload to FacebookASP.NETMicrosoft SQL Server Community & SamplesMost Active ProjectsUmbraco CMSRawrBlogEngine.NETSDS: Scientific DataSet library and toolsMapWindow GISpatterns & practices – Enterprise LibraryjQuery Library for SharePoint Web ServicesIonics Isapi Rewrite FilterMDT Web FrontEndDiffPlex - a .NET Diff Generator

    Read the article

  • “Full House” at the ArcSig Ft. Lauderdale March meeting at the Microsoft Office!

    - by Rainer
    “Full House” at the ArcSig Ft. Lauderdale March meeting at the Microsoft Office! After all participants had the opportunity to get free pizza and soft drinks, Quent Herschelman gave an excellent presentation about the Architecture Tools in Visual Studio 2010. He started with a brief introduction to the new MSDN subscriptions, the different versions of Visual Studio, and the full circle of application and solution integration for the new Team Foundation Server. I like the new structure of the MSDN subscriptions and the new options the Team Foundation Server offers! Quent continued his presentation with excellent information and demos of the Architecture Tools, focusing on Architecture Explorer and UML. INTERTECH donated a training certificate worth $2,495 as the first prize and a Barnes & Noble gift card in the amount of $50 as the second prize for our monthly raffle! Congratulations to the two winners! Microsoft donated books, t-shirts and great info material for MSDN and Visual Studio 2010, as well as the pizza and soft drinks! Thank you very much to Quent Herschelman for the excellent presentation, to Intertech for the prizes, and to Microsoft for the hosting and food! (Photos: Pre-Session Pizza; Quent is ready to "go"!; Quent at work!; Thank you all for coming!) Rainer Habermann, ArcSig Fort Lauderdale Site Director

    Read the article

  • How to get SQL Railroad Diagrams from MSDN BNF syntax notation.

    - by Phil Factor
    pre {margin-bottom:.0001pt; font-size:8.0pt; font-family:"Courier New"; margin-left: 0cm; margin-right: 0cm; margin-top: 0cm; } On SQL Server Books-On-Line, in the Transact-SQL Reference (database Engine), every SQL Statement has its syntax represented in  ‘Backus–Naur Form’ notation (BNF)  syntax. For a programmer in a hurry, this should be ideal because It is the only quick way to understand and appreciate all the permutations of the syntax. It is a great feature once you get your eye in. It isn’t the only way to get the information;  You can, of course, reverse-engineer an understanding of the syntax from the examples, but your understanding won’t be complete, and you’ll have wasted time doing it. BNF is a good start in representing the syntax:  Oracle and SQLite go one step further, and have proper railroad diagrams for their syntax, which is a far more accessible way of doing it. There are three problems with the BNF on MSDN. Firstly, it is isn’t a standard version of  BNF, but an ancient fork from EBNF, inherited from Sybase. Secondly, it is excruciatingly difficult to understand, and thirdly it has a number of syntactic and semantic errors. The page describing DML triggers, for example, currently has the absurd BNF error that makes it state that all statements in the body of the trigger must be separated by commas.  There are a few other detail problems too. Here is the offending syntax for a DML trigger, pasted from MSDN. Trigger on an INSERT, UPDATE, or DELETE statement to a table or view (DML Trigger) CREATE TRIGGER [ schema_name . ]trigger_name ON { table | view } [ WITH <dml_trigger_option> [ ,...n ] ] { FOR | AFTER | INSTEAD OF } { [ INSERT ] [ , ] [ UPDATE ] [ , ] [ DELETE ] } [ NOT FOR REPLICATION ] AS { sql_statement [ ; ] [ ,...n ] | EXTERNAL NAME <method specifier [ ; ] > }   <dml_trigger_option> ::=     [ ENCRYPTION ]     [ EXECUTE AS Clause ]   <method_specifier> ::=  This should, of course, be /* Trigger on an INSERT, UPDATE, or DELETE statement to a table or view (DML Trigger) */ CREATE TRIGGER [ schema_name . ]trigger_name ON { table | view } [ WITH <dml_trigger_option> [ ,...n ] ] { FOR | AFTER | INSTEAD OF } { [ INSERT ] [ , ] [ UPDATE ] [ , ] [ DELETE ] } [ NOT FOR REPLICATION ] AS { {sql_statement [ ; ]} [ ...n ] | EXTERNAL NAME <method_specifier> [ ; ] }   <dml_trigger_option> ::=     [ ENCRYPTION ]     [ EXECUTE AS CLAUSE ]   <method_specifier> ::=     assembly_name.class_name.method_name I’d love to tell Microsoft when I spot errors like this so they can correct them but I can’t. Obviously, there is a mechanism on MSDN to get errors corrected by using comments, but that doesn’t work for me (*Error occurred while saving your data.”), and when I report that the comment system doesn’t work to MSDN, I get no reply. I’ve been trying to create railroad diagrams for all the important SQL Server SQL statements, as good as you’d find for Oracle, and have so far published the CREATE TABLE and ALTER TABLE railroad diagrams based on the BNF. Although I’ve been aware of them, I’ve never realised until recently how many errors there are. Then, Colin Daley created a translator for the SQL Server dialect of  BNF which outputs standard EBNF notation used by the W3C. The example MSDN BNF for the trigger would be rendered as … /* Trigger on an INSERT, UPDATE, or DELETE statement to a table or view (DML Trigger) */ create_trigger ::= 'CREATE TRIGGER' ( schema_name '.' ) ? trigger_name 'ON' ( table | view ) ( 'WITH' dml_trigger_option ( ',' dml_trigger_option ) * ) ? 
( 'FOR' | 'AFTER' | 'INSTEAD OF' ) ( ( 'INSERT' ) ? ( ',' ) ? ( 'UPDATE' ) ? ( ',' ) ? ( 'DELETE' ) ? ) ( 'NOT FOR REPLICATION' ) ? 'AS' ( ( sql_statement ( ';' ) ? ) + | 'EXTERNAL NAME' method_specifier ( ';' ) ? )   dml_trigger_option ::= ( 'ENCRYPTION' ) ? ( 'EXECUTE AS CLAUSE' ) ?   method_specifier ::= assembly_name '.' class_name '.' method_name Colin’s intention was to allow anyone to paste SQL Server’s BNF notation into his website-based parser, and from this generate classic railroad diagrams via Gunther Rademacher's Railroad Diagram Generator.  Colin's application does this for you: you're not aware that you are moving to a different site.  Because Colin's 'translator' it is a parser, it will pick up syntax errors. Once you’ve fixed the syntax errors, you will get the syntax in the form of a human-readable railroad diagram and, in this form, the semantic mistakes become flamingly obvious. Gunter’s Railroad Diagram Generator is brilliant. To be able, after correcting the MSDN dialect of BNF, to generate a standard EBNF, and from thence to create railroad diagrams for SQL Server’s syntax that are as good as Oracle’s, is a great boon, and many thanks to Colin for the idea. Here is the result of the W3C EBNF from Colin’s application then being run through the Railroad diagram generator. create_trigger: dml_trigger_option: method_specifier:   Now that’s much better, you’ll agree. This is pretty easy to understand, and at this point any error is immediately obvious. This should be seriously useful, and it is to me. However  there is that snag. The BNF is generally incorrect, and you can’t expect the average visitor to mess about with it. The answer is, of course, to correct the BNF on MSDN and maybe even add railroad diagrams for the syntax. Stop giggling! I agree it won’t happen. In the meantime, we need to collaboratively store and publish these corrected syntaxes ourselves as we do them. How? GitHub?  SQL Server Central?  Simple-Talk? What should those of us who use the system  do with our corrected EBNF so that anyone can use them without hassle?

    Read the article

  • Why does mpstat show different values when I use the interval setting?

    - by Abe
    Here's the output I get when I run mpstat: $mpstat Linux 3.2.0-30-generic (my-laptop-C650) 09/17/2012 _x86_64_ (2 CPU) 05:32:01 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %idle 05:32:01 PM all 9.16 0.08 2.69 2.00 0.00 0.04 0.00 0.00 86.02 And here's what I get when I run it with a one-second interval: $mpstat 1 05:31:51 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %idle 05:31:52 PM all 1.52 0.00 1.01 0.00 0.00 0.00 0.00 0.00 97.47 05:31:53 PM all 2.04 0.00 1.02 0.00 0.00 0.00 0.00 0.00 96.94 05:31:54 PM all 1.50 0.00 1.50 0.00 0.00 0.00 0.00 0.00 97.00 Why does the first process show the processor as 86% idle, and the second show it as ~97% idle? I've tried this in a bunch of different configurations, and it's not a real difference in CPU usage -- unless mpstat itself is making the difference. Which number should I trust?
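    In case a concrete illustration helps: the plain invocation averages the cumulative counters in /proc/stat over the whole uptime, while the interval form diffs two snapshots, so a quiet second can show ~97% idle even if the machine was busier earlier in its uptime. The sketch below (simplified: iowait, irq, softirq and steal are left out of the totals) shows the two calculations side by side; it is an approximation of what mpstat does, not its exact arithmetic.

      #!/bin/bash
      # Sketch: %idle since boot vs. %idle over the last second, from /proc/stat.

      read -r cpu u1 n1 s1 i1 rest < /proc/stat   # cumulative jiffies since boot
      sleep 1
      read -r cpu u2 n2 s2 i2 rest < /proc/stat

      t1=$(( u1 + n1 + s1 + i1 ))
      t2=$(( u2 + n2 + s2 + i2 ))

      echo "idle since boot : $(( 100 * i2 / t2 ))%"
      echo "idle last second: $(( 100 * (i2 - i1) / (t2 - t1) ))%"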

    Read the article

  • Looking Back at MIX10

    - by WeigeltRo
    It’s the sad truth of my life that even though I’m fascinated by airplanes and flight in general since my childhood days, my body doesn’t like flying. Even the ridiculously short flights inside Germany are taking their toll on me each time. Now combine this with sitting in the cramped space of economy class for many hours on a transatlantic flight from Germany to Las Vegas and back, and factor in some heavy dose of jet lag (especially on my way eastwards), and you get an idea why after coming back home I had this question on my mind: Was it really worth it to attend MIX10? This of course is a question that will also be asked by my boss at Comma Soft (for other reasons, obviously), who decided to send me and my colleague Jens Schaller, to the MIX10 conference. (A note to my German readers: An dieser Stelle der Hinweis, dass Comma Soft noch Silverlight-Entwickler und/oder UI-Designer für den Standort Bonn sucht – aussagekräftige Bewerbungen bitte an [email protected]) Too keep things short: My answer is yes. Before I’ll go into detail, let me ask the heretical questions whether tech conferences in general still make sense. There was a time, where actually being at a tech conference gave you a head-start in regard to learning about new technologies. Nowadays this is no longer true, where every bit of information and every detail is immediately twittered, blogged and whatevered to death. In the case of MIX10 you even can download the video-taped sessions shortly after. So: Does visiting a conference still make sense? It depends on what you expect from a conference. It should be clear to everybody that you’ll neither get exclusive information, nor receive training in a small group. What a conference does offer that sitting in front of your computer does not can be summarized as follows: Focus Being away from work and home will help you to focus on the presented information. Of course there are always the poor guys who are haunted by their work (with mails and short text messages reporting the latest showstopper problem), but in general being out of your office makes a huge difference. Inspiration With the focus comes the emotional involvement. I find it much easier to absorb information if I feel that certain vibe when sitting in a session. This still means that I have put work into reviewing the information later, but it’s a better starting point. And all the impressions collected at a (good) conference combined lead to a higher motivation – be it by the buzz (“this is gonna be sooo cool!”) or by the fear to fall behind (“man, we’ll have work on this, or else…”). People At a conference it’s pretty easy to get into contact with other people during breakfast, lunch and other breaks. This is a good opportunity to get a feel for what other development teams are doing (on a very general level of course, nobody will tell you about their secret formula) and what they are thinking about specific technologies. So MIX10 did offer focus, inspiration and people, but that would have meant nothing without valuable content. When I (being a frontend developer with a strong interest in UI/UX) planned my visit to MIX10, I made the decision to focus on the "soft" topics of design, interaction and user experience. I figured that I would be bombarded with all the technical details about Silverlight 4 anyway in the weeks and months to come. 
Actually, I would have liked to catch a few technical sessions, but the agenda wasn’t exactly in favor of people interested in any kind of Silverlight and UI/UX/Design topics. That’s one of my few complaints about the conference – I would have liked one more day and/or more sessions per day. Overall, the quality of the workshops and sessions was pretty high. In fact, looking back at my collection of conferences I’ve visited in the past I’d say that MIX10 ranks somewhere near the top spot. Here’s an overview of the workshops/sessions I attended (I’ll leave out the keynotes): Day 0 (Workshops on Sunday) Design Fundamentals for Developers Robby Ingebretsen is the man! Great workshop in three parts with the perfect mix of examples, well-structured definition of terminology and the right dose of humor. Robby was part of the WPF team before founding his own company so he not only has a strong interest in design (and the skillz!) but also the technical background.   Design Tools and Techniques Originally announced to be held by Arturo Toledo, the Rosso brothers from ArcheType filled in for the first two parts, and Corrina Black had a pretty general part about the Windows Phone UI. The first two thirds were a mixed bag; the two guys definitely knew what they were talking about, and the demos were great, but the talk lacked the preparation and polish of a truly great presentation. Corrina was not allowed to go into too much detail before the keynote on Monday, but the session was still very interesting as it showed how much thought went into the Windows Phone UI (and there’s always a lot to learn when people talk about their thought process). Day 1 (Monday) Designing Rich Experiences for Data-Centric Applications I wonder whether there was ever a test-run for this session, but what Ken Azuma and Yoshihiro Saito delivered in the first 15 minutes of a 30-minutes-session made me walk out. A commercial for a product (just great: a video showing a SharePoint plug-in in an all-Japanese UI) combined with the most generic blah blah one could imagine. EPIC FAIL.   Great User Experiences: Seamlessly Blending Technology & Design I switched to this session from the one above but I guess I missed the interesting part – what I did catch was what looked like a “look at the cool stuff we did” without being helpful. Or maybe I was just in a bad mood after the other session.   The Art, Technology and Science of Reading This talk by Kevin Larson was very interesting, but was more a presentation of what Microsoft is doing in research (pretty impressive) and in the end lacked a bit the helpful advice one could have hoped for.   10 Ways to Attack a Design Problem and Come Out Winning Robby Ingebretsen again, and again a great mix of theory and practice. The clean and simple, yet effective, UI of the reader app resulted in a simultaneous “wow” of Jens and me. If you’d watch only one session video, this should be it. Microsoft has to bring Robby back next year! Day 2 (Tuesday) Touch in Public: Multi-touch Interaction Design for Kiosks & Architectural Experiences Very interesting session by Jason Brush, a great inspiration with many details to look out for in the examples. Exactly what I was hoping for – and then some!   Designing Bing: Heart and Science How hard can it be to design the UI for a search engine? An input field and a list of results, that should be it, right? Well, not so fast! The talk by Paul Ray showed the many iterations to finally get it right (up to the choice of a specific blue for the links). 
And yes, I want an eye-tracking device to play around with!   The Elephant in the Room When Nishant Kothary presented a long list of what his session was not about, I told to myself (not having the description text present) “Am I in the wrong talk? Should I leave?”. Boy, was I wrong. A great talk about human factors in the process of designing stuff.   An Hour with Bill Buxton Having seen Bill Buxton’s presentation in the keynote, I just had to see this man again – even though I didn’t know what to expect. Being more or less unplanned and intended to be more of a conversation, the session didn’t provide a wealth of immediately useful information. Nevertheless Bill Buxton was impressive with his huge knowledge of seemingly everything. But this could/should have been a session some when in the evening and not in parallel to at least two other interesting talks. Day 3 (Wednesday) Design the Ordinary, Like the Fixie This session by DL Byron and Kevin Tamura started really well and brought across the message to keep things simple. But towards the end the talk lost some of its steam. And, as a member of the audience pointed out, they kind of ignored their own advice when they used a fancy presentation software other then PowerPoint that sometimes got in the way of showing things.   Developing Natural User Interfaces Speaking of alternative presentation software, Joshua Blake definitely had the most remarkable alternative to PowerPoint, a self-written program called NaturalShow that was controlled using multi-touch on a touch screen. Not a PowerPoint-killer, but impressive nevertheless. The (excellent) talk itself was kind of eye-opening in regard to what “multi-touch support” on various platforms (WPF, Silverlight, Windows Phone) actually means.   Treat your Content Right The talk by Tiffani Jones Brown wasn’t even on my planned schedule, but somehow I ended up in that session – and it was great. And even for people who don’t necessarily have to write content for websites, some points made by Tiffani are valid in many places, notably wherever you put texts with more than a single word into your UI. Creating Effective Info Viz in Microsoft Silverlight The last session of MIX10 I attended was kind of disappointing. At first things were very promising, with Matthias Shapiro giving a brief but well-structured introduction to info graphics and interactive visualizations. Then the live-coding began and while the result was interesting, too much time was spend on wrestling to get the code working. Ending earlier than planned, the talk was a bit light on actual content, but at least it included a nice list of resources. Conclusion It could be felt all across MIX10, UIs will take a huge leap forward; in fact, there are enough examples that have already. People who both have the technical know-how and at least a basic understanding of design (“literacy” as Bill Buxton called it) are in high demand. The concept of the MIX conference and initiatives like design.toolbox shows that Microsoft understands very well that frontend developers have to acquire new knowledge besides knowing how to hack code and putting buttons on a form. There are extremely exciting times before us, with lots of opportunity for those who are eager to develop their skills, that is for sure.

    Read the article

  • wifi not recognized

    - by pumper
    I had wifi and worked then some day ubuntu asked me to update some packeages and restarted the system and after that no wifi. this is my wireless_script output : ########## wireless info START ########## ##### release ##### Distributor ID: Ubuntu Description: Ubuntu 14.04 LTS Release: 14.04 Codename: trusty ##### kernel ##### Linux S510p 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux ##### lspci ##### 02:00.0 Network controller [0280]: Qualcomm Atheros QCA9565 / AR9565 Wireless Network Adapter [168c:0036] (rev 01) Subsystem: Lenovo Device [17aa:3026] Kernel driver in use: ath9k 03:00.0 Ethernet controller [0200]: Qualcomm Atheros AR8162 Fast Ethernet [1969:1090] (rev 10) Subsystem: Lenovo Device [17aa:3807] Kernel driver in use: alx ##### lsusb ##### Bus 001 Device 006: ID 0eef:a111 D-WAV Scientific Co., Ltd Bus 001 Device 007: ID 0cf3:3004 Atheros Communications, Inc. Bus 001 Device 004: ID 174f:1488 Syntek Bus 001 Device 003: ID 03f0:5607 Hewlett-Packard Bus 001 Device 002: ID 8087:8000 Intel Corp. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 002 Device 002: ID 15d9:0a4c Trust International B.V. USB+PS/2 Optical Mouse Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub ##### PCMCIA Card Info ##### ##### rfkill ##### 0: ideapad_wlan: Wireless LAN Soft blocked: no Hard blocked: no 1: ideapad_bluetooth: Bluetooth Soft blocked: no Hard blocked: no 2: phy0: Wireless LAN Soft blocked: no Hard blocked: no 3: hci0: Bluetooth Soft blocked: no Hard blocked: no ##### iw reg get ##### country 00: (2402 - 2472 @ 40), (3, 20) (2457 - 2482 @ 40), (3, 20), PASSIVE-SCAN, NO-IBSS (2474 - 2494 @ 20), (3, 20), NO-OFDM, PASSIVE-SCAN, NO-IBSS (5170 - 5250 @ 40), (3, 20), PASSIVE-SCAN, NO-IBSS (5735 - 5835 @ 40), (3, 20), PASSIVE-SCAN, NO-IBSS ##### interfaces ##### # interfaces(5) file used by ifup(8) and ifdown(8) auto lo iface lo inet loopback auto dsl-provider iface dsl-provider inet ppp pre-up /sbin/ifconfig wlan0 up # line maintained by pppoeconf provider dsl-provider auto wlan0 iface wlan0 inet manual ##### iwconfig ##### wlan0 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=16 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off ##### route ##### Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface ##### resolv.conf ##### ##### nm-tool ##### NetworkManager Tool State: connected (global) - Device: eth0 ----------------------------------------------------------------- Type: Wired Driver: alx State: unavailable Default: no HW Address: <MAC address removed> Capabilities: Carrier Detect: yes Wired Properties Carrier: off - Device: wlan0 ---------------------------------------------------------------- Type: 802.11 WiFi Driver: ath9k State: unmanaged Default: no HW Address: <MAC address removed> Capabilities: Wireless Properties WEP Encryption: yes WPA Encryption: yes WPA2 Encryption: yes Wireless Access Points ##### NetworkManager.state ##### [main] NetworkingEnabled=true WirelessEnabled=true WWANEnabled=true WimaxEnabled=true ##### NetworkManager.conf ##### [main] plugins=ifupdown,keyfile,ofono dns=dnsmasq no-auto-default=<MAC address removed>, [ifupdown] managed=false ##### iwlist ##### wlan0 Scan completed : Cell 01 - Address: <MAC address removed> Channel:1 Frequency:2.412 GHz (Channel 1) Quality=55/70 Signal level=-55 dBm Encryption key:on ESSID:"mohsen" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 
Mb/s; 11 Mb/s; 6 Mb/s
                    9 Mb/s; 12 Mb/s; 18 Mb/s
          Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
          Mode:Master
          Extra:tsf=000000076c342498
          Extra: Last beacon: 12ms ago
          IE: Unknown: 00066D6F6873656E
          IE: Unknown: 010882848B960C121824
          IE: Unknown: 030101
          IE: Unknown: 2A0104
          IE: Unknown: 32043048606C

##### iwlist channel #####

wlan0     13 channels in total; available frequencies :
          Channel 01 : 2.412 GHz
          Channel 02 : 2.417 GHz
          Channel 03 : 2.422 GHz
          Channel 04 : 2.427 GHz
          Channel 05 : 2.432 GHz
          Channel 06 : 2.437 GHz
          Channel 07 : 2.442 GHz
          Channel 08 : 2.447 GHz
          Channel 09 : 2.452 GHz
          Channel 10 : 2.457 GHz
          Channel 11 : 2.462 GHz
          Channel 12 : 2.467 GHz
          Channel 13 : 2.472 GHz

##### lsmod #####

ath3k          13318  0
bluetooth     395423  23 bnep,ath3k,btusb,rfcomm
ath9k         164164  0
ath9k_common   13551  1 ath9k
ath9k_hw      453856  2 ath9k_common,ath9k
ath            28698  3 ath9k_common,ath9k,ath9k_hw
mac80211      626489  1 ath9k
cfg80211      484040  3 ath,ath9k,mac80211

##### modinfo #####

filename:       /lib/modules/3.13.0-24-generic/kernel/drivers/bluetooth/ath3k.ko
firmware:       ath3k-1.fw
license:        GPL
version:        1.0
description:    Atheros AR30xx firmware driver
author:         Atheros Communications
srcversion:     98A5245588C09E5E41690D0
alias:          usb:v0489pE036d*dc*dsc*dp*ic*isc*ip*in* [... remaining usb device alias lines omitted ...]
depends:        bluetooth
intree:         Y
vermagic:       3.13.0-24-generic SMP mod_unload modversions
signer:         Magrathea: Glacier signing key
sig_key:        <MAC address removed>:D9:06:21:70:6E:8D:06:60:4D:73:0B:35:9F:C0
sig_hashalgo:   sha512

filename:       /lib/modules/3.13.0-24-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k.ko
license:        Dual BSD/GPL
description:    Support for Atheros 802.11n wireless LAN cards.
author:         Atheros Communications
srcversion:     BAF225EEB618908380B28DA
alias:          platform:qca955x_wmac
alias:          platform:ar934x_wmac
alias:          platform:ar933x_wmac
alias:          platform:ath9k
alias:          pci:v0000168Cd00000036sv*sd*bc*sc*i* [... remaining pci device alias lines omitted ...]
depends:        ath9k_hw,mac80211,ath9k_common,cfg80211,ath
intree:         Y
vermagic:       3.13.0-24-generic SMP mod_unload modversions
signer:         Magrathea: Glacier signing key
sig_key:        <MAC address removed>:D9:06:21:70:6E:8D:06:60:4D:73:0B:35:9F:C0
sig_hashalgo:   sha512
parm:           debug:Debugging mask (uint)
parm:           nohwcrypt:Disable hardware encryption (int)
parm:           blink:Enable LED blink on activity (int)
parm:           btcoex_enable:Enable wifi-BT coexistence (int)
parm:           bt_ant_diversity:Enable WLAN/BT RX antenna diversity (int)
parm:           ps_enable:Enable WLAN PowerSave (int)

filename:       /lib/modules/3.13.0-24-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_common.ko
license:        Dual BSD/GPL
description:    Shared library for Atheros wireless 802.11n LAN cards.
author:         Atheros Communications
srcversion:     696B00A6C59713EC0966997
depends:        ath,ath9k_hw
intree:         Y
vermagic:       3.13.0-24-generic SMP mod_unload modversions
signer:         Magrathea: Glacier signing key
sig_key:        <MAC address removed>:D9:06:21:70:6E:8D:06:60:4D:73:0B:35:9F:C0
sig_hashalgo:   sha512

filename:       /lib/modules/3.13.0-24-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_hw.ko
license:        Dual BSD/GPL
description:    Support for Atheros 802.11n wireless LAN cards.
author:         Atheros Communications
srcversion:     4809F3842A0542CD6B556D3
depends:        ath
intree:         Y
vermagic:       3.13.0-24-generic SMP mod_unload modversions
signer:         Magrathea: Glacier signing key
sig_key:        <MAC address removed>:D9:06:21:70:6E:8D:06:60:4D:73:0B:35:9F:C0
sig_hashalgo:   sha512

filename:       /lib/modules/3.13.0-24-generic/kernel/drivers/net/wireless/ath/ath.ko
license:        Dual BSD/GPL
description:    Shared library for Atheros wireless LAN cards.
author:         Atheros Communications
srcversion:     88A67C5359B02C5A710AFCF
depends:        cfg80211
intree:         Y
vermagic:       3.13.0-24-generic SMP mod_unload modversions
signer:         Magrathea: Glacier signing key
sig_key:        <MAC address removed>:D9:06:21:70:6E:8D:06:60:4D:73:0B:35:9F:C0
sig_hashalgo:   sha512

##### modules #####

lp
rtc

##### blacklist #####

[/etc/modprobe.d/blacklist-ath_pci.conf]
blacklist ath_pci

[/etc/modprobe.d/blacklist.conf]
blacklist evbug
blacklist usbmouse
blacklist usbkbd
blacklist eepro100
blacklist de4x5
blacklist eth1394
blacklist snd_intel8x0m
blacklist snd_aw2
blacklist i2c_i801
blacklist prism54
blacklist bcm43xx
blacklist garmin_gps
blacklist asus_acpi
blacklist snd_pcsp
blacklist pcspkr
blacklist amd76x_edac

[/etc/modprobe.d/fbdev-blacklist.conf]
blacklist arkfb
blacklist aty128fb
blacklist atyfb
blacklist radeonfb
blacklist cirrusfb
blacklist cyber2000fb
blacklist gx1fb
blacklist gxfb
blacklist kyrofb
blacklist matroxfb_base
blacklist mb862xxfb
blacklist neofb
blacklist nvidiafb
blacklist pm2fb
blacklist pm3fb
blacklist s3fb
blacklist savagefb
blacklist sisfb
blacklist tdfxfb
blacklist tridentfb
blacklist viafb
blacklist vt8623fb

##### udev rules #####

# PCI device 0x1969:0x1090 (alx)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="<MAC address removed>", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

# PCI device 0x168c:0x0036 (ath9k)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="<MAC address removed>", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="wlan*", NAME="wlan0"

##### dmesg #####

[    1.707662] psmouse serio1: elantech: assuming hardware version 3 (with firmware version 0x450f03)
[   11.918852] ath: phy0: WB335 1-ANT card detected
[   11.918856] ath: phy0: Set BT/WLAN RX diversity capability
[   11.926438] ath: phy0: Enable LNA combining
[   11.928469] ath: phy0: ASPM enabled: 0x42
[   11.928473] ath: EEPROM regdomain: 0x65
[   11.928475] ath: EEPROM indicates we should expect a direct regpair map
[   11.928478] ath: Country alpha2 being used: 00
[   11.928479] ath: Regpair used: 0x65
[   14.066021] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready

########## wireless info END ############

    Read the article

  • links for 2010-04-29

    - by Bob Rhubart
    AS11 Oracle B2B Sync Support - Series 1 (Oracle Fusion Middleware - B2B Team Blog)
    Sinkarbabu Kirubanithi with part 1 of a planned 3-part series on synchronous message support in Oracle B2B 11g. (tags: oracle otn fusionmiddleware b2b)

    Java 2 Go!: How to write a simple yet "bullet-proof" object cache
    "So, while we were thinking hard to come up with the most efficient, generic and elegant way of finally implementing our weak and soft caches, Mr. Eric Chan, who is one of the main architects in Oracle Beehive team, had a very interesting breakthrough. In short terms, he thought of a very nice way of combining both WeakReference and SoftReference in our weak and soft caches so that they would provide exactly the same functionality without having to deal with those reference queues at all. Basically, instead of using a plain HashMap as our backing storage, we used a java.util.WeakHashMap in both our cache implementations. The hat trick was what and how to store things in it." - Eduardo Rodrigues (tags: oracle java sun) (a minimal sketch of this idea appears below)

    @jamet123: First Look – Oracle Data Mining
    "[Oracle Data Mining] is a nice product for Oracle database customers and well worth looking into. The new UI will only make it more so." James Taylor (tags: oracle otn datamining database)

    Live Webcast: Social BPM: Integrating Enterprise 2.0 with Business Applications #oracle
    Peggy Chen and Dan Tortorici show you how to take your business to the next level with a unified solution that fosters process-based collaboration between employees, partners, and customers. Wednesday, May 12, 2010 at 11:00am PT / 2:00pm ET (tags: oracle otn enterprise2.0 webcast)
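    The Java 2 Go! item above describes backing weak and soft caches with java.util.WeakHashMap instead of a plain HashMap, so the reference queues never have to be handled directly. The following is a minimal sketch of that idea, not the Beehive implementation; the class and key names are invented for illustration, and values are additionally wrapped in SoftReference so the garbage collector can reclaim them under memory pressure.

        import java.lang.ref.SoftReference;
        import java.util.Collections;
        import java.util.Map;
        import java.util.WeakHashMap;

        // Minimal sketch of a cache backed by WeakHashMap: entries disappear once the
        // key is no longer strongly referenced elsewhere, and wrapping values in
        // SoftReference also lets the GC reclaim them when memory gets tight.
        public class SoftValueCache<K, V> {

            private final Map<K, SoftReference<V>> backing =
                    Collections.synchronizedMap(new WeakHashMap<K, SoftReference<V>>());

            public void put(K key, V value) {
                backing.put(key, new SoftReference<V>(value));
            }

            public V get(K key) {
                SoftReference<V> ref = backing.get(key);
                return (ref == null) ? null : ref.get();   // null if never cached or already collected
            }

            public static void main(String[] args) {
                SoftValueCache<String, byte[]> cache = new SoftValueCache<>();
                cache.put("thumbnail:42", new byte[1024]);
                System.out.println(cache.get("thumbnail:42") != null);   // true while the entry is still reachable
            }
        }

    Whether to key the cache on weak references, wrap values in soft references, or combine both depends on whether entries should follow key reachability, memory pressure, or both; the quoted post is about combining the two.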

    Read the article

  • Use Autoruns to Manually Clean an Infected PC

    - by Mark Virtue
    There are many anti-malware programs out there that will clean your system of nasties, but what happens if you're not able to use such a program?  Autoruns, from SysInternals (recently acquired by Microsoft), is indispensable when removing malware manually.

    There are a few reasons why you may need to remove viruses and spyware manually:

    - Perhaps you can't abide running resource-hungry and invasive anti-malware programs on your PC
    - You might need to clean your mom's computer (or that of someone else who doesn't understand that a big flashing sign on a website that says "Your computer is infected with a virus – click HERE to remove it" is not a message that can necessarily be trusted)
    - The malware is so aggressive that it resists all attempts to remove it automatically, or won't even allow you to install anti-malware software
    - Part of your geek credo is the belief that anti-spyware utilities are for wimps

    Autoruns is an invaluable addition to any geek's software toolkit.  It allows you to track and control all programs (and program components) that start automatically with Windows (or with Internet Explorer).  Virtually all malware is designed to start automatically, so there's a very strong chance that it can be detected and removed with the help of Autoruns. We have covered how to use Autoruns in an earlier article, which you should read if you need to familiarize yourself with the program first.

    Autoruns is a standalone utility that does not need to be installed on your computer.  It can simply be downloaded, unzipped and run (link below).  This makes it ideally suited for adding to your portable utility collection on your flash drive. When you start Autoruns for the first time on a computer, you are presented with the license agreement. After agreeing to the terms, the main Autoruns window opens, showing you the complete list of all software that will run when your computer starts, when you log in, or when you open Internet Explorer.

    To temporarily disable a program from launching, uncheck the box next to its entry.  Note: this does not terminate the program if it is running at the time – it merely prevents it from starting next time.  To permanently prevent a program from launching, delete the entry altogether (use the Delete key, or right-click and choose Delete from the context menu).  Note: this does not remove the program from your computer – to remove it completely you need to uninstall the program (or otherwise delete it from your hard disk).

    Suspicious Software

    It can take a fair bit of experience (read "trial and error") to become adept at identifying what is malware and what is not.  Most of the entries presented in Autoruns are legitimate programs, even if their names are unfamiliar to you.  Here are some tips to help you differentiate the malware from the legitimate software:

    - If an entry is digitally signed by a software publisher (i.e. there's an entry in the Publisher column) or has a "Description", then there's a good chance that it's legitimate
    - If you recognize the software's name, then it's usually okay.  Note that occasionally malware will "impersonate" legitimate software by adopting a name that's identical or similar to software you're familiar with (e.g. "AcrobatLauncher" or "PhotoshopBrowser").  Also, be aware that many malware programs adopt generic or innocuous-sounding names, such as "Diskfix" or "SearchHelper" (both mentioned below)
    - Malware entries usually appear on the Logon tab of Autoruns (but not always!)
    - If you open the folder that contains the EXE or DLL file (more on this below) and examine the "last modified" date, the dates are often from the last few days (assuming that your infection is fairly recent); a small sketch of this check appears after this excerpt
    - Malware is often located in the C:\Windows folder or the C:\Windows\System32 folder
    - Malware often has only a generic icon (to the left of the name of the entry)
    - If in doubt, right-click the entry and select Search Online…

    The list below shows two suspicious-looking entries: Diskfix and SearchHelper. These entries, highlighted above, are fairly typical of malware infections:

    - They have neither descriptions nor publishers
    - They have generic names
    - The files are located in C:\Windows\System32
    - They have generic icons
    - The filenames are random strings of characters
    - If you look in the C:\Windows\System32 folder and locate the files, you'll see that they are some of the most recently modified files in the folder (see below)

    Double-clicking on the items will take you to their corresponding registry keys.

    Removing the Malware

    Once you've identified the entries you believe to be suspicious, you now need to decide what you want to do with them.  Your choices include:

    - Temporarily disable the Autorun entry
    - Permanently delete the Autorun entry
    - Locate the running process (using Task Manager or similar) and terminate it
    - Delete the EXE or DLL file from your disk (or at least move it to a folder where it won't be automatically started)

    or all of the above, depending upon how certain you are that the program is malware. To see if your changes succeeded, you will need to reboot your machine and check any or all of the following:

    - Autoruns – to see if the entry has returned
    - Task Manager (or similar) – to see if the program was started again after the reboot
    - The behavior that led you to believe that your PC was infected in the first place.  If it's no longer happening, chances are that your PC is now clean

    Conclusion

    This solution isn't for everyone and is most likely geared to advanced users. Usually using a quality antivirus application does the trick, but if not, Autoruns is a valuable tool in your anti-malware kit. Keep in mind that some malware is harder to remove than others.  Sometimes you need several iterations of the steps above, with each iteration requiring you to look more carefully at each Autorun entry.  Sometimes, the instant that you remove the Autorun entry, the malware that is running replaces the entry.  When this happens, we need to become more aggressive in our assassination of the malware, including terminating programs (even legitimate programs like Explorer.exe) that are infected with malware DLLs. Shortly we will be publishing an article on how to identify, locate and terminate processes that represent legitimate programs but are running infected DLLs, in order that those DLLs can be deleted from the system.
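    One of the tips above is to open a folder such as C:\Windows\System32 and look at which files were modified most recently. The following is a small, hypothetical helper, not part of Autoruns and purely illustrative, that prints a folder's most recently modified files so you can do the same check from the command line; the class name and the default folder are assumptions, not anything the article names.

        import java.io.File;
        import java.util.Arrays;
        import java.util.Comparator;

        // Hypothetical helper: prints the 20 most recently modified files in a folder,
        // the same "last modified" check the article suggests doing by eye.
        // Defaults to C:\Windows\System32 if no argument is given.
        public class RecentFiles {
            public static void main(String[] args) {
                File dir = new File(args.length > 0 ? args[0] : "C:\\Windows\\System32");
                File[] files = dir.listFiles(File::isFile);
                if (files == null) {
                    System.err.println("Cannot read " + dir);
                    return;
                }
                Arrays.sort(files, Comparator.comparingLong(File::lastModified).reversed());
                for (int i = 0; i < Math.min(20, files.length); i++) {
                    System.out.printf("%1$tF %1$tT  %2$s%n", files[i].lastModified(), files[i].getName());
                }
            }
        }

    A recently modified, randomly named file near the top of that listing is exactly the kind of candidate the article suggests investigating further before deleting anything.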
    Download Autoruns from SysInternals

    Read the article

  • Ghost team foundation build controllers

    - by Martin Hinshelwood
    Quite often after an upgrade there are things left over. Most of the time they are easy to delete, but sometimes it takes a little effort. Even rarer are those times when something just will not go away no matter how much you try. We have had a ghost team build controller hanging around for a while now, and it had defeated my best efforts to get rid of it. The build controller was from our old TFS server from before our TFS 2010 beta 2 upgrade and was really starting to annoy me. Every time I try to delete it I get the message:

    "Controller cannot be deleted because there are build in progress" – Manage Build Controller dialog

    Figure: Deleting a ghost controller does not always work.

    I ended up checking all of our 172 Team Projects for the build that was queued, but did not find anything. Jim Lamb pointed me to the "tbl_BuildQueue" table in the Team Project Collection database, and sure enough there was the nasty little beggar.

    Figure: The ghost build was easily spotted

    Adam Cogan asked me: "Why did you suspect this one?" Well, there are a number of things that led me to suspect it:

    - QueueId is very low: look at the other items, they are in the thousands, not single digits
    - ControllerId: I know there is only one legitimate controller, and I am assuming that 6 relates to "zzUnicorn"
    - DefinitionId: this is a very low number, and I looked it up in "tbl_BuildDefinition" and it did not exist
    - QueueTime: as we did not upgrade to TFS 2010 until late 2009, a date of 2008 for a queued build is very suspect
    - Status: a status of 2 means that it is still queued

    This build must have been queued long ago when we were using TFS 2008, probably a beta, and it never got cleaned up. As controllers are new in TFS 2010, it would have created the "zzUnicorn" controller to handle any build servers that already existed. I had previously deleted the agent, but leaving the controller just looks untidy. Now that the ghost build has been identified there are two options:

    - Delete the row: I would not recommend ever deleting anything from the database to achieve something in TFS. It is really not supported.
    - Set the Status to cancelled (Recommended): this is the best option, as TFS will then clean it up itself (see the sketch below).

    So I set the Status of this build to 2 (cancelled) and sure enough it disappeared after a couple of minutes, and I was then able to delete the "zzUnicorn" controller.

    Figure: Almost completely clean

    Now all I have to do is get rid of that untidy "zzBunyip" agent, but that will require rewriting one of our build scripts, which will have to wait for now.

    Technorati Tags: ALM,TFBS,TFS 2010
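    The recommended option boils down to a single UPDATE against tbl_BuildQueue. The following is a minimal, hypothetical sketch of scripting that update; it is not from the original post. The server name, collection database name and QueueId are placeholders, the status value should be whatever your TFS version records for a cancelled build, and in practice you would probably just run the UPDATE from SQL Server Management Studio. The sketch also assumes the Microsoft SQL Server JDBC driver is on the classpath.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        // Hypothetical sketch: mark a ghost queued build as cancelled so TFS cleans it
        // up itself, rather than deleting the row. The connection string, QueueId and
        // cancelled-status value are placeholders to adapt to your own environment.
        public class CancelGhostBuild {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:sqlserver://tfsserver;databaseName=Tfs_DefaultCollection;integratedSecurity=true";
                int ghostQueueId = 7;      // the suspiciously low QueueId spotted in tbl_BuildQueue
                int cancelledStatus = 2;   // value the post uses for "cancelled"; verify for your TFS version

                try (Connection conn = DriverManager.getConnection(url);
                     PreparedStatement ps = conn.prepareStatement(
                             "UPDATE tbl_BuildQueue SET Status = ? WHERE QueueId = ?")) {
                    ps.setInt(1, cancelledStatus);
                    ps.setInt(2, ghostQueueId);
                    System.out.println(ps.executeUpdate() + " row(s) updated");
                }
            }
        }

    Updating the status column keeps the change inside TFS's own clean-up path, which is why it is preferable to deleting the row directly.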

    Read the article

< Previous Page | 82 83 84 85 86 87 88 89 90 91 92 93  | Next Page >