Search Results

Search found 1968 results on 79 pages for 'pickle dump'.

Page 73 of 79

  • File server share access intermittent/slow/machine unstable: Win2k8R2

    - by Jack B.
    I have a file server running Win2k8R2 on an older HP DL380G4. It has nothing set up on it other than file sharing, and all drivers/firmware/updates are installed. The file server is used as a dump for a bunch of test machines, so essentially a lot of small files are being written to it. It was working fine until it started showing the following symptoms: shares became either very slow/intermittent or could not be accessed at all; logged in to the server itself, you could use it like normal at first, but Windows would start freezing and eventually you had to hard reboot it because nothing was responsive. After rebooting, it would work fine for 20 minutes to 2 hours and then degrade into this broken state again.

    Some info after investigation:

    - The HP RAID Config utility shows the RAID array (RAID5, by the way) as functioning properly.
    - The event log shows a bunch of DoS attacks from the test machines, saying it has disconnected the connections. (a) As far as I know (this is not part of my job), the test machines haven't changed the way they log information to this server, and their number hasn't increased. (b) Nothing is infected: this server was scanned fully, and the test machines are re-imaged almost daily.
    - Nothing in Performance Monitor shows as being pegged at maximum (CPU/HD/network/RAM).
    - I installed MS Network Monitor and it is showing a lot of traffic.
    - The server was using one gigabit Ethernet connection; I connected the second one as well, with the same results.
    - Forgot to add: one of the commonly written-to dirs on the share has over 16k subdirs in it, with a huge number of small files within those dirs. Some of the OS instability was slow access to the drive which has this directory, though Perfmon doesn't show much activity on the HD, so I'm not sure if this crowded dir is the cause.

    Here is one important fact: I ran into this issue 2-3 months ago and couldn't figure it out, but I had a spare identical machine, so I swapped them out (I thought it was related to the machine), and now I have the same issue. Also, the computer will be stable if I turn off file sharing.

    So is the server just getting DoS'd by the test machines? I've never dealt with such an issue. Is instability in the server's OS common when getting DoS'd? Is there anything I can do to confirm this before telling the owners of the test machines to optimize their traffic? (I'm not sure what they'll be able to do.) Is there something within Win2k8R2 that can balance the traffic across the two NICs? Any help would be appreciated.

    Update: Another thought - the drive with the share is RAID5 across 6 SCSI-320 300GB HDs. They are near full capacity, with about 100GB of 1TB left. Could the amount of tiny files be causing some weirdness with the parity in this array? I think I've read something about this in the past, but I'm no expert on RAID.

    Read the article

  • BIND no longer responds to AXFR Requests

    - by djsumdog
    Recently we moved our primary external DNS server. It has three caching DNS slaves in front of it, provided by our ISP. They've told us they've started getting access-denied responses when doing zone transfers (AXFR). If I add in my own IPs to the allow-transfer list, I also get a failed transfer when using dig with the AXFR argument. Here is what my BIND configuration looks like:

        options {
            directory "/var/lib/named";
            dump-file "/var/log/named_dump.db";
            zone-statistics yes;
            statistics-file "/var/log/named.stats";
            listen-on-v6 { any; };
            notify-source 10.19.0.68 port 53;
            querylog yes;
            notify yes;
            allow-transfer {
                127.0.0.1; // localhost
                1.1.1.1;   // public dns slave 1
                2.2.2.2;   // public dns slave 2
                3.3.3.3;   // public dns slave 3
            };
            also-notify {
                1.1.1.1;   // public dns slave 1
                2.2.2.2;   // public dns slave 2
                3.3.3.3;   // public dns slave 3
            };
            include "/etc/named.d/forwarders.conf";
        };

        logging {
            channel simple_log {
                file "/var/log/bind.log" versions 10 size 3m;
                severity info;
                print-time yes; print-severity yes; print-category yes;
            };
            category default { simple_log; };
            channel log_zone_transfers {
                file "/var/log/axfr.log" versions 10 size 3m;
                print-time yes; print-category yes; print-severity yes;
            };
            category xfer-out { log_zone_transfers; };
            channel log_notify {
                file "/var/log/notify.log" versions 10 size 3m;
                print-time yes; print-category yes; print-severity yes;
            };
            category notify { log_notify; };
            channel queries {
                file "/var/log/queries.log" versions 10 size 30m;
                print-time yes; severity info; print-category yes; print-severity yes;
            };
            category queries { queries; };
        };

        zone "." in { type hint; file "root.hint"; };
        zone "localhost" in { type master; file "localhost.zone"; };
        zone "0.0.127.in-addr.arpa" in { type master; file "127.0.0.zone"; };

        include "/etc/named.conf.include";

        zone "example.net " { type master; file "/var/lib/named/master/example.net.hosts"; };
        zone "example.com " { type master; file "/var/lib/named/master/example.com.hosts"; };
        ## -- other master files --

    And the errors in the xfer log look like the following:

        29-Oct-2012 14:20:02.806 xfer-out: info: client 1.1.1.1#59069: bad zone transfer request: 'example.com./IN': non-authoritative zone (NOTAUTH)

    I've tried adding allow-transfer parameters directly on the zone definitions and still get failed transfers. Any idea what I'm doing wrong?
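    The dig test I am running is just a plain AXFR query against the master (the address below is a placeholder for our server's real IP):

        dig @192.0.2.10 example.com AXFR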

    Read the article

  • javafx tableview get selected data from ObservableList

    - by user3717821
    I am working on a JavaFX project and I need your help: while trying to get selected data from the table, I can get selected data from a normal cell, but I can't get data from the ObservableList inside the TableView. The code for my database:

        -- phpMyAdmin SQL Dump
        -- version 4.0.4
        -- http://www.phpmyadmin.net
        --
        -- Host: localhost
        -- Generation Time: Jun 10, 2014 at 06:20 AM
        -- Server version: 5.1.33-community
        -- PHP Version: 5.4.12

        SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
        SET time_zone = "+00:00";

        /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
        /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
        /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
        /*!40101 SET NAMES utf8 */;

        --
        -- Database: `test`
        --
        -- Table structure for table `customer`
        --

        CREATE TABLE IF NOT EXISTS `customer` (
          `col0` int(11) NOT NULL,
          `col1` varchar(255) DEFAULT NULL,
          `col2` int(11) DEFAULT NULL,
          PRIMARY KEY (`col0`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        --
        -- Dumping data for table `customer`
        --

        INSERT INTO `customer` (`col0`, `col1`, `col2`) VALUES
        (12, 'adasdasd', 231),
        (22, 'adasdasd', 231),
        (212, 'adasdasd', 231);

        /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
        /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
        /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;

    My JavaFX code:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.Map;
        import javafx.application.Application;
        import javafx.beans.property.SimpleStringProperty;
        import javafx.beans.value.ChangeListener;
        import javafx.beans.value.ObservableValue;
        import javafx.collections.FXCollections;
        import javafx.collections.ObservableList;
        import javafx.event.ActionEvent;
        import javafx.event.EventHandler;
        import javafx.scene.Scene;
        import javafx.scene.control.Button;
        import javafx.scene.control.TableCell;
        import javafx.scene.control.TableColumn;
        import javafx.scene.control.TableColumn.CellDataFeatures;
        import javafx.scene.control.TablePosition;
        import javafx.scene.control.TableView;
        import javafx.scene.control.TableView.TableViewSelectionModel;
        import javafx.scene.control.cell.ChoiceBoxTableCell;
        import javafx.scene.control.cell.TextFieldTableCell;
        import javafx.scene.layout.BorderPane;
        import javafx.stage.Stage;
        import javafx.util.Callback;
        import javafx.util.StringConverter;

        class DBConnector {
            private static Connection conn;
            private static String url = "jdbc:mysql://localhost/test";
            private static String user = "root";
            private static String pass = "root";

            public static Connection connect() throws SQLException {
                try {
                    Class.forName("com.mysql.jdbc.Driver").newInstance();
                } catch (ClassNotFoundException cnfe) {
                    System.err.println("Error: " + cnfe.getMessage());
                } catch (InstantiationException ie) {
                    System.err.println("Error: " + ie.getMessage());
                } catch (IllegalAccessException iae) {
                    System.err.println("Error: " + iae.getMessage());
                }
                conn = DriverManager.getConnection(url, user, pass);
                return conn;
            }

            public static Connection getConnection() throws SQLException, ClassNotFoundException {
                if (conn != null && !conn.isClosed())
                    return conn;
                connect();
                return conn;
            }
        }

        public class DynamicTable extends Application {
            Object newValue;

            // TABLE VIEW AND DATA
            private ObservableList<ObservableList> data;
            private TableView<ObservableList> tableview;

            // MAIN EXECUTOR
            public static void main(String[] args) {
                launch(args);
            }

            // CONNECTION DATABASE
            public void buildData() {
                tableview.setEditable(true);
                Callback<TableColumn<Map, String>, TableCell<Map, String>> cellFactoryForMap =
                        new Callback<TableColumn<Map, String>, TableCell<Map, String>>() {
                    @Override
                    public TableCell call(TableColumn p) {
                        return new TextFieldTableCell(new StringConverter() {
                            @Override
                            public String toString(Object t) {
                                return t.toString();
                            }
                            @Override
                            public Object fromString(String string) {
                                return string;
                            }
                        });
                    }
                };
                Connection c;
                data = FXCollections.observableArrayList();
                try {
                    c = DBConnector.connect();
                    // SQL FOR SELECTING ALL OF CUSTOMER
                    String SQL = "SELECT * from CUSTOMer";
                    ResultSet rs = c.createStatement().executeQuery(SQL);

                    /**********************************
                     * TABLE COLUMN ADDED DYNAMICALLY *
                     **********************************/
                    for (int i = 0; i < rs.getMetaData().getColumnCount(); i++) {
                        // We are using non property style for making dynamic table
                        final int j = i;
                        TableColumn col = new TableColumn(rs.getMetaData().getColumnName(i + 1));
                        if (j == 1) {
                            final ObservableList<String> logLevelList = FXCollections.observableArrayList(
                                    "FATAL", "ERROR", "WARN", "INFO", "INOUT", "DEBUG");
                            col.setCellFactory(ChoiceBoxTableCell.forTableColumn(logLevelList));
                            tableview.getColumns().addAll(col);
                        } else {
                            col.setCellValueFactory(new Callback<CellDataFeatures<ObservableList, String>, ObservableValue<String>>() {
                                public ObservableValue<String> call(CellDataFeatures<ObservableList, String> param) {
                                    return new SimpleStringProperty(param.getValue().get(j).toString());
                                }
                            });
                            tableview.getColumns().addAll(col);
                        }
                        if (j != 1)
                            col.setCellFactory(cellFactoryForMap);
                        System.out.println("Column [" + i + "] ");
                    }

                    /********************************
                     * Data added to ObservableList *
                     ********************************/
                    while (rs.next()) {
                        // Iterate Row
                        ObservableList<String> row = FXCollections.observableArrayList();
                        for (int i = 1; i <= rs.getMetaData().getColumnCount(); i++) {
                            // Iterate Column
                            row.add(rs.getString(i));
                        }
                        System.out.println("Row [1] added " + row);
                        data.add(row);
                    }
                    // FINALLY ADDED TO TableView
                    tableview.setItems(data);
                } catch (Exception e) {
                    e.printStackTrace();
                    System.out.println("Error on Building Data");
                }
            }

            @Override
            public void start(Stage stage) throws Exception {
                // TableView
                Button showDataButton = new Button("Add");
                showDataButton.setOnAction(new EventHandler<ActionEvent>() {
                    public void handle(ActionEvent event) {
                        ObservableList<String> row = FXCollections.observableArrayList();
                        for (int i = 1; i <= 3; i++) {
                            // Iterate Column
                            row.add("asdasd");
                        }
                        data.add(row);
                        // FINALLY ADDED TO TableView
                        tableview.setItems(data);
                    }
                });

                tableview = new TableView();
                buildData();

                // Main Scene
                BorderPane root = new BorderPane();
                root.setCenter(tableview);
                root.setBottom(showDataButton);
                Scene scene = new Scene(root, 500, 500);
                stage.setScene(scene);
                stage.show();

                tableview.getSelectionModel().selectedItemProperty().addListener(new ChangeListener() {
                    @Override
                    public void changed(ObservableValue observableValue, Object oldValue, Object newValue) {
                        // Check whether item is selected and set value of selected item to Label
                        if (tableview.getSelectionModel().getSelectedItem() != null) {
                            TableViewSelectionModel selectionModel = tableview.getSelectionModel();
                            ObservableList selectedCells = selectionModel.getSelectedCells();
                            TablePosition tablePosition = (TablePosition) selectedCells.get(0);
                            Object val = tablePosition.getTableColumn().getCellData(newValue);
                            System.out.println("Selected Value " + val);
                            System.out.println("Selected row " + newValue);
                        }
                    }
                });
            }
        }

    Please help me.
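    To be clear, what I expect to be able to do inside the selection listener is read the values straight off the selected row, something like this (a sketch; the cast matches how my rows are built above, and column 1 is the ChoiceBox column):

        ObservableList<String> selectedRow =
                (ObservableList<String>) tableview.getSelectionModel().getSelectedItem();
        if (selectedRow != null) {
            // Print every column of the selected row, including the ChoiceBox one
            for (int i = 0; i < selectedRow.size(); i++) {
                System.out.println("col " + i + " = " + selectedRow.get(i));
            }
        }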

    Read the article

  • AutoMapper MappingFunction from Source Type of NameValueCollection

    - by REA_ANDREW
    I have had a situation arise today where I needed to construct a complex type from a source of a NameValueCollection. A little while back I submitted a patch for the Agatha Project to include REST (JSON and XML) support for the service contract. I realized today that, as useful as it is, it did not actually support true REST conformance, as REST should support GET so that you can use JSONP from JavaScript directly, meaning you can query cross-domain services. My original implementation for POX and JSON used the POST method, and this immediately rules out JSONP because, from what I have read, JSONP only works with GET requests.

    This then raised another issue. The current operation contract of Agatha, and one of its main benefits, is that you can supply an array of Request objects in a single request, limiting the amount of server requests you need to make. At the present time I am thinking that this will not be the case for the REST implementation, but it will yield the following benefits:

    - The same Request objects can be used for SOAP and REST (POX, JSON)
    - The construct of the JavaScript functions will be simpler and more readable
    - It will enable the use of JSONP for cross-domain REST services

    The current contract for the Agatha WcfRequestProcessor is, at time of writing, the following:

        [ServiceContract]
        public interface IWcfRequestProcessor
        {
            [OperationContract(Name = "ProcessRequests")]
            [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))]
            [TransactionFlow(TransactionFlowOption.Allowed)]
            Response[] Process(params Request[] requests);

            [OperationContract(Name = "ProcessOneWayRequests", IsOneWay = true)]
            [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))]
            void ProcessOneWayRequests(params OneWayRequest[] requests);
        }

    My current proposed solution, at the very early stages of my concept, is as follows:

        [ServiceContract]
        public interface IWcfRestJsonRequestProcessor
        {
            [OperationContract(Name = "process")]
            [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))]
            [TransactionFlow(TransactionFlowOption.Allowed)]
            [WebGet(UriTemplate = "process/{name}/{*parameters}",
                    BodyStyle = WebMessageBodyStyle.WrappedResponse,
                    ResponseFormat = WebMessageFormat.Json)]
            Response[] Process(string name, NameValueCollection parameters);

            [OperationContract(Name = "processoneway", IsOneWay = true)]
            [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))]
            [WebGet(UriTemplate = "process-one-way/{name}/{*parameters}",
                    BodyStyle = WebMessageBodyStyle.WrappedResponse,
                    ResponseFormat = WebMessageFormat.Json)]
            void ProcessOneWayRequests(string name, NameValueCollection parameters);
        }

    Now, this part I have not yet implemented; it is the preliminary step I have developed which will allow me to take the name of the Request type and the NameValueCollection and construct the complex type which is that Request, which I can then supply to a nested instance of the original IWcfRequestProcessor so it works as it normally should. To give an example, some of the URLs which I envisage with this method are:

        http://www.url.com/service.svc/json/process/getweather/?location=london
        http://www.url.com/service.svc/json/process/getproductsbycategory/?categoryid=1
        http://www.url.com/service.svc/json/process/sayhello/?name=andy

    Another reason why my direction has gone to a single request for the REST implementation is because of the restrictions which are imposed by browsers on the length of the URL. From what I have read this is on average 2000 characters.
    I think that this is a very acceptable usage limit in the context of using one request, but I do not think it is acceptable for accommodating multiple requests chained together. I would love to be corrected on that one, I really would, but unfortunately from what I have read I have come to the conclusion that this is not the case.

    The mapping function

    So, as I say, this is just the first pass I have made at this, and I am not overly happy with the try/catch for detecting types without default constructors. I know there is a better way, but for the minute it escapes me (a sketch of one option follows after the test code below). I would also like to know the correct way of adding mapping functions rather than the anonymous way that I have used. To achieve this I have used recursion, which I am sure is what other mapping functions use, as you do have to go as deep as the complex type is.

        public static object RecurseType(NameValueCollection collection, Type type, string prefix)
        {
            try
            {
                var returnObject = Activator.CreateInstance(type);
                foreach (var property in type.GetProperties())
                {
                    foreach (var key in collection.AllKeys)
                    {
                        if (String.IsNullOrEmpty(prefix) || key.Length > prefix.Length)
                        {
                            var propertyNameToMatch = String.IsNullOrEmpty(prefix)
                                ? key
                                : key.Substring(property.Name.IndexOf(prefix) + prefix.Length + 1);
                            if (property.Name == propertyNameToMatch)
                            {
                                property.SetValue(returnObject, Convert.ChangeType(collection.Get(key), property.PropertyType), null);
                            }
                            else if (property.GetValue(returnObject, null) == null)
                            {
                                property.SetValue(returnObject, RecurseType(collection, property.PropertyType, String.Concat(prefix, property.PropertyType.Name)), null);
                            }
                        }
                    }
                }
                return returnObject;
            }
            catch (MissingMethodException)
            {
                // Quite a blunt way of dealing with Types without default constructor
                return null;
            }
        }

    Another thing is performance: I have not measured this in any way; it is, as I say, a first pass, so I hope this can be the start of a more perfected implementation. I tested this out with a complex type of three levels. There is no intended logical meaning to the properties; they are simply for the purposes of example. You could call this a spiking session, as from here on in, now that I know what I am building, I would take a more TDD approach. OK purists, why did I not do this from the start? Well, I didn't; this was a brain dump, and now that I know what I am building, I can. The console test, and how I used this with AutoMapper, is as follows:

        static void Main(string[] args)
        {
            var collection = new NameValueCollection();
            collection.Add("Name", "Andrew Rea");
            collection.Add("Number", "1");
            collection.Add("AddressLine1", "123 Street");
            collection.Add("AddressNumber", "2");
            collection.Add("AddressPostCodeCountry", "United Kingdom");
            collection.Add("AddressPostCodeNumber", "3");

            AutoMapper.Mapper.CreateMap<NameValueCollection, Person>()
                .ConvertUsing(x =>
                {
                    return (Person)RecurseType(x, typeof(Person), null);
                });

            var person = AutoMapper.Mapper.Map<NameValueCollection, Person>(collection);

            Console.WriteLine(person.Name);
            Console.WriteLine(person.Number);
            Console.WriteLine(person.Address.Line1);
            Console.WriteLine(person.Address.Number);
            Console.WriteLine(person.Address.PostCode.Country);
            Console.WriteLine(person.Address.PostCode.Number);
            Console.ReadLine();
        }

    Notice the convention that I am using, and that this method requires you to use: each property is prefixed with the constructed name of its parents combined. This is the convention used by AutoMapper and it makes sense.
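    Coming back to the try/catch I am unhappy with above, a sketch of the better way (untested) is probably to probe for a parameterless constructor before instantiating, along these lines:

        var ctor = type.GetConstructor(Type.EmptyTypes);
        if (ctor == null)
        {
            // No default constructor - nothing sensible we can construct here
            return null;
        }
        var returnObject = ctor.Invoke(null);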
    I can also think of other uses for this, including use with ASP.NET MVC ModelBinders for creating a complex type from the QueryString, which is itself a NameValueCollection. Hope this is of some help to people, and I would welcome any code reviews you could give me.

    References:
    Agatha: http://code.google.com/p/agatha-rrsl/
    AutoMapper: http://automapper.codeplex.com/

    Cheers for now,
    Andrew

    P.S. I will have the proposed solution for a more complete REST implementation for AGATHA very soon.

    Read the article

  • Top Tweets SOA Partner Community – March 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

    - SOA Community: SOA Community Newsletter February 2012 wp.me/p10C8u-o0
    - Marc: Reading through the #OFM 11.1.1.6 patchset 5 documentation. What is the best way to upgrade your whole dev…prd street.
    - SOA Community: Thanks for the successful and super interesting #sbidays! Wonderful discussions around the integration, case management and security tracks
    - Torsten Winterberg: Already bookmarked the new Opitz technology blog? The Cattle Crew bit.ly/yLPwBD will be posting findings regularly from now on.
    - OTNArchBeat: Unit Testing Asynchronous BPEL Processes Using soapUI | @DanielAmadei bit.ly/x9NsS9
    - Rolando Carrasco: Video of Human Task in BPM 11g. By @edwardo040. bit.ly/wki9CA cc @OracleBPM @OracleSOA @soacommunity
    - Marcel Mertin: SOA Security Hands-On by Dirk Krafzig and Mamoon Yunus at #sbidays is also great!
    - SOA Community: Workshop day #sbidays #BPMN2.0 by Volker Stiehl from #SAP, great training - now I can model & execute in #bpmsuite #soacommunity
    - Simone Geib: Just updated our advanced #soasuite #otn page with a number of very interesting @orclateamsoa blog posts: bit.ly/advancedsoasui…
    - OTNArchBeat: Start Small, Grow Fast: SOA Best Practices article by @biemond, @rluttikhuizen, @demed bit.ly/yem9Zv
    - Steffen Miller: Nice new features in SOA Suite Business Rules #PS5. Testing rules with scenarios and output validation bit.ly/zj64Q3 @SOACOMMUNITY
    - OTNArchBeat: SOA Blackbelt training by David Shaffer, April 30th-May 4th 2012 bit.ly/xGdC24
    - OTNArchBeat: What have BPM, big data, social tools, and business models got in common? | Andy Mulholland bit.ly/xUkOGf
    - SOA Community: Live hacking at #sbidays - cheaper shopping, bias cracking, payment systems, secure your SOA! pic.twitter.com/y7YaIdug
    - SOA Community: Future #BPM & #ACM solutions can make use of ontologies, based on #sparql #sbidays pic.twitter.com/xLb1Z5zs
    - Simone Geib: @soacommunity: SOA Blackbelt training by David Shaffer, April 30th-May 4th 2012 wp.me/p10C8u-nX
    - Biemond: Changing your ADF Connections in Enterprise Manager with PS5: With Patch Set 5 of Fusion Middleware you can fina… bit.ly/zF7Rb1
    - Marc: HUGE (!) CPU and heap improvement on Oracle Fusion Middleware tinyurl.com/762drzp @wlscommunity @soacommunity #OSB #SOA #WLS
    - SOA Community: Networking @ SOA & BPM Partner Community blogs.oracle.com/soacommunity/e… #soacommunity #otn #opn #oracle
    - SOA Community: Published the SOA Partner Community newsletter February edition - READ it. Not yet a member? oracle.com/goto/emea/soa #soacommunity #otn #opn
    - AMIS, Oracle & Java Blog by Lucas Jellema: "Book Review: Do More with SOA Integration: Best of Packt (December 2011, various authors)" bit.ly/wq633E
    - Jon Petter Hjulstad: @SOASimone Excellent summary! Lots of new features!
    - Simone Geib: Do you want to know what's new in #soasuite #PS5? Go to bit.ly/xBX06f and let me know what you think
    - SOA Community: Unit Testing Asynchronous BPEL Processes Using soapUI oracle.com/technetwork/ar… #soacommunity #soa #otn #oracle #bpel
    - Eric Elzinga: Oracle Fusion Middleware Partner Community Forum Malaga, The Overview, bit.ly/AA9BKd #ofmforum
    - SOA&Cloud Symposium: The February issue of the Service Technology Magazine is now published. servicetechmag.com
    - SOA Community: Oracle SOA Suite 11g Database Growth Management - must read! oracle.com/technetwork/da… #soacommunity #soa #purging
    - demed: Have you exposed internal processes to mobile devices using #oraclesoa? Interested in an article? DM me! #osb #rest #multichannel #mobile
    - orclateamsoa: A-Team SOA Blog: Enhanced version of Thread Dump Analyzer (TDA A-Team) ow.ly/1hpk7l
    - SOA Community: BPM Suite #PS5 (11.1.1.6) available for download soacommunity.wordpress.com/2012/02/22/soa… Send us your feedback! #soacommunity #bpmsuite #opn
    - SOA Community: SOA Suite #PS5 (11.1.1.6) available for download soacommunity.wordpress.com/2012/02/22/soa… Send us your feedback! #soacommunity #soasuite
    - SOA Community: BPM Suite #PS5 (11.1.1.6) available for download. List of new BPM features blogs.oracle.com/soacommunity/e… #soacommunity #bpm #bpmsuite #opn
    - OracleBlogs: BPM in Utilities Industry ow.ly/1hC3fp
    - OTNArchBeat: Demystifying Oracle Enterprise Gateway | Naresh Persaud bit.ly/xtDNe2
    - OTNArchBeat: Architect's Guide to Big Data; Test BPEL Processes Using SoapUI; Development Debate bit.ly/xbDYSo
    - Frank Nimphius: Finished my book review of "Do More with SOA Integration: Best of Packt". Here are my review comments: bit.ly/x2k9OZ
    - Lucas Jellema: That is my one stop-and-go download center for #PS5: edelivery.oracle.com/EPD/Download/g…
    - Lucas Jellema: Interesting piece of documentation: Fusion Applications Extensibility Guide - docs.oracle.com/cd/E15586_01/f… source for design time @ run time inspira
    - Lucas Jellema: Strongly improved support for testing Business Rules at design time in #PS5, see docs.oracle.com/cd/E23943_01/u…
    - Lucas Jellema: SOA Suite 11gR1 PS5: new BPEL component testing - docs.oracle.com/cd/E23943_01/d…
    - Lucas Jellema: PS5 available for CEP (Complex Event Processing) - a personal favorite of mine: oracle.com/technetwork/mi…
    - Lucas Jellema: What's New in Fusion Developer's Guide 11gR1 PS5: docs.oracle.com/cd/E23943_01/w…
    - Lucas Jellema: BPMN Correlation (FMW 11gR1 PS5): docs.oracle.com/cd/E23943_01/d…
    - Lucas Jellema: Modifying running BPM process instances (FMW 11gR1 PS5): docs.oracle.com/cd/E23943_01/d…
    - Lucas Jellema: SOA Suite 11gR1 PS5 - new aggregation pattern: docs.oracle.com/cd/E23943_01/d… routing multiple messages to the same instance
    - Melvin van der Kuijl: Automating Testing of SOA Composite Applications in PS5. docs.oracle.com/cd/E23943_01/d…
    - Cato Aune: SOA Suite PS5 Enterprise Deployment Guide is available in ePub docs.oracle.com/cd/E23943_01/c… Much better than PDF on a Galaxy Note
    - SOA Community: JDeveloper 11.1.1.6 is available for download bit.ly/wGYrwE #soacommunity
    - SOA Community: Your first experience with #PS5 - let us know @soacommunity - send us your tweets and blog posts! #soacommunity
    - Jon Petter Hjulstad: WLS 10.3.6 new features, e.g. better logging of JDBC use: docs.oracle.com/cd/E23943_01/w…
    - Heidi Buelow: Get it now! RT @soacommunity: BPM Suite PS5 11.1.1.6 available for download bit.ly/AgagT5 #bpm #soacommunity
    - Jon Petter Hjulstad: SOA Suite PS5 EDG contains OSB! docs.oracle.com/cd/E23943_01/c…
    - Jon Petter Hjulstad: Testing Oracle Rules from JDeveloper is easier in PS5: docs.oracle.com/cd/E23943_01/u…
    - Biemond®: What's New in Oracle Service Bus 11.1.1.6.0 oracle.com/technetwork/mi…
    - Jon Petter Hjulstad: Admin guide new and changed features for PS5, e.g. GridLink data sources: docs.oracle.com/cd/E23943_01/c…
    - Andreas Koop: Unbelievable! #OFM doc lib growth from 11g PS4 to 11g PS5 by 1.2G!
    - SOA Community: ODI PS5 is available oracle.com/technetwork/mi… #odi #soacommunity
    - SOA Community: Service Bus 11g Development Cookbook soacommunity.wordpress.com/2012/02/20/ser… #osb #soacommunity #ace #opn

    For regular information on Oracle SOA Suite, become a member of the SOA Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required).

    Read the article

  • JMX Based Monitoring - Part Four - Business App Server Monitoring

    - by Anthony Shorten
    In the last blog entry I talked about the Oracle Utilities Application Framework V4 feature for monitoring and managing aspects of the web application server using JMX. In this blog entry I am going to discuss a similar new feature that allows JMX to be used for management and monitoring of the Oracle Utilities business application server component. This feature is primarily focussed on performance tracking of the product.

    In the first release of Oracle Utilities Customer Care And Billing (V1.x, I am talking about), we used Oracle Tuxedo as part of the architecture. In Oracle Utilities Application Framework V2.0 and above, we removed Tuxedo from the architecture. One of the features that some customers used within Tuxedo was the performance tracking ability. The idea was that you enabled performance logging on the individual Tuxedo servers and then used a utility named txrpt to produce a performance report. This report would list every service called, the number of times it was called and the average response time. When I worked as a performance consultant, I used this report to identify badly performing services and also to gauge the overall performance characteristics of a site.

    When Tuxedo was removed from the architecture this information was also lost. While you can get some information from access.log and some MBeans supplied by the web application server, it was not at the same granularity as txrpt, or as useful. I am happy to say we have not only reintroduced this facility in the Oracle Utilities Application Framework, but it is now accessible via JMX, and we have also added more detail into the performance tracking. Most of this new design came from working with customers around the world to make sure we introduced a new feature that not only satisfied their performance tracking needs but allowed for finer-grained performance analysis.

    As with the web application server, the business application server JMX monitoring is enabled by specifying a JMX port number in the RMI Port number for JMX Business setting and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available using the configureEnv[.sh] -a utility. These credentials are shared across the web application server and business application server for authorization purposes. Once this information is supplied, a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility:

    - spl.properties - contains the JMX URL, the security configuration and the MBeans that are enabled. For example, on my demonstration machine:

        spl.runtime.management.rmi.port=6750
        spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6750/oracle/ouaf/ejbAppConnector
        jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
        jmx.remote.x.access.file=scripts/ouaf.jmx.access.file
        ouaf.jmx.com.splwg.ejb.service.management.PerformanceStatistics=enabled

    - ouaf.jmx.* files - contain the userid and password. The default configuration uses the JMX default security configuration. You can use additional security features by altering the spl.properties file manually or using a custom template. For more security options see JMX Security for more details.

    Once it has been configured and the changes reflected in the product using the initialSetup[.sh] utility, the JMX facility can be used. For illustrative purposes I will use jconsole, but any JSR160-compliant browser or client can be used (with the appropriate configuration).
    Once you start jconsole (ensure that splenviron[.sh] is executed prior to execution to set the environment variables, or, for a remote connection, ensure java is in your path and jconsole.jar is in your classpath), you specify the URL from the spl.runtime.management.connector.url.default entry. You are then able to track performance of the product using the PerformanceStatistics MBean.

    The attributes of the PerformanceStatistics MBean are counts of each object type. This is where this facility differs from txrpt. The information that is collected includes the following:

    - The service type is captured so you can filter the results in terms of the type of service. For maintenance-type services you can even see the transaction type (ADD, CHANGE etc.), so you can see the performance of updates against read transactions.
    - The minimum and maximum times are also collected, to give you an idea of the spread of performance.
    - The last call is recorded. The date, time and user of the last call are recorded to give you an idea of the timeliness of the data.
    - The MBean maintains a set of counters per service type to give you a summary of the types of transactions being executed. This gives you an overall picture of the types of transactions and volumes at your site.

    There are a number of interesting operations that can also be performed:

    - reset - This resets the statistics back to zero. This is an important operation. For example, txrpt is restricted to collecting statistics per hour, which is OK for most people. But what if you wanted to be more granular? This operation allows you to set the collection period to anything you wish. The statistics collected will represent values since the last restart or last reset.
    - completeExecutionDump - This is the operation that produces a CSV in memory to allow extraction of the data. All the statistics are extracted (see the Server Administration Guide for a full list). This can then be loaded into a database, a tool, or simply into your favourite spreadsheet for analysis.

    Here is an extract of an execution dump from my demonstration environment, to give you an idea of the format:

        ServiceName, ServiceType, MinTime, MaxTime, Avg Time, # of Calls, Latest Time, Latest Date, Latest User
        ...
        CFLZLOUL, EXECUTE_LIST, 15.0, 64.0, 22.2, 10, 16.0, 2009-12-16::11-25-36-932, ASHORTEN
        CILBBLLP, READ, 106.0, 1184.0, 466.3333333333333, 6, 106.0, 2009-12-16::11-39-01-645, BOBAMA
        CILBBLLP, DELETE, 70.0, 146.0, 108.0, 2, 70.0, 2009-12-15::12-53-58-280, BPAYS
        CILBBLLP, ADD, 860.0, 4903.0, 2243.5, 8, 860.0, 2009-12-16::17-54-23-862, LELLISON
        CILBBLLP, CHANGE, 112.0, 3410.0, 815.1666666666666, 12, 112.0, 2009-12-16::11-40-01-103, ASHORTEN
        CILBCBAL, EXECUTE_LIST, 8.0, 84.0, 26.0, 22, 23.0, 2009-12-16::17-54-01-643, LJACKMAN
        InitializeUserInfoService, READ_SYSTEM, 49.0, 962.0, 70.83777777777777, 450, 63.0, 2010-02-25::11-21-21-667, ASHORTEN
        InitializeUserService, READ_SYSTEM, 130.0, 2835.0, 234.85777777777778, 450, 216.0, 2010-02-25::11-21-21-446, ASHORTEN
        MenuLoginService, READ_SYSTEM, 530.0, 1186.0, 703.3333333333334, 9, 530.0, 2009-12-16::16-39-31-172, ASHORTEN
        NavigationOptionDescriptionService, READ_SYSTEM, 2.0, 7.0, 4.0, 8, 2.0, 2009-12-21::09-46-46-892, ASHORTEN
        ...

    There are other operations and attributes available. Refer to the Server Administration Guide provided with your product to understand the full set of operations and attributes. This is one of the many features I am proud that we implemented, as it allows flexible monitoring of the performance of the product.
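    To give an idea of what any JSR160 client can do with this, here is a minimal sketch of pulling the execution dump programmatically. The ObjectName and credentials below are illustrative only; check the name actually registered in your environment with an MBean browser and substitute your own JMX user and password:

        import java.util.HashMap;
        import java.util.Map;
        import javax.management.MBeanServerConnection;
        import javax.management.ObjectName;
        import javax.management.remote.JMXConnector;
        import javax.management.remote.JMXConnectorFactory;
        import javax.management.remote.JMXServiceURL;

        public class PerfDump {
            public static void main(String[] args) throws Exception {
                // URL taken from spl.runtime.management.connector.url.default
                JMXServiceURL url = new JMXServiceURL(
                        "service:jmx:rmi:///jndi/rmi://localhost:6750/oracle/ouaf/ejbAppConnector");

                // Credentials as configured via configureEnv[.sh] -a (placeholders here)
                Map<String, Object> env = new HashMap<>();
                env.put(JMXConnector.CREDENTIALS, new String[] {"jmxuser", "jmxpassword"});

                try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
                    MBeanServerConnection mbs = connector.getMBeanServerConnection();
                    // Illustrative ObjectName; verify the real one in jconsole
                    ObjectName perf = new ObjectName("oracle.ouaf:name=PerformanceStatistics");
                    // Invoke the CSV dump operation and print the result
                    Object csv = mbs.invoke(perf, "completeExecutionDump",
                            new Object[0], new String[0]);
                    System.out.println(csv);
                }
            }
        }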

    Read the article

  • Big Data – Is Big Data Relevant to me? – Big Data Questionnaires – Guest Post by Vinod Kumar

    - by Pinal Dave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Having worked on various versions of SQL Server 7.0, Oracle 7.3 and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod's own voice.

    I think the series from Pinal is a good one for anyone planning to start their Big Data journey from the basics. In my daily customer interactions this buzz of "Big Data" always comes up. I generally react by saying: "Sir, do you really have a 'Big Data' problem or do you have a big Data problem?" Generally, there is silence in the air when I ask this question. Data is everywhere in organizations - be it big data, small data, all data, and for a few it is bad data, which is the same as no data :). Wow, don't discount me as someone who opposes "Big Data"; I am as big a supporter as I am a critic of the abuse of this term. In this post, I wanted to let my mind flow so that you can also think in the direction I want you to see these concepts. In any case, this is not an exhaustive dump of what is in my mind, but you will surely get the drift of how I am going to question Big Data terms from customers!!!

    Is Big Data Relevant to me?

    Many of my customers talk to me like a blank whiteboard, with no idea of "why Big Data". They want to jump onto the technology bandwagon and decipher insights from their unexplored data, a.k.a. unstructured data, together with structured data. So what are the industry scenarios that come to mind? Here are some of them:

    Financials
    - Fraud detection: banks and credit cards are monitoring your spending habits on a real-time basis.
    - Customer segmentation: applies in every industry from banking to retail to aviation to utility and others that deal with an end customer who consumes their products and services.
    - Customer sentiment analysis: responding to negative brand perception on social media, or amplifying the positive perception.
    - Sales and marketing campaigns: understand the impact and get closer to customer delight.
    - Call center analysis: attempt to take unstructured voice recordings and analyze them for content and sentiment.

    Medical
    - Reduce re-admissions: how to build proactive follow-up engagements with patients.
    - Patient monitoring: how to track inpatient, outpatient, emergency visits, intensive care units, etc.
    - Preventive care: disease identification and risk stratification is a very crucial business function for medical.
    - Claims fraud detection: there are no precise dollars that one can put here, but this is a big thing for the medical field.

    Retail
    - Customer sentiment analysis, customer care centers, campaign management.
    - Supply chain analysis: every sensor and RFID data point can be tracked for warehouse space optimization.
    - Location-based marketing: based on where a check-in happens, retail stores can optimize their marketing.

    Telecom
    - Price optimization and plans, finding customer churn, customer loyalty programs.
    - Call Detail Record (CDR) analysis, network optimizations, user location analysis.
    - Customer behavior analysis.

    Insurance
    - Fraud detection and analysis, pricing based on customer sentiment analysis, loyalty management.
    - Agents analysis, customer value management.

    This list can go on to other areas like utility, manufacturing, travel, ITES, etc. So as you can see, there are obviously interesting use cases for each of these industry verticals. This is just a representative list.

    Where to start?

    A lot of times I try to quiz customers on a number of dimensions before starting a Big Data conversation:

    - Are you getting the data you need, the way you want it, and in a timely manner?
    - Can you get in and analyze the data you need?
    - How quickly does IT respond to your BI requests?
    - How easily can you get at the data that you need to run your business/department/project?
    - How are you currently measuring your business?
    - Can you get the data you need to react WITHIN THE QUARTER to impact behaviors to meet your numbers, or is it always "rear-view mirror"?
    - How are you measuring: the brand, customer sentiment, your competition, your pricing, your performance, supply chain efficiencies, predictive product/service positioning?
    - What are your key challenges in driving collaboration across your global business? What are the challenges in innovation?
    - What challenges are you facing in getting more information out of your data?

    Note: garbage in is garbage out. This holds good for all reporting/analytics requirements.

    Big Data POCs?

    A number of customers get into the realm of setting up a small team to work on Big Data - well, it is a great start from an understanding point of view, but I tend to ask a number of other questions of such customers. Some of these common questions are:

    - To what degree is your advanced analytics (natural language processing, sentiment analysis, predictive analytics and classification) paired with your Big Data efforts?
    - Do you have dedicated resources exploring the possibilities of advanced analytics in Big Data for your business line?
    - Do you plan to employ machine learning technology while doing advanced analytics?
    - How is social media being monitored in your organization?
    - What is your ability to scale in terms of storage and processing power?
    - Do you have a system in place to sort incoming data in near real time by potential value, data quality, and use frequency?
    - Do you use event-driven architecture to manage incoming data?
    - Do you have specialized data services that can accommodate different formats, security, and the management requirements of multiple data sources?
    - Is your organization currently using or considering in-memory analytics?
    - To what degree are you able to correlate data from your Big Data infrastructure with that from your enterprise data warehouse?
    - Have you extended the role of Data Stewards to include ownership of big data components?
    - Do you prioritize data quality based on the source system (that is, Facebook/Twitter data has lower quality thresholds than radio frequency identification (RFID) for a tracking system)?
    - Do your retention policies consider the different legal responsibilities for storing Big Data for a specific amount of time?
    - Do Data Scientists work in close collaboration with Data Stewards to ensure data quality?
    - How is access to attributes of Big Data being given out in the organization?
    - Are roles related to Big Data (Advanced Analyst, Data Scientist) clearly defined?
    - How involved is risk management in the Big Data governance process?
    - Is there a set of documented policies regarding Big Data governance? Is there an enforcement mechanism or approach to ensure that policies are followed?
    - Who is the key sponsor for your Big Data governance program? (The CIO is best.)
    - Do you have defined policies surrounding the use of social media data for potential employees and customers, as well as the use of customer geo-location data?
    - How accessible are complex analytic routines to your user base?
    - What is the level of involvement with outside vendors and third parties in regard to the planning and execution of Big Data projects?
    - What programming technologies are utilized by your data warehouse/BI staff when working with Big Data?

    These are some of the important questions I ask each customer who is actively evaluating Big Data trends for their organization. These questions give you a sense of direction on where to start, what to use, how to secure, how to analyze, and more.

    Sign off

    Any Big Data analysis is incomplete without a compelling story. The best way to understand this is to watch the Hans Rosling Gapminder videos (2:17 to 6:06) about third-world myths. Don't get overwhelmed by the Big Data buzzword; the destination your data points to is what is important. In this blog post, we did not look at any particular Big Data technologies. This is a set of questionnaires one needs to keep in mind as they embark on their Big Data journey. I did write some of the basics in my blog: Big Data - Big Hype yet Big Opportunity. Do let me know if these questions make sense.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building and studying software systems. He is a top-rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise's bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems and, most recently, Windows Azure. When he is away from his laptop, you will find him taking deep dives into automobiles, pottery, rafting, photography, cooking and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia's largest .NET user group. Here is the guest post by Niraj Bhatt.

    As data in your applications grows, it's the database that usually becomes a bottleneck. It's hard to scale a relational DB, and the preferred approach for large-scale applications is to create separate databases for writes and reads. These databases are referred to as the transactional database and the reporting database. Though there are tools/techniques which can allow you to create a snapshot of your transactional database for reporting purposes, sometimes they don't quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, an effective schema (for an information worker to self-service herself), historical data, better performance (flat data, no joins), etc. This is where the need for a data warehouse or an OLAP system arises.

    A key point to remember is that a data warehouse is mostly a relational database. It's built on top of the same concepts: tables, rows, columns, primary keys, foreign keys, etc. Before we talk about how data warehouses are typically structured, let's understand the key components that can create a data flow between OLTP systems and OLAP systems. There are three major areas:

    a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in context could be how many customers, divided by geography, moved from the silver to the gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out-of-the-box features provided by some databases; e.g. SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements.

    b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that can run periodically, take these changes from the OLTP system and dump them into the data warehouse. There are many tools out there that can help you fill this gap; SQL Server Integration Services happens to be one of them.

    c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move these changes to the warehouse. The question that remains is how the warehouse will record these changes. This structural change in the data warehouse arena is often covered under something called Slowly Changing Dimension (SCD). While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This would create multiple records for a given entity in a relational database, and data warehouses prefer having their own primary key, often known as a surrogate key.

    As I mentioned, a data warehouse is just a relational database, but industry often attributes a specific schema style to data warehouses. These styles are the Star Schema and the Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to a normalized one), which is easy to understand and use, easy to query and easy to slice and dice. A star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables hold these numbers and have references (foreign keys) to a set of tables that provide context around those facts. E.g. if you have recorded 10,000 USD as sales, that number would go in a sales fact table and could have foreign keys attached to it that refer to the sales agent responsible for the sale and to a time table which contains the dates between which that sale was made. These agent and time tables are called dimensions, and they provide context to the numbers stored in fact tables. This schema structure, with the fact at the center surrounded by dimensions, is called a star schema. A similar structure, with the difference that the dimension tables are normalized, is called a snowflake schema.

    This relational structure of facts and dimensions serves as input for another analysis structure called a cube. Though physically a cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it's a multidimensional structure where dimensions define the sides of the cube and facts define its content. Facts are often called measures inside a cube. Dimensions often tend to form a hierarchy; e.g. a product may be broken into categories, and categories in turn into individual items. Categories and items are often referred to as levels, and their constituents as members, with the overall structure called a hierarchy. Measures are rolled up as per the dimensional hierarchy, and these rolled-up measures are called aggregates. Now this may seem like an overwhelming vocabulary to deal with, but don't worry: it will sink in as you start working with cubes and the rest.

    Let's look at a few other terms that we run into while talking about data warehouses. ODS, or Operational Data Store, is a frequently misused term. There will be a few users in your organization that want to report on the most current data and can't afford to miss a single transaction for their report. Then there is another set of users that typically don't care how current the data is: mostly senior-level executives who are interested in trending, mining, forecasting, strategizing, etc. and don't care about one specific transaction. This is where an ODS can come in handy. An ODS can use the same star schema and the OLAP cubes we saw earlier. The only difference is that the data inside an ODS is short-lived, i.e. kept for a few months, and the ODS syncs with the OLTP system every few minutes. The data warehouse can periodically sync with the ODS, either daily or weekly, depending on business drivers.

    Data marts are another frequently talked-about topic in data warehousing. They are subject-specific data warehouses. Data warehouses that try to span an entire enterprise are normally too big to scope, build, manage, track, etc. Hence they are often scaled down to something called a data mart that supports a specific segment of business like sales, marketing, or support. Data marts, too, are often designed using the star schema model discussed earlier. Industry is divided when it comes to the use of data marts. Some experts prefer having data marts along with a central data warehouse: the data warehouse here acts as an information staging and distribution hub, with the spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse, citing that most users want to report on detailed data.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
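    To make the star schema concrete, here is a minimal DDL sketch of the sales example above (table and column names are illustrative, not from any product):

        -- Dimension: who made the sale (surrogate key, as discussed under SCD)
        CREATE TABLE dim_agent (
            agent_key  INT PRIMARY KEY,
            agent_name VARCHAR(100)
        );

        -- Dimension: when the sale was made
        CREATE TABLE dim_time (
            time_key  INT PRIMARY KEY,
            sale_date DATE
        );

        -- Fact: the measure, with foreign keys into the surrounding dimensions
        CREATE TABLE fact_sales (
            agent_key    INT REFERENCES dim_agent (agent_key),
            time_key     INT REFERENCES dim_time (time_key),
            sales_amount DECIMAL(12, 2)  -- e.g. 10000.00 USD
        );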

    Read the article

  • EM12c Release 4: New EMCLI Verbs

    - by SubinDaniVarughese
    Here are the new EM CLI verbs in Enterprise Manager 12c Release 4 (12.1.0.4). This list helps you in writing new scripts or enhancing your existing scripts for further automation.

Basic Administration Verbs
  invoke_ws - Invoke EM web service

ADM Verbs
  associate_target_to_adm - Associate a target to an application data model
  export_adm - Export Application Data Model to a specified .xml file
  import_adm - Import Application Data Model from a specified .xml file
  list_adms - List the names, target names and application suites of existing Application Data Models
  verify_adm - Submit an application data model verify job for the target specified

Agent Update Verbs
  get_agent_update_status - Show Agent Update results
  get_not_updatable_agents - Show non-updatable Agents
  get_updatable_agents - Show updatable Agents
  update_agents - Perform Agent Update prereqs and submit the Agent Update job

BI Publisher Reports Verbs
  grant_bipublisher_roles - Grants access to the BI Publisher catalog and features
  revoke_bipublisher_roles - Revokes access to the BI Publisher catalog and features

Blackout Verbs
  create_rbk - Create a retroactive blackout

CFW Verbs
  cancel_cloud_service_requests - Cancel cloud service requests
  delete_cloud_service_instances - Delete cloud service instances
  delete_cloud_user_objects - Delete cloud user objects
  get_cloud_service_instances - Get information about cloud service instances
  get_cloud_service_requests - Get information about cloud requests
  get_cloud_user_objects - Get information about cloud user objects

Chargeback Verbs
  add_chargeback_entity - Adds the given entity to Chargeback
  assign_charge_plan - Assign a plan to a chargeback entity
  assign_cost_center - Assign a cost center to a chargeback entity
  create_charge_entity_type - Create a charge entity type
  export_charge_plans - Exports charge plans metadata to file
  export_custom_charge_items - Exports user-defined charge items to a file
  import_charge_plans - Imports charge plans metadata from a given file
  import_custom_charge_items - Imports user-defined charge items metadata from a given file
  list_charge_plans - Gives a list of charge plans in Chargeback
  list_chargeback_entities - Gives a list of all the entities in Chargeback
  list_chargeback_entity_types - Gives a list of all the entity types that are supported in Chargeback
  list_cost_centers - Lists the cost centers in Chargeback
  remove_chargeback_entity - Removes the given entity from Chargeback
  unassign_charge_plan - Unassign the plan associated with a chargeback entity
  unassign_cost_center - Unassign the cost center associated with a chargeback entity

Configuration/Association History Verbs
  disable_config_history - Disable configuration history computation for a target type
  enable_config_history - Enable configuration history computation for a target type
  set_config_history_retention_period - Sets the amount of time for which Configuration History is retained

Configuration Compare Verbs
  config_compare - Submits the configuration comparison job
  get_config_templates - Gets all the comparison templates from the repository

Compliance Verbs
  fix_compliance_state - Fix compliance state by removing references to deleted targets

Credential Verbs
  update_credential_set - Update a credential set

Data Subset Verbs
  export_subset_definition - Exports the specified subset definition as an XML file at the specified directory path
  generate_subset - Generate a subset using the specified subset definition and target database
  import_subset_definition - Import a subset definition from the specified XML file
  import_subset_dump - Imports a dump file into the specified target database
  list_subset_definitions - Get the list of subset definition, ADM and target names

Delete Pluggable Database Job Verbs
  delete_pluggable_database - Delete a pluggable database

Deployment Procedure Verbs
  get_runtime_data - Get the runtime data of an execution

Discover and Push to Agents Verbs
  generate_discovery_input - Generate a Discovery Input file for discovering Auto-Discovered Domains
  refresh_fa - Refresh a Fusion Instance
  run_fa_diagnostics - Run Fusion Applications Diagnostics

Fusion Middleware Provisioning Verbs
  create_fmw_domain_profile - Create a Fusion Middleware Provisioning Profile from a WebLogic Domain
  create_fmw_home_profile - Create a Fusion Middleware Provisioning Profile from an Oracle Home
  create_inst_media_profile - Create a Fusion Middleware Provisioning Profile from Installation Media

Gold Agent Image Verbs
  create_gold_agent_image - Creates a gold agent image
  decouple_gold_agent_image - Decouples the agent from a gold agent image
  delete_gold_agent_image - Deletes a gold agent image
  get_gold_agent_image_activity_status - Gets gold agent image activity status
  get_gold_agent_image_details - Get the gold agent image details
  list_agents_on_gold_image - Lists agents on a gold agent image
  list_gold_agent_image_activities - Lists gold agent image activities
  list_gold_agent_image_series - Lists gold agent image series
  list_gold_agent_images - Lists the available gold agent images
  promote_gold_agent_image - Promotes a gold agent image
  stage_gold_agent_image - Stages a gold agent image

Incident Rules Verbs
  add_target_to_rule_set - Add a target to an enterprise rule set
  delete_incident_record - Delete one or more open incidents
  remove_target_from_rule_set - Remove a target from an enterprise rule set

Job Verbs
  export_jobs - Export job details into an XML file
  import_jobs - Import job definitions from an XML file
  job_input_file - Supply details for a job verb in a property file
  resume_job - Resume a job or set of jobs
  suspend_job - Suspend a job or set of jobs

Oracle Database as a Service Verbs
  config_db_service_target - Configure a DB Service target for OPC

Privilege Delegation Settings Verbs
  clear_default_privilege_delegation_setting - Clears the default privilege delegation setting for a given list of platforms
  set_default_privilege_delegation_setting - Sets the default privilege delegation setting for a given list of platforms
  test_privilege_delegation_setting - Tests a Privilege Delegation Setting on a host

SSA Verbs
  cleanup_dbaas_requests - Submit a cleanup request for a failed request
  create_dbaas_quota - Create a Database Quota for an SSA User Role
  create_service_template - Create a Service Template
  delete_dbaas_quota - Delete the Database Quota setup for an SSA User Role
  delete_service_template - Delete a given service template
  get_dbaas_quota - List the Database Quota setup for all SSA User Roles
  get_dbaas_request_settings - List the Database Request Settings
  get_service_template_detail - Get details of a given service template
  get_service_templates - Get the list of available service templates
  rename_service_template - Rename a given service template
  update_dbaas_quota - Update the Database Quota for an SSA User Role
  update_dbaas_request_settings - Update the Database Request Settings
  update_service_template - Update a given service template

Saved Configurations Verbs
  get_saved_configs - Gets the saved configurations from the repository

Server Generated Alert Metric Verbs
  validate_server_generated_alerts - Validate server-generated alert metrics

Services Verbs
  edit_sl_rule - Edit the service level rule for the specified service

Siebel Verbs
  list_siebel_enterprises - List Siebel enterprises currently monitored in EM
  list_siebel_servers - List Siebel servers under a specified Siebel enterprise
  update_siebel - Update a Siebel enterprise or its underlying servers

SiteGuard Verbs
  add_siteguard_aux_hosts - Associate new auxiliary hosts to the system
  configure_siteguard_lag - Configure apply lag and transport lag limit for databases
  delete_siteguard_aux_host - Delete an auxiliary host associated with a site
  delete_siteguard_lag - Erases the apply lag or transport lag limit for databases
  get_siteguard_aux_hosts - Get all auxiliary hosts associated with a site
  get_siteguard_health_checks - Shows the schedule of health checks
  get_siteguard_lag - Shows the apply lag or transport lag limit for databases
  schedule_siteguard_health_checks - Schedule health checks for an operation plan
  stop_siteguard_health_checks - Stops all future health check executions of an operation plan
  update_siteguard_lag - Updates the apply lag and transport lag limit for databases

Software Library Verbs
  stage_swlib_entity_files - Stage files of an entity from the Software Library to a host target

Target Data Verbs
  create_assoc - Creates target associations
  delete_assoc - Deletes target associations
  list_allowed_pairs - Lists allowed association types for specified source and destination
  list_assoc - Lists associations between source and destination targets
  manage_agent_partnership - Manages partnership between agents; used for explicitly assigning agent partnerships

Trace Reports Verbs
  generate_ui_trace_report - Generate and download a UI page performance report (to identify slow-rendering pages)

VI EMCLI Verbs
  add_virtual_platform - Add Oracle Virtual Platform(s)
  modify_virtual_platform - Modify an Oracle Virtual Platform

To get more details about each verb, execute:
$ emcli help <verb_name>
Example: $ emcli help list_assoc

New resources in the list verb

These are the new resources in the EM CLI list verb:

Certificates
  WLSCertificateDetails

Credential Resource Group
  PreferredCredentialsDefaultSystemScope - Preferred credentials (System Scope)
  PreferredCredentialsSystemScope - Target preferred credential

Privilege Delegation Settings
  TargetPrivilegeDelegationSettingDetails - List privilege delegation setting details on a host
  TargetPrivilegeDelegationSetting - List privilege delegation settings on a host
  PrivilegeDelegationSettings - Lists all Privilege Delegation Settings
  PrivilegeDelegationSettingDetails - Lists details of Privilege Delegation Settings

To get more details about each resource, execute:
$ emcli list -resource="<resource_name>" -help
Example: $ emcli list -resource="PrivilegeDelegationSettings" -help

Deprecated Verbs

Agent Administration Verbs
  resecure_agent - Resecure an agent

To get the complete list of verbs, execute:
$ emcli help

Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter
Download the Oracle Enterprise Manager 12c Mobile app
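As a quick postscript, a scripted session using these verbs typically starts with a login and ends with a logout. A minimal sketch (the username is an assumption; always confirm a verb's exact arguments with emcli help first):

$ emcli login -username=sysman
$ emcli help export_adm        # confirm the arguments the verb expects
$ emcli list_adms
$ emcli logout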

    Read the article

  • How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel

    - by Javier Treviño
    Fetching data from a database into an Excel spreadsheet for analysis, reporting, transformation, sharing, etc. is a very common task among users. There are several ways to extract data from a MySQL database and import it into Excel; for example, you can use MySQL Connector/ODBC to configure an ODBC connection to a MySQL database, then in Excel use the Data Connection Wizard to select the database and table from which you want to extract data, and finally specify which worksheet to put the data into. Another way is to dump a comma-delimited text file with the data from a MySQL table (using the MySQL Command Line Client, MySQL Workbench, etc.) and then open the file in Excel with the Text Import Wizard, hoping it splits the data into columns correctly. These methods work, but they require some degree of technical knowledge to make the magic happen, and they mean repeating several steps each time data needs to be imported from a MySQL table into an Excel spreadsheet. So, can this be done in an easier and faster way? With MySQL for Excel it can. MySQL for Excel features an Import MySQL Data action with which you can import data from a MySQL Table, View or Stored Procedure literally with a few clicks from within Excel. Following is a quick guide describing how to import data using MySQL for Excel. This guide assumes you already have a working MySQL Server instance, Microsoft Office Excel 2007 or 2010, and MySQL for Excel installed.

1. Opening MySQL for Excel
Being an Excel add-in, MySQL for Excel is opened from within Excel, so to use it open Excel, go to the Data tab located in the Ribbon and click MySQL for Excel at the far right of the Ribbon.

2. Creating a MySQL connection (may be optional)
If you have MySQL Workbench installed you will automatically see the same connections that you can see in MySQL Workbench, so you can use any of those and there may be no need to create a new connection. If you want to create a new connection (which normally you will do only once), in the Welcome Panel click New Connection, which opens the Setup New Connection dialog. Here you only need to give your new connection a distinctive Connection Name, specify the Hostname (or IP address) where the MySQL Server instance is running (if different from localhost), the Port to connect to, and the Username for the login. If you wish to test whether your setup is good to go, click Test Connection and an information dialog will pop up stating whether the connection is successful or errors were found.

3. Opening a connection to a MySQL Server
To open a pre-configured connection to a MySQL Server you just need to double-click it; the Connection Password dialog is then displayed, where you enter the password for the login.

4. Selecting a MySQL schema
After opening a connection to a MySQL Server, the Schema Selection Panel is shown, where you can select the schema that contains the Tables, Views and Stored Procedures you want to work with. To do so, either double-click the desired schema or select it and click Next >.

5. Importing data
All previous steps were really the basic minimum needed to drill down to the DB Object Selection Panel, where you can see the database objects (grouped by type: Tables, Views and Procedures, in that order) against which you want to perform actions; in the case of this guide, the action of importing data from them.

a. From a MySQL Table
To import from a Table you just need to select it from the list of Database Objects' Tables group; after selecting it you will note actions below the list become available, then click Import MySQL Data. The Import Data dialog is displayed; you can see some basic information here like the name of the Excel worksheet the data will be imported to (in the window title), the Table Name, the total Row Count and a 10-row preview of the data, meant for the user to see the columns that the table contains and to provide a way to select which columns to import. The Import Data dialog is designed with defaults in place so all data is imported (all rows and all columns) by just clicking Import; this is important to minimize the number of clicks needed to get the job done. After the import is performed you will have the data in the Excel worksheet formatted automatically. If you need to override the defaults in the Import Data dialog to change the columns selected for import or to change the number of imported rows, you can easily do so before clicking Import. In the screenshot below the defaults are overridden to import only the first 3 columns and rows 10 – 60 (Limit to 50 Rows and Start with Row 10). If the number of rows to be imported exceeds the maximum number of rows Excel can hold in its worksheet, a warning will be displayed in the dialog, meaning the imported number of rows will be limited by that maximum number (65,535 rows if the worksheet is in Compatibility Mode). In the screenshot below you can see the Table contains 80,559 rows, but only 65,534 rows will be imported since the first row is used for the column names if the Include Column Names as Headers checkbox is checked.

b. From a MySQL View
Similar to the way of importing from a Table, to import from a View you just need to select it from the list of Database Objects' Views group, then click Import MySQL Data. The Import Data dialog is displayed; identically to the way everything looks when importing from a table, the dialog displays the View Name, the total Row Count and the data preview grid. Since Views are really a filtered way to display data from Tables, it is as if we are extracting data from a Table, so the Import Data dialog is identical for those 2 Database Objects. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot. Note that you can override the defaults in the Import Data dialog in the same way described above for importing data from Tables. Also, the Compatibility Mode warning will be displayed if data exceeds the maximum number of rows explained before.

c. From a MySQL Procedure
To import from a Procedure you just need to select it from the list of Database Objects' Procedures group (note you can see Procedures here but not Functions, since the latter return a single value and so by design are filtered out). After the selection is made, click Import MySQL Data. The Import Data dialog is displayed, but this time you can see it looks different to the one used for Tables and Views. Given the nature of Stored Procedures, they first require that values are supplied for their Parameters, and Procedures can also return multiple Result Sets; so the Import Data dialog shows the Procedure Name and the Procedure Parameters in a grid where their values are input. After you supply the Parameter Values, click Call.

After calling the Procedure, the Result Sets returned by it are displayed at the bottom of the dialog; output parameters and the return value of the Procedure are appended as the last Result Set of the group. Each Result Set is displayed as a tab so you can see a preview of the returned data. You can specify whether you want to import the Selected Result Set (default), All Result Sets – Arranged Horizontally or All Result Sets – Arranged Vertically using the Import drop-down list; then click Import. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot. Note in this example all Result Sets were imported and arranged vertically.

As you can see, using MySQL for Excel importing data from a MySQL database becomes an easy task that requires very little technical knowledge, so it can be done by any type of user. Hope you enjoyed this guide! Remember that your feedback is very important for us, so drop us a message:
MySQL on Windows (this) Blog - https://blogs.oracle.com/MySqlOnWindows/
Forum - http://forums.mysql.com/list.php?172
Facebook - http://www.facebook.com/mysql
Cheers!
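As an aside, the command-line dump route mentioned at the start of this guide can also be scripted directly in SQL; a minimal sketch (the schema, table and file path are hypothetical, and the MySQL user needs the FILE privilege on the server):

-- Export a table to a comma-delimited file on the *server* host
SELECT *
INTO OUTFILE '/tmp/orders.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM sales.orders;

The resulting file can then be opened in Excel with the Text Import Wizard, though as described above MySQL for Excel removes the need for this round trip.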

    Read the article

  • Gnome Do not Launching

    - by PyRulez
    When I try running gnome do, I get this. chris@Chris-Ubuntu-Laptop:~$ gnome-do pgrep: invalid user name: -u and it is not writable Trying sudo: chris@Chris-Ubuntu-Laptop:~$ sudo gnome-do [NetworkService] Could not initialize Network Manager dbus: Unable to open the session message bus. [Error 17:54:30.122] [SystemService] Could not initialize dbus: Unable to open the session message bus. (Do:2401): Wnck-CRITICAL **: wnck_set_client_type got called multiple times. (Do:2401): libdo-WARNING **: Binding '<Super>space' failed! [Error 17:54:30.649] [AbstractKeyBindingService] Key "" is already mapped. Tomboy.NotesItemSource "Tomboy Notes" encountered an error in UpdateItems: System.TypeInitializationException: An exception was thrown by the type initializer for Tomboy.TomboyDBus ---> System.Exception: Unable to open the session message bus. ---> System.ArgumentNullException: Argument cannot be null. Parameter name: address at NDesk.DBus.Bus.Open (System.String address) [0x00000] in <filename unknown>:0 at NDesk.DBus.Bus.get_Session () [0x00000] in <filename unknown>:0 --- End of inner exception stack trace --- at NDesk.DBus.Bus.get_Session () [0x00000] in <filename unknown>:0 at Tomboy.TomboyDBus..cctor () [0x00000] in <filename unknown>:0 --- End of inner exception stack trace --- at Tomboy.NotesItemSource.UpdateItems () [0x00000] in <filename unknown>:0 at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0 . Firefox.PlacesItemSource "Firefox Places" encountered an error in UpdateItems: System.InvalidCastException: Cannot cast from source type to destination type. at Mono.Data.Sqlite.SqliteDataReader.VerifyType (Int32 i, DbType typ) [0x00000] in <filename unknown>:0 at Mono.Data.Sqlite.SqliteDataReader.GetString (Int32 i) [0x00000] in <filename unknown>:0 at Firefox.PlacesItemSource+<LoadPlaceItems>c__Iterator3.MoveNext () [0x00000] in <filename unknown>:0 at System.Collections.Generic.List`1[Firefox.PlaceItem].AddEnumerable (IEnumerable`1 enumerable) [0x00000] in <filename unknown>:0 at System.Collections.Generic.List`1[Firefox.PlaceItem]..ctor (IEnumerable`1 collection) [0x00000] in <filename unknown>:0 at System.Linq.Enumerable.ToArray[PlaceItem] (IEnumerable`1 source) [0x00000] in <filename unknown>:0 at Firefox.PlacesItemSource.UpdateItems () [0x00000] in <filename unknown>:0 at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0 . Do.Universe.Linux.GNOMESpecialLocationsItemSource "GNOME Special Locations" encountered an error in UpdateItems: System.IO.FileNotFoundException: Could not find file "/root/.gtk-bookmarks". 
File name: '/root/.gtk-bookmarks' at System.IO.FileStream..ctor (System.String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, Boolean anonymous, FileOptions options) [0x00000] in <filename unknown>:0 at System.IO.FileStream..ctor (System.String path, FileMode mode, FileAccess access, FileShare share) [0x00000] in <filename unknown>:0 at (wrapper remoting-invoke-with-check) System.IO.FileStream:.ctor (string,System.IO.FileMode,System.IO.FileAccess,System.IO.FileShare) at System.IO.File.OpenRead (System.String path) [0x00000] in <filename unknown>:0 at System.IO.StreamReader..ctor (System.String path, System.Text.Encoding encoding, Boolean detectEncodingFromByteOrderMarks, Int32 bufferSize) [0x00000] in <filename unknown>:0 at System.IO.StreamReader..ctor (System.String path) [0x00000] in <filename unknown>:0 at (wrapper remoting-invoke-with-check) System.IO.StreamReader:.ctor (string) at Do.Universe.Linux.GNOMESpecialLocationsItemSource+<ReadBookmarkItems>c__Iterator3.MoveNext () [0x00000] in <filename unknown>:0 at Do.Universe.Linux.GNOMESpecialLocationsItemSource.UpdateItems () [0x00000] in <filename unknown>:0 at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0 . ^[^\Full thread dump: "<unnamed thread>" tid=0x0xb7570700 this=0x0x56f18 thread handle 0x403 state : not waiting owns () at (wrapper managed-to-native) Mono.Unix.Native.Syscall.read (int,intptr,ulong) <0xffffffff> at Mono.Unix.Native.Syscall.read (int,void*,ulong) <0x00023> at Mono.Unix.UnixStream.Read (byte[],int,int) <0x0008b> at NDesk.DBus.Connection.ReadMessage () <0x0003c> at NDesk.DBus.Connection.Iterate () <0x0001b> at NDesk.DBus.BusG/<Init>c__AnonStorey0.<>m__0 (intptr,NDesk.GLib.IOCondition,intptr) <0x00033> at (wrapper native-to-managed) NDesk.DBus.BusG/<Init>c__AnonStorey0.<>m__0 (intptr,NDesk.GLib.IOCondition,intptr) <0xffffffff> at (wrapper managed-to-native) Gtk.Clipboard.gtk_clipboard_wait_is_text_available (intptr) <0xffffffff> at Gtk.Clipboard.WaitIsTextAvailable () <0x00017> at Do.Universe.SelectedTextItem.UpdateSelection (object,System.EventArgs) <0x00027> at Do.Platform.AbstractApplicationService.OnSummoned () <0x00025> at Do.Platform.ApplicationService.<ApplicationService>m__31 (object,System.EventArgs) <0x00013> at Do.Core.Controller.OnSummoned () <0x00025> at Do.Core.Controller.Summon () <0x00027> at Do.Do.Main (string[]) <0x001eb> at (wrapper runtime-invoke) <Module>.runtime_invoke_void_object (object,intptr,intptr,intptr) <0xffffffff> "<unnamed thread>" tid=0x0xb2c81b40 this=0x0x194150 thread handle 0x412 state : interrupted state owns () at (wrapper managed-to-native) System.IO.InotifyWatcher.ReadFromFD (intptr,byte[],intptr) <0xffffffff> at System.IO.InotifyWatcher.Monitor () <0x0005f> at System.Threading.Thread.StartInternal () <0x00057> at (wrapper runtime-invoke) object.runtime_invoke_void__this__ (object,intptr,intptr,intptr) <0xffffffff> "Universe Update Dispatcher" tid=0x0xb29ffb40 this=0x0x569d8 thread handle 0x41b state : interrupted state owns () at (wrapper managed-to-native) System.Threading.WaitHandle.WaitOne_internal (System.Threading.WaitHandle,intptr,int,bool) <0xffffffff> at System.Threading.WaitHandle.WaitOne (System.TimeSpan,bool) <0x00133> at System.Threading.WaitHandle.WaitOne (System.TimeSpan) <0x00022> at Do.Core.UniverseManager.UniverseUpdateLoop () <0x0007a> at System.Threading.Thread.StartInternal () <0x00057> at (wrapper runtime-invoke) object.runtime_invoke_void__this__ (object,intptr,intptr,intptr) <0xffffffff> 
Tomboy.NotesItemSource "Tomboy Notes" encountered an error in UpdateItems: System.TypeInitializationException: An exception was thrown by the type initializer for Tomboy.TomboyDBus ---> System.Exception: Unable to open the session message bus. ---> System.ArgumentNullException: Argument cannot be null. Parameter name: address at NDesk.DBus.Bus.Open (System.String address) [0x00000] in <filename unknown>:0 at NDesk.DBus.Bus.get_Session () [0x00000] in <filename unknown>:0 --- End of inner exception stack trace --- at NDesk.DBus.Bus.get_Session () [0x00000] in <filename unknown>:0 at Tomboy.TomboyDBus..cctor () [0x00000] in <filename unknown>:0 --- End of inner exception stack trace --- at Tomboy.NotesItemSource.UpdateItems () [0x00000] in <filename unknown>:0 at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0 . Firefox.PlacesItemSource "Firefox Places" encountered an error in UpdateItems: System.InvalidCastException: Cannot cast from source type to destination type. at Mono.Data.Sqlite.SqliteDataReader.VerifyType (Int32 i, DbType typ) [0x00000] in <filename unknown>:0 at Mono.Data.Sqlite.SqliteDataReader.GetString (Int32 i) [0x00000] in <filename unknown>:0 at Firefox.PlacesItemSource+<LoadPlaceItems>c__Iterator3.MoveNext () [0x00000] in <filename unknown>:0 at System.Collections.Generic.List`1[Firefox.PlaceItem].AddEnumerable (IEnumerable`1 enumerable) [0x00000] in <filename unknown>:0 at System.Collections.Generic.List`1[Firefox.PlaceItem]..ctor (IEnumerable`1 collection) [0x00000] in <filename unknown>:0 at System.Linq.Enumerable.ToArray[PlaceItem] (IEnumerable`1 source) [0x00000] in <filename unknown>:0 at Firefox.PlacesItemSource.UpdateItems () [0x00000] in <filename unknown>:0 at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0 . Do.Universe.Linux.GNOMESpecialLocationsItemSource "GNOME Special Locations" encountered an error in UpdateItems: System.IO.FileNotFoundException: Could not find file "/root/.gtk-bookmarks". File name: '/root/.gtk-bookmarks' at System.IO.FileStream..ctor (System.String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, Boolean anonymous, FileOptions options) [0x00000] in <filename unknown>:0 at System.IO.FileStream..ctor (System.String path, FileMode mode, FileAccess access, FileShare share) [0x00000] in <filename unknown>:0 at (wrapper remoting-invoke-with-check) System.IO.FileStream:.ctor (string,System.IO.FileMode,System.IO.FileAccess,System.IO.FileShare) at System.IO.File.OpenRead (System.String path) [0x00000] in <filename unknown>:0 at System.IO.StreamReader..ctor (System.String path, System.Text.Encoding encoding, Boolean detectEncodingFromByteOrderMarks, Int32 bufferSize) [0x00000] in <filename unknown>:0 at System.IO.StreamReader..ctor (System.String path) [0x00000] in <filename unknown>:0 at (wrapper remoting-invoke-with-check) System.IO.StreamReader:.ctor (string) at Do.Universe.Linux.GNOMESpecialLocationsItemSource+<ReadBookmarkItems>c__Iterator3.MoveNext () [0x00000] in <filename unknown>:0 at Do.Universe.Linux.GNOMESpecialLocationsItemSource.UpdateItems () [0x00000] in <filename unknown>:0 at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0 . It stops when I try my key combination, ctrl-alt-. It does not pop up though.

    Read the article

  • How I do VCS

    - by Wes McClure
    After years of dabbling with different version control systems and techniques, I wanted to share some of what I like and dislike in a few blog posts. To start this out, I want to talk about how I use VCS in a team environment. These come in a series of tips or best practices that I try to follow. Note: this list is subject to change in the future.

Always use some form of version control for all aspects of software development.
- Development is an evolution. Looking back at where we were is an invaluable asset in that process. This includes data schemas and documentation.
- Reverting / reapplying changes is absolutely critical for efficient development.
- The tools I use:
  - Code: Hg (preferred), SVN
  - Database: TSqlMigrations
  - Documents: Sometimes in the code repository, also SharePoint with versioning

Always tag a commit (changeset) with comments.
- This is a quick way to describe to someone else (or your future self) what the changeset entails.
- Be brief but courteous. One or two sentences about the task, not the actual changes.
- Use precommit hooks or set up the central repository to reject changes without comments.

Link changesets to documentation.
- If your project management system integrates with version control, or has a way to externally reference stories, tasks etc., then leave a reference in the commit. This helps locate more information about the commit and/or related changesets.
- It's best to have a precommit hook or system that requires this information, otherwise it's easy to forget.

Ability to work offline is required, including commits and history.
- Yes, this requires a DVCS locally, but it doesn't require the central repository to be a DVCS. I prefer to use either Git or Hg, but if it isn't possible to migrate the central repository, it's still possible for a developer to push / pull changes to that repository from a local Hg or Git repository.

Never lock resources (files) in a central repository… Rude!
- We have merge tools for a reason: merging sucked a long time ago, it doesn't anymore… stop locking files! This is unproductive, rude and annoying to other team members.

Always review everything in your commit.
- Never ever commit a set of files without reviewing the changes in each.
- Never add a file without asking yourself, deep down inside, does this belong?
- If you leave to make changes during a review, start the review over when you come back. Never assume you didn't touch a file; double check.
- This is another reason why you want to avoid large, infrequent commits.

Requirements for tools:
- Quickly show pending changes for the entire repository.
- Default action for a resource with pending changes is a diff.
- Pluggable diff & merge tool.
- Produce a unified diff or a diff of all changes. This is helpful to bulk review changes instead of opening each file.

The central repository is not your own personal dump yard. Breaking this rule is a sure fire way to get the F bomb dropped in front of your name, multiple times.

If you turn on Visual Studio's commit on closing studio option, I will personally break your fingers. By the way, the person(s) in charge of this feature should be fired and never be allowed near programming, ever again.

Commit (integrate) to the central repository / branch frequently.
- I try to do this before leaving each day, especially without a DVCS. One never knows when they might need to work from remote the following day.

Never commit commented out code.
- If it isn't needed anymore, delete it!
- If you aren't sure if it might be useful in the future, delete it! This is why we have history.
- If you don't know why it's commented out, figure it out and then either uncomment it or delete it.

Don't commit build artifacts, user preferences and temporary files.
- Build artifacts do not belong in VCS; everything in them is present in the code. (ie: bin\*, obj\*, *.dll, *.exe)
- User preferences are your settings; stop overriding my preferences files! (ie: *.suo and *.user files)
- Most tools allow you to ignore certain files, and Hg/Git allow you to version this as an ignore file. Set this up as a first step when creating a new repository (see the sketch at the end of this post).

Be polite when merging unresolved conflicts.
- Count to 10, cuss, grab a stress ball and realize it's not a big deal. Actually, it's an opportunity to let you know that someone else is working in the same area and you might want to communicate with them.
- Following the other rules, especially committing frequently, will reduce the likelihood of this.
- Suck it up, we all have to deal with this unintended consequence at times. Just be careful and GET FAMILIAR with your merge tool. It's really not as scary as you think. I personally prefer KDiff3 as its merging capabilities rock.
- Don't blindly merge and then blindly commit your changes; this is rude and unprofessional. Make sure you understand why the conflict occurred and which parts of the code you want to keep. Apply scrutiny when you commit a manual merge: review the diff! Make sure you test the changes (build and run automated tests).

Become intimate with your version control system and the tools you use with it.
- Avoid trial and error as much as is possible; sit down and test the tool out, read some tutorials etc. Create test repositories and walk through common scenarios.
- Find the most efficient way to do your work. These tools will be used repetitively, so inefficiencies will add up. Sometimes this involves a mix of tools, both GUI and CLI. I like a combination of both Tortoise Hg and the hg cli to get the job done efficiently.

Always tag releases.
- Create a way to find a given release, whether this be in comments or an explicit tag / branch. This should be readily discoverable.
- Create release branches to patch bugs and then merge the changes back to other development branch(es).

If using feature branches, strive for periodic integrations.
- Feature branches often cause forked code that becomes irreconcilable. Strive to re-integrate somewhat frequently with the branch this code will ultimately be merged into. This will avoid merge conflicts in the future.
- Feature branches are best when they are mutually exclusive of active development in other branches.

Use and abuse local commits, at least one per task in a story.
- This builds a trail of changes in your local repository that can be pushed to a central repository when the story is complete.

Never commit a broken build or failing tests to the central repository.
- It's ok for a local commit to break the build and/or tests. In fact, I encourage this if it helps group the changes more logically. This is one of the main reasons I got excited about DVCS: when I wanted more than one changeset for a set of pending changes but some files could be grouped into both changesets (like solution file / project file changes).

If you have more than a dozen outstanding changed resources, there should probably be more than one commit involved.
- Exceptions when maintaining code bases that require shotgun surgery; in this case, it's a design smell :)

Don't version sensitive information.
- Especially usernames / passwords.

There is one area I haven't found a solution I like yet: versioning 3rd party libraries and/or code. I really dislike keeping any assemblies in the repository, but it seems to be a common practice for external libraries. Please feel free to share your ideas about this below.

-Wes
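As a concrete example of the ignore-file tip above, here is a minimal Hg .hgignore (a sketch; the exact patterns depend on your project) covering the build artifacts and preference files called out in this post:

syntax: glob
bin/*
obj/*
*.dll
*.exe
*.suo
*.user

Commit this file at the root of the repository as one of the first changesets; Git users would put equivalent patterns in a .gitignore file, which needs no syntax header.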

    Read the article

  • How to solve an EXCEPTION_PRIV_INSTRUCTION error while running a desktop project? [on hold]

    - by Haritha
    While running desktop project im getting exception_priv _instruction how to solve this??? while running this page is coming # # A fatal error has been detected by the Java Runtime Environment: # # EXCEPTION_PRIV_INSTRUCTION (0xc0000096) at pc=0x02f5a92b, pid=3012, tid=3104 # # JRE version: 7.0-b147 # Java VM: Java HotSpot(TM) Client VM (21.0-b17 mixed mode, sharing windows-x86 ) # Problematic frame: # C 0x02f5a92b # # Failed to write core dump. Minidumps are not enabled by default on client versions of Windows # # If you would like to submit a bug report, please visit: # http://bugreport.sun.com/bugreport/crash.jsp # The crash happened outside the Java Virtual Machine in native code. # See problematic frame for where to report the bug. # --------------- T H R E A D --------------- Current thread (0x02f5a800): JavaThread "LWJGL Application" [_thread_in_native, id=3104, stack(0x076f0000,0x07740000)] siginfo: ExceptionCode=0xc0000096 Registers: EAX=0x000df4f0, EBX=0x32afc180, ECX=0x000df4f0, EDX=0x00000020 ESP=0x0773f768, EBP=0x0773f790, ESI=0x32afc180, EDI=0x02f5a800 EIP=0x02f5a92b, EFLAGS=0x00010206 Top of Stack: (sp=0x0773f768) 0x0773f768: 02bd429c 02bd429c 0773f770 32afc180 0x0773f778: 0773f7b8 32b022c8 00000000 32afc180 0x0773f788: 00000000 0773f7a0 0773f7dc 00943187 0x0773f798: 229ec1c0 00948839 69081736 00000000 0x0773f7a8: 089b0048 00000000 00000014 00001406 0x0773f7b8: 00000002 0773f7bc 32afbeb0 0773f7f8 0x0773f7c8: 32b022c8 00000000 32afbf00 0773f7a0 0x0773f7d8: 0773f7f0 0773f81c 00943187 69081736 Instructions: (pc=0x02f5a92b) 0x02f5a90b: 00 43 00 00 00 00 f0 bc 02 e8 00 e9 22 40 f7 73 0x02f5a91b: 07 85 a5 94 00 90 f7 73 07 50 cc a0 6d d8 49 c0 0x02f5a92b: 6d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x02f5a93b: 00 00 00 00 00 00 00 00 00 08 80 3d 37 00 00 00 Register to memory mapping: EAX=0x000df4f0 is an unknown value EBX=0x32afc180 is an oop {method} - klass: {other class} ECX=0x000df4f0 is an unknown value EDX=0x00000020 is an unknown value ESP=0x0773f768 is pointing into the stack for thread: 0x02f5a800 EBP=0x0773f790 is pointing into the stack for thread: 0x02f5a800 ESI=0x32afc180 is an oop {method} - klass: {other class} EDI=0x02f5a800 is a thread Stack: [0x076f0000,0x07740000], sp=0x0773f768, free space=317k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) C 0x02f5a92b j org.lwjgl.opengl.GL11.glVertexPointer(IILjava/nio/FloatBuffer;)V+48 j com.badlogic.gdx.backends.lwjgl.LwjglGL10.glVertexPointer(IIILjava/nio/Buffer;)V+53 j com.badlogic.gdx.graphics.glutils.VertexArray.bind()V+149 j com.badlogic.gdx.graphics.Mesh.bind()V+25 j com.badlogic.gdx.graphics.Mesh.render(IIIZ)V+32 j com.badlogic.gdx.graphics.Mesh.render(III)V+8 j com.badlogic.gdx.graphics.g2d.SpriteBatch.flush()V+197 j com.badlogic.gdx.graphics.g2d.SpriteBatch.switchTexture(Lcom/badlogic/gdx/graphics/Texture;)V+1 j com.badlogic.gdx.graphics.g2d.SpriteBatch.draw(Lcom/badlogic/gdx/graphics/Texture;FFFF)V+33 j sevenseas.game.WorldRenderer.drawBob()V+54 j sevenseas.game.WorldRenderer.render()V+12 j sevenseas.game.GameClass.render(F)V+38 j com.badlogic.gdx.Game.render()V+19 j com.badlogic.gdx.backends.lwjgl.LwjglApplication.mainLoop()V+642 j com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run()V+27 v ~StubRoutines::call_stub V [jvm.dll+0x122c7e] V [jvm.dll+0x1c9c0e] V [jvm.dll+0x122e73] V [jvm.dll+0x122ed7] V [jvm.dll+0xccd1f] V [jvm.dll+0x14433f] V [jvm.dll+0x171549] C [msvcr100.dll+0x5c6de] endthreadex+0x3a C [msvcr100.dll+0x5c788] endthreadex+0xe4 C [kernel32.dll+0xb713] 
GetModuleFileNameA+0x1b4 Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) j org.lwjgl.opengl.GL11.nglVertexPointer(IIIJJ)V+0 j org.lwjgl.opengl.GL11.glVertexPointer(IILjava/nio/FloatBuffer;)V+48 j com.badlogic.gdx.backends.lwjgl.LwjglGL10.glVertexPointer(IIILjava/nio/Buffer;)V+53 j com.badlogic.gdx.graphics.glutils.VertexArray.bind()V+149 j com.badlogic.gdx.graphics.Mesh.bind()V+25 j com.badlogic.gdx.graphics.Mesh.render(IIIZ)V+32 j com.badlogic.gdx.graphics.Mesh.render(III)V+8 j com.badlogic.gdx.graphics.g2d.SpriteBatch.flush()V+197 j com.badlogic.gdx.graphics.g2d.SpriteBatch.switchTexture(Lcom/badlogic/gdx/graphics/Texture;)V+1 j com.badlogic.gdx.graphics.g2d.SpriteBatch.draw(Lcom/badlogic/gdx/graphics/Texture;FFFF)V+33 j sevenseas.game.WorldRenderer.drawBob()V+54 j sevenseas.game.WorldRenderer.render()V+12 j sevenseas.game.GameClass.render(F)V+38 j com.badlogic.gdx.Game.render()V+19 j com.badlogic.gdx.backends.lwjgl.LwjglApplication.mainLoop()V+642 j com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run()V+27 v ~StubRoutines::call_stub --------------- P R O C E S S --------------- Java Threads: ( => current thread ) 0x003d6c00 JavaThread "DestroyJavaVM" [_thread_blocked, id=3240, stack(0x008c0000,0x00910000)] =>0x02f5a800 JavaThread "LWJGL Application" [_thread_in_native, id=3104, stack(0x076f0000,0x07740000)] 0x02bcf000 JavaThread "Service Thread" daemon [_thread_blocked, id=2612, stack(0x02e00000,0x02e50000)] 0x02bc1000 JavaThread "C1 CompilerThread0" daemon [_thread_blocked, id=2776, stack(0x02db0000,0x02e00000)] 0x02bbf400 JavaThread "Attach Listener" daemon [_thread_blocked, id=2448, stack(0x02d60000,0x02db0000)] 0x02bbe000 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=1764, stack(0x02d10000,0x02d60000)] 0x02bb8000 JavaThread "Finalizer" daemon [_thread_blocked, id=3864, stack(0x02cc0000,0x02d10000)] 0x02bb3400 JavaThread "Reference Handler" daemon [_thread_blocked, id=2424, stack(0x02c70000,0x02cc0000)] Other Threads: 0x02bb1800 VMThread [stack: 0x02c20000,0x02c70000] [id=3076] 0x02bd1000 WatcherThread [stack: 0x02e50000,0x02ea0000] [id=3276] VM state:not at safepoint (normal execution) VM Mutex/Monitor currently owned by a thread: None Heap def new generation total 4928K, used 2571K [0x229c0000, 0x22f10000, 0x27f10000) eden space 4416K, 46% used [0x229c0000, 0x22bc2e38, 0x22e10000) from space 512K, 100% used [0x22e90000, 0x22f10000, 0x22f10000) to space 512K, 0% used [0x22e10000, 0x22e10000, 0x22e90000) tenured generation total 10944K, used 634K [0x27f10000, 0x289c0000, 0x329c0000) the space 10944K, 5% used [0x27f10000, 0x27faea60, 0x27faec00, 0x289c0000) compacting perm gen total 12288K, used 1655K [0x329c0000, 0x335c0000, 0x369c0000) the space 12288K, 13% used [0x329c0000, 0x32b5dc58, 0x32b5de00, 0x335c0000) ro space 10240K, 42% used [0x369c0000, 0x36dfc660, 0x36dfc800, 0x373c0000) rw space 12288K, 53% used [0x373c0000, 0x37a38180, 0x37a38200, 0x37fc0000) Code Cache [0x00940000, 0x009d8000, 0x02940000) total_blobs=305 nmethods=80 adapters=158 free_code_cache=32183Kb largest_free_block=32955904 Dynamic libraries: 0x00400000 - 0x0042f000 C:\Program Files\Java\jre7\bin\javaw.exe 0x7c900000 - 0x7c9af000 C:\WINDOWS\system32\ntdll.dll 0x7c800000 - 0x7c8f6000 C:\WINDOWS\system32\kernel32.dll 0x77dd0000 - 0x77e6b000 C:\WINDOWS\system32\ADVAPI32.dll 0x77e70000 - 0x77f02000 C:\WINDOWS\system32\RPCRT4.dll 0x77fe0000 - 0x77ff1000 C:\WINDOWS\system32\Secur32.dll 0x7e410000 - 0x7e4a1000 C:\WINDOWS\system32\USER32.dll 0x77f10000 - 0x77f59000 
C:\WINDOWS\system32\GDI32.dll 0x773d0000 - 0x774d3000 C:\WINDOWS\WinSxS\x86_Microsoft.Windows.Common-Controls_6595b64144ccf1df_6.0.2600.5512_x-ww_35d4ce83\COMCTL32.dll 0x77c10000 - 0x77c68000 C:\WINDOWS\system32\msvcrt.dll 0x77f60000 - 0x77fd6000 C:\WINDOWS\system32\SHLWAPI.dll 0x76390000 - 0x763ad000 C:\WINDOWS\system32\IMM32.DLL 0x629c0000 - 0x629c9000 C:\WINDOWS\system32\LPK.DLL 0x74d90000 - 0x74dfb000 C:\WINDOWS\system32\USP10.dll 0x78aa0000 - 0x78b5e000 C:\Program Files\Java\jre7\bin\msvcr100.dll 0x6d940000 - 0x6dc61000 C:\Program Files\Java\jre7\bin\client\jvm.dll 0x71ad0000 - 0x71ad9000 C:\WINDOWS\system32\WSOCK32.dll 0x71ab0000 - 0x71ac7000 C:\WINDOWS\system32\WS2_32.dll 0x71aa0000 - 0x71aa8000 C:\WINDOWS\system32\WS2HELP.dll 0x76b40000 - 0x76b6d000 C:\WINDOWS\system32\WINMM.dll 0x76bf0000 - 0x76bfb000 C:\WINDOWS\system32\PSAPI.DLL 0x6d8d0000 - 0x6d8dc000 C:\Program Files\Java\jre7\bin\verify.dll 0x6d370000 - 0x6d390000 C:\Program Files\Java\jre7\bin\java.dll 0x6d920000 - 0x6d933000 C:\Program Files\Java\jre7\bin\zip.dll 0x6cec0000 - 0x6cf42000 C:\Documents and Settings\7stl0225\Local Settings\Temp\libgdx7stl0225\37fe1abc\gdx.dll 0x10000000 - 0x1004c000 C:\Documents and Settings\7stl0225\Local Settings\Temp\libgdx7stl0225\52d76f2b\lwjgl.dll 0x5ed00000 - 0x5edcc000 C:\WINDOWS\system32\OPENGL32.dll 0x68b20000 - 0x68b40000 C:\WINDOWS\system32\GLU32.dll 0x73760000 - 0x737ab000 C:\WINDOWS\system32\DDRAW.dll 0x73bc0000 - 0x73bc6000 C:\WINDOWS\system32\DCIMAN32.dll 0x77c00000 - 0x77c08000 C:\WINDOWS\system32\VERSION.dll 0x070b0000 - 0x07115000 C:\DOCUME~1\7stl0225\LOCALS~1\Temp\libgdx7stl0225\52d76f2b\OpenAL32.dll 0x7c9c0000 - 0x7d1d7000 C:\WINDOWS\system32\SHELL32.dll 0x774e0000 - 0x7761d000 C:\WINDOWS\system32\ole32.dll 0x5ad70000 - 0x5ada8000 C:\WINDOWS\system32\uxtheme.dll 0x76fd0000 - 0x7704f000 C:\WINDOWS\system32\CLBCATQ.DLL 0x77050000 - 0x77115000 C:\WINDOWS\system32\COMRes.dll 0x77120000 - 0x771ab000 C:\WINDOWS\system32\OLEAUT32.dll 0x73f10000 - 0x73f6c000 C:\WINDOWS\system32\dsound.dll 0x76c30000 - 0x76c5e000 C:\WINDOWS\system32\WINTRUST.dll 0x77a80000 - 0x77b15000 C:\WINDOWS\system32\CRYPT32.dll 0x77b20000 - 0x77b32000 C:\WINDOWS\system32\MSASN1.dll 0x76c90000 - 0x76cb8000 C:\WINDOWS\system32\IMAGEHLP.dll 0x72d20000 - 0x72d29000 C:\WINDOWS\system32\wdmaud.drv 0x72d10000 - 0x72d18000 C:\WINDOWS\system32\msacm32.drv 0x77be0000 - 0x77bf5000 C:\WINDOWS\system32\MSACM32.dll 0x77bd0000 - 0x77bd7000 C:\WINDOWS\system32\midimap.dll 0x73ee0000 - 0x73ee4000 C:\WINDOWS\system32\KsUser.dll 0x755c0000 - 0x755ee000 C:\WINDOWS\system32\msctfime.ime 0x69000000 - 0x691a9000 C:\WINDOWS\system32\sisgl.dll 0x73b30000 - 0x73b45000 C:\WINDOWS\system32\mscms.dll 0x73000000 - 0x73026000 C:\WINDOWS\system32\WINSPOOL.DRV 0x66e90000 - 0x66ed1000 C:\WINDOWS\system32\icm32.dll 0x07760000 - 0x0778d000 C:\Program Files\WordWeb\WHook.dll 0x74c80000 - 0x74cac000 C:\WINDOWS\system32\OLEACC.dll 0x76080000 - 0x760e5000 C:\WINDOWS\system32\MSVCP60.dll VM Arguments: jvm_args: -Dfile.encoding=Cp1252 java_command: sevenseas.game.MainDesktop Launcher Type: SUN_STANDARD Environment Variables: PATH=C:/Program Files/Java/jre7/bin/client;C:/Program Files/Java/jre7/bin;C:/Program Files/Java/jre7/lib/i386;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files\Java\jdk1.7.0\bin;C:\eclipse; USERNAME=7stl0225 OS=Windows_NT PROCESSOR_IDENTIFIER=x86 Family 15 Model 4 Stepping 1, GenuineIntel --------------- S Y S T E M --------------- OS: Windows XP Build 2600 Service Pack 3 CPU:total 1 (1 cores per cpu, 1 
threads per core) family 15 model 4 stepping 1, cmov, cx8, fxsr, mmx, sse, sse2, sse3 Memory: 4k page, physical 2031088k(939252k free), swap 3969920k(3011396k free) vm_info: Java HotSpot(TM) Client VM (21.0-b17) for windows-x86 JRE (1.7.0-b147), built on Jun 27 2011 02:25:52 by "java_re" with unknown MS VC++:1600 time: Sat Oct 26 12:35:14 2013 elapsed time: 0 seconds

    Read the article

  • Hard drive mounted at /, duplicate mounted hard drive after using Mount Manager

    - by HellHarvest
    Possible duplicate post. I'm running 12.04 64-bit. My system is a dual boot for both Ubuntu and Windows 7. Both operating systems are sharing the drive named "Elements". My volume named "Elements" is a 1TB SATA NTFS hard drive that shows up twice in the sidebar in Nautilus. One of the icons is functional and even has the convenient "eject" icon next to it. Below is a picture of the left menu in Nautilus, with the System Monitor - File Systems tab open on top of it. Can someone advise me about how to get rid of this extra icon? I think the problem is much more deep-rooted than just a GUI glitch on Nautilus' part. The other icon does nothing but spit out the following error when I click on it (image below). This only happened AFTER I tried using Mount Manager to automate mounting the drive at start up. I've already uninstalled Mount Manager, and restarted, but the problem didn't go away. The hard drive does mount automatically now, so I guess that's cool. But now, every time I boot up and open Nautilus, BOTH of these icons appear, one of which is fictitious and useless. According to the image above and the outputs of several other commands, it appears to be mounted at /. In which case, no matter where I am in Nautilus when I try to click on that icon, of course it will tell me that that drive is in use by another program... Nautilus. I'm afraid of trying to unmount this hard drive (sdb6) because of where it appears to be mounted. I'm kind of a noob, and I have this gut feeling that tells me trying to unmount a drive at / will destroy my entire file system. This fear was further strengthened by the output of "$ fsck" at the very bottom of this post.

Error immediately below when that 2nd "Elements" hard drive is clicked in Nautilus:

Unable to mount Elements
Mount is denied because the NTFS volume is already exclusively opened. The volume may be already mounted, or another software may use it which could be identified for example by the help of the 'fuser' command.

It's odd to me that the error message above claims that it's an NTFS volume when everything else tells me that it's an ext4 volume. The actual hard drive "Elements" is in fact an NTFS volume.
Here's the output of a few commands and configuration files that may be of interest: $ fuser -a / /: 2120r 2159rc 2160rc 2172r 2178rc 2180rc 2188r 2191rc 2200rc 2203rc 2205rc 2206r 2211r 2212r 2214r 2220r 2228r 2234rc 2246rc 2249rc 2254rc 2260rc 2261r 2262r 2277rc 2287rc 2291rc 2311rc 2313rc 2332rc 2334rc 2339rc 2343rc 2344rc 2352rc 2372rc 2389rc 2422r 2490r 2496rc 2501rc 2566r 2573rc 2581rc 2589rc 2592r 2603r 2611rc 2613rc 2615rc 2678rc 2927r 2981r 3104rc 4156rc 4196rc 4206rc 4213rc 4240rc 4297rc 5032rc 7609r 7613r 7648r 9593rc 18829r 18833r 19776r $ sudo df -h Filesystem Size Used Avail Use% Mounted on /dev/sdb6 496G 366G 106G 78% / udev 2.0G 4.0K 2.0G 1% /dev tmpfs 791M 1.5M 790M 1% /run none 5.0M 0 5.0M 0% /run/lock none 2.0G 672K 2.0G 1% /run/shm /dev/sda1 932G 312G 620G 34% /media/Elements /home/solderblob/.Private 496G 366G 106G 78% /home/solderblob /dev/sdb2 188G 100G 88G 54% /media/A2B24EACB24E852F /dev/sdb1 100M 25M 76M 25% /media/System Reserved $ sudo fdisk -l Disk /dev/sda: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00093cab Device Boot Start End Blocks Id System /dev/sda1 2048 1953519615 976758784 7 HPFS/NTFS/exFAT Disk /dev/sdb: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000e8d9b Device Boot Start End Blocks Id System /dev/sdb1 * 2048 206847 102400 7 HPFS/NTFS/exFAT /dev/sdb2 206848 392378768 196085960+ 7 HPFS/NTFS/exFAT /dev/sdb3 392380414 1465147391 536383489 5 Extended /dev/sdb5 1456762880 1465147391 4192256 82 Linux swap / Solaris /dev/sdb6 392380416 1448374271 527996928 83 Linux /dev/sdb7 1448376320 1456758783 4191232 82 Linux swap / Solaris Partition table entries are not in disk order $ cat /etc/fstab # <file system> <mount point> <type> <options> <dump> <pass> UUID=77039a2a-83d4-47a1-8a8c-a2ec4e4dfd0e / ext4 defaults 0 1 UUID=F6549CC4549C88CF /media/Elements ntfs-3g users 0 0 $ sudo blkid /dev/sda1: LABEL="Elements" UUID="F6549CC4549C88CF" TYPE="ntfs" /dev/sdb1: LABEL="System Reserved" UUID="5CDE130FDE12E156" TYPE="ntfs" /dev/sdb2: UUID="A2B24EACB24E852F" TYPE="ntfs" /dev/sdb6: UUID="77039a2a-83d4-47a1-8a8c-a2ec4e4dfd0e" TYPE="ext4" $ sudo blkid -c /dev/null (appears to be exactly the same as above) /dev/sda1: LABEL="Elements" UUID="F6549CC4549C88CF" TYPE="ntfs" /dev/sdb1: LABEL="System Reserved" UUID="5CDE130FDE12E156" TYPE="ntfs" /dev/sdb2: UUID="A2B24EACB24E852F" TYPE="ntfs" /dev/sdb6: UUID="77039a2a-83d4-47a1-8a8c-a2ec4e4dfd0e" TYPE="ext4" $ mount /dev/sdb6 on / type ext4 (rw) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) /dev/sda1 on /media/Elements type fuseblk (rw,noexec,nosuid,nodev,allow_other,blksize=4096) binfmt_misc on /proc/sys/fs/binfmt_misc type 
binfmt_misc (rw,noexec,nosuid,nodev) /home/solderblob/.Private on /home/solderblob type ecryptfs (ecryptfs_check_dev_ruid,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs,ecryptfs_sig=76a47b0175afa48d,ecryptfs_fnek_sig=391b2d8b155215f7) gvfs-fuse-daemon on /home/solderblob/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=solderblob) /dev/sdb2 on /media/A2B24EACB24E852F type fuseblk (rw,nosuid,nodev,allow_other,default_permissions,blksize=4096) /dev/sdb1 on /media/System Reserved type fuseblk (rw,nosuid,nodev,allow_other,default_permissions,blksize=4096) $ ls -a . A2B24EACB24E852F Ubuntu 12.04.1 LTS amd64 .. Elements System Reserved $ cat /proc/mounts rootfs / rootfs rw 0 0 sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0 proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0 udev /dev devtmpfs rw,relatime,size=2013000k,nr_inodes=503250,mode=755 0 0 devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0 tmpfs /run tmpfs rw,nosuid,relatime,size=809872k,mode=755 0 0 /dev/disk/by-uuid/77039a2a-83d4-47a1-8a8c-a2ec4e4dfd0e / ext4 rw,relatime,user_xattr,acl,barrier=1,data=ordered 0 0 none /sys/fs/fuse/connections fusectl rw,relatime 0 0 none /sys/kernel/debug debugfs rw,relatime 0 0 none /sys/kernel/security securityfs rw,relatime 0 0 none /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0 none /run/shm tmpfs rw,nosuid,nodev,relatime 0 0 /dev/sda1 /media/Elements fuseblk rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,allow_other,blksize=4096 0 0 binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0 /home/solderblob/.Private /home/solderblob ecryptfs rw,relatime,ecryptfs_fnek_sig=391b2d8b155215f7,ecryptfs_sig=76a47b0175afa48d,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs 0 0 gvfs-fuse-daemon /home/solderblob/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0 /dev/sdb2 /media/A2B24EACB24E852F fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 0 0 /dev/sdb1 /media/System\040Reserved fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 0 0 gvfs-fuse-daemon /root/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=0,group_id=0 0 0 $ fsck fsck from util-linux 2.20.1 e2fsck 1.42 (29-Nov-2011) /dev/sdb6 is mounted. WARNING!!! The filesystem is mounted. If you continue you ***WILL*** cause ***SEVERE*** filesystem damage. Do you really want to continue<n>? no check aborted.

    Read the article

  • Service Broker, not ETL

    - by jamiet
    I have been very quiet on this blog of late and one reason for that is I have been very busy on a client project that I would like to talk about a little here. The client that I have been working for has a website that runs on a distributed architecture utilising a messaging infrastructure for communication between different endpoints. My brief was to build a system that could consume these messages and produce analytical information in near-real-time. More specifically, I basically had to deliver a data warehouse; however, it was the real-time aspect of the project that really intrigued me. This real-time requirement meant that using an Extract, Transform, Load (ETL) tool was out of the question and so I had no choice but to write T-SQL code (i.e. stored procedures) to process the incoming messages and load the data into the data warehouse. This concerned me though – I had no way to control the rate at which data would arrive into the system, yet we were going to have end-users querying the system at the same time that those messages were arriving; the potential for contention in such a scenario was pretty high and was something I wanted to minimise as much as possible. Moreover, I did not want the processing of data inside the data warehouse to have any impact on the customer-facing website.

As you have probably guessed from the title of this blog post, this is where Service Broker stepped in! For those that have not heard of it, Service Broker is a queuing technology that has been built into SQL Server since SQL Server 2005. It provides a number of features; however, the one that was of interest to me was the fact that it facilitates asynchronous data processing which, in layman's terms, means the ability to process some data without requiring the system that supplied the data to wait for the response. That was a crucial feature because on this project the customer-facing website (in effect an OLTP system) would be calling one of our stored procedures with each message – we did not want to cause the OLTP system to wait on us every time we processed one of those messages. This asynchronous nature also helps to alleviate the contention problem because the asynchronous processing activity is handled just like any other task in the database engine and hence can wait on another task (such as an end-user query).

Service Broker it was then! The stored procedure called by the OLTP system would simply put the message onto a queue and we would use a feature called activation to pick each message off the queue in turn and process it into the warehouse. At the time of writing the system is not yet up to full capacity but so far everything seems to be working OK (touch wood) and crucially our users are seeing data in near-real-time. By near-real-time I am talking about latencies of a few minutes at most, and to someone like me who is used to building systems that have overnight latencies that is a huge step forward!

So then, am I advocating that you all go out and dump your ETL tools? Of course not, no! What this project has taught me though is that in certain scenarios there may be better ways to implement a data warehouse system than the traditional "load data in overnight" approach that we are all used to. Moreover, I have really enjoyed getting to grips with a new technology, and even if you don't want to use Service Broker you might want to consider asynchronous messaging architectures for your BI/data warehousing solutions in the future.
This has been a very high level overview of my use of Service Broker and I have deliberately left out much of the minutiae of what has been a very challenging implementation. Nonetheless I hope I have caused you to reflect upon your own approaches to BI and question whether other approaches may be more tenable. All comments and questions gratefully received!

Lastly, if you have never used Service Broker before and want to kick the tyres, I have provided below a very simple "Service Broker Hello World" script that will create all of the objects required to facilitate Service Broker communications and then send the message "Hello World" from one place to another! This doesn't represent a "proper" implementation per se because it doesn't close down conversation objects (which you should always do in a real-world scenario), but it's enough to demonstrate the capabilities!

@Jamiet

-----------------------------------------------------------------------------------------------
/* This is a basic Service Broker Hello World app. Have fun! -Jamie */
USE MASTER
GO
CREATE DATABASE SBTest
GO
--Turn Service Broker on!
ALTER DATABASE SBTest SET ENABLE_BROKER
GO
USE SBTest
GO
-- 1) We need to create a message type. Note that our message type is
-- very simple and allows any type of content
CREATE MESSAGE TYPE HelloMessage VALIDATION = NONE
GO
-- 2) Once the message type has been created, we need to create a contract
-- that specifies who can send what types of messages
CREATE CONTRACT HelloContract (HelloMessage SENT BY INITIATOR)
GO
--We can query the metadata of the objects we just created
SELECT * FROM sys.service_message_types WHERE name = 'HelloMessage';
SELECT * FROM sys.service_contracts WHERE name = 'HelloContract';
SELECT * FROM sys.service_contract_message_usages
WHERE service_contract_id IN (SELECT service_contract_id FROM sys.service_contracts WHERE name = 'HelloContract')
AND message_type_id IN (SELECT message_type_id FROM sys.service_message_types WHERE name = 'HelloMessage');
-- 3) The communication is between two endpoints. Thus, we need two queues to
-- hold messages
CREATE QUEUE SenderQueue
CREATE QUEUE ReceiverQueue
GO
--more querying metadata
SELECT * FROM sys.service_queues WHERE name IN ('SenderQueue','ReceiverQueue');
--we can also select from the queues as if they were tables
SELECT * FROM SenderQueue
SELECT * FROM ReceiverQueue
-- 4) Create the required services and bind them to the above created queues
CREATE SERVICE Sender ON QUEUE SenderQueue
CREATE SERVICE Receiver ON QUEUE ReceiverQueue (HelloContract)
GO
--more querying metadata
SELECT * FROM sys.services WHERE name IN ('Receiver','Sender');
-- 5) At this point, we can begin the conversation between the two services by
-- sending messages
DECLARE @conversationHandle UNIQUEIDENTIFIER
DECLARE @message NVARCHAR(100)
BEGIN
  BEGIN TRANSACTION;
  BEGIN DIALOG @conversationHandle
        FROM SERVICE Sender
        TO SERVICE 'Receiver'
        ON CONTRACT HelloContract
        WITH ENCRYPTION=OFF
  -- Send a message on the conversation
  SET @message = N'Hello, World';
  SEND ON CONVERSATION @conversationHandle
       MESSAGE TYPE HelloMessage (@message)
  COMMIT TRANSACTION
END
GO
--check contents of queues
SELECT * FROM SenderQueue
SELECT * FROM ReceiverQueue
GO
-- Receive a message from the queue
RECEIVE CONVERT(NVARCHAR(MAX), message_body) AS MESSAGE
FROM ReceiverQueue
GO
--If no messages were received and/or you can't see anything on the queues, you may wish to check the following for clues:
SELECT * FROM sys.transmission_queue
-- Cleanup
DROP SERVICE Sender
DROP SERVICE Receiver
DROP QUEUE SenderQueue
DROP QUEUE ReceiverQueue
DROP CONTRACT HelloContract
DROP MESSAGE TYPE HelloMessage
GO
USE MASTER
GO
DROP DATABASE SBTest
GO
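As a postscript, the activation feature mentioned in the post is what lets the database engine drain a queue automatically. A minimal sketch of wiring it to the ReceiverQueue from the script above (the processing step is a placeholder comment, not the warehouse logic from the project):

CREATE PROCEDURE ProcessReceiverQueue
AS
BEGIN
  DECLARE @message NVARCHAR(MAX);
  DECLARE @conversationHandle UNIQUEIDENTIFIER;
  WHILE (1 = 1)
  BEGIN
    -- Wait up to 5 seconds for a message; if none arrives, exit and
    -- let activation fire the procedure again when new messages land
    WAITFOR (
      RECEIVE TOP (1) @conversationHandle = conversation_handle,
                      @message = CONVERT(NVARCHAR(MAX), message_body)
      FROM ReceiverQueue
    ), TIMEOUT 5000;
    IF @@ROWCOUNT = 0 BREAK;
    -- Process @message here (e.g. load it into the warehouse)
  END
END
GO
ALTER QUEUE ReceiverQueue
WITH ACTIVATION (
  STATUS = ON,
  PROCEDURE_NAME = ProcessReceiverQueue,
  MAX_QUEUE_READERS = 4,  -- allow up to 4 concurrent readers under load
  EXECUTE AS OWNER
);
GO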

    Read the article

  • Can Google Employees See My Saved Google Chrome Passwords?

    - by Jason Fitzpatrick
    Storing your passwords in your web browser seems like a great time saver, but are the passwords secure and inaccessible to others (even employees of the browser company) when squirreled away? Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

The Question

SuperUser reader MMA is curious if Google employees have (or could have) access to the passwords he stores in Google Chrome:

I understand that we are really tempted to save our passwords in Google Chrome. The likely benefit is twofold:
- You don't need to (memorize and) input those long and cryptic passwords.
- These are available wherever you are once you log in to your Google account.

The last point sparked my doubt. Since the password is available anywhere, the storage must be in some central location, and this should be at Google. Now, my simple question is, can a Google employee see my passwords? Searching over the Internet revealed several articles/messages:
- Do you save passwords in Chrome? Maybe you should reconsider: Talks about your passwords being stolen by someone who has access to your computer account. Nothing mentioned about the central storage security and vulnerability. There is even a response from the Chrome browser security tech lead about the first issue.
- Chrome's insane password security strategy: Mostly along the same line. You can steal a password from somebody if you have access to the computer account.
- How to Steal Passwords Saved in Google Chrome in 5 Simple Steps: Teaches you how to actually perform the act mentioned in the previous two when you have access to somebody else's account.

There are many more (including this one at this site), mostly along the same lines: points, counter-points, huge debates. I refrain from mentioning them here; simply carry out a search if you want to find them. Coming back to my original query: can a Google employee see my password? Since I can view the password using a simple button, the passwords can definitely be unhashed (decrypted) even if encrypted. This is very different from the passwords saved in Unix-like OSes, where a saved password can never be seen in plain text. They use a one-way encryption algorithm to encrypt your passwords. This encrypted password is then stored in the passwd or shadow file. When you attempt to log in, the password you type in is encrypted again and compared with the entry in the file that stores your passwords. If they match, it must be the same password, and you are allowed access. Thus, a superuser can change my password, can block my account, but he can never see my password.

So are his concerns well founded, or will a little insight dispel his worry?

The Answer

SuperUser contributor Zeel helps put his mind at ease:

Short answer: No*

Passwords stored on your local machine can be decrypted by Chrome, as long as your OS user account is logged in. And then you can view those in plain text. At first this seems horrible, but how did you think auto-fill worked? When that password field gets filled in, Chrome must insert the real password into the HTML form element – or else the page wouldn't work right, and you could not submit the form. And if the connection to the website is not over HTTPS, the plain text is then sent over the internet. In other words, if Chrome can't get the plain text passwords, then they are totally useless. A one-way hash is no good, because we need to use them.
Now, the passwords are in fact encrypted; the only way to get them back to plain text is to have the decryption key. That key is your Google password, or a secondary key you can set up. When you sign into Chrome and sync, the Google servers will transmit the encrypted passwords, settings, bookmarks, auto-fill, etc., to your local machine. Here Chrome will decrypt the information and be able to use it. On Google’s end all that info is stored in its encrypted state, and they do not have the key to decrypt it. Your account password is checked against a hash to log in to Google, and even if you let Chrome remember it, that encrypted version is hidden in the same bundle as the other passwords, impossible to access. So an employee could probably grab a dump of the encrypted data, but it wouldn’t do them any good, since they would have no way to use it.* So no, Google employees cannot** access your passwords, since they are encrypted on their servers. * However, do not forget that any system that can be accessed by an authorized user can be accessed by an unauthorized user. Some systems are easier to break than others, but none are fail-proof. That being said, I think I will trust Google and the millions they spend on security systems over any other password storage solution. And heck, I’m a wimpy nerd – it would be easier to beat the passwords out of me than to break Google’s encryption. ** I am also assuming that there isn’t a person who just happens to work for Google gaining access to your local machine. In that case you are screwed, but employment at Google isn’t actually a factor any more. Moral: Hit Win + L before leaving your machine. While we agree with zeel that it’s a pretty safe bet (as long as your computer is not compromised) that your passwords are in fact safe while stored in Chrome, we prefer to encrypt all our logins and passwords in a LastPass vault. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
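    To make the hashing-versus-encryption distinction above concrete, here is a minimal sketch (illustrative only – not Chrome's or any Unix's actual implementation, and the parameters are assumptions): a one-way scheme lets you verify a password without ever being able to recover it, which is exactly why a browser cannot use one for saved site passwords.

        import hashlib, os

        # One-way scheme (in the spirit of passwd/shadow): store salt + digest,
        # verify a login attempt by re-hashing and comparing.
        def store(password: str):
            salt = os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return salt, digest

        def verify(attempt: str, salt: bytes, digest: bytes) -> bool:
            return hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000) == digest

        salt, digest = store("hunter2")
        assert verify("hunter2", salt, digest)   # a superuser can check a guess...
        # ...but nothing here can turn `digest` back into "hunter2".

    A saved-password store, by contrast, has to be reversible encryption keyed by something the user supplies, because auto-fill must reproduce the real plain text.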

    Read the article

  • Organization &amp; Architecture UNISA Studies &ndash; Chap 4

    - by MarkPearl
    Learning Outcomes Explain the characteristics of memory systems Describe the memory hierarchy Discuss cache memory principles Discuss issues relevant to cache design Describe the cache organization of the Pentium Computer Memory Systems There are key characteristics of memory… Location – internal or external Capacity – expressed in terms of bytes Unit of Transfer – the number of bits read out of or written into memory at a time Access Method – sequential, direct, random or associative From a user’s perspective the two most important characteristics of memory are… Capacity Performance – access time, memory cycle time, transfer rate The trade-off for memory happens along three axes… Faster access time, greater cost per bit Greater capacity, smaller cost per bit Greater capacity, slower access time This leads to people using a tiered approach in their use of memory. As one goes down the hierarchy, the following occurs… Decreasing cost per bit Increasing capacity Increasing access time Decreasing frequency of access of the memory by the processor The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above. A portion of main memory can be used as a buffer to hold data temporarily that is to be read out to disk. This is sometimes referred to as a disk cache and improves performance in two ways… Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement. Some data designed for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk. Cache Memory Principles Cache memory is substantially faster than main memory. A caching system works as follows: when a processor attempts to read a word of memory, a check is made to see if it is in cache memory. If it is, the data is supplied; if it is not, a block of main memory, consisting of a fixed number of words, is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to that same memory location or to other words in the block. Elements of Cache Design While there are a large number of cache implementations, there are a few basic design elements that serve to classify and differentiate cache architectures… Cache Addresses Cache Size Mapping Function Replacement Algorithm Write Policy Line Size Number of Caches Cache Addresses Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used, the designer may choose to place the cache between the MMU (memory management unit) and the processor, or between the MMU and main memory.
The disadvantage of virtual memory is that most virtual memory systems supply each application with the same virtual memory address space (each application sees virtual memory starting at memory address 0), which means the cache memory must be completely flushed with each application context switch, or extra bits must be added to each line of the cache to identify which virtual address space the address refers to. Cache Size We would like the size of the cache to be small enough so that the overall average cost per bit is close to that of main memory alone, and large enough so that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones. Mapping Function Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used… Direct – the simplest technique; maps each block of main memory into only one possible cache line Associative – permits each main memory block to be loaded into any line of the cache Set Associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages For detailed explanations of each approach, read the text book (page 148 – 154); a small sketch of the direct technique appears below. Replacement Algorithm For associative and set associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches… LRU (Least recently used) FIFO (First in first out) LFU (Least frequently used) Random selection Write Policy When a block resident in the cache is to be replaced, there are two cases to consider: if no writes to that block have happened in the cache, discard it; if a write has occurred, a process needs to be initiated where the changes in the cache are propagated back to the main memory. There are several approaches to achieve this, including… Write Through – all writes to the cache are done to the main memory as well at the point of the change Write Back – when a block is replaced, all dirty bits are written back to main memory The problem is complicated when we have multiple caches; there are techniques to accommodate for this but I have not summarized them. Line Size When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that the data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into cache. The hit ratio will begin to decrease, however, as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play… Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after they are fetched. As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future. The relationship between block size and hit ratio is complex, and no set approach is judged to be the best in all circumstances.
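    As promised above, a minimal sketch of the direct mapping technique (the cache geometry is a made-up example, not taken from the textbook): the low bits of an address select the byte within a line, the middle bits select the one line the block may occupy, and the remaining high bits form the tag that distinguishes blocks sharing that line.

        LINE_SIZE = 16    # bytes per line  -> 4 offset bits
        NUM_LINES = 1024  # lines in cache  -> 10 index bits

        def split_address(addr: int):
            offset = addr % LINE_SIZE                   # byte within the line
            index = (addr // LINE_SIZE) % NUM_LINES     # the only line this block can use
            tag = addr // (LINE_SIZE * NUM_LINES)       # identifies which block occupies it
            return tag, index, offset

        # A lookup is a hit when the tag stored at `index` equals `tag`.
        print(split_address(0x12345678))   # -> (18641, 359, 8)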
Pentium 4 and ARM cache organizations The processor core consists of four major components: Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes these into a series of micro-operations, and stores the results in the L1 instruction cache Out-of-order execution logic – schedules execution of the micro-operations subject to data dependencies and resource availability – thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future Execution units – these units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers Memory subsystem – this unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss, and to access the system I/O resources
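    Returning to the memory hierarchy discussion above: the claim that two levels of memory reduce average access time can be made concrete with a small calculation (the timings and hit ratio below are illustrative assumptions, not figures from the text).

        # Two-level system: a fast cache in front of slower main memory.
        t_cache, t_main, hit_ratio = 10e-9, 100e-9, 0.95   # 10 ns, 100 ns, 95% hits

        # Misses pay for the cache check and then the main memory access.
        avg = hit_ratio * t_cache + (1 - hit_ratio) * (t_cache + t_main)
        print(f"{avg * 1e9:.1f} ns")   # 15.0 ns -- close to cache speed, as the notes claim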

    Read the article

  • Different Not Automatically Implies Better

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/11/05/154556.aspx Recently I was digging deeper into why some WCF-hosted workflow application consumed quite a lot of memory although it basically only loaded a xaml workflow. The first tool of choice is Process Explorer, or even better Process Hacker (it has more options, and its best feature, copy&paste, does work). The three most important numbers of a process with regards to memory are Working Set, Private Working Set and Private Bytes. Working Set is the currently consumed physical memory (parts can be shared between processes, e.g. loaded dlls which are read only) Private Working Set is the physical memory needed by this process which is not shareable Private Bytes is the amount of non-shareable memory which is visible only in the current process (e.g. all new, malloc, VirtualAlloc calls do create private bytes) When you have a bigger workflow it can easily consume 500 MB under 64 bit for a 1–2 MB xaml file. This does not look very scalable. Under 64 bit the issue is excessive private bytes consumption and not the managed heap. The picture is quite different for 32 bit, which looks a bit strange, but it seems that the hosted VB compiler is a lot less memory hungry under 32 bit. I did try to repro the issue with a medium sized xaml file (400KB) which contains 1000 variables and 1000 ifs, which can be represented by C# code like this: string Var1; string Var2; ... string Var1000; if (!String.IsNullOrEmpty(Var1) ) { Console.WriteLine(“Var1”); } if (!String.IsNullOrEmpty(Var2) ) { Console.WriteLine(“Var2”); } .... Since WF is based on VB.NET expressions you are bound to the hosted VB.NET compiler, which results in (x64) 140 MB of private bytes – ca. 140 KB for each if clause, which is quite a lot if you think about the actually present functionality. But there is hope. .NET 4.5 now allows C# expressions for WF, which is a major step forward for all C# lovers. I did create a simple patcher to “cross compile” my xaml to C# expressions. Let’s look at the result [charts comparing private bytes for C# expressions vs. VB expressions under x86 omitted]: On my home machine I have only 32 bit, which gives you quite exactly half of the memory consumption under 64 bit. C# expressions are 10 times more memory hungry than VB.NET expressions! I wanted to do more with less memory, but instead it consumed a magnitude more memory. That is surprising, to say the least. The workflow initializes in about the same time under x64 and x86, where the VB code does it in 2s whereas the C# version needs 18s. Also nearly ten times slower. That is too high a price to pay for converting any bigger sized xaml workflow from VB.NET to C# expressions. If I reduce the number of expressions to 500 then it needs 400MB, which is about half of the memory. It seems that the cost per if rises linearly with the total number of expressions in a xaml workflow.

        Expression Language | Cost per IF | Startup Time
        C# 1000 Ifs x64     | 1,5 MB      | 18s
        C# 500 Ifs x64      | 750 KB      | 9s
        VB 1000 Ifs x64     | 140 KB      | 2s
        VB 500 Ifs x64      | 70 KB       | 1s

    Now we can directly compare two MS implementations. It is clear that both use the same underlying structure, but the highly inefficient C# expression compiler has a much higher per-expression offset than the VB.NET one. I have filed a connect bug here with a harsher wording about recent advances in memory consumption. The funniest thing is that one MS employee did give an Azure AppFabric demo around early 2011 which was so slow that he needed to investigate with xperf.
He was after startup time, and the call stacks with regards to VB.NET expression compilation were remarkably similar. In fact I only found this post by googling for parts of my call stacks. … “C# expressions will be coming soon to WF, and that will have different performance characteristics than VB” … What did he know in Jan 2011 that I did not know until today? ;-). He knew that C# expressions would come, but that they would not automatically have a better footprint. It is about time to fix that. In its current state C# expressions are not usable for bigger workflows. That also explains the headline for today. You can cheat startup time by prestarting workflows so that the demo looks nice and snappy, but it hurts scalability a lot since you need much more memory than necessary. I did find the stacks by enabling virtual allocation tracking within XPerf, which is still the best tool out there. But first you need to look at your process to check where the memory is hiding: For the C# expression compiler you do not need xperf. You can directly dump the managed heap and check with a profiler of your choice. But if the allocations are happening on the Private Data (VirtualAlloc) you can find it with xperf. There is a nice video on Channel 9 explaining VirtualAlloc tracking in greater detail. If your data allocations are on the Heap it means that the C/C++ runtime created a heap for you from which all malloc and new calls allocate. You can enable heap tracing with xperf, with full call stack support as well, as is also shown on Channel 9. Or you can use WPRUI directly: To make “Heap Usage” work you need to set the tracing flags for your executable (before you start it). For example, for devenv.exe: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\devenv.exe DWORD TracingFlags 1 (a reg.exe sketch follows below). Do not forget to disable it after you have completed profiling the process, or it will impact the startup time quite a lot. With xperf you can attach directly to a running process and collect heap allocation information from a process gone wild. Very handy if you need to find out what a process was doing that has arrived in a funny state. “VirtualAlloc usage” works without explicitly enabling anything for a specific process and is always on, machine wide. I had issues on my Windows 7 machines with the call stack collection and the latest Windows 8.1 Performance Toolkit. I was told that WPA from Windows 8.0 should work fine but I do not want to downgrade.
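    As a concrete illustration (a sketch, not part of the original post), the TracingFlags value described above can be set and later removed from an elevated command prompt with reg.exe:

        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\devenv.exe" /v TracingFlags /t REG_DWORD /d 1

        rem ...profile the process, then remove the flag so startup time is not affected:
        reg delete "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\devenv.exe" /v TracingFlags /f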

    Read the article

  • flash core engine by Dinesh [closed]

    - by hdinesh
    This post was a dump of the following code (without the highlights). No question, just a dump. Please update this q. with a real question to have it reopened. You (the asker) risk to be flagged as spammer (if not already) and a bad reputation. This is a q/a site, not a site to promote your own code libraries. package facers { import flash.display.*; import flash.events.*; import flash.geom.ColorTransform; import flash.utils.Dictionary; import org.papervision3d.cameras.*; import org.papervision3d.scenes.*; import org.papervision3d.objects.*; import org.papervision3d.objects.special.*; import org.papervision3d.objects.primitives.*; import org.papervision3d.materials.*; import org.papervision3d.events.FileLoadEvent; import org.papervision3d.materials.special.*; import org.papervision3d.materials.shaders.*; import org.papervision3d.materials.utils.*; import org.papervision3d.lights.*; import org.papervision3d.render.*; import org.papervision3d.view.*; import org.papervision3d.events.InteractiveScene3DEvent; import org.papervision3d.events.*; import org.papervision3d.core.utils.*; import org.papervision3d.core.geom.renderables.Vertex3D; import caurina.transitions.*; public class Main extends Sprite { public var viewport :BasicView; public var displayObject :DisplayObject3D; private var light :PointLight3D; private var shadowPlane :Plane; private var dataArray :Array; private var material :BitmapFileMaterial; private var planeByContainer :Dictionary = new Dictionary(); private var paperSize :Number = 0.5; private var cloudSize :Number = 1500; private var rotSize :Number = 360; private var maxAlbums :Number = 50; private var num :Number = 0; public function Main():void { trace("START APPLICATION"); viewport = new BasicView(1024, 690, true, true, CameraType.FREE); viewport.camera.zoom = 50; viewport.camera.extra = { goPosition: new DisplayObject3D(),goTarget: new DisplayObject3D() }; addChild(viewport); displayObject = new DisplayObject3D(); viewport.scene.addChild(displayObject); createAlbum(); addEventListener(Event.ENTER_FRAME, onRenderEvent); } private function createAlbum() { dataArray = new Array("images/thums/pic1.jpg", "images/thums/pic2.jpg", "images/thums/pic3.jpg", "images/thums/pic4.jpg", "images/thums/pic5.jpg", "images/thums/pic6.jpg", "images/thums/pic7.jpg", "images/thums/pic8.jpg", "images/thums/pic9.jpg", "images/thums/pic10.jpg", "images/thums/pic1.jpg", "images/thums/pic2.jpg", "images/thums/pic3.jpg", "images/thums/pic4.jpg", "images/thums/pic5.jpg", "images/thums/pic6.jpg", "images/thums/pic7.jpg", "images/thums/pic8.jpg", "images/thums/pic9.jpg", "images/thums/pic10.jpg"); for (var i:int = 0; i < dataArray.length; i++) { material = new BitmapFileMaterial(dataArray[i]); material.doubleSided = true; material.addEventListener(FileLoadEvent.LOAD_COMPLETE, loadMaterial); } } public function loadMaterial(event:Event) { var plane:Plane = new Plane(material, 300, 180); displayObject.addChild(plane); var _x:int = Math.random() * cloudSize - cloudSize/2; var _y:int = Math.random() * cloudSize - cloudSize/2; var _z:int = Math.random() * cloudSize - cloudSize/2; var _rotationX:int = Math.random() * rotSize; var _rotationY:int = Math.random() * rotSize; var _rotationZ:int = Math.random() * rotSize; Tweener.addTween(plane, { x:_x, y:_y, z:_z, rotationX:_rotationX, rotationY:_rotationY, rotationZ:_rotationZ, time:5, transition:"easeIn" } ); } protected function onRenderEvent(event:Event):void { var rotY: Number = (mouseY-(stage.stageHeight/2))/(900/2)*(1200); var rotX: Number = 
(mouseX-(stage.stageWidth/2))/(600/2)*(-1200); displayObject.rotationY = viewport.camera.x + (rotX - viewport.camera.x) / 50; displayObject.rotationX = viewport.camera.y + (rotY - viewport.camera.y) / 30; viewport.singleRender(); } } } package designLab.events { import flash.display.BlendMode; import flash.display.Sprite; import flash.events.Event; import flash.filters.BlurFilter; // Import designLab import designLab.layer.IntroLayer; import designLab.shadow.ShadowCaster; import designLab.utils.LayerConstant; // Import Papervision3D import org.papervision3d.cameras.*; import org.papervision3d.scenes.*; import org.papervision3d.objects.*; import org.papervision3d.objects.special.*; import org.papervision3d.objects.primitives.*; import org.papervision3d.materials.*; import org.papervision3d.materials.special.*; import org.papervision3d.materials.shaders.*; import org.papervision3d.materials.utils.*; import org.papervision3d.lights.*; import org.papervision3d.render.*; import org.papervision3d.view.*; import org.papervision3d.events.InteractiveScene3DEvent; import org.papervision3d.events.*; import org.papervision3d.core.utils.*; import org.papervision3d.core.geom.renderables.Vertex3D; public class CoreEnging extends Sprite { public var viewport :BasicView; // Create BasicView public var displayObject :DisplayObject3D; // Create DisplayObject public var shadowCaster :ShadowCaster; // Create ShadowCaster private var light :PointLight3D; // Create PointLight private var shadowPlane :Plane; // Create Plane private var layer :LayerConstant; // Create constant resource layer private static var instance :CoreEnging; // Create CoreEnging class static instance // CoreEnging class static instance mathod function public static function getinstance() { if (instance != null) return instance; else { instance = new CoreEnging(); return instance; } } // CoreEnging constrictor public function CoreEnging () { trace("INFO: Design Lab Application : Core Enging v0.1"); layer = new LayerConstant(); viewport = new BasicView(900, 600, true, true, CameraType.FREE); // pass the width, height, scaleToStage, interactive, cameraType to BasicView viewport.camera.zoom = 100; // Define the zoom level of camera addChild(viewport); createFloor(); // Create the floor displayObject = new DisplayObject3D(); // Create new instance of DisplayObject viewport.scene.addChild(displayObject); // Add the DisplayObject to the BasicView light = new PointLight3D(); // Create new instance of PointLight light.z = -50; // Position the Z of create instance light.x = 0; //Position the X of create instance light.rotationZ = 45; //Position the rotation angel of the Z of create instance light.y = 500; //Position the Y of create instance shadowCaster = new ShadowCaster("shadow", 0x000000, BlendMode.MULTIPLY, .1, [new BlurFilter(20, 20, 1)]); // pass shadowcaster name, color, blend mode, alpha and filters shadowCaster.setType(ShadowCaster.SPOTLIGHT); // Define the shadow type addEventListener(Event.ENTER_FRAME, onRenderEvent); // Add frame render event } // Start create floor public function createFloor() { var spr:Sprite = new Sprite(); // Create Sprite spr.graphics.beginFill(0xFFFFFF); // Define the fill color for sprite spr.graphics.drawRect(0, 0, 600, 600); // Define the X, Y, width, height of the sprite var sprMaterial:MovieMaterial = new MovieMaterial(spr, true, true, true); //Create a texture from an existing sprite instance shadowPlane = new Plane(sprMaterial, 2000, 2000, 1, 1); // create new instance of the Plane and pass the texture 
material, width, height, segmentsW and segmentsH shadowPlane.rotationX = 80; //Position the rotation angel of the X of Plane shadowPlane.y = -200; //Position the Y of Plane viewport.scene.addChild(shadowPlane); // Add the Plane to the BasicView } // switch method function of the page layer control public function addLayer(type:String) { switch (type) { case layer.INTRO: var intro:IntroLayer = new IntroLayer(); break; } } // Create get mathod function for DisplayObject public function getDisplayObject():DisplayObject3D { return displayObject; } // Create get mathod function for BasicView public function getViewport():BasicView { return viewport; } // Rendering function protected function onRenderEvent(event:Event):void { var rotY: Number = (mouseY-(stage.stageHeight/2))/(900/2)*(1200); var rotX: Number = (mouseX-(stage.stageWidth/2))/(600/2)*(-1200); displayObject.rotationY = viewport.camera.x + (rotX - viewport.camera.x) / 50; displayObject.rotationX = viewport.camera.y + (rotY - viewport.camera.y) / 30; // Remove the shadow shadowCaster.invalidate(); // create new shadow on DisplayObject move shadowCaster.castModel(displayObject, light, shadowPlane); viewport.singleRender(); } } } package designLab.layer { import flash.display.Sprite; import flash.events.Event; // Import designLab import designLab.materials.iBusinessCard; import designLab.events.CoreEnging; // Import Papervision3D import org.papervision3d.objects.primitives.Cube; import org.papervision3d.materials.ColorMaterial; import org.papervision3d.materials.MovieMaterial; public class IntroLayer { // IntroLayer constrictor public function IntroLayer() { trace("INFO: Load Intro layer"); var indexDP:DP_index = new DP_index(); //Create the library MovieClip var blackMaterial:MovieMaterial = new MovieMaterial(indexDP, true); //Create a texture from an existing library MovieClip instance blackMaterial.smooth = true; blackMaterial.doubleSided = false; var mycolor:ColorMaterial = new ColorMaterial(0x000000); //Create solid color material var mycard:iBusinessCard = new iBusinessCard(blackMaterial, blackMaterial, mycolor, 372, 10, 207); // Create custom 3D cube object to pass the Front, Back, All, CubeWidth, CubeDepth and CubeHeight CoreEnging.getinstance().getDisplayObject().addChild(mycard.create3DCube()); // Add the custom 3D cube to the DisplayObject } } } package designLab.materials { import flash.display.*; import flash.events.*; // Import Papervision3D import org.papervision3d.materials.*; import org.papervision3d.materials.utils.MaterialsList; import org.papervision3d.objects.primitives.Cube; public class iBusinessCard extends Sprite { private var materialsList :MaterialsList; private var cube :Cube; private var Front :MovieMaterial = new MovieMaterial(); private var Back :MovieMaterial = new MovieMaterial(); private var All :ColorMaterial = new ColorMaterial(); private var CubeWidth :Number; private var CubeDepth :Number; private var CubeHeight :Number; public function iBusinessCard(Front:MovieMaterial, Back:MovieMaterial, All:ColorMaterial, CubeWidth:Number, CubeDepth:Number, CubeHeight:Number) { setFront(Front); setBack(Back); setAll(All); setCubeWidth(CubeWidth); setCubeDepth(CubeDepth); setCubeHeight(CubeHeight); } public function create3DCube():Cube { materialsList = new MaterialsList(); materialsList.addMaterial(Front, "front"); materialsList.addMaterial(Back, "back"); materialsList.addMaterial(All, "left"); materialsList.addMaterial(All, "right"); materialsList.addMaterial(All, "top"); materialsList.addMaterial(All, 
"bottom"); cube = new Cube(materialsList, CubeWidth, CubeDepth, CubeHeight); cube.x = 0; cube.y = 0; cube.z = 0; cube.rotationY = 180; return cube; } public function setFront(Front:MovieMaterial) { this.Front = Front; } public function getFront():MovieMaterial { return Front; } public function setBack(Back:MovieMaterial) { this.Back = Back; } public function getBack():MovieMaterial { return Back; } public function setAll(All:ColorMaterial) { this.All = All; } public function getAll():ColorMaterial { return All; } public function setCubeWidth(CubeWidth:Number) { this.CubeWidth = CubeWidth; } public function getCubeWidth():Number { return CubeWidth; } public function setCubeDepth(CubeDepth:Number) { this.CubeDepth = CubeDepth; } public function getCubeDepth():Number { return CubeDepth; } public function setCubeHeight(CubeHeight:Number) { this.CubeHeight = CubeHeight; } public function getCubeHeight():Number { return CubeHeight; } } } package designLab.shadow { import flash.display.Sprite; import flash.filters.BlurFilter; import flash.geom.Point; import flash.geom.Rectangle; import flash.utils.Dictionary; import org.papervision3d.core.geom.TriangleMesh3D; import org.papervision3d.core.geom.renderables.Triangle3D; import org.papervision3d.core.geom.renderables.Vertex3D; import org.papervision3d.core.math.BoundingSphere; import org.papervision3d.core.math.Matrix3D; import org.papervision3d.core.math.Number3D; import org.papervision3d.core.math.Plane3D; import org.papervision3d.lights.PointLight3D; import org.papervision3d.materials.MovieMaterial; import org.papervision3d.objects.DisplayObject3D; import org.papervision3d.objects.primitives.Plane; public class ShadowCaster { private var vertexRefs:Dictionary; private var numberRefs:Dictionary; private var lightRay:Number3D = new Number3D() private var p3d:Plane3D = new Plane3D(); public var color:uint = 0; public var alpha:Number = 0; public var blend:String = ""; public var filters:Array; public var uid:String; private var _type:String = "point"; private var dir:Number3D; private var planeBounds:Dictionary; private var targetBounds:Dictionary; private var models:Dictionary; public static var DIRECTIONAL:String = "dir"; public static var SPOTLIGHT:String = "spot"; public function ShadowCaster(uid:String, color:uint = 0, blend:String = "multiply", alpha:Number = 1, filters:Array=null) { this.uid = uid; this.color = color; this.alpha = alpha; this.blend = blend; this.filters = filters ? 
filters : [new BlurFilter()]; numberRefs = new Dictionary(true); targetBounds = new Dictionary(true); planeBounds = new Dictionary(true); models = new Dictionary(true); } public function castModel(model:DisplayObject3D, light:PointLight3D, plane:Plane, faces:Boolean = true, cull:Boolean = false):void{ var ar:Array; if(models[model]) { ar = models[model]; }else{ ar = new Array(); getChildMesh(model, ar); models[model] = ar; } var reset:Boolean = true; for each(var t:TriangleMesh3D in ar){ if(faces) castFaces(light, t, plane, cull, reset); else castBoundingSphere(light, t, plane, 0.75, reset); reset = false; } } private function getChildMesh(do3d:DisplayObject3D, ar):void{ if(do3d is TriangleMesh3D) ar.push(do3d); for each(var d:DisplayObject3D in do3d.children) getChildMesh(d, ar); } public function setType(type:String="point"):void{ _type = type; } public function getType():String{ return _type; } public function castBoundingSphere(light:PointLight3D, target:TriangleMesh3D, plane:Plane, scaleRadius:Number=0.8, clear:Boolean = true):void{ var planeVertices:Array = plane.geometry.vertices; //convert to target space? var world:Matrix3D = plane.world; var inv:Matrix3D = Matrix3D.inverse(plane.transform); var lp:Number3D = new Number3D(light.x, light.y, light.z); Matrix3D.multiplyVector(inv, lp); p3d.setNormalAndPoint(plane.geometry.faces[0].faceNormal, new Number3D()); var b:BoundingSphere = target.geometry.boundingSphere; var bounds:Object = planeBounds[plane]; if(!bounds){ bounds = plane.boundingBox(); planeBounds[plane] = bounds; } var tbounds:Object = targetBounds[target]; if(!tbounds){ tbounds = target.boundingBox(); targetBounds[target] = tbounds; } var planeMovie:Sprite = Sprite(MovieMaterial(plane.material).movie); var movieSize:Point = new Point(planeMovie.width, planeMovie.height); var castClip:Sprite = getCastClip(plane); castClip.blendMode = this.blend; castClip.filters = this.filters; castClip.alpha = this.alpha; if(clear) castClip.graphics.clear(); vertexRefs = new Dictionary(true); var tlp:Number3D = new Number3D(light.x, light.y, light.z); Matrix3D.multiplyVector(Matrix3D.inverse(target.world), tlp); var center:Number3D = new Number3D(tbounds.min.x+tbounds.size.x*0.5, tbounds.min.y+tbounds.size.y*0.5, tbounds.min.z+tbounds.size.z*0.5); var dif:Number3D = Number3D.sub(lp, center); dif.normalize(); var other:Number3D = new Number3D(); other.x = -dif.y; other.y = dif.x; other.z = 0; other.normalize(); var cross:Number3D = Number3D.cross(new Number3D(plane.transform.n12, plane.transform.n22, plane.transform.n32), p3d.normal); cross.normalize(); //cross = new Number3D(-dif.y, dif.x, 0); //cross.normalize(); cross.multiplyEq(b.radius*scaleRadius); if(_type == DIRECTIONAL){ var oPos:Number3D = new Number3D(target.x, target.y, target.z); Matrix3D.multiplyVector(target.world, oPos); Matrix3D.multiplyVector(inv, oPos); dir = new Number3D(oPos.x-lp.x, oPos.y-lp.y, oPos.z-lp.z); } //numberRefs = new Dictionary(true); var pos:Number3D; var c2d:Point; var r2d:Point; //_type = SPOTLIGHT; pos = projectVertex(new Vertex3D(center.x, center.y, center.z), lp, inv, target.world); c2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); pos = projectVertex(new Vertex3D(center.x+cross.x, center.y+cross.y, center.z+cross.z), lp, inv, target.world); r2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); var dx:Number = r2d.x-c2d.x; var dy:Number = r2d.y-c2d.y; var rad:Number = Math.sqrt(dx*dx+dy*dy); castClip.graphics.beginFill(color); castClip.graphics.moveTo(c2d.x, c2d.y); 
castClip.graphics.drawCircle(c2d.x, c2d.y, rad); castClip.graphics.endFill(); } public function getCastClip(plane:Plane):Sprite{ var planeMovie:Sprite = Sprite(MovieMaterial(plane.material).movie); var movieSize:Point = new Point(planeMovie.width, planeMovie.height); var castClip:Sprite;// = new Sprite(); if(planeMovie.getChildByName("castClip"+uid)) return Sprite(planeMovie.getChildByName("castClip"+uid)); else{ castClip = new Sprite(); castClip.name = "castClip"+uid; castClip.scrollRect = new Rectangle(0, 0, movieSize.x, movieSize.y); //castClip.alpha = 0.4; planeMovie.addChild(castClip); return castClip; } } public function castFaces(light:PointLight3D, target:TriangleMesh3D, plane:Plane, cull:Boolean=false, clear:Boolean = true):void{ var planeVertices:Array = plane.geometry.vertices; //convert to target space? var world:Matrix3D = plane.world; var inv:Matrix3D = Matrix3D.inverse(plane.transform); var lp:Number3D = new Number3D(light.x, light.y, light.z); Matrix3D.multiplyVector(inv, lp); var tlp:Number3D; if(cull){ tlp = new Number3D(light.x, light.y, light.z); Matrix3D.multiplyVector(Matrix3D.inverse(target.world), tlp); } //Matrix3D.multiplyVector(Matrix3D.inverse(target.transform), tlp); //p3d.setThreePoints(planeVertices[0].getPosition(), planeVertices[1].getPosition(), planeVertices[2].getPosition()); p3d.setNormalAndPoint(plane.geometry.faces[0].faceNormal, new Number3D()); if(_type == DIRECTIONAL){ var oPos:Number3D = new Number3D(target.x, target.y, target.z); Matrix3D.multiplyVector(target.world, oPos); Matrix3D.multiplyVector(inv, oPos); dir = new Number3D(oPos.x-lp.x, oPos.y-lp.y, oPos.z-lp.z); } var bounds:Object = planeBounds[plane]; if(!bounds){ bounds = plane.boundingBox(); planeBounds[plane] = bounds; } var castClip:Sprite = getCastClip(plane); castClip.blendMode = this.blend; castClip.filters = this.filters; castClip.alpha = this.alpha; var planeMovie:Sprite = Sprite(MovieMaterial(plane.material).movie); var movieSize:Point = new Point(planeMovie.width, planeMovie.height); if(clear) castClip.graphics.clear(); vertexRefs = new Dictionary(true); //numberRefs = new Dictionary(true); var pos:Number3D; var p2d:Point; var s2d:Point; var hitVert:Number3D = new Number3D(); for each(var t:Triangle3D in target.geometry.faces){ if( cull){ hitVert.x = t.v0.x; hitVert.y = t.v0.y; hitVert.z = t.v0.z; if(Number3D.dot(t.faceNormal, Number3D.sub(tlp, hitVert)) <= 0) continue; } castClip.graphics.beginFill(color); pos = projectVertex(t.v0, lp, inv, target.world); s2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); castClip.graphics.moveTo(s2d.x, s2d.y); pos = projectVertex(t.v1, lp, inv, target.world); p2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); castClip.graphics.lineTo(p2d.x, p2d.y); pos = projectVertex(t.v2, lp, inv, target.world); p2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); castClip.graphics.lineTo(p2d.x, p2d.y); castClip.graphics.lineTo(s2d.x, s2d.y); castClip.graphics.endFill(); } } public function invalidate():void{ invalidateModels(); invalidatePlanes(); } public function invalidatePlanes():void{ planeBounds = new Dictionary(true); } public function invalidateTargets():void{ numberRefs = new Dictionary(true); targetBounds = new Dictionary(true); } public function invalidateModels():void{ models = new Dictionary(true); invalidateTargets(); } private function get2dPoint(pos3D:Number3D, min3D:Number3D, size3D:Number3D, movieSize:Point):Point{ return new Point((pos3D.x-min3D.x)/size3D.x*movieSize.x, ((-pos3D.y-min3D.y)/size3D.y*movieSize.y)); } 
private function projectVertex(v:Vertex3D, light:Number3D, invMat:Matrix3D, world:Matrix3D):Number3D{ var pos:Number3D = vertexRefs[v]; if(pos) return pos; var n:Number3D = numberRefs[v]; if(!n){ n = new Number3D(v.x, v.y, v.z); Matrix3D.multiplyVector(world, n); Matrix3D.multiplyVector(invMat, n); numberRefs[v] = n; } if(_type == SPOTLIGHT){ lightRay.x = light.x; lightRay.y = light.y; lightRay.z = light.z; }else{ lightRay.x = n.x-dir.x; lightRay.y = n.y-dir.y; lightRay.z = n.z-dir.z; } pos = p3d.getIntersectionLineNumbers(lightRay, n); vertexRefs[v] = pos; return pos; } } } package designLab.utils { public class LayerConstant { public const INTRO:String = "INTRO"; // Intro layer string constant } }

    Read the article

  • Troubleshooting Windows Authentication problems (no challenge) in IIS 7.5?

    - by Aaronaught
    I know that there are thousands of reports of people having trouble getting Integrated Windows Authentication to work with IIS, but they all seem to lead to web pages that don't apply or solutions that I've already tried. I've deployed dozens of sites like this before, so either there's something bizarre going on with the server/configuration, or I've been looking at this too long and not seeing the obvious. Simply put, everything works perfectly on my local machine, but falls apart on the production server, which as far as I can tell has the exact same configuration. On the local machine: The machine is running Windows 7 Ultimate, Service Pack 1, IIS 7.5. The site has been tested successfully, using both IIS and the VS Web Development Server. The IIS site config has all authentication methods disabled except Windows Authentication. The local machine is not on any domain. The Providers set up are Negotiate and NTLM (not Negotiate:Kerberos). Extended Protection is Off. All browsers tested (IE, Firefox, Chrome) show the challenge prompt and allow me to log in to the localhost domain with my (local) Windows account. All browsers tested also work using an opaque local IP address - so the browsers themselves don't seem to care whether the site appears "local" or "remote". I've added a display line to the web page which shows the currently-logged-in user and it shows exactly what I would expect (whichever local user I logged in with). On the remote machine: The server is running Windows Server 2008 R2, IIS 7.5. Loading the web page results in an immediate 401.2 error: You are not authorized to view this page due to invalid authentication headers. No challenge prompt ever appears. The IIS site config has all authentication methods disabled except Windows Authentication. The remote machine is not on any domain. The Providers set up are Negotiate and NTLM (not Negotiate:Kerberos). Extended Protection is Off. On the remote machine (remote desktop session), the same error appears in Internet Explorer regardless of whether the domain is localhost or the external IP address. If I try to view the remote web site from my local machine, the error is still 401, but a slightly different 401. No subcode, with the text: Access is denied due to invalid credentials. The Windows Authentication IIS role feature is installed. The WindowsAuthentication Module is added (at the Server level). The exact same error occurs if I turn off Windows Authentication and enable Basic Authentication. The site does load if I turn off Windows Authentication and enable Anonymous (obviously). I've already followed all of the troubleshooting steps on Microsoft Support: Troubleshooting HTTP 401 errors in IIS I've already tried the workaround shown on another Microsoft support page (supposedly to force NTLM as the only method). Last but not least, I tried turning on FREB for 401.2 errors and the results don't seem to tell me anything useful, all I see is the following warning: MODULE_SET_RESPONSE_ERROR_STATUS ModuleName IIS Web Core Notification 2 HttpStatus 401 HttpReason Unauthorized HttpSubStatus 2 ErrorCode 2147942405 ConfigExceptionInfo Notification AUTHENTICATE_REQUEST ErrorCode Access is denied. (0x80070005) ...this seems to just be telling me what I already know (that it's simply rejecting the request instead of negotiating the credentials). 
The trace does indicate that the WindowsAuthentication module is correctly loaded because there is a NOTIFY_MODULE_START line with ModuleName = WindowsAuthentication (and various other ASP.NET follow-up events - [un]fortunately, no interesting errors or warnings here). Can anyone tell me what I might be missing here? Quick Update: I'm a little uncomfortable sending a whole Wireshark dump as it would reveal IPs, URLs and other stuff, but I did a side-by-side comparison of the HTTP responses from localhost and the remote server in Fiddler, and it seems fairly self-evident what the problem is: Localhost: HTTP/1.1 401 Unauthorized Cache-Control: private Content-Type: text/html; charset=utf-8 Server: Microsoft-IIS/7.5 WWW-Authenticate: Negotiate WWW-Authenticate: NTLM X-Powered-By: ASP.NET Date: Sat, 17 Dec 2011 23:42:34 GMT Content-Length: 6399 Proxy-Support: Session-Based-Authentication Remote: HTTP/1.1 401 Unauthorized Content-Type: text/html Server: Microsoft-IIS/7.5 X-Powered-By: ASP.NET Date: Sat, 17 Dec 2011 23:43:13 GMT Content-Length: 1293 Aside from a few seemingly-inconsequential differences like cache-control, the main difference is that the remote server is not sending the WWW-Authenticate headers back to the client. So, I guess that narrows the question down to: Why is IIS not sending WWW-Authenticate headers when Windows Authentication appears to be installed, loaded, and exclusively enabled?
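    For reference, here is a minimal sketch of the configuration state the question describes – Windows Authentication enabled with the Negotiate and NTLM providers, everything else off. This is standard IIS 7.5 schema rather than anything taken from the affected server, and note that the <windowsAuthentication> section is locked above site level by default, so it normally lives in applicationHost.config:

        <system.webServer>
          <security>
            <authentication>
              <anonymousAuthentication enabled="false" />
              <basicAuthentication enabled="false" />
              <windowsAuthentication enabled="true">
                <providers>
                  <add value="Negotiate" />
                  <add value="NTLM" />
                </providers>
              </windowsAuthentication>
            </authentication>
          </security>
        </system.webServer>

    Comparing this effective section between the working and failing machines (e.g. appcmd list config /section:windowsAuthentication) is a quick way to rule out a configuration-level mismatch before digging deeper.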

    Read the article

  • DNS server not functioning correctly

    - by Shamit Shrestha
    I have set up a DNS server which isn't working properly. My domain is accswift.com, which is glued to two name servers, ns1.accswift.com and ns2.accswift.com, both at the same IP address – 203.78.164.18. On the domain end everything should be fine. Please check http://www.intodns.com/accswift.com. I am sure it's a problem with the Linux server. Can anyone help me find where the problem is? Below are the settings that I have on the server. ====================== DIG [root@accswift ~]# dig accswift.com ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 <<>> accswift.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11275 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2 ;; QUESTION SECTION: ;accswift.com. IN A ;; ANSWER SECTION: accswift.com. 38400 IN A 203.78.164.18 ;; AUTHORITY SECTION: accswift.com. 38400 IN NS ns1.accswift.com. accswift.com. 38400 IN NS ns2.accswift.com. ;; ADDITIONAL SECTION: ns1.accswift.com. 38400 IN A 203.78.164.18 ns2.accswift.com. 38400 IN A 203.78.164.18 ;; Query time: 1 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Wed Nov 6 20:12:16 2013 ;; MSG SIZE rcvd: 114 ============== IP Tables settings vi /etc/sysconfig/iptables *filter :FORWARD ACCEPT [0:0] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A FORWARD -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT: -A FORWARD -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN: -A OUTPUT -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT: -A INPUT -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN: -A INPUT -p udp -m udp --sport 53 -j ACCEPT -A OUTPUT -p udp -m udp --dport 53 -j ACCEPT COMMIT Completed on Fri Sep 20 04:20:33 2013 Generated by webmin *mangle :FORWARD ACCEPT [0:0] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :PREROUTING ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] COMMIT Completed Generated by webmin *nat :OUTPUT ACCEPT [0:0] :PREROUTING ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] COMMIT ====DNS settings vi /var/named/accswift.com.host $ttl 38400 @ IN SOA ns1.accswift.com. root.ns1.accswift.com. ( 1382936091 10800 3600 604800 38400 ) @ IN NS ns1.accswift.com. @ IN NS ns2.accswift.com. accswift.com. IN A 203.78.164.18 accswift.com. IN NS ns1.accswift.com. www.accswift.com. IN A 203.78.164.18 ftp.accswift.com. IN A 203.78.164.18 m.accswift.com. IN A 203.78.164.18 ns1 IN A 203.78.164.18 ns2 IN A 203.78.164.18 localhost.accswift.com. IN A 127.0.0.1 webmail.accswift.com. IN A 203.78.164.18 admin.accswift.com. IN A 203.78.164.18 mail.accswift.com. IN A 203.78.164.18 accswift.com. IN MX 5 mail.accswift.com. ====Named.conf vi /etc/named.conf options { listen-on port 53 { 127.0.0.1; }; listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query { any; }; recursion yes; allow-recursion { localhost; 192.168.2.0/24; }; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; forward first; forwarders {192.168.1.1;}; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; zone "."
IN { type hint; file "named.ca"; }; include "/etc/named.rfc1912.zones"; include "/etc/named.root.key"; zone "accswift.com" { type master; file "/var/named/accswift.com.hosts"; allow-transfer { 127.0.0.1; localnets; 208.73.211.69; }; }; zone "ns1.accswift.com" { type master; file "/var/named/ns1.accswift.com.hosts"; }; ==================================== Can anybody find any flaw in this? I am still unable to reach accswift.com from any other ISP, though it is browsable from within the same network. Thanks in advance.
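    One observation, offered as a hypothesis rather than a confirmed fix: the posted named.conf restricts listen-on to 127.0.0.1, so named will only answer queries arriving on the loopback interface and will ignore queries reaching the public address – consistent with the dig above succeeding against 127.0.0.1 while external resolvers get nothing. A minimal sketch of the relevant option:

        options {
                listen-on port 53 { any; };   # was: { 127.0.0.1; } -- loopback only
                ...
        };

    After a service named restart, running dig @203.78.164.18 accswift.com from an external host should return the same answer section as above if this was indeed the cause.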

    Read the article

  • How to correctly use DERIVE or COUNTER in munin plugins

    - by Johan
    I'm using munin to monitor my server. I've been able to write plugins for it, but only if the graph type is GAUGE. When I try COUNTER or DERIVE, no data is logged or graphed. The plugin i'm currently stuck on is for monitoring bandwidth usage, and is as follows: /etc/munin/plugins/bandwidth2 #!/bin/sh if [ "$1" = "config" ]; then echo 'graph_title Bandwidth Usage 2' echo 'graph_vlabel Bandwidth' echo 'graph_scale no' echo 'graph_category network' echo 'graph_info Bandwidth usage.' echo 'used.label Used' echo 'used.info Bandwidth used so far this month.' echo 'used.type DERIVE' echo 'used.min 0' echo 'remain.label Remaining' echo 'remain.info Bandwidth remaining this month.' echo 'remain.type DERIVE' echo 'remain.min 0' exit 0 fi cat /var/log/zen.log The contents of /var/log/zen.log are: used.value 61.3251953125 remain.value 20.0146484375 And the resulting database is: <!-- Round Robin Database Dump --><rrd> <version> 0003 </version> <step> 300 </step> <!-- Seconds --> <lastupdate> 1269936605 </lastupdate> <!-- 2010-03-30 09:10:05 BST --> <ds> <name> 42 </name> <type> DERIVE </type> <minimal_heartbeat> 600 </minimal_heartbeat> <min> 0.0000000000e+00 </min> <max> NaN </max> <!-- PDP Status --> <last_ds> 61.3251953125 </last_ds> <value> NaN </value> <unknown_sec> 5 </unknown_sec> </ds> <!-- Round Robin Archives --> <rra> <cf> AVERAGE </cf> <pdp_per_row> 1 </pdp_per_row> <!-- 300 seconds --> <params> <xff> 5.0000000000e-01 </xff> </params> <cdp_prep> <ds> <primary_value> NaN </primary_value> <secondary_value> NaN </secondary_value> <value> NaN </value> <unknown_datapoints> 0 </unknown_datapoints> </ds> </cdp_prep> <database> <!-- 2010-03-28 09:15:00 BST / 1269764100 --> <row><v> NaN </v></row> <!-- 2010-03-28 09:20:00 BST / 1269764400 --> <row><v> NaN </v></row> <!-- 2010-03-28 09:25:00 BST / 1269764700 --> <row><v> NaN </v></row> <snip> The value for last_ds is correct, it just doesn't seem to make it into the actual database. If I change DERIVE to GAUGE, it works as expected. munin-run bandwidth2 outputs the contents of /var/log/zen.log I've been all over the (sparse) docs for munin plugins, and can't find my mistake. Modifying an existing plugin didn't work for me either.
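    A possible explanation, hedged because it rests on rrdtool's documented behaviour rather than on this exact setup: COUNTER and DERIVE data sources accept only integer input, so fractional values like 61.3251953125 are never stored (GAUGE accepts them, which would explain why GAUGE works). Note also that DERIVE graphs the rate of change per second, not the raw value. A sketch of a fetch section that emits integers – the 1048576 scale factor is a hypothetical GB-to-KB conversion, adjust to your real units:

        #!/bin/sh
        # scale fractional GB readings to integer KB for a DERIVE data source
        while read key value; do
            echo "$key $(echo "$value * 1048576 / 1" | bc)"   # bc truncates on the final division
        done < /var/log/zen.log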

    Read the article

  • Again WPA connection problem even after changing to the latest version… please help

    - by Renjith G
    I am using hostapd, wireless tools with madwifi for my wireless ap in my board. The WEP, WPA-PSK connections and communications between my board with linux and my desktop PC, Windows XP SP2 (with Olitec USB wireless) are fine. But when I configured the WPA type, the connection seems established but shows the status "TKIP - Key Absent" in the security dialog box. Anyone faced this problem? Am attaching the conf files and the connection status. In the AP side am complaining . I am using the in built radius server conf with the hostapd 0.4.7 hostapd.conf interface=ath0 driver=madwifi logger_syslog=0 logger_syslog_level=0 logger_stdout=0 logger_stdout_level=0 debug=0 eapol_key_index_workaround=1 dump_file=/tmp/hostapd.dump.0.0 ssid=Renjith G wpa wpa=1 wpa_passphrase=mypassphrase wpa_key_mgmt=WPA-EAP wpa_pairwise=TKIP CCMP wpa_group_rekey=600 macaddr_acl=2 /* commented */ ieee8021x=1 /* commented */ eap_authenticator=1 own_ip_addr=172.16.25.1 nas_identifier=renjithg.com auth_server_addr=172.16.25.1 auth_server_port=1812 auth_server_shared_secret=key1 ca_cert=/flash1/ca.crt server_cert=/flash1/server.crt eap_user_file=/etc/hostapd.eap_user hostapd.eap_user "*@renjithg.com" TLS And the commands am using are wlanconfig ath0 create wlandev wifi0 wlanmode ap iwconfig ath0 essid Renjith channel 6 ifconfig ath0 192.168.25.1 netmask 255.255.255.0 up hostapd -ddd /etc/hostapd.conf Please correct if am wrong .. Also am getting the debug messages on my AP when am connecting in my windows machine through WPA ~/wlanexe # ./hostapd -ddd /etc/hostapd.conf Configuration file: /etc/hostapd.conf Line 18: obsolete eap_authenticator used; this has been renamed to eap_server madwifi_set_iface_flags: dev_up=0 Using interface ath0 with hwaddr 00:0b:6b:33:8c:30 and ssid 'Renjith G wpa' madwifi_set_ieee8021x: enabled=1 madwifi_configure_wpa: group key cipher=1 madwifi_configure_wpa: pairwise key ciphers=0xa madwifi_configure_wpa: key management algorithms=0x1 madwifi_configure_wpa: rsn capabilities=0x0 madwifi_configure_wpa: enable WPA= 0x1 madwifi_set_iface_flags: dev_up=1 madwifi_set_privacy: enabled=1 WPA: group state machine entering state GTK_INIT GMK - hexdump(len=32): 9c 77 cd 38 5a 60 3b 16 8a 22 90 e8 65 b3 c2 86 40 5c be c3 dd 84 3e df 58 1d 16 61 1d 13 d1 f2 GTK - hexdump(len=32): 02 78 d7 d3 5d 15 e3 89 9c 62 a8 fe 8a 0f 40 28 ba dc cd bc 07 f4 59 88 1c 08 84 2b 49 3d e2 32 WPA: group state machine entering state SETKEYSDONE madwifi_set_key: alg=TKIP addr=00:00:00:00:00:00 key_idx=1 Flushing old station entries madwifi_sta_deauth: addr=ff:ff:ff:ff:ff:ff reason_code=3 Deauthenticate all stations l2_packet_receive - recvfrom: Network is down Wireless event: cmd=0x8c03 len=20 New STA WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state INITIALIZE madwifi_del_key: addr=00:0a:78:a0:0b:09 key_idx=0 WPA: 00:0a:78:a0:0b:09 WPA_PTK_GROUP entering state IDLE WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state AUTHENTICATION WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state AUTHENTICATION2 IEEE 802.1X: 4 bytes from 00:0a:78:a0:0b:09 IEEE 802.1X: version=1 type=1 length=0 Wireless event: cmd=0x8c04 len=20 madwifi_del_key: addr=00:0a:78:a0:0b:09 key_idx=0 ioctl[unknown???]: Invalid argument WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state DISCONNECTED WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state INITIALIZE madwifi_del_key: addr=00:0a:78:a0:0b:09 key_idx=0 ioctl[unknown???]: Invalid argument Wireless event: cmd=0x8c03 len=20 New STA WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state INITIALIZE madwifi_del_key: addr=00:0a:78:a0:0b:09 
key_idx=0 WPA: 00:0a:78:a0:0b:09 WPA_PTK_GROUP entering state IDLE WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state AUTHENTICATION WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state AUTHENTICATION2 IEEE 802.1X: 4 bytes from 00:0a:78:a0:0b:09 IEEE 802.1X: version=1 type=1 length=0 < Register Fail < Register Fail Wireless event: cmd=0x8c04 len=20 madwifi_del_key: addr=00:0a:78:a0:0b:09 key_idx=0 ioctl[unknown???]: Invalid argument WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state DISCONNECTED WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state INITIALIZE madwifi_del_key: addr=00:0a:78:a0:0b:09 key_idx=0 ioctl[unknown???]: Invalid argument Wireless event: cmd=0x8c03 len=20 New STA WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state INITIALIZE madwifi_del_key: addr=00:0a:78:a0:0b:09 key_idx=0 WPA: 00:0a:78:a0:0b:09 WPA_PTK_GROUP entering state IDLE WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state AUTHENTICATION WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state AUTHENTICATION2 IEEE 802.1X: 4 bytes from 00:0a:78:a0:0b:09 IEEE 802.1X: version=1 type=1 length=0 NOW am getting the following error message with latest tools. *This is the latest error messages..please refer this only..* ~/wlanexe # ./hostapd -ddd /etc/hostapd.conf TLS: Trusted root certificate(s) loaded madwifi_set_iface_flags: dev_up=0 madwifi_set_privacy: enabled=0 BSS count 1, BSSID mask ff:ff:ff:ff:ff:ff (0 bits) Flushing old station entries madwifi_sta_deauth: addr=ff:ff:ff:ff:ff:ff reason_code=3 ioctl[IEEE80211_IOCTL_SETMLME]: Invalid argument madwifi_sta_deauth: Failed to deauth STA (addr ff:ff:ff:ff:ff:ff reason 3) Could not connect to kernel driver. Deauthenticate all stations madwifi_sta_deauth: addr=ff:ff:ff:ff:ff:ff reason_code=2 ioctl[IEEE80211_IOCTL_SETMLME]: Invalid argument madwifi_sta_deauth: Failed to deauth STA (addr ff:ff:ff:ff:ff:ff reason 2) madwifi_set_privacy: enabled=0 madwifi_del_key: addr=00:00:00:00:00:00 key_idx=0 madwifi_del_key: addr=00:00:00:00:00:00 key_idx=1 madwifi_del_key: addr=00:00:00:00:00:00 key_idx=2 madwifi_del_key: addr=00:00:00:00:00:00 key_idx=3 Using interface ath0 with hwaddr 00:0b:6b:33:8c:30 and ssid 'RenjithGwpa' SSID - hexdump_ascii(len=11): 52 65 6e 6a 69 74 68 47 77 70 61 RenjithGwpa PSK (ASCII passphrase) - hexdump_ascii(len=12): 6d 79 70 61 73 73 70 68 72 61 73 65 mypassphrase PSK (from passphrase) - hexdump(len=32): a6 55 3e 76 94 8b d9 81 a1 22 5e 24 29 83 33 86 11 a8 7e 93 19 7c a9 ab ab cc 12 58 37 e5 35 b6 RADIUS local address: 172.16.25.1:1024 madwifi_set_ieee8021x: enabled=1 madwifi_configure_wpa: group key cipher=1 madwifi_configure_wpa: pairwise key ciphers=0xa madwifi_configure_wpa: key management algorithms=0x1 madwifi_configure_wpa: rsn capabilities=0x0 madwifi_configure_wpa: enable WPA=0x1 WPA: group state machine entering state GTK_INIT (VLAN-ID 0) GMK - hexdump(len=32): [REMOVED] GTK - hexdump(len=32): [REMOVED] WPA: group state machine entering state SETKEYSDONE (VLAN-ID 0) madwifi_set_key: alg=TKIP addr=00:00:00:00:00:00 key_idx=1 madwifi_set_privacy: enabled=1 madwifi_set_iface_flags: dev_up=1 ath0: Setup of interface done. l2_packet_receive - recvfrom: Network is down Wireless event: cmd=0x8b1a len=24 Wireless event: cmd=0x8c03 len=20 New STA ioctl[unknown???]: Invalid argument madwifi_process_wpa_ie: Failed to get WPA/RSN IE Failed to get WPA/RSN information element. 
Data frame from not associated STA 00:0a:78:a0:0b:09 Wireless event: cmd=0x8c04 len=20 Wireless event: cmd=0x8c03 len=20 New STA ioctl[unknown???]: Invalid argument madwifi_process_wpa_ie: Failed to get WPA/RSN IE Failed to get WPA/RSN information element. Data frame from not associated STA 00:0a:78:a0:0b:09 Data frame from not associated STA 00:0a:78:a0:0b:09 Data frame from not associated STA 00:0a:78:a0:0b:09 Wireless event: cmd=0x8c04 len=20 Wireless event: cmd=0x8c03 len=20 New STA ioctl[unknown???]: Invalid argument madwifi_process_wpa_ie: Failed to get WPA/RSN IE Failed to get WPA/RSN information element. Data frame from not associated STA 00:0a:78:a0:0b:09
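    For what it is worth – a sketch under stated assumptions, not a verified fix: the very first log line complains that eap_authenticator is obsolete and has been renamed to eap_server, and WPA-EAP with the integrated TLS server also needs the server's private key (private_key=), which the posted config does not show. A minimal core of a WPA-EAP hostapd.conf along those lines (the file paths are the ones from the post; treat the rest as assumptions):

        interface=ath0
        driver=madwifi
        ssid=RenjithGwpa
        wpa=1
        wpa_key_mgmt=WPA-EAP
        wpa_pairwise=TKIP CCMP
        ieee8021x=1
        eap_server=1                         # renamed from eap_authenticator
        eap_user_file=/etc/hostapd.eap_user
        ca_cert=/flash1/ca.crt
        server_cert=/flash1/server.crt
        private_key=/flash1/server.key       # assumption: key matching server.crt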

    Read the article

  • Linux policy routing - packets not coming back

    - by Bugsik
    I am trying to set up policy routing on my home server. My network looks like this:

        Host                  routed VPN gateway        Internet link through VPN
        192.168.0.35/24  ---> 192.168.0.5/24       ---> 192.168.0.1 DSL router
                              10.200.2.235/22 .... .... 10.200.0.1 VPN server

    The traffic from 192.168.0.32/27 should be, and is, routed through the VPN. I wanted to define some routing policies to route some traffic from 192.168.0.5 itself through the VPN as well - for a start, the traffic of the user with uid 2000. Policy routing is done using the iptables MARK target and an ip rule fwmark lookup.

    The problem: when connecting as user 2000 from 192.168.0.5, tcpdump shows outgoing packets, but nothing comes back. Traffic from 192.168.0.35 works fine (there I am not using fwmark but a src policy).

    Here is my VPN gateway setup:

        # uname -a
        Linux placebo 3.2.0-34-generic #53-Ubuntu SMP Thu Nov 15 10:49:02 UTC 2012 i686 i686 i386 GNU/Linux
        # iptables -V
        iptables v1.4.12
        # ip -V
        ip utility, iproute2-ss111117

    iptables rules (all policies in the filter table are ACCEPT):

        # iptables -t mangle -nvL
        Chain PREROUTING (policy ACCEPT 770K packets, 314M bytes)
         pkts bytes target     prot opt in     out     source               destination

        Chain INPUT (policy ACCEPT 767K packets, 312M bytes)
         pkts bytes target     prot opt in     out     source               destination

        Chain FORWARD (policy ACCEPT 5520 packets, 1920K bytes)
         pkts bytes target     prot opt in     out     source               destination

        Chain OUTPUT (policy ACCEPT 782K packets, 901M bytes)
         pkts bytes target     prot opt in     out     source               destination
           74  4707 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            owner UID match 2000 MARK set 0x3

        Chain POSTROUTING (policy ACCEPT 788K packets, 903M bytes)
         pkts bytes target     prot opt in     out     source               destination

        # iptables -t nat -nvL
        Chain PREROUTING (policy ACCEPT 996 packets, 51172 bytes)
         pkts bytes target     prot opt in     out     source               destination

        Chain INPUT (policy ACCEPT 7 packets, 432 bytes)
         pkts bytes target     prot opt in     out     source               destination

        Chain OUTPUT (policy ACCEPT 1364 packets, 112K bytes)
         pkts bytes target     prot opt in     out     source               destination

        Chain POSTROUTING (policy ACCEPT 2302 packets, 160K bytes)
         pkts bytes target     prot opt in     out     source               destination
          119  7588 MASQUERADE all  --  *      vpn     0.0.0.0/0            0.0.0.0/0

    Routing:

        # ip addr show
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master lan state UNKNOWN qlen 1000
            link/ether 00:40:63:f9:c3:8f brd ff:ff:ff:ff:ff:ff
               valid_lft forever preferred_lft forever
        3: lan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
            link/ether 00:40:63:f9:c3:8f brd ff:ff:ff:ff:ff:ff
            inet 192.168.0.5/24 brd 192.168.0.255 scope global lan
            inet6 fe80::240:63ff:fef9:c38f/64 scope link
               valid_lft forever preferred_lft forever
        4: vpn: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 100
            link/none
            inet 10.200.2.235/22 brd 10.200.3.255 scope global vpn

        # ip rule show
        0:      from all lookup local
        32764:  from all fwmark 0x3 lookup VPN
        32765:  from 192.168.0.32/27 lookup VPN
        32766:  from all lookup main
        32767:  from all lookup default

        # ip route show table VPN
        default via 10.200.0.1 dev vpn
        10.200.0.0/22 dev vpn  proto kernel  scope link  src 10.200.2.235
        192.168.0.0/24 dev lan  proto kernel  scope link  src 192.168.0.5

        # ip route show
        default via 192.168.0.1 dev lan  metric 100
        10.200.0.0/22 dev vpn  proto kernel  scope link  src 10.200.2.235
        192.168.0.0/24 dev lan  proto kernel  scope link  src 192.168.0.5

    TCP dump showing no traffic coming back when the connection is made from 192.168.0.5 as user 2000:

        # tcpdump -i vpn
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on vpn, link-type RAW (Raw IP), capture size 65535 bytes

        ### Traffic from user 2000 on 192.168.0.5 ###
        10:19:05.629985 IP 10.200.2.235.37291 > 10.100-78-194.akamai.com.http: Flags [S], seq 2868799562, win 14600, options [mss 1460,sackOK,TS val 6887764 ecr 0,nop,wscale 4], length 0
        10:19:21.678001 IP 10.200.2.235.37291 > 10.100-78-194.akamai.com.http: Flags [S], seq 2868799562, win 14600, options [mss 1460,sackOK,TS val 6891776 ecr 0,nop,wscale 4], length 0

        ### Traffic from 192.168.0.35 ###
        10:23:12.066174 IP 10.200.2.235.49247 > 10.100-78-194.akamai.com.http: Flags [S], seq 2294159276, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 557451322 ecr 0,sackOK,eol], length 0
        10:23:12.265640 IP 10.100-78-194.akamai.com.http > 10.200.2.235.49247: Flags [S.], seq 2521908813, ack 2294159277, win 14480, options [mss 1367,sackOK,TS val 388565772 ecr 557451322,nop,wscale 1], length 0
        10:23:12.276573 IP 10.200.2.235.49247 > 10.100-78-194.akamai.com.http: Flags [.], ack 1, win 8214, options [nop,nop,TS val 557451534 ecr 388565772], length 0
        10:23:12.293030 IP 10.200.2.235.49247 > 10.100-78-194.akamai.com.http: Flags [P.], seq 1:480, ack 1, win 8214, options [nop,nop,TS val 557451552 ecr 388565772], length 479
        10:23:12.574773 IP 10.100-78-194.akamai.com.http > 10.200.2.235.49247: Flags [.], ack 480, win 7776, options [nop,nop,TS val 388566081 ecr 557451552], length 0
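
    For anyone reproducing this setup, the policy routing boils down to the short sketch below. The rp_filter lines are an assumption: strict reverse-path filtering on the vpn interface is a common reason marked packets go out but replies are silently dropped, which matches the symptom here, though it is not confirmed as the cause on this particular box.

        # Mark locally generated packets from uid 2000 and route them via
        # table VPN (assumes VPN is defined in /etc/iproute2/rt_tables)
        iptables -t mangle -A OUTPUT -m owner --uid-owner 2000 -j MARK --set-mark 3
        ip rule add fwmark 3 table VPN
        ip route flush cache

        # Assumption: loosen reverse-path filtering so replies arriving on vpn
        # for marked flows are not discarded by the kernel's rp_filter check
        sysctl -w net.ipv4.conf.vpn.rp_filter=2
        sysctl -w net.ipv4.conf.all.rp_filter=2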

    Read the article

  • Gratuitous CRLF in Subject: line - why is it there, and is it legal?

    - by MadHatter
    I'm running into a problem with a NAGIOS system sending emails to a popular email-to-SMS service. The email-to-SMS service takes emails with text in the Subject: line, and sends them on to the mobile number encoded in the To: field. So far so good. Sadly, sendmail (and postfix before it) seems to be inserting a gratuitous CRLF into the (necessarily long) Subject: line, and that's causing my SMS messages to be truncated at the CRLF if and only if the Subject: line contains one or more colons past the gratuitous CRLF.

    I am confident that the messages are being created correctly, but just to be sure, here's me creating a completely noddy test message to myself, with a long Subject: line:

        echo "foo" | mail -s "1234567 101234567 201234567 301234567 401234567 501234567 601234567 701234567 801234567 90123456789" [email protected]

    Note there's no extra colon in this Subject: line; all I'm doing here is showing that an extra CRLF is inserted on the wire. Here's the result of sudo ngrep -x port 25:

        44 61 74 65 3a 20 46 72    69 2c 20 33 31 20 4d 61    Date: Fri, 31 Ma
        79 20 32 30 31 33 20 31    30 3a 34 33 3a 35 35 20    y 2013 10:43:55
        2b 30 31 30 30 0d 0a 54    6f 3a 20 72 65 61 70 65    +0100..To: reape
        72 40 74 65 61 70 61 72    74 79 2e 6e 65 74 0d 0a    [email protected]..
        53 75 62 6a 65 63 74 3a    20 31 32 33 34 35 36 37    Subject: 1234567
        20 31 30 31 32 33 34 35    36 37 20 32 30 31 32 33     101234567 20123
        34 35 36 37 20 33 30 31    32 33 34 35 36 37 20 34    4567 301234567 4
        30 31 32 33 34 35 36 37    20 35 30 31 32 33 34 35    01234567 5012345
        36 37 0d 0a 20 36 30 31    32 33 34 35 36 37 20 37    67.. 601234567 7
        30 31 32 33 34 35 36 37    20 38 30 31 32 33 34 35    01234567 8012345
        36 37 20 39 30 31 32 33    34 35 36 37 38 39 0d 0a    67 90123456789..
        55 73 65 72 2d 41 67 65    6e 74 3a 20 48 65 69 72    User-Agent: Heir
        6c 6f 6f 6d 20 6d 61 69    6c 78 20 31 32 2e 34 20    loom mailx 12.4
        37 2f 32 39 2f 30 38 0d    0a 4d 49 4d 45 2d 56 65    7/29/08..MIME-Ve
        72 73 69 6f 6e 3a 20 31    2e 30 0d 0a 43 6f 6e 74    rsion: 1.0..Cont
        65 6e 74 2d 54 79 70 65    3a 20 74 65 78 74 2f 70    ent-Type: text/p
        6c 61 69 6e 3b 20 63 68    61 72 73 65 74 3d 75 73    lain; charset=us

    About half way down, between the 501234567 and the 601234567 in the original Subject: header, you can see a CRLF being inserted (0x0d 0x0a in the hex dump on the left, .. in the plain text on the right). The receiving MTA seems happy to post-process this, and when I look at the on-disc stored mail at the receiving end, I see only a LF (0x0a) in the Subject: line, and the line is parsed correctly and in its entirety by, e.g., alpine. Nevertheless, the CRLF is there on the wire, and between me and the (excellent) email-to-SMS support people, we've established that these CRLFs are the cause of the problem.

    So my question is: is it lawful for an MTA to insert a gratuitous CRLF on the wire? If it is, and I can prove it, then it's the email-to-SMS house's problem, because they are being intolerant. If it isn't, or it is but I can't prove it, then it becomes my problem, so an answer with references would be most useful.

    Edit: I can now come clean that the email-to-SMS service in question is kapow. Once this problem was explained to them, they got it, worked with me to develop and test a fix, and have deployed the fix. My long subject lines with colons in them now get relayed correctly into SMSes.
I don't normally trumpet individual companies, especially not on SF, but I thought it worthy of note that kapow Did The Right Thing. (Disclaimer: I have no connection with kapow except as a paying customer who's happy about the way they dealt with his problem.)
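
    For reference, RFC 5322 section 2.2.3 ("Long Header Fields") explicitly permits folding a long header field by inserting CRLF before existing whitespace - exactly the 0d 0a 20 sequence visible in the dump above - and expects receivers to unfold before interpreting the field, so the folding itself is lawful and the intolerant consumer is at fault. A minimal sketch of unfolding on the receiving side, assuming formail from the procmail package is available (message.eml is a hypothetical file name):

        # What a folded Subject: looks like on the wire: a CRLF followed by
        # whitespace continues the same header field (RFC 5322 folding)
        printf 'Subject: 1234567 101234567 201234567 301234567 401234567 501234567\r\n 601234567 701234567 801234567 90123456789\r\n'

        # formail -c concatenates continued header fields back onto a single
        # line, undoing the fold before a line-oriented consumer sees it
        formail -c < message.eml > unfolded.eml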

    Read the article
