Search Results

Search found 5279 results on 212 pages for 'execution counter'.

  • Long-running Database Query

    - by JamesMLV
    I have a long-running SQL Server 2005 query that I have been hoping to optimize. When I look at the actual execution plan, it says a Clustered Index Seek has 66% of the cost. Execution plan snippet:

    <RelOp AvgRowSize="31" EstimateCPU="0.0113754" EstimateIO="0.0609028" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="10198.5" LogicalOp="Clustered Index Seek" NodeId="16" Parallel="false" PhysicalOp="Clustered Index Seek" EstimatedTotalSubtreeCost="0.0722782">
      <OutputList>
        <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="quoteDate" />
        <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="price" />
        <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
      </OutputList>
      <RunTimeInformation>
        <RunTimeCountersPerThread Thread="0" ActualRows="1067" ActualEndOfScans="1" ActualExecutions="1" />
      </RunTimeInformation>
      <IndexScan Ordered="true" ScanDirection="FORWARD" ForcedIndex="false" NoExpandHint="false">
        <DefinedValues>
          <DefinedValue>
            <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="quoteDate" />
          </DefinedValue>
          <DefinedValue>
            <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="price" />
          </DefinedValue>
          <DefinedValue>
            <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
          </DefinedValue>
        </DefinedValues>
        <Object Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Index="[_dta_index_Indices_14_320720195__K5_K2_K1_3]" Alias="[I]" />
        <SeekPredicates>
          <SeekPredicate>
            <Prefix ScanType="EQ">
              <RangeColumns>
                <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="HedgeProduct" ComputedColumn="true" />
              </RangeColumns>
              <RangeExpressions>
                <ScalarOperator ScalarString="(1)">
                  <Const ConstValue="(1)" />
                </ScalarOperator>
              </RangeExpressions>
            </Prefix>
            <StartRange ScanType="GE">
              <RangeColumns>
                <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
              </RangeColumns>
              <RangeExpressions>
                <ScalarOperator ScalarString="[@StartMonth]">
                  <Identifier>
                    <ColumnReference Column="@StartMonth" />
                  </Identifier>
                </ScalarOperator>
              </RangeExpressions>
            </StartRange>
            <EndRange ScanType="LE">
              <RangeColumns>
                <ColumnReference Database="[wf_1]" Schema="[dbo]" Table="[Indices]" Alias="[I]" Column="tenure" />
              </RangeColumns>
              <RangeExpressions>
                <ScalarOperator ScalarString="[@EndMonth]">
                  <Identifier>
                    <ColumnReference Column="@EndMonth" />
                  </Identifier>
                </ScalarOperator>
              </RangeExpressions>
            </EndRange>
          </SeekPredicate>
        </SeekPredicates>
      </IndexScan>
    </RelOp>

    From this, does anyone see an obvious problem that would be causing this to take so long?
    Here is the query:

    (SELECT quotedate, tenure, price, ActualVolume, HedgePortfolioValue, Price AS UnhedgedPrice,
            ((ActualVolume*Price - HedgePortfolioValue)/ActualVolume) AS HedgedPrice
     FROM (
        SELECT [quoteDate], [price], tenure,
               isnull(wf_1.[Risks].[HedgePortValueAsOfDate2](1,tenureMonth,quotedate,price),0) as HedgePortfolioValue,
               [TotalOperatingGasVolume] as ActualVolume
        FROM [wf_1].[dbo].[Indices] I
        inner join (
            SELECT DISTINCT tenureMonth
            FROM [wf_1].[Risks].[KnowRiskTrades]
            WHERE HedgeProduct = 1 AND portfolio <> 'Natural Gas Hedge Transactions'
        ) B ON I.tenure = B.tenureMonth
        inner join (
            SELECT [Month], [TotalOperatingGasVolume]
            FROM [wf_1].[Risks].[ActualGasVolumes]
        ) C ON C.[Month] = B.tenureMonth
        WHERE HedgeProduct = 1
          AND quoteDate >= dateadd(day, -3*365, tenureMonth)
          AND quoteDate <= dateadd(day, -3, tenureMonth)
     ) A
    )
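
    One thing the plan snippet suggests checking before tuning the seek itself: the runtime counters show the seek returned only 1,067 actual rows, while the query invokes the scalar UDF [HedgePortValueAsOfDate2] once per row, and scalar UDF cost is largely invisible in a plan's cost percentages, so the "66%" seek may be a red herring. A quick experiment is to time the row retrieval with and without the UDF (a sketch, not the poster's code; the parameter values are placeholders):

    DECLARE @StartMonth datetime, @EndMonth datetime;
    SET @StartMonth = '20100101';  -- placeholder values for the test
    SET @EndMonth   = '20100601';

    SET STATISTICS TIME ON;

    -- Baseline: the seek alone.
    SELECT I.quoteDate, I.price, I.tenure
    FROM [wf_1].[dbo].[Indices] I
    WHERE I.HedgeProduct = 1
      AND I.tenure BETWEEN @StartMonth AND @EndMonth;

    -- Same rows, plus the per-row scalar UDF. If this run is dramatically
    -- slower, the UDF, not the clustered index seek, is the bottleneck.
    SELECT I.quoteDate, I.price, I.tenure,
           isnull(wf_1.[Risks].[HedgePortValueAsOfDate2](1, I.tenure, I.quoteDate, I.price), 0) AS HedgePortfolioValue
    FROM [wf_1].[dbo].[Indices] I
    WHERE I.HedgeProduct = 1
      AND I.tenure BETWEEN @StartMonth AND @EndMonth;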

    Read the article

  • I can't use Spring filters in servlet-context XML

    - by gotch4
    For some reason, both Eclipse and Spring can't find the filter tag (there is even a red mark)... What's wrong?

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:mvc="http://www.springframework.org/schema/mvc"
        xmlns:context="http://www.springframework.org/schema/context"
        xmlns:aop="http://www.springframework.org/schema/aop"
        xmlns:tx="http://www.springframework.org/schema/tx"
        xmlns:p="http://www.springframework.org/schema/p"
        xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
            http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd
            http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd
            http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
            http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd">

      <bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor"></bean>

      <bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
        <property name="driverClassName" value="com.mysql.jdbc.Driver" />
        <property name="url" value="jdbc:mysql://localhost/jacciseweb" />
        <property name="username" value="root" />
        <property name="password" value="siussi" />
      </bean>

      <bean id="mySessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
        <property name="dataSource" ref="myDataSource" />
        <property name="annotatedClasses">
          <list>
            <value>it.jsoftware.jacciseweb.beans.Utente</value>
            <value>it.jsoftware.jacciseweb.beans.Ordine</value>
          </list>
        </property>
        <property name="hibernateProperties">
          <props>
            <prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
            <prop key="hibernate.show_sql">true</prop>
            <prop key="hibernate.hbm2ddl.auto">update</prop>
            <prop key="hibernate.cache.provider_class">org.hibernate.cache.NoCacheProvider</prop>
          </props>
        </property>
      </bean>

      <filter>
        <filter-name>hibernateFilter</filter-name>
        <filter-class>org.springframework.orm.hibernate3.support.OpenSessionInViewFilter</filter-class>
        <init-param>
          <param-name>singleSession</param-name>
          <param-value>true</param-value>
        </init-param>
        <init-param>
          <param-name>sessionFactoryBeanName</param-name>
          <param-value>mySessionFactory</param-value>
        </init-param>
      </filter>

      <!-- <aop:config> -->
      <!-- <aop:pointcut id="productServiceMethods" -->
      <!-- expression="execution(* product.ProductService.*(..))" /> -->
      <!-- <aop:advisor advice-ref="txAdvice" pointcut-ref="productServiceMethods" /> -->
      <!-- </aop:config> -->

      <bean id="acciseHibernateDao" class="it.jsoftware.jacciseweb.model.JAcciseWebManagementDaoHibernate">
        <property name="sessionFactory" ref="mySessionFactory" />
      </bean>

      <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
        <property name="sessionFactory" ref="mySessionFactory" />
      </bean>

      <tx:annotation-driven />

      <bean id="acciseService" class="it.jsoftware.jacciseweb.model.JAcciseWebManagementServiceImpl">
        <property name="dao" ref="acciseHibernateDao" />
      </bean>

      <context:component-scan base-package="it.jsoftware.jacciseweb.controllers"></context:component-scan>
      <mvc:annotation-driven />
      <bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter" p:synchronizeOnSession="true" />
      <bean class="org.springframework.web.servlet.handler.BeanNameUrlHandlerMapping" />
      <mvc:resources mapping="/resources/**" location="/resources/" />

      <!-- not needed, it's annotated -->
      <!-- <bean name="/accise" class="it.jsoftware.jacciseweb.controllers.MainController"> </bean> -->
    </beans>

    In particular, it says "filter" is invalid content.
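
    The likely explanation for the red mark: <filter> is defined by the Servlet specification's web-app schema, not by Spring's beans schema, so it really is "invalid content" inside a <beans> document. A servlet filter such as OpenSessionInViewFilter is normally declared in WEB-INF/web.xml instead, along these lines (a sketch; the /* URL pattern is an assumption):

    <!-- WEB-INF/web.xml -->
    <filter>
      <filter-name>hibernateFilter</filter-name>
      <filter-class>org.springframework.orm.hibernate3.support.OpenSessionInViewFilter</filter-class>
      <init-param>
        <param-name>singleSession</param-name>
        <param-value>true</param-value>
      </init-param>
      <init-param>
        <param-name>sessionFactoryBeanName</param-name>
        <param-value>mySessionFactory</param-value>
      </init-param>
    </filter>
    <filter-mapping>
      <filter-name>hibernateFilter</filter-name>
      <url-pattern>/*</url-pattern>
    </filter-mapping>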

    Read the article

  • Problem with threading in Java

    - by chetans
    In this program I want to stop both the GenerateImages and MoveImages threads, and then start those threads again from the beginning. Can anyone send me the solution? Here is the code:

    package Game;

    import java.applet.Applet;
    import java.awt.Color;
    import java.awt.Dimension;
    import java.awt.Graphics;
    import java.awt.Image;
    import java.awt.MediaTracker;
    import java.awt.event.KeyEvent;
    import java.awt.event.KeyListener;
    import java.net.MalformedURLException;
    import java.net.URL;

    public class ThreadInApplet extends Applet implements KeyListener {
      private static final long serialVersionUID = 1L;
      Image[] asteroidImage;
      Image spaceshipImage;
      String levelstr = "Easy Level";
      int[] XPos, YPos;
      int number = 0, XPosOfSpaceship, YPosOfSpaceship, NoOfObstacles = 5, speed = 1, level = 1, spaceBtnPressdCntr = 0;
      boolean gameStart = false, pauseGame = false, collideUp = false, collideDown = false, collideLeft = false, collideRight = false;
      private Image offScreenImage;
      private Dimension offScreenSize;
      private Graphics offScreenGraphics;
      Thread GenerateImages, MoveImages;

      public void init() {
        try {
          GenerateImages = new Thread() { // thread to create obstacles
            synchronized public void run() {
              for (int g = 0; g < NoOfObstacles; g++) {
                try {
                  sleep(1000);
                } catch (InterruptedException e) {
                  e.printStackTrace();
                }
                number++; // Temporary counter to count the no of obstacles created
              }
            }
          };
          MoveImages = new Thread() { // thread to move obstacles
            @SuppressWarnings("deprecation")
            synchronized public void run() {
              while (YPos[NoOfObstacles - 1] != 600) {
                pauseGame = false;
                if (collide() == true) {
                  GenerateImages.suspend();
                  repaint();
                } else
                  GenerateImages.resume();
                for (int l = 0; l < number; l++) {
                  if (collide() == false)
                    YPos[l]++;
                  else
                    GenerateImages.suspend();
                }
                repaint();
                try {
                  sleep(speed);
                } catch (InterruptedException e) {
                  e.printStackTrace();
                }
              }
              if (YPos[NoOfObstacles - 1] >= 600) { // level complete state
                level++;
                try {
                  levelUpdation(level);
                  System.out.println("aahe");
                } catch (MalformedURLException e) {
                  e.printStackTrace();
                }
                repaint();
              }
            }
          };
          initialPos();
          spaceshipImage = getImage(new URL(getCodeBase(), "images/space.png"));
          for (int i = 0; i < NoOfObstacles; i++) {
            asteroidImage[i] = getImage(new URL(getCodeBase(), "images/asteroid.png"));
            XPos[i] = (int) (Math.random() * 700);
            YPos[i] = 0;
          }
          MediaTracker tracker = new MediaTracker(this);
          for (int i = 0; i < NoOfObstacles; i++) {
            tracker.addImage(asteroidImage[i], 0);
          }
        } catch (MalformedURLException e) {
          e.printStackTrace();
        }
        setBackground(Color.black);
        addKeyListener(this);
      }

      // Sets initial positions of spaceship & obstacle images ------------------
      public void initialPos() throws MalformedURLException {
        asteroidImage = new Image[NoOfObstacles];
        XPos = new int[NoOfObstacles];
        YPos = new int[NoOfObstacles];
        XPosOfSpaceship = getWidth() / 2 - 35;
        YPosOfSpaceship = getHeight() - 100;
        collideUp = false;
        collideDown = false;
        collideLeft = false;
        collideRight = false;
      }

      // Level finished updations ------------------------------------------------
      @SuppressWarnings("deprecation")
      public void levelUpdation(int level) throws MalformedURLException {
        NoOfObstacles = NoOfObstacles + 20;
        speed = speed - 3;
        System.out.println(NoOfObstacles + " " + speed);
        pauseGame = true;
        initialPos();
        repaint();
      }

      // paint method of graphics to print the messages --------------------------
      public void paint(Graphics g) {
        g.setColor(Color.white);
        if (gameStart == false) {
          g.drawString("SPACE to start", (getWidth() / 2) - 15, getHeight() / 2);
          g.drawString(levelstr, (getWidth() / 2), getHeight() / 2 + 20);
        }
        if (level > 1) {
          if (level == 2)
            levelstr = "Medium Level";
          else
            levelstr = "High Level";
          g.drawString("Level Complete ", (getWidth() / 2) - 15, getHeight() / 2);
          g.drawString(levelstr, (getWidth() / 2), getHeight() / 2 + 20);
          //g.drawString("SPACE to start", (getWidth()/2)-15, getHeight()/2+40);
        }
        for (int n = 0; n < number; n++) {
          if (n > 0)
            g.drawImage(asteroidImage[n], XPos[n], YPos[n], this);
        }
        g.drawImage(spaceshipImage, XPosOfSpaceship, YPosOfSpaceship, this);
      }

      // update method of graphics to print the messages -------------------------
      @SuppressWarnings("deprecation")
      public void update(Graphics g) {
        Dimension d = size();
        if ((offScreenImage == null) || (d.width != offScreenSize.width) || (d.height != offScreenSize.height)) {
          offScreenImage = createImage(d.width, d.height);
          offScreenSize = d;
          offScreenGraphics = offScreenImage.getGraphics();
        }
        offScreenGraphics.clearRect(0, 0, d.width, d.height);
        paint(offScreenGraphics);
        g.drawImage(offScreenImage, 0, 0, null);
      }

      public void keyReleased(KeyEvent arg0) {}
      public void keyTyped(KeyEvent arg0) {}

      // Key pressed event to start game & to move the spaceship -----------------
      public void keyPressed(KeyEvent e) {
        if (e.getKeyCode() == 32) {
          spaceBtnPressdCntr++;
          if (spaceBtnPressdCntr == 1) {
            gameStart = true;
            GenerateImages.start();
            MoveImages.start();
          }
        }
        if (gameStart == true) {
          if (e.getKeyCode() == 37) {
            new Thread() {
              @SuppressWarnings("deprecation")
              synchronized public void run() {
                for (int cnt1 = 1; cnt1 <= 10; cnt1++) {
                  if (collide() == true && collideLeft == true) {
                    GenerateImages.suspend();
                  } else {
                    if (XPosOfSpaceship > 0) XPosOfSpaceship--;
                  }
                }
                repaint();
              }
            }.start();
          }
          if (e.getKeyCode() == 38) {
            new Thread() {
              @SuppressWarnings("deprecation")
              synchronized public void run() {
                for (int cnt1 = 1; cnt1 <= 10; cnt1++) {
                  if (collide() == true && collideUp == true) {
                    GenerateImages.suspend();
                  } else {
                    if (YPosOfSpaceship > 10) YPosOfSpaceship--;
                  }
                }
                repaint();
              }
            }.start();
          }
          if (e.getKeyCode() == 39) {
            new Thread() {
              @SuppressWarnings("deprecation")
              synchronized public void run() {
                for (int cnt1 = 1; cnt1 <= 10; cnt1++) {
                  if (collide() == true && collideRight == true) {
                    GenerateImages.suspend();
                  } else {
                    if (XPosOfSpaceship < 750) XPosOfSpaceship++;
                  }
                }
                repaint();
              }
            }.start();
          }
          if (e.getKeyCode() == 40) {
            new Thread() {
              @SuppressWarnings("deprecation")
              synchronized public void run() {
                for (int cnt1 = 1; cnt1 <= 10; cnt1++) {
                  if (collide() == true && collideDown == true) {
                    GenerateImages.suspend();
                  } else {
                    if (YPosOfSpaceship < 550) YPosOfSpaceship++;
                  }
                }
                repaint();
              }
            }.start();
          }
        }
      }

      // Collision checking between Spaceship & obstacles ------------------------
      public boolean collide() {
        int x1, y1, x2, y2, x3, y3, x4, y4; // coordinates of obstacles
        int a1, b1, a2, b2, a3, b3, a4, b4; // coordinates of spaceship
        a1 = XPosOfSpaceship;
        b1 = YPosOfSpaceship;
        a2 = a1 + spaceshipImage.getWidth(this);
        b2 = b1;
        a3 = a1;
        b3 = b1 + spaceshipImage.getHeight(this);
        a4 = a2;
        b4 = b3;
        for (int a = 0; a < number; a++) {
          x1 = XPos[a];
          y1 = YPos[a];
          x2 = x1 + asteroidImage[a].getWidth(this);
          y2 = y1;
          x3 = x1;
          y3 = y1 + asteroidImage[a].getHeight(this);
          x4 = x2;
          y4 = y3;
          /******** checking asteroid touch spaceship from up direction ********/
          if (y3 == b1 && x4 >= a1 && x4 <= a2) { collideUp = true; collideDown = false; collideLeft = false; collideRight = false; return (true); }
          if (y3 == b1 && x3 >= a1 && x3 <= a2) { collideUp = true; collideDown = false; collideLeft = false; collideRight = false; return (true); }
          /******** checking asteroid touch spaceship from left direction ******/
          if (x2 == a1 && y4 >= b1 && y4 <= b3) { collideLeft = true; collideUp = false; collideDown = false; collideRight = false; return (true); }
          if (x2 == a1 && y2 >= b1 && y2 <= b3) { collideLeft = true; collideUp = false; collideDown = false; collideRight = false; return (true); }
          /******** checking asteroid touch spaceship from right direction *****/
          if (x1 == a2 && y3 >= b2 && y3 <= b4) { collideRight = true; collideLeft = false; collideUp = false; collideDown = false; return (true); }
          if (x1 == a2 && y1 >= b2 && y1 <= b4) { collideRight = true; collideLeft = false; collideUp = false; collideDown = false; return (true); }
          /******** checking asteroid touch spaceship from down direction *****/
          if (y1 == b3 && x2 >= a3 && x2 <= a4) { collideDown = true; collideRight = false; collideLeft = false; collideUp = false; return (true); }
          if (y1 == b3 && x1 >= a3 && x1 <= a4) { collideDown = true; collideRight = false; collideLeft = false; collideUp = false; return (true); }
        }
        return (false);
      }
    }
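
    A note on the stop/restart part of the question: a java.lang.Thread can only be started once (calling start() again throws IllegalThreadStateException), and suspend()/resume() are deprecated precisely because they deadlock easily. The usual pattern is to let each run() loop watch a volatile flag and to create fresh Thread objects when restarting. A minimal sketch of that pattern, reusing names from the question (not a drop-in rewrite of the applet):

    // Sketch: cooperative stop/restart instead of suspend()/resume().
    private volatile boolean running = false;

    void startThreads() {
      running = true;
      GenerateImages = new Thread() { // fresh object on every (re)start
        public void run() {
          for (int g = 0; g < NoOfObstacles && running; g++) {
            try { Thread.sleep(1000); } catch (InterruptedException e) { return; }
            number++;
          }
        }
      };
      GenerateImages.start();
      // MoveImages would be recreated the same way, its loop also checking "running".
    }

    void stopThreads() throws InterruptedException {
      running = false;        // both run() loops observe this and exit
      GenerateImages.join();  // wait until the thread has actually finished
    }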

    Read the article

  • DirectX texture appears incorrectly

    - by numerical25
    I finally managed to get a texture onto a cube, but sadly it is appearing incorrectly. Anyway, I am not sure what it could be. My first guess is that my UV mapping or my vertex positioning is off; could someone check and make sure those are good? The first element is the vertex position, the second is the color, and the third is the UV texture coordinate.

    // Create vectors and put in vertices
    // Create vertex buffer
    VertexPos vertices[] = {
      // BACK SIDES
      { D3DXVECTOR3(-5.0f,  5.0f,  5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0) },
      { D3DXVECTOR3(-5.0f, -5.0f,  5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(1.0,1.0) },
      { D3DXVECTOR3( 5.0f,  5.0f,  5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(0.0,1.0) },
      { D3DXVECTOR3( 5.0f,  5.0f,  5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(0.0,1.0) },
      { D3DXVECTOR3(-5.0f, -5.0f,  5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(1.0,1.0) },
      { D3DXVECTOR3( 5.0f, -5.0f,  5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(1.0,1.0) },
      // 2 FRONT SIDE
      { D3DXVECTOR3(-5.0f,  5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0) },
      { D3DXVECTOR3( 5.0f,  5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(1.0,0.0) },
      { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(0.0,1.0) },
      { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(0.0,1.0) },
      { D3DXVECTOR3( 5.0f,  5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(1.0,0.0) },
      { D3DXVECTOR3( 5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(1.0,1.0) },
      // 3
      { D3DXVECTOR3(-5.0f,  5.0f,  5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0) },
      { D3DXVECTOR3( 5.0f,  5.0f,  5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(1.0,0.0) },
      { D3DXVECTOR3(-5.0f,  5.0f, -5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,1.0) },
      { D3DXVECTOR3(-5.0f,  5.0f, -5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,2.0) },
      { D3DXVECTOR3( 5.0f,  5.0f,  5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(1.0,0.0) },
      { D3DXVECTOR3( 5.0f,  5.0f, -5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,1.0) },
      // 4
      { D3DXVECTOR3(-5.0f, -5.0f,  5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0) },
      { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(1.0,0.0) },
      { D3DXVECTOR3( 5.0f, -5.0f,  5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,1.0) },
      { D3DXVECTOR3( 5.0f, -5.0f,  5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,1.0) },
      { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(1.0,0.0) },
      { D3DXVECTOR3( 5.0f, -5.0f, -5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,1.0) },
      // 5
      { D3DXVECTOR3( 5.0f,  5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,0.0) },
      { D3DXVECTOR3( 5.0f,  5.0f,  5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(1.0,0.0) },
      { D3DXVECTOR3( 5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,1.0) },
      { D3DXVECTOR3( 5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,1.0) },
      { D3DXVECTOR3( 5.0f,  5.0f,  5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(1.0,0.0) },
      { D3DXVECTOR3( 5.0f, -5.0f,  5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,2.0) },
      // 6
      { D3DXVECTOR3(-5.0f,  5.0f, -5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0) },
      { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(1.0,0.0) },
      { D3DXVECTOR3(-5.0f,  5.0f,  5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,1.0) },
      { D3DXVECTOR3(-5.0f,  5.0f,  5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,1.0) },
      { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(1.0,0.0) },
      { D3DXVECTOR3(-5.0f, -5.0f,  5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,1.0) },
    };

    My second guess is an error that I am receiving as I run the program, but I don't know where to begin with that. The following is the description of the error:

    D3D10: WARNING: ID3D10Device::Draw: Vertex Buffer at the input vertex slot 0 is not big enough for what the Draw*() call expects to traverse. This is OK, as reading off the end of the Buffer is defined to return 0. However the developer probably did not intend to make use of this behavior. [ EXECUTION WARNING #356: DEVICE_DRAW_VERTEX_BUFFER_TOO_SMALL ]

    I'm not sure what it could be, but here is my vertex layout description:

    // Create layout
    D3D10_INPUT_ELEMENT_DESC layout[] = {
      {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D10_INPUT_PER_VERTEX_DATA, 0},
      {"COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0},
      {"NORMAL",   0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 28, D3D10_INPUT_PER_VERTEX_DATA, 0},
      {"TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,       0, 44, D3D10_INPUT_PER_VERTEX_DATA, 0}
    };
    UINT numElements = (sizeof(layout)/sizeof(layout[0]));
    modelObject.numVertices = sizeof(vertices)/sizeof(VertexPos);
    for(int i = 0; i < modelObject.numVertices; i += 3)
    {
      D3DXVECTOR3 out;
      D3DXVECTOR3 v1 = vertices[0 + i].pos;
      D3DXVECTOR3 v2 = vertices[1 + i].pos;
      D3DXVECTOR3 v3 = vertices[2 + i].pos;
      D3DXVECTOR3 u = v2 - v1;
      D3DXVECTOR3 v = v3 - v1;
      D3DXVec3Cross(&out, &u, &v);
      D3DXVec3Normalize(&out, &out);
      vertices[0 + i].normal = out;
      vertices[1 + i].normal = out;
      vertices[2 + i].normal = out;
    }
    // Create buffer desc
    D3D10_BUFFER_DESC bufferDesc;
    bufferDesc.Usage = D3D10_USAGE_DEFAULT;
    bufferDesc.ByteWidth = sizeof(VertexPos) * modelObject.numVertices;
    bufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER;
    bufferDesc.CPUAccessFlags = 0;
    bufferDesc.MiscFlags = 0;
    D3D10_SUBRESOURCE_DATA initData;
    initData.pSysMem = vertices;
    // Create the buffer
    HRESULT hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pVertexBuffer);
    if(FAILED(hr))
      return false;
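
    Two things stand out here. First, the input layout must describe the actual memory layout of VertexPos, and the brace initializers above ({ position, color, uv }) suggest the struct's members are not in the order and sizes the layout declares (NORMAL as four floats at offset 28, TEXCOORD at offset 44, i.e. a 52-byte vertex). If the struct actually looks like the sketch below (an assumption; its declaration isn't shown in the question), the stride is 48 bytes and the offsets differ, which would explain both the skewed texture and the "vertex buffer too small" warning:

    // A sketch, assuming this member order (position, color, uv, normal):
    struct VertexPos {
      D3DXVECTOR3 pos;    // offset  0, 12 bytes
      D3DXVECTOR4 color;  // offset 12, 16 bytes
      D3DXVECTOR2 uv;     // offset 28,  8 bytes
      D3DXVECTOR3 normal; // offset 36, 12 bytes -> stride = 48 bytes
    };

    // An input layout that mirrors that struct exactly:
    D3D10_INPUT_ELEMENT_DESC layout[] = {
      {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D10_INPUT_PER_VERTEX_DATA, 0},
      {"COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0},
      {"TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,       0, 28, D3D10_INPUT_PER_VERTEX_DATA, 0},
      {"NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 36, D3D10_INPUT_PER_VERTEX_DATA, 0},
    };

    Second, several UV coordinates in the vertex data look suspect regardless of the layout: values such as (0.0,2.0), and triangles whose vertices repeat the same UV (for example the second back-side triangle uses (0,1), (1,1), (1,1)), will stretch or collapse the texture on those faces.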

    Read the article

  • Does my basic PHP Socket Server need optimization?

    - by Tom
    Like many people, I can do a lot of things with PHP. One problem I face constantly is that other people can do it much cleaner, much more organized and much more structured. This also results in much faster execution times and far fewer bugs. I just finished writing a basic PHP socket server (the real core), and am asking you to tell me what I should do differently before I start expanding the core. I'm not asking about improvements such as encrypted data, authentication or multi-threading. I'm more wondering about questions like "should I maybe do it in a more object-oriented way (using PHP5)?", or "is the general structure of the way the script works good, or should some things be done differently?". Basically, "is this how the core of a socket server should work?" In fact, I think that if I just show you the code here, many of you will immediately see room for improvements. Please be so kind to tell me. Thanks!

    #!/usr/bin/php -q
    <?
    // config
    $timelimit = 180; // amount of seconds the server should run for, 0 = run indefinitely
    $address = $_SERVER['SERVER_ADDR']; // the server's external IP
    $port = 9000; // the port to listen on
    $backlog = SOMAXCONN; // the maximum of backlog incoming connections that will be queued for processing

    // configure custom PHP settings
    error_reporting(1); // report all errors
    ini_set('display_errors', 1); // display all errors
    set_time_limit($timelimit); // timeout after x seconds
    ob_implicit_flush(); // results in a flush operation after every output call

    // create master IPv4 based TCP socket
    if (!($master = socket_create(AF_INET, SOCK_STREAM, SOL_TCP)))
        die("Could not create master socket, error: ".socket_strerror(socket_last_error()));

    // set socket options (local addresses can be reused)
    if (!socket_set_option($master, SOL_SOCKET, SO_REUSEADDR, 1))
        die("Could not set socket options, error: ".socket_strerror(socket_last_error()));

    // bind to socket server
    if (!socket_bind($master, $address, $port))
        die("Could not bind to socket server, error: ".socket_strerror(socket_last_error()));

    // start listening
    if (!socket_listen($master, $backlog))
        die("Could not start listening to socket, error: ".socket_strerror(socket_last_error()));

    // display startup information
    echo "[".date('Y-m-d H:i:s')."] SERVER CREATED (MAXCONN: ".SOMAXCONN.").\n"; // max connections is a kernel variable and can be adjusted with sysctl
    echo "[".date('Y-m-d H:i:s')."] Listening on ".$address.":".$port.".\n";
    $time = time(); // set startup timestamp

    // init read sockets array
    $read_sockets = array($master);

    // continuously handle incoming socket messages, or close if time limit has been reached
    while ((!$timelimit) or (time() - $time < $timelimit)) {
        $changed_sockets = $read_sockets;
        socket_select($changed_sockets, $write = null, $except = null, null);
        foreach ($changed_sockets as $socket) {
            if ($socket == $master) {
                if (($client = socket_accept($master)) < 0) {
                    echo "[".date('Y-m-d H:i:s')."] Socket_accept() failed, error: ".socket_strerror(socket_last_error())."\n";
                    continue;
                } else {
                    array_push($read_sockets, $client);
                    echo "[".date('Y-m-d H:i:s')."] Client #".count($read_sockets)." connected (connections: ".count($read_sockets)."/".SOMAXCONN.")\n";
                }
            } else {
                $data = @socket_read($socket, 1024, PHP_NORMAL_READ); // read a maximum of 1024 bytes until a new line has been sent
                if ($data === false) { // the client disconnected
                    $index = array_search($socket, $read_sockets);
                    unset($read_sockets[$index]);
                    socket_close($socket);
                    echo "[".date('Y-m-d H:i:s')."] Client #".($index-1)." disconnected (connections: ".count($read_sockets)."/".SOMAXCONN.")\n";
                } else {
                    if ($data = trim($data)) { // remove whitespace and continue only if the message is not empty
                        switch ($data) {
                            case "exit": // close connection when exit command is given
                                $index = array_search($socket, $read_sockets);
                                unset($read_sockets[$index]);
                                socket_close($socket);
                                echo "[".date('Y-m-d H:i:s')."] Client #".($index-1)." disconnected (connections: ".count($read_sockets)."/".SOMAXCONN.")\n";
                                break;
                            default: // for experimental purposes, write the given data back
                                socket_write($socket, "\n you wrote: ".$data);
                        }
                    }
                }
            }
        }
    }
    socket_close($master); // close the socket
    echo "[".date('Y-m-d H:i:s')."] SERVER CLOSED.\n";
    ?>
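
    One small correctness note on the accept path: in PHP, socket_accept() returns false on failure, never a negative number, so the "($client = socket_accept($master)) < 0" test can never detect an error. A strict comparison is the usual fix (a sketch of just that branch):

    // socket_accept() returns false on failure, so test with ===.
    if (($client = socket_accept($master)) === false) {
        echo "[".date('Y-m-d H:i:s')."] socket_accept() failed, error: "
            .socket_strerror(socket_last_error($master))."\n";
        continue;
    }
    array_push($read_sockets, $client);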

    Read the article

  • Fastest way to move records from an Oracle database into SQL Server

    - by user347748
    Ok, this is the scenario... I have a table in Oracle that acts like a queue. A VB.net program reads the queue and calls a stored proc in SQL Server that processes and then inserts the message into another SQL Server table, and then deletes the record from the Oracle table. We use a DataReader to read the records from Oracle and then call the stored proc for each of the records. The program seems to be a little slow. The stored procedure itself isn't slow: the SP by itself, when called in a loop, can process about 2000 records in 20 seconds. But when called from the .Net program, the execution rate is about 5 records per second. I have seen that most of the time is consumed in calling the stored procedure and waiting for it to return. Is there a better way of doing this? Here is a snippet of the actual code:

    Function StartDataXfer() As Boolean
        Dim status As Boolean = False
        Try
            SqlConn.Open()
            OraConn.Open()
            c.ErrorLog(Now.ToString & "--Going to Get the messages from oracle", 1)
            If GetMsgsFromOracle() Then
                c.ErrorLog(Now.ToString & "--Got messages from oracle", 1)
                If ProcessMessages() Then
                    c.ErrorLog(Now.ToString & "--Finished Processing all messages in the queue", 0)
                    status = True
                Else
                    c.ErrorLog(Now.ToString & "--Failed to Process all messages in the queue", 0)
                    status = False
                End If
            Else
                status = True
            End If
            StartDataXfer = status
        Catch ex As Exception
        Finally
            SqlConn.Close()
            OraConn.Close()
        End Try
    End Function

    Private Function GetMsgsFromOracle() As Boolean
        Try
            OraDataAdapter = New OleDb.OleDbDataAdapter
            OraDataTable = New System.Data.DataTable
            OraSelCmd = New OleDb.OleDbCommand
            GetMsgsFromOracle = False
            With OraSelCmd
                .CommandType = CommandType.Text
                .Connection = OraConn
                .CommandText = GetMsgSql
            End With
            OraDataAdapter.SelectCommand = OraSelCmd
            OraDataAdapter.Fill(OraDataTable)
            If OraDataTable.Rows.Count > 0 Then
                GetMsgsFromOracle = True
            End If
        Catch ex As Exception
            GetMsgsFromOracle = False
        End Try
    End Function

    Private Function ProcessMessages() As Boolean
        Try
            ProcessMessages = False
            PrepareSQLInsert()
            PrepOraDel()
            i = 0
            Dim Method As Integer
            Dim OraDataRow As DataRow
            c.ErrorLog(Now.ToString & "--Going to call message sending procedure", 2)
            For Each OraDataRow In OraDataTable.Rows
                With OraDataRow
                    Method = GetMethod(.Item(0))
                    SQLInsCmd.Parameters("RelLifeTime").Value = c.RelLifetime
                    SQLInsCmd.Parameters("Param1").Value = Nothing
                    SQLInsCmd.Parameters("ID").Value = GenerateTransactionID() ' Nothing
                    SQLInsCmd.Parameters("UID").Value = Nothing
                    SQLInsCmd.Parameters("Param").Value = Nothing
                    SQLInsCmd.Parameters("Credit").Value = 0
                    SQLInsCmd.ExecuteNonQuery()
                    'check the return value
                    If SQLInsCmd.Parameters("ReturnValue").Value = 1 And SQLInsCmd.Parameters("OutPutParam").Value = 0 Then
                        'success
                        'delete the input record from the source table once it is logged
                        c.ErrorLog(Now.ToString & "--Moved record successfully", 2)
                        OraDataAdapter.DeleteCommand.Parameters("P(0)").Value = OraDataRow.Item(6)
                        OraDataAdapter.DeleteCommand.ExecuteNonQuery()
                        c.ErrorLog(Now.ToString & "--Deleted record successfully", 2)
                        OraDataAdapter.Update(OraDataTable)
                        c.ErrorLog(Now.ToString & "--Committed record successfully", 2)
                        i = i + 1
                    Else
                        'failure
                        c.ErrorLog(Now.ToString & "--Failed to exec: " & c.DestIns & "Status: " & SQLInsCmd.Parameters("OutPutParam").Value & " and TrackId: " & SQLInsCmd.Parameters("TrackID").Value.ToString, 0)
                    End If
                    If File.Exists("stop.txt") Then
                        c.ErrorLog(Now.ToString & "--Stop File Found", 1)
                        'ProcessMessages = True
                        'Exit Function
                        Exit For
                    End If
                End With
            Next
            OraDataAdapter.Update(OraDataTable)
            c.ErrorLog(Now.ToString & "--Updated Oracle Table", 1)
            c.ErrorLog(Now.ToString & "--Moved " & i & " records from Oracle to SQL Table", 1)
            ProcessMessages = True
        Catch ex As Exception
            ProcessMessages = False
            c.ErrorLog(Now.ToString & "--MoveMsgsToSQL: " & ex.Message, 0)
        Finally
            OraDataTable.Clear()
            OraDataTable.Dispose()
            OraDataAdapter.Dispose()
            OraDelCmd.Dispose()
            OraDelCmd = Nothing
            OraSelCmd = Nothing
            OraDataTable = Nothing
            OraDataAdapter = Nothing
        End Try
    End Function

    Public Function GenerateTransactionID() As Int64
        Dim SeqNo As Int64
        Dim qry As String
        Dim SqlTransCmd As New OleDb.OleDbCommand
        qry = " select seqno from StoreSeqNo"
        SqlTransCmd.CommandType = CommandType.Text
        SqlTransCmd.Connection = SqlConn
        SqlTransCmd.CommandText = qry
        SeqNo = SqlTransCmd.ExecuteScalar
        If SeqNo > 2147483647 Then
            qry = "update StoreSeqNo set seqno=1"
            SqlTransCmd.CommandText = qry
            SqlTransCmd.ExecuteNonQuery()
            GenerateTransactionID = 1
        Else
            qry = "update StoreSeqNo set seqno=" & SeqNo + 1
            SqlTransCmd.CommandText = qry
            SqlTransCmd.ExecuteNonQuery()
            GenerateTransactionID = SeqNo
        End If
    End Function

    Private Function PrepareSQLInsert() As Boolean
        'function to prepare the insert statement for the insert into the SQL stmt using
        'the sql procedure SMSProcessAndDispatch
        Try
            Dim dr As DataRow
            SQLInsCmd = New OleDb.OleDbCommand
            With SQLInsCmd
                .CommandType = CommandType.StoredProcedure
                .Connection = SqlConn
                .CommandText = SQLInsProc
                .Parameters.Add("ReturnValue", OleDb.OleDbType.Integer)
                .Parameters("ReturnValue").Direction = ParameterDirection.ReturnValue
                .Parameters.Add("OutPutParam", OleDb.OleDbType.Integer)
                .Parameters("OutPutParam").Direction = ParameterDirection.Output
                .Parameters.Add("TrackID", OleDb.OleDbType.VarChar, 70)
                .Parameters.Add("RelLifeTime", OleDb.OleDbType.TinyInt)
                .Parameters("RelLifeTime").Direction = ParameterDirection.Input
                .Parameters.Add("Param1", OleDb.OleDbType.VarChar, 160)
                .Parameters("Param1").Direction = ParameterDirection.Input
                .Parameters.Add("TransID", OleDb.OleDbType.VarChar, 70)
                .Parameters("TransID").Direction = ParameterDirection.Input
                .Parameters.Add("UID", OleDb.OleDbType.VarChar, 20)
                .Parameters("UID").Direction = ParameterDirection.Input
                .Parameters.Add("Param", OleDb.OleDbType.VarChar, 160)
                .Parameters("Param").Direction = ParameterDirection.Input
                .Parameters.Add("CheckCredit", OleDb.OleDbType.Integer)
                .Parameters("CheckCredit").Direction = ParameterDirection.Input
                .Prepare()
            End With
        Catch ex As Exception
            c.ErrorLog(Now.ToString & "--PrepareSQLInsert: " & ex.Message)
        End Try
    End Function

    Private Function PrepOraDel() As Boolean
        OraDelCmd = New OleDb.OleDbCommand
        Try
            PrepOraDel = False
            With OraDelCmd
                .CommandType = CommandType.Text
                .Connection = OraConn
                .CommandText = DelSrcSQL
                .Parameters.Add("P(0)", OleDb.OleDbType.VarChar, 160) 'RowID
                .Parameters("P(0)").Direction = ParameterDirection.Input
                .Prepare()
            End With
            OraDataAdapter.DeleteCommand = OraDelCmd
            PrepOraDel = True
        Catch ex As Exception
            PrepOraDel = False
        End Try
    End Function

    What I would like to know is: is there any way to speed up this program? Any ideas/suggestions would be highly appreciated... Regards, Chetan
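
    One pattern in ProcessMessages() that would explain much of the 5-records-per-second rate: each successful row triggers three synchronous round trips (the stored procedure, the Oracle delete, and a full OraDataAdapter.Update(OraDataTable)), and GenerateTransactionID() adds two more SQL Server round trips per row. A first, low-risk change is to stop calling the adapter's Update() inside the loop, since the delete has already been executed explicitly, and reconcile once at the end, which the code already does. A sketch of the trimmed loop body (same names as in the question; parameter setup elided):

    For Each OraDataRow In OraDataTable.Rows
        ' ... set SQLInsCmd parameters as in the original code ...
        SQLInsCmd.ExecuteNonQuery()
        If SQLInsCmd.Parameters("ReturnValue").Value = 1 And SQLInsCmd.Parameters("OutPutParam").Value = 0 Then
            OraDataAdapter.DeleteCommand.Parameters("P(0)").Value = OraDataRow.Item(6)
            OraDataAdapter.DeleteCommand.ExecuteNonQuery()
            ' Note: no OraDataAdapter.Update(OraDataTable) here; the per-row
            ' full-table Update is likely the most expensive call in the loop.
            i = i + 1
        End If
    Next
    OraDataAdapter.Update(OraDataTable) ' reconcile once, after the loop

    Beyond that, batching the transaction-ID allocation (for example, reserving a block of sequence numbers per run instead of issuing two statements per row) would remove two more round trips per record.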

    Read the article

  • Multi-threading does not work correctly using std::thread (C++11)

    - by user1364743
    I coded a small C++ program to try to understand how multi-threading works using std::thread. Here are the steps of my program's execution:

    1. Initialization of a 5x5 matrix of integers with a unique value '42', contained in the class 'Toto' (initialized in the main).
    2. I print the initialized 5x5 matrix.
    3. Declaration of a std::vector of 5 threads.
    4. I attach each thread to its task (the threadTask method). Each thread will manipulate a std::vector<int> instance.
    5. I join all threads.
    6. I print the new state of my 5x5 matrix.

    Here's the output:

    42 42 42 42 42
    42 42 42 42 42
    42 42 42 42 42
    42 42 42 42 42
    42 42 42 42 42
    42 42 42 42 42
    42 42 42 42 42
    42 42 42 42 42
    42 42 42 42 42
    42 42 42 42 42

    It should be:

    42 42 42 42 42
    42 42 42 42 42
    42 42 42 42 42
    42 42 42 42 42
    42 42 42 42 42
    0 0 0 0 0
    1 1 1 1 1
    2 2 2 2 2
    3 3 3 3 3
    4 4 4 4 4

    Here's the code sample:

    #include <iostream>
    #include <vector>
    #include <thread>

    class Toto {
    public:
      /*
      ** Initialize a 5x5 matrix with the 42 value.
      */
      void initData(void) {
        for (int y = 0; y < 5; y++) {
          std::vector<int> vec;
          for (int x = 0; x < 5; x++) {
            vec.push_back(42);
          }
          this->m_data.push_back(vec);
        }
      }

      /*
      ** Display the whole matrix.
      */
      void printData(void) const {
        for (int y = 0; y < 5; y++) {
          for (int x = 0; x < 5; x++) {
            printf("%d ", this->m_data[y][x]);
          }
          printf("\n");
        }
        printf("\n");
      }

      /*
      ** Function attached to the thread (thread task).
      ** Replace the original '42' value by another one.
      */
      void threadTask(std::vector<int> &list, int value) {
        for (int x = 0; x < 5; x++) {
          list[x] = value;
        }
      }

      /*
      ** Return the m_data instance property.
      */
      std::vector<std::vector<int> > &getData(void) {
        return (this->m_data);
      }

    private:
      std::vector<std::vector<int> > m_data;
    };

    int main(void) {
      Toto toto;
      toto.initData();
      toto.printData(); // Display the original 5x5 matrix (first display).
      std::vector<std::thread> threadList(5); // Initialization of vector of 5 threads.
      for (int i = 0; i < 5; i++) { // Threads initializations
        std::vector<int> vec = toto.getData()[i]; // Get each sub-vector.
        threadList.at(i) = std::thread(&Toto::threadTask, toto, vec, i); // Each thread will be attached to a specific vector.
      }
      for (int j = 0; j < 5; j++) {
        threadList.at(j).join();
      }
      toto.printData(); // Second display.
      getchar();
      return (0);
    }

    However, in the threadTask method, if I print the variable list[x], the output is correct. I think I can't print the correct data in main because the printData() call is in the main thread, while the display in the threadTask function is correct because the method is executed in its own thread (not the main one). It's strange; does it mean that threads created in a parent process can't modify the data in that parent process? I think I forgot something in my code. I'm really lost. Can anyone help me, please? Thanks a lot in advance for your help.
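
    The behaviour comes from std::thread's argument passing: std::thread copies its arguments (including the Toto object, when it is passed by value), so threadTask fills in members of those copies, not the matrix printed in main. The local "std::vector<int> vec = toto.getData()[i];" is a second copy for the same reason. Passing a pointer to toto and wrapping the row in std::ref makes the threads write to the original data (a sketch of the corrected loop):

    // std::thread decay-copies its arguments; std::ref passes a real reference.
    for (int i = 0; i < 5; i++) {
      threadList.at(i) = std::thread(&Toto::threadTask, &toto,
                                     std::ref(toto.getData()[i]), i);
    }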

    Read the article

  • Qt: crash due to delete (trying to handle exceptions...)

    - by Seub
    I am writing a program with Qt, and I would like it to show a dialog box with an Exit | Restart choice whenever an error is thrown somewhere in the code. What I did causes a crash, and I really can't figure out why it happens; I was hoping you could help me understand what's going on. Here's my main.cpp:

    #include "my_application.hpp"

    int main(int argc, char *argv[]) {
      std::cout << std::endl;
      My_Application app(argc, argv);
      return app.exec();
    }

    And here's my_application.hpp:

    #ifndef MY_APPLICATION_HPP
    #define MY_APPLICATION_HPP

    #include <QApplication>

    class Window;

    class My_Application : public QApplication {
    public:
      My_Application(int& argc, char ** argv);
      virtual ~My_Application();
      virtual bool notify(QObject * receiver, QEvent * event);

    private:
      Window *window_;
      void exit();
      void restart();
    };

    #endif // MY_APPLICATION_HPP

    Finally, here's my_application.cpp:

    #include "my_application.hpp"
    #include "window.hpp"
    #include <QMessageBox>

    My_Application::My_Application(int& argc, char ** argv) : QApplication(argc, argv) {
      window_ = new Window;
      window_->setAttribute(Qt::WA_DeleteOnClose, false);
      window_->show();
    }

    My_Application::~My_Application() {
      delete window_;
    }

    bool My_Application::notify(QObject * receiver, QEvent * event) {
      try {
        return QApplication::notify(receiver, event);
      }
      catch (QString error_message) {
        window_->setEnabled(false);
        QMessageBox message_box;
        message_box.setWindowTitle("Error");
        message_box.setIcon(QMessageBox::Critical);
        message_box.setText("The program caught an unexpected error:");
        message_box.setInformativeText("What do you want to do? <br>");
        QPushButton *restart_button = message_box.addButton(tr("Restart"), QMessageBox::RejectRole);
        QPushButton *exit_button = message_box.addButton(tr("Exit"), QMessageBox::RejectRole);
        message_box.setDefaultButton(restart_button);
        message_box.exec();
        if ((QPushButton *) message_box.clickedButton() == exit_button) {
          exit();
        }
        else if ((QPushButton *) message_box.clickedButton() == restart_button) {
          restart();
        }
      }
      return false;
    }

    void My_Application::exit() {
      window_->close();
      //delete window_;
      return;
    }

    void My_Application::restart() {
      window_->close();
      //delete window_;
      window_ = new Window;
      window_->show();
      return;
    }

    Note that the line window_->setAttribute(Qt::WA_DeleteOnClose, false); means that window_ (my main window) won't be deleted when it is closed. The code I've written above works, but as far as I understand, there's a memory leak: I should uncomment the line //delete window_; in My_Application::exit() and My_Application::restart(). But when I do that, the program crashes when I click Restart (or Exit, but who cares). I'm not sure this is useful, in fact it might be misleading, but here's what my debugger tells me: a segmentation fault occurs in QWidgetPrivate::PaintOnScreen() const, which is called by a function called by a function... called by My_Application::notify(). When I do some std::couts, I notice that the program runs through the entire restart() function, and in fact through the entire notify() function, before it crashes. I have no idea why it crashes. Thanks in advance for your insights!

    Update: I've noticed that My_Application::notify() is called very often. For example, it is called a bunch of times while the error dialog box is open, and also during the execution of the restart function. The crash actually occurs in the subfunction QApplication::notify(receiver, event). This is not too surprising in light of the previous remark (the receiver has probably been deleted). But even if I forbid My_Application::notify() to do anything while restart() is executed, it still crashes (after having called My_Application::notify() a bunch of times, like 15 times; isn't that weird?). How should I proceed? Maybe I should say (to make the question slightly more relevant) that my class My_Application also has a "restore" function, which I've not copied here to try to keep things short. If I just had that restart feature I wouldn't bother too much, but I do want to have that restore feature. I should also say that if I keep the code with the "delete window_" commented out, the problem is not only a memory leak: it still crashes sometimes, apparently. There must surely be a way to fix this! But I'm clueless, and I'd really appreciate some help! Thanks in advance.
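
    A likely culprit: restart() deletes window_ from inside notify(), i.e. in the middle of event delivery, while Qt may still have pending events (paint events in particular, which matches the QWidgetPrivate backtrace) addressed to that widget. The idiomatic fix is QObject::deleteLater(), which queues the deletion until control returns to the event loop, so no handler is left running on a destroyed object (a sketch):

    void My_Application::restart() {
      window_->close();
      window_->deleteLater(); // deferred: runs once the event loop regains control
      window_ = new Window;
      window_->show();
    }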

    Read the article

  • IIS 7 Authentication: Certain users can't authenticate, while almost all others can.

    - by user35335
    I'm using IIS 7 Digest authentication to control access to a certain directory containing files. Users access the files through a department website, from inside our network and outside. I've set NTFS permissions on the directory to allow a certain AD group to view the files. When I click a link to one of those files on the website, I get prompted for a username and password. With most users everything works fine, but a few of them are prompted for a password 3 times and then get:

    401 - Unauthorized: Access is denied due to invalid credentials.

    Other users in the same group can get in without a problem. If I switch over to Windows Authentication, the trouble users can log in fine. That directory is also shared, and users who can't log in through the website are able to browse to the share and view files in it, so I know that the permissions are OK. Here's the portion of the IIS log where I tried to download the file (/assets/files/secure/WWGNL.pdf):

    2010-02-19 19:47:20 xxx.xxx.xxx.xxx GET /assets/images/bullet.gif - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 218
    2010-02-19 19:47:20 xxx.xxx.xxx.xxx GET /assets/images/bgOFF.gif - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 218
    2010-02-19 19:47:21 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 2 5 0
    2010-02-19 19:47:36 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 1 2148074252 0
    2010-02-19 19:47:43 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 1 2148074252 15
    2010-02-19 19:47:46 xxx.xxx.xxx.xxx GET /manager/media/script/_session.gif 0.19665693119168282 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 203
    2010-02-19 19:47:46 xxx.xxx.xxx.xxx POST /manager/index.php - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 296
    2010-02-19 19:47:56 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 1 2148074252 15
    2010-02-19 19:47:59 xxx.xxx.xxx.xxx GET /favicon.ico - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 404 0 2 0

    Here's the failed logon attempt in the Security log:

    Log Name:      Security
    Source:        Microsoft-Windows-Security-Auditing
    Date:          2/19/2010 11:47:43 AM
    Event ID:      4625
    Task Category: Logon
    Level:         Information
    Keywords:      Audit Failure
    User:          N/A
    Computer:      WEB4.net.domain.org
    Description:   An account failed to log on.
    Subject:
      Security ID: NULL SID
      Account Name: -
      Account Domain: -
      Logon ID: 0x0
    Logon Type: 3
    Account For Which Logon Failed:
      Security ID: NULL SID
      Account Name: jim.lastname
      Account Domain: net.domain.org
    Failure Information:
      Failure Reason: Unknown user name or bad password.
      Status: 0xc000006d
      Sub Status: 0xc000006a
    Process Information:
      Caller Process ID: 0x0
      Caller Process Name: -
    Network Information:
      Workstation Name: -
      Source Network Address: 10.5.16.138
      Source Port: 50065
    Detailed Authentication Information:
      Logon Process: WDIGEST
      Authentication Package: WDigest
      Transited Services: -
      Package Name (NTLM only): -
      Key Length: 0

    This event is generated when a logon request fails. It is generated on the computer where access was attempted. The Subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe. The Logon Type field indicates the kind of logon that was requested. The most common types are 2 (interactive) and 3 (network). The Process Information fields indicate which account and process on the system requested the logon. The Network Information fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases. The authentication information fields provide detailed information about this specific logon request. Transited services indicate which intermediate services have participated in this logon request. Package name indicates which sub-protocol was used among the NTLM protocols. Key length indicates the length of the generated session key. This will be 0 if no session key was requested.

    Event XML:

    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{54849625-5478-4994-a5ba-3e3b0328c30d}" />
        <EventID>4625</EventID>
        <Version>0</Version>
        <Level>0</Level>
        <Task>12544</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8010000000000000</Keywords>
        <TimeCreated SystemTime="2010-02-19T19:47:43.890Z" />
        <EventRecordID>2276316</EventRecordID>
        <Correlation />
        <Execution ProcessID="612" ThreadID="692" />
        <Channel>Security</Channel>
        <Computer>WEB4.net.domain.org</Computer>
        <Security />
      </System>
      <EventData>
        <Data Name="SubjectUserSid">S-1-0-0</Data>
        <Data Name="SubjectUserName">-</Data>
        <Data Name="SubjectDomainName">-</Data>
        <Data Name="SubjectLogonId">0x0</Data>
        <Data Name="TargetUserSid">S-1-0-0</Data>
        <Data Name="TargetUserName">jim.lastname</Data>
        <Data Name="TargetDomainName">net.domain.org</Data>
        <Data Name="Status">0xc000006d</Data>
        <Data Name="FailureReason">%%2313</Data>
        <Data Name="SubStatus">0xc000006a</Data>
        <Data Name="LogonType">3</Data>
        <Data Name="LogonProcessName">WDIGEST</Data>
        <Data Name="AuthenticationPackageName">WDigest</Data>
        <Data Name="WorkstationName">-</Data>
        <Data Name="TransmittedServices">-</Data>
        <Data Name="LmPackageName">-</Data>
        <Data Name="KeyLength">0</Data>
        <Data Name="ProcessId">0x0</Data>
        <Data Name="ProcessName">-</Data>
        <Data Name="IpAddress">10.5.16.138</Data>
        <Data Name="IpPort">50065</Data>
      </EventData>
    </Event>
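
    Given that the same users authenticate fine the moment the site is switched to Windows Authentication, one pragmatic workaround (a sketch of configuration, not a root-cause fix for the WDigest failure) is to turn Digest off and Windows Authentication on for just this directory in its web.config. Note that both authentication sections are typically locked at the server level, so they may need to be unlocked in applicationHost.config first:

    <configuration>
      <system.webServer>
        <security>
          <authentication>
            <digestAuthentication enabled="false" />
            <windowsAuthentication enabled="true" />
          </authentication>
        </security>
      </system.webServer>
    </configuration>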

    Read the article

  • Nagios NRPE: Unable to read output

    - by user555854
    I currently set up a script to restart my http servers + php5 fpm but can't get it to work. I have googled and have found that mostly permissions are the problems of my error but can't figure it out. I start my script using /usr/lib/nagios/plugins/check_nrpe -H bart -c restart_http This is the output in my syslog on the node I want to restart Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 port 25028 Jun 27 06:29:35 bart nrpe[8926]: Host address is in allowed_hosts Jun 27 06:29:35 bart nrpe[8926]: Handling the connection... Jun 27 06:29:35 bart nrpe[8926]: Host is asking for command 'restart_http' to be run... Jun 27 06:29:35 bart nrpe[8926]: Running command: /usr/bin/sudo /usr/lib/nagios/plugins/http-restart Jun 27 06:29:35 bart nrpe[8926]: Command completed with return code 1 and output: Jun 27 06:29:35 bart nrpe[8926]: Return Code: 1, Output: NRPE: Unable to read output Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 closed. If I run the command myself it runs fine (but asks for a password) (nagios user) This are the script permission and the script contents. -rwxrwxrwx 1 nagios nagios 142 Jun 26 21:41 /usr/lib/nagios/plugins/http-restart #!/bin/bash echo "ok" /etc/init.d/nginx stop /etc/init.d/nginx start /etc/init.d/php5-fpm stop /etc/init.d/php5-fpm start echo "done" I also added this line to visudo nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ My local nagios nrpe.cfg ############################################################################# # Sample NRPE Config File # Written by: Ethan Galstad ([email protected]) # # # NOTES: # This is a sample configuration file for the NRPE daemon. It needs to be # located on the remote host that is running the NRPE daemon, not the host # from which the check_nrpe client is being executed. ############################################################################# # LOG FACILITY # The syslog facility that should be used for logging purposes. log_facility=daemon # PID FILE # The name of the file in which the NRPE daemon should write it's process ID # number. The file is only written if the NRPE daemon is started by the root # user and is running in standalone mode. pid_file=/var/run/nagios/nrpe.pid # PORT NUMBER # Port number we should wait for connections on. # NOTE: This must be a non-priviledged port (i.e. > 1024). # NOTE: This option is ignored if NRPE is running under either inetd or xinetd server_port=5666 # SERVER ADDRESS # Address that nrpe should bind to in case there are more than one interface # and you do not want nrpe to bind on all interfaces. # NOTE: This option is ignored if NRPE is running under either inetd or xinetd #server_address=127.0.0.1 # NRPE USER # This determines the effective user that the NRPE daemon should run as. # You can either supply a username or a UID. # # NOTE: This option is ignored if NRPE is running under either inetd or xinetd nrpe_user=nagios # NRPE GROUP # This determines the effective group that the NRPE daemon should run as. # You can either supply a group name or a GID. # # NOTE: This option is ignored if NRPE is running under either inetd or xinetd nrpe_group=nagios # ALLOWED HOST ADDRESSES # This is an optional comma-delimited list of IP address or hostnames # that are allowed to talk to the NRPE daemon. # # Note: The daemon only does rudimentary checking of the client's IP # address. I would highly recommend adding entries in your /etc/hosts.allow # file to allow only the specified host to connect to the port # you are running this daemon on. 
#
# NOTE: This option is ignored if NRPE is running under either inetd or xinetd

allowed_hosts=127.0.0.1,192.168.133.17

# COMMAND ARGUMENT PROCESSING
# This option determines whether or not the NRPE daemon will allow clients
# to specify arguments to commands that are executed. This option only works
# if the daemon was configured with the --enable-command-args configure script
# option.
#
# *** ENABLING THIS OPTION IS A SECURITY RISK! ***
# Read the SECURITY file for information on some of the security implications
# of enabling this variable.
#
# Values: 0=do not allow arguments, 1=allow command arguments

dont_blame_nrpe=0

# COMMAND PREFIX
# This option allows you to prefix all commands with a user-defined string.
# A space is automatically added between the specified prefix string and the
# command line from the command definition.
#
# *** THIS EXAMPLE MAY POSE A POTENTIAL SECURITY RISK, SO USE WITH CAUTION! ***
# Usage scenario:
# Execute restricted commands using sudo. For this to work, you need to add
# the nagios user to your /etc/sudoers. An example entry for allowing
# execution of the plugins might be:
#
# nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/
#
# This lets the nagios user run all commands in that directory (and only them)
# without asking for a password. If you do this, make sure you don't give
# random users write access to that directory or its contents!

command_prefix=/usr/bin/sudo

# DEBUGGING OPTION
# This option determines whether or not debugging messages are logged to the
# syslog facility.
# Values: 0=debugging off, 1=debugging on

debug=1

# COMMAND TIMEOUT
# This specifies the maximum number of seconds that the NRPE daemon will
# allow plugins to finish executing before killing them off.

command_timeout=60

# CONNECTION TIMEOUT
# This specifies the maximum number of seconds that the NRPE daemon will
# wait for a connection to be established before exiting. This is sometimes
# seen where a network problem stops the SSL being established even though
# all network sessions are connected. This causes the nrpe daemons to
# accumulate, eating system resources. Do not set this too low.

connection_timeout=300

# WEAK RANDOM SEED OPTION
# This directive allows you to use SSL even if your system does not have
# a /dev/random or /dev/urandom (on purpose or because the necessary patches
# were not applied). The random number generator will be seeded from a file
# which is either a file pointed to by the environment variable $RANDFILE
# or $HOME/.rnd. If neither exists, the pseudo random number generator will
# be initialized and a warning will be issued.
# Values: 0=only seed from /dev/[u]random, 1=also seed from weak randomness

#allow_weak_random_seed=1

# INCLUDE CONFIG FILE
# This directive allows you to include definitions from an external config file.

#include=<somefile.cfg>

# INCLUDE CONFIG DIRECTORY
# This directive allows you to include definitions from config files (with a
# .cfg extension) in one or more directories (with recursion).

#include_dir=<somedirectory>
#include_dir=<someotherdirectory>

# COMMAND DEFINITIONS
# Command definitions that this daemon will run. Definitions
# are in the following format:
#
# command[<command_name>]=<command_line>
#
# When the daemon receives a request to return the results of <command_name>
# it will execute the command specified by the <command_line> argument.
#
# Unlike Nagios, the command line cannot contain macros - it must be
# typed exactly as it should be executed.
#
# Note: Any plugins that are used in the command lines must reside
# on the machine that this daemon is running on! The examples below
# assume that you have plugins installed in a /usr/local/nagios/libexec
# directory. Also note that you will have to modify the definitions below
# to match the argument format the plugins expect. Remember, these are
# examples only!

# The following examples use hardcoded command arguments...

command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1
command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200

# The following examples allow user-supplied arguments and can
# only be used if the NRPE daemon was compiled with support for
# command arguments *AND* the dont_blame_nrpe directive in this
# config file is set to '1'. This poses a potential security risk, so
# make sure you read the SECURITY file before doing this.

#command[check_users]=/usr/lib/nagios/plugins/check_users -w $ARG1$ -c $ARG2$
#command[check_load]=/usr/lib/nagios/plugins/check_load -w $ARG1$ -c $ARG2$
#command[check_disk]=/usr/lib/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$
#command[check_procs]=/usr/lib/nagios/plugins/check_procs -w $ARG1$ -c $ARG2$ -s $ARG3$

command[restart_http]=/usr/lib/nagios/plugins/http-restart

#
# local configuration:
# if you'd prefer, you can instead place directives here
include=/etc/nagios/nrpe_local.cfg

#
# you can place your config snippets into nrpe.d/
include_dir=/etc/nagios/nrpe.d/

My sudoers file:

# /etc/sudoers
#
# This file MUST be edited with the 'visudo' command as root.
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset

# Host alias specification

# User alias specification

# Cmnd alias specification

# User privilege specification
root ALL=(ALL) ALL
nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/

# Allow members of group sudo to execute any command
# (Note that later entries override this, so you might need to move
# it further down)
%sudo ALL=(ALL) ALL
#
#includedir /etc/sudoers.d

Hopefully someone can help!
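One detail worth checking in the sudoers shown above (a suggestion, not a confirmed diagnosis): the nagios NOPASSWD rule sits above the %sudo rule, and since later matching entries override earlier ones, the NOPASSWD flag is lost if the nagios user happens to be a member of the sudo group. A quick way to verify on the NRPE host, assuming a root shell:

# List the sudo rules that actually apply to the nagios user
sudo -l -U nagios

# Re-run the plugin exactly as NRPE does; if this prompts for a
# password, NRPE only sees empty output ("NRPE: Unable to read output")
su -s /bin/bash nagios -c '/usr/bin/sudo /usr/lib/nagios/plugins/http-restart'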

    Read the article

  • Failing SATA HDD

    - by DaveCol
    I think my HDD is fried... Could someone confirm or help me restore it? I was using Hardware RAID 1 Configuration [2 x 160GB SATA HDD] on a CentOS 4 Installation. All of a sudden I started seeing bad sectors on the second HDD which stopped being mirrored. I have removed the RAID array and have tested with SMART which showed the following error: 187 Unknown_Attribute 0x003a 001 001 051 Old_age Always FAILING_NOW 4645 I have no clue what this means, or if I can recover from it. Could someone give me some ideas on how to fix this, or what HDD to get to replace this? Complete SMART report: Smartctl version 5.33 [i686-redhat-linux-gnu] Copyright (C) 2002-4 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF INFORMATION SECTION === Device Model: GB0160CAABV Serial Number: 6RX58NAA Firmware Version: HPG1 User Capacity: 160,041,885,696 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 7 ATA Standard is: ATA/ATAPI-7 T13 1532D revision 4a Local Time is: Tue Oct 19 13:42:42 2010 COT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED See vendor-specific Attribute list for marginal Attributes. General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 433) seconds. Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 54) minutes. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 100 253 006 Pre-fail Always - 0 3 Spin_Up_Time 0x0002 097 097 000 Old_age Always - 0 4 Start_Stop_Count 0x0033 100 100 020 Pre-fail Always - 152 5 Reallocated_Sector_Ct 0x0033 095 095 036 Pre-fail Always - 214 7 Seek_Error_Rate 0x000f 078 060 030 Pre-fail Always - 73109713 9 Power_On_Hours 0x0032 083 083 000 Old_age Always - 15133 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0033 100 100 020 Pre-fail Always - 154 184 Unknown_Attribute 0x0032 038 038 000 Old_age Always - 62 187 Unknown_Attribute 0x003a 001 001 051 Old_age Always FAILING_NOW 4645 189 Unknown_Attribute 0x0022 100 100 000 Old_age Always - 0 190 Unknown_Attribute 0x001a 061 055 000 Old_age Always - 656408615 194 Temperature_Celsius 0x0000 039 045 000 Old_age Offline - 39 (Lifetime Min/Max 0/22) 195 Hardware_ECC_Recovered 0x0032 070 059 000 Old_age Always - 12605265 197 Current_Pending_Sector 0x0000 100 100 000 Old_age Offline - 1 198 Offline_Uncorrectable 0x0000 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0000 200 200 000 Old_age Offline - 62 SMART Error Log Version: 1 ATA Error Count: 4645 (device log contains only the most recent five errors) CR = Command Register [HEX] FR = Features Register [HEX] SC = Sector Count Register [HEX] SN = Sector Number Register [HEX] CL = Cylinder Low Register [HEX] CH = Cylinder High Register [HEX] DH = Device/Head Register [HEX] DC = Device Command Register [HEX] ER = Error register [HEX] ST = Status register [HEX] Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days. Error 4645 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 7b 86 b1 ea Error: UNC at LBA = 0x0ab1867b = 179406459 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 02 7b 86 b1 ea 00 00:38:52.796 READ DMA ec 03 45 00 00 00 a0 00 00:38:52.796 IDENTIFY DEVICE ef 03 45 00 00 00 a0 00 00:38:52.794 SET FEATURES [Set transfer mode] ec 00 00 7b 86 b1 a0 00 00:38:49.991 IDENTIFY DEVICE c8 00 04 79 86 b1 ea 00 00:38:49.935 READ DMA Error 4644 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 7b 86 b1 ea Error: UNC at LBA = 0x0ab1867b = 179406459 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 04 79 86 b1 ea 00 00:38:41.517 READ DMA ec 03 45 00 00 00 a0 00 00:38:41.515 IDENTIFY DEVICE ef 03 45 00 00 00 a0 00 00:38:41.515 SET FEATURES [Set transfer mode] ec 00 00 7b 86 b1 a0 00 00:38:49.991 IDENTIFY DEVICE c8 00 06 77 86 b1 ea 00 00:38:49.935 READ DMA Error 4643 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 7b 86 b1 ea Error: UNC at LBA = 0x0ab1867b = 179406459 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 06 77 86 b1 ea 00 00:38:41.517 READ DMA ec 03 45 00 00 00 a0 00 00:38:41.515 IDENTIFY DEVICE ef 03 45 00 00 00 a0 00 00:38:41.515 SET FEATURES [Set transfer mode] ec 00 00 7b 86 b1 a0 00 00:38:41.513 IDENTIFY DEVICE c8 00 06 77 86 b1 ea 00 00:38:38.706 READ DMA Error 4642 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 7b 86 b1 ea Error: UNC at LBA = 0x0ab1867b = 179406459 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 06 77 86 b1 ea 00 00:38:41.517 READ DMA ec 03 45 00 00 00 a0 00 00:38:41.515 IDENTIFY DEVICE ef 03 45 00 00 00 a0 00 00:38:41.515 SET FEATURES [Set transfer mode] ec 00 00 7b 86 b1 a0 00 00:38:41.513 IDENTIFY DEVICE c8 00 06 77 86 b1 ea 00 00:38:38.706 READ DMA Error 4641 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 7b 86 b1 ea Error: UNC at LBA = 0x0ab1867b = 179406459 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 06 77 86 b1 ea 00 00:38:41.517 READ DMA ec 03 45 00 00 00 a0 00 00:38:41.515 IDENTIFY DEVICE ef 03 45 00 00 00 a0 00 00:38:41.515 SET FEATURES [Set transfer mode] ec 00 00 7b 86 b1 a0 00 00:38:41.513 IDENTIFY DEVICE c8 00 06 77 86 b1 ea 00 00:38:38.706 READ DMA SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed without error 00% 15131 - # 2 Short offline Completed without error 00% 15131 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay.
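For anyone decoding a report like this: attribute 187 is, on many drives (notably Seagate-built ones, which this HP-branded model appears to be), Reported_Uncorrect, i.e. read errors the drive could not correct, so a raw value of 4645 together with the FAILING_NOW flag is generally a replace-the-drive signal. A short follow-up sequence, assuming smartmontools is available and that the drive is /dev/sda (an assumption to adapt):

# Full surface self-test; roughly the 54 minutes quoted above
smartctl -t long /dev/sda

# Review the self-test log once it finishes
smartctl -l selftest /dev/sda

# Re-check the attributes that best track failing media:
# 5 (reallocated), 187 (reported uncorrectable), 197/198 (pending/offline)
smartctl -A /dev/sda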

    Read the article

  • Hard Disk DRDY error: is it a crash

    - by pranjal
I am using an IBM Thinkpad (1.7 GHz, 512 MB RAM) with Linux Mint 9 installed. I have two partitions in addition to root. One of the partitions became read-only yesterday, after which I rebooted my system. It is now extremely slow, along with a DRDY error: has my hard disk crashed? Error log while booting: Differences between boot sector and its backup. failed command : READ DMA BMDMA : stat 0X25 ata 1.00 : status : { DRDY ERR } ata 1.00 : status :{ UNC } Buffer I/O error on logical device, logical block 65467 smartctl output for the partition: mint mint # smartctl -a /dev/sda1 smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF INFORMATION SECTION === Device Model: TOSHIBA MK4026GAX RoHS Serial Number: X5LY1623T Firmware Version: PA107E User Capacity: 40,007,761,920 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 6 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Thu Feb 17 06:48:25 2011 UTC SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x84) Offline data collection activity was suspended by an interrupting command from host. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 153) seconds. Offline data collection capabilities: (0x1b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. No Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. No General Purpose Logging support. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 30) minutes.
SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0 2 Throughput_Performance 0x0005 100 100 050 Pre-fail Offline - 0 3 Spin_Up_Time 0x0027 100 100 001 Pre-fail Always - 310 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 3968 5 Reallocated_Sector_Ct 0x0033 100 100 050 Pre-fail Always - 40 7 Seek_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0 8 Seek_Time_Performance 0x0005 100 100 050 Pre-fail Offline - 0 9 Power_On_Hours 0x0032 082 082 000 Old_age Always - 7257 10 Spin_Retry_Count 0x0033 179 100 030 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 3484 192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 489 193 Load_Cycle_Count 0x0032 064 064 000 Old_age Always - 367150 194 Temperature_Celsius 0x0022 100 100 000 Old_age Always - 36 (Lifetime Min/Max 14/57) 196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 33 197 Current_Pending_Sector 0x0032 100 100 000 Old_age Always - 82 198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 1 199 UDMA_CRC_Error_Count 0x0032 200 253 000 Old_age Always - 0 220 Disk_Shift 0x0002 100 100 000 Old_age Always - 101 222 Loaded_Hours 0x0032 085 085 000 Old_age Always - 6146 223 Load_Retry_Count 0x0032 100 100 000 Old_age Always - 0 224 Load_Friction 0x0022 100 100 000 Old_age Always - 0 226 Load-in_Time 0x0026 100 100 000 Old_age Always - 227 240 Head_Flying_Hours 0x0001 100 100 001 Pre-fail Offline - 0 SMART Error Log Version: 1 ATA Error Count: 2371 (device log contains only the most recent five errors) CR = Command Register [HEX] FR = Features Register [HEX] SC = Sector Count Register [HEX] SN = Sector Number Register [HEX] CL = Cylinder Low Register [HEX] CH = Cylinder High Register [HEX] DH = Device/Head Register [HEX] DC = Device Command Register [HEX] ER = Error register [HEX] ST = Status register [HEX] Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days. Error 2371 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:03:10.061 READ DMA f8 00 00 00 00 00 e0 00 00:03:10.061 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:03:10.053 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:03:10.053 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:03:10.053 READ NATIVE MAX ADDRESS Error 2370 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:03:03.328 READ DMA f8 00 00 00 00 00 e0 00 00:03:03.327 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:03:03.320 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:03:03.319 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:03:03.319 READ NATIVE MAX ADDRESS Error 2369 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:02:56.582 READ DMA f8 00 00 00 00 00 e0 00 00:02:56.582 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:02:56.574 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:02:56.574 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:02:56.574 READ NATIVE MAX ADDRESS Error 2368 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:02:49.809 READ DMA f8 00 00 00 00 00 e0 00 00:02:49.809 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:02:49.801 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:02:49.801 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:02:49.801 READ NATIVE MAX ADDRESS Error 2367 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:02:43.056 READ DMA f8 00 00 00 00 00 e0 00 00:02:43.056 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:02:43.048 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:02:43.048 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:02:43.047 READ NATIVE MAX ADDRESS SMART Self-test log structure revision number 1 No self-tests have been logged. [To run self-tests, use: smartctl -t] Device does not support Selective Self Tests/Logging Do I need to get a new hard disk for my PC?
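Before attempting any repair on a disk showing pending and uncorrectable sectors like the above, the safest first move is to image it onto known-good storage. A minimal sketch with GNU ddrescue (packaged as gddrescue on Debian/Ubuntu-family systems); the source device /dev/sda and destination /mnt/rescue are assumptions to adapt:

# Pass 1: copy everything that reads cleanly, skipping bad areas (-n)
ddrescue -n /dev/sda /mnt/rescue/sda.img /mnt/rescue/sda.map

# Pass 2: go back and retry only the bad areas, up to 3 times
ddrescue -r3 /dev/sda /mnt/rescue/sda.img /mnt/rescue/sda.map

The map file lets the second pass resume exactly where the first left off; fsck or file recovery is then run against the image rather than the dying disk.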

    Read the article

  • Garbled text in Screen [closed]

    - by Prabin Dahal
The graphical interface on my system is garbled with some text. At the beginning I thought it was due to the Java and Tomcat I had installed, but after removing Java and Tomcat it is still the same. I am using Ubuntu Server and I have installed the Xfce desktop environment with the Onboard soft keyboard. I have added my dmesg output to this message. What is the problem here? I am not able to figure it out. Thank you for your help. Prabin
[ 0.390936] usbcore: registered new interface driver usbfs [ 0.391006] usbcore: registered new interface driver hub [ 0.391147] usbcore: registered new device driver usb [ 0.391580] PCI: Using ACPI for IRQ routing [ 0.400509] PCI: pci_cache_line_size set to 64 bytes [ 0.400669] reserve RAM buffer: 000000000009ec00 - 000000000009ffff [ 0.400681] reserve RAM buffer: 000000007f597000 - 000000007fffffff [ 0.400699] reserve RAM buffer: 000000007f6f0000 - 000000007fffffff [ 0.401135] NetLabel: Initializing [ 0.401155] NetLabel: domain hash size = 128 [ 0.401168] NetLabel: protocols = UNLABELED CIPSOv4 [ 0.401212] NetLabel: unlabeled traffic allowed by default [ 0.401466] HPET: 3 timers in total, 0 timers will be used for per-cpu timer [ 0.401494] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 [ 0.401520] hpet0: 3 comparators, 64-bit 14.318180 MHz counter [ 0.408228] Switching to clocksource hpet [ 0.434341] AppArmor: AppArmor Filesystem Enabled [ 0.434447] pnp: PnP ACPI init [ 0.434531] ACPI: bus type pnp registered [ 0.434784] pnp 00:00: [bus 00-ff] [ 0.434794] pnp 00:00: [io 0x0cf8-0x0cff] [ 0.434804] pnp 00:00: [io 0x0000-0x0cf7 window] [ 0.434813] pnp 00:00: [io 0x0d00-0xffff window] [ 0.434822] pnp 00:00: [mem 0x000a0000-0x000bffff window] [ 0.434831] pnp 00:00: [mem 0x00000000 window] [ 0.434840] pnp 00:00: [mem 0x80000000-0xffffffff window] [ 0.435018] pnp 00:00: Plug and Play ACPI device, IDs PNP0a08 PNP0a03 (active) [ 0.435526] pnp 00:01: [mem 0xe0000000-0xefffffff] [ 0.435537] pnp 00:01: [mem 0x7f700000-0x7f7fffff] [ 0.435545] pnp 00:01: [mem 0x7f800000-0x7fffffff] [ 0.435554] pnp 00:01: [mem 0xfee00000-0xfeefffff] [ 0.435727] system 00:01: [mem 0xe0000000-0xefffffff] has been reserved [ 0.435754] system 00:01: [mem 0x7f700000-0x7f7fffff] has been reserved [ 0.435775] system 00:01: [mem 0x7f800000-0x7fffffff] has been reserved [ 0.435796] system 00:01: [mem 0xfee00000-0xfeefffff] has been reserved [ 0.435818] system 00:01: Plug and Play ACPI device, IDs PNP0c01 (active) [ 0.436233] pnp 00:02: [io 0x0000-0xffffffffffffffff disabled] [ 0.436245] pnp 00:02: [io 0x0000-0xffffffffffffffff disabled] [ 0.436414] system 00:02: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.436512] pnp 00:03: [io 0x0060] [ 0.436521] pnp 00:03: [io 0x0064] [ 0.436548] pnp 00:03: [irq 1] [ 0.436682] pnp 00:03: Plug and Play ACPI device, IDs PNP0303 PNP030b (active) [ 0.436825] pnp 00:04: [irq 12] [ 0.436958] pnp 00:04: Plug and Play ACPI device, IDs PNP0f03 PNP0f13 (active) [ 0.437835] pnp 00:05: [io 0x03f8-0x03ff] [ 0.437861] pnp 00:05: [irq 4] [ 0.437870] pnp 00:05: [dma 0 disabled] [ 0.438142] pnp 00:05: Plug and Play ACPI device, IDs PNP0501 (active) [ 0.439014] pnp 00:06: [io 0x02f8-0x02ff] [ 0.439036] pnp 00:06: [irq 3] [ 0.439045] pnp 00:06: [dma 0 disabled] [ 0.439297] pnp 00:06: Plug and Play ACPI device, IDs PNP0501 (active) [ 0.439346] pnp 00:07: [io 0x0000-0x000f] [ 0.439355] pnp 00:07: [io 0x0081-0x0083] [ 0.439363] pnp 00:07: [io 0x0087] [ 0.439371] pnp 00:07: [io 0x0089-0x008b] [ 0.439380] pnp 00:07: [io 0x008f] [ 0.439388] pnp 00:07: [io 0x00c0-0x00df] [ 0.439563] system 00:07:
Plug and Play ACPI device, IDs PNP0c01 (active) [ 0.439617] pnp 00:08: [io 0x0070-0x0077] [ 0.439639] pnp 00:08: [irq 8] [ 0.439751] pnp 00:08: Plug and Play ACPI device, IDs PNP0b00 (active) [ 0.439788] pnp 00:09: [io 0x0061] [ 0.439893] pnp 00:09: Plug and Play ACPI device, IDs PNP0800 (active) [ 0.439977] pnp 00:0a: [io 0x0010-0x001f] [ 0.439986] pnp 00:0a: [io 0x0022-0x003f] [ 0.439994] pnp 00:0a: [io 0x0044-0x005f] [ 0.440055] pnp 00:0a: [io 0x0063] [ 0.440063] pnp 00:0a: [io 0x0065] [ 0.440071] pnp 00:0a: [io 0x0067-0x006f] [ 0.440079] pnp 00:0a: [io 0x0072-0x007f] [ 0.440086] pnp 00:0a: [io 0x0080] [ 0.440094] pnp 00:0a: [io 0x0084-0x0086] [ 0.440102] pnp 00:0a: [io 0x0088] [ 0.440109] pnp 00:0a: [io 0x008c-0x008e] [ 0.440117] pnp 00:0a: [io 0x0090-0x009f] [ 0.440125] pnp 00:0a: [io 0x00a2-0x00bf] [ 0.440133] pnp 00:0a: [io 0x00e0-0x00ef] [ 0.440141] pnp 00:0a: [io 0x04d0-0x04d1] [ 0.440150] pnp 00:0a: [io 0x0000-0xffffffffffffffff disabled] [ 0.440160] pnp 00:0a: [io 0x0000-0xffffffffffffffff disabled] [ 0.440168] pnp 00:0a: [io 0x03f4] [ 0.440175] pnp 00:0a: [io 0x03f5] [ 0.440183] pnp 00:0a: [io 0x0374] [ 0.440190] pnp 00:0a: [io 0x0375] [ 0.440405] system 00:0a: [io 0x04d0-0x04d1] has been reserved [ 0.440432] system 00:0a: [io 0x03f4] has been reserved [ 0.440451] system 00:0a: [io 0x03f5] has been reserved [ 0.440469] system 00:0a: [io 0x0374] has been reserved [ 0.440488] system 00:0a: [io 0x0375] has been reserved [ 0.440508] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.440550] pnp 00:0b: [io 0x00f0-0x00ff] [ 0.440572] pnp 00:0b: [irq 13] [ 0.440691] pnp 00:0b: Plug and Play ACPI device, IDs PNP0c04 (active) [ 0.440770] pnp 00:0c: [io 0x0810] [ 0.440779] pnp 00:0c: [io 0x0800-0x080f] [ 0.440787] pnp 00:0c: [io 0xffff] [ 0.440947] system 00:0c: [io 0x0810] has been reserved [ 0.440970] system 00:0c: [io 0x0800-0x080f] has been reserved [ 0.440989] system 00:0c: [io 0xffff] has been reserved [ 0.441010] system 00:0c: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.441620] pnp 00:0d: [io 0x0900-0x097f] [ 0.441630] pnp 00:0d: [io 0x09c0-0x09ff] [ 0.441639] pnp 00:0d: [io 0x0400-0x043f] [ 0.441647] pnp 00:0d: [io 0x0480-0x04bf] [ 0.441656] pnp 00:0d: [mem 0xfec00000-0xfec85fff] [ 0.441664] pnp 00:0d: [mem 0xfed1c000-0xfed1ffff] [ 0.441673] pnp 00:0d: [mem 0x000c0000-0x000dffff] [ 0.441689] pnp 00:0d: [mem 0x000e0000-0x000effff] [ 0.441697] pnp 00:0d: [mem 0x000f0000-0x000fffff] [ 0.441706] pnp 00:0d: [mem 0xff800000-0xffffffff] [ 0.441911] system 00:0d: [io 0x0900-0x097f] has been reserved [ 0.441935] system 00:0d: [io 0x09c0-0x09ff] has been reserved [ 0.441955] system 00:0d: [io 0x0400-0x043f] has been reserved [ 0.441975] system 00:0d: [io 0x0480-0x04bf] has been reserved [ 0.441997] system 00:0d: [mem 0xfec00000-0xfec85fff] could not be reserved [ 0.442019] system 00:0d: [mem 0xfed1c000-0xfed1ffff] has been reserved [ 0.442040] system 00:0d: [mem 0x000c0000-0x000dffff] could not be reserved [ 0.442061] system 00:0d: [mem 0x000e0000-0x000effff] could not be reserved [ 0.442082] system 00:0d: [mem 0x000f0000-0x000fffff] could not be reserved [ 0.442103] system 00:0d: [mem 0xff800000-0xffffffff] has been reserved [ 0.442126] system 00:0d: Plug and Play ACPI device, IDs PNP0c01 (active) [ 0.442308] pnp 00:0e: [mem 0xfed00000-0xfed003ff] [ 0.442454] pnp 00:0e: Plug and Play ACPI device, IDs PNP0103 (active) [ 0.442569] pnp 00:0f: [mem 0x7f6f0000-0x7f6fffff] [ 0.442762] system 00:0f: [mem 0x7f6f0000-0x7f6fffff] has been reserved [ 0.442788] system 00:0f: 
Plug and Play ACPI device, IDs PNP0c01 (active) [ 0.443360] pnp: PnP ACPI: found 16 devices [ 0.443378] ACPI: ACPI bus type pnp unregistered [ 0.443395] PnPBIOS: Disabled by ACPI PNP [ 0.486106] PCI: max bus depth: 3 pci_try_num: 4 [ 0.486189] pci 0000:00:1c.0: PCI bridge to [bus 01-01] [ 0.486217] pci 0000:00:1c.0: bridge window [io 0xe000-0xefff] [ 0.486241] pci 0000:00:1c.0: bridge window [mem 0xd0100000-0xd01fffff] [ 0.486266] pci 0000:00:1c.0: bridge window [mem 0xff700000-0xff7fffff pref] [ 0.486298] pci 0000:03:01.0: PCI bridge to [bus 04-04] [ 0.486319] pci 0000:03:01.0: bridge window [io 0xd000-0xdfff] [ 0.486348] pci 0000:03:01.0: bridge window [mem 0xd0000000-0xd00fffff] [ 0.486374] pci 0000:03:01.0: bridge window [mem 0xff600000-0xff6fffff 64bit pref] [ 0.486406] pci 0000:03:02.0: PCI bridge to [bus 05-05] [ 0.486444] pci 0000:03:03.0: PCI bridge to [bus 06-06] [ 0.486479] pci 0000:02:00.0: PCI bridge to [bus 03-06] [ 0.486499] pci 0000:02:00.0: bridge window [io 0xd000-0xdfff] [ 0.486522] pci 0000:02:00.0: bridge window [mem 0xd0000000-0xd00fffff] [ 0.486545] pci 0000:02:00.0: bridge window [mem 0xff600000-0xff6fffff 64bit pref] [ 0.486575] pci 0000:00:1c.1: PCI bridge to [bus 02-06] [ 0.486593] pci 0000:00:1c.1: bridge window [io 0xd000-0xdfff] [ 0.486615] pci 0000:00:1c.1: bridge window [mem 0xd0000000-0xd00fffff] [ 0.486637] pci 0000:00:1c.1: bridge window [mem 0xff600000-0xff6fffff pref] [ 0.486710] pci 0000:00:1c.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 0.486735] pci 0000:00:1c.0: setting latency timer to 64 [ 0.486774] pci 0000:00:1c.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17 [ 0.486796] pci 0000:00:1c.1: setting latency timer to 64 [ 0.486817] pci 0000:02:00.0: setting latency timer to 64 [ 0.486836] pci 0000:03:01.0: setting latency timer to 64 [ 0.486858] pci 0000:03:02.0: setting latency timer to 64 [ 0.486880] pci 0000:03:03.0: setting latency timer to 64 [ 0.486893] pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7] [ 0.486902] pci_bus 0000:00: resource 5 [io 0x0d00-0xffff] [ 0.486912] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff] [ 0.486922] pci_bus 0000:00: resource 7 [mem 0x80000000-0xffffffff] [ 0.486932] pci_bus 0000:01: resource 0 [io 0xe000-0xefff] [ 0.486941] pci_bus 0000:01: resource 1 [mem 0xd0100000-0xd01fffff] [ 0.486951] pci_bus 0000:01: resource 2 [mem 0xff700000-0xff7fffff pref] [ 0.486961] pci_bus 0000:02: resource 0 [io 0xd000-0xdfff] [ 0.486970] pci_bus 0000:02: resource 1 [mem 0xd0000000-0xd00fffff] [ 0.486980] pci_bus 0000:02: resource 2 [mem 0xff600000-0xff6fffff pref] [ 0.486989] pci_bus 0000:03: resource 0 [io 0xd000-0xdfff] [ 0.486998] pci_bus 0000:03: resource 1 [mem 0xd0000000-0xd00fffff] [ 0.487008] pci_bus 0000:03: resource 2 [mem 0xff600000-0xff6fffff 64bit pref] [ 0.487018] pci_bus 0000:04: resource 0 [io 0xd000-0xdfff] [ 0.487028] pci_bus 0000:04: resource 1 [mem 0xd0000000-0xd00fffff] [ 0.487038] pci_bus 0000:04: resource 2 [mem 0xff600000-0xff6fffff 64bit pref] [ 0.487177] NET: Registered protocol family 2 [ 0.487405] IP route cache hash table entries: 32768 (order: 5, 131072 bytes) [ 0.488397] TCP established hash table entries: 131072 (order: 8, 1048576 bytes) [ 0.489792] TCP bind hash table entries: 65536 (order: 7, 524288 bytes) [ 0.490493] TCP: Hash tables configured (established 131072 bind 65536) [ 0.490525] TCP reno registered [ 0.490551] UDP hash table entries: 512 (order: 2, 16384 bytes) [ 0.490590] UDP-Lite hash table entries: 512 (order: 2, 16384 bytes) [ 0.490898] NET: Registered protocol family 1 [ 
0.490970] pci 0000:00:02.0: Boot video device [ 0.491052] pci 0000:00:1d.0: PCI INT A -> GSI 20 (level, low) -> IRQ 20 [ 0.491092] pci 0000:00:1d.0: PCI INT A disabled [ 0.491134] pci 0000:00:1d.1: PCI INT B -> GSI 21 (level, low) -> IRQ 21 [ 0.491174] pci 0000:00:1d.1: PCI INT B disabled [ 0.491220] pci 0000:00:1d.2: PCI INT C -> GSI 22 (level, low) -> IRQ 22 [ 0.491259] pci 0000:00:1d.2: PCI INT C disabled [ 0.491307] pci 0000:00:1d.7: PCI INT D -> GSI 23 (level, low) -> IRQ 23 [ 0.864431] Freeing initrd memory: 13820k freed [ 2.088042] pci 0000:00:1d.7: EHCI: BIOS handoff failed (BIOS bug?) 01010001 [ 2.088207] pci 0000:00:1d.7: PCI INT D disabled [ 2.088267] PCI: CLS 64 bytes, default 64 [ 2.089248] audit: initializing netlink socket (disabled) [ 2.089287] type=2000 audit(1349363630.084:1): initialized [ 2.144783] highmem bounce pool size: 64 pages [ 2.144808] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 2.160057] VFS: Disk quotas dquot_6.5.2 [ 2.160232] Dquot-cache hash table entries: 1024 (order 0, 4096 bytes) [ 2.161716] fuse init (API version 7.17) [ 2.161995] msgmni has been set to 1713 [ 2.162925] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) [ 2.163008] io scheduler noop registered [ 2.163023] io scheduler deadline registered [ 2.163048] io scheduler cfq registered (default) [ 2.163339] pcieport 0000:00:1c.0: setting latency timer to 64 [ 2.163530] pcieport 0000:00:1c.1: setting latency timer to 64 [ 2.163706] pcieport 0000:02:00.0: setting latency timer to 64 [ 2.163873] pcieport 0000:03:01.0: setting latency timer to 64 [ 2.163964] pcieport 0000:03:01.0: irq 40 for MSI/MSI-X [ 2.164193] pcieport 0000:03:02.0: setting latency timer to 64 [ 2.164272] pcieport 0000:03:02.0: irq 41 for MSI/MSI-X [ 2.164453] pcieport 0000:03:03.0: setting latency timer to 64 [ 2.164531] pcieport 0000:03:03.0: irq 42 for MSI/MSI-X [ 2.164783] pcieport 0000:00:1c.0: Signaling PME through PCIe PME interrupt [ 2.164801] pci 0000:01:00.0: Signaling PME through PCIe PME interrupt [ 2.164816] pcie_pme 0000:00:1c.0:pcie01: service driver pcie_pme loaded [ 2.164853] pcieport 0000:00:1c.1: Signaling PME through PCIe PME interrupt [ 2.164867] pcieport 0000:02:00.0: Signaling PME through PCIe PME interrupt [ 2.164880] pcieport 0000:03:01.0: Signaling PME through PCIe PME interrupt [ 2.164892] pci 0000:04:00.0: Signaling PME through PCIe PME interrupt [ 2.164904] pcieport 0000:03:02.0: Signaling PME through PCIe PME interrupt [ 2.164917] pcieport 0000:03:03.0: Signaling PME through PCIe PME interrupt [ 2.164932] pcie_pme 0000:00:1c.1:pcie01: service driver pcie_pme loaded [ 2.164988] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 2.165115] pciehp 0000:00:1c.0:pcie04: HPC vendor_id 8086 device_id 8110 ss_vid 8086 ss_did 8119 [ 2.165177] pciehp 0000:00:1c.0:pcie04: service driver pciehp loaded [ 2.165199] pciehp 0000:00:1c.1:pcie04: HPC vendor_id 8086 device_id 8112 ss_vid 8086 ss_did 8119 [ 2.165260] pciehp 0000:00:1c.1:pcie04: service driver pciehp loaded [ 2.165290] pciehp: PCI Express Hot Plug Controller Driver version: 0.4 [ 2.165488] intel_idle: MWAIT substates: 0x3020220 [ 2.165508] intel_idle: v0.4 model 0x1C [ 2.165513] intel_idle: lapic_timer_reliable_states 0x2 [ 2.165519] Marking TSC unstable due to TSC halts in idle states deeper than C2 [ 2.165779] input: Lid Switch as /devices/LNXSYSTM:00/device:00/PNP0C0D:00/input/input0 [ 2.165855] ACPI: Lid Switch [LID] [ 2.165983] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input1 [ 2.166005] 
ACPI: Power Button [PWRB] [ 2.173811] thermal LNXTHERM:00: registered as thermal_zone0 [ 2.173829] ACPI: Thermal Zone [TZ00] (48 C) [ 2.174004] thermal LNXTHERM:01: registered as thermal_zone1 [ 2.174018] ACPI: Thermal Zone [TZ01] (34 C) [ 2.174194] thermal LNXTHERM:02: registered as thermal_zone2 [ 2.174207] ACPI: Thermal Zone [TZ02] (34 C) [ 2.174378] thermal LNXTHERM:03: registered as thermal_zone3 [ 2.174392] ACPI: Thermal Zone [TZ03] (34 C) [ 2.174503] ERST: Table is not found! [ 2.174513] GHES: HEST is not enabled! [ 2.174601] isapnp: Scanning for PnP cards... [ 2.176175] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled [ 2.196702] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 2.292409] serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 2.528909] isapnp: No Plug & Play device found [ 2.588733] 00:05: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 2.624523] 00:06: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 2.640702] Linux agpgart interface v0.103 [ 2.645138] brd: module loaded [ 2.647452] loop: module loaded [ 2.648149] pata_acpi 0000:00:1f.1: setting latency timer to 64 [ 2.649238] Fixed MDIO Bus: probed [ 2.649315] tun: Universal TUN/TAP device driver, 1.6 [ 2.649327] tun: (C) 1999-2004 Max Krasnyansky <[email protected]> [ 2.649524] PPP generic driver version 2.4.2 [ 2.649824] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 2.649884] ehci_hcd 0000:00:1d.7: PCI INT D -> GSI 23 (level, low) -> IRQ 23 [ 2.649937] ehci_hcd 0000:00:1d.7: setting latency timer to 64 [ 2.649946] ehci_hcd 0000:00:1d.7: EHCI Host Controller [ 2.650082] ehci_hcd 0000:00:1d.7: new USB bus registered, assigned bus number 1 [ 2.650148] ehci_hcd 0000:00:1d.7: debug port 1 [ 2.654045] ehci_hcd 0000:00:1d.7: cache line size of 64 is not supported [ 2.654093] ehci_hcd 0000:00:1d.7: irq 23, io mem 0xd02c4000 [ 2.668035] ehci_hcd 0000:00:1d.7: USB 2.0 started, EHCI 1.00 [ 2.668392] hub 1-0:1.0: USB hub found [ 2.668413] hub 1-0:1.0: 8 ports detected [ 2.668618] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 2.668666] uhci_hcd: USB Universal Host Controller Interface driver [ 2.668726] uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 20 (level, low) -> IRQ 20 [ 2.668751] uhci_hcd 0000:00:1d.0: setting latency timer to 64 [ 2.668759] uhci_hcd 0000:00:1d.0: UHCI Host Controller [ 2.668910] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 2 [ 2.668981] uhci_hcd 0000:00:1d.0: irq 20, io base 0x0000f040 [ 2.669335] hub 2-0:1.0: USB hub found [ 2.669355] hub 2-0:1.0: 2 ports detected [ 2.669508] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 21 (level, low) -> IRQ 21 [ 2.669531] uhci_hcd 0000:00:1d.1: setting latency timer to 64 [ 2.669538] uhci_hcd 0000:00:1d.1: UHCI Host Controller [ 2.669675] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 3 [ 2.669739] uhci_hcd 0000:00:1d.1: irq 21, io base 0x0000f020 [ 2.670099] hub 3-0:1.0: USB hub found [ 2.670118] hub 3-0:1.0: 2 ports detected [ 2.670271] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 22 (level, low) -> IRQ 22 [ 2.670295] uhci_hcd 0000:00:1d.2: setting latency timer to 64 [ 2.670302] uhci_hcd 0000:00:1d.2: UHCI Host Controller [ 2.670435] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 4 [ 2.670502] uhci_hcd 0000:00:1d.2: irq 22, io base 0x0000f000 [ 2.670869] hub 4-0:1.0: USB hub found [ 2.670888] hub 4-0:1.0: 2 ports detected [ 2.671186] usbcore: registered new interface driver libusual [ 2.671332] i8042: PNP: PS/2 Controller [PNP0303:PS2K,PNP0f03:PS2M] at 0x60,0x64 irq 1,12 [ 2.673408] 
serio: i8042 KBD port at 0x60,0x64 irq 1 [ 2.673437] serio: i8042 AUX port at 0x60,0x64 irq 12 [ 2.673844] mousedev: PS/2 mouse device common for all mice [ 2.674272] rtc_cmos 00:08: RTC can wake from S4 [ 2.674482] rtc_cmos 00:08: rtc core: registered rtc_cmos as rtc0 [ 2.674529] rtc0: alarms up to one year, y3k, 242 bytes nvram, hpet irqs [ 2.674691] device-mapper: uevent: version 1.0.3 [ 2.674903] device-mapper: ioctl: 4.22.0-ioctl (2011-10-19) initialised: [email protected] [ 2.675024] EISA: Probing bus 0 at eisa.0 [ 2.675037] EISA: Cannot allocate resource for mainboard [ 2.675050] Cannot allocate resource for EISA slot 1 [ 2.675061] Cannot allocate resource for EISA slot 2 [ 2.675072] Cannot allocate resource for EISA slot 3 [ 2.675083] Cannot allocate resource for EISA slot 4 [ 2.675094] Cannot allocate resource for EISA slot 5 [ 2.675105] Cannot allocate resource for EISA slot 6 [ 2.675116] Cannot allocate resource for EISA slot 7 [ 2.675127] Cannot allocate resource for EISA slot 8 [ 2.675137] EISA: Detected 0 cards. [ 2.675161] cpufreq-nforce2: No nForce2 chipset. [ 2.675401] cpuidle: using governor ladder [ 2.675786] cpuidle: using governor menu [ 2.675797] EFI Variables Facility v0.08 2004-May-17 [ 2.676429] TCP cubic registered [ 2.676751] NET: Registered protocol family 10 [ 2.678031] NET: Registered protocol family 17 [ 2.678052] Registering the dns_resolver key type [ 2.678107] Using IPI No-Shortcut mode [ 2.678515] PM: Hibernation image not present or could not be loaded. [ 2.678543] registered taskstats version 1 [ 2.701145] Magic number: 0:84:234 [ 2.701312] rtc_cmos 00:08: setting system clock to 2012-10-04 15:13:51 UTC (1349363631) [ 2.702280] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found [ 2.702294] EDD information not available. [ 2.702858] Freeing unused kernel memory: 740k freed [ 2.703630] Write protecting the kernel text: 5816k [ 2.703692] Write protecting the kernel read-only data: 2376k [ 2.703706] NX-protecting the kernel data: 4424k [ 2.751226] udevd[84]: starting version 175 [ 2.980162] usb 1-1: new high-speed USB device number 2 using ehci_hcd [ 3.001394] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded [ 3.001474] r8169 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 3.001554] r8169 0000:01:00.0: setting latency timer to 64 [ 3.001654] r8169 0000:01:00.0: irq 43 for MSI/MSI-X [ 3.004220] r8169 0000:01:00.0: eth0: RTL8168c/8111c at 0xf8416000, 00:18:92:03:10:46, XID 1c4000c0 IRQ 43 [ 3.004254] r8169 0000:01:00.0: eth0: jumbo features [frames: 6128 bytes, tx checksumming: ko] [ 3.004347] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded [ 3.005085] r8169 0000:04:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18 [ 3.005182] r8169 0000:04:00.0: setting latency timer to 64 [ 3.005292] r8169 0000:04:00.0: irq 44 for MSI/MSI-X [ 3.007187] r8169 0000:04:00.0: eth1: RTL8168c/8111c at 0xf8418000, 00:18:92:03:10:47, XID 1c4000c0 IRQ 44 [ 3.007224] r8169 0000:04:00.0: eth1: jumbo features [frames: 6128 bytes, tx checksumming: ko] [ 3.034417] pata_sch 0000:00:1f.1: version 0.2 [ 3.034518] pata_sch 0000:00:1f.1: setting latency timer to 64 [ 3.036698] scsi0 : pata_sch [ 3.039842] scsi1 : pata_sch [ 3.040913] ata1: PATA max UDMA/100 cmd 0x1f0 ctl 0x3f6 bmdma 0xf060 irq 14 [ 3.040940] ata2: PATA max UDMA/100 cmd 0x170 ctl 0x376 bmdma 0xf068 irq 15 [ 3.131850] Initializing USB Mass Storage driver... [ 3.136405] scsi2 : usb-storage 1-1:1.0 [ 3.136642] usbcore: registered new interface driver usb-storage [ 3.136656] USB Mass Storage support registered. 
[ 3.524465] usb 3-1: new low-speed USB device number 2 using uhci_hcd [ 3.968144] usb 3-2: new full-speed USB device number 3 using uhci_hcd [ 4.137903] scsi 2:0:0:0: Direct-Access TS TS4GUFM-H 1100 PQ: 0 ANSI: 0 CCS [ 4.140067] sd 2:0:0:0: Attached scsi generic sg0 type 0 [ 4.140590] sd 2:0:0:0: [sda] 8028160 512-byte logical blocks: (4.11 GB/3.82 GiB) [ 4.141597] sd 2:0:0:0: [sda] Write Protect is off [ 4.141618] sd 2:0:0:0: [sda] Mode Sense: 43 00 00 00 [ 4.142974] sd 2:0:0:0: [sda] No Caching mode page present [ 4.143000] sd 2:0:0:0: [sda] Assuming drive cache: write through [ 4.145837] sd 2:0:0:0: [sda] No Caching mode page present [ 4.145858] sd 2:0:0:0: [sda] Assuming drive cache: write through [ 4.147931] sda: sda1 sda2 < sda5 > [ 4.150972] sd 2:0:0:0: [sda] No Caching mode page present [ 4.151001] sd 2:0:0:0: [sda] Assuming drive cache: write through [ 4.151023] sd 2:0:0:0: [sda] Attached SCSI disk [ 4.249168] input: HID 046a:004b as /devices/pci0000:00/0000:00:1d.1/usb3/3-1/3-1:1.0/input/input2 [ 4.249579] generic-usb 0003:046A:004B.0001: input,hidraw0: USB HID v1.11 Keyboard [HID 046a:004b] on usb-0000:00:1d.1-1/input0 [ 4.287805] input: HID 046a:004b as /devices/pci0000:00/0000:00:1d.1/usb3/3-1/3-1:1.1/input/input3 [ 4.289235] generic-usb 0003:046A:004B.0002: input,hidraw1: USB HID v1.11 Mouse [HID 046a:004b] on usb-0000:00:1d.1-1/input1 [ 4.297604] input: EloTouchSystems,Inc Elo TouchSystems 2216 AccuTouch\xffffffc2\xffffffae\xffffffae USB Touchmonitor Interface as /devices/pci0000:00/0000:00:1d.1/usb3/3-2/3-2:1.0/input/input4 [ 4.298913] generic-usb 0003:04E7:0050.0003: input,hidraw2: USB HID v1.00 Pointer [EloTouchSystems,Inc Elo TouchSystems 2216 AccuTouch\xffffffc2\xffffffae\xffffffae USB Touchmonitor Interface] on usb-0000:00:1d.1-2/input0 [ 4.299878] usbcore: registered new interface driver usbhid [ 4.299925] usbhid: USB HID core driver [ 4.352639] EXT4-fs (sda1): INFO: recovery required on readonly filesystem [ 4.352661] EXT4-fs (sda1): write access will be enabled during recovery [ 8.519257] EXT4-fs (sda1): recovery complete [ 8.564389] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null) [ 14.280922] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 14.280944] ADDRCONF(NETDEV_UP): eth1: link is not ready [ 14.310368] udevd[308]: starting version 175 [ 14.353873] Adding 1045500k swap on /dev/sda5. Priority:-1 extents:1 across:1045500k [ 14.428718] lp: driver loaded but no devices found [ 14.521667] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro [ 15.073459] [drm] Initialized drm 1.1.0 20060810 [ 15.097073] psb_gfx: module is from the staging directory, the quality is unknown, you have been warned. 
[ 15.180630] gma500 0000:00:02.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 15.180648] gma500 0000:00:02.0: setting latency timer to 64 [ 15.182117] Stolen memory information [ 15.182127] base in RAM: 0x7f800000 [ 15.182134] size: 7932K, calculated by (GTT RAM base) - (Stolen base), seems wrong [ 15.182143] the correct size should be: 8M(dvmt mode=3) [ 15.234889] Set up 1983 stolen pages starting at 0x7f800000, GTT offset 0K [ 15.235126] [drm] SGX core id = 0x01130000 [ 15.235135] [drm] SGX core rev major = 0x01, minor = 0x02 [ 15.235143] [drm] SGX core rev maintenance = 0x01, designer = 0x00 [ 15.268796] [Firmware Bug]: ACPI: No _BQC method, cannot determine initial brightness [ 15.269888] acpi device:04: registered as cooling_device2 [ 15.270568] acpi device:05: registered as cooling_device3 [ 15.270947] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A08:00/LNXVIDEO:00/input/input5 [ 15.271238] ACPI: Video Device [GFX0] (multi-head: yes rom: no post: no) [ 15.271424] [drm] Supports vblank timestamp caching Rev 1 (10.10.2010). [ 15.271434] [drm] No driver support for vblank timestamp query. [ 15.374694] type=1400 audit(1349363644.167:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=435 comm="apparmor_parser" [ 15.385518] type=1400 audit(1349363644.179:3): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=435 comm="apparmor_parser" [ 15.386369] type=1400 audit(1349363644.179:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=435 comm="apparmor_parser" [ 15.677514] r8169 0000:01:00.0: eth0: link down [ 15.694828] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 16.537490] gma500 0000:00:02.0: allocated 800x480 fb [ 16.558066] fbcon: psbfb (fb0) is primary device [ 16.747122] gma500 0000:00:02.0: BL bug: Reg 00000000 save 00000000 [ 16.775550] Console: switching to colour frame buffer device 100x30 [ 16.781804] fb0: psbfb frame buffer device [ 16.781812] drm: registered panic notifier [ 16.870168] [drm] Initialized gma500 1.0.0 2011-06-06 for 0000:00:02.0 on minor 0 [ 16.871166] snd_hda_intel 0000:00:1b.0: power state changed by ACPI to D0 [ 16.871186] snd_hda_intel 0000:00:1b.0: power state changed by ACPI to D0 [ 16.871207] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 16.871284] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 [ 29.338953] r8169 0000:01:00.0: eth0: link up [ 29.339471] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 31.427223] init: failsafe main process (675) killed by TERM signal [ 31.522411] type=1400 audit(1349363660.316:5): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=889 comm="apparmor_parser" [ 31.523956] type=1400 audit(1349363660.316:6): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=889 comm="apparmor_parser" [ 31.524882] type=1400 audit(1349363660.320:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=889 comm="apparmor_parser" [ 31.525940] type=1400 audit(1349363660.320:8): apparmor="STATUS" operation="profile_load" name="/usr/sbin/tcpdump" pid=891 comm="apparmor_parser" [ 34.526445] postgres (1003): /proc/1003/oom_adj is deprecated, please use /proc/1003/oom_score_adj instead. [ 40.144048] eth0: no IPv6 routers present
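The boot log above shows the display being driven by gma500 (the staging psb_gfx driver for Intel GMA 500 "Poulsbo" graphics, which the kernel itself flags as "quality is unknown"), so the graphics driver is a more plausible suspect than Java or Tomcat. A few read-only commands to narrow it down (a sketch; paths and patterns are generic, not a diagnosis):

# Which kernel driver is actually bound to the VGA adapter?
lspci -nnk | grep -iA3 vga

# Pull only the graphics-related messages out of the boot log
dmesg | grep -iE 'gma500|psb|drm'

# See which X driver ended up handling the screen
grep -iE 'gma|psb|fbdev|vesa' /var/log/Xorg.0.log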

    Read the article

  • rsync over ssh is not working anymore, while ssh itself is working fine (Write failed: broken pipe)

    - by brazorf
This issue started happening after I changed my router. This is the scenario: Windows 7 host, Ubuntu 10.04 guest (VirtualBox), Ubuntu 10.04 remote server. What I used to do is run a very basic rsync command: rsync -avz --delete /local/path/ username@host:/path/to/remote/directory This worked perfectly until I changed ADSL provider and changed the router as well. Now, this happens: rsync on the Ubuntu guest is not working anymore (to any random server) when using the new router; rsync on the Ubuntu guest IS working if I switch back to the old router; I tried a fresh VirtualBox Ubuntu install, and the command works with both routers. So, the not-working combination is old Ubuntu + new router. To make things worse, I can state that (on the not-working Ubuntu) I can ping the remote host; a plain ssh connection to the remote host works fine (I can authenticate, connect, and do stuff on the remote host); scp is NOT working (this is just a further thing I tried). This is the console output of the execution, with ssh verbosity set to -vvvv:
root@client:~# rsync -ae 'ssh -vvvv' /root/test-rsync/ {username}@{hostname}:/home/{username}/test/ OpenSSH_5.3p1 Debian-3ubuntu7, OpenSSL 0.9.8k 25 Mar 2009 debug1: Reading configuration data /root/.ssh/config debug1: Applying options for {hostname} debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to {hostname} [{ip.add.re.ss}] port 22. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug3: Not a RSA1 key file /root/.ssh/{private_key}. debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /root/.ssh/{private_key} type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3p1 Debian-3ubuntu7 debug1: match: OpenSSH_5.3p1 Debian-3ubuntu7 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu7 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug3: Wrote 792 bytes for a total of 831 debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: 
aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: [email protected],zlib,none debug2: kex_parse_kexinit: [email protected],zlib,none debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 [email protected] debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 [email protected] debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug3: Wrote 24 bytes for a total of 855 debug2: dh_gen_key: priv key bits set: 125/256 debug2: bits set: 525/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: Wrote 144 bytes for a total of 999 debug3: check_host_in_hostfile: filename /root/.ssh/known_hosts debug3: check_host_in_hostfile: match line 4 debug3: check_host_in_hostfile: filename /root/.ssh/known_hosts debug3: check_host_in_hostfile: match line 5 debug1: Host '{hostname}' is known and matches the RSA host key. 
debug1: Found key in /root/.ssh/known_hosts:4 debug2: bits set: 512/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug3: Wrote 16 bytes for a total of 1015 debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug3: Wrote 48 bytes for a total of 1063 debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /root/.ssh/{private_key} (0x7f3ad0e7f9b0) debug3: Wrote 80 bytes for a total of 1143 debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,gssapi,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering public key: /root/.ssh/{private_key} debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug3: Wrote 368 bytes for a total of 1511 debug1: Server accepts key: pkalg ssh-rsa blen 277 debug2: input_userauth_pk_ok: fp 1b:65:36:92:59:b3:12:3e:8c:c6:03:28:d4:81:09:dc debug3: sign_and_send_pubkey debug1: read PEM private key done: type RSA debug3: Wrote 656 bytes for a total of 2167 debug1: Enabling compression at level 6. debug1: Authentication succeeded (publickey). debug2: fd 4 setting O_NONBLOCK debug3: fd 5 is O_NONBLOCK debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting no-more-sessions@openssh.com debug1: Entering interactive session. debug3: Wrote 112 bytes for a total of 2279 debug2: callback start debug2: client_session2_setup: id 0 debug1: Sending environment. debug3: Ignored env TERM debug3: Ignored env SHELL debug3: Ignored env SSH_CLIENT debug3: Ignored env SSH_TTY debug1: Sending env LC_ALL = en_US.UTF-8 debug2: channel 0: request env confirm 0 debug3: Ignored env USER debug3: Ignored env LS_COLORS debug3: Ignored env MAIL debug3: Ignored env PATH debug3: Ignored env PWD debug1: Sending env LANG = en_US.UTF-8 debug2: channel 0: request env confirm 0 debug3: Ignored env SHLVL debug3: Ignored env HOME debug3: Ignored env LANGUAGE debug3: Ignored env LOGNAME debug3: Ignored env SSH_CONNECTION debug3: Ignored env LESSOPEN debug3: Ignored env LESSCLOSE debug3: Ignored env _ debug1: Sending command: rsync --server -logDtpre.iLsf . /home/{username}/test/ debug2: channel 0: request exec confirm 1 debug2: fd 3 setting TCP_NODELAY debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug3: Wrote 208 bytes for a total of 2487 At this point everything freezes for many minutes, ending in: Write failed: Broken pipe rsync: connection unexpectedly closed (0 bytes received so far) [sender] rsync error: unexplained error (code 255) at io.c(601) [sender=3.0.7] Any suggestions? Thank you. F. Edit 2012/09/13: I am changing the title and issue definition, since I made some TINY steps ahead and I think I can give more detailed clues.

    Read the article

  • Reading input from a text file omits the first entry and adds a nonsense value to the end?

    - by Greenhouse Gases
    Hi there. When I input locations from a txt file, I get a peculiar error: it misses off the first entry, yet adds a garbage entry to the end of the linked list (it is designed to take the name, latitude and longitude for each location, as you will notice). I imagine this is an issue with where it starts and stops collecting the input, but I can't find the error! It reads the first line correctly but then skips to the next before adding it; during testing, the program had no record of the first location, Lisbon, even though it was reading it while I stepped into the method call. Very bizarre, but hopefully someone knows the issue. Here is my header file: #include <string> struct locationNode { char nodeCityName [35]; double nodeLati; double nodeLongi; locationNode* Next; void CorrectCase() // Correct upper and lower case letters of input { int MAX_SIZE = 35; int firstLetVal = this->nodeCityName[0], letVal; int n = 1; // variable for name index from second letter onwards if((this->nodeCityName[0] >90) && (this->nodeCityName[0] < 123)) // First letter is lower case { firstLetVal = firstLetVal - 32; // Capitalise first letter this->nodeCityName[0] = firstLetVal; } while(n <= MAX_SIZE - 1) { if((this->nodeCityName[n] >= 65) && (this->nodeCityName[n] <= 90)) { letVal = this->nodeCityName[n] + 32; this->nodeCityName[n] = letVal; } n++; } //cityNameInput = this->nodeCityName; } }; class Locations { private: int size; public: Locations(){ }; // constructor for the class locationNode* Head; //int Add(locationNode* Item); }; And here is the file containing main: // U08221.cpp : main project file. #include "stdafx.h" #include "Locations.h" #include <iostream> #include <string> #include <fstream> using namespace std; int n = 0,x, locationCount = 0, MAX_SIZE = 35; string cityNameInput; char targetCity[35]; bool acceptedInput = false, userInputReq = true, match = false, nodeExists = false;// note: addLocation(), set to true to enable user input as opposed to txt file locationNode *start_ptr = NULL; // pointer to first entry in the list locationNode *temp, *temp2; // Part is a pointer to a new locationNode we can assign changing value followed by a call to Add locationNode *seek, *bridge; void setElementsNull(char cityParam[]) { int y=0, count =0; while(cityParam[y] != NULL) { y++; } while(y < MAX_SIZE) { cityParam[y] = NULL; y++; } } void addLocation() { temp = new locationNode; // declare the space for a pointer item and assign a temporary pointer to it if(!userInputReq) // bool that determines whether user input is required in adding the node to the list { cout << endl << "Enter the name of the location: "; cin >> temp->nodeCityName; temp->CorrectCase(); setElementsNull(temp->nodeCityName); cout << endl << "Please enter the latitude value for this location: "; cin >> temp->nodeLati; cout << endl << "Please enter the longitude value for this location: "; cin >> temp->nodeLongi; cout << endl; } temp->Next = NULL; //set to NULL as when one is added it is currently the last in the list and so can not point to the next if(start_ptr == NULL){ // if list is currently empty, start_ptr will point to this node start_ptr = temp; } else { temp2 = start_ptr; // We know this is not NULL - list not empty! while (temp2->Next != NULL) { temp2 = temp2->Next; // Move to next link in chain until reach end of list } temp2->Next = temp; } ++locationCount; // increment counter for number of records in list if(!userInputReq){ cout << "Location sucessfully added to the database!
There are " << locationCount << " location(s) stored" << endl; } } void populateList(){ ifstream inputFile; inputFile.open ("locations.txt", ios::in); userInputReq = true; temp = new locationNode; // declare the space for a pointer item and assign a temporary pointer to it do { inputFile.get(temp->nodeCityName, 35, ' '); setElementsNull(temp->nodeCityName); inputFile >> temp->nodeLati; inputFile >> temp->nodeLongi; setElementsNull(temp->nodeCityName); if(temp->nodeCityName[0] == 10) //remove linefeed from input { for(int i = 0; temp->nodeCityName[i] != NULL; i++) { temp->nodeCityName[i] = temp->nodeCityName[i + 1]; } } addLocation(); } while(!inputFile.eof()); userInputReq = false; cout << "Successful!" << endl << "List contains: " << locationCount << " entries" << endl; cout << endl; inputFile.close(); } bool nodeExistTest(char targetCity[]) // see if entry is present in the database { match = false; seek = start_ptr; int letters = 0, letters2 = 0, x = 0, y = 0; while(targetCity[y] != NULL) { letters2++; y++; } while(x <= locationCount) // locationCount is number of entries currently in list { y=0, letters = 0; while(seek->nodeCityName[y] != NULL) // count letters in the current name { letters++; y++; } if(letters == letters2) // same amount of letters in the name { y = 0; while(y <= letters) // compare each letter against one another { if(targetCity[y] == seek->nodeCityName[y]) { match = true; y++; } else { match = false; y = letters + 1; // no match, terminate comparison } } } if(match) { x = locationCount + 1; //found match so terminate loop } else{ if(seek->Next != NULL) { bridge = seek; seek = seek->Next; x++; } else { x = locationCount + 1; // end of list so terminate loop } } } return match; } void deleteRecord() // complete this { int junction = 0; locationNode *place; cout << "Enter the name of the city you wish to remove" << endl; cin >> targetCity; setElementsNull(targetCity); if(nodeExistTest(targetCity)) //if this node does exist { if(seek == start_ptr) // if it is the first in the list { junction = 1; } if(seek != start_ptr && seek->Next == NULL) // if it is last in the list { junction = 2; } switch(junction) // will alter list accordingly dependant on where the searched for link is { case 1: start_ptr = start_ptr->Next; delete seek; --locationCount; break; case 2: place = seek; seek = bridge; delete place; --locationCount; break; default: bridge->Next = seek->Next; delete seek; --locationCount; break; } } else { cout << targetCity << "That entry does not currently exist" << endl << endl << endl; } } void searchDatabase() { char choice; cout << "Enter search term..." << endl; cin >> targetCity; if(nodeExistTest(targetCity)) { cout << "Entry: " << endl << endl; } else { cout << "Sorry, that city is not currently present in the list." << endl << "Would you like to add this city now Y/N?" << endl; cin >> choice; /*while(choice != ('Y' || 'N')) { cout << "Please enter a valid choice..." 
<< endl; cin >> choice; }*/ switch(choice) { case 'Y': addLocation(); break; case 'N': break; default : cout << "Invalid choice" << endl; break; } } } void printDatabase() { temp = start_ptr; // set temp to the start of the list do { if (temp == NULL) { cout << "You have reached the end of the database" << endl; } else { // Display details for what temp points to at that stage cout << "Location : " << temp->nodeCityName << endl; cout << "Latitude : " << temp->nodeLati << endl; cout << "Longitude : " << temp->nodeLongi << endl; cout << endl; // Move on to next locationNode if one exists temp = temp->Next; } } while (temp != NULL); } void nameValidation(string name) { n = 0; // start from first letter x = name.size(); while(!acceptedInput) { if((name[n] >= 65) && (name[n] <= 122)) // is in the range of letters { while(n <= x - 1) { while((name[n] >=91) && (name[n] <=97)) // ERROR!! { cout << "Please enter a valid city name" << endl; cin >> name; } n++; } } else { cout << "Please enter a valid city name" << endl; cin >> name; } if(n <= x - 1) { acceptedInput = true; } } cityNameInput = name; } int main(array<System::String ^> ^args) { //main contains test calls to functions at present cout << "Populating list..."; populateList(); printDatabase(); deleteRecord(); printDatabase(); cin >> cityNameInput; } The text file contains this (ignore the names, they are just for testing!!): Lisbon 45 47 Fattah 45 47 Darius 42 49 Peter 45 27 Sarah 85 97 Michelle 45 47 John 25 67 Colin 35 87 Shiron 40 57 George 34 45 Sean 22 33 The output omits Lisbon, but adds on a garbage entry with nonsense values. Any ideas why? Thank you in advance.
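    Both symptoms are consistent with two interacting bugs rather than one. addLocation() begins with an unconditional temp = new locationNode;, which discards the record populateList() has just read (this is how the first location, Lisbon, gets lost) and links a fresh empty node that the next read then fills in; and the do { ... } while(!inputFile.eof()) loop runs one extra time after the last record, which is where the garbage entry at the end comes from. Below is a minimal sketch of a corrected populateList(), assuming addLocation() is also changed to link the node it is handed (i.e. its unconditional allocation is removed); the globals are the ones from the question:

    #include <fstream>
    #include <iomanip>

    void populateList()
    {
        std::ifstream inputFile("locations.txt");
        userInputReq = true;                  // suppress the prompting branch in addLocation()
        locationNode* node = new locationNode;
        // operator>> skips whitespace (including the newline that get() would leave
        // behind), and the loop test fails cleanly at end-of-file BEFORE a bogus
        // record is linked, unlike the eof()-controlled do/while.
        while (inputFile >> std::setw(35) >> node->nodeCityName
                         >> node->nodeLati >> node->nodeLongi)
        {
            temp = node;              // hand the filled node to addLocation()...
            addLocation();            // ...which must link it, not allocate a new one
            node = new locationNode;  // fresh node for the next record
        }
        delete node;                  // the final allocation was never filled or linked
        userInputReq = false;
        inputFile.close();
    }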

    Read the article

  • Sensible Way to Pass Web Data in XML to a SQL Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now. First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change. The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) to a temp table. A stored procedure is then run which expects the temp table to exist and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now, there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes. Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element and an ID of 0 signifies "create" aka an insert of a new item. <Elements> <Element ID="1234"> <Attr ID="221">Value</Attr> <Attr ID="225">287</Attr> <Attr ID="234"> <Element ID="99825"> <Attr ID="7">Value1</Attr> <Attr ID="8">Value2</Attr> <Attr ID="9" Action="delete" /> </Element> <Element ID="99826" Action="delete" /> <Element ID="0" Type="24"> <Attr ID="7">Value4</Attr> <Attr ID="8">Value5</Attr> <Attr ID="9">Value6</Attr> </Element> <Element ID="0" Type="24"> <Attr ID="7">Value7</Attr> <Attr ID="8">Value8</Attr> <Attr ID="9">Value9</Attr> </Element> </Attr> <Rel ID="3827" Action="delete" /> <Rel ID="2284" Role="parent"> <Element ID="3827" /> <Element ID="3829" /> <Attr ID="665">1</Attr> </Rel> <Rel ID="0" Type="23" Role="child"> <Element ID="3830" /> <Attr ID="67" </Rel> </Element> <Element ID="0" Type="87"> <Attr ID="221">Value</Attr> <Attr ID="225">569</Attr> <Attr ID="234"> <Element ID="0" Type="24"> <Attr ID="7">Value10</Attr> <Attr ID="8">Value11</Attr> <Attr ID="9">Value12</Attr> </Element> </Attr> </Element> <Element ID="1235" Action="delete" /> </Elements> Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by javascript). And there may be an Action="Delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No or if no one's bothered to say Yes yet. There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as: ElementID 2284 ElementID1 1234 ElementID2 3827 Having multiple children in one relationship isn't currently supported, but it would be nice later. 
Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.Net at some point. A persistence engine like Linq or nHibernate is probably not acceptable right now--I just want to get this already working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle unicode characters as well as very long strings (10k +). UPDATE I have had this working for some time and I used the ADO Recordset Save-To-Stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code works to handle any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements: Existing data elements Example: input name e12345_a678 (element 12345, attribute 678), the input value is the value of the attribute. New elements Javascript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items. var newid = 0; function metadataAdd(reference, nameid, value) { var t = document.createElement('input'); t.setAttribute('name', nameid); t.setAttribute('id', nameid); t.setAttribute('type', 'hidden'); t.setAttribute('value', value); reference.appendChild(t); } function multiAdd(target, parentelementid, attrid, elementtypeid) { var proto = document.getElementById('a' + attrid + '_proto'); var instance = document.createElement('p'); target.parentNode.parentNode.insertBefore(instance, target.parentNode); var thisid = ++newid; instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_'); instance.id = 'n' + thisid; instance.className += ' new'; metadataAdd(instance, 'n' + thisid + '_p', parentelementid); metadataAdd(instance, 'n' + thisid + '_c', attrid); metadataAdd(instance, 'n' + thisid + '_t', elementtypeid); return false; } Example: Template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). all attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created: n1_t, value is the elementtype of the element to be created n1_p, value is the parent id of the element (if it is a relationship) n1_c, value is the child id of the element (if it is a relationship) Deleting elements A hidden input is created in the form e12345_t with value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete. With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly. 
When the form is posted, here's a sample of building one of the two recordsets used (classic ASP code): Set Data = Server.CreateObject("ADODB.Recordset") Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull Data.CursorLocation = adUseClient Data.CursorType = adOpenDynamic Data.Open This is the recordset for values, the other is for the elements themselves. I step through the posted form and for the element recordset use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added as negative to distinguish them from regular elements (rather than requiring a separate column to indicate if it is new or addresses an existing element). I use regular expression to tear apart the form keys: "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$" Then, adding an attribute looks like this. Data.AddNew ElementID.Value = DataID AttrID.Value = Integerize(Matches(0).SubMatches(3)) AttrValue.Value = Request.Form(Key) Data.Update ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so: Set Cmd = Server.CreateObject("ADODB.Command") With Cmd Set .ActiveConnection = MyDBConn .CommandType = adCmdStoredProc .CommandText = "DataPost" .Prepared = False .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element)) .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data)) End With Result.Open Cmd ' previously created recordset object with options set Here's the function that does the xml conversion: Private Function XMLFromRecordset(Recordset) Dim Stream Set Stream = Server.CreateObject("ADODB.Stream") Stream.Open Recordset.Save Stream, adPersistXML Stream.Position = 0 XMLFromRecordset = Stream.ReadText End Function Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346 for example). Here are some key snippets from the stored procedure. 
Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon: CREATE PROCEDURE [dbo].[DataPost] @ElementMetaData ntext, @ElementData ntext AS DECLARE @hdoc int --- snip --- EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData, '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />' INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2) SELECT * FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0) WITH ( ElementID int, ElementTypeID int, ElementID1 int, ElementID2 int ) ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation EXEC sp_xml_removedocument @hdoc --- snip --- UPDATE E SET E.ElementTypeID = M.ElementTypeID FROM Element E INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1 The following query does the correlation of the negative new element ids to the newly inserted ones: UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows SET NewElementID = Scope_Identity() - @@RowCount + DataID WHERE ElementID < 0 Other set-based queries do all the other work of validating that the attributes are allowed, are the correct data type, and inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with 2 inputs was also much easier than sticking to some ideal about having everything in a single XML stream.

    Read the article

  • Convert Java program to C

    - by imicrothinking
    I need a bit of guidance with writing a C program...a bit of quick background as to my level, I've programmed in Java previously, but this is my first time programming in C, and we've been tasked to translate a word count program from Java to C that consists of the following: Read a file from memory Count the words in the file For each occurrence of a unique word, keep a word counter variable Print out the top ten most frequent words and their corresponding occurrences Here's the source program in Java: package lab0; import java.io.File; import java.io.FileReader; import java.util.ArrayList; import java.util.Calendar; import java.util.Collections; public class WordCount { private ArrayList<WordCountNode> outputlist = null; public WordCount(){ this.outputlist = new ArrayList<WordCountNode>(); } /** * Read the file into memory. * * @param filename name of the file. * @return content of the file. * @throws Exception if the file is too large or other file related exception. */ public char[] readFile(String filename) throws Exception{ char [] result = null; File file = new File(filename); long size = file.length(); if (size > Integer.MAX_VALUE){ throw new Exception("File is too large"); } result = new char[(int)size]; FileReader reader = new FileReader(file); int len, offset = 0, size2read = (int)size; while(size2read > 0){ len = reader.read(result, offset, size2read); if(len == -1) break; size2read -= len; offset += len; } return result; } /** * Make article word by word. * * @param article the content of file to be counted. * @return string contains only letters and "'". */ private enum SPLIT_STATE {IN_WORD, NOT_IN_WORD}; /** * Go through article, find all the words and add to output list * with their count. * * @param article the content of the file to be counted. * @return words in the file and their counts. */ public ArrayList<WordCountNode> countWords(char[] article){ SPLIT_STATE state = SPLIT_STATE.NOT_IN_WORD; if(null == article) return null; char curr_ltr; int curr_start = 0; for(int i = 0; i < article.length; i++){ curr_ltr = Character.toUpperCase( article[i]); if(state == SPLIT_STATE.IN_WORD){ article[i] = curr_ltr; if ((curr_ltr < 'A' || curr_ltr > 'Z') && curr_ltr != '\'') { article[i] = ' '; //printf("\nthe word is %s\n\n",curr_start); if(i - curr_start < 0){ System.out.println("i = " + i + " curr_start = " + curr_start); } addWord(new String(article, curr_start, i-curr_start)); state = SPLIT_STATE.NOT_IN_WORD; } }else{ if (curr_ltr >= 'A' && curr_ltr <= 'Z') { curr_start = i; article[i] = curr_ltr; state = SPLIT_STATE.IN_WORD; } } } return outputlist; } /** * Add the word to output list. */ public void addWord(String word){ int pos = dobsearch(word); if(pos >= outputlist.size()){ outputlist.add(new WordCountNode(1L, word)); }else{ WordCountNode tmp = outputlist.get(pos); if(tmp.getWord().compareTo(word) == 0){ tmp.setCount(tmp.getCount() + 1); }else{ outputlist.add(pos, new WordCountNode(1L, word)); } } } /** * Search the output list and return the position to put word. * @param word is the word to be put into output list. * @return position in the output list to insert the word. 
*/ public int dobsearch(String word){ int cmp, high = outputlist.size(), low = -1, next; // Binary search the array to find the key while (high - low > 1) { next = (high + low) / 2; // all in upper case cmp = word.compareTo((outputlist.get(next)).getWord()); if (cmp == 0) return next; else if (cmp < 0) high = next; else low = next; } return high; } public static void main(String args[]){ // handle input if (args.length == 0){ System.out.println("USAGE: WordCount <filename> [Top # of results to display]\n"); System.exit(1); } String filename = args[0]; int dispnum; try{ dispnum = Integer.parseInt(args[1]); }catch(Exception e){ dispnum = 10; } long start_time = Calendar.getInstance().getTimeInMillis(); WordCount wordcount = new WordCount(); System.out.println("Wordcount: Running..."); // read file char[] input = null; try { input = wordcount.readFile(filename); } catch (Exception e) { // TODO Auto-generated catch block e.printStackTrace(); System.exit(1); } // count all word ArrayList<WordCountNode> result = wordcount.countWords(input); long end_time = Calendar.getInstance().getTimeInMillis(); System.out.println("wordcount: completed " + (end_time - start_time)/1000000 + "." + (end_time - start_time)%1000000 + "(s)"); System.out.println("wordsort: running ..."); start_time = Calendar.getInstance().getTimeInMillis(); Collections.sort(result); end_time = Calendar.getInstance().getTimeInMillis(); System.out.println("wordsort: completed " + (end_time - start_time)/1000000 + "." + (end_time - start_time)%1000000 + "(s)"); Collections.reverse(result); System.out.println("\nresults (TOP "+ dispnum +" from "+ result.size() +"):\n" ); // print out result String str ; for (int i = 0; i < result.size() && i < dispnum; i++){ if(result.get(i).getWord().length() > 15) str = result.get(i).getWord().substring(0, 14); else str = result.get(i).getWord(); System.out.println(str + " - " + result.get(i).getCount()); } } public class WordCountNode implements Comparable{ private String word; private long count; public WordCountNode(long count, String word){ this.count = count; this.word = word; } public String getWord() { return word; } public void setWord(String word) { this.word = word; } public long getCount() { return count; } public void setCount(long count) { this.count = count; } public int compareTo(Object arg0) { // TODO Auto-generated method stub WordCountNode obj = (WordCountNode)arg0; if( count - obj.getCount() < 0) return -1; else if( count - obj.getCount() == 0) return 0; else return 1; } } } Here's my attempt (so far) in C: #include <stdio.h> #include <stdlib.h> #include <stdbool.h> #include <string.h> // Read in a file FILE *readFile (char filename[]) { FILE *inputFile; inputFile = fopen (filename, "r"); if (inputFile == NULL) { printf ("File could not be opened.\n"); exit (EXIT_FAILURE); } return inputFile; } // Return number of words in an array int wordCount (FILE *filePointer, char filename[]) {//, char *words[]) { // count words int count = 0; char temp; while ((temp = getc(filePointer)) != EOF) { //printf ("%c", temp); if ((temp == ' ' || temp == '\n') && (temp != '\'')) count++; } count += 1; // counting method uses space AFTER last character in word - the last space // of the last character isn't counted - off by one error // close file fclose (filePointer); return count; } // Print out the frequencies of the 10 most frequent words in the console int main (int argc, char *argv[]) { /* Step 1: Read in file and check for errors */ FILE *filePointer; filePointer = readFile (argv[1]); /* Step 
2: Do a word count to prep for array size */ int count = wordCount (filePointer, argv[1]); printf ("Number of words is: %i\n", count); /* Step 3: Create a 2D array to store words in the file */ // open file to reset marker to beginning of file filePointer = fopen (argv[1], "r"); // store words in character array (each element in array = consecutive word) char allWords[count][100]; // 100 is an arbitrary size - max length of word int i,j; char temp; for (i = 0; i < count; i++) { for (j = 0; j < 100; j++) { // labels are used with goto statements, not loops in C temp = getc(filePointer); if ((temp == ' ' || temp == '\n' || temp == EOF) && (temp != '\'') ) { allWords[i][j] = '\0'; break; } else { allWords[i][j] = temp; } printf ("%c", allWords[i][j]); } printf ("\n"); } // close file fclose (filePointer); /* Step 4: Use a simple selection sort algorithm to sort 2D char array */ // PStep 1: Compare two char arrays, and if // (a) c1 > c2, return 2 // (b) c1 == c2, return 1 // (c) c1 < c2, return 0 qsort(allWords, count, sizeof(char[][]), pstrcmp); /* int k = 0, l = 0, m = 0; char currentMax, comparedElement; int max; // the largest element in the current 2D array int elementToSort = 0; // elementToSort determines the element to swap with starting from the left // Outer a iterates through number of swaps needed for (k = 0; k < count - 1; k++) { // times of swaps max = k; // max element set to k // Inner b iterates through successive elements to fish out the largest element for (m = k + 1; m < count - k; m++) { currentMax = allWords[k][l]; comparedElement = allWords[m][l]; // Inner c iterates through successive chars to set the max vars to the largest for (l = 0; (currentMax != '\0' || comparedElement != '\0'); l++) { if (currentMax > comparedElement) break; else if (currentMax < comparedElement) { max = m; currentMax = allWords[m][l]; break; } else if (currentMax == comparedElement) continue; } } // After max (count and string) is determined, perform swap with temp variable char swapTemp[1][20]; int y = 0; do { swapTemp[0][y] = allWords[elementToSort][y]; allWords[elementToSort][y] = allWords[max][y]; allWords[max][y] = swapTemp[0][y]; } while (swapTemp[0][y++] != '\0'); elementToSort++; } */ int a, b; for (a = 0; a < count; a++) { for (b = 0; (temp = allWords[a][b]) != '\0'; b++) { printf ("%c", temp); } printf ("\n"); } // Copy rows to different array and print results /* char arrayCopy [count][20]; int ac, ad; char tempa; for (ac = 0; ac < count; ac++) { for (ad = 0; (tempa = allWords[ac][ad]) != '\0'; ad++) { arrayCopy[ac][ad] = tempa; printf("%c", arrayCopy[ac][ad]); } printf("\n"); } */ /* Step 5: Create two additional arrays: (a) One in which each element contains unique words from char array (b) One which holds the count for the corresponding word in the other array */ /* Step 6: Sort the count array in decreasing order, and print the corresponding array element as well as word count in the console */ return 0; } // Perform housekeeping tasks like freeing up memory and closing file I'm really stuck on the selection sort algorithm. I'm currently using 2D arrays to represent strings, and that worked out fine, but when it came to sorting, using three level nested loops didn't seem to work, I tried to use qsort instead, but I don't fully understand that function as well. Constructive feedback and criticism greatly welcome (...and needed)!
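    One concrete point on the qsort attempt: sizeof(char[][]) is not valid C, because an incomplete array type has no size. For a contiguous 2D array such as allWords[count][100], the element size is sizeof allWords[0] (one 100-byte row), and qsort hands the comparator pointers to those rows, which can be compared directly with strcmp. A minimal standalone sketch (just the sorting technique, not the full word-count program):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* qsort passes the comparator pointers to array elements; each element
       here is a char[100] row, so the void pointers point at the strings. */
    static int compareRows(const void *a, const void *b)
    {
        return strcmp((const char *)a, (const char *)b);
    }

    int main(void)
    {
        char allWords[][100] = { "pear", "apple", "orange", "banana" };
        size_t count = sizeof allWords / sizeof allWords[0];

        /* element size is the row size: sizeof allWords[0] == 100 bytes */
        qsort(allWords, count, sizeof allWords[0], compareRows);

        for (size_t i = 0; i < count; i++)
            printf("%s\n", allWords[i]);
        return 0;
    }

    Reversing the comparator's arguments (strcmp applied to b then a) yields a descending sort, which is the same idea the final frequency ranking needs.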

    Read the article

  • Microsoft and jQuery

    - by Rick Strahl
    The jQuery JavaScript library has been steadily getting more popular and, with recent developments from Microsoft, jQuery is also getting ever more exposure on the ASP.NET platform, including now directly from Microsoft. jQuery is a lightweight, open source DOM manipulation library for JavaScript that has changed how many developers think about JavaScript. You can download it and find more information on jQuery at www.jquery.com. For me jQuery has had a huge impact on how I develop Web applications and was probably the main reason I went from dreading JavaScript development to actually looking forward to implementing client side JavaScript functionality. It has also had a profound impact on my JavaScript skill level, by letting me see how the library accomplishes things (and often reviewing the terse but excellent source code). jQuery made an uncomfortable development platform (JavaScript + DOM) a joy to work on. Although jQuery is by no means the only JavaScript library out there, its ease of use, small size, huge community of plug-ins and pure usefulness have made it easily the most popular JavaScript library available today. As a long-time jQuery user, I’ve been excited to see the developments from Microsoft that are bringing jQuery to more ASP.NET developers and providing more integration with jQuery for ASP.NET’s core features rather than relying on the ASP.NET AJAX library. Microsoft and jQuery – making Friends jQuery is an open source project but in the last couple of years Microsoft has really thrown its weight behind supporting this open source library as a supported component on the Microsoft platform. When I say supported I literally mean supported: Microsoft now offers actual tech support for jQuery as part of their Product Support Services (PSS), as jQuery integration has become part of several of the ASP.NET toolkits and ships in several of the default Web project templates in Visual Studio 2010. The ASP.NET MVC 3 framework (still in Beta) also uses jQuery for a variety of client side support features including client side validation, and we can look forward to more integration of client side functionality via jQuery in both MVC and WebForms in the future. In other words jQuery is becoming an optional but included component of the ASP.NET platform. PSS support means that support staff will answer jQuery-related support questions as part of any support incidents related to ASP.NET, which provides some peace of mind to corporate development shops that require end-to-end support from Microsoft. In addition to including jQuery and supporting it, Microsoft has also been getting involved in providing development resources for extending jQuery’s functionality via plug-ins. Microsoft’s last version of the Microsoft Ajax Library – which is the successor to the native ASP.NET AJAX Library – included some really cool functionality for client templates, databinding and localization. As it turns out Microsoft has rebuilt most of that functionality using jQuery as the base API and provided jQuery plug-ins of these components. Very recently these three plug-ins were submitted and approved for inclusion in the official jQuery plug-in repository, and they have been taken over by the jQuery team for further improvements and maintenance.
Even more surprising: The jQuery-templates component has actually been approved for inclusion in the next major update of the jQuery core in jQuery V1.5, which means it will become a native feature that doesn’t require additional script files to be loaded. Imagine this – an open source contribution from Microsoft that has been accepted into a major open source project for a core feature improvement. Microsoft has come a long way indeed! What the Microsoft Involvement with jQuery means to you For Microsoft jQuery support is a strategic decision that affects their direction in client side development, but nothing stopped you from using jQuery in your applications prior to Microsoft’s official backing, and in fact a large chunk of developers did so readily prior to Microsoft’s announcement. Official support from Microsoft brings a few benefits to developers however. jQuery support in Visual Studio 2010 means built-in support for jQuery IntelliSense, automatically added jQuery scripts in many project types, and a common base for client side functionality that actually uses what most developers are already using. If you have already been using jQuery and were worried about straying from the Microsoft line and their internal Microsoft Ajax Library – worry no more. With official support and the change in direction towards jQuery, Microsoft is now following along with what most in the ASP.NET community had already been doing by using jQuery, which is likely the reason for Microsoft’s shift in direction in the first place. ASP.NET AJAX and the Microsoft AJAX Library weren’t bad technology – there was tons of useful functionality buried in these libraries. However, these libraries never got off the ground, mainly because early incarnations were squarely aimed at control/component developers rather than application developers. For all the functionality that these controls provided for control developers, they lacked useful and easily usable application developer functionality that was easily accessible in day-to-day client side development. The result was that even though Microsoft shipped support for these tools in the box (in .NET 3.5 and 4.0), other than the internal support in ASP.NET for things like the UpdatePanel and the ASP.NET AJAX Control Toolkit, as well as some third party vendors, the Microsoft client libraries were largely ignored by the developer community, opening the door for other client side solutions. Microsoft seems to be acknowledging developer choice in this case: Many more developers were going down the jQuery path rather than using the Microsoft-built libraries, and there seems to be little sense in continuing development of a technology that largely goes unused by the majority of developers. Kudos to Microsoft for recognizing this and gracefully changing directions. Note that even though there will be no further development in the Microsoft client libraries they will continue to be supported, so if you’re using them in your applications there’s no reason to start running for the exit in a panic and start re-writing everything with jQuery. Although that might be a reasonable choice in some cases, jQuery and the Microsoft libraries work well side by side, so you can leave existing solutions untouched even as you enhance them with jQuery.
The Microsoft jQuery Plug-ins – Solid Core Features One of the most interesting developments in Microsoft’s embracing of jQuery is that Microsoft has started contributing to jQuery via the standard mechanism set up for jQuery developers: by submitting plug-ins. Microsoft took some of the nicest new features of the unpublished Microsoft Ajax Client Library, re-wrote these components for jQuery, and then submitted them as plug-ins to the jQuery plug-in repository. Accepted plug-ins get taken over by the jQuery team, and that’s exactly what happened with the three plug-ins submitted by Microsoft, with the templating plug-in even getting slated to be published as part of the jQuery core in the next major release (1.5). The following plug-ins are provided by Microsoft: jQuery Templates – a client side template rendering engine jQuery Data Link – a client side databinder that can synchronize changes without code jQuery Globalization – provides formatting and conversion features for dates and numbers The first two are ports of functionality that was slated for the Microsoft Ajax Library, while the globalization library provides functionality that was already found in the original ASP.NET AJAX library. To me all three plug-ins address a pressing need in client side applications and provide functionality I’ve previously used in other incarnations, but with more complete implementations. Let’s take a close look at these plug-ins. jQuery Templates http://api.jquery.com/category/plugins/templates/ Client side templating is a key component for building rich JavaScript applications in the browser. Templating on the client lets you avoid manually creating markup by creating DOM nodes and injecting them individually into the document via code. Rather, you can create markup templates – similar to the way you create classic ASP server markup – and merge data into these templates to render HTML, which you can then inject into the document or replace existing content with. Output from templates is rendered as a jQuery matched set and can then be easily inserted into the document as needed. Templating is key to minimizing client side code and reducing repeated code for rendering logic. Instead a single template can be used in many places for updating and adding content to existing pages. Further, if you build pure AJAX interfaces that rely entirely on client rendering of the initial page content, templates allow you to use a single markup template to handle all rendering of each specific HTML section/element. I’ve used a number of different client rendering template engines with jQuery in the past, including jTemplates (a PHP style templating engine) and a modified version of John Resig’s MicroTemplating engine which I built into my own set of libraries because it’s such a commonly used feature in my client side applications. jQuery Templates adds a much richer templating model that allows for sub-templates and access to the data items. Like John Resig’s original Micro Template engine, the core of the templating engine creates JavaScript code, which means that templates can include JavaScript code. To give you a basic idea of how templates work, imagine I have an application that downloads a set of stock quotes based on a symbol list then displays them in the document. 
To do this you can create an ‘item’ template that describes how each of the quotes is rendered as a template inside of the document: <script id="stockTemplate" type="text/x-jquery-tmpl"> <div id="divStockQuote" class="errordisplay" style="width: 500px;"> <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div> <div class="label">Last Price:</div><div>${LastPrice}</div> <div class="label">Net Change:</div><div> {{if NetChange > 0}} <b style="color:green" >${NetChange}</b> {{else}} <b style="color:red" >${NetChange}</b> {{/if}} </div> <div class="label">Last Update:</div><div>${LastQuoteTimeString}</div> </div> </script> The ‘template’ is little more than HTML with some markup expressions inside of it that define the template language. Notice the embedded ${} expressions which reference data from the quote objects returned from an AJAX call on the server. You can embed any JavaScript or value expression in these template expressions. There are also a number of structural commands like {{if}} and {{each}} that provide for rudimentary logic inside of your templates as well as commands ({{tmpl}} and {{wrap}}) for nesting templates. You can find out more about the full set of markup expressions available in the documentation. To load up this data you can use code like the following: <script type="text/javascript"> //var Proxy = new ServiceProxy("../PageMethods/PageMethodsService.asmx/"); $(document).ready(function () { $("#btnGetQuotes").click(GetQuotes); }); function GetQuotes() { var symbols = $("#txtSymbols").val().split(","); $.ajax({ url: "../PageMethods/PageMethodsService.asmx/GetStockQuotes", data: JSON.stringify({ symbols: symbols }), // parameter map type: "POST", // data has to be POSTed contentType: "application/json", timeout: 10000, dataType: "json", success: function (result) { var quotes = result.d; var jEl = $("#stockTemplate").tmpl(quotes); $("#quoteDisplay").empty().append(jEl); }, error: function (xhr, status) { alert(status + "\r\n" + xhr.responseText); } }); }; </script> In this case an ASMX AJAX service is called to retrieve the stock quotes. The service returns an array of quote objects. The result is returned as an object with the .d property (in Microsoft service style) that returns the actual array of quotes. The template is applied with: var jEl = $("#stockTemplate").tmpl(quotes); which selects the template script tag and uses the .tmpl() function to apply the data to it. The result is a jQuery matched set of elements that can then be appended to the quote display element in the page. The template is merged against an array in this example. When the result is an array the template is automatically applied to each array item. If you pass a single data item – like say a stock quote – the template works exactly the same way but is applied only once. Templates also have access to a $data item which provides the current data item and information about the template that is currently executing. This makes it possible to keep context within the context of the template itself and also to pass context from a parent template to a child template which is very powerful. Templates can be evaluated by using the template selector and calling the .tmpl() function on the jQuery matched set as shown above or you can use the static $.tmpl() function to provide a template as a string. This allows you to dynamically create templates in code or – more likely – to load templates from the server via AJAX calls. 
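As a rough sketch of that load-from-server pattern (the template URL here is hypothetical, and quotes stands in for the data array from the example above), the markup string can be handed straight to the static function:

    $.get("templates/stockitem.tmpl", function (markup) {
        // $.tmpl(markup, data) compiles the string and renders it against the
        // data in one step, returning a jQuery matched set just like .tmpl()
        var jEl = $.tmpl(markup, quotes);
        $("#quoteDisplay").empty().append(jEl);
    }, "text");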
In short, there are options. The above shows off some of the basics, but there's much more functionality available in the template engine. Check the documentation link for more information and links to additional examples. The plug-in download also comes with a number of examples that demonstrate functionality. jQuery Templates will become a native component in jQuery core 1.5, so it's definitely worthwhile checking out the engine today and getting familiar with this interface.

As much as I'm stoked about templating becoming part of the jQuery core – because it's such an integral part of many applications – there are a couple of shortcomings in the current incarnation:

Lack of Error Handling
Currently if you embed an expression that is invalid, it's simply not rendered. No error is rendered into the template, nor do the various template functions throw errors, which leaves finding bugs as a runtime exercise. I would like some mechanism – optional if possible – for getting error information about what is failing in a template when it's rendered.

No String Output
Templates are always rendered into a jQuery matched set and there's no way that I can see to render directly to a string. String output can be useful for debugging, as well as for opening up templating to non-HTML string output.

Limited JavaScript Access
Unlike John Resig's original MicroTemplating engine, which was entirely based on JavaScript code generation, these templates are limited to the few structural commands that can 'execute'. There's no arbitrary script execution inside of templates, which means you're limited to calling expressions available on global objects or on the data item passed in. This may or may not be a big deal depending on the complexity of your template logic.

Error handling has been discussed quite a bit and it's likely there will be some solution to that particular issue by the time jQuery Templates ships. The others are relatively minor issues, but something to think about anyway.

jQuery Data Link
http://api.jquery.com/category/plugins/data-link/

jQuery Data Link provides the ability to do two-way data binding between input controls and an underlying object's properties. The typical scenario is linking a textbox to a property of an object, having the object updated when the text in the textbox changes, and having the textbox change when the value in the object – or the entire object – changes. The plug-in also supports converter functions that provide the conversion logic from string to some other value, which is typically necessary for mapping textbox string input to, say, a number property, potentially applying additional formatting and calculations. In theory this sounds great; in reality, however, this plug-in has some serious usability issues.

Using the plug-in you can do things like the following to bind data:

person = { firstName: "rick", lastName: "strahl" };

$(document).ready(function () {
    // provide for two-way linking of inputs
    $("form").link(person);

    // bind to non-input elements explicitly
    $("#objFirst").link(person, {
        firstName: {
            name: "objFirst",
            convertBack: function (value, source, target) {
                $(target).text(value);
            }
        }
    });
    $("#objLast").link(person, {
        lastName: {
            name: "objLast",
            convertBack: function (value, source, target) {
                $(target).text(value);
            }
        }
    });
});

This code hooks up two-way linking between a couple of textboxes on the page and the person object.
The first line in the .ready() handler provides mapping of the object to form fields with the same field names as the properties on the object. Note that .link() does NOT bind item values into the textboxes when you call .link() – changes are mapped only when values change and you move out of the field. Strike one. The two following commands allow manual binding of values to specific DOM elements, which is effectively a one-way bind. You specify the object and then an explicit mapping, where name is an ID in the document. The converter is required to explicitly assign the value to the element. Strike two.

You can also detect changes to the underlying object and cause updates to the bound input elements. Unfortunately the syntax to do this is not very natural, as you have to rely on the jQuery data object. Updating an object's properties with change notification looks like this:

function updateFirstName() {
    $(person).data("firstName", person.firstName + " (code updated)");
}

This works fine in causing any linked fields to be updated – in the bindings above, both the firstName input field and the objFirst DOM element get updated. But the syntax requires you to use a jQuery .data() call for each property change to ensure that the changes are tracked properly. Really? Sure, you're binding through multiple layers of abstraction now, but how is that better than just manually assigning values? The code savings (if any) are going to be minimal.

As much as I would like to have a WPF/Silverlight/Observable-like binding mechanism in client script, this plug-in doesn't help much towards that goal in its current incarnation. While you can bind values, the 'binder' is too limited to be really useful. If initial values can't be assigned from the mappings, you're going to end up duplicating the work of loading the data using some other mechanism. There's no easy way to re-bind a different object altogether, since updates trigger only through the .data members. Finally, any non-input elements have to be bound via code that's fairly verbose and frankly may be more voluminous than what you might write by hand for manual binding and unbinding.

Two-way binding can be very useful, but it has to be easy and, most importantly, natural. If it's more work to hook up a binding than to write a couple of lines for binding/unbinding, this sort of thing helps very little in most scenarios. In talking to some of the developers, the feature set for Data Link is not complete and they are still soliciting input for features and functionality. If you have ideas on how you want this feature to be more useful, get involved and post your recommendations. As it stands, it looks to me like this component needs a lot of love to become useful. For this component to really provide value, bindings need to be refreshable easily and need to work at the object level, not just the property level. It seems to me we would be much better served by a model binder object that can perform these binding/unbinding tasks in bulk, rather than a tool where each link has to be mapped first. I also find the choice of creating a jQuery plug-in questionable – it seems a standalone object, albeit one that relies on the jQuery library, would provide a more intuitive interface than the current forcing of options onto a plug-in style interface. Out of the three Microsoft-created components, this is by far the least useful and least polished implementation at this point.
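That said, for anyone experimenting with it anyway, here's roughly what an input-side converter looks like – a sketch assuming a convert option symmetrical to the convertBack shown earlier (per the plug-in documentation; the element ID and the product object are hypothetical):

// Map a textbox's string input to a numeric property,
// converting on the way into the object.
$("#txtPrice").link(product, {
    price: {
        convert: function (value) {
            // parse the user's string input into a number
            return parseFloat(value) || 0;
        }
    }
});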
jQuery Globalization
http://github.com/jquery/jquery-global

Globalization in JavaScript applications often gets short shrift, and part of the reason for this is that JavaScript natively has little support for formatting and parsing numbers and dates. There are a number of JavaScript libraries out there that provide some support for globalization, but most are limited to a particular portion of it. As .NET developers we're fairly spoiled by the richness of the APIs provided in the framework, and when dealing with client development one really notices the lack of these features.

Even if you don't need to localize your application, the globalization plug-in helps with a basic task for non-localized applications: formatting and parsing dates and time values. Dates in particular are problematic in JavaScript, as there are no formatters whatsoever except the .toString() method, which outputs a verbose and next to useless long string. With the globalization plug-in you get a good chunk of the formatting and parsing functionality that the .NET framework provides on the server. You can write code like the following, for example, to format numbers and dates:

var date = new Date();
var output = $.format(date, "MMM. dd, yy") + "\r\n" +
             $.format(date, "d") + "\r\n" +          // 10/25/2010
             $.format(1222.32213, "N2") + "\r\n" +
             $.format(1222.33, "c") + "\r\n";
alert(output);

This becomes even more useful if you combine it with templates, which can also include any JavaScript expressions. Assuming the globalization plug-in is loaded, you can create template expressions that use the $.format function. Here's the template I used earlier for the stock quote again, with a couple of formats applied:

<script id="stockTemplate" type="text/x-jquery-tmpl">
    <div id="divStockQuote" class="errordisplay" style="width: 500px;">
        <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div>
        <div class="label">Last Price:</div>
        <div>${$.format(LastPrice,"N2")}</div>
        <div class="label">Net Change:</div><div>
            {{if NetChange > 0}}
            <b style="color:green" >${NetChange}</b>
            {{else}}
            <b style="color:red" >${NetChange}</b>
            {{/if}}
        </div>
        <div class="label">Last Update:</div>
        <div>${$.format(LastQuoteTime,"MMM dd, yyyy")}</div>
    </div>
</script>

There are also parsing methods that can easily parse dates and numbers from strings:

alert($.parseDate("25.10.2010"));
alert($.parseInt("12.222"));   // de-DE uses . for thousands separators

As you can see, culture-specific options are taken into account when parsing. The globalization plug-in provides rich support for a variety of locales:

- get a list of all available cultures
- query cultures for culture items (like currency symbol, separators etc.)
- localized string names for all calendar-related items (days of week, months)
- generated off of .NET's supported locales

In short, you get much of the same functionality that you might already be using in .NET on the server side. The plug-in includes a huge number of locales, a Globalization.all.min.js file that contains the text defaults for each of these locales, and small locale-specific script files that define each locale's settings. It's highly recommended that you NOT use the huge globalization file that includes all locales, but rather add script references to only those languages you explicitly care about. Overall this plug-in is a welcome helper.
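As a minimal sketch of what locale selection looks like – this assumes the $.preferCulture API and the locale script naming the plug-in used at the time of writing, so verify both against your copy of the plug-in:

// Load the core and one locale file first, e.g.:
//   jquery.glob.js and jquery.glob.de-DE.js
$.preferCulture("de-DE");        // switch the active culture
alert($.format(1222.33, "c"));   // currency with German separators and symbol
alert($.parseInt("12.222"));     // 12222 – "." treated as a thousands separator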
Even if you use it with a single locale (like en-US) and do no other localization, you'll gain solid support for number and date formatting, which is a vital feature of many applications.

Changes for Microsoft

It's good to see Microsoft coming out of its shell and away from the 'not-built-here' mentality that has been so pervasive in the past. It's especially good to see it applied to jQuery – a technology that has stood in drastic contrast to Microsoft's own internal efforts in terms of design, usage model and… popularity. It's great to see that Microsoft is paying attention to what customers prefer to use and supporting the customer sentiment – even if it meant drastically changing course on policy and moving into a more open and sharing environment in the process. The additional jQuery support that has been introduced in the last two years certainly has made life easier for many developers on the ASP.NET platform.

It's also nice to see Microsoft submitting proposals through the standard jQuery plug-in process and getting them accepted for various very useful projects. Certainly the jQuery Templates plug-in is going to be very useful to many, especially since it will be baked into the jQuery core in jQuery 1.5. I hope we see more of this type of involvement from Microsoft in the future. Kudos!

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in jQuery  ASP.NET

    Read the article

  • Rendering ASP.NET MVC Views to String

    - by Rick Strahl
It's not uncommon in my applications that I require longish text output that does not have to be rendered into the HTTP output stream. The most common scenario I have for 'template driven' non-Web text is emails of all sorts: logon confirmations and verifications, email confirmations for things like orders, status updates or scheduler notifications – all of which require merged text output both within and sometimes outside of Web applications. On other occasions I also need to capture the output from certain views for logging purposes.

Rather than creating text output in code, it's much nicer to use the rendering mechanism that ASP.NET MVC already provides by way of its ViewEngines – using Razor or WebForms views – to render output to a string. This is nice because it uses the same familiar rendering mechanism that I already use for my HTTP output, and it also solves the problem of where to store the templates for rendering this content: in nothing more than perhaps a separate view folder. The good news is that ASP.NET MVC's rendering engine is much more modular than the full ASP.NET runtime engine, which was a real pain in the butt to coerce into rendering output to string. With MVC the rendering engine has been separated out from the core ASP.NET runtime, so it's actually a lot easier to get View output into a string.

Getting View Output from within an MVC Application

If you need to generate string output from an MVC view and pass some model data to it, the process to capture this output is fairly straightforward and involves only a handful of lines of code. The catch is that this particular approach requires an active ControllerContext that can be passed to the view, which limits the approach to use from within Controller methods. Here's a class that wraps the process and provides both instance and static methods to handle the rendering:

/// <summary>
/// Class that renders MVC views to a string using the
/// standard MVC View Engine to render the view.
///
/// Note: This class can only be used within MVC
/// applications that have an active ControllerContext.
/// </summary>
public class ViewRenderer
{
    /// <summary>
    /// Required Controller Context
    /// </summary>
    protected ControllerContext Context { get; set; }

    public ViewRenderer(ControllerContext controllerContext)
    {
        Context = controllerContext;
    }

    /// <summary>
    /// Renders a full MVC view to a string. Will render with the full MVC
    /// View engine including running _ViewStart and merging into _Layout
    /// </summary>
    /// <param name="viewPath">
    /// The path to the view to render. Either in same controller, shared by
    /// name or as fully qualified ~/ path including extension
    /// </param>
    /// <param name="model">The model to render the view with</param>
    /// <returns>String of the rendered view or null on error</returns>
    public string RenderView(string viewPath, object model)
    {
        return RenderViewToStringInternal(viewPath, model, false);
    }

    /// <summary>
    /// Renders a partial MVC view to string. Use this method to render
    /// a partial view that doesn't merge with _Layout and doesn't fire
    /// _ViewStart.
    /// </summary>
    /// <param name="viewPath">
    /// The path to the view to render. Either in same controller, shared by
    /// name or as fully qualified ~/ path including extension
    /// </param>
    /// <param name="model">The model to pass to the viewRenderer</param>
    /// <returns>String of the rendered view or null on error</returns>
    public string RenderPartialView(string viewPath, object model)
    {
        return RenderViewToStringInternal(viewPath, model, true);
    }

    public static string RenderView(string viewPath, object model,
                                    ControllerContext controllerContext)
    {
        ViewRenderer renderer = new ViewRenderer(controllerContext);
        return renderer.RenderView(viewPath, model);
    }

    public static string RenderPartialView(string viewPath, object model,
                                           ControllerContext controllerContext)
    {
        ViewRenderer renderer = new ViewRenderer(controllerContext);
        return renderer.RenderPartialView(viewPath, model);
    }

    protected string RenderViewToStringInternal(string viewPath, object model,
                                                bool partial = false)
    {
        // first find the ViewEngine for this view
        ViewEngineResult viewEngineResult = null;
        if (partial)
            viewEngineResult = ViewEngines.Engines.FindPartialView(Context, viewPath);
        else
            viewEngineResult = ViewEngines.Engines.FindView(Context, viewPath, null);

        if (viewEngineResult == null)
            throw new FileNotFoundException(Properties.Resources.ViewCouldNotBeFound);

        // get the view and attach the model to view data
        var view = viewEngineResult.View;
        Context.Controller.ViewData.Model = model;

        string result = null;
        using (var sw = new StringWriter())
        {
            var ctx = new ViewContext(Context, view,
                                      Context.Controller.ViewData,
                                      Context.Controller.TempData,
                                      sw);
            view.Render(ctx, sw);
            result = sw.ToString();
        }
        return result;
    }
}

The key is the RenderViewToStringInternal method. The method first tries to find the view to render based on its path, which can either be in the current controller's view path or the shared view path using its simple name (PasswordRecovery), or alternately by its full virtual path (~/Views/Templates/PasswordRecovery.cshtml). This code should work both for Razor and WebForms views, although I've only tried it with Razor views. Note that WebForms views might actually be better for plain text, as Razor adds all sorts of white space into its output when there are code blocks in the template. The WebForms engine provides more accurate rendering for raw text scenarios.

Once a view engine is found, the view to render can be retrieved. Views in MVC render based on data that comes off the controller, like the ViewData, which contains the model along with the actual ViewData and ViewBag. From the View and some of the Context data, a ViewContext is created, which is then used to render the view. The View picks up the Model and other data from the ViewContext internally and processes the View the same way it would if it were sending its output into the HTTP output stream. The difference is that we override the ViewContext's output stream with one we provide, capturing the output into a StringWriter(). After rendering completes, the result holds the output string.

If an error occurs, the error behavior is similar to what you see with regular MVC errors – you get a full yellow screen of death, including the view error information with the offending line highlighted. It's your responsibility to handle the error – or let it bubble up to your regular Controller error filter if you have one.

To use this simple class you only need a single line of code if you call the static methods.
Here's an example of some Controller code that is used to send a user notification to a customer via email in one of my applications:

[HttpPost]
public ActionResult ContactSeller(ContactSellerViewModel model)
{
    InitializeViewModel(model);

    var entryBus = new busEntry();
    var entry = entryBus.LoadByDisplayId(model.EntryId);

    if (string.IsNullOrEmpty(model.Email))
        entryBus.ValidationErrors.Add("Email address can't be empty.", "Email");
    if (string.IsNullOrEmpty(model.Message))
        entryBus.ValidationErrors.Add("Message can't be empty.", "Message");

    model.EntryId = entry.DisplayId;
    model.EntryTitle = entry.Title;

    if (entryBus.ValidationErrors.Count > 0)
    {
        ErrorDisplay.AddMessages(entryBus.ValidationErrors);
        ErrorDisplay.ShowError("Please correct the following:");
    }
    else
    {
        string message = ViewRenderer.RenderView(
            "~/views/template/ContactSellerEmail.cshtml", model, ControllerContext);
        string title = entry.Title + " (" + entry.DisplayId + ") - " +
                       App.Configuration.ApplicationName;
        AppUtils.SendEmail(title, message, model.Email, entry.User.Email, false, false);
    }
    return View(model);
}

Simple! The view in this case is just a plain MVC view – a very simple plain-text email message (edited for brevity here) that is created and sent off:

@model ContactSellerViewModel
@{
    Layout = null;
}
re: @Model.EntryTitle

@Model.ListingUrl

@Model.Message

** SECURITY ADVISORY - AVOID SCAMS
** Avoid: wiring money, cross-border deals, work-at-home
** Beware: cashier checks, money orders, escrow, shipping
** More Info: @(App.Configuration.ApplicationBaseUrl)scams.html

Obviously this is a very simple view (I edited out more from this page to keep it brief), but other template views are much more complex HTML documents or long messages that are occasionally updated, and they are a perfect fit for Razor rendering. It even works with nested partial views and _layout pages.

Partial Rendering

Notice that I'm rendering a full View here. In the view I explicitly set Layout = null to avoid pulling in _layout.cshtml for this view. This can also be controlled externally by calling the RenderPartialView method instead:

string message = ViewRenderer.RenderPartialView(
    "~/views/template/ContactSellerEmail.cshtml", model, ControllerContext);

With this line of code no layout page (or _viewstart) will be loaded, so the output generated is just what's in the view. I find myself using partials most of the time when rendering templates, since the target of a template usually tends to be an email or other HTML-fragment-like output, so the RenderPartialView() method is definitely useful to me.

Rendering without a ControllerContext

The preceding class is great when you need template rendering from within MVC controller actions or anywhere you have access to the request's Controller. But if you don't have a controller context handy – maybe inside a utility function that is static, in a non-Web application, or in an operation that runs asynchronously in ASP.NET – the above code can't be used. I haven't found a way to manually create a ControllerContext that provides the ViewContext() what it needs from outside of the MVC infrastructure. There are ways to accomplish this, but they are a bit more complex. It's possible to host the RazorEngine on your own, which sidesteps all of the MVC framework and HTTP and just deals with the raw rendering engine. I wrote about this process in Hosting the Razor Engine in Non-Web Applications a long while back.
It's quite a process to create a custom Razor engine and runtime, but it allows for all sorts of flexibility. There's also a RazorEngine CodePlex project that does something similar. I've been meaning to check out the latter but haven't gotten around to it, since I have my own code to do this. The trick to hosting the RazorEngine is to have it behave properly inside of an ASP.NET application and to properly cache content so templates aren't constantly rebuilt and reparsed.

Anyway, in the same app as above I have one scenario where no ControllerContext is available: a background scheduler running inside of the app that fires on timed intervals. This process could be external, but because it's lightweight we decided to fire it right inside of the ASP.NET app on a separate thread. In my app the code that renders these templates does something like this:

var model = new SearchNotificationViewModel()
{
    Entries = entries,
    Notification = notification,
    User = user
};

// TODO: Need logging for errors sending
string razorError = null;
var result = AppUtils.RenderRazorTemplate(
    "~/views/template/SearchNotificationTemplate.cshtml", model, razorError);

which references a couple of helper functions that set up my RazorFolderHostContainer class:

public static string RenderRazorTemplate(string virtualPath, object model,
                                         string errorMessage = null)
{
    var razor = AppUtils.CreateRazorHost();
    var path = virtualPath.Replace("~/", "").Replace("~", "").Replace("/", "\\");
    var merged = razor.RenderTemplateToString(path, model);
    // note: errorMessage is passed by value here, so callers won't see it;
    // check razor.ErrorMessage on the host after a null result instead
    if (merged == null)
        errorMessage = razor.ErrorMessage;
    return merged;
}

/// <summary>
/// Creates a RazorFolderHostContainer and starts it.
/// Call .Stop() when you're done with it.
///
/// This is a static instance
/// </summary>
/// <param name="binBasePath"></param>
/// <param name="forceLoad"></param>
/// <returns></returns>
public static RazorFolderHostContainer CreateRazorHost(string binBasePath = null,
                                                       bool forceLoad = false)
{
    if (binBasePath == null)
    {
        if (HttpContext.Current != null)
            binBasePath = HttpContext.Current.Server.MapPath("~/");
        else
            binBasePath = AppDomain.CurrentDomain.BaseDirectory;
    }
    if (_RazorHost == null || forceLoad)
    {
        if (!binBasePath.EndsWith("\\"))
            binBasePath += "\\";

        //var razor = new RazorStringHostContainer();
        var razor = new RazorFolderHostContainer();
        razor.TemplatePath = binBasePath;
        binBasePath += "bin\\";
        razor.BaseBinaryFolder = binBasePath;
        razor.UseAppDomain = false;
        razor.ReferencedAssemblies.Add(binBasePath + "ClassifiedsBusiness.dll");
        razor.ReferencedAssemblies.Add(binBasePath + "ClassifiedsWeb.dll");
        razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Utilities.dll");
        razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Web.dll");
        razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Web.Mvc.dll");
        razor.ReferencedAssemblies.Add("System.Web.dll");
        razor.ReferencedNamespaces.Add("System.Web");
        razor.ReferencedNamespaces.Add("ClassifiedsBusiness");
        razor.ReferencedNamespaces.Add("ClassifiedsWeb");
        razor.ReferencedNamespaces.Add("Westwind.Web");
        razor.ReferencedNamespaces.Add("Westwind.Utilities");

        _RazorHost = razor;
        _RazorHost.Start();
        //_RazorHost.Engine.Configuration.CompileToMemory = false;
    }
    return _RazorHost;
}

The RazorFolderHostContainer is essentially a full runtime that mimics a folder structure the way a typical Web app does, including caching semantics and compiling code only if the code on disk changes. It maps a folder hierarchy to views using the ~/ path syntax.
The host is then configured to add assemblies and namespaces. Unfortunately the engine is not exactly like MVC's Razor – the expression expansion and code execution are the same, but some of the support methods like sections, helpers etc. are not all there, so templates have to be a bit simpler. Other hosts are provided as well to directly execute templates from strings (using RazorStringHostContainer).

The following is an example of an HTML email template:

@inherits RazorHosting.RazorTemplateFolderHost<ClassifiedsWeb.SearchNotificationViewModel>
<html>
<head>
    <title>Search Notifications</title>
    <style>
        body { margin: 5px; font-family: Verdana, Arial; font-size: 10pt; }
        h3 { color: SteelBlue; }
        .entry-item { border-bottom: 1px solid grey; padding: 8px; margin-bottom: 5px; }
    </style>
</head>
<body>
    Hello @Model.User.Name,<br />
    <p>Below are your Search Results for the search phrase:</p>
    <h3>@Model.Notification.SearchPhrase</h3>
    <small>since @TimeUtils.ShortDateString(Model.Notification.LastSearch)</small>
    <hr />

You can see that the syntax is a little different. Instead of the familiar @model header, the raw Razor @inherits tag is used to specify the template base class (which you can extend). I took a quick look through the feature set of RazorEngine on CodePlex (now on GitHub, I guess) and the template implementation they use is closer to MVC's Razor, but there are other differences. In the end, don't expect exact behavior like MVC templates if you use an external Razor rendering engine.

This is not what I would consider an ideal solution, but it works well enough for this project. My biggest concern is the overhead of hosting a second Razor engine in a Web app, and the fact that the differences in template rendering between 'real' MVC Razor views and another RazorEngine really are noticeable.

You win some, you lose some

It's extremely nice to see that if you have a ControllerContext handy (which probably addresses 99% of Web app scenarios), rendering a view to string using the native MVC Razor engine is pretty simple. Kudos on making that happen – it solves a problem I see in just about every Web application I work on. But it is a bummer that a ControllerContext is required to make this simple code work. It'd be really sweet if there were a way to render views without being so closely coupled to the ASP.NET or MVC infrastructure that requires a ControllerContext. Alternately, it'd be nice to have a way for an MVC-based application to create a minimal ControllerContext from scratch – maybe somebody's been down that path. I tried for a few hours to come up with a way to make that work but gave up in the soup of nested contexts (MVC/Controller/View/Http). I suspect going down this path would be similar to hosting the ASP.NET runtime, which requires a WorkerRequest. Brrr….

The sad part is that it seems to me that a View should really not require much 'context' of any kind to render output to string. Yes, there are a few things that clearly are required, like the virtual path and possibly the disk path to the root of the app, but beyond that view rendering should not require much. But, no such luck.
For now, custom Razor hosting seems to be the only way to make Razor rendering work outside of the MVC context…

Resources

Full ViewRenderer.cs source code from Westwind.Web.Mvc library
Hosting the Razor Engine for Non-Web Applications
RazorEngine on GitHub

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in ASP.NET  MVC

    Read the article

  • Toorcon14

    - by danx
Toorcon 2012 Information Security Conference
San Diego, CA, http://www.toorcon.org/
Dan Anderson, October 2012

It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California.

The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed the talks, below, which I attended and which interested me.

MalAndroid—the Crux of Android Infections, Aditya K. Sood
Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro
Privacy at the Handset: New FCC Rules?, Valkyrie
Hacking Measured Boot and UEFI, Dan Griffin
You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik
What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold
Accessibility and Security, Anna Shubina
Stop Patching, for Stronger PCI Compliance, Adam Brand
McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall

MalAndroid—the Crux of Android Infections
Aditya K. Sood, IOActive, Michigan State PhD candidate

Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most of it has unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit).

Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grained resource control and policy enforcement. Android static analysis also includes a static analysis app checker ("bouncer") and a vulnerability checker.

What security problems does Android have? User-centric security, which depends on the user to grant permissions and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid)—all they want is functionality, extensibility, and mobility. Other problems:

- Android had no "proper" encryption before Android 3.0
- No built-in protection against social engineering and web tricks
- Alternative Android app markets are unsafe—simply visiting some markets can infect Android

Aditya classified Android malware types as:

- Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app, or Android Gold Dream (a game), which uploads user files in a stealthy manner to a remote location.
- Type K—Kernel. Exploits underlying Linux libraries or the kernel.
- Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors.

What are the threats from Android malware? These include leaking info (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes—for example, promoting specific leaders of the Tunisian or Iranian revolutions).

Android malware is frequently "masqueraded"—that is, repackaged inside a legit app. To avoid detection, the hidden malware is not unwrapped until runtime.
The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there aren't many around. What they do is hijack the Android init framework—altering system programs and daemons—then delete themselves. For example, the DKF Bootkit (China).

Android app problems:
- no code signing! all self-signed
- native code execution permission
- sandbox—all or none
- alternate marketplaces
- no robust Android malware detection at the network level
- delayed patch process

Programming Weird Machines with ELF Metadata
Rebecca "bx" Shapiro, Dartmouth College, NH
https://github.com/bx/elf-bf-tools
@bxsays on twitter

Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that return to a weird location, do SQL injection, or corrupt the heap.

Bx then talked about using ELF metadata as an (unintended) "weird machine". Some ELF background: a compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Tables—works with the GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory).

For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements:

- instructions: add, move, jump if not 0 (jnz)
- symbol table entries act as "registers"; a symbol table value is its "contents"
- immediate values are constants; direct values are addresses (e.g., 0xdeadbeef)
- move instruction: a relocation table entry
- add instruction: a relocation table "addend" entry
- jnz instruction: takes multiple relocation table entries

The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at "end" and uses IFUNC table entries (containing function pointer addresses). The ELF weird machine, called "Brainfu*k" (BF), has 8 instructions—pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print—and three registers.

Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call that drops privilege, so ping remained root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example:

$ whoami
bx
$ ping localhost -t backdoor.sh    # executes backdoor
$ whoami
root
$

The modified code increased from 285948 bytes to 290209 bytes.
A BF tool compiles an "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data.

Privacy at the Handset: New FCC Rules?
"Valkyrie" (Christie Dudley, Santa Clara Law JD candidate)

Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround, "recollating" data with other databases to identify customers indirectly.

The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. The carriers say mobile phone privacy is an FTC responsibility (not FCC). The FTC is trying to improve mobile app privacy, but it has no authority over carrier/customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones. As a result iPhones may be more regulated.

Who are the consumer advocates? Everyone knows the EFF, but EPIC (Electronic Privacy Information Center), although more obscure, is more relevant here.

What to do? Carriers must be accountable, with opt-in and opt-out at any time. Carriers need an incentive to grant users control for those who want it, by holding them liable and responsible for breaches on their clock. Location information should be added to current CPNI privacy protection, and should require a "pen/trap" judicial order to obtain (which would still be a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House. There will probably be new regulation soon; enforcement will be a problem, but consumers will still see some benefit.

Hacking Measured Boot and UEFI
Dan Griffin, JWSecure, Inc., Seattle, @JWSdan

Dan talked about hacking measured UEFI boot. First some terms:
- UEFI is a boot technology that is replacing BIOS (with whitelisting and blacklisting); UEFI protects devices against rootkits
- TPM: a hardware security device that stores hashes and hardware-protected keys
- "secure boot" can control, at the firmware level, what boot images can boot
- "measured boot" is an OS feature that tracks hashes (from BIOS, boot loader, kernel, early drivers)
- "remote attestation" allows remote validation and control based on policy on a remote attestation server

Microsoft is pushing TPM (required for Windows 8), but Google is not. Intel TianoCore is the only open source implementation for UEFI. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines.

UEFI weaknesses.
UEFI toolkits are evolving rapidly, but UEFI has weaknesses:
- it assumes the user is an ally
- it trusts the TPM implicitly, and that it's attached to the computer
- the hibernate file is unprotected (disk encryption protects against this)
- protection is migrating from hardware to firmware
- delays in patching and whitelist updates
- will UEFI really be adopted by the mainstream? (smartphone hardware support, bank support, apathetic consumer support)

You Can't Buy Security: Building the Open Source InfoSec Program
Boris Sverdlik, ISDPodcast.com co-host

Boris talked about problems typical of current security audits. "IT Security" is an oxymoron—IT exists to enable business, uptime, utilization, and reporting, but doesn't care about security—IT has a conflict of interest. There's no magic bullet ("blinky box"), and no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing are not sexy. Auditors are not the solution (security is not a checklist)—what's needed is experience and adaptability—soft skills.

Step 1: The first thing is to Google and learn the company end-to-end before you start. Get to know the management team (not the IT team), and meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g., SLE = AV * EF). Learn the different business units and their legal/regulatory obligations, learn the business and where the money is made, verify the company is protected from script kiddies (easy), learn the sensitive information (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering).

Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, and physical security.

Step 3: Implementation—keep it simple, stupid. Open source, although useful, is not free (implementation cost).

Access controls: authentication & authorization for local and remote access. MS Windows has it; otherwise use OpenLDAP, OpenIAM, etc.

Application security: Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run apps of different risk levels on the same system. Assume the host/client is compromised and use app-level security controls.

Network security: VLAN != segregated, because there are too many workarounds. Use explicit firewall rules, active and passive network monitoring (snort is free), disallow end-user access to the production environment, and have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth, and SSL does not mean "safe."

Operational controls: Have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down the logs (they hold everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVAS and Seccubus are free.

Security awareness: The reality is users will always click everything. Build real awareness, not a compliance-driven checkbox, and have it integrated into the culture.
Pen test by crowdsourcing—test with logging. COSSP, http://www.cossp.org/ — Comprehensive Open Source Security Project.

What Journalists Want: The Investigative Reporters' Perspective on Hacking
Dave Maas, San Diego CityBeat
Jason Leopold, Truthout.org

The difference between hackers and investigative journalists: for hackers, the motivation varies, but the method is the same, with technological specialties. For investigative journalists, it's about one thing—The Story—and they need broad info-gathering skills.

J-School in 60 Seconds: Generic formula: a person or issue of public interest, new info, or a new angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence.

Media awareness of hackers and trends: journalists are becoming extremely aware of hackers, with congressional debates (privacy, data breaches), demand for data-mining journalists, use of coding and web development by journalists, and journalists busted for hacking (Murdoch).

Info gathering by investigative journalists includes public records laws. The federal Freedom of Information Act (FOIA) is good, but slow. The California Public Records Act (CPRA) is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. Often you need to sue (especially the FBI). CPRA is faster, and requests can be vague. There are also dumps and leaks (à la Wikileaks).

Journalists want: leads, protection for themselves and their sources, and tools adapted for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web logs). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it.

Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem
Anna Shubina, Dartmouth College

Anna talked about how accessibility and security are related. Accessibility of digital content (not real-world accessibility) mostly refers, for our purpose, to blind users and screen readers. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word has an executable format—it's not a document exchange format—more dangerous than PDF or HTML.

Accessibility is often the first, and maybe only, sanity check on parsing. Accessibility tools have no choice but to parse, because someone may want to read what you write. Google, for example, is very particular about the web browser you use and is bad at supporting other browsers. It uses JavaScript instead of links, often requiring mouseover to display content.

PDF is a security nightmare: an executable format with embedded Flash, JavaScript, etc.—15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all.

The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic security says complicated data formats are hard to parse and cannot be fully checked, due to the Halting Problem.
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust". Not much help, though, except for "Robust"—but here are some gems (paraphrasing):
- all information should be parsable
- if not parsable, it cannot be converted to alternate formats
- maximize compatibility in new document formats

Executable web pages are bad for security and accessibility. They say it's for a better web experience, but is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input.

Solutions:
- accessibility and security problems come from the same source
- expose the "better user experience" myth
- keep your corner of the Internet parsable
- remember the "Halting Problem"—recognize false solutions (checking and verifying tools)

Stop Patching, for Stronger PCI Compliance
Adam Brand, Protiviti, @adamrbrand, http://www.picfun.com/

Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (an IT guy), 960 hours/year, was spent patching POSs in 850 restaurants. Often applying some patches makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is overuse of vulnerability scanners—it gives a warm fuzzy feeling and it's simple (red or green results—fix the reds). But scanners give a false sense of security. In reality, breaches from missing patches are uncommon—the more common problems are default passwords, cleartext authentication, and misconfiguration (open firewall ports).

Patching myths:
- Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead).
- Myth 2: the vendor decides what's critical (also PCI §6.1; §6.2 requires user ranking of vulnerabilities instead).
- Myth 3: scan and rescan until it passes (but PCI §11.2.1b says this applies only to high-risk vulnerabilities).

Adam says good recommendations come from NIST 800-40. Use sane patching and focus on what's really important:
- Proactive: use a proactive vulnerability management process: change control, configuration management, file integrity monitoring.
- Monitor: start with the NVD and other vulnerability alerts, not scanner results.
- Evaluate: public-facing system? workstation? internal server? (risk rank)
- Decide: on action and timeline.
- Test: pre-test patches (stability, functionality, rollback) for change control.
- Install: notify, change control, tickets.

McAfee Secure & Trustmarks — a Hacker's Best Friend
Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada

"McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes McAfee's remote scanning. The problem is that removal of a trustmark acts as a flag that you're vulnerable: it's easy to watch for status changes by viewing the McAfee list on its website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If a site's certification image changes to a 1x1 pixel image, it is no longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is that a change in trustmark status is a flag for hackers to attack your site. The entire idea of seals is silly—you're raising a flag announcing when you're vulnerable.

    Read the article

  • Toorcon 15 (2013)

    - by danx
The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo

Introduction to Toorcon 15 (2013)
A Tale of One Software Bypass of MS Windows 8 Secure Boot
Breaching SSL, One Byte at a Time
Running at 99%: Surviving an Application DoS
Security Response in the Age of Mass Customized Attacks
x86 Rewriting: Defeating RoP and other Shinanighans
Clowntown Express: interesting bugs and running a bug bounty program
Active Fingerprinting of Encrypted VPNs
Making Attacks Go Backwards
Mask Your Checksums—The Gorry Details
Adventures with weird machines thirty years after "Reflections on Trusting Trust"

Introduction to Toorcon 15 (2013)

Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here, starting in 2003. As always, I've only summarized the talks I attended and that interested me enough to write about them. Be aware that I may have misrepresented the speakers' remarks, and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information.

A Tale of One Software Bypass of MS Windows 8 Secure Boot
Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak

Yuri and Alex talked about UEFI, bootkits, and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference.

MS Windows 8 Secure Boot Overview

UEFI (Unified Extensible Firmware Interface) is the interface between hardware and OS. UEFI is processor- and architecture-independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi); once replaced, it can modify the kernel, and replacing the bootloader is trivial. Today there are many legacy bootkits—UEFI obsoletes most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure (signed) updates. You would think Secure Boot would rely on ROM (such as is used for phones), but you can't do that for PCs—PCs use writable memory with signatures.

The DXE core verifies the UEFI boot loader(s), and the OS loader (winload.efi, winresume.efi) verifies the OS kernel. A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx)—X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (they can't be accessed once the OS starts—the OS uses Run-Time (RT) services). The root cert uses RSA-2048 public keys and PKCS#7 format signatures.

Key variables:
- SecureBoot — enable/disable image signature checks
- SetupMode — update keys, self-signed keys, and secure boot variables
- CustomMode — allows updating keys

Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation.

Attacking MS Windows 8 Secure Boot

Secure Boot does NOT protect from physical access—it can be disabled from the console. Each BIOS vendor implements Secure Boot differently, and there are several platform and BIOS vendors, so it becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly.
A correct implementation must:
- allow only signed UEFI firmware updates, and protect UEFI firmware from direct modification in flash memory
- protect FW update components
- program the SPI controller securely
- protect secure boot policy settings in NVRAM
- protect the runtime API
- disable the compatibility support module, which allows unsigned legacy boot

One can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash; if the PK is not found, the firmware enters setup mode with secure boot turned off. One can also exploit the TPM in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, though. But they found a bug that they can exploit from user mode (undisclosed) and demoed the exploit—it loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable, and they will disclose this exploit to vendors in the future.

Recommendations:
- allow only signed updates
- protect UEFI firmware in ROM
- protect the EFI variable store in ROM

Breaching SSL, One Byte at a Time
Angelo Prado and Yoel Gluck, Salesforce.com

CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME sends requests with every possible character and measures the ciphertext length, looking for the plaintext that compresses the most, and so recovers the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy, with Huffman coding replacing common byte sequences with shorter codes. US-CERT thought the SSL compression problem was fixed, but it isn't; the speakers convinced CERT that it wasn't fixed and a CVE was issued.

BREACH, breachattack.com

BREACH exploits the SSL-protected HTTP response body (Accept-Encoding, Content-Encoding). It takes advantage of the fact that, unlike TLS-level compression, HTTP response compression was not turned off. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content reflected in the response (say, from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret.

Compression can be used to guess contents one byte at a time. For example, "Supersecret SupersecreX" (a wrong guess) compresses by 10 bytes, while "Supersecret Supersecret" (a correct guess) compresses by 11 bytes—the correct guess compresses better—so each character can be found by trying every possible character. To start the guessing, BREACH needs at least three known initial characters in the response sequence; the compression length then "leaks" information.

Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include:
- lookahead (guess 2 or 3 characters at a time instead of 1 character)—expensive
- rollback to the last known conflict
- check the compression ratio
- brute-force the first 3 "bootstrap" characters, if needed (expensive)

Block ciphers hide the exact plaintext length; the solution is to align the response in advance to the block size.

Mitigations:
- length: use variable padding
- secrets: use dynamic CSRF tokens per request, and change secrets over time
- separate secrets into input-less servlets

Future work: either understanding DEFLATE/GZIP better, or HTTPS extensions.
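To make the length side channel concrete, here is a minimal Node.js sketch of the underlying principle (my illustration, not the speakers' code; the secret and the guesses are made up):

// Compressed output length leaks whether a guessed suffix matches
// a secret already present in the body (the CRIME/BREACH principle).
var zlib = require("zlib");

var body = "csrf_token=Supersecret; rest of the page ...";

["SupersecreX", "Supersecret"].forEach(function (guess) {
    // attacker-reflected content appended to the response body
    var page = body + " csrf_token=" + guess;
    var len = zlib.deflateSync(page).length;
    console.log(guess + " -> " + len + " compressed bytes");
});
// The correct guess yields the smaller compressed size, because
// deflate finds a longer repeated substring to back-reference.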
Running at 99%: Surviving an Application DoS
Ryan Huber, Risk I/O

Ryan first discussed various ways to mount a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets, or to download large files. Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use Apache alternatives such as nginx.

How to identify malicious hosts:
- short, sudden web requests
- user-agent is obvious (curl, python)
- same URL requested repeatedly
- no web page referer (not normal)
- hidden links: hide a link and see if a bot follows it
- restricted access if not your geo IP (unless the website is global)
- missing common headers in the request
- regular timing
- first-seen IP at the beginning of an attack
- count of requests per host (usually a very large number)

Use of a captcha can mitigate attacks, but you'll lose a lot of genuine users.

Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer

Bouncer is software written by Ryan that works from netflow data. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it's not nice about it—no proper connection close). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers to get clean data for use by Bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, a consumer library, multitasking, and logging of destroyed connections.

Takeaways: DoS mitigation is easier with a complete picture. Bouncer is designed to make it easier to detect and defend against DoS—it's not a complete cure.

Security Response in the Age of Mass Customized Attacks
Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/

Peleus and Karthik talked about responding to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School: differentiating a product for an individual customer, but at a mass-production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts.

Exploit kits are another example of mass customization. The kits support multiple browsers and plugins and allow new modules. Exploit kits are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits.

Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now, so the drive with malware is towards mass-customized exploits, to avoid detection.

There's plenty of evidence that exploit development has project-manager bureaucracy. From the malware itself, they infer edicts to:
- support all versions of Reader
- support all versions of Windows
- support all versions of Flash
- support all browsers
- write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example)

Exploits have "loose coupling" across multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browsers and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs.
Security Response in the Age of Mass Customized Attacks
Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/
Peleus and Karthik talked about responding to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School: differentiating a product for an individual customer, but at a mass-production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world; or you can design your own PC from commodity parts. Exploit kits are another example of mass customization: the kits support multiple browsers and plugins and allow new modules. Exploit kits are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits. Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now, so the drive with malware is towards mass-customized exploits, to avoid detection. There's plenty of evidence that exploit development has project-manager bureaucracy. From the malware itself they infer edicts to:
- support all versions of Reader
- support all versions of Windows
- support all versions of Flash
- support all browsers
- write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example)
Exploits have "loose coupling" across multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browsers and obscure versions. They gave examples of exploits that chained 2, 3, 6, or 14 separate bugs. However, these complete exploits are more likely to be buggy or fragile in themselves, and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is mass malware with mass zero-day attacks, resulting in mass customization of attacks.

x86 Rewriting: Defeating RoP and other Shenanigans
Richard Wartell
The attack vector addressed here: first, some malware causes a buffer overflow. The malware has no direct access to the program, but it controls input, and the buffer overflow writes code onto the stack. Later, stacks became non-executable; the workaround malware used was to write a bogus return address to the stack, jumping back into the malware. Later came ASLR (Address Space Layout Randomization) to randomize the memory layout and make addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways. "RoP" is Return-Oriented Programming: RoP attacks reuse your own code, writing return addresses on the stack that point to exploitable snippets found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is anti-RoP compilers that compile source code with NO return instructions. ASLR randomizes section bases but not the "gadgets" within them. IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine. Richard's goal was to randomize a binary with no source code access. He created STIR (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code; the disassembler is conservative in what it disassembles. Each basic block is moved to a random location in memory: STIR writes new code sections with copies of the basic blocks in randomized locations, the old code is rewritten with jumps to the new code, and the original code sections in the file are marked non-executable. STIR has better entropy than ASLR in code location, making brute-force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of the "gadgets" (i.e., moved their addresses). Overhead is usually 5-10% on MS Windows and about 1.5-4% on Linux (some code actually runs faster!). The unique thing about STIR is that it requires no source access and the modified binary fully works. Current work is rewriting code to enforce security policies, for example: don't create a *.{exe,msi,bat} file, or don't connect to the network after reading from the disk.

Clowntown Express: interesting bugs and running a bug bounty program
Collin Greene, Facebook
Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing security on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were against software purchased from a third party (bounty claimers don't know and don't care). FB uses security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they got 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring in people with different perspectives, and are paid only for success; a bug bounty is a better use of a fixed amount of time and money than code review or static code analysis alone. The bounty program started in July 2011 and has paid out $1.5 million to date.
14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small percentage of submitters (as with everything else); the top-paid submitters earn six figures a year. Spammers like to backstab competitors. The youngest submitter was 13, and some submitters have been hired. Bug bounties also surface bugs that were missed by tools or reviews, allowing improvement of the process. Bug bounties might not work for traditional software companies whose product has a release cycle or is not on the Internet.

Active Fingerprinting of Encrypted VPNs
Anna Shubina, Dartmouth Institute for Security, Technology, and Society
(I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as which router is used). Delayed packets are not informative about a network, especially if far away from the network. More exploration is needed of how TCP behaves in real life with respect to timing.

Making Attacks Go Backwards
FuzzyNop, Mandiant
This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. So who is making these malware threats? It's not a single person or group; they have diverse skill levels, and there are a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first:
- malware "hiding" in the temp, recycle, or root directories
- creation of unnamed scheduled tasks
- obvious names of files and syscalls ("ClearEventLog")
- uncleared event logs; clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system
Reverse engineering is hard. Disassembler use takes practice and skill; a popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" or "[RightShift]"), as do timestamp printf strings; look for these in files. Presence is not proof they are used; absence is not proof they are not used. Java exploits: you can parse a jar file with idxparser.py and decompile the Java. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware; malware also typically needs command and control.

Application of Artificial Intelligence in Ad-Hoc Static Code Analysis
John Ashaman, Security Innovation
Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried grep, but this fails to find anything even mildly complex. So John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. The problem is that building a call graph is really hard, for example because of "evil" coding techniques such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST), with nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph, with the goal of finding a path through the AST (a maze) from source to sink. The algorithm looks at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts on the Security Innovation blog and NRefactory on GitHub.
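The source-to-sink search John describes is at heart a graph reachability problem. Here is a minimal sketch (my illustration, not Scat's code; the method names are hypothetical), using plain breadth-first search where his tool applies heuristic ordering:

using System;
using System.Collections.Generic;

class CallGraphSearch
{
    // Breadth-first search from a source method to a "scary" sink method.
    // Returns the call path if one exists, otherwise null.
    static List<string> FindPath(Dictionary<string, List<string>> callGraph, string source, string sink)
    {
        var parent = new Dictionary<string, string> { [source] = null };
        var queue = new Queue<string>();
        queue.Enqueue(source);
        while (queue.Count > 0)
        {
            var node = queue.Dequeue();
            if (node == sink)
            {
                // Walk the parent links back to the source to rebuild the path.
                var path = new List<string>();
                for (var n = node; n != null; n = parent[n]) path.Add(n);
                path.Reverse();
                return path;
            }
            if (!callGraph.TryGetValue(node, out var callees)) continue;
            foreach (var callee in callees)
                if (!parent.ContainsKey(callee)) { parent[callee] = node; queue.Enqueue(callee); }
        }
        return null;
    }

    static void Main()
    {
        // Hypothetical call graph: HandleRequest eventually reaches ExecuteSql.
        var graph = new Dictionary<string, List<string>>
        {
            ["HandleRequest"] = new List<string> { "ParseInput", "BuildQuery" },
            ["BuildQuery"] = new List<string> { "ExecuteSql" },
        };
        var path = FindPath(graph, "HandleRequest", "ExecuteSql");
        Console.WriteLine(path == null ? "no path" : string.Join(" -> ", path));
    }
}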
Mask Your Checksums: The Gory Details
Eric (XlogicX) Davisson
Sometimes when emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, though a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data), and other parts of the packet may leak more data about the IP. The TCP and IP checksums both depend on the same addresses (TCP through its pseudo-header), so you can get more bits of information by using both checksums than just one. Also, one can usually determine the OS from the TTL field and the ports in a packet header. If you get hundreds of possible results (16 per masked nibble that is unknown), you can do other things to narrow them down, such as looking at packet contents for domain or geo information. With hundreds of results, you can import them as CSV into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached; he was able to find the exact IP address, given the geo data and university of the sender. The point: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, so as not to create a false impression of security.
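To see concretely how a masked address leaks through an unmasked checksum, here is a minimal sketch (my illustration, not Eric's tooling): it computes the RFC 1071 Internet checksum over an IPv4 header, then enumerates candidate values for a masked source octet, keeping those consistent with the observed checksum. The header bytes are hypothetical.

using System;
using System.Collections.Generic;

class ChecksumLeak
{
    // RFC 1071 Internet checksum: one's-complement sum of 16-bit words, complemented.
    static ushort Checksum(byte[] header)
    {
        uint sum = 0;
        for (int i = 0; i < header.Length; i += 2)
            sum += (uint)((header[i] << 8) | header[i + 1]);
        while ((sum >> 16) != 0)
            sum = (sum & 0xFFFF) + (sum >> 16); // fold carries back in
        return (ushort)~sum;
    }

    static void Main()
    {
        // Hypothetical 20-byte IPv4 header; bytes 12-15 are the source address,
        // and bytes 10-11 (the checksum field itself) are zeroed for computation.
        byte[] header = {
            0x45, 0x00, 0x00, 0x3C, 0x1C, 0x46, 0x40, 0x00,
            0x40, 0x06, 0x00, 0x00,
            0xC0, 0xA8, 0x00, 0x42,  // source 192.168.0.66 (the "true" address)
            0xC0, 0xA8, 0x00, 0x01   // destination 192.168.0.1
        };
        ushort observed = Checksum(header); // what an unmasked report would reveal

        // Now pretend the report masked the last source octet but not the checksum.
        var candidates = new List<int>();
        for (int octet = 0; octet <= 255; octet++)
        {
            header[15] = (byte)octet;
            if (Checksum(header) == observed) candidates.Add(octet);
        }
        // Typically a single candidate survives per masked byte; each additional
        // masked nibble multiplies the possibilities by 16, as the talk notes.
        Console.WriteLine("possible last octets: " + string.Join(", ", candidates));
    }
}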
Adventures with weird machines thirty years after "Reflections on Trusting Trust"
Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present)
"Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs", bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author: can you trust the code? Hell no! There are too many factors; it's Babylonian in nature. Why not?
- Input is not well defined/recognized: the code's assumptions about "checked" input will be violated (bug/vulnerability). For example, HTML is recursive, but regex checking is not recursive.
- Input is well formed but so complex there's no telling what it does. For example, ELF file parsing is complex and has multiple ways of parsing.
- Input is seen differently by different pieces of the program or toolchain.
Any input is a program: input "executes" on its input handlers, driving state changes and transitions, and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org), or it's a "virtual machine" for inputs to drive into pwn-age.
Case study: the ELF ABI (the UNIX/Linux executable file format). Problems can arise from any of these steps, without anyone planting bugs: compiler, linker, loader, the ld.so/rtld relocator, DWARF (debugger info), exceptions. The problem is you can't really automatically analyze code (it's the "halting problem", which is undecidable). The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading; you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing machine. @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file; for more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. DWARF exception-handling data works too: .eh_frame + glibc == a Turing machine. The x86 MMU (IDT, GDT, TSS) can be abused similarly: address translation was used to create a Turing machine, where the page handler reads and writes memory on page faults and the page table serves as the machine's byte code. There is an example on GitHub using this machine to fly a glider across the screen.
Next Sergey talked about "parser differentials": one input format with two parsers creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF: there are several parsers in the OS toolchain, all different. You can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs; the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC.
Conclusions:
- trusting computers is not only about bugs! Bugs are part of the problem, but by no means all of it
- complex data formats mean bugs
- there is no "chain of trust" in Babylon (that is, with parser differentials)
- we need to squeeze complexity out of data until data stops being "code equivalent"
For further information, see langsec.org, and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.


  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
In my previous post I demonstrated how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature of the Windows Azure platform. It's low-cost, and it provides the IIS and IISNode components so that we can host our Node.js application through Git, FTP and WebMatrix without any configuration or component installation. But sometimes we need to use the Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js on a worker role. Below are some benefits of using a worker role.
- WAWS leverages IIS and IISNode to host the Node.js application, which runs in x86 WOW mode; this reduces performance compared with x64 in some cases.
- A WACS worker role does not need IIS, hence there are no IIS restrictions, such as the 8000-concurrent-request limitation.
- WACS provides more flexibility and control to developers. For example, we can RDP to the virtual machines of our worker role instances.
- WACS provides the service configuration features, which can be changed while the role is running.
- WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site, while in WACS we can have up to 20 instances in a subscription.
- Since with a WACS worker role we start the node process ourselves, we can control its input, output and error streams. We can also control the version of Node.js.

Run Node.js in Worker Role
Node.js can be started by just having its execution file. This means that in Windows Azure we can have a worker role containing "node.exe" and the Node.js source files, and start it in the Run method of the worker role entry class. Let's create a new Windows Azure project in Visual Studio and add a new worker role. Since we need the worker role to execute "node.exe" with our application code, we need to add "node.exe" to the project: right-click on the worker role project and add an existing item. By default Node.js is installed in the "Program Files\nodejs" folder, so we can navigate there and add "node.exe". Then we need to create the Node.js entry code. In WAWS the entry file must be named "server.js", because it's hosted by IIS and IISNode, and IISNode only accepts "server.js". But here, since we control everything, we can choose any file as the entry code. For example, I created a new JavaScript file named "index.js" in the project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the "Add new item" context menu; we have to create a text file and then rename it to the JavaScript extension. After adding these two files we should set their "Copy to Output Directory" property to "Copy Always" or "Copy if Newer"; otherwise they will not be included in the package when deployed. Let's paste a very simple Node.js script into "index.js", as below. As you can see, it creates a web server listening on port 12345.
1: var http = require("http");
2: var port = 12345;
3: 
4: http.createServer(function (req, res) {
5:     res.writeHead(200, { "Content-Type": "text/plain" });
6:     res.end("Hello World\n");
7: }).listen(port);
8: 
9: console.log("Server running at port %d", port);
Then we need to start "node.exe" with this file when our worker role starts. This can be done in its Run method: locate node.exe and the entry JavaScript file, then create a new process to run it. Our worker role will wait for the process to exit.
If everything is OK, once our web server is up the process will stay alive listening for incoming requests and should not terminate. The code in the worker role looks like this.
1: public override void Run()
2: {
3:     // This is a sample worker implementation. Replace with your logic.
4:     Trace.WriteLine("NodejsHost entry point called", "Information");
5: 
6:     // retrieve the node.exe and entry node.js source code file name.
7:     var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe");
8:     var js = "index.js";
9: 
10:     // prepare the process starting of node.exe
11:     var info = new ProcessStartInfo(node, js)
12:     {
13:         CreateNoWindow = false,
14:         ErrorDialog = true,
15:         WindowStyle = ProcessWindowStyle.Normal,
16:         UseShellExecute = false,
17:         WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot")
18:     };
19:     Trace.WriteLine(string.Format("{0} {1}", node, js), "Information");
20: 
21:     // start the node.exe with entry code and wait for exit
22:     var process = Process.Start(info);
23:     process.WaitForExit();
24: }
Then we can run it locally. In the compute emulator UI the worker role starts, it executes Node.js, and the Node.js window appears. Open the browser to verify the website hosted by our worker role. Next let's deploy it to Azure; this needs some additional steps. First, we need to create an input endpoint. By default there's no endpoint defined in a worker role, so we open the role property window in Visual Studio and create a new input TCP endpoint on the port we want our website to use; in this case I will use 80. Even though we created a web server, we should add a TCP endpoint to the worker role, since Node.js always listens on TCP rather than HTTP. Then change "index.js" so our web server listens on 80.
1: var http = require("http");
2: var port = 80;
3: 
4: http.createServer(function (req, res) {
5:     res.writeHead(200, { "Content-Type": "text/plain" });
6:     res.end("Hello World\n");
7: }).listen(port);
8: 
9: console.log("Server running at port %d", port);
Then publish it to Windows Azure, and in the browser we can see our Node.js website running on a WACS worker role. We may encounter an error if we try to run the Node.js website on port 80 in the local emulator. This is because the compute emulator registered port 80 and maps the 80 endpoint to 81, but Node.js cannot detect this remapping: when it tries to listen on 80 it fails, since 80 is already in use.
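A side note of mine, not from the original post: one instinctive fix is to have the C# host read the port the fabric or emulator actually assigned via RoleEnvironment and pass it to node.exe as an argument. The sketch below is untested and the endpoint name "NodeHttpIn" is hypothetical; as the summary at the end of this post explains, configuration retrieval has problems with this hosting approach, which the follow-up post addresses with a different technique.

// Sketch (untested): a variant of the Run method above that reads the port
// assigned to a hypothetical input endpoint named "NodeHttpIn" and hands it
// to node.exe, so the emulator's 80 -> 81 remapping would be picked up.
// RoleEnvironment lives in Microsoft.WindowsAzure.ServiceRuntime.
public override void Run()
{
    var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe");
    var js = "index.js";
    var port = RoleEnvironment.CurrentRoleInstance
                              .InstanceEndpoints["NodeHttpIn"].IPEndpoint.Port;
    var info = new ProcessStartInfo(node, string.Format("{0} {1}", js, port))
    {
        UseShellExecute = false,
        WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot")
    };
    // index.js would then read the port from process.argv instead of hardcoding it.
    Process.Start(info).WaitForExit();
}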
Use NPM Modules
When we use WAWS to host Node.js, we can simply install the modules we need and then publish or upload all files to WAWS. But if we are using a WACS worker role, we have to take some extra steps to make the modules work. Assume we plan to use "express" in our application. First of all we should download and install the module through the NPM command. After the install finishes, the files are just on disk, not included in the worker role project; if we deployed the worker role right now the module would not be packaged and uploaded to Azure. Hence we need to add them to the project: in the solution explorer window click the "Show all files" button, select the "node_modules" folder, and choose "Include In Project" from the context menu. But that's not enough. We also need to set all files in the module to "Copy always" or "Copy if newer", so that they are uploaded to Azure along with "node.exe" and "index.js". This is a painful step, since there might be many files in a module. So I created a small tool which updates a C# project file, marking all its items as "Copy always". The code is very simple.
1: static void Main(string[] args)
2: {
3:     if (args.Length < 1)
4:     {
5:         Console.WriteLine("Usage: copyallalways [project file]");
6:         return;
7:     }
8: 
9:     var proj = args[0];
10:     File.Copy(proj, string.Format("{0}.bak", proj));
11: 
12:     var xml = new XmlDocument();
13:     xml.Load(proj);
14:     var nsManager = new XmlNamespaceManager(xml.NameTable);
15:     nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003");
16: 
17:     // add the output setting to copy always
18:     var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager);
19:     UpdateNodes(contentNodes, xml, nsManager);
20:     var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager);
21:     UpdateNodes(noneNodes, xml, nsManager);
22:     xml.Save(proj);
23: 
24:     // remove the namespace attributes
25:     var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>");
26:     xml.LoadXml(content);
27:     xml.Save(proj);
28: }
29: 
30: static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager)
31: {
32:     foreach (XmlNode node in nodes)
33:     {
34:         var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager);
35:         if (copyToOutputDirectoryNode == null)
36:         {
37:             var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null);
38:             n.InnerText = "Always";
39:             node.AppendChild(n);
40:         }
41:         else
42:         {
43:             if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0)
44:             {
45:                 copyToOutputDirectoryNode.InnerText = "Always";
46:             }
47:         }
48:     }
49: }
Please be careful when using this tool. I created it only for this demo, so do not use it directly in a production environment. Unload the worker role project, execute the tool with the worker role project file name as the command-line argument, and it will set all items to "Copy always"; then reload the worker role project. Now let's change "index.js" to use express.
1: var express = require("express");
2: var app = express();
3: 
4: var port = 80;
5: 
6: app.configure(function () {
7: });
8: 
9: app.get("/", function (req, res) {
10:     res.send("Hello Node.js!");
11: });
12: 
13: app.get("/User/:id", function (req, res) {
14:     var id = req.params.id;
15:     res.json({
16:         "id": id,
17:         "name": "user " + id,
18:         "company": "IGT"
19:     });
20: });
21: 
22: app.listen(port);
Finally let's publish it and have a look in the browser.

Use Windows Azure SQL Database
We can use Windows Azure SQL Database (a.k.a. WASD) from Node.js as well when hosting on a worker role. Since we can control the version of Node.js, here we can use the x64 version of "node-sqlserver"; this is better than hosting Node.js on WAWS, which supports only x86. Just install the "node-sqlserver" module from NPM, copy "sqlserver.node" from the "Build\Release" folder to the "Lib" folder, include them in the worker role project, and run my tool to set them to "Copy always". Finally update "index.js" to use WASD.
1: var express = require("express");
2: var sql = require("node-sqlserver");
3: 
4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
5: var port = 80;
6: 
7: var app = express();
8: 
9: app.configure(function () {
10:     app.use(express.bodyParser());
11: });
12: 
13: app.get("/", function (req, res) {
14:     sql.open(connectionString, function (err, conn) {
15:         if (err) {
16:             console.log(err);
17:             res.send(500, "Cannot open connection.");
18:         }
19:         else {
20:             conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
21:                 if (err) {
22:                     console.log(err);
23:                     res.send(500, "Cannot retrieve records.");
24:                 }
25:                 else {
26:                     res.json(results);
27:                 }
28:             });
29:         }
30:     });
31: });
32: 
33: app.get("/text/:key/:culture", function (req, res) {
34:     sql.open(connectionString, function (err, conn) {
35:         if (err) {
36:             console.log(err);
37:             res.send(500, "Cannot open connection.");
38:         }
39:         else {
40:             var key = req.params.key;
41:             var culture = req.params.culture;
42:             var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
43:             conn.queryRaw(command, function (err, results) {
44:                 if (err) {
45:                     console.log(err);
46:                     res.send(500, "Cannot retrieve records.");
47:                 }
48:                 else {
49:                     res.json(results);
50:                 }
51:             });
52:         }
53:     });
54: });
55: 
56: app.get("/sproc/:key/:culture", function (req, res) {
57:     sql.open(connectionString, function (err, conn) {
58:         if (err) {
59:             console.log(err);
60:             res.send(500, "Cannot open connection.");
61:         }
62:         else {
63:             var key = req.params.key;
64:             var culture = req.params.culture;
65:             var command = "EXEC GetItem '" + key + "', '" + culture + "'";
66:             conn.queryRaw(command, function (err, results) {
67:                 if (err) {
68:                     console.log(err);
69:                     res.send(500, "Cannot retrieve records.");
70:                 }
71:                 else {
72:                     res.json(results);
73:                 }
74:             });
75:         }
76:     });
77: });
78: 
79: app.post("/new", function (req, res) {
80:     var key = req.body.key;
81:     var culture = req.body.culture;
82:     var val = req.body.val;
83: 
84:     sql.open(connectionString, function (err, conn) {
85:         if (err) {
86:             console.log(err);
87:             res.send(500, "Cannot open connection.");
88:         }
89:         else {
90:             var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
91:             conn.queryRaw(command, function (err, results) {
92:                 if (err) {
93:                     console.log(err);
94:                     res.send(500, "Cannot retrieve records.");
95:                 }
96:                 else {
97:                     res.send(200, "Inserted Successful");
98:                 }
99:             });
100:         }
101:     });
102: });
103: 
104: app.listen(port);
Publish to Azure, and now we can see our Node.js working with WASD through the x64 version of "node-sqlserver".

Summary
In this post I demonstrated how to host Node.js in a Windows Azure Cloud Service worker role. By using a worker role we can control the version of Node.js as well as the entry code, and it's possible to do some preparation jobs before the Node.js application starts. It also removes the IIS and IISNode limitations. I personally recommend using a worker role for Node.js hosting. But there are some problems with the approach I described here. The first is that we need to set all JavaScript files and module files to "Copy always" or "Copy if newer" manually. The second is that this way we cannot retrieve the cloud service configuration information.
For example, we defined the endpoint in the worker role properties, but we also hardcoded the listening port in Node.js. This should be changed so that our Node.js can retrieve the endpoint itself, but I can tell you it won't work with this approach. In the next post I will describe another way to execute "node.exe" and the Node.js application, so that we can get the cloud service configuration in Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js through the Windows Azure Node.js SDK.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, tries to fix what isn't a problem in that part, and gets frustrated that "things are so slow with <insert your favorite framework X here>". I've been in the O/R mapper business for a long time now (almost 10 years, full time), and as it's a small world, we O/R mapper developers know almost all the tricks by now: we all know what to do to make task ABC faster, and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster at X, others at Y, but you can be sure the difference is mainly the result of a compromise some developers are willing to make and others aren't. That's why the O/R mapper frameworks on the market today differ in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks (there always is), but it's no longer a matter of "the slowness of the application is caused by the O/R mapper". Perhaps query generation can be optimized a bit here and row materialization a bit there, but it mainly comes down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), a 10ms difference won't be noticed by your application. That's why it's very important to find the real locations of the problems, so developers can fix them properly and don't get frustrated because their quest for a fast, performing application failed.

Performance tuning basics and rules
Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and decide how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, leads down the path of premature optimization; it won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what "performance problem" actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which aggregates, groups and sorts 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic.
If I look solely at the Linq query and the code consuming the 10-row resultset, and then at the time it takes to complete the whole procedure, it will appear slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "penny wise, pound foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger share of that same total time T. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you optimize that part away completely. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis means no identified problem and thus no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software as possible, you will inevitably face the dilemma of compromising one or more of {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it; in almost all cases it's not. The reason is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get and you know the algorithm will work. The time taken by the code implementing the algorithm is inevitable: you already implemented the best algorithm. You might find some optimizations at the technical level, but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

Isolate
The first thing you need to do is isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page takes several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area, with a clear beginning and end, and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by another task, the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

Analyze
Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem.
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, web service, windows etc.), a part which controls the interface and business logic, the O/R mapper part, and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a share of the total time it takes to complete a task, e.g. loading a web page with all orders of a given customer X. To understand which parts participate in the task / area we're investigating, and how much each contributes to the total time taken, analysis of each participating part is essential. Start with the code you wrote which starts the task, analyze it, and track the path it follows through your application. Check what the code does along the way and verify whether it's correct. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths: just the code path of the current problematic area, from beginning to end and back. Don't dig in and start optimizing at the code level just yet; we're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, analyzing means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good way to get a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and, for example, how many times they are called. There are two different kinds of tools needed: .NET profilers and O/R mapper / RDBMS profilers.

.NET profiling
.NET profilers (e.g. dotTrace by JetBrains or ANTS by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run each piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code; they thus reveal where the hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone. Remember our penny wise, pound foolish saying: if a profiler reveals that a group of methods is fast, or doesn't contribute much to the total time taken for a given task, ignore them. Even if the code in them is complex and looks like a candidate for optimization: you can work all day on that, it won't matter. As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet.
You navigate to the particular part which is slow, start profiling in the profiler, perform the actions which are considered slow in your application, and afterwards take a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most of the data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.

O/R mapper and RDBMS profiling
.NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and at the software making it possible to consume the database in your application: the O/R mapper. To understand how much the parts of the O/R mapper and the database contribute to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos, or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf), which work the same as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor ships one. For example, for SQL Server the profiler is shipped with SQL Server, and for Oracle it's built into the RDBMS; there are also 3rd party tools. Which tool you use isn't really important; what's important is that you get insight into which queries were executed during the task / area we're currently focused on, and how long they took. Here the O/R mapper profilers have an advantage, as they measure the query execution time from the application's perspective, so they also include the time it took to transport data across the network. This matters because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time. Another tool to use here, which is more low-level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate do support it), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often other behind-the-scenes activity as well. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

Interpret
After we've completed the analysis step, it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take place, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken, and which parts are irrelevant. Both aspects are very important.
The parts which are irrelevant (i.e. don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We first have to look at the .NET profiler results, to see whether the time is consumed in our own code, in .NET framework code, in the O/R mapper itself, or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on how fast the data is fetched from the database. If just 1 query was executed, according to tracing or the O/R mapper / RDBMS profilers, check whether that query is optimal, uses indexes, or has to deal with a lot of data. Interpreting means that you follow the path from beginning to end through the collected data and determine where along the path the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10-row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you have to decide whether further action in this area is necessary, based on what the analysis results show: if the analysis results were unexpected, and there is room for improvement in the area contributing the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that, in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation has changed.

Fix
After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you actively correct the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results. After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again, .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly on the RDBMS; these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers, or data access and databases in general. In most cases it's not a big issue: the alternatives are often good choices too, and the compromises aren't that hard to deal with. What is important is that you document why you made a choice or compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come.

Most common performance problems with O/R mappers
Below is an incomplete list of common performance problems related to data access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step.
- SELECT N+1 (lazy-loading specific): performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid, with a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid makes the grid fetch (indirectly) the Customer row for each Order row. This means you'll get for the single list not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this, use eager loading with a prefetch path to fetch the customers together with the orders (a minimal sketch follows this list). SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.
- Prefetch paths using many path nodes, or sorting, or limiting: an eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, the amount of data fetched in a child node can be bigger than you think. Also consider that the data in every node is merged on the client into the parent. This is fast, but it can still take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.
- Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.
- Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the number of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so the executed query returns only the rows in the requested page). When using paging in a web application, be sure to switch on server-side paging on the datasource control used; paging on the grid alone is not enough, as that can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep in mind that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the data reader will then do the DISTINCT filtering on the client. This is a little slower, but it does perform the paging on the data reader, so it won't fetch all rows even if the query suggests it does.
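Referring back to the SELECT N+1 item above, here is a minimal sketch of the eager-loading fix. The entity and property names are hypothetical, and the API shapes follow my recollection of LLBLGen Pro's adapter model, so treat it as illustrative rather than authoritative.

// Sketch: fetch orders together with their customers in 2 queries
// (one per prefetch path node) instead of 1 + N.
var orders = new EntityCollection<OrderEntity>();
var prefetch = new PrefetchPath2((int)EntityType.OrderEntity);
prefetch.Add(OrderEntity.PrefetchPathCustomer); // eager-load Customer per order

using (var adapter = new DataAccessAdapter())
{
    // null filter bucket: fetch all orders; the prefetch path pulls the
    // related customers in one extra query and merges them client-side.
    adapter.FetchEntityCollection(orders, null, prefetch);
}
// Binding Customer.CompanyName in the grid now triggers no lazy-load queries.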
- Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the time spent on data transport across the network. Use this optimization if you see a big difference between the query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.
- Doing client-side aggregates/scalar calculations by consuming a lot of data: if possible, formulate a scalar query or group-by query using the projection system or the GetScalar functionality of LLBLGen Pro, so the data is consumed on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all into memory and then traverse it in-memory to calculate a value.
- Using .ToList() constructs inside Linq queries: you might use .ToList() somewhere in a Linq query, which makes the query run partially in memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers into memory and do the filtering in-memory, as the Linq query is defined on an IEnumerable<T> and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run (a corrected form of this example follows this list).
- Fetching all entities to delete into memory first: to delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly, deleting the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper, however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity was actually removed from the DB and can thus be removed from the cache.
- Fetching all entities to update with an expression into memory first: similar to the previous point, it is more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop, and saving them afterwards. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's an RDBMS somewhere.
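As referenced in the .ToList() item above, a corrected form of that example keeps the query on IQueryable<T> so the filter translates to SQL; this sketch reuses the hypothetical metaData context from the list.

// The filter now stays on IQueryable<T> and becomes a SQL WHERE clause:
var q = from c in metaData.Customers          // no .ToList() here
        where c.Country == "Norway"
        select c;

// Materialize only the filtered rows; the database does the work.
var norwegianCustomers = q.ToList();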
Conclusion
Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key to success, as is knowing what's going on inside the application you built. I hope you'll find this guide useful in tracking down performance problems and dealing with them effectively.

