Search Results

Search found 654 results on 27 pages for 'ds'.

Page 17/27 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • Is there a way to get each row's value from a database into an array?

    - by Guyver
    Say I have a query like the one below. What would be the best way to put each value into an array if I don't know how many results there will be? Normally I would do this with a loop, but I have no idea how many results there are. Would I need to run another query to count the results first? <CFQUERY name="alllocations" DATASOURCE="#DS#"> SELECT locationID FROM tblProjectLocations WHERE projectID = '#ProjectName#' </CFQUERY>
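
    A minimal CFML sketch, assuming the alllocations query above: a ColdFusion query object already carries its row count in recordCount, so no second COUNT query is needed, and you can loop over the rows directly:

        <cfset locationArray = arrayNew(1)>
        <cfloop query="alllocations">
            <cfset arrayAppend(locationArray, alllocations.locationID)>
        </cfloop>
        <!--- alllocations.recordCount holds the number of rows, if you need it --->

    For a single-column result, listToArray(valueList(alllocations.locationID)) is a common one-liner alternative, though it misbehaves if the values can contain commas.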

    Read the article

  • Position of Footer is Constant

    - by mdogg
    How can I get my footer to be at the bottom of the container, after everything in main? Here's the site (it's fine on the homepage, but not on any of the other pages): http://dl.dropbox.com/u/122695/ds/index.html
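
    Without seeing the stylesheet this is only a sketch, but the usual culprit when a footer rides up on some pages is floated content inside the main container: the floats are taken out of flow, the container collapses, and the footer sits beside or above them. Clearing the float on the footer, or making the container enclose its floats, pushes the footer below everything in main; the selector names here are assumptions:

        /* assumes a #main container holding floated columns, followed by #footer */
        #footer {
            clear: both;
        }
        /* or make the container wrap its floated children */
        #main {
            overflow: hidden;
        }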

    Read the article

  • Problem in LINQ query formation

    - by Newbie
    I have written List<int> Uids = new List<int>(); Uids = (from returnResultSet in ds.ToList() from portfolioReturn in returnResultSet.Portfolios from baseRecord in portfolioReturn.ChildData select new int { id = baseRecord.Id }).ToList<int>(); I'm getting the error: 'int' does not contain a definition for 'id'. What is the problem? Thanks
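
    The object initializer new int { id = ... } is the culprit: that syntax needs a type with a settable member named id, and System.Int32 has none. Since the query projects a single value, selecting it directly is the likely fix; a sketch against the question's names, assuming baseRecord.Id is an int:

        List<int> Uids = (from returnResultSet in ds.ToList()
                          from portfolioReturn in returnResultSet.Portfolios
                          from baseRecord in portfolioReturn.ChildData
                          select baseRecord.Id).ToList();

    The initial new List<int>() assignment is also redundant, since the query result immediately replaces it.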

    Read the article

  • DRBD not syncing between my nodes when IP is reset

    - by ramdaz
    I am trying to set up DRBD by following the article at http://www.howtoforge.com/setting-up-network-raid1-with-drbd-on-ubuntu-11.10-p2 (Ubuntu 10.04, DRBD 8.3.11). On the first run I had everything working perfectly; when shifting the systems to a production environment, where the IPs had changed entirely, I decided to redo the metadata creation and start from scratch. Issuing drbdadm create-md r0 on both servers runs successfully, but when I do "drbdadm -- --overwrite-data-of-peer primary all" on the primary, it fails to start the resync. My config file is given below:

        resource r0 {
            protocol C;
            syncer { rate 50M; }
            startup { wfc-timeout 15; degr-wfc-timeout 60; }
            net { cram-hmac-alg sha1; shared-secret "aklsadkjlhdbskjndsf8738734jkfkjfkjf"; }
            on primaryds {
                device /dev/drbd0;
                disk /dev/md2;
                address 172.16.7.1:7788;
                meta-disk internal;
            }
            on secondaryds {
                device /dev/drbd0;
                disk /dev/md2;
                address 172.16.7.3:7788;
                meta-disk internal;
            }
        }

    Status on primary:

        root@primaryds:~# cat /proc/drbd
        version: 8.3.7 (api:88/proto:86-91)
        GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by root@primaryds, 2012-05-12 15:08:01
         0: cs:WFBitMapS ro:Primary/Secondary ds:UpToDate/Inconsistent C r----
            ns:0 nr:0 dw:0 dr:200 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:5690352828

    Status on secondary:

        root@secondaryds:/etc/drbd.d# cat /proc/drbd
        version: 8.3.7 (api:88/proto:86-91)
        GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by root@secondaryds, 2012-05-12 15:25:25
         0: cs:WFBitMapT ro:Secondary/Primary ds:Inconsistent/UpToDate C r----
            ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:5690352828

    Log of primary:

        May 30 13:42:23 primaryds kernel: [ 1584.057076] block drbd0: role( Secondary -> Primary ) disk( Inconsistent -> UpToDate )
        May 30 13:42:23 primaryds kernel: [ 1584.086264] block drbd0: Forced to consider local data as UpToDate!
        May 30 13:42:23 primaryds kernel: [ 1584.086303] block drbd0: Creating new current UUID
        May 30 13:42:26 primaryds kernel: [ 1586.405551] block drbd0: drbd_sync_handshake:
        May 30 13:42:26 primaryds kernel: [ 1586.405564] block drbd0: self E8A075F378173D4B:0000000000000004:0000000000000000:0000000000000000 bits:1422588207 flags:0
        May 30 13:42:26 primaryds kernel: [ 1586.405574] block drbd0: peer 0000000000000004:0000000000000000:0000000000000000:0000000000000000 bits:1422588207 flags:0
        May 30 13:42:26 primaryds kernel: [ 1586.405582] block drbd0: uuid_compare()=2 by rule 30
        May 30 13:42:26 primaryds kernel: [ 1586.405587] block drbd0: Becoming sync source due to disk states.
        May 30 13:42:26 primaryds kernel: [ 1586.405592] block drbd0: Writing the whole bitmap, full sync required after drbd_sync_handshake.
        May 30 13:42:27 primaryds kernel: [ 1588.171638] block drbd0: 5427 GB (1422588207 bits) marked out-of-sync by on disk bit-map.
        May 30 13:42:27 primaryds kernel: [ 1588.172769] block drbd0: conn( Connected -> WFBitMapS )

    Log of secondary:

        May 30 13:42:24 secondaryds kernel: [ 1563.304894] block drbd0: peer( Secondary -> Primary ) pdsk( Inconsistent -> UpToDate )
        May 30 13:42:24 secondaryds kernel: [ 1563.339674] block drbd0: drbd_sync_handshake:
        May 30 13:42:24 secondaryds kernel: [ 1563.339685] block drbd0: self 0000000000000004:0000000000000000:0000000000000000:0000000000000000 bits:1422588207 flags:0
        May 30 13:42:24 secondaryds kernel: [ 1563.339695] block drbd0: peer E8A075F378173D4B:0000000000000004:0000000000000000:0000000000000000 bits:1422588207 flags:0
        May 30 13:42:24 secondaryds kernel: [ 1563.339703] block drbd0: uuid_compare()=-2 by rule 20
        May 30 13:42:24 secondaryds kernel: [ 1563.339709] block drbd0: Becoming sync target due to disk states.
        May 30 13:42:24 secondaryds kernel: [ 1563.339714] block drbd0: Writing the whole bitmap, full sync required after drbd_sync_handshake.
        May 30 13:42:26 secondaryds kernel: [ 1565.652342] block drbd0: 5427 GB (1422588207 bits) marked out-of-sync by on disk bit-map.
        May 30 13:42:26 secondaryds kernel: [ 1565.652965] block drbd0: conn( Connected -> WFBitMapT )

    The servers stop responding once they reach this stage. I tried redoing it a couple of times, but nothing happens. Why could the resync not be taking place? Any advice or directions would be appreciated.
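
    One thing worth checking first: the question says DRBD 8.3.11, but /proc/drbd on both nodes reports 8.3.7, so the running kernel module may not be the version the config was written for. Beyond that, a hedged sketch of the usual way to force a full resync, using stock drbdadm subcommands that exist in 8.3:

        drbdadm invalidate r0     # on the secondary: mark local data Inconsistent, making it the sync target
        drbdadm primary r0        # on the primary
        watch cat /proc/drbd      # cs: should move from WFBitMapS/WFBitMapT to SyncSource/SyncTarget

    With a 5.4 TB device and internal metadata, the initial bitmap write alone can take a while, so an apparently hung WFBitMap state is worth giving a few minutes before declaring it stuck.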

    Read the article

  • How to create a DataAccessLayer?

    - by NIGHIL DAS
    Hi, I am creating a database application in .NET. I am using a DataAccessLayer class for communicating between .NET objects and the database, but I am not sure that this class is correct. Can anyone cross-check it and rectify any mistakes? namespace IDataaccess { #region Collection Class public class SPParamCollection : List<SPParams> { } public class SPParamReturnCollection : List<SPParams> { } #endregion #region struct public struct SPParams { public string Name { get; set; } public object Value { get; set; } public ParameterDirection ParamDirection { get; set; } public SqlDbType Type { get; set; } public int Size { get; set; } public string TypeName { get; set; } // public string datatype; } #endregion /// <summary> /// Interface DataAccess Layer implimentation New version /// </summary> public interface IDataAccess { DataTable getDataUsingSP(string spName); DataTable getDataUsingSP(string spName, SPParamCollection spParamCollection); DataSet getDataSetUsingSP(string spName); DataSet getDataSetUsingSP(string spName, SPParamCollection spParamCollection); SqlDataReader getDataReaderUsingSP(string spName); SqlDataReader getDataReaderUsingSP(string spName, SPParamCollection spParamCollection); int executeSP(string spName); int executeSP(string spName, SPParamCollection spParamCollection, bool addExtraParmas); int executeSP(string spName, SPParamCollection spParamCollection); DataTable getDataUsingSqlQuery(string strSqlQuery); int executeSqlQuery(string strSqlQuery); SPParamReturnCollection executeSPReturnParam(string spName, SPParamReturnCollection spParamReturnCollection); SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection); SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection, bool addExtraParmas); int executeSPReturnParam(string spName, SPParamCollection spParamCollection, ref SPParamReturnCollection spParamReturnCollection); object getScalarUsingSP(string spName); object getScalarUsingSP(string spName, SPParamCollection spParamCollection); } } using IDataaccess; namespace Dataaccess { /// <summary> /// Class DataAccess Layer implimentation New version /// </summary> public class DataAccess : IDataaccess.IDataAccess { #region Public variables static string Strcon; DataSet dts = new DataSet(); public DataAccess() { Strcon = sReadConnectionString(); } private string sReadConnectionString() { try { //dts.ReadXml("C:\\cnn.config"); //Strcon = dts.Tables[0].Rows[0][0].ToString(); //System.Configuration.Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); //Strcon = config.ConnectionStrings.ConnectionStrings["connectionString"].ConnectionString; // Add an Application Setting.
//Strcon = "Data Source=192.168.50.103;Initial Catalog=erpDB;User ID=ipixerp1;Password=NogoXVc3"; Strcon = System.Configuration.ConfigurationManager.AppSettings["connection"]; //Strcon = System.Configuration.ConfigurationSettings.AppSettings[0].ToString(); } catch (Exception) { } return Strcon; } public SqlConnection connection; public SqlCommand cmd; public SqlDataAdapter adpt; public DataTable dt; public int intresult; public SqlDataReader sqdr; #endregion #region Public Methods public DataTable getDataUsingSP(string spName) { return getDataUsingSP(spName, null); } public DataTable getDataUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); dt = new DataTable(); adpt.Fill(dt); return (dt); } } } finally { connection.Close(); } } public DataSet getDataSetUsingSP(string spName) { return getDataSetUsingSP(spName, null); } public DataSet getDataSetUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); DataSet ds = new DataSet(); adpt.Fill(ds); return ds; } } } finally { connection.Close(); } } public SqlDataReader getDataReaderUsingSP(string spName) { return getDataReaderUsingSP(spName, null); } public SqlDataReader getDataReaderUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; sqdr = cmd.ExecuteReader(); return (sqdr); } } } finally { connection.Close(); } } public int executeSP(string spName) { return executeSP(spName, null); } public int executeSP(string spName, SPParamCollection spParamCollection, bool addExtraParmas) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { SqlParameter par = new SqlParameter(spParamCollection[count].Name, spParamCollection[count].Value); if (addExtraParmas) { par.TypeName = spParamCollection[count].TypeName; par.SqlDbType = spParamCollection[count].Type; } cmd.Parameters.Add(par); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; return (cmd.ExecuteNonQuery()); } } } finally { connection.Close(); } } public int 
executeSP(string spName, SPParamCollection spParamCollection) { return executeSP(spName, spParamCollection, false); } public DataTable getDataUsingSqlQuery(string strSqlQuery) { try { using (connection = new SqlConnection(Strcon)) connection.Open(); { using (cmd = new SqlCommand(strSqlQuery, connection)) { cmd.CommandType = CommandType.Text; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); dt = new DataTable(); adpt.Fill(dt); return (dt); } } } finally { connection.Close(); } } public int executeSqlQuery(string strSqlQuery) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(strSqlQuery, connection)) { cmd.CommandType = CommandType.Text; cmd.CommandTimeout = 60; intresult = cmd.ExecuteNonQuery(); return (intresult); } } } finally { connection.Close(); } } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamReturnCollection spParamReturnCollection) { return executeSPReturnParam(spName, null, spParamReturnCollection); } public int executeSPReturnParam() { return 0; } public int executeSPReturnParam(string spName, SPParamCollection spParamCollection, ref SPParamReturnCollection spParamReturnCollection) { try { SPParamReturnCollection spParamReturned = new SPParamReturnCollection(); using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; foreach (SPParams paramReturn in spParamReturnCollection) { SqlParameter _parmReturn = new SqlParameter(paramReturn.Name, paramReturn.Size); _parmReturn.Direction = paramReturn.ParamDirection; if (paramReturn.Size > 0) _parmReturn.Size = paramReturn.Size; else _parmReturn.Size = 32; _parmReturn.SqlDbType = paramReturn.Type; cmd.Parameters.Add(_parmReturn); } cmd.CommandTimeout = 60; intresult = cmd.ExecuteNonQuery(); connection.Close(); //for (int i = 0; i < spParamReturnCollection.Count; i++) //{ // spParamReturned.Add(new SPParams // { // Name = spParamReturnCollection[i].Name, // Value = cmd.Parameters[spParamReturnCollection[i].Name].Value // }); //} } } return intresult; } finally { connection.Close(); } } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection) { return executeSPReturnParam(spName, spParamCollection, spParamReturnCollection, false); } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection, bool addExtraParmas) { try { SPParamReturnCollection spParamReturned = new SPParamReturnCollection(); using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { //cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); SqlParameter par = new SqlParameter(spParamCollection[count].Name, spParamCollection[count].Value); if (addExtraParmas) { par.TypeName = spParamCollection[count].TypeName; par.SqlDbType = spParamCollection[count].Type; } cmd.Parameters.Add(par); } cmd.CommandType = 
CommandType.StoredProcedure; foreach (SPParams paramReturn in spParamReturnCollection) { SqlParameter _parmReturn = new SqlParameter(paramReturn.Name, paramReturn.Value); _parmReturn.Direction = paramReturn.ParamDirection; if (paramReturn.Size > 0) _parmReturn.Size = paramReturn.Size; else _parmReturn.Size = 32; _parmReturn.SqlDbType = paramReturn.Type; cmd.Parameters.Add(_parmReturn); } cmd.CommandTimeout = 60; cmd.ExecuteNonQuery(); connection.Close(); for (int i = 0; i < spParamReturnCollection.Count; i++) { spParamReturned.Add(new SPParams { Name = spParamReturnCollection[i].Name, Value = cmd.Parameters[spParamReturnCollection[i].Name].Value }); } } } return spParamReturned; } catch (Exception ex) { return null; } finally { connection.Close(); } } public object getScalarUsingSP(string spName) { return getScalarUsingSP(spName, null); } public object getScalarUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); cmd.CommandTimeout = 60; } cmd.CommandType = CommandType.StoredProcedure; return cmd.ExecuteScalar(); } } } finally { connection.Close(); cmd.Dispose(); } } #endregion } }
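
    A few things stand out on a cross-check. The connection, cmd, adpt and sqdr fields are shared instance state, which makes the class unsafe to use from more than one thread; as posted, getDataUsingSqlQuery has its braces misplaced, so only connection.Open() sits inside the using statement and the command then runs on a connection that has already been disposed; and the finally { connection.Close(); } blocks are redundant once using is in place. A minimal sketch of the safer shape for one method, reusing the question's SPParams types and the System.Data / System.Data.SqlClient namespaces:

        public DataTable getDataUsingSP(string spName, SPParamCollection spParamCollection)
        {
            using (SqlConnection connection = new SqlConnection(Strcon))
            using (SqlCommand cmd = new SqlCommand(spName, connection))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.CommandTimeout = 60;
                if (spParamCollection != null)
                    foreach (SPParams p in spParamCollection)
                        cmd.Parameters.AddWithValue(p.Name, p.Value);

                DataTable dt = new DataTable();
                new SqlDataAdapter(cmd).Fill(dt);  // Fill opens and closes the connection itself
                return dt;
            }
        }

    The same local-variable pattern applies to the other methods; getDataReaderUsingSP is the one exception, since the reader must outlive the method, so it would need CommandBehavior.CloseConnection rather than a using block around the connection.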

    Read the article

  • HDFS: some datanodes of the cluster suddenly disconnect while reducers are running

    - by user1429825
    I have 8 slave computers and 1 master computer running Hadoop (ver 0.21). Some datanodes of the cluster suddenly disconnect while I am running MapReduce code on 10GB of data. After all mappers finished and around 80% of the reducers had been processed, randomly one or more datanodes disconnected from the network, and then the other datanodes started to disappear from the network as well, even though I killed the MapReduce job as soon as I found that a datanode had disconnected. I've tried changing dfs.datanode.max.xcievers to 4096, turned off the firewalls of all computing nodes, disabled SELinux and increased the open-file limit to 20000, but none of it worked at all... does anyone have an idea how to solve this problem? The following is the error log from MapReduce:

        12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED
        java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889)
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820)
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427)

    And the following are logs from a datanode:

        2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010
        2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010
        2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: Connection timed out
            at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
            at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
            at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
            at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
            at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257)
            at java.lang.Thread.run(Thread.java:722)
        2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453
        2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010]
            at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
            at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164)
            at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203)
            at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388)
            at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476)
            at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284)
            at java.lang.Thread.run(Thread.java:722)

    hdfs-site.xml:

        <configuration>
          <property>
            <name>dfs.name.dir</name>
            <value>/home/hadoop/data/name</value>
          </property>
          <property>
            <name>dfs.data.dir</name>
            <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value>
          </property>
          <property>
            <name>dfs.replication</name>
            <value>3</value>
          </property>
          <property>
            <name>dfs.datanode.max.xcievers</name>
            <value>4096</value>
          </property>
          <property>
            <name>dfs.http.address</name>
            <value>0.0.0.0:20070</value>
            <description>50070 The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.http.address</name>
            <value>0.0.0.0:20075</value>
            <description>50075 The datanode http server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.secondary.http.address</name>
            <value>0.0.0.0:20090</value>
            <description>50090 The secondary namenode http server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.address</name>
            <value>0.0.0.0:20010</value>
            <description>50010 The address where the datanode server will listen to. If the port is 0 then the server will start on a free port.</description>
          <property>
            <name>dfs.datanode.ipc.address</name>
            <value>0.0.0.0:20020</value>
            <description>50020 The datanode ipc server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.https.address</name>
            <value>0.0.0.0:20475</value>
          </property>
          <property>
            <name>dfs.https.address</name>
            <value>0.0.0.0:20470</value>
          </property>
        </configuration>

    mapred-site.xml:

        <configuration>
          <property>
            <name>mapred.job.tracker</name>
            <value>masternode:29001</value>
          </property>
          <property>
            <name>mapred.system.dir</name>
            <value>/home/hadoop/data/mapreduce/system</value>
          </property>
          <property>
            <name>mapred.local.dir</name>
            <value>/home/hadoop/data/mapreduce/local</value>
          </property>
          <property>
            <name>mapred.map.tasks</name>
            <value>32</value>
            <description> default number of map tasks per job.</description>
          </property>
          <property>
            <name>mapred.tasktracker.map.tasks.maximum</name>
            <value>4</value>
          </property>
          <property>
            <name>mapred.reduce.tasks</name>
            <value>8</value>
            <description> default number of reduce tasks per job.</description>
          </property>
          <property>
            <name>mapred.map.child.java.opts</name>
            <value>-Xmx2048M</value>
          </property>
          <property>
            <name>io.sort.mb</name>
            <value>500</value>
          </property>
          <property>
            <name>mapred.task.timeout</name>
            <value>1800000</value> <!-- 30 minutes -->
          </property>
          <property>
            <name>mapred.job.tracker.http.address</name>
            <value>0.0.0.0:20030</value>
            <description> 50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>mapred.task.tracker.http.address</name>
            <value>0.0.0.0:20060</value>
            <description> 50060
          </property>
        </configuration>
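
    One more thing worth ruling out: as posted, hdfs-site.xml never closes the dfs.datanode.address property (the </property> before dfs.datanode.ipc.address is missing), and mapred-site.xml is missing a </description> in the mapred.task.tracker.http.address block. If that is really in the files rather than a copy-paste artifact, the daemons cannot parse them. A sketch of the corrected fragment:

        <property>
          <name>dfs.datanode.address</name>
          <value>0.0.0.0:20010</value>
        </property>
        <property>
          <name>dfs.datanode.ipc.address</name>
          <value>0.0.0.0:20020</value>
        </property>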

    Read the article

  • Force RAID to read "exiled" disk?

    - by user197015
    We have a RAID 6 array (Infortrend EonStor DS S16F) that recently had two disks fail. Immediately prior to replacing these two disks, a third, good, disk was accidentally ejected from the array. After reinserting this disk it is marked as "exiled" by the array's firmware, and so even after replacing the two failed disks with new ones the array refuses to rebuild the logical volume and remains inaccessible. Since the temporarily-ejected disk is still functional and nothing has been written to the array since it was ejected, it seems that it should theoretically be possible to recover all the data on the array, but how can we convince the array to use the data from the "exiled" disk? Thanks for any help or advice you can offer.

    Read the article

  • Using Synology NAS attached to WDS-Repeater

    - by Kai B
    I'm using the following devices for my home network:

        Router 1: Speedport W 723 V (192.168.2.1)
        Router 2: AVM Fritzbox 3270 (192.168.2.2)
        NAS: Synology DS 207+ (192.168.2.3)

    I successfully set up a WDS connection between the two routers: the Speedport acts as the base station, and the Fritzbox repeats the Speedport's Wi-Fi signal. Everything is fine so far. Now I'm trying to achieve the following: Client → Speedport (Base) → Fritzbox (Repeater) → Synology NAS. I want to use my Synology NAS attached to the Fritzbox (which is in repeating mode). I already gave it a static IP (as written above), but all connection attempts have failed. Did I miss something, or is this set-up simply impossible?

    Read the article

  • Crashing HVM domain, what do I do?

    - by rassie
    My DomUs on Xen 3.4 on RHEL5 are crashing when too much memory is needed:

        (XEN) p2m_pod_demand_populate: Out of populate-on-demand memory!
        (XEN) domain_crash called from p2m.c:1091
        (XEN) Domain 15 (vcpu#3) crashed on cpu#2:
        (XEN) ----[ Xen-3.4.0 x86_64 debug=n Not tainted ]----
        (XEN) CPU:   2
        (XEN) RIP:   0010:[<ffffffff80062c02>]
        (XEN) RFLAGS: 0000000000010216  CONTEXT: hvm guest
        (XEN) rax: 0000000000000000  rbx: 0000000000000001  rcx: 000000000000003f
        (XEN) rdx: 0000000004812000  rsi: ffff810001000000  rdi: ffff810004812000
        (XEN) rbp: 0000000000000282  rsp: ffff810007635cf0  r8:  ffff810037c0288e
        (XEN) r9:  00000000000023e1  r10: 0000000000000000  r11: 0000000000000001
        (XEN) r12: ffff81000000cb00  r13: ffff8100007e43f0  r14: ffff81000000fc10
        (XEN) r15: 00000000000280d2  cr0: 0000000080050033  cr4: 00000000000006e0
        (XEN) cr3: 0000000006760000  cr2: 0000000003d47078
        (XEN) ds: 0000  es: 0000  fs: 0000  gs: 0000  ss: 0000  cs: 0010

    Can I disable populate-on-demand for HVM somehow? Xen 3.3 didn't exhibit such behaviour...
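
    Populate-on-demand is only engaged for an HVM guest whose boot-time memory target is smaller than its maxmem, so a hedged workaround, sketched here for an xm-style guest config, is to set the two equal (at the cost of giving up ballooning headroom):

        # with memory == maxmem the guest's p2m is fully populated at boot,
        # so p2m_pod_demand_populate should never run out
        memory = 2048
        maxmem = 2048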

    Read the article

  • Dynamic DNS at freedns.afraid.org using a Fritz!Box

    - by kai
    I am having some trouble setting up dynamic DNS with my Fritz!Box 7360. I have set up the Dynamic DNS page with (this is translated from German, so it might be worded a bit differently):

        [x] Use dynamic DNS
        Dynamic DNS Provider: User defined
        Update-URL: https://freedns.afraid.org/dynamic/update.php?MY-DIRECT-URL-KEY
        Domain Name: mydomain.crabdance.com
        User Name: myusername
        Password: mypassword

    Now on the FritzBox status page, it says:

        Dynamic DNS: activated, mydomain.crabdance.com, Status: Account temporarily deactivated

    When I check back on http://freedns.afraid.org, my IP address never changes. Is there any way to fix this? Note my router is on an IPv6 network (m-net), with IPv4 only through DS-Lite; I'm not sure whether this affects anything. Update: Following the guide here (putting myusername instead of MY-DIRECT-URL-KEY) hasn't brought any success. However, the status field has changed slightly:

        Dynamic DNS: activated, mydomain.crabdance.com, Status: unknown
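
    A hedged sketch of an update URL worth trying: the FRITZ!Box substitutes placeholders (<username>, <pass>, <domain>, <ipaddr>) into a user-defined update URL, and freedns.afraid.org's update.php accepts an explicit address parameter, so appending it lets the box report the address it wants registered (MY-DIRECT-URL-KEY stays your direct-URL key):

        https://freedns.afraid.org/dynamic/update.php?MY-DIRECT-URL-KEY&address=<ipaddr>

    Separately, with IPv4 only via DS-Lite there is no public IPv4 address of your own to register, which could explain the updates never sticking regardless of the URL.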

    Read the article

  • Can you authenticate into SSAS with AD LDS (ADAM) accounts?

    - by Jaxidian
    I'm very new to AD LDS and experienced but not qualified with SSAS, so my apologies for my ignorance of these. We have a couple of implementations where we expose SSAS via an HTTPS proxy (msmdpump.dll), and currently we have a temporary domain set up to handle this (non-ideal, because our end-users get a second account and credentials to manage). I want to move us towards a more permanent solution, which I'm thinking means moving all authentication to AD LDS for our web apps, SSAS, and others. However, SSAS is where I'm concerned. I know SSAS requires Windows Authentication to play nicely, and that this ultimately means Active Directory will be involved. Is there a way to get this done with AD LDS instead of having to use a full AD DS implementation? If so, how? (Note: My question over at StackOverflow had a suggestion that I post this question here on ServerFault instead. My apologies if I'm not asking in the right forum.)

    Read the article

  • How to Make Your Verizon FIOS Router 1000% More Secure

    - by The Geek
    If you’ve just switched to Verizon FIOS and they’ve installed the new router in your house, there’s just one problem: it’s set to use lousy WEP encryption by default, instead of the much more secure WPA2. Here’s how to fix it. The problem with WEP encryption is that it can be cracked really easily—a skilled hacker can do it in a few minutes, and even an unskilled geek can do it in just a little more time with the right tools. Once they’ve done that, they can leech off your internet connection and do anything they want—including illegal stuff coming from your network. Note: if you are using an old Nintendo DS connected to the internet, they usually only support WEP encryption, so you may not want to do this.

    Read the article

  • OVF Tools error

    - by ToreTrygg
    Hello all, I am trying to use OVF Tools 1.0 to create an .ovf appliance split into 4GB chunks. When I run the following command: ovftool --chunkSize=4gb vi://:@/?ds= e:\test\test.ovf everything starts off fine. It will go for about 15-16%, then I get this message: "Error: unable to get NFC ticket for target disk". I have looked online, but cannot find anything that matches this problem. I am using Windows 2k3 as the OVF creation server, and I'm in an ESX 3.5 environment. The OVF must be split into 4GB chunks (or less) so they can be put onto DVDs. Also, when it is done it deletes the files it started to create. Any help would be greatly appreciated.

    Read the article

  • Cannot start tor with vidalia, failed to bind listening port because of tor-socks running

    - by ganjan
    I get these errors trying to run Tor with Vidalia:

        Apr 19 21:55:15.371 [Notice] Tor v0.2.1.30. This is experimental software. Do not rely on it for strong anonymity. (Running on Linux i686)
        Apr 19 21:55:15.372 [Notice] Initialized libevent version 1.4.13-stable using method epoll. Good.
        Apr 19 21:55:15.373 [Notice] Opening Socks listener on 127.0.0.1:9050
        Apr 19 21:55:15.373 [Warning] Could not bind to 127.0.0.1:9050: Address already in use. Is Tor already running?
        Apr 19 21:55:15.373 [Warning] Failed to parse/validate config: Failed to bind one of the listener ports.
        Apr 19 21:55:15.373 [Error] Reading config failed--see warnings above.

    I don't think Tor is running. Here is an nmap scan of my localhost:

        Starting Nmap 5.21 ( http://nmap.org ) at 2011-04-19 21:59 CEST
        Nmap scan report for localhost (127.0.0.1)
        Host is up (0.0000050s latency).
        Hostname localhost resolves to 2 IPs. Only scanned 127.0.0.1
        rDNS record for 127.0.0.1: localhost.localdomain
        Not shown: 989 closed ports
        PORT      STATE SERVICE
        22/tcp    open  ssh
        53/tcp    open  domain
        80/tcp    open  http
        139/tcp   open  netbios-ssn
        445/tcp   open  microsoft-ds
        631/tcp   open  ipp
        3128/tcp  open  squid-http
        3306/tcp  open  mysql
        9000/tcp  open  cslistener
        9050/tcp  open  tor-socks
        10000/tcp open  snet-sensor-mgmt

    I see tor-socks is running here, which is probably the cause of the problem. How do I stop it from starting up? I want to use Vidalia so I can monitor what's going on.
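
    The nmap output already shows the culprit: a system-wide Tor instance (the tor-socks listener) owns 127.0.0.1:9050, so the tor process Vidalia launches cannot bind it. A sketch for Ubuntu of that era, stopping the packaged service and keeping it from starting at boot:

        sudo service tor stop           # frees port 9050 immediately
        sudo update-rc.d tor disable    # keeps the init script from starting it at boot

    Alternatively, Vidalia can be pointed at the already-running system Tor instead of starting its own, if its ControlPort is enabled in torrc.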

    Read the article

  • Assembly load and execute issue

    - by Jean Carlos Suárez Marranzini
    I'm trying to develop assembly code that lets me load and execute (by input of the user) two other .EXE programs. I'm having two problems: I don't seem to be able to assign the pathname to a valid register (or maybe the syntax is incorrect), and I need to be able to execute the second program after the first one (could be either) has started its execution. This is what I have so far:

        mov ax,cs      ; moving code segment to data segment
        mov ds,ax
        mov ah,1h      ; here I read from keyboard
        int 21h
        mov dl,al
        cmp al,'1'     ; if 1 jump to LOADRUN1
        JE LOADRUN1
        popf
        cmp al,'2'     ; if 2 jump to LOADRUN2
        JE LOADRUN2
        popf
        LOADRUN1:
        MOV AH,4BH
        MOV AL,00
        LEA DX,[PROGNAME1]   ; Not sure if it works
        INT 21H
        LOADRUN2:
        MOV AH,4BH
        MOV AL,00
        LEA DX,[PROGNAME2]   ; Not sure if it works
        INT 21H
        ; Here I define the bytes containing the pathnames
        PROGNAME1 db 'C:\Users\Usuario\NASM\Adding.exe',0
        PROGNAME2 db 'C:\Users\Usuario\NASM\Substracting.exe',0

    I just don't know how to start another program by input in the 'parent' program after one is already executing. Thanks in advance for your help! Any additional information I'll be more than happy to provide. I'm using NASM, 16 bits, Windows 7 32 bits.
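
    Two things the fragment is missing for INT 21h function 4Bh, sketched below in NASM (hedged; the offsets follow the standard DOS EXEC parameter-block layout). First, ES:BX must point to a parameter block. Second, a .COM-style program owns all free memory by default, so it must shrink its allocation with function 4Ah first, or EXEC fails with error 8 (insufficient memory):

        mov  bx, 1000h               ; paragraphs to keep for ourselves; adjust to the program's real size
        mov  ah, 4Ah                 ; resize our memory block so DOS can load the child
        int  21h

        mov  ax, cs
        mov  es, ax
        mov  word [ParamBlock+2], EmptyTail   ; offset of the command tail
        mov  [ParamBlock+4], cs               ; segment of the command tail
        mov  bx, ParamBlock          ; ES:BX -> EXEC parameter block
        mov  dx, PROGNAME1           ; DS:DX -> ASCIIZ path (DS=CS as in the code above)
        mov  ax, 4B00h               ; load and execute
        int  21h                     ; CF set on failure, error code in AX

        ParamBlock:
            dw 0                     ; environment segment, 0 = inherit the parent's
            dw 0, 0                  ; far pointer to the command tail, patched above
            dd 0, 0                  ; FCB pointers, unused here
        EmptyTail db 0, 0Dh          ; empty command tail: length byte, then CR

    EXEC returns to the instruction after int 21h when the child exits, so running the second program afterwards is just another 4B00h call; on old DOS versions EXEC clobbers SS:SP, so saving and restoring the stack pointer around the call is prudent. The stray popf instructions, with no matching pushf, will also unbalance the stack.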

    Read the article

  • Macports irssi & perl5 installation issues

    - by Dmitri DB
    Long time reader, first time poster. Big, appreciative thanks for everyone's collective questioning and answering here and at stackoverflow, it's helped me quite a lot over the time I've been learning answers through these sites! Apologies in advance if I didn't search hard enough on posts already up on this site to find out what I could do about this issue, but I thought I'd just reach out for the sake of trying at least once. I've experienced this issue while starting up my macports-installed version of irssi: 13:25 -!- Irssi: Error in script dispatch: 13:25 Can't locate lib.pm in @INC (@INC contains: /opt/local/lib/perl5/site_perl/5.12.4/darwin-multi-2level /opt/local/lib/perl5/site_perl/5.12.4 /opt/local/lib/perl5/vendor_perl/5.12.4/darwin-multi-2level /opt/local/lib/perl5/vendor_perl/5.12.4 /opt/local/lib/perl5/5.12.4/darwin-multi-2level /opt/local/lib/perl5/5.12.4 /opt/local/lib/perl5/site_perl/5.12.3/darwin-multi-2level /opt/local/lib/perl5/site_perl/5.12.3 /opt/local/lib/perl5/site_perl /opt/local/lib/perl5/vendor_perl .) at (eval 18) line 1. 13:25 BEGIN failed--compilation aborted at (eval 18) line 1. 13:25 Huh, strange. I looked into it a bit: [email protected] /opt/local/lib/perl5 ?- find . -name "lib.pm" -ls 14673887 16 -r--r--r-- 1 root admin 6853 25 Jun 23:39 ./5.12.4/darwin-thread-multi- 2level/lib.pm [email protected] /opt/local/lib/perl5 ?- l 5.12.4/darwin-thread-multi-2level total 1864 drwxr-xr-x 55 root admin 1870 28 Jun 19:28 . drwxr-xr-x 158 root admin 5372 28 Jun 19:28 .. -rw-r--r-- 1 root admin 177814 25 Jun 23:39 .packlist drwxr-xr-x 6 root admin 204 28 Jun 19:28 B -r--r--r-- 1 root admin 25714 25 Jun 23:39 B.pm drwxr-xr-x 64 root admin 2176 28 Jun 19:28 CORE drwxr-xr-x 3 root admin 102 28 Jun 19:28 Compress -r--r--r-- 1 root admin 3000 25 Jun 23:39 Config.pm -r--r--r-- 1 root admin 228094 25 Jun 23:39 Config.pod -r--r--r-- 1 root admin 409 25 Jun 23:39 Config_git.pl -r--r--r-- 1 root admin 38759 25 Jun 23:39 Config_heavy.pl -r--r--r-- 1 root admin 21174 25 Jun 23:39 Cwd.pm -r--r--r-- 1 root admin 63535 25 Jun 23:39 DB_File.pm drwxr-xr-x 3 root admin 102 28 Jun 19:28 Data drwxr-xr-x 5 root admin 170 28 Jun 19:28 Devel drwxr-xr-x 4 root admin 136 28 Jun 19:28 Digest -r--r--r-- 1 root admin 25185 25 Jun 23:39 DynaLoader.pm drwxr-xr-x 22 root admin 748 28 Jun 19:28 Encode -r--r--r-- 1 root admin 29731 25 Jun 23:39 Encode.pm -r--r--r-- 1 root admin 6736 25 Jun 23:39 Errno.pm -r--r--r-- 1 root admin 5445 25 Jun 23:39 Fcntl.pm drwxr-xr-x 5 root admin 170 28 Jun 19:28 File drwxr-xr-x 3 root admin 102 28 Jun 19:28 Filter -r--r--r-- 1 root admin 1819 25 Jun 23:39 GDBM_File.pm drwxr-xr-x 4 root admin 136 28 Jun 19:28 Hash drwxr-xr-x 3 root admin 102 28 Jun 19:28 I18N drwxr-xr-x 11 root admin 374 28 Jun 19:28 IO -r--r--r-- 1 root admin 1404 25 Jun 23:39 IO.pm drwxr-xr-x 6 root admin 204 28 Jun 19:28 IPC drwxr-xr-x 4 root admin 136 28 Jun 19:28 List drwxr-xr-x 4 root admin 136 28 Jun 19:28 MIME drwxr-xr-x 3 root admin 102 28 Jun 19:28 Math -r--r--r-- 1 root admin 2519 25 Jun 23:39 NDBM_File.pm -r--r--r-- 1 root admin 4208 25 Jun 23:39 O.pm -r--r--r-- 1 root admin 15563 25 Jun 23:39 Opcode.pm -r--r--r-- 1 root admin 21011 25 Jun 23:39 POSIX.pm -r--r--r-- 1 root admin 58962 25 Jun 23:39 POSIX.pod drwxr-xr-x 5 root admin 170 28 Jun 19:28 PerlIO -r--r--r-- 1 root admin 2515 25 Jun 23:39 SDBM_File.pm drwxr-xr-x 4 root admin 136 28 Jun 19:28 Scalar -r--r--r-- 1 root admin 10837 25 Jun 23:39 Socket.pm -r--r--r-- 1 root admin 41003 25 Jun 23:39 Storable.pm drwxr-xr-x 4 root admin 
136 28 Jun 19:28 Sys drwxr-xr-x 3 root admin 102 28 Jun 19:28 Text drwxr-xr-x 5 root admin 170 28 Jun 19:28 Time drwxr-xr-x 3 root admin 102 28 Jun 19:28 Unicode -r--r--r-- 1 root admin 14462 25 Jun 23:39 attributes.pm drwxr-xr-x 38 root admin 1292 28 Jun 19:28 auto -r--r--r-- 1 root admin 19892 25 Jun 23:39 encoding.pm -r--r--r-- 1 root admin 6853 25 Jun 23:39 lib.pm -r--r--r-- 1 root admin 11044 25 Jun 23:39 mro.pm -r--r--r-- 1 root admin 997 25 Jun 23:39 ops.pm -r--r--r-- 1 root admin 13945 25 Jun 23:39 re.pm drwxr-xr-x 3 root admin 102 28 Jun 19:28 threads -r--r--r-- 1 root admin 33283 25 Jun 23:39 threads.pm So, it sort of seems to me that the permissions which perl5 got installed with for these modules has gotten mixed up somehow? I'm not really a perl user beyond enjoying it for massive directory-recursive find/replace operations within text files, so I haven't much of an idea what the permissions here are supposed to look like, and I'm not really sure how to go about determining how macports has gone and installed perl this way when it's otherwise worked without failure for years now. Does anyone have any recommendations for the sanest path towards rectifying this issue? Also, is there any interesting reason as to why the macports default for the perl5 port installs 5.12.4, and not 5.16.0, which has to be explicitly installed via the perl5.16 port? Thanks again!

    Read the article

  • Cannot FTP without simultaneous SSH connection?

    - by Lucas
    I'm trying to set up an old box as a backup server (running 10.04.4 LTS). I intend to use 3rd party software on my PC to periodically connect to my server via FTP(S) and to mirror certain files. For some reason, all FTP connection attempts fail UNLESS I'm simultaneously connected via SSH. For example, if I use putty to test the connection to port 21, the system hangs and times out. I get: 220 Connected to LeServer USER lucas 331 Please specify the password. PASS [password] <cursor> However, when I'm simultaneously logged in (in another session) everything works: 220 Connected to LeServer USER lucas 331 Please specify the password. PASS [password] 230 Login successful. Basically, this means that my software will never be able to connect on its own, as intended. I know that the correct port is open because it works (sometimes) and nmap gives me: Starting Nmap 5.00 ( http://nmap.org ) at 2012-03-20 16:15 CDT Interesting ports on xx.xxx.xx.x: Not shown: 995 closed ports PORT STATE SERVICE 21/tcp open ftp 22/tcp open ssh 53/tcp open domain 139/tcp open netbios-ssn 445/tcp open microsoft-ds Nmap done: 1 IP address (1 host up) scanned in 0.15 seconds My only hypothesis is that this has something to do with iptables. Maybe it's allowing only established connections? I don't think that's how I set it up, but maybe? Here's my iptables rules for INPUT: lucas@rearden:~$ sudo iptables -L INPUT Chain INPUT (policy DROP) target prot opt source destination fail2ban-ssh tcp -- anywhere anywhere multiport dports ssh ufw-before-logging-input all -- anywhere anywhere ufw-before-input all -- anywhere anywhere ufw-after-input all -- anywhere anywhere ufw-after-logging-input all -- anywhere anywhere ufw-reject-input all -- anywhere anywhere ufw-track-input all -- anywhere anywhere ACCEPT tcp -- anywhere anywhere tcp dpt:ftp I'm using vsftpd. Any thoughts/resources on how I could fix this? L
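
    A hedged first thing to rule out, since the iptables INPUT policy is default-deny: FTP uses a second, separately-tracked data connection, and without the kernel's FTP connection-tracking helper loaded that data channel gets dropped, which can present as a session that stalls right after the credentials are sent:

        sudo modprobe nf_conntrack_ftp                        # load the FTP conntrack helper now
        echo nf_conntrack_ftp | sudo tee -a /etc/modules      # load it on every boot

    If that is not it, comparing sudo netstat -tlpn and the vsftpd passive-mode settings (pasv_enable, pasv_min_port/pasv_max_port) between the working and failing states would be the next step.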

    Read the article

  • Password problem while creating domain

    - by Murdock
    Hi, I'm a freshman in server management, but this seems to go against logic. After updating my Windows Server 2008 Standard 32-bit and installing the DNS Server and AD DS roles, I wanted to create a domain via CMD and the dcpromo.exe setup. But no matter whether I disable the demand for a complex password in the password policies or create a password which fully complies with the requirements for a strong, complex password, I can't get any further: it says that my password doesn't meet requirements. I'm also asked there to activate the password requirement with NET USER -passwordreq:yes, and when I do so, this password doesn't work any more and I have to remove it from another admin account to at least be able to log in with the proper Administrator account.
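
    A hedged sketch of the usual way around this: from an elevated command prompt, give the built-in Administrator account a fresh, policy-compliant password and re-enable its password requirement before running dcpromo, since when a new domain is created the local Administrator account becomes the domain Administrator account and it is that account's password dcpromo validates:

        rem prompts twice for a new, policy-compliant password
        net user Administrator *
        rem require a password for the account from now on
        net user Administrator /passwordreq:yes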

    Read the article

  • Connected 2 routers, but they won't talk

    - by ekolis
    I'm trying to set up a second WLAN at home (since the Nintendo DS firmware won't connect to my WPA-encrypted main WLAN), but when I connect my second router's WAN port to one of my main router's LAN ports, the routers won't talk, and I can't connect wirelessly to the second router. I can still see the second router's WLAN - I am just unable to connect to it. And it seems that even the main router can't see the second router, despite being plugged directly into it - I went to the main router's admin console and pinged the second router (which is receiving an IP address), but it was unable to reach it! Does anyone know what might be wrong? Thanks!

    Read the article

  • Xen DomU on DRBD device: barrier errors

    - by Halfgaar
    I'm testing setting up a Xen DomU with DRBD storage for easy failover. Most of the time, immediately after booting the DomU, I get an IO error:

        [ 3.153370] EXT3-fs (xvda2): using internal journal
        [ 3.277115] ip_tables: (C) 2000-2006 Netfilter Core Team
        [ 3.336014] nf_conntrack version 0.5.0 (3899 buckets, 15596 max)
        [ 3.515604] init: failsafe main process (397) killed by TERM signal
        [ 3.801589] blkfront: barrier: write xvda2 op failed
        [ 3.801597] blkfront: xvda2: barrier or flush: disabled
        [ 3.801611] end_request: I/O error, dev xvda2, sector 52171168
        [ 3.801630] end_request: I/O error, dev xvda2, sector 52171168
        [ 3.801642] Buffer I/O error on device xvda2, logical block 6521396
        [ 3.801652] lost page write due to I/O error on xvda2
        [ 3.801755] Aborting journal on device xvda2.
        [ 3.804415] EXT3-fs (xvda2): error: ext3_journal_start_sb: Detected aborted journal
        [ 3.804434] EXT3-fs (xvda2): error: remounting filesystem read-only
        [ 3.814754] journal commit I/O error
        [ 6.973831] init: udev-fallback-graphics main process (538) terminated with status 1
        [ 6.992267] init: plymouth-splash main process (546) terminated with status 1

    The manpage of drbdsetup says that LVM (which I use) doesn't support barriers (better known as tagged command queuing or native command queuing), so I configured the DRBD device not to use barriers. This can be seen in /proc/drbd (by "wo:f", meaning flush, the next method DRBD chooses after barrier):

        3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
           ns:2160152 nr:520204 dw:2680344 dr:2678107 al:3549 bm:9183 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

    And on the other host:

        3: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----
           ns:0 nr:2160152 dw:2160152 dr:0 al:0 bm:8052 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

    I also enabled the option disable_sendpage, as per the DRBD docs:

        cat /sys/module/drbd/parameters/disable_sendpage
        Y

    I also tried adding barriers=0 to fstab as a mount option. Still it sometimes says:

        [ 58.603896] blkfront: barrier: write xvda2 op failed
        [ 58.603903] blkfront: xvda2: barrier or flush: disabled

    I don't even know if ext3 has a nobarrier option. And, because only one of my storage systems is battery backed, it would not be smart anyway. Why does it still complain about barriers when I disabled them? Both hosts are:

        Debian: 6.0.4
        uname -a: Linux 2.6.32-5-xen-amd64
        drbd: 8.3.7
        Xen: 4.0.1

    Guest: Ubuntu 12.04 LTS, uname -a: Linux 3.2.0-24-generic pvops. The drbd resource:

        resource drbdvm {
            meta-disk internal;
            device /dev/drbd3;
            startup {
                # The timeout value when the last known state of the other side was available. 0 means infinite.
                wfc-timeout 0;
                # Timeout value when the last known state was disconnected. 0 means infinite.
                degr-wfc-timeout 180;
            }
            syncer {
                # This is recommended only for low-bandwidth lines, to only send those
                # blocks which really have changed.
                #csums-alg md5;
                # Set to about half your net speed
                rate 60M;
                # It seems that this option moved to the 'net' section in drbd 8.4. (later release than Debian has currently)
                verify-alg md5;
            }
            net {
                # The manpage says this is recommended only in pre-production (because of its performance), to determine
                # if your LAN card has a TCP checksum offloading bug.
                #data-integrity-alg md5;
            }
            disk {
                # Detach causes the device to work over-the-network-only after the
                # underlying disk fails. Detach is not default for historical reasons, but is
                # recommended by the docs.
                # However, the Debian defaults in drbd.conf suggest the machine will reboot in that event...
                on-io-error detach;
                # LVM doesn't support barriers, so disabling it. It will revert to flush.
                # Check wo: in /proc/drbd. If you don't disable it, you get IO errors.
                no-disk-barrier;
            }
            on host1 {
                # universe is a VG
                disk /dev/universe/drbdvm-disk;
                address 10.0.0.1:7792;
            }
            on host2 {
                # universe is a VG
                disk /dev/universe/drbdvm-disk;
                address 10.0.0.2:7792;
            }
        }

    DomU cfg:

        bootloader = '/usr/lib/xen-default/bin/pygrub'
        vcpus = '2'
        memory = '512'
        # Disk device(s).
        root = '/dev/xvda2 ro'
        disk = [ 'phy:/dev/drbd3,xvda2,w',
                 'phy:/dev/universe/drbdvm-swap,xvda1,w', ]
        # Hostname
        name = 'drbdvm'
        # Networking (fake IP for posting)
        vif = [ 'ip=1.2.3.4,mac=00:16:3E:22:A8:A7' ]
        # Behaviour
        on_poweroff = 'destroy'
        on_reboot = 'restart'
        on_crash = 'restart'

    In my test setup, the primary host's storage is a 9650SE SATA-II RAID PCIe card with battery; the secondary is software RAID1. Isn't DRBD+Xen widely used? With these problems, it's not going to work.
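
    On the side question: ext3 does have a barrier mount option, it is just spelled barrier=0 rather than nobarrier. A sketch of the guest's fstab line, though with only one battery-backed storage system, leaving barriers enabled wherever the stack honours them is the safer choice:

        # /etc/fstab inside the DomU (sketch): ext3 takes barrier=0/1, not 'nobarrier'
        /dev/xvda2  /  ext3  defaults,barrier=0  0  1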

    Read the article

  • Can't send emails through sendmail, error occurred

    - by skomak
    I have a sendmail MTA, and I use the PEAR Mail class to send mail through a remote sendmail server. Everything was fine until yesterday, and probably no changes were made to the configs. In the maillog I can see:

        May 6 12:58:55 xxx sendmail[25903]: STARTTLS=server, relay=hostxxxx.static.xx.xx.pl [85.x.x.x], version=TLSv1/SSLv3, verify=NO, cipher=DHE-RSA-AES256-SHA, bits=256/256
        May 6 12:58:56 xxx sendmail[25903]: o46AwtqE025903: hostxxxx.static.xx.xx.pl [85.x.x.x] did not issue MAIL/EXPN/VRFY/ETRN during connection to MTA2

    and in /var/log/messages:

        May 6 13:00:17 lilia sendmail[27193]: realm changed: authentication aborted

    I use LDAP to authenticate users, but I used the same script to check mailing on another server and it works there; only this server behaves weirdly. Packets are delivered to the sendmail server, I can see them in tcpdump, but the packets are smaller than those sent to the other server. Could you tell me how I can check what is wrong here? D.S.

    Read the article

  • OpenLDAP PAM authentication does not support SSHA on FreeBSD 10

    - by suker200
    Hi everyone. I lost a day figuring out that my FreeBSD 10 box cannot authenticate SSH users via LDAP because pam_ldap and nss_ldap do not support SSHA passwords, even though OpenLDAP supports the SSHA method. I have checked /usr/local/etc/ldap.conf; it only offers these pam_password methods: clear, crypt, nds, racf, ad, exop. If I switch to CRYPT, I can authenticate successfully. So I would appreciate any pointer or suggestion for making my FreeBSD 10 PAM setup support SSHA; is there a way, or is it impossible? Info: LDAP server is 389 DS on CentOS; the LDAP client is FreeBSD 10. What I have so far: authentication via LDAP between CentOS and CentOS works; CentOS (LDAP server) to FreeBSD fails (it works if I use crypt). Thanks and BR, suker200

    Read the article

  • What do I do when I get a Linux kernel bug?

    - by raldi
    I just bought a tiny computer called a fit-pc2 which came with a somewhat customized Ubuntu 9.10 installation. uname -a reports: Linux 2.6.31-34-fitpc2 #7 SMP Thu Apr 22 17:43:26 IDT 2010 i686 GNU/Linux It seems that after several hours of running with heavy network load, all networking ceases and I get the following in kern.log: BUG: unable to handle kernel paging request at ff09dfc0 IP: [<c0150300>] kthread_should_stop+0x10/0x20 *pde = 00000000 Oops: 0000 [#1] SMP last sysfs file: /sys/devices/pci0000:00/0000:00:1d.7/usb1/idVendor Modules linked in: binfmt_misc ppdev sbc_fitpc2_wdt snd_usb_audio snd_usb_lib i2c_isch sch_gpio snd_seq_dummy snd_hda_intel snd_pcm_oss snd_seq_oss snd_seq_midi snd_rawmidi snd_mixer_oss snd_seq_midi_event snd_seq snd_pcm snd_timer snd_page_alloc snd_seq_device iptable_filter ip_tables x_tables snd_hwdep lpc_sch snd psmouse rt2860sta(C) uvcvideo video pl2303 soundcore mfd_core output videodev v4l1_compat lirc_igorplugusb lirc_dev serio_raw lp parport usbhid r8169 mii iegd_mod drm agpgart Pid: 16, comm: kblockd/1 Tainted: G C (2.6.31-34-fitpc2 #7) SBC-FITPC2 EIP: 0060:[<c0150300>] EFLAGS: 00010246 CPU: 1 EIP is at kthread_should_stop+0x10/0x20 EAX: ff09dfc4 EBX: c180cbac ECX: 0109d000 EDX: f709df98 ESI: f709df98 EDI: c180cba0 EBP: f709dfb8 ESP: f709df90 DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068 Process kblockd/1 (pid: 16, ti=f709c000 task=f7084b60 task.ti=f709c000) Stack: c014c14d c180cba4 00000000 f7084b60 c0150770 f709dfa4 f709dfa4 f7023ef4 <0> c180cba0 c014c0d0 f709dfe0 c015047c 00000000 00000000 00000000 f709dfcc <0> f709dfcc c0150400 00000000 00000000 00000000 c0103ce7 f7023ef4 00000000 Call Trace: [<c014c14d>] ? worker_thread+0x7d/0xe0 [<c0150770>] ? autoremove_wake_function+0x0/0x40 [<c014c0d0>] ? worker_thread+0x0/0xe0 [<c015047c>] ? kthread+0x7c/0x90 [<c0150400>] ? kthread+0x0/0x90 [<c0103ce7>] ? 
kernel_thread_helper+0x7/0x10 Code: a6 8b 55 0c 8d 4d e0 89 f8 89 34 24 e8 7a fd ff ff 89 c3 eb 92 90 90 90 90 90 90 55 64 a1 00 80 76 c0 8b 80 70 02 00 00 89 e5 5d <8b> 40 fc c3 8d b6 00 00 00 00 8d bf 00 00 00 00 55 ba d7 86 62 EIP: [<c0150300>] kthread_should_stop+0x10/0x20 SS:ESP 0068:f709df90 CR2: 00000000ff09dfc0 ---[ end trace 06004df70b9cf435 ]--- BUG: unable to handle kernel paging request at ff09dfc8 IP: [<c0521bc8>] _spin_lock_irqsave+0x18/0x30 *pde = 00000000 Oops: 0002 [#2] SMP last sysfs file: /sys/devices/pci0000:00/0000:00:1d.7/usb1/idVendor Modules linked in: binfmt_misc ppdev sbc_fitpc2_wdt snd_usb_audio snd_usb_lib i2c_isch sch_gpio snd_seq_dummy snd_hda_intel snd_pcm_oss snd_seq_oss snd_seq_midi snd_rawmidi snd_mixer_oss snd_seq_midi_event snd_seq snd_pcm snd_timer snd_page_alloc snd_seq_device iptable_filter ip_tables x_tables snd_hwdep lpc_sch snd psmouse rt2860sta(C) uvcvideo video pl2303 soundcore mfd_core output videodev v4l1_compat lirc_igorplugusb lirc_dev serio_raw lp parport usbhid r8169 mii iegd_mod drm agpgart Pid: 16, comm: kblockd/1 Tainted: G D C (2.6.31-34-fitpc2 #7) SBC-FITPC2 EIP: 0060:[<c0521bc8>] EFLAGS: 00010086 CPU: 1 EIP is at _spin_lock_irqsave+0x18/0x30 EAX: 00000100 EBX: ff09dfc8 ECX: 00000286 EDX: ff09dfc8 ESI: f7084b60 EDI: ff09dfc4 EBP: f709dd88 ESP: f709dd88 DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068 Process kblockd/1 (pid: 16, ti=f709c000 task=f7084b60 task.ti=f709c000) Stack: f709dda4 c0127c0b 00000082 00000001 ff09dfc4 f7084b60 00000000 f709ddd0 <0> c0137fd2 00000086 f70954c4 00000000 f7098480 f709ddf0 f7094fc0 f7084b60 <0> 00000000 00000009 f709ddf0 c013c3f8 00000001 c1807c60 f709ddf0 f7084b60 Call Trace: [<c0127c0b>] ? complete+0x1b/0x60 [<c0137fd2>] ? mm_release+0x52/0xf0 [<c013c3f8>] ? exit_mm+0x18/0x110 [<c013c6db>] ? do_exit+0xfb/0x2e0 [<c013998a>] ? print_oops_end_marker+0x2a/0x30 [<c0522aab>] ? oops_end+0x8b/0xd0 [<c011eac4>] ? no_context+0xb4/0xd0 [<c011eb1d>] ? __bad_area_nosemaphore+0x3d/0x1a0 [<c0133a56>] ? load_balance_newidle+0x96/0x320 [<c011ec92>] ? bad_area_nosemaphore+0x12/0x20 [<c0524106>] ? do_page_fault+0x2f6/0x380 [<c012cc30>] ? finish_task_switch+0x50/0xe0 [<c0523e10>] ? do_page_fault+0x0/0x380 [<c0522006>] ? error_code+0x66/0x70 [<c0523e10>] ? do_page_fault+0x0/0x380 [<c0150300>] ? kthread_should_stop+0x10/0x20 [<c014c14d>] ? worker_thread+0x7d/0xe0 [<c0150770>] ? autoremove_wake_function+0x0/0x40 [<c014c0d0>] ? worker_thread+0x0/0xe0 [<c015047c>] ? kthread+0x7c/0x90 [<c0150400>] ? kthread+0x0/0x90 [<c0103ce7>] ? kernel_thread_helper+0x7/0x10 Code: 00 00 00 55 89 e5 f0 83 28 01 79 05 e8 02 ff ff ff 5d c3 55 89 c2 89 e5 9c 58 8d 74 26 00 89 c1 fa 90 8d 74 26 00 b8 00 01 00 00 <f0> 66 0f c1 02 38 e0 74 06 f3 90 8a 02 eb f6 89 c8 5d c3 90 8d EIP: [<c0521bc8>] _spin_lock_irqsave+0x18/0x30 SS:ESP 0068:f709dd88 CR2: 00000000ff09dfc8 ---[ end trace 06004df70b9cf436 ]--- Fixing recursive fault but reboot is needed! This seems to happen at least once a day. How do I even begin to debug this?

    Read the article
