Search Results

Search found 718 results on 29 pages for 'nintendo ds'.

Page 19/29 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • Problem in LINQ query formation

    - by Newbie
    I have written: List<int> Uids = new List<int>(); Uids = (from returnResultSet in ds.ToList() from portfolioReturn in returnResultSet.Portfolios from baseRecord in portfolioReturn.ChildData select new int { id = baseRecord.Id }).ToList<int>(); I am getting the error: 'int' does not contain a definition for 'id'. What is the problem with what I wrote? Thanks
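
    A sketch of the likely fix, assuming the goal is simply a list of the child record IDs: int has no settable members, so an object initializer cannot be applied to it; select the value directly instead.

        // Hedged sketch: select the Id value itself rather than
        // applying an object-initializer to int.
        List<int> Uids = (from returnResultSet in ds.ToList()
                          from portfolioReturn in returnResultSet.Portfolios
                          from baseRecord in portfolioReturn.ChildData
                          select baseRecord.Id).ToList();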

    Read the article

  • Chipmunk Physics or Box2D for a C++ 2D game engine?

    - by Mr.Gando
    Hello, I'm developing what is turning into a "cross-platform" 2D game engine. My initial target platform is iPhone OS, but I could move on to Android or even a console like the PSP or Nintendo DS, so I want to keep my options open. My engine is developed in C++, and I have been reading a lot about Box2D and Chipmunk, but I still can't decide which one to use as my physics middleware. Chipmunk appears to have been made to be embedded easily, and Box2D seems to be widely used. Chipmunk is C and Box2D is C++, but I've heard that Box2D's APIs are much worse than Chipmunk's. For now I will be using the engine's shape creation and collision detection features for irregular polygons (not concave). I value: 1) good APIs, 2) ease of integration, 3) portability. And of course, if you notice anything else, I would love to hear it. Which one do you think would fit my needs better?

    Read the article

  • 2D Engine scrolling on OpenGL via hardware?

    - by drudru
    Hi, I'm using OpenGL as the bottom end for a 2D tiling engine. When everything is 2D, it is simple to optimize certain operations. For example, scrolling: if I know a certain section of the screen needs to scroll off the bottom, I can just blit over that portion, even moving more than one pixel at a time. Without explicit hardware support (think old Nintendo hardware), this requires a lot of pixel writes; an on-chip bitblt would be the next best thing. Essentially, I'm looking at how I can optimize my GL calls to use VRAM texture renders as efficient hardware blits. Is it possible to have GL scroll the framebuffer, or should I just resign myself to double-buffering and re-rendering the entire scene for each frame? Thanks
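
    One hedged approach, assuming a GL 3.0-level context (or EXT_framebuffer_blit) is available: glBlitFramebuffer copies a rectangle of pixels entirely on the GPU, which behaves like a hardware blit. Names and parameters below are illustrative.

        /* Sketch: keep the scene in an offscreen FBO and blit the visible
         * window of it into the default framebuffer each frame. Scrolling
         * is then just a change of source offset; only newly exposed tiles
         * need re-rendering into the offscreen buffer. */
        #include <GL/glew.h>   /* assumes an extension loader is in use */

        void present_scrolled(GLuint scene_fbo, int scroll_x, int scroll_y,
                              int view_w, int view_h)
        {
            glBindFramebuffer(GL_READ_FRAMEBUFFER, scene_fbo);
            glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); /* default framebuffer */
            glBlitFramebuffer(scroll_x, scroll_y,
                              scroll_x + view_w, scroll_y + view_h,
                              0, 0, view_w, view_h,
                              GL_COLOR_BUFFER_BIT, GL_NEAREST);
        }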

    Read the article

  • DRBD not syncing between my nodes when IP is reset

    - by ramdaz
    I am trying to set up DRBD by following the article at http://www.howtoforge.com/setting-up-network-raid1-with-drbd-on-ubuntu-11.10-p2 I am using Ubuntu 10.04, DRBD 8.3.11. On the first run I had everything working perfectly, and when shifting the systems to a production environment I decided to redo the metadata creation and start from scratch. The IPs had changed entirely in the production environment. Issuing drbdadm create-md r0 on both servers runs successfully, but when I do "drbdadm -- --overwrite-data-of-peer primary all" on the primary, it fails to start the resync. My config file is given below: resource r0 { protocol C; syncer { rate 50M; } startup { wfc-timeout 15; degr-wfc-timeout 60; } net { cram-hmac-alg sha1; shared-secret "aklsadkjlhdbskjndsf8738734jkfkjfkjf"; } on primaryds { device /dev/drbd0; disk /dev/md2; address 172.16.7.1:7788; meta-disk internal; } on secondaryds { device /dev/drbd0; disk /dev/md2; address 172.16.7.3:7788; meta-disk internal; } } Status on primary: root@primaryds:~# cat /proc/drbd version: 8.3.7 (api:88/proto:86-91) GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by root@primaryds, 2012-05-12 15:08:01 0: cs:WFBitMapS ro:Primary/Secondary ds:UpToDate/Inconsistent C r---- ns:0 nr:0 dw:0 dr:200 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:5690352828 Status on secondary: root@secondaryds:/etc/drbd.d# cat /proc/drbd version: 8.3.7 (api:88/proto:86-91) GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by root@secondaryds, 2012-05-12 15:25:25 0: cs:WFBitMapT ro:Secondary/Primary ds:Inconsistent/UpToDate C r---- ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:5690352828 Log of primary: May 30 13:42:23 primaryds kernel: [ 1584.057076] block drbd0: role( Secondary -> Primary ) disk( Inconsistent -> UpToDate ) May 30 13:42:23 primaryds kernel: [ 1584.086264] block drbd0: Forced to consider local data as UpToDate! May 30 13:42:23 primaryds kernel: [ 1584.086303] block drbd0: Creating new current UUID May 30 13:42:26 primaryds kernel: [ 1586.405551] block drbd0: drbd_sync_handshake: May 30 13:42:26 primaryds kernel: [ 1586.405564] block drbd0: self E8A075F378173D4B:0000000000000004:0000000000000000:0000000000000000 bits:1422588207 flags:0 May 30 13:42:26 primaryds kernel: [ 1586.405574] block drbd0: peer 0000000000000004:0000000000000000:0000000000000000:0000000000000000 bits:1422588207 flags:0 May 30 13:42:26 primaryds kernel: [ 1586.405582] block drbd0: uuid_compare()=2 by rule 30 May 30 13:42:26 primaryds kernel: [ 1586.405587] block drbd0: Becoming sync source due to disk states. May 30 13:42:26 primaryds kernel: [ 1586.405592] block drbd0: Writing the whole bitmap, full sync required after drbd_sync_handshake. May 30 13:42:27 primaryds kernel: [ 1588.171638] block drbd0: 5427 GB (1422588207 bits) marked out-of-sync by on disk bit-map.
May 30 13:42:27 primaryds kernel: [ 1588.172769] block drbd0: conn( Connected -> WFBitMapS ) Log of secondary: May 30 13:42:24 secondaryds kernel: [ 1563.304894] block drbd0: peer( Secondary -> Primary ) pdsk( Inconsistent -> UpToDate ) May 30 13:42:24 secondaryds kernel: [ 1563.339674] block drbd0: drbd_sync_handshake: May 30 13:42:24 secondaryds kernel: [ 1563.339685] block drbd0: self 0000000000000004:0000000000000000:0000000000000000:0000000000000000 bits:1422588207 flags:0 May 30 13:42:24 secondaryds kernel: [ 1563.339695] block drbd0: peer E8A075F378173D4B:0000000000000004:0000000000000000:0000000000000000 bits:1422588207 flags:0 May 30 13:42:24 secondaryds kernel: [ 1563.339703] block drbd0: uuid_compare()=-2 by rule 20 May 30 13:42:24 secondaryds kernel: [ 1563.339709] block drbd0: Becoming sync target due to disk states. May 30 13:42:24 secondaryds kernel: [ 1563.339714] block drbd0: Writing the whole bitmap, full sync required after drbd_sync_handshake. May 30 13:42:26 secondaryds kernel: [ 1565.652342] block drbd0: 5427 GB (1422588207 bits) marked out-of-sync by on disk bit-map. May 30 13:42:26 secondaryds kernel: [ 1565.652965] block drbd0: conn( Connected -> WFBitMapT ) The servers stop responding once they reach this stage. I tried redoing it a couple of times, but nothing happens. Why could the resync not be taking place? I would appreciate any advice or directions.
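
    A hedged checklist, on the assumption that after the IP change the peers are stuck handshaking rather than actually syncing (the subcommands below are standard drbdadm; the right remedy depends on what the checks show):

        # Sketch, not a guaranteed fix: re-apply the config after the IP
        # change, verify the peers reach each other on port 7788, then
        # force a fresh handshake and retry the overwrite.
        drbdadm adjust r0                             # re-read drbd.conf (new addresses)
        netstat -tan | grep 7788                      # both peers should show ESTABLISHED
        drbdadm disconnect r0 && drbdadm connect r0   # fresh handshake
        drbdadm -- --overwrite-data-of-peer primary r0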

    Read the article

  • Ask the Readers: Do You Prefer Computers, Game Consoles, or Other Devices for Your Gaming Needs?

    - by Asian Angel
    Nearly everyone who has access to a computer will play games on it at some point, but many people also use a separate game platform as well. What we would like to know this week is whether you prefer using a computer, game consoles, or other devices for your gaming needs. Photo of Faith and Kate Connors from Mirror’s Edge by Tamahikari Tammas. Video games are a perfect way to relax and have fun at home (or at work if you can sneak in some game time!). The increasing variety of devices available with each passing year is making it easier to find a gaming platform to suit your needs or “darkest gaming desires”. For many people their computers are the perfect platform: they can play Flash-based games in their browsers, use the default set of games that come with their system, and install any extras that catch their eyes. The added benefit is that when game time is over they can drop right into their browsing, e-mail, personal projects, or work without having to switch hardware. The convenience of the “all-in-one” platform is certainly appealing! Perhaps you prefer to use your computer for other activities outside of gaming and own one or more separate game consoles. You might have chosen an Xbox, PlayStation, or Nintendo, for example. Maybe a hand-held is preferable for its size and portability. Then there are mobile phones and the iPad… With so many options it may feel hard to choose the right platform(s) without a good bit of research regarding display, availability of games for a particular platform, how long before the platform starts to become “obsolete”, etc. What we would like to know this week is which gaming platform you prefer. Is there only one that you choose to use, or do you use multiple platforms for gaming? Is there a particular reason, such as convenience, for your choices? You may even be keeping an older platform around just for a certain game (or games) made for it. Are there any recommendations or advice that you would like to share with your fellow readers? Let us know in the comments!

    Read the article

  • How to create a DataAccessLayer?

    - by NIGHIL DAS
    Hi, I am creating a database application in .NET. I am using a DataAccessLayer for communication between .NET objects and the database, but I am not sure whether this class is correct. Can anyone cross-check it and rectify any mistakes? namespace IDataaccess { #region Collection Class public class SPParamCollection : List<SPParams> { } public class SPParamReturnCollection : List<SPParams> { } #endregion #region struct public struct SPParams { public string Name { get; set; } public object Value { get; set; } public ParameterDirection ParamDirection { get; set; } public SqlDbType Type { get; set; } public int Size { get; set; } public string TypeName { get; set; } // public string datatype; } #endregion /// <summary> /// Interface DataAccess Layer implementation New version /// </summary> public interface IDataAccess { DataTable getDataUsingSP(string spName); DataTable getDataUsingSP(string spName, SPParamCollection spParamCollection); DataSet getDataSetUsingSP(string spName); DataSet getDataSetUsingSP(string spName, SPParamCollection spParamCollection); SqlDataReader getDataReaderUsingSP(string spName); SqlDataReader getDataReaderUsingSP(string spName, SPParamCollection spParamCollection); int executeSP(string spName); int executeSP(string spName, SPParamCollection spParamCollection, bool addExtraParmas); int executeSP(string spName, SPParamCollection spParamCollection); DataTable getDataUsingSqlQuery(string strSqlQuery); int executeSqlQuery(string strSqlQuery); SPParamReturnCollection executeSPReturnParam(string spName, SPParamReturnCollection spParamReturnCollection); SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection); SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection, bool addExtraParmas); int executeSPReturnParam(string spName, SPParamCollection spParamCollection, ref SPParamReturnCollection spParamReturnCollection); object getScalarUsingSP(string spName); object getScalarUsingSP(string spName, SPParamCollection spParamCollection); } } using IDataaccess; namespace Dataaccess { /// <summary> /// Class DataAccess Layer implementation New version /// </summary> public class DataAccess : IDataaccess.IDataAccess { #region Public variables static string Strcon; DataSet dts = new DataSet(); public DataAccess() { Strcon = sReadConnectionString(); } private string sReadConnectionString() { try { //dts.ReadXml("C:\\cnn.config"); //Strcon = dts.Tables[0].Rows[0][0].ToString(); //System.Configuration.Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); //Strcon = config.ConnectionStrings.ConnectionStrings["connectionString"].ConnectionString; // Add an Application Setting.
//Strcon = "Data Source=192.168.50.103;Initial Catalog=erpDB;User ID=ipixerp1;Password=NogoXVc3"; Strcon = System.Configuration.ConfigurationManager.AppSettings["connection"]; //Strcon = System.Configuration.ConfigurationSettings.AppSettings[0].ToString(); } catch (Exception) { } return Strcon; } public SqlConnection connection; public SqlCommand cmd; public SqlDataAdapter adpt; public DataTable dt; public int intresult; public SqlDataReader sqdr; #endregion #region Public Methods public DataTable getDataUsingSP(string spName) { return getDataUsingSP(spName, null); } public DataTable getDataUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); dt = new DataTable(); adpt.Fill(dt); return (dt); } } } finally { connection.Close(); } } public DataSet getDataSetUsingSP(string spName) { return getDataSetUsingSP(spName, null); } public DataSet getDataSetUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); DataSet ds = new DataSet(); adpt.Fill(ds); return ds; } } } finally { connection.Close(); } } public SqlDataReader getDataReaderUsingSP(string spName) { return getDataReaderUsingSP(spName, null); } public SqlDataReader getDataReaderUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; sqdr = cmd.ExecuteReader(); return (sqdr); } } } finally { connection.Close(); } } public int executeSP(string spName) { return executeSP(spName, null); } public int executeSP(string spName, SPParamCollection spParamCollection, bool addExtraParmas) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { SqlParameter par = new SqlParameter(spParamCollection[count].Name, spParamCollection[count].Value); if (addExtraParmas) { par.TypeName = spParamCollection[count].TypeName; par.SqlDbType = spParamCollection[count].Type; } cmd.Parameters.Add(par); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; return (cmd.ExecuteNonQuery()); } } } finally { connection.Close(); } } public int 
executeSP(string spName, SPParamCollection spParamCollection) { return executeSP(spName, spParamCollection, false); } public DataTable getDataUsingSqlQuery(string strSqlQuery) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(strSqlQuery, connection)) { cmd.CommandType = CommandType.Text; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); dt = new DataTable(); adpt.Fill(dt); return (dt); } } } finally { connection.Close(); } } public int executeSqlQuery(string strSqlQuery) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(strSqlQuery, connection)) { cmd.CommandType = CommandType.Text; cmd.CommandTimeout = 60; intresult = cmd.ExecuteNonQuery(); return (intresult); } } } finally { connection.Close(); } } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamReturnCollection spParamReturnCollection) { return executeSPReturnParam(spName, null, spParamReturnCollection); } public int executeSPReturnParam() { return 0; } public int executeSPReturnParam(string spName, SPParamCollection spParamCollection, ref SPParamReturnCollection spParamReturnCollection) { try { SPParamReturnCollection spParamReturned = new SPParamReturnCollection(); using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; foreach (SPParams paramReturn in spParamReturnCollection) { SqlParameter _parmReturn = new SqlParameter(paramReturn.Name, paramReturn.Size); _parmReturn.Direction = paramReturn.ParamDirection; if (paramReturn.Size > 0) _parmReturn.Size = paramReturn.Size; else _parmReturn.Size = 32; _parmReturn.SqlDbType = paramReturn.Type; cmd.Parameters.Add(_parmReturn); } cmd.CommandTimeout = 60; intresult = cmd.ExecuteNonQuery(); connection.Close(); //for (int i = 0; i < spParamReturnCollection.Count; i++) //{ // spParamReturned.Add(new SPParams // { // Name = spParamReturnCollection[i].Name, // Value = cmd.Parameters[spParamReturnCollection[i].Name].Value // }); //} } } return intresult; } finally { connection.Close(); } } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection) { return executeSPReturnParam(spName, spParamCollection, spParamReturnCollection, false); } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection, bool addExtraParmas) { try { SPParamReturnCollection spParamReturned = new SPParamReturnCollection(); using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { //cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); SqlParameter par = new SqlParameter(spParamCollection[count].Name, spParamCollection[count].Value); if (addExtraParmas) { par.TypeName = spParamCollection[count].TypeName; par.SqlDbType = spParamCollection[count].Type; } cmd.Parameters.Add(par); } cmd.CommandType =
CommandType.StoredProcedure; foreach (SPParams paramReturn in spParamReturnCollection) { SqlParameter _parmReturn = new SqlParameter(paramReturn.Name, paramReturn.Value); _parmReturn.Direction = paramReturn.ParamDirection; if (paramReturn.Size > 0) _parmReturn.Size = paramReturn.Size; else _parmReturn.Size = 32; _parmReturn.SqlDbType = paramReturn.Type; cmd.Parameters.Add(_parmReturn); } cmd.CommandTimeout = 60; cmd.ExecuteNonQuery(); connection.Close(); for (int i = 0; i < spParamReturnCollection.Count; i++) { spParamReturned.Add(new SPParams { Name = spParamReturnCollection[i].Name, Value = cmd.Parameters[spParamReturnCollection[i].Name].Value }); } } } return spParamReturned; } catch (Exception ex) { return null; } finally { connection.Close(); } } public object getScalarUsingSP(string spName) { return getScalarUsingSP(spName, null); } public object getScalarUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); cmd.CommandTimeout = 60; } cmd.CommandType = CommandType.StoredProcedure; return cmd.ExecuteScalar(); } } } finally { connection.Close(); cmd.Dispose(); } } #endregion } }
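
    As a point of comparison, here is a minimal sketch of one method rewritten with local variables and nested using blocks (names taken from the listing above; behavior assumed equivalent). The shared connection/cmd fields in the posted class are not thread-safe, the using blocks plus the finally { connection.Close(); } double up on cleanup, and getDataReaderUsingSP hands back a SqlDataReader whose connection the using block has already disposed; keeping everything local avoids all three issues.

        // Hedged sketch, not a drop-in replacement for the whole class:
        public DataTable getDataUsingSP(string spName, SPParamCollection spParamCollection)
        {
            using (var connection = new SqlConnection(Strcon))
            using (var cmd = new SqlCommand(spName, connection))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.CommandTimeout = 60;
                if (spParamCollection != null)
                    foreach (SPParams p in spParamCollection)
                        cmd.Parameters.AddWithValue(p.Name, p.Value);
                using (var adapter = new SqlDataAdapter(cmd))
                {
                    var dt = new DataTable();
                    adapter.Fill(dt); // Fill opens and closes the connection itself
                    return dt;
                }
            }
        }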

    Read the article

  • HDFS: some datanodes of the cluster are suddenly disconnected while reducers are running

    - by user1429825
    I have 8 slave computers and 1 master computer running Hadoop (ver 0.21). Some datanodes of the cluster suddenly disconnect while I am running MapReduce code on 10GB of data. After all mappers finished and around 80% of the reducers had been processed, randomly one or more datanodes disconnected from the network, and then the other datanodes started to disappear from the network as well, even after I killed the MapReduce job once I found some datanode was disconnected. I've tried changing dfs.datanode.max.xcievers to 4096, turned off the firewalls of all computing nodes, disabled SELinux and increased the open-file limit to 20000, but none of it worked at all... Does anyone have an idea how to solve this problem? The following is the error log from MapReduce: 12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010 at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427) And the following are logs from the datanode: 2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010 2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010 2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: Connection timed out at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257) at java.lang.Thread.run(Thread.java:722) 2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453 2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write.
ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010] at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246) at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164) at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203) at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388) at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284) at java.lang.Thread.run(Thread.java:722) hdfs-site.xml <configuration> <property> <name>dfs.name.dir</name> <value>/home/hadoop/data/name</value> </property> <property> <name>dfs.data.dir</name> <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value> </property> <property> <name>dfs.replication</name> <value>3</value> </property> <property> <name>dfs.datanode.max.xcievers</name> <value>4096</value> </property> <property> <name>dfs.http.address</name> <value>0.0.0.0:20070</value> <description>50070 The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.datanode.http.address</name> <value>0.0.0.0:20075</value> <description>50075 The datanode http server address and port. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.secondary.http.address</name> <value>0.0.0.0:20090</value> <description>50090 The secondary namenode http server address and port. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.datanode.address</name> <value>0.0.0.0:20010</value> <description>50010 The address where the datanode server will listen to. If the port is 0 then the server will start on a free port. </description> <property> <name>dfs.datanode.ipc.address</name> <value>0.0.0.0:20020</value> <description>50020 The datanode ipc server address and port. If the port is 0 then the server will start on a free port. 
</description> </property> <property> <name>dfs.datanode.https.address</name> <value>0.0.0.0:20475</value> </property> <property> <name>dfs.https.address</name> <value>0.0.0.0:20470</value> </property> </configuration> mapred-site.xml <configuration> <property> <name>mapred.job.tracker</name> <value>masternode:29001</value> </property> <property> <name>mapred.system.dir</name> <value>/home/hadoop/data/mapreduce/system</value> </property> <property> <name>mapred.local.dir</name> <value>/home/hadoop/data/mapreduce/local</value> </property> <property> <name>mapred.map.tasks</name> <value>32</value> <description> default number of map tasks per job.</description> </property> <property> <name>mapred.tasktracker.map.tasks.maximum</name> <value>4</value> </property> <property> <name>mapred.reduce.tasks</name> <value>8</value> <description> default number of reduce tasks per job.</description> </property> <property> <name>mapred.map.child.java.opts</name> <value>-Xmx2048M</value> </property> <property> <name>io.sort.mb</name> <value>500</value> </property> <property> <name>mapred.task.timeout</name> <value>1800000</value> <!-- 30 minutes --> </property> <property> <name>mapred.job.tracker.http.address</name> <value>0.0.0.0:20030</value> <description> 50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>mapred.task.tracker.http.address</name> <value>0.0.0.0:20060</value> <description> 50060 </property> </configuration>
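
    One hedged thing worth double-checking, assuming the daemons run as a dedicated hadoop user: a raised open-file limit only takes effect for the account the DataNode actually runs under, and only in fresh login sessions with pam_limits active, so a limit raised for root alone would not help the xceiver threads here.

        # /etc/security/limits.conf -- "hadoop" is an assumed account name
        hadoop  soft  nofile  20000
        hadoop  hard  nofile  20000

        # verify from a fresh login as that user:
        su - hadoop -c 'ulimit -n'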

    Read the article

  • Network Traffic Log

    - by Chris Becke
    Background: On my "home" network I have a Linksys WRT54GL router providing my internet access as well as a wireless AP. Connected, I have: 2 Windows PCs (wired), at least one laptop (wired), some 802.11-enabled handheld consoles (PSPs), a Nintendo Wii, and some Windows XP PCs used by the people in the granny flat. Where I live, South Africa, 1GB worth of monthly cap is, while not expensive, costly enough that I'd like to be sure that all the bandwidth used by devices on my network is ... well ... legitimate, and not the result of neighbors leeching my wireless, malware, or just "liberal" download policies in my software. I got the Linksys WRT54GL on the understanding that there were custom firmwares (DD-WRT and Tomato) that allowed bandwidth tracking, but there doesn't seem to be any facility to get a log of traffic that can be examined to see (a) which local devices were the biggest consumers of bandwidth and (b) what they were connected to. What tools are there for logging traffic such that, when it gets to that OMG moment in the month when all my bandwidth is gone, I have a chance to find out what the hell used it all up (and hopefully attempt some corrective action)?

    Read the article

  • Windows Firewall Software to Filter Transit Traffic

    - by soonts
    I need to test my networking code for the Nintendo Wii under conditions where some specific Internet server is not available. The Wii is connected to my PC with a crossover Ethernet cable. The PC has 2 NICs and is connected to a hardware router with an Ethernet cable. The hardware router serves as NAT and has the Internet connected to its uplink. I set the Wii to be in the same LAN as the PC by using a Windows XP network bridge. I can observe the Wii network traffic using e.g. the Wireshark sniffer. Is there a software firewall that can selectively filter out transit traffic (e.g. block outgoing TCP connections to 123.45.67.89 on port 443)? I tried Outpost Pro 2009 and Comodo. Outpost firewall blocks all transit traffic with its implicit "block transit packet" rule. If the transit traffic is explicitly allowed by creating a system-wide low-level rule, then it's allowed completely and no other filter can selectively block it. Comodo firewall only processes rules when the packet has localhost's IP as either source or destination, allowing the rest of the traffic. Any ideas? Thanks in advance! P.S. The platform is Windows XP 32-bit; no other OSes are allowed. Windows ICS (Internet Connection Sharing) doesn't work since the Wii is unable to connect; besides, I don't like the idea of adding one more level of NAT.

    Read the article

  • PS3 controller -> PC -> emulators -> TV

    - by abrereton
    I'm researching a media PC for the living room. Playing videos, audio and streaming Internet is straightforward enough; I would also like it to run a gaming console system, and I was wondering if anyone has any thoughts on this. So far I've discovered that a PS3 controller (thankfully it uses USB and Bluetooth) can be connected to a PC. I've also found that MAME, MESS and PCSX2 are all the emulators I need (I can even emulate a TI-83 calculator with MESS). These emulators can re-map keys, so for example I can map the Nintendo A button to the PS3 X button, or the SNES pad to the PS3 pad or analog stick. There are also front-ends to these emulators which can do fancy things like image scaling, anti-aliasing and double-buffering to improve the image quality of an 8-bit Mario on a 50 inch plasma. My setup would be this: PS3 controller connecting over Bluetooth to the PC; PC with Windows, PS3 controller drivers and all my emulators; network drive with all my ROMs; PC connected to TV via HDMI; TV playing Super Mario Kart. Does this sound feasible? Does anyone have experience of doing anything like this? Is this a good idea, or should I grow up and stop living in the past?

    Read the article

  • Games consoles won't connect through the TP-Link TL-WA500G Access Point

    - by Manfred Wolff
    I hope that someone can help me. I have several laptops and other devices, all using my wireless router (Sky Digital Netgear). To extend the range to the back of the house, I purchased a TP-Link TL-WA500G range extender. Configured just as a pure repeater, it picks up the signal from the Netgear router, which does the DHCP, handing out the IP addresses. This all works a treat with several different laptops and my iPhone 4S, but when my son tries to use his Xbox 360, Sony PlayStation 3 or the Nintendo Wii, those devices fail to acquire an IP address; they just sit there waiting for the IP config. This also happens with my wife's HTC Desire One Android phone. My son says that when his HTC Desire C won't get an IP address, he just unplugs the AP briefly; the phone will connect and he puts the AP back on. Once he is connected to the router, the AP won't disturb its function. The games consoles don't seem to work like that; they stop working when the AP is reconnected. I had my son try to configure permanent IP addresses, and he said that did not work either, though I have to confirm that, as I did not see it for myself. Has anybody seen this before? I have searched the Net and have not found any similar problems anywhere. I wonder if there is a setting somewhere that would fix this. Many thanks to anyone reading this and trying to help. M

    Read the article

  • How to throttle HTTP requests on a Linux machine?

    - by hooraygradschool
    EDIT: Here is the summary: I need to reduce max connections, preferably system-wide, on Ubuntu 11.04, but at least within Google Chrome. I do not need or want to throttle bandwidth; Verizon seems to only care about the number of connections, so that is all I want to change. Also, I don't want to use Firefox unless I have to; I have three other machines all using Chrome and synced, and I just prefer it over Firefox. I use tethering for my home Internet connection via my Verizon cell phone, without paying for it. This works just fine for streaming Netflix via my Nintendo Wii and pretty much every other conceivable use I've had for it, except that during heavy usage with multiple tabs open on my laptop, the network connection on my phone will just turn off, then on again, then off, but it never fully connects. I think, based on this and other questions, that this is caused by Verizon getting too many HTTP requests from my phone. Is there some software, script, setting or otherwise that would allow me to throttle my requests to, say, 5 or 10 or whatever turns out to be 1 less than Verizon is looking for, so that my cell's network connection is not lost? I would far prefer a slowdown to a complete shut-off of my Internet connection. I am almost certain it is the quantity of requests and not related to data because, as I mentioned, Netflix will run all day without a hitch, and that uses more data than anything else I would be doing. If I had a router I am pretty sure there are settings I could easily change to only allow so many requests at a time ... but in this case, my phone is my router, so no settings. I'm using Ubuntu 11.04 on my netbook with an HTC Incredible on Verizon (not that the phone details are relevant). I have been trying to figure this out for quite some time; currently the only fix is to ensure that all requests are stopped, and then sometimes it works again; other times I have to manually turn my 3G service off and then back on. Thank you so much for any assistance!
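
    One possible approach, sketched on the assumption that what matters is the number of simultaneous TCP connections: the iptables connlimit match can reject new outbound connections above a threshold, system-wide, so the link degrades gracefully instead of dropping.

        # Hedged sketch -- the threshold of 8 is a guess; tune it downward
        # until the tether stops dropping:
        sudo iptables -A OUTPUT -p tcp --syn \
            -m connlimit --connlimit-above 8 \
            -j REJECT --reject-with tcp-reset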

    Read the article

  • Force RAID to read "exiled" disk?

    - by user197015
    We have a RAID 6 array (Infortrend EonStor DS S16F) that recently had two disks fail. Immediately prior to replacing these two disks, a third, good, disk was accidentally ejected from the array. After reinserting this disk it is marked as "exiled" by the array's firmware, and so even after replacing the two failed disks with new ones the array refuses to rebuild the logical volume and remains inaccessible. Since the temporarily-ejected disk is still functional and nothing has been written to the array since it was ejected, it seems that it should theoretically be possible to recover all the data on the array, but how can we convince the array to use the data from the "exiled" disk? Thanks for any help or advice you can offer.

    Read the article

  • Using Synology NAS attached to WDS-Repeater

    - by Kai B
    I'm using the following devices for my home network: Router 1: Speedport W 723 V (192.168.2.1) Router 2: AVM Fritzbox 3270 (192.168.2.2) NAS: Synology DS 207+ (192.168.2.3) I successfully set up a WDS connection between the two routers. The Speedport acts as base station; the Fritzbox repeats the Wi-Fi signal of the Speedport. Everything is fine so far. Now I'm trying to achieve the following: Client → Speedport (Base) → Fritzbox (Repeater) → Synology NAS That is, I want to use my Synology NAS attached to the Fritzbox (which is in repeating mode). I already gave it a static IP (as written above), but all connection attempts failed. Did I miss something, or is this setup simply impossible?

    Read the article

  • Crashing HVM domain, what do I do?

    - by rassie
    My DomUs, running on Xen 3.4 on RHEL5, crash when too much memory is needed: (XEN) p2m_pod_demand_populate: Out of populate-on-demand memory! (XEN) domain_crash called from p2m.c:1091 (XEN) Domain 15 (vcpu#3) crashed on cpu#2: (XEN) ----[ Xen-3.4.0 x86_64 debug=n Not tainted ]---- (XEN) CPU: 2 (XEN) RIP: 0010:[<ffffffff80062c02>] (XEN) RFLAGS: 0000000000010216 CONTEXT: hvm guest (XEN) rax: 0000000000000000 rbx: 0000000000000001 rcx: 000000000000003f (XEN) rdx: 0000000004812000 rsi: ffff810001000000 rdi: ffff810004812000 (XEN) rbp: 0000000000000282 rsp: ffff810007635cf0 r8: ffff810037c0288e (XEN) r9: 00000000000023e1 r10: 0000000000000000 r11: 0000000000000001 (XEN) r12: ffff81000000cb00 r13: ffff8100007e43f0 r14: ffff81000000fc10 (XEN) r15: 00000000000280d2 cr0: 0000000080050033 cr4: 00000000000006e0 (XEN) cr3: 0000000006760000 cr2: 0000000003d47078 (XEN) ds: 0000 es: 0000 fs: 0000 gs: 0000 ss: 0000 cs: 0010 Can I disable populate-on-demand for HVM somehow? Xen 3.3 didn't exhibit such behaviour...
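
    For what it's worth, populate-on-demand is only engaged when an HVM guest boots with a memory target below its maximum, so one hedged workaround (values below are illustrative) is to pin the two together in the domain config:

        # Sketch for an xm/xend-style config -- PoD is bypassed when
        # memory == maxmem at boot:
        memory = 2048
        maxmem = 2048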

    Read the article

  • Dynamic DNS at freedns.afraid.org using a Fritz!Box

    - by kai
    I am having some trouble setting up Dynamic DNS with my Fritz!Box 7360. I have set up the Dynamic DNS page with (this is translated from German, so might be worded a bit differently): [x] Use dynamic DNS Dynamic DNS Provider: User defined Update-URL: https://freedns.afraid.org/dynamic/update.php?MY-DIRECT-URL-KEY Domain Name: mydomain.crabdance.com User Name: myusername Password: mypassword Now on the FritzBox status page, it says: Dynamic DNS: activated, mydomain.crabdance.com, Status: Account temporarily deactivated When I check back on http://freedns.afraid.org, my IP address never changes. Is there any way to fix this? Note my router is on an IPv6 network (M-net), with IPv4 only through DS-Lite. I'm not sure whether this affects anything. Update: Following the guide here (putting myusername instead of MY-DIRECT-URL-KEY) hasn't brought any success. However, the status field has changed slightly: Dynamic DNS: activated, mydomain.crabdance.com, Status: unknown

    Read the article

  • Can you authenticate into SSAS with AD LDS (ADAM) accounts?

    - by Jaxidian
    I'm very new to AD LDS and experienced but not qualified with SSAS, so my apologies for my ignorance with these. We have a couple of implementations where we expose SSAS via an HTTPS proxy (msmdpump.dll), and currently we have a temporary domain set up to handle this (where our end-users have a second account and credentials to manage because of this, which is non-ideal). I want to move us towards a more permanent solution, which I'm thinking of as moving all authentication to AD LDS for our web apps, SSAS, and others. However, SSAS is where I'm concerned about this. I know SSAS requires Windows Authentication to play nicely, and that this ultimately means Active Directory will be involved. Is there a way to get this done with AD LDS instead of having to use a full AD DS implementation? If so, how? (Note: My question over at StackOverflow had a suggestion that I post this question here on ServerFault instead. My apologies if I'm not asking in the right forum.)

    Read the article

  • OVF Tools error

    - by ToreTrygg
    Hello all, I am trying to use OVF Tools 1.0 to create an .ovf appliance split into 4GB chunks. When I run the following command: ovftool --chunkSize=4gb vi://<user>:<password>@<host>/<vm>?ds=<datastore> e:\test\test.ovf everything starts off fine. It will go for about 15-16%, then I get this message: "Error: unable to get NFC ticket for target disk". I have looked online but cannot find anything that matches this problem. I am using Windows 2k3 as the OVF creation server, and I'm in an ESX 3.5 environment. The OVF must be split into 4GB chunks (or less) so they can be put onto DVDs. Also, when it is done it deletes the files it started to create. Any help would be greatly appreciated.

    Read the article

  • Cannot start tor with vidalia, failed to bind listening port because of tor-socks running

    - by ganjan
    I get these errors trying to run Tor with Vidalia: Apr 19 21:55:15.371 [Notice] Tor v0.2.1.30. This is experimental software. Do not rely on it for strong anonymity. (Running on Linux i686) Apr 19 21:55:15.372 [Notice] Initialized libevent version 1.4.13-stable using method epoll. Good. Apr 19 21:55:15.373 [Notice] Opening Socks listener on 127.0.0.1:9050 Apr 19 21:55:15.373 [Warning] Could not bind to 127.0.0.1:9050: Address already in use. Is Tor already running? Apr 19 21:55:15.373 [Warning] Failed to parse/validate config: Failed to bind one of the listener ports. Apr 19 21:55:15.373 [Error] Reading config failed--see warnings above. I don't think Tor is running. Here is an nmap scan of my localhost: Starting Nmap 5.21 ( http://nmap.org ) at 2011-04-19 21:59 CEST Nmap scan report for localhost (127.0.0.1) Host is up (0.0000050s latency). Hostname localhost resolves to 2 IPs. Only scanned 127.0.0.1 rDNS record for 127.0.0.1: localhost.localdomain Not shown: 989 closed ports PORT STATE SERVICE 22/tcp open ssh 53/tcp open domain 80/tcp open http 139/tcp open netbios-ssn 445/tcp open microsoft-ds 631/tcp open ipp 3128/tcp open squid-http 3306/tcp open mysql 9000/tcp open cslistener 9050/tcp open tor-socks 10000/tcp open snet-sensor-mgmt I see tor-socks is running here, which is probably the cause of the problem. How do I stop this from starting up? I want to use Vidalia so I can monitor what's going on.
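
    A hedged way to confirm and fix this on Ubuntu: find what owns port 9050, and if it is the system-wide tor daemon (the usual suspect when the tor package is installed), stop it and keep it from starting at boot so Vidalia can launch its own instance.

        sudo netstat -tlnp | grep 9050   # shows the PID/name bound to 9050
        sudo service tor stop
        sudo update-rc.d tor disable     # keep it from starting at boot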

    Read the article

  • Assembly load and execute issue

    - by Jean Carlos Suárez Marranzini
    I'm trying to develop assembly code allowing me to load and execute (by input of the user) two other assembly .EXE programs. I'm having two problems: I don't seem to be able to assign the pathname to a valid register (or maybe my syntax is incorrect), and I need to be able to execute the other program after the first one (could be either) has started its execution. This is what I have so far: mov ax,cs ; moving code segment to data segment mov ds,ax mov ah,1h ; here I read from keyboard int 21h mov dl,al cmp al,'1' ; if 1 jump to LOADRUN1 JE LOADRUN1 popf cmp al,'2' ; if 2 jump to LOADRUN2 JE LOADRUN2 popf LOADRUN1: MOV AH,4BH MOV AL,00 LEA DX,[PROGNAME1] ; Not sure if it works INT 21H LOADRUN2: MOV AH,4BH MOV AL,00 LEA DX,[PROGNAME2] ; Not sure if it works INT 21H ; Here I define the bytes containing the pathnames PROGNAME1 db 'C:\Users\Usuario\NASM\Adding.exe',0 PROGNAME2 db 'C:\Users\Usuario\NASM\Substracting.exe',0 I just don't know how to start another program by input in the 'parent' program after one is already executing. Thanks in advance for your help! I'll be more than happy to provide any additional information. I'm using NASM, 16 bits, on Windows 7 32 bits.
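
    For the EXEC call itself, a hedged sketch of what INT 21h/AH=4Bh additionally expects (label names below are illustrative, not from the original): ES:BX must point at a parameter block, a .COM-style parent should first shrink its memory allocation with AH=4Ah or EXEC fails with error 8, and older DOS versions clobber SS:SP across the call, so save and restore them.

        ; Sketch only -- adjust the paragraph count in BX to your program's size
        mov ah, 4Ah                   ; shrink parent's memory block first
        mov bx, 0x100                 ; keep 0x100 paragraphs (~4 KB)
        int 21h
        mov [SAVED_SS], ss            ; older DOS destroys SS:SP across EXEC
        mov [SAVED_SP], sp
        mov ax, cs
        mov es, ax
        mov bx, PARAM_BLOCK           ; ES:BX -> parameter block
        mov [PARAM_BLOCK+4], cs       ; patch segment of command-tail pointer
        lea dx, [PROGNAME1]           ; DS:DX -> ASCIIZ path (as in the original)
        mov ax, 4B00h                 ; EXEC: load and execute
        int 21h
        mov ss, [SAVED_SS]
        mov sp, [SAVED_SP]

        PARAM_BLOCK:
            dw 0                      ; environment segment (0 = inherit parent's)
            dw EMPTY_TAIL, 0          ; far ptr to command tail (segment patched above)
            dw 0xFFFF, 0xFFFF         ; far ptr to FCB1 (unused)
            dw 0xFFFF, 0xFFFF         ; far ptr to FCB2 (unused)
        EMPTY_TAIL: db 0, 0Dh         ; empty command tail: length 0, CR terminator
        SAVED_SS: dw 0
        SAVED_SP: dw 0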

    Read the article

  • Macports irssi & perl5 installation issues

    - by Dmitri DB
    Long time reader, first time poster. Big, appreciative thanks for everyone's collective questioning and answering here and at stackoverflow, it's helped me quite a lot over the time I've been learning answers through these sites! Apologies in advance if I didn't search hard enough on posts already up on this site to find out what I could do about this issue, but I thought I'd just reach out for the sake of trying at least once. I've experienced this issue while starting up my macports-installed version of irssi: 13:25 -!- Irssi: Error in script dispatch: 13:25 Can't locate lib.pm in @INC (@INC contains: /opt/local/lib/perl5/site_perl/5.12.4/darwin-multi-2level /opt/local/lib/perl5/site_perl/5.12.4 /opt/local/lib/perl5/vendor_perl/5.12.4/darwin-multi-2level /opt/local/lib/perl5/vendor_perl/5.12.4 /opt/local/lib/perl5/5.12.4/darwin-multi-2level /opt/local/lib/perl5/5.12.4 /opt/local/lib/perl5/site_perl/5.12.3/darwin-multi-2level /opt/local/lib/perl5/site_perl/5.12.3 /opt/local/lib/perl5/site_perl /opt/local/lib/perl5/vendor_perl .) at (eval 18) line 1. 13:25 BEGIN failed--compilation aborted at (eval 18) line 1. 13:25 Huh, strange. I looked into it a bit: [email protected] /opt/local/lib/perl5 ?- find . -name "lib.pm" -ls 14673887 16 -r--r--r-- 1 root admin 6853 25 Jun 23:39 ./5.12.4/darwin-thread-multi- 2level/lib.pm [email protected] /opt/local/lib/perl5 ?- l 5.12.4/darwin-thread-multi-2level total 1864 drwxr-xr-x 55 root admin 1870 28 Jun 19:28 . drwxr-xr-x 158 root admin 5372 28 Jun 19:28 .. -rw-r--r-- 1 root admin 177814 25 Jun 23:39 .packlist drwxr-xr-x 6 root admin 204 28 Jun 19:28 B -r--r--r-- 1 root admin 25714 25 Jun 23:39 B.pm drwxr-xr-x 64 root admin 2176 28 Jun 19:28 CORE drwxr-xr-x 3 root admin 102 28 Jun 19:28 Compress -r--r--r-- 1 root admin 3000 25 Jun 23:39 Config.pm -r--r--r-- 1 root admin 228094 25 Jun 23:39 Config.pod -r--r--r-- 1 root admin 409 25 Jun 23:39 Config_git.pl -r--r--r-- 1 root admin 38759 25 Jun 23:39 Config_heavy.pl -r--r--r-- 1 root admin 21174 25 Jun 23:39 Cwd.pm -r--r--r-- 1 root admin 63535 25 Jun 23:39 DB_File.pm drwxr-xr-x 3 root admin 102 28 Jun 19:28 Data drwxr-xr-x 5 root admin 170 28 Jun 19:28 Devel drwxr-xr-x 4 root admin 136 28 Jun 19:28 Digest -r--r--r-- 1 root admin 25185 25 Jun 23:39 DynaLoader.pm drwxr-xr-x 22 root admin 748 28 Jun 19:28 Encode -r--r--r-- 1 root admin 29731 25 Jun 23:39 Encode.pm -r--r--r-- 1 root admin 6736 25 Jun 23:39 Errno.pm -r--r--r-- 1 root admin 5445 25 Jun 23:39 Fcntl.pm drwxr-xr-x 5 root admin 170 28 Jun 19:28 File drwxr-xr-x 3 root admin 102 28 Jun 19:28 Filter -r--r--r-- 1 root admin 1819 25 Jun 23:39 GDBM_File.pm drwxr-xr-x 4 root admin 136 28 Jun 19:28 Hash drwxr-xr-x 3 root admin 102 28 Jun 19:28 I18N drwxr-xr-x 11 root admin 374 28 Jun 19:28 IO -r--r--r-- 1 root admin 1404 25 Jun 23:39 IO.pm drwxr-xr-x 6 root admin 204 28 Jun 19:28 IPC drwxr-xr-x 4 root admin 136 28 Jun 19:28 List drwxr-xr-x 4 root admin 136 28 Jun 19:28 MIME drwxr-xr-x 3 root admin 102 28 Jun 19:28 Math -r--r--r-- 1 root admin 2519 25 Jun 23:39 NDBM_File.pm -r--r--r-- 1 root admin 4208 25 Jun 23:39 O.pm -r--r--r-- 1 root admin 15563 25 Jun 23:39 Opcode.pm -r--r--r-- 1 root admin 21011 25 Jun 23:39 POSIX.pm -r--r--r-- 1 root admin 58962 25 Jun 23:39 POSIX.pod drwxr-xr-x 5 root admin 170 28 Jun 19:28 PerlIO -r--r--r-- 1 root admin 2515 25 Jun 23:39 SDBM_File.pm drwxr-xr-x 4 root admin 136 28 Jun 19:28 Scalar -r--r--r-- 1 root admin 10837 25 Jun 23:39 Socket.pm -r--r--r-- 1 root admin 41003 25 Jun 23:39 Storable.pm drwxr-xr-x 4 root admin 
136 28 Jun 19:28 Sys drwxr-xr-x 3 root admin 102 28 Jun 19:28 Text drwxr-xr-x 5 root admin 170 28 Jun 19:28 Time drwxr-xr-x 3 root admin 102 28 Jun 19:28 Unicode -r--r--r-- 1 root admin 14462 25 Jun 23:39 attributes.pm drwxr-xr-x 38 root admin 1292 28 Jun 19:28 auto -r--r--r-- 1 root admin 19892 25 Jun 23:39 encoding.pm -r--r--r-- 1 root admin 6853 25 Jun 23:39 lib.pm -r--r--r-- 1 root admin 11044 25 Jun 23:39 mro.pm -r--r--r-- 1 root admin 997 25 Jun 23:39 ops.pm -r--r--r-- 1 root admin 13945 25 Jun 23:39 re.pm drwxr-xr-x 3 root admin 102 28 Jun 19:28 threads -r--r--r-- 1 root admin 33283 25 Jun 23:39 threads.pm So, it sort of seems to me that the permissions which perl5 got installed with for these modules has gotten mixed up somehow? I'm not really a perl user beyond enjoying it for massive directory-recursive find/replace operations within text files, so I haven't much of an idea what the permissions here are supposed to look like, and I'm not really sure how to go about determining how macports has gone and installed perl this way when it's otherwise worked without failure for years now. Does anyone have any recommendations for the sanest path towards rectifying this issue? Also, is there any interesting reason as to why the macports default for the perl5 port installs 5.12.4, and not 5.16.0, which has to be explicitly installed via the perl5.16 port? Thanks again!

    Read the article

  • Cannot FTP without simultaneous SSH connection?

    - by Lucas
    I'm trying to set up an old box as a backup server (running 10.04.4 LTS). I intend to use 3rd party software on my PC to periodically connect to my server via FTP(S) and to mirror certain files. For some reason, all FTP connection attempts fail UNLESS I'm simultaneously connected via SSH. For example, if I use PuTTY to test the connection to port 21, the system hangs and times out. I get: 220 Connected to LeServer USER lucas 331 Please specify the password. PASS [password] <cursor> However, when I'm simultaneously logged in (in another session) everything works: 220 Connected to LeServer USER lucas 331 Please specify the password. PASS [password] 230 Login successful. Basically, this means that my software will never be able to connect on its own, as intended. I know that the correct port is open because it works (sometimes), and nmap gives me: Starting Nmap 5.00 ( http://nmap.org ) at 2012-03-20 16:15 CDT Interesting ports on xx.xxx.xx.x: Not shown: 995 closed ports PORT STATE SERVICE 21/tcp open ftp 22/tcp open ssh 53/tcp open domain 139/tcp open netbios-ssn 445/tcp open microsoft-ds Nmap done: 1 IP address (1 host up) scanned in 0.15 seconds My only hypothesis is that this has something to do with iptables. Maybe it's allowing only established connections? I don't think that's how I set it up, but maybe? Here are my iptables rules for INPUT: lucas@rearden:~$ sudo iptables -L INPUT Chain INPUT (policy DROP) target prot opt source destination fail2ban-ssh tcp -- anywhere anywhere multiport dports ssh ufw-before-logging-input all -- anywhere anywhere ufw-before-input all -- anywhere anywhere ufw-after-input all -- anywhere anywhere ufw-after-logging-input all -- anywhere anywhere ufw-reject-input all -- anywhere anywhere ufw-track-input all -- anywhere anywhere ACCEPT tcp -- anywhere anywhere tcp dpt:ftp I'm using vsftpd. Any thoughts/resources on how I could fix this? L
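
    Two hedged things to check, given vsftpd behind ufw: whether the FTP connection-tracking helper is loaded (without it, only the control channel on port 21 is recognized and data connections stall, which can look like a hang right after PASS), and what ufw itself does with port 21.

        # Sketch: load the FTP conntrack helper so data connections are
        # tracked as RELATED, and let ufw pass the control port.
        sudo modprobe nf_conntrack_ftp
        echo nf_conntrack_ftp | sudo tee -a /etc/modules   # persist across reboots
        sudo ufw allow 21/tcp
        sudo ufw status verbose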

    Read the article

  • Password problem while creating domain

    - by Murdock
    Hi, I'm a freshman at server management, but this seems to go clearly against logic. After updating my Windows Server 2008 Standard 32-bit and installing the DNS server and AD DS, I wanted to create a domain using CMD and the dcpromo.exe setup. But whether I disable the complex-password requirement in the password policies or create a password which fully complies with the requirements for a strong, complex password, I still can't get any further: it says that my password doesn't meet the requirements. I'm also asked there to activate the password requirement with NET USER /passwordreq:yes, and when I do so, the password doesn't work any more and I have to remove it from another admin account just to be able to log in with the proper Administrator account.
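
    For reference, a hedged sketch of the sequence dcpromo's message is usually pointing at (assuming the built-in Administrator account): give that account a fresh password and re-enable its password requirement from an elevated prompt before promoting.

        REM Sketch -- run in an elevated command prompt
        net user Administrator *                REM prompts for a new password
        net user Administrator /passwordreq:yes
        dcpromo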

    Read the article

  • Xen DomU on DRBD device: barrier errors

    - by Halfgaar
    I'm testing setting up a Xen DomU with a DRBD storage for easy failover. Most of the time, immediately after booting the DomU, I get an IO error: [ 3.153370] EXT3-fs (xvda2): using internal journal [ 3.277115] ip_tables: (C) 2000-2006 Netfilter Core Team [ 3.336014] nf_conntrack version 0.5.0 (3899 buckets, 15596 max) [ 3.515604] init: failsafe main process (397) killed by TERM signal [ 3.801589] blkfront: barrier: write xvda2 op failed [ 3.801597] blkfront: xvda2: barrier or flush: disabled [ 3.801611] end_request: I/O error, dev xvda2, sector 52171168 [ 3.801630] end_request: I/O error, dev xvda2, sector 52171168 [ 3.801642] Buffer I/O error on device xvda2, logical block 6521396 [ 3.801652] lost page write due to I/O error on xvda2 [ 3.801755] Aborting journal on device xvda2. [ 3.804415] EXT3-fs (xvda2): error: ext3_journal_start_sb: Detected aborted journal [ 3.804434] EXT3-fs (xvda2): error: remounting filesystem read-only [ 3.814754] journal commit I/O error [ 6.973831] init: udev-fallback-graphics main process (538) terminated with status 1 [ 6.992267] init: plymouth-splash main process (546) terminated with status 1 The manpage of drbdsetup says that LVM (which I use) doesn't support barriers (better known as tagged command queuing or native command queuing), so I configured the drbd device not to use barriers. This can be seen in /proc/drbd (by "wo:f", meaning flush, the next method DRBD chooses after barrier): 3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r---- ns:2160152 nr:520204 dw:2680344 dr:2678107 al:3549 bm:9183 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0 And on the other host: 3: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r---- ns:0 nr:2160152 dw:2160152 dr:0 al:0 bm:8052 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0 I also enabled the option disable_sendpage, as per the drbd docs: cat /sys/module/drbd/parameters/disable_sendpage Y I also tried adding barriers=0 to fstab as a mount option. Still it sometimes says: [ 58.603896] blkfront: barrier: write xvda2 op failed [ 58.603903] blkfront: xvda2: barrier or flush: disabled I don't even know if ext3 has a nobarrier option. And, because only one of my storage systems is battery-backed, it would not be smart anyway. Why does it still complain about barriers when I disabled them? Both hosts are: Debian: 6.0.4 uname -a: Linux 2.6.32-5-xen-amd64 drbd: 8.3.7 Xen: 4.0.1 Guest: Ubuntu 12.04 LTS uname -a: Linux 3.2.0-24-generic pvops drbd resource: resource drbdvm { meta-disk internal; device /dev/drbd3; startup { # The timeout value when the last known state of the other side was available. 0 means infinite. wfc-timeout 0; # Timeout value when the last known state was disconnected. 0 means infinite. degr-wfc-timeout 180; } syncer { # This is recommended only for low-bandwidth lines, to only send those # blocks which really have changed. #csums-alg md5; # Set to about half your net speed rate 60M; # It seems that this option moved to the 'net' section in drbd 8.4. (later release than Debian has currently) verify-alg md5; } net { # The manpage says this is recommended only in pre-production (because of its performance), to determine # if your LAN card has a TCP checksum offloading bug. #data-integrity-alg md5; } disk { # Detach causes the device to work over-the-network-only after the # underlying disk fails. Detach is not default for historical reasons, but is # recommended by the docs. # However, the Debian defaults in drbd.conf suggest the machine will reboot in that event...
on-io-error detach; # LVM doesn't support barriers, so disabling it. It will revert to flush. Check wo: in /proc/drbd. If you don't disable it, you get IO errors. no-disk-barrier; } on host1 { # universe is a VG disk /dev/universe/drbdvm-disk; address 10.0.0.1:7792; } on host2 { # universe is a VG disk /dev/universe/drbdvm-disk; address 10.0.0.2:7792; } } DomU cfg: bootloader = '/usr/lib/xen-default/bin/pygrub' vcpus = '2' memory = '512' # # Disk device(s). # root = '/dev/xvda2 ro' disk = [ 'phy:/dev/drbd3,xvda2,w', 'phy:/dev/universe/drbdvm-swap,xvda1,w', ] # # Hostname # name = 'drbdvm' # # Networking # # fake IP for posting vif = [ 'ip=1.2.3.4,mac=00:16:3E:22:A8:A7' ] # # Behaviour # on_poweroff = 'destroy' on_reboot = 'restart' on_crash = 'restart' In my test setup: the primary host's storage is 9650SE SATA-II RAID PCIe with battery. The secondary is software RAID1. Isn't DRBD+Xen widely used? With these problems, it's not going to work.
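
    On the ext3 question: ext3 does have barrier control, it is just spelled differently than ext4's nobarrier. The mount option is barrier=0 or barrier=1 (note the singular: barriers=0, as tried above, would not be recognized), so a DomU fstab line like this hedged sketch (device and mount point assumed from the config above) disables them at the filesystem level:

        # /etc/fstab inside the DomU -- ext3 spells it barrier=0:
        /dev/xvda2  /  ext3  defaults,barrier=0  0  1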

    Read the article

< Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >