Search Results

Search found 2374 results on 95 pages for 'stops'.


  • Applications: The Mathematics of Movement, Part 2

    - by TechTwaddle
    In part 1 of this series we saw how to make the marble move towards the click point at a fixed speed. In this post we'll see, first, how to get rid of Atan2(), Sin() and Cos() in our calculations, and, second, how to reduce the speed of the marble as it approaches the destination, so that it looks like the marble is easing into its final position. As I mentioned in one of the previous posts, this is achieved by making the speed of the marble a function of the distance between the marble and the destination point.

    Getting rid of Atan2(), Sin() and Cos()

    Ok, to be fair we are not exactly getting rid of these trigonometric functions; rather, we are replacing one form with another. So instead of writing sin(θ), we write y/length. You see the point. So instead of using the trig functions as below,

        double x = destX - marble1.x;
        double y = destY - marble1.y;
        //distance between destination and current position, before updating marble position
        distanceSqrd = x * x + y * y;
        double angle = Math.Atan2(y, x);
        //Cos and Sin give us the unit vector; 6 is the value we use to magnify the unit vector along the same direction
        incrX = speed * Math.Cos(angle);
        incrY = speed * Math.Sin(angle);
        marble1.x += incrX;
        marble1.y += incrY;

    we use the following,

        double x = destX - marble1.x;
        double y = destY - marble1.y;
        //distance between destination and marble (before updating marble position)
        lengthSqrd = x * x + y * y;
        length = Math.Sqrt(lengthSqrd);
        //unit vector along the same direction as vector(x, y)
        unitX = x / length;
        unitY = y / length;
        //update marble position
        incrX = speed * unitX;
        incrY = speed * unitY;
        marble1.x += incrX;
        marble1.y += incrY;

    so we replaced cos(θ) with x/length and sin(θ) with y/length. The result is the same.

    Adding oomph to the way it moves

    In the last post we had the speed of the marble fixed at 6,

        double speed = 6;

    To make the marble decelerate as it moves, we keep updating the speed of the marble in every frame, calculating the speed as a function of the length. So we may have,

        speed = length / 12;

    'length' keeps decreasing as the marble moves, and so does speed.
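    A quick sanity check on why this eases out (assuming one position update per timer tick and the marble heading straight for the destination): each tick moves the marble by speed = length/12, so the remaining distance becomes length - length/12 = (11/12) × length, roughly 0.917 × length. The remaining distance therefore decays geometrically: starting 300 pixels away, about 125 pixels remain after 10 ticks and about 22 after 30. Note that a pure geometric decay never reaches exactly zero, which is why the method below needs explicit stop conditions.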
    The Form1_MouseUp() function remains the same as before. Here is the UpdatePosition() method,

        private void UpdatePosition()
        {
            double incrX = 0, incrY = 0;
            double lengthSqrd = 0, length = 0, lengthSqrdNew = 0;
            double unitX = 0, unitY = 0;
            double speed = 0;

            double x = destX - marble1.x;
            double y = destY - marble1.y;

            //distance between destination and marble (before updating marble position)
            lengthSqrd = x * x + y * y;
            length = Math.Sqrt(lengthSqrd);

            //unit vector along the same direction as vector(x, y)
            unitX = x / length;
            unitY = y / length;

            //speed as a function of length
            speed = length / 12;

            //update marble position
            incrX = speed * unitX;
            incrY = speed * unitY;
            marble1.x += incrX;
            marble1.y += incrY;

            //check for bounds
            if ((int)marble1.x < MinX + marbleWidth / 2)
            {
                marble1.x = MinX + marbleWidth / 2;
            }
            else if ((int)marble1.x > (MaxX - marbleWidth / 2))
            {
                marble1.x = MaxX - marbleWidth / 2;
            }

            if ((int)marble1.y < MinY + marbleHeight / 2)
            {
                marble1.y = MinY + marbleHeight / 2;
            }
            else if ((int)marble1.y > (MaxY - marbleHeight / 2))
            {
                marble1.y = MaxY - marbleHeight / 2;
            }

            //distance between destination and marble (after updating marble position)
            x = destX - (marble1.x);
            y = destY - (marble1.y);
            lengthSqrdNew = x * x + y * y;

            /*
             * End Condition:
             * 1. If there is not much difference between lengthSqrd and lengthSqrdNew
             * 2. If the marble has moved more than or equal to a distance of totLenToTravel (see Form1_MouseUp)
             */
            x = startPosX - marble1.x;
            y = startPosY - marble1.y;
            double totLenTraveledSqrd = x * x + y * y;

            if ((int)totLenTraveledSqrd >= (int)totLenToTravelSqrd)
            {
                System.Console.WriteLine("Stopping because Total Len has been traveled");
                timer1.Enabled = false;
            }
            else if (Math.Abs((int)lengthSqrd - (int)lengthSqrdNew) < 4)
            {
                System.Console.WriteLine("Stopping because no change in Old and New");
                timer1.Enabled = false;
            }
        }

    A point to note here is that, in this implementation, the marble never stops because it has travelled a distance of totLenToTravelSqrd (the first if condition). This happens because speed is a function of the length. During the final few frames length becomes very small, and so does speed; the amount by which the marble shifts is therefore quite small, and the second if condition always hits true first.

    I'll end this series with a third post. In part 3 we will cover two things. One, when the user clicks, the marble keeps moving in that direction, rebounding off the screen edges, and keeps moving forever. Two, when the user clicks on the screen, the marble moves towards it, with its speed reducing every frame. It doesn't come to a halt when the destination point is reached; instead, it continues to move, rebounds off the screen edges and slowly comes to a halt. The amount of time that the marble keeps moving depends on how far the user clicks from the marble. I had mentioned this second situation here. Finally, here's a video of this program running.

    Read the article

  • Resolving data redundancy up front

    - by okeofs
    Introduction

    As all of us do when confronted with a problem, the resource of choice is to 'Google it'. This is where the plot thickens. Recently I was asked to stage data from numerous databases which were to be loaded into a data warehouse. To make a long story short, I was looking for a manner in which to obtain the table names from each database, to ascertain potential overlap. As the source data comes from a SQL database created from dumps of a third-party product, one could say that there were +/- 95 tables for each database.

    Yes, I know that the first instinct is to use the system stored procedure "exec sp_msforeachdb 'select "?" AS db, * from [?].sys.tables'". However, if one stops to think about this, it would be nice to have all the results in a temporary or disc-based table, which in itself implies additional labour. This said, I decided to 're-invent' the wheel. The full code sample may be found at the bottom of this article.

    Define a few temporary tables and variables

        declare @SQL varchar(max);
        declare @databasename varchar(75)

        /*
        drop table ##rawdata3
        drop table #rawdata1
        drop table #rawdata11
        */

        -- A temp table to hold the names of my databases
        CREATE TABLE #rawdata1
        (
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )

        -- A temp table with the same database names as above, HOWEVER using an
        -- identity number (recNo) as a loop variable.
        -- You will note below that I loop through until I reach 25, as at that
        -- point the system databases, the reporting server database etc. begin.
        -- 1-24 are user databases. These are really what I was looking for.
        -- Whilst NOT the best solution, it works, and the code was meant as a
        -- quick and dirty.
        CREATE TABLE #rawdata11
        (
           recNo int identity(1,1),
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )

        -- My output table showing the database name and table name
        CREATE TABLE ##rawdata3
        (
           database_name varchar(75),
           table_name varchar(75)
        )

    Insert the database names into a temporary table

    I pull the database names using the system stored procedure sp_databases:

        INSERT INTO #rawdata1
        EXEC sp_databases
        Go

    Insert the results from #rawdata1 into a table containing a record number, #rawdata11, so that I can LOOP through the extract:

        INSERT into #rawdata11
        select * from #rawdata1

    We now declare 3 more variables. @kounter is used to keep track of our position within the loop. @databasename is used to keep track of the 'current' database name being used in the current pass of the loop, as in order to obtain the tables for that database we need to issue a 'USE' statement, an insert command and other related code parts. This is the challenging part. @sql is a varchar(max) variable used to contain the 'USE' statement PLUS the 'insert' code statements. We now initialize @kounter to 1.

        declare @kounter int;
        declare @databasename varchar(75);
        declare @sql varchar(max);
        set @kounter = 1

    The Loop

    The astute reader will remember that the temporary table #rawdata11 contains our database names and each 'database row' has a record number (recNo). I am only interested in record numbers under 25. I now set the value of the temporary variable @databasename (see below). Note that I used the row number as a part of the predicate. Now, knowing the database name, I can create dynamic T-SQL to be executed using the sp_sqlexec stored procedure (see the code below).
    Finally, after all the tables for that given database have been placed in the temporary table ##rawdata3, I increment the counter and continue on. Note that I used a global temporary table to ensure that the result set persists after the termination of the run. At some stage, I plan to redo this part of the code, as global temporary tables are not really an ideal solution.

        WHILE (@kounter < 25)
        BEGIN
          select @databasename = database_name from #rawdata11 where recNo = @kounter
          set @SQL = 'Use ' + @databasename + ' Insert into ##rawdata3 '
                   + ' SELECT table_catalog, table_name FROM information_schema.tables'
          exec sp_sqlexec @SQL
          SET @kounter = @kounter + 1
        END

    The full code extract

    Here is the full code sample.

        declare @SQL varchar(max);
        declare @databasename varchar(75)

        /*
        drop table ##rawdata3
        drop table #rawdata1
        drop table #rawdata11
        */

        CREATE TABLE #rawdata1
        (
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )

        CREATE TABLE #rawdata11
        (
           recNo int identity(1,1),
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )

        CREATE TABLE ##rawdata3
        (
           database_name varchar(75),
           table_name varchar(75)
        )

        INSERT INTO #rawdata1
        EXEC sp_databases
        go

        INSERT into #rawdata11
        select * from #rawdata1

        declare @kounter int;
        declare @databasename varchar(75);
        declare @sql varchar(max);
        set @kounter = 1

        WHILE (@kounter < 25)
        BEGIN
          select @databasename = database_name from #rawdata11 where recNo = @kounter
          set @SQL = 'Use ' + @databasename + ' Insert into ##rawdata3 '
                   + ' SELECT table_catalog, table_name FROM information_schema.tables'
          exec sp_sqlexec @SQL
          SET @kounter = @kounter + 1
        END

        select * from ##rawdata3
        where table_name like '%SalesOrderHeader%'
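    As an aside, the sp_msforeachdb route mentioned in the introduction can also feed a table directly, which would avoid the counter loop entirely. A rough sketch only (untested against the environment described above, and note that it visits the system databases too, so they would still need to be filtered out afterwards):

        -- Sketch: sp_MSforeachdb expands '?' to each database name in turn,
        -- inserting straight into the same global temp table as above.
        CREATE TABLE ##rawdata3
        (
           database_name varchar(75),
           table_name varchar(75)
        )

        EXEC sp_MSforeachdb
        'INSERT INTO ##rawdata3
         SELECT table_catalog, table_name
         FROM [?].information_schema.tables'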

    Read the article

  • I'm a contract developer and I think I'm about to get screwed [closed]

    - by kagaku
    I do contract development on the side. You could say that I'm a contract developer? Considering I've only ever had one client, I'd say that's not exactly the truth - more like I took a side job and needed some extra cash. It started out as a "rebuild our website and we'll pay you $10k" type project. Once that was complete (a bit over schedule, but certainly not over budget), the company hired me on as a "long term support" contractor. The contract runs from March of this year, expiring on December 31st of this year - 10 months - over which a payment is to be made on the 30th of each month for a set amount.

    I've been fulfilling my end of the contract on all points - doing server maintenance, application and database changes, doing huge rush changes and pretty much just going above and beyond. Currently I'm in the middle of development of an iPhone mobile application (PhoneGap based) which is nearing completion (probably 3-4 weeks from submission).

    It has not been all peaches and flowers though. Each and every month when my paycheck comes due, there always seems to be an issue of sorts. These issues did not occur during the initial project, only during the support contract. The actual contract states that my check should be mailed out on the 30th of the month. I have received my check on time approximately once (on time being within about 2-3 days of the 30th). I've received my paycheck as late as the 15th of the next month - over two weeks late. I've put up with it because I need the paycheck. There have been promises and promises of "we'll send it out on time next time! I promise" - only to receive it just as late the next month. When I ask about payment they give me a vibe like "why are you only worried about money?" - unfortunately I don't have the luxury of not worrying about money.

    The last straw was with my August payment, which should have been mailed on August 30th. I received it on September 12th. The reason for the delay? "USPS is delaying it man! we sent it out on the 1st!" is the reason I got. When I finally got the check in the mail, the postage on the envelope was marked September 10th - the date it was run through the postage machine. I've been outright lied to, at this point.

    I carry on working, because again - I need a paycheck. I orchestrated the move of our application to a new server, developed a bunch of new changes and continued work on the iPhone app. All told I probably went over my hourly allotment (I'm paid for 40 hours a month, I probably put in at least 50). On Saturday the 1st, I gave the main contact at the company (a company of 3, by the way - this is not some big corporation) a ring and filled him in on the status of my work for the past two weeks. Unusually, I hadn't heard from him since the middle of September. His response was "oh... well, that is nice and uh.. good job. well, we've been talking within the company about things and we've certainly got some decisions ahead of us..." - not verbatim, but you get the idea (I hope?).

    I got out of this conversation that the site is not doing very well (which it's not) and they're considering pulling the plug. Crap, this contract is going to end early - there goes Christmas! Fine, that's alright, no problem. I'll get paid for the last month's work and call it a day. Unfortunately I still haven't gotten last month's check, and I'm getting dicked around now. "Oh.. we had problems transferring funds, we'll try and mail it out tomorrow" and "I left a VM with the finance guy, but I can't get ahold of him".
    So I'm getting the feeling I'm not getting paid for all the work I put in for September. This is obviously breach of contract, and I am pissed. Thinking irrationally, I considered changing all their passwords and holding their stuff hostage. Before I think it through (by the way, I am NOT going to do this - I realized it would probably get me in trouble), I go and try some passwords for our various accounts. Google Apps? Oh, I'm no longer administrator here. Godaddy? Whoops, invalid password. Disqus? Nope, invalid password here too. Google Adsense / Analytics? Invalid password. Dedicated server account manager? Invalid password.

    Now, I have the server's root password - I just built the box last week and haven't had a chance to send the guy the root password. I wasn't in a rush; I manage the server and they never touch it. Now all of a sudden all the passwords except this one are changed; the writing is on the wall - I am out.

    Here's the conundrum. I have the root password, they do not. If I give them this password, all the leverage I have is gone, out the door and out of my hands. During this argument over why I am not getting paid, the guy sends me an email saying "oh by the way, what's the root username and password to the server?". Considering he knows absolutely nothing, I gave him an "admin" account which really has almost no rights. I still have exclusive access to the server; I just don't know where to go.

    I can hold their data hostage, but I'm almost positive this is the wrong thing to do. I'd consider it blackmail, regardless of whether or not I have gotten paid yet.

    I can "break" something on the server and give them the whole "well, if you were paying me I could fix it!" spiel. This works from a "well, he's not holding their stuff hostage" point of view, but what stops them from hiring someone else to just fix the issue at hand? For all I know the guy's nephew is a "l33t hax0r" and can figure it out for free.

    I can give in, document as much as I can and take him to small claims court. This is breach of contract, and I'm not getting paid. I have a case, right?

    ????

    Does anyone have any experience in this? What can I do? What are my options? I'm broke; I can't afford a lawyer and I can barely afford not getting this paycheck. My wife doesn't work (I work two jobs so she doesn't have to - we have a 1 year old) and is already looking at getting a part time job to cover the bills. Long term we'll be fine, but this has pissed me off beyond belief! Help me out, I'm about to get screwed.

    Read the article

  • How to fix Solr - Server is shutting down issue?

    - by Krunal
    I had Solr 4.1 running on Windows Server 2008 R2. Solr is deployed on Tomcat. However, today it stopped suddenly, and accessing Solr now gives the following error:

        HTTP Status 503 - Server is shutting down
        type: Status report
        message: Server is shutting down
        description: The requested service is not currently available.

    On looking further into the logs, we got the following:

    Log File: tomcat7-stderr.2013-05-09.txt

        May 09, 2013 8:00:40 PM org.apache.solr.core.CoreContainer finalize
        SEVERE: CoreContainer was not shutdown prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=2221663

    Log File: catalina.2013-05-09.txt

        May 09, 2013 7:59:25 PM org.apache.solr.core.SolrResourceLoader <init>
        INFO: new SolrResourceLoader for directory: 'c:\solrdir\'
        May 09, 2013 7:59:29 PM org.apache.solr.common.SolrException log
        SEVERE: Exception during parsing file: null:org.xml.sax.SAXParseException; systemId: file:/c:/solr/solr.xml; lineNumber: 2; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed.
            at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source)
            at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPIData(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanPIData(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPI(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown Source)
            at org.apache.solr.core.Config.<init>(Config.java:121)
            at org.apache.solr.core.CoreContainer.load(CoreContainer.java:428)
            at org.apache.solr.core.CoreContainer.load(CoreContainer.java:404)
            at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:336)
            at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:98)
            at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:281)
            at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:262)
            at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:107)
            at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4656)
            at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5309)
            at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
            at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
            at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
            at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:633)
            at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:977)
            at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1655)
            at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
            at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
            at java.util.concurrent.FutureTask.run(Unknown Source)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
        May 09, 2013 7:59:29 PM org.apache.solr.servlet.SolrDispatchFilter init
        SEVERE: Could not start Solr. Check solr/home property and the logs
        May 09, 2013 7:59:29 PM org.apache.solr.common.SolrException log
        SEVERE: null:org.apache.solr.common.SolrException:
            at org.apache.solr.core.CoreContainer.load(CoreContainer.java:431)
            at org.apache.solr.core.CoreContainer.load(CoreContainer.java:404)
            at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:336)
            at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:98)
            at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:281)
            at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:262)
            at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:107)
            at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4656)
            at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5309)
            at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
            at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
            at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
            at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:633)
            at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:977)
            at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1655)
            at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
            at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
            at java.util.concurrent.FutureTask.run(Unknown Source)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
        Caused by: org.xml.sax.SAXParseException; systemId: file:/c:/solrdir/solr.xml; lineNumber: 2; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed.
            at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source)
            at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPIData(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanPIData(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPI(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown Source)
            at org.apache.solr.core.Config.<init>(Config.java:121)
            at org.apache.solr.core.CoreContainer.load(CoreContainer.java:428)
            ... 20 more
        May 09, 2013 7:59:29 PM org.apache.solr.servlet.SolrDispatchFilter init
        INFO: SolrDispatchFilter.init() done
        May 09, 2013 7:59:29 PM org.apache.catalina.startup.HostConfig deployDirectory
        INFO: Deploying web application directory C:\Program Files (x86)\Apache Software Foundation\Tomcat 7.0\webapps\docs
        May 09, 2013 7:59:30 PM org.apache.catalina.startup.HostConfig deployDirectory
        INFO: Deploying web application directory C:\Program Files (x86)\Apache Software Foundation\Tomcat 7.0\webapps\manager
        May 09, 2013 7:59:30 PM org.apache.catalina.startup.HostConfig deployDirectory
        INFO: Deploying web application directory C:\Program Files (x86)\Apache Software Foundation\Tomcat 7.0\webapps\ROOT
        May 09, 2013 7:59:30 PM org.apache.coyote.AbstractProtocol start
        INFO: Starting ProtocolHandler ["http-bio-8983"]
        May 09, 2013 7:59:30 PM org.apache.coyote.AbstractProtocol start
        INFO: Starting ProtocolHandler ["ajp-bio-8009"]
        May 09, 2013 7:59:30 PM org.apache.catalina.startup.Catalina start
        INFO: Server startup in 9578 ms
        May 09, 2013 8:00:40 PM org.apache.solr.core.CoreContainer finalize
        SEVERE: CoreContainer was not shutdown prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! instance=2221663

    Any idea what could be wrong and how to fix it?

    Read the article

  • Setting up a VPN connection to Amazon VPC - routing

    - by Keeno
    I am having some real issues setting up a VPN between our office and our AWS VPC. The "tunnels" appear to be up; however, I don't know if they are configured correctly. The device I am using is a Netgear VPN Firewall - FVS336GV2.

    If you look at the attached config downloaded from the VPC (#3: Tunnel Interface Configuration), it gives me some "inside" addresses for the tunnel. When setting up the IPsec tunnels, do I use the inside tunnel IPs (e.g. 169.254.254.2/30), or do I use my internal network subnet (10.1.1.0/24)? I have tried both. When I tried the local network (10.1.1.x), the tracert stops at the router. When I tried the "inside" IPs, the tracert to the Amazon VPC (10.0.0.x) goes out over the internet.

    This all leads me to the next question: for this router, how do I set up stage #4, the static next hop? What are these seemingly random "inside" addresses, and where did Amazon generate them from? 169.254.254.x seems odd? With a device like this, is the VPN behind the firewall?

    I have tweaked the IP addresses below so that they are not "real". I am fully aware this is probably badly worded. Please let me know if there is any further info/screenshots that will help.

        Amazon Web Services
        Virtual Private Cloud

        IPSec Tunnel #1
        ================================================================================
        #1: Internet Key Exchange Configuration

        Configure the IKE SA as follows:
        - Authentication Method    : Pre-Shared Key
        - Pre-Shared Key           : ---
        - Authentication Algorithm : sha1
        - Encryption Algorithm     : aes-128-cbc
        - Lifetime                 : 28800 seconds
        - Phase 1 Negotiation Mode : main
        - Perfect Forward Secrecy  : Diffie-Hellman Group 2

        #2: IPSec Configuration

        Configure the IPSec SA as follows:
        - Protocol                 : esp
        - Authentication Algorithm : hmac-sha1-96
        - Encryption Algorithm     : aes-128-cbc
        - Lifetime                 : 3600 seconds
        - Mode                     : tunnel
        - Perfect Forward Secrecy  : Diffie-Hellman Group 2

        IPSec Dead Peer Detection (DPD) will be enabled on the AWS Endpoint.
        We recommend configuring DPD on your endpoint as follows:
        - DPD Interval : 10
        - DPD Retries  : 3

        IPSec ESP (Encapsulating Security Payload) inserts additional headers
        to transmit packets. These headers require additional space, which
        reduces the amount of space available to transmit application data.
        To limit the impact of this behavior, we recommend the following
        configuration on your Customer Gateway:
        - TCP MSS Adjustment       : 1387 bytes
        - Clear Don't Fragment Bit : enabled
        - Fragmentation            : Before encryption

        #3: Tunnel Interface Configuration

        Your Customer Gateway must be configured with a tunnel interface that
        is associated with the IPSec tunnel. All traffic transmitted to the
        tunnel interface is encrypted and transmitted to the Virtual Private
        Gateway. The Customer Gateway and Virtual Private Gateway each have
        two addresses that relate to this IPSec tunnel. Each contains an
        outside address, upon which encrypted traffic is exchanged. Each also
        contains an inside address associated with the tunnel interface.

        The Customer Gateway outside IP address was provided when the Customer
        Gateway was created. Changing the IP address requires the creation of
        a new Customer Gateway. The Customer Gateway inside IP address should
        be configured on your tunnel interface.

        Outside IP Addresses:
        - Customer Gateway        : 217.33.22.33
        - Virtual Private Gateway : 87.222.33.42

        Inside IP Addresses:
        - Customer Gateway        : 169.254.254.2/30
        - Virtual Private Gateway : 169.254.254.1/30

        Configure your tunnel to fragment at the optimal size:
        - Tunnel interface MTU : 1436 bytes

        #4: Static Routing Configuration:

        To route traffic between your internal network and your VPC, you will
        need a static route added to your router.

        Static Route Configuration Options:
        - Next hop : 169.254.254.1

        You should add static routes towards your internal network on the VGW.
        The VGW will then send traffic towards your internal network over the
        tunnels.

        IPSec Tunnel #2
        ================================================================================
        #1: Internet Key Exchange Configuration

        Configure the IKE SA as follows:
        - Authentication Method    : Pre-Shared Key
        - Pre-Shared Key           : ---
        - Authentication Algorithm : sha1
        - Encryption Algorithm     : aes-128-cbc
        - Lifetime                 : 28800 seconds
        - Phase 1 Negotiation Mode : main
        - Perfect Forward Secrecy  : Diffie-Hellman Group 2

        #2: IPSec Configuration

        Configure the IPSec SA as follows:
        - Protocol                 : esp
        - Authentication Algorithm : hmac-sha1-96
        - Encryption Algorithm     : aes-128-cbc
        - Lifetime                 : 3600 seconds
        - Mode                     : tunnel
        - Perfect Forward Secrecy  : Diffie-Hellman Group 2

        IPSec Dead Peer Detection (DPD) will be enabled on the AWS Endpoint.
        We recommend configuring DPD on your endpoint as follows:
        - DPD Interval : 10
        - DPD Retries  : 3

        IPSec ESP (Encapsulating Security Payload) inserts additional headers
        to transmit packets. These headers require additional space, which
        reduces the amount of space available to transmit application data.
        To limit the impact of this behavior, we recommend the following
        configuration on your Customer Gateway:
        - TCP MSS Adjustment       : 1387 bytes
        - Clear Don't Fragment Bit : enabled
        - Fragmentation            : Before encryption

        #3: Tunnel Interface Configuration

        Outside IP Addresses:
        - Customer Gateway        : 217.33.22.33
        - Virtual Private Gateway : 87.222.33.46

        Inside IP Addresses:
        - Customer Gateway        : 169.254.254.6/30
        - Virtual Private Gateway : 169.254.254.5/30

        Configure your tunnel to fragment at the optimal size:
        - Tunnel interface MTU : 1436 bytes

        #4: Static Routing Configuration:

        Static Route Configuration Options:
        - Next hop : 169.254.254.5

        You should add static routes towards your internal network on the VGW.
        The VGW will then send traffic towards your internal network over the
        tunnels.

    EDIT #1

    After writing this post, I continued to fiddle and something started to work, just not very reliably. The local IPs to use when setting up the tunnels were indeed my network subnets. Which further confuses me over what these "inside" IP addresses are for. The problem is, results are not consistent whatsoever. I can "sometimes" ping, I can "sometimes" RDP using the VPN. Sometimes Tunnel 1 or Tunnel 2 can be up or down. When I came back into work today, Tunnel 1 was down, so I deleted it and re-created it from scratch. Now I can't ping anything, but Amazon AND the router are telling me tunnels 1/2 are fine. I guess the router/VPN hardware I have just isn't up to the job...

    EDIT #2

    Now Tunnel 1 is up, Tunnel 2 is down (I didn't change any settings) and I can ping/RDP again.

    EDIT #3

    Screenshot of the route table that the router has built up. Current state: tunnel 1 still up and going strong, tunnel 2 still down and won't reconnect.

    Read the article

  • All downloads being interrupted

    - by Jake
    System: Windows 7 Professional 64-bit, 8GB RAM, Intel i5-2400 CPU, +300GB free on the hard drive, AVG Internet Security 2012 (enabled & disabled, with firewall enabled and disabled - no effect either way). This computer is less than a year old.

    Network: This problem is occurring on a single computer on a network with multiple computers. The router is a Motorola Netopia 3347-02 (DSL modem/wireless router combined). The computer is plugged in directly to the modem; other computers are using the wireless successfully. The router has been reset. The only thing odd about the connection between the router and the computer is that it is configured to allow RDP through, so it is assigned a static IP by the router and port forwarding is enabled for port 3389. Also, though I doubt it matters, a second wireless router is active behind this router providing a second network that some computers in the area use without issues.

    Details: All downloads initiated on this specific computer eventually fail; this includes streaming from YouTube, specialized downloads (iTunes), downloads from websites, FTP downloads, etc. Failure occurs with all browsers, but in Chrome this is the process it takes:

    1) The download begins normally.
    2) At some point between 7MB and 229MB (observed), the download stops progressing (at this point, if watching Chrome's task manager, you can see the network activity for the downloading tab drop to zero).
    3) For some time the download sits there, still attempting to complete, but will eventually display "123,049,871/0 B, Interrupted" (where the number is whatever it actually got to).

    The file I am using to test this is a very large .zip file located on a server I control, but the problem seems to occur on any site. The amount downloaded is completely random and seems to be more time-based than anything (if I start a download immediately after the last one fails, it tends to get further than the last one). Small files can get through for this reason, though they can fail as well. In a test where I simultaneously downloaded the same file via HTTP (Chrome) and FTP (Windows Explorer), both downloads failed at the same instant, though Explorer displayed "Connection timed out" several minutes before Chrome finally showed the download as interrupted.
    Other things I have tried, based on advice given to people with similar/identical problems:

    - Setting my MTU to 1492 (as described here: http://blog.thecompwiz.com/2011/08/networking-issues.html)
    - Disabling write caching to the hard drive
    - Storing the download on an external device
    - Successfully transmitted a +1GB file from one computer on the same network to this computer
    - Disabling indexing in the folder the download was being stored in
    - Disabling all security software
    - Checked to make sure all drivers were up to date
    - Read about 50 accounts with nearly exact descriptions of what I'm experiencing, none of which had a solution given

    Running processes:

        Image Name                 PID   Session Name  Session#  Mem Usage
        =========================  ====  ============  ========  =========
        System Idle Process        0     Services      0         24 K
        System                     4     Services      0         104,836 K
        smss.exe                   332   Services      0         1,276 K
        csrss.exe                  764   Services      0         5,060 K
        wininit.exe                820   Services      0         4,748 K
        csrss.exe                  844   Console       1         23,764 K
        services.exe               876   Services      0         11,856 K
        lsass.exe                  892   Services      0         14,420 K
        lsm.exe                    900   Services      0         7,820 K
        winlogon.exe               944   Console       1         7,716 K
        svchost.exe                428   Services      0         12,744 K
        svchost.exe                796   Services      0         12,240 K
        svchost.exe                1036  Services      0         22,372 K
        svchost.exe                1084  Services      0         174,132 K
        svchost.exe                1112  Services      0         56,144 K
        svchost.exe                1288  Services      0         18,640 K
        svchost.exe                1404  Services      0         29,616 K
        spoolsv.exe                1576  Services      0         25,924 K
        svchost.exe                1616  Services      0         12,788 K
        AppleMobileDeviceService.  1728  Services      0         9,796 K
        avgwdsvc.exe               1820  Services      0         8,268 K
        mDNSResponder.exe          1844  Services      0         5,832 K
        w3dbsmgr.exe               1108  Services      0         43,760 K
        QBCFMonitorService.exe     1336  Services      0         16,408 K
        svchost.exe                2404  Services      0         28,240 K
        taskhost.exe               3020  Console       1         12,372 K
        dwm.exe                    2280  Console       1         5,968 K
        explorer.exe               2964  Console       1         152,476 K
        WUDFHost.exe               3316  Services      0         6,740 K
        svchost.exe                3408  Services      0         5,556 K
        RAVCpl64.exe               3684  Console       1         13,864 K
        igfxtray.exe               3700  Console       1         7,804 K
        hkcmd.exe                  3772  Console       1         7,868 K
        igfxpers.exe               3788  Console       1         10,940 K
        sidebar.exe                3836  Console       1         84,400 K
        chrome.exe                 3964  Console       1         19,640 K
        pptd40nt.exe               4068  Console       1         5,156 K
        acrotray.exe               3908  Console       1         14,676 K
        avgtray.exe                3872  Console       1         9,508 K
        jusched.exe                4076  Console       1         4,412 K
        iTunesHelper.exe           1532  Console       1         87,308 K
        SearchIndexer.exe          3492  Services      0         36,948 K
        iPodService.exe            4136  Services      0         7,944 K
        BrccMCtl.exe               4276  Console       1         18,132 K
        splwow64.exe               4380  Console       1         32,600 K
        qbupdate.exe               4836  Console       1         24,236 K
        svchost.exe                4288  Services      0         20,700 K
        wmpnetwk.exe               3112  Services      0         9,516 K
        FNPLicensingService.exe    5248  Services      0         5,852 K
        QBW32.EXE                  5508  Console       1         127,068 K
        QBDBMgrN.exe               5600  Services      0         42,252 K
        EXCEL.EXE                  2512  Console       1         99,100 K
        LMS.exe                    3188  Services      0         5,616 K
        UNS.exe                    1600  Services      0         7,308 K
        axlbridge.exe              5260  Console       1         5,132 K
        chrome.exe                 5888  Console       1         200,336 K
        chrome.exe                 3536  Console       1         26,076 K
        chrome.exe                 1952  Console       1         20,168 K
        chrome.exe                 4596  Console       1         24,696 K
        chrome.exe                 4292  Console       1         48,096 K
        chrome.exe                 2796  Console       1         23,520 K
        Acrobat.exe                1240  Console       1         87,252 K
        123w.exe                   4892  Console       1         22,728 K
        calc.exe                   1700  Console       1         12,636 K
        chrome.exe                 1328  Console       1         28,888 K
        chrome.exe                 3696  Console       1         47,012 K
        rundll32.exe               6320  Console       1         7,104 K
        chrome.exe                 4928  Console       1         44,248 K
        AVGIDSAgent.exe            260   Services      0         12,940 K
        avgfws.exe                 6052  Services      0         26,912 K
        avgnsa.exe                 5064  Services      0         2,496 K
        avgrsa.exe                 3088  Services      0         2,200 K
        avgcsrva.exe               2596  Services      0         380 K
        avgcsrva.exe               6948  Services      0         408 K
        StikyNot.exe               452   Console       1         14,772 K
        chrome.exe                 4580  Console       1         28,200 K
        chrome.exe                 4016  Console       1         57,756 K
        svchost.exe                7140  Services      0         4,500 K
        chrome.exe                 6264  Console       1         56,824 K
        chrome.exe                 7008  Console       1         56,896 K
        chrome.exe                 2224  Console       1         38,032 K
        taskhost.exe               612   Console       1         7,228 K
        chrome.exe                 6000  Console       1         10,928 K
        chrome.exe                 2568  Console       1         43,052 K
        chrome.exe                 272   Console       1         75,988 K
        chrome.exe                 7328  Console       1         53,240 K
        PaprPort.exe               7976  Console       1         137,152 K
        pplinks.exe                7500  Console       1         14,052 K
        ppscanmg.exe               5744  Console       1         18,996 K
        taskeng.exe                7388  Console       1         6,308 K
        SearchProtocolHost.exe     8024  Services      0         8,804 K
        SearchFilterHost.exe       7232  Services      0         7,848 K
        chrome.exe                 8016  Console       1         37,440 K
        cmd.exe                    7692  Console       1         3,096 K
        conhost.exe                7516  Console       1         5,872 K
        tasklist.exe               8160  Console       1         5,772 K
        WmiPrvSE.exe               7684  Services      0         6,400 K

    Any help with this would be greatly appreciated; I've been beating my head against a wall over this all day. This computer serves a dual purpose as the main company document server and the owner's work computer. It's fairly important that it be fully functional, and I cannot figure this out.

    Read the article

  • arp problems with transparent bridge on linux

    - by Mink
    I've been trying to secure my virtual machines on my ESX server by putting them behind a transparent bridge with 2 interfaces, one in front, one at the back. My intention is to put all the firewall rules in one place (instead of on each virtual server). I've been using as the bridge a blank new virtual machine based on Arch Linux (but I suspect it doesn't matter which brand of Linux it is).

    What I have is 2 virtual switches (thus two Virtual Networks, VN_front and VN_back), each with 2 types of ports (switched/separated, or promiscuous/where the machine can see all packets). On my bridge machine, I've set up 2 virtual NICs, one on VN_front, one on VN_back, both in promiscuous mode. I've created a bridge br0 with both NICs in it:

        brctl addbr br0
        brctl stp br0 off
        brctl addif br0 front_if
        brctl addif br0 back_if

    Then brought them up:

        ifconfig front_if 0.0.0.0 promisc
        ifconfig back_if 0.0.0.0 promisc
        ifconfig br0 0.0.0.0

    (I use promiscuous mode because I'm not sure I can do without it, thinking that maybe the packets don't reach the NICs otherwise.)

    Then I took one of my virtual servers sitting on VN_front and plugged it into VN_back instead (that's the nifty use case I'm thinking about: being able to move my servers around just by changing the VN they are plugged into, without changing anything in the configuration). Then I looked into the MACs "seen" by my addressless bridge using

        brctl showmacs br0

    and it did show my server from both sides. I get something that looks like this:

        port no  mac addr           is local?  ageing timer
        2        00:0c:29:e1:54:75  no         9.27
        1        00:0c:29:fd:86:0c  no         9.27
        2        00:50:56:90:05:86  no         73.38
        1        00:50:56:90:05:88  no         0.10
        2        00:50:56:90:05:8b  yes        0.00   << FRONT VN
        1        00:50:56:90:05:8c  yes        0.00   << BACK VN
        2        00:50:56:90:19:18  no         13.55
        2        00:50:56:90:3c:cf  no         13.57

    The thing is that the servers plugged in front/back are not shown on the correct port. I suspect some horrible thing happening in the ARP world... :-/

    If I ping from a front virtual server to a back virtual server, I can only see the back machine if that back machine pings something in the front. As soon as I stop the ping from the back machine, the ping from the front machine stops getting through... I've noticed that if the back machine pings, then its port on the bridge is the correct one...

    I've tried to play with the arp_* switches of /proc/sys, but with no clear effect on the end result...

    - /proc/sys/net/ipv4/ip_forward doesn't seem to be of any use when using a bridge (it seems it's all taken care of by brctl).
    - /proc/sys/net/ipv4/conf/*/arp_* don't seem to change much either... (I tried arp_announce at 2 or 8 - like suggested elsewhere - and arp_ignore at 0 or 1.)

    All the examples I've seen have a different subnet on either side, like 10.0.1.0/24 and 10.0.2.0/24... In my case I want 10.0.1.0/24 on both sides (just like a transparent switch - except it's a hidden fw). Turning stp on/off doesn't seem to have any impact on my issue. It's as if the ARP packets were getting through the bridge, corrupting the other side with false data... I've tried to use the -arp option on each interface - br0, front, back... it breaks the thing altogether... I suspect it has something to do with both sides being on the same subnet...

    I've thought about putting all my machines behind the fw, so as to have the whole subnet at the back... but I'm stuck with my provider's gateway standing at the front with part of my subnet (in fact 3 appliances to route the whole subnet), so I'll always have IPs from the same subnet on both sides, whatever I do... (I'm using fixed front IPs on my delegated subnet.) I'm at a loss... -_-''

    Thx for your help. (Has anyone tried something like this from within ESXi?)

    (It's not just a stunt; the idea is to have something like fail2ban running on some servers, sending their banned IPs to the bridge/fw so that it too could ban them - saving all the other servers from that same attacker in one go, allowing for some honeypot that would trigger the fw from any kind of suitable response, and stuff of that sort... I am aware I could use something like snort, but it addresses a completely different kind of problem, in a completely different way...)

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is mainly there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will.

    I have tried several things - here is a list:

    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run Disk Warrior on the copy, repair whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value - wide open 777
    - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer data to a newly formatted drive folder by folder, resetting the Spotlight index in between adding each to observe for issues (interesting here is that no issues occurred except for in the Documents folder - when I transferred only the Documents folder to a newly formatted drive on its own, no trouble; it appears almost as though it may not be the content but the quantity or specific combination of data that results in problems)
    - use Data Rescue to transfer the data to yet another newly formatted drive to expose any missed hidden files

    Between each of the above steps I stopped Spotlight (search for anything beginning with md in Activity Monitor - All Processes - and quit it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it.

    In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated index time increases, usually to 4 hours remaining. This is where it gets stuck; it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the 4-hours-remaining situation, if I reattach it, Spotlight forever estimates remaining time and never gets going again.

    So there it is. It is apparently not a filesystem issue, not a permissions issue, and not tied to any particular piece of hardware or protocol (I used USB and FW drives). I have tried this on several machines (3 to be precise) and in 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option, because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas?

    --- update 2-6-11 ---

    Since I have not received any responses except the one below, which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command

        sudo opensnoop -p PID

    where PID is the mdworker process ID, to try and determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: after indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes" and the Terminal window with the opensnoop result stops moving.

    I then proceeded to execute the same command on mds to see what it was doing, and here's what I get, repeatedly:

        501 57 mds 21 /
        501 57 mds 21 /Volumes/Sno Leppard
        501 57 mds 21 /Volumes/Tiger
        501 57 mds 21 /Volumes/Leppard
        501 57 mds 21 /Volumes/Disk Warrior
        501 57 mds 21 /Volumes/ONM Data

    These represent all the volumes currently mounted in the system. All except ONM Data, which is the one I am trying to index, are excluded from Spotlight indexing at the moment. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions: what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M

    __final edit__

    I finally resolved the issue, and here is how I did it. I used the terminal command "sudo opensnoop -p PID", where the PID is the process ID of the processes I was monitoring. I was looking at all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see info above), I contacted Apple and got to their highest level of support - they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated: mds and mdworker(s) would start indexing, and eventually the Spotlight icon would say 6 hours remaining, all mdworkers were gone, and mds sat at 90% or so of CPU.

    But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from Spotlight search, and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again.

    I have no clue what is causing this behavior, still, but I did find a functional solution to the problem. Anyone with a similar problem: run opensnoop on all instances of mds and mdworker and wait patiently for mdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find an actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio-related files - Performer, Live, SD2, things like that), I will edit again to let you all know. Thanks for any attempts to help, M

    Read the article

  • Wireless internet connection connects but internet does not work (no packets received). Wired does.

    - by Rodney
    When I connect my PC via ethernet cable to my ADSL router it works fine. When I connect via wireless it connects, and the internet will work for a random amount of time and then stop working. It stays connected with a strong signal, but no packets are received. My laptop/iPhone are right next to it and wireless works fine on them.

    If I open the Wireless USB status, it says it is connected to my SSID with full strength (54 Mbps - I am 3 metres away from my router) and the activity shows as Packets: 594 SENT and 105 RECEIVED (this goes up VERY slowly).

    I have tried the following:

    - Turned off antivirus and firewall completely.
    - Tested the wifi signal - I am writing this on my laptop, which is next to my PC and also has full wifi strength.
    - Tried a different wireless adapter - I dug out an old PCI wireless card - it does the exact same thing.
    - Compared all wireless settings to my laptop.
    - I can ping google.com and it replies (sometimes with packet loss).
    - When I reboot the PC it will connect for a minute or two (random time) and then just stops again.
    - Tried Firefox, IE etc. - no joy.
    - Updated to all latest versions (Netgear WG111v2) and drivers.
    - Checked the Event Log - nothing unusual.
    - Pinged the router (and can even connect as admin for the few minutes when the internet does work).
    - Changed the MTU down to 1200 using DrTCP.
    - Checked Device Manager for conflicts - none.
    - I ping the router from the PC (192.168.0.10 - 192.168.0.1) and it replies with 4 packets. BUT, on my router admin page (which I access via http on my laptop wirelessly), if I ping 192.168.0.10 all packets time out (pinging my laptop, 192.168.0.12, works fine).
    - My router admin page shows the leased IP address for 192.168.0.10 (i.e. it is definitely talking to the router initially).

    Now I am out of ideas - please help. I think it is an OS/software issue, as I have tried 2 different wireless adapters (PCI and USB) with the same result, but all other wireless devices work fine around mine. It's not the firewall. It is getting assigned an IP address correctly (my PC gets 192.168.0.10, my laptop is .12), assigned by DHCP. As soon as I plug in the ethernet cable it all works fine. Repairing the adapter sometimes helps, but it will always stop working after a random time. The wireless adapter always shows as connected with Excellent signal, but the internet does not work. I am running Windows XP SP3 and have tried a Netgear WG111v2 USB adapter. Thanks in advance!

    UPDATE: The internet seems to be working; it is just either sending packets too small or too slowly to work (some small pages load bits of themselves very slowly but then hang).
    XP seems to have a networking diagnostic app - here is the output:

        Last diagnostic run time: 08/30/10 08:16:38

        IP Configuration Diagnostic
          Invalid IP address
            info    Valid IP address detected: 192.168.0.10

        IP Layer Diagnostic
          Corrupted IP routing table
            info    The default route is valid
            info    The loopback route is valid
            info    The local host route is valid
            info    The local subnet route is valid
          Invalid ARP cache entries
            action  The ARP cache has been flushed

        Gateway Diagnostic
          Gateway
            info    The following proxy configuration is being used by IE:
                    Automatically Detect Settings: Disabled
                    Automatic Configuration Script:
                    Proxy Server:
                    Proxy Bypass list:
            info    This computer has the following default gateway entry(ies): 192.168.0.1
            info    This computer has the following IP address(es): 192.168.0.10
            info    The default gateway is in the same subnet as this computer
            info    The default gateway entry is a valid unicast address
            info    The default gateway address was resolved via ARP in 1 try(ies)
            info    The default gateway was reached via ICMP Ping in 1 try(ies)
            info    TCP port 80 on host 65.55.12.249 was successfully reached
            info    The Internet host www.microsoft.com was successfully reached
            info    The default gateway is OK

        DNS Client Diagnostic
          DNS - Not a home user scenario
            info    Using Web Proxy: no
            info    Resolving name ok for (www.microsoft.com): yes
          No DNS servers
          DNS failure

        HTTP, HTTPS, FTP Diagnostic
          HTTP, HTTPS, FTP connectivity
            info    FTP (Passive): Successfully connected to ftp.microsoft.com.
            info    HTTP: Successfully connected to www.microsoft.com.
            warn    HTTPS: Error 12002 connecting to www.microsoft.com: The operation timed out
            warn    HTTPS: Error 12002 connecting to www.passport.net: The operation timed out
            error   Could not make an HTTPS connection.
            info    Redirecting user to support call

        WinSock Diagnostic
          WinSock status
            info    All base service provider entries are present in the Winsock catalog.
            info    The Winsock Service provider chains are valid.
            info    Provider entry MSAFD Tcpip [TCP/IP] passed the loopback communication test.
            info    Provider entry MSAFD Tcpip [UDP/IP] passed the loopback communication test.
            info    Provider entry RSVP UDP Service Provider passed the loopback communication test.
            info    Provider entry RSVP TCP Service Provider passed the loopback communication test.
            info    Connectivity is valid for all Winsock service providers.

        Wireless Diagnostic
          Wireless - Service disabled
          Wireless - User SSID
            action  User input required: Specify network name or SSID
          Wireless - First time setup
            info    The Wireless Network name (SSID) to which the user would like to connect = RodSof Wifi.
          Wireless - Radio off
            info    Valid IP address detected: 192.168.0.10
          Wireless - Out of range
          Wireless - Hardware issue
          Wireless - Novice user
          Wireless - Ad-hoc network
          Wireless - Less preferred
          Wireless - 802.1x enabled
          Wireless - Configuration mismatch
          Wireless - Low SNR

        Network Adapter Diagnostic
          Network location detection
            info    Using home Internet connection
          Network adapter identification
            info    Network connection: Name=Local Area Connection 2, Device=Realtek RTL8168C(P)/8111C(P) PCI-E Gigabit Ethernet NIC, MediaType=LAN, SubMediaType=LAN
            info    Network connection: Name=Wireless USB, Device=NETGEAR WG111v2 54Mbps Wireless USB 2.0 Adapter, MediaType=LAN, SubMediaType=WIRELESS
            info    Both Ethernet and Wireless connections available, prompting user for selection
            action  User input required: Select network connection
            info    Wireless connection selected
          Network adapter status
            info    Network connection status: Connected

        HTTP, HTTPS, FTP Diagnostic
          HTTP, HTTPS, FTP connectivity
            info    FTP (Active): Successfully connected to ftp.microsoft.com.
            warn    HTTP: Error 12007 connecting to www.microsoft.com: The server name or address could not be resolved
            warn    HTTP: Error 12002 connecting to www.hotmail.com: The operation timed out
            warn    HTTPS: Error 12002 connecting to www.passport.net: The operation timed out
            warn    HTTPS: Error 12002 connecting to www.microsoft.com: The operation timed out
            error   Could not make an HTTP connection.
            error   Could not make an HTTPS connection.

    Read the article

  • Cluster failover and strange gratuitous arp behavior

    - by lazerpld
    I am experiencing a strange Windows 2008 R2 cluster-related issue that is bothering me. I feel I have come close to what the issue is, but I still don't fully understand what is happening. I have a two-node Exchange 2007 cluster running on two 2008 R2 servers. The Exchange cluster application works fine when running on the "primary" cluster node. The problem occurs when failing over the cluster resource to the secondary node. When failing over to the "secondary" node, which is on the same subnet as the "primary", the failover initially works OK and the cluster resource continues to work for a couple of minutes on the new node. That means the receiving node does send out a gratuitous ARP reply packet that updates the ARP tables on the network. But after some amount of time (typically within 5 minutes) something updates the ARP tables again, because all of a sudden the cluster service does not answer pings. So basically: I start a ping to the Exchange cluster address while it is running on the primary node, and it works just great. I fail over the cluster resource group to the secondary node and lose only one ping, which is acceptable. The cluster resource still answers for some time after being failed over, and then all of a sudden the pings start timing out. This tells me that the ARP table is initially updated by the secondary node, but then something (which I haven't found yet) wrongfully updates it again, probably with the primary node's MAC. Why does this happen - has anyone experienced the same problem? The cluster is NOT running NLB, and the problem stops immediately after failing back to the primary node, where there are no problems. Each node uses Intel NIC teaming with ALB. Each node is on the same subnet and has its gateway and so on entered correctly, as far as I can tell.

Edit: I was wondering if it could be related to network binding order, because the only difference I can see from node to node is in the local ARP table: on the "primary" node the ARP table is generated with the cluster address as the source, while on the "secondary" it is generated from the node's own network card. Any input on this?

Edit: OK, here is the connection layout. Cluster address: A.B.6.208/25. Exchange application address: A.B.6.212/25. Node A: 3 physical NICs - two teamed using Intel's teaming with the address A.B.6.210/25, called "public", and the last one used for cluster traffic, called "private", with 10.0.0.138/24. Node B: 3 physical NICs - two teamed using Intel's teaming with the address A.B.6.211/25, called "public", and the last one used for cluster traffic, called "private", with 10.0.0.139/24. Each node sits in a separate datacenter, connected together; the end switches are Cisco in DC1 and Nexus 5000/2000 in DC2.

Edit: I have been testing a little more. I have now created an empty application on the same cluster and given it another IP address on the same subnet as the Exchange application. After failing this empty application over, I see exactly the same problem occurring. After one or two minutes, clients on other subnets cannot ping the virtual IP of the application - but while they cannot, another server from another cluster on the same subnet has no trouble pinging it. And if I then fail back to the original state, the situation is reversed: now clients on the same subnet cannot ping, while those on other subnets can.
We have another cluster set up the same way, on the same subnet, with the same Intel network cards, the same drivers and the same teaming settings. There we are not seeing this, so it's somewhat confusing.

Edit: OK, done some more research. I removed the NIC teaming from the secondary node, since it didn't work anyway. After some standard problems following that, I finally managed to get it up and running again with the old NIC teaming settings on one single physical network card. Now I am not able to reproduce the problem described above, so it is somehow related to the teaming - maybe some kind of bug?

Edit: Did some more failing over without being able to make it fail, so removing the NIC team looks like it was a workaround. I then tried to re-establish the Intel NIC teaming with ALB (as it was before), and I still cannot make it fail. This is annoying, because now I actually cannot pinpoint the root of the problem; it just seems to be some kind of MS/Intel hiccup, which is hard to accept - what if the problem reoccurs in 14 days? One strange thing did happen, though: after recreating the NIC team I was not able to rename the team to "PUBLIC", which the old team was called. So something has not been cleaned up in Windows - although the server HAS been restarted!

Edit: OK, after re-establishing the ALB teaming the error came back. So I am now going to do some thorough testing and I will get back with my observations. One thing is for sure: it is related to Intel 82575EB NICs, ALB and gratuitous ARP.
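One way to catch the flip in the act is to log every change to the ARP entry for the virtual IP from an affected client. A minimal, hypothetical C# watcher that polls the local ARP cache once a second (the address below is a placeholder for the application IP):

using System;
using System.Diagnostics;
using System.Linq;
using System.Threading;

class ArpWatch
{
    static void Main()
    {
        const string clusterIp = "A.B.6.212"; // hypothetical: the virtual IP to watch
        string lastMac = null;
        while (true)
        {
            // "arp -a" dumps the local ARP cache; pick out the line for our IP
            var psi = new ProcessStartInfo("arp", "-a")
            {
                RedirectStandardOutput = true,
                UseShellExecute = false
            };
            using (var p = Process.Start(psi))
            {
                string line = p.StandardOutput.ReadToEnd()
                    .Split('\n')
                    .FirstOrDefault(l => l.Contains(clusterIp));
                var parts = line == null
                    ? new string[0]
                    : line.Trim().Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                string mac = parts.Length > 1 ? parts[1] : "<no entry>";
                if (mac != lastMac)
                {
                    Console.WriteLine("{0}: {1} -> {2}", DateTime.Now, clusterIp, mac);
                    lastMac = mac;
                }
            }
            Thread.Sleep(1000);
        }
    }
}

Timestamped output like this, captured on a client in each subnet, makes it easier to correlate the moment the MAC reverts with whatever the teaming driver logs at the same time.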

    Read the article

  • ffmpeg add two audio streams to video

    - by Tossin Hausen
    I tried this:

ffmpeg -i /sdcard/video/transcode/video.avi -map 0:0,0 -i /sdcard/video/transcode/first.mp3 -map 1:0,1 -i /sdcard/video/transcode/second.mp3 -map 2:0,2 -acodec copy -vcodec copy /sdcard/video/transcode/Output.avi

to add two audio streams to one video file, but ffmpeg says the number of mappings should match the number of output streams. What is wrong here? I'm trying to work with an Android build of FFmpeg ("ffmpeg for android beta"). "Does not work" means that this uncommunicative Android build of FFmpeg just stops without giving any error message. The -codec copy option does not work with this build. I then tried the same set of files with the FFmpeg command-line tool that comes with Ubuntu 10.something (I can't say exactly which build it is). The -codec copy option does not work with this FFmpeg either. Here is the complete output:

m30x:~/movie/Film$ ffmpeg -i input.avi -i first.mp3 -i second.mp3 -map 0 -map 1 -map 2 -acodec copy -vcodec copy output.avi
FFmpeg version SVN-r0.5.9-4:0.5.9-0ubuntu0.10.04.1, Copyright (c) 2000-2009 Fabrice Bellard, et al.
  configuration: --extra-version=4:0.5.9-0ubuntu0.10.04.1 --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --disable-stripping --disable-vhook --enable-runtime-cpudetect --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --enable-shared --disable-static
  libavutil   49.15. 0 / 49.15. 0
  libavcodec  52.20. 1 / 52.20. 1
  libavformat 52.31. 0 / 52.31. 0
  libavdevice 52. 1. 0 / 52. 1. 0
  libavfilter  0. 4. 0 /  0. 4. 0
  libswscale   0. 7. 1 /  0. 7. 1
  libpostproc 51. 2. 0 / 51. 2. 0
  built on Jun 12 2012 16:27:34, gcc: 4.4.3
[NULL @ 0x93cfd10]looks like this file was encoded with (divx4/(old)xvid/opendivx) -> forcing low_delay flag
Seems stream 0 codec frame rate differs from container frame rate: 30000.00 (30000/1) -> 25.00 (25/1)
Input #0, avi, from 'input.avi':
  Duration: 01:30:33.00, start: 0.000000, bitrate: 901 kb/s
    Stream #0.0: Video: mpeg4, yuv420p, 576x432, 25 tbr, 25 tbn, 30k tbc
Input #1, mp3, from 'first.mp3':
  Duration: 01:30:32.84, start: 0.000000, bitrate: 63 kb/s
    Stream #1.0: Audio: mp3, 22050 Hz, stereo, s16, 64 kb/s
Input #2, mp3, from 'second.mp3':
  Duration: 01:30:32.84, start: 0.000000, bitrate: 63 kb/s
    Stream #2.0: Audio: mp3, 22050 Hz, stereo, s16, 64 kb/s
Number of stream maps must match number of output streams

Merging only one audio stream with the video stream works with both the Ubuntu and the Android versions of FFmpeg. Here is the complete output:

ffmpeg -i input.avi -i first.mp3 -map 0 -map 1 -acodec copy -vcodec copy output.avi
[version banner, configuration and library versions identical to the run above]
[NULL @ 0x9bfad10]looks like this file was encoded with (divx4/(old)xvid/opendivx) -> forcing low_delay flag
Seems stream 0 codec frame rate differs from container frame rate: 30000.00 (30000/1) -> 25.00 (25/1)
Input #0, avi, from 'input.avi':
  Duration: 01:30:33.00, start: 0.000000, bitrate: 901 kb/s
    Stream #0.0: Video: mpeg4, yuv420p, 576x432, 25 tbr, 25 tbn, 30k tbc
Input #1, mp3, from 'first.mp3':
  Duration: 01:30:32.84, start: 0.000000, bitrate: 63 kb/s
    Stream #1.0: Audio: mp3, 22050 Hz, stereo, s16, 64 kb/s
Output #0, avi, to 'output.avi':
    Stream #0.0: Video: mpeg4, yuv420p, 576x432, q=2-31, 90k tbn, 25 tbc
    Stream #0.1: Audio: libmp3lame, 22050 Hz, stereo, s16, 64 kb/s
Stream mapping:
  Stream #0.0 -> #0.0
  Stream #1.0 -> #0.1
Press [q] to stop encoding
frame= 6157 fps=6156 q=-1.0 size= 31667kB time=246.28 bitrate=1053.3kbits/s

Do you have an idea why it does not work with two audio streams? By the way, ffmpeg -i input_with_first_audio_stream.avi -i second.mp3 -acodec copy -vcodec copy output_two_audio_streams.avi -newaudio works with both versions of ffmpeg that I use, but the first audio stream is played too fast (x10 or more), while the second audio stream is played correctly. Many thanks in advance, and sorry for the unconventional question and outdated versions of ffmpeg; I am a beginner and it is not so easy for me to compile from source (especially for the Android version). I will try to compile an up-to-date version of ffmpeg on Ubuntu, but I don't have much free time.
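For what it's worth, ffmpeg builds of that era (0.5/0.6) created extra output streams with -newaudio placed after the output file name, together with the options for that new stream - which is what the "number of stream maps must match number of output streams" message is hinting at. An untested sketch along those lines (file names are placeholders, and whether stream copy works will still depend on the build, as noted above):

ffmpeg -i input.avi -i first.mp3 -i second.mp3 \
  -map 0:0 -map 1:0 -map 2:0 \
  -vcodec copy -acodec copy output.avi -acodec copy -newaudio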

    Read the article

  • PhpMyAdmin Hangs On MySQL Error

    - by user75228
    I'm currently running phpMyAdmin 4.0.10 (the latest version supporting PHP 4.2.x) on my Amazon EC2 instance, connecting to a MySQL database on RDS. Everything works perfectly fine except actions that return a MySQL error message: whenever I perform any kind of action that returns a MySQL error, phpMyAdmin hangs with the yellow "Loading" box forever without displaying anything. For example, if I run the following command in the MySQL CLI:

select * from 123;

it instantly returns the following error:

ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '123' at line 1

which is completely normal, because table 123 doesn't exist. However, if I execute the exact same command in the "SQL" box in phpMyAdmin, after I click "Go" it displays "Loading" and stops there forever. Has anyone ever encountered this kind of issue with phpMyAdmin? Is this a bug, or is there something wrong with my config.inc.php? Any help would be much appreciated. I also noticed this error message in my Apache error logs (repeated three times):

/opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open

Below are my config.inc.php settings:

<?php
/* vim: set expandtab sw=4 ts=4 sts=4: */
/**
 * phpMyAdmin sample configuration, you can use it as base for
 * manual configuration. For easier setup you can use setup/
 *
 * All directives are explained in documentation in the doc/ folder
 * or at <http://docs.phpmyadmin.net/>.
 *
 * @package PhpMyAdmin
 */

/*
 * This is needed for cookie based authentication to encrypt password in
 * cookie
 */
$cfg['blowfish_secret'] = 'something_random'; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */

/*
 * Servers configuration
 */
$i = 0;

/*
 * First server
 */
$i++;
/* Authentication type */
$cfg['Servers'][$i]['auth_type'] = 'cookie';
/* Server parameters */
$cfg['Servers'][$i]['host'] = '*.rds.amazonaws.com';
$cfg['Servers'][$i]['connect_type'] = 'tcp';
$cfg['Servers'][$i]['compress'] = true;
/* Select mysql if your server does not have mysqli */
$cfg['Servers'][$i]['extension'] = 'mysqli';
$cfg['Servers'][$i]['AllowNoPassword'] = false;
$cfg['LoginCookieValidity'] = '3600';

/*
 * phpMyAdmin configuration storage settings.
 */
/* User used to manipulate with storage */
$cfg['Servers'][$i]['controlhost'] = '*.rds.amazonaws.com';
$cfg['Servers'][$i]['controluser'] = 'pma';
$cfg['Servers'][$i]['controlpass'] = 'password';
/* Storage database and tables */
$cfg['Servers'][$i]['pmadb'] = 'phpmyadmin';
$cfg['Servers'][$i]['bookmarktable'] = 'pma__bookmark';
$cfg['Servers'][$i]['relation'] = 'pma__relation';
$cfg['Servers'][$i]['table_info'] = 'pma__table_info';
$cfg['Servers'][$i]['table_coords'] = 'pma__table_coords';
$cfg['Servers'][$i]['pdf_pages'] = 'pma__pdf_pages';
$cfg['Servers'][$i]['column_info'] = 'pma__column_info';
$cfg['Servers'][$i]['history'] = 'pma__history';
$cfg['Servers'][$i]['table_uiprefs'] = 'pma__table_uiprefs';
$cfg['Servers'][$i]['tracking'] = 'pma__tracking';
$cfg['Servers'][$i]['designer_coords'] = 'pma__designer_coords';
$cfg['Servers'][$i]['userconfig'] = 'pma__userconfig';
$cfg['Servers'][$i]['recent'] = 'pma__recent';
/* Contrib / Swekey authentication */
// $cfg['Servers'][$i]['auth_swekey_config'] = '/etc/swekey-pma.conf';

/*
 * End of servers configuration
 */

/*
 * Directories for saving/loading files from server
 */
$cfg['UploadDir'] = '';
$cfg['SaveDir'] = '';

/**
 * Defines whether a user should be displayed a "show all (records)"
 * button in browse mode or not. default = false
 */
//$cfg['ShowAll'] = true;

/**
 * Number of rows displayed when browsing a result set. If the result
 * set contains more rows, "Previous" and "Next". default = 30
 */
$cfg['MaxRows'] = 50;

/**
 * disallow editing of binary fields
 * valid values are:
 *   false    allow editing
 *   'blob'   allow editing except for BLOB fields
 *   'noblob' disallow editing except for BLOB fields
 *   'all'    disallow editing
 * default = blob
 */
//$cfg['ProtectBinary'] = 'false';

/**
 * Default language to use, if not browser-defined or user-defined
 * (you find all languages in the locale folder)
 * uncomment the desired line: default = 'en'
 */
//$cfg['DefaultLang'] = 'en';
//$cfg['DefaultLang'] = 'de';

/**
 * default display direction (horizontal|vertical|horizontalflipped)
 */
//$cfg['DefaultDisplay'] = 'vertical';

/**
 * How many columns should be used for table display of a database?
 * (a value larger than 1 results in some information being hidden)
 * default = 1
 */
//$cfg['PropertiesNumColumns'] = 2;

/**
 * Set to true if you want DB-based query history. If false, this utilizes
 * JS-routines to display query history (lost by window close)
 * This requires configuration storage enabled, see above. default = false
 */
//$cfg['QueryHistoryDB'] = true;

/**
 * When using DB-based query history, how many entries should be kept?
 * default = 25
 */
//$cfg['QueryHistoryMax'] = 100;

/*
 * You can find more configuration options in the documentation
 * in the doc/ folder or at <http://docs.phpmyadmin.net/>.
 */
?>
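One observation beyond the config: those iconv.so "symbol lookup error" lines usually mean the PHP iconv extension was built against a different libiconv than the one the Apache process loads, and a PHP crash while phpMyAdmin renders the error page would look exactly like an endless "Loading" box. A hypothetical first test is to disable the extension and see whether the hang disappears - e.g. in the php.ini this PHP build uses:

; hypothetical test: comment out the mismatched iconv extension
;extension=iconv.so

If the hang goes away, rebuilding the extension against the system libiconv (or fixing the library path Apache starts with) would be the longer-term fix.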

    Read the article

  • What needs updating when moving a bootable Windows 7 (or Vista) partition?

    - by SuperTempel
    When I move a bootable NTFS partition with Windows on it to a different block offset, what needs updating to make it bootable again? In particular, here's what I tried: I have a disk with several partitions, one of which is the NTFS partition with Windows on it, and the disk uses the plain old MBR in block 0 for the partition layout (no more than 4 partitions). Now I format and partition a new, larger disk, make room for the NTFS partition there, copy the contents from the old disk's NTFS Windows partition into it, and make the partition "active". However, when I try to boot from this disk, I get a read-error message immediately and booting stops; the exact text is:

A disk read error occurred
Press Ctrl+Alt+Del to restart

I verified that both disks have the same boot sector code in block 0. It seems to me that something else might need updating. I guess that somewhere there's an absolute block reference that I need to update, probably pointing to the next-level loader or to the NT kernel.

Update: I found an article going quite into the depth of what I want to know. However, it says to modify boot.ini, but I have Windows 7 installed here, where such things have changed: there is no boot.ini, but a folder called System Volume Information with GUIDs and other data in it that sounds related to my problem. Going to keep digging...

Update 2: Thanks to the terrible-looking but very informative website by starman, I was able to figure out the first step: the NTFS boot sector has a field for "hidden" sectors, and this field has to contain the sector number of the boot sector itself. This solves the "read error" message. Now, however, I get a "BOOTMGR is missing" error instead. It looks like there's another place where a block number has to be adjusted, but I can't find anything in the code listing about this. I do find a lot of help sites suggesting Windows tools for fixing this "BOOTMGR is missing" problem, but none seem to know what goes on behind the scenes - kind of like suggesting to reinstall Windows when there's a little problem with it. At least those fixes seem to work, mostly involving the bcdedit and bootrec tools. Now, who knows what they do, especially the latter, in regards to a moved partition?

Update 3: After lots of trial-and-error attempts, I believe the solution lies in the BCD-Template registry file, usually residing inside \Windows\System32\config. If I get this updated using the "bcdboot" command, Windows starts up from it. I am now in the middle of figuring out what information this registry contains that is relevant to the question above. Any pointers to the contents of this registry are welcome.

Update 4: It turns out that while the BCD-Template file gets rewritten and has different binary contents than its predecessor, the values inside do not change. So it must be something else that bcdboot.exe writes. I had previously checked whether it changes the first 32 boot blocks of the partition, but they appear to remain unchanged. The partition map doesn't get changed either. So what is it that bcdboot modifies besides the BCD registry? How can I trace that? Are there low-level tools that show me what files a program writes to?

Update 5: The answer seems to be: C:\Boot\BCD is also changed, and that appears to be the key file for the boot manager's process. I'll investigate this later...
Update 6: It seems to be an important detail that I originally had two partitions created when I installed Windows 7: a small partition of 204800 sectors, which appears to be a bootstrap partition, followed by the actual, large partition containing the Windows system (drive C:). When I tried to transfer this installation to a new, larger disk, I kept the same two partitions intact on the new drive, although they ended up at a different offset. This alone led to the "BOOTMGR is missing" message. Since then, I've used bcdboot.exe only on the Windows partition, which added the \Boot\BCD file on that partition; that file (and folder) originally existed only on the smaller partition. Hence, this problem may be more complicated in my case, as one partition (the bootstrap one) refers to another partition (the one containing the OS), whereas other people may only have to deal with one partition containing both - and maybe there the solution is simpler.

Update 7: Found one more detail: the \Boot\BCD file records the MBR's serial number. If that number doesn't match, the system won't boot. Next I'll test whether there's also an absolute block reference stored in there.
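As a follow-up to Update 2, the "hidden sectors" patch can be scripted. Below is a minimal, hypothetical C# sketch that reads a partition's boot sector and rewrites the HiddenSectors field (the DWORD at offset 0x1C of the BPB). The device path and start sector are placeholders, it must run elevated, and the volume should be backed up (and ideally dismounted) first:

using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

class FixHiddenSectors
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern SafeFileHandle CreateFile(string name, uint access, uint share,
        IntPtr security, uint disposition, uint flags, IntPtr template);

    const uint GENERIC_READ = 0x80000000;
    const uint GENERIC_WRITE = 0x40000000;
    const uint FILE_SHARE_READ = 1, FILE_SHARE_WRITE = 2, OPEN_EXISTING = 3;

    static void Main()
    {
        const string device = @"\\.\E:"; // hypothetical: the moved NTFS partition
        const uint startSector = 206848; // hypothetical: its new LBA start per the partition table

        var handle = CreateFile(device, GENERIC_READ | GENERIC_WRITE,
            FILE_SHARE_READ | FILE_SHARE_WRITE, IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero);
        if (handle.IsInvalid) throw new IOException("cannot open " + device);

        using (var fs = new FileStream(handle, FileAccess.ReadWrite))
        {
            var sector = new byte[512];
            fs.Read(sector, 0, sector.Length);                 // the NTFS boot sector
            uint hidden = BitConverter.ToUInt32(sector, 0x1C); // BPB HiddenSectors field
            Console.WriteLine("HiddenSectors = {0}, expected {1}", hidden, startSector);
            if (hidden != startSector)
            {
                BitConverter.GetBytes(startSector).CopyTo(sector, 0x1C);
                fs.Seek(0, SeekOrigin.Begin);
                fs.Write(sector, 0, sector.Length);            // patch in place
            }
        }
    }
}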

    Read the article

  • C# Bind DataTable to Existing DataGridView Column Definitions

    - by Timothy
    I've been struggling with a NullReferenceException and hope someone here will be able to point me in the right direction. I'm trying to create and populate a DataTable and then show the results in a DataGridView control. The basic code follows; execution stops with a NullReferenceException at the point where I invoke the new UpdateResults_Delegate. Oddly enough, I can trace entries.Rows.Count successfully before I return it from QueryEventEntries, so I can at least show that 1) entries is not a null reference, and 2) the DataTable contains rows of data. I know I must be doing something wrong, but I just don't know what.

private void UpdateResults(DataTable entries)
{
    dataGridView.DataSource = entries;
}

private void button_Click(object sender, EventArgs e)
{
    PerformQuery();
}

private void PerformQuery()
{
    DateTime start = new DateTime(dateTimePicker1.Value.Year, dateTimePicker1.Value.Month, dateTimePicker1.Value.Day, 0, 0, 0);
    DateTime stop = new DateTime(dateTimePicker2.Value.Year, dateTimePicker2.Value.Month, dateTimePicker2.Value.Day, 0, 0, 0);
    DataTable entries = QueryEventEntries(start, stop);
    UpdateResults(entries);
}

private DataTable QueryEventEntries(DateTime start, DateTime stop)
{
    DataTable entries = new DataTable();
    entries.Columns.AddRange(new DataColumn[] {
        new DataColumn("event_type", typeof(Int32)),
        new DataColumn("event_time", typeof(DateTime)),
        new DataColumn("event_detail", typeof(String))});
    using (SqlConnection conn = new SqlConnection(DSN))
    {
        using (SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT event_type, event_time, event_detail FROM event_log " +
            "WHERE event_time >= @start AND event_time <= @stop", conn))
        {
            adapter.SelectCommand.Parameters.AddRange(new Object[] {
                new SqlParameter("@start", start),
                new SqlParameter("@stop", stop)});
            adapter.Fill(entries);
        }
    }
    return entries;
}

Update: I'd like to summarize and provide some additional information I've learned from the discussion here and my debugging efforts since I originally posted this question. I am refactoring old code that retrieved records from a database, collected those records in an array, and then iterated through the array to populate the DataGridView row by row. Threading was originally implemented to compensate and keep the UI responsive during the unnecessary looping. I have since stripped out Thread/Invoke; everything now occurs on the same execution thread (thank you, Sam). I am replacing the slow, unwieldy approach with a DataTable which I fill with a DataAdapter and assign to the DataGridView through its DataSource property (code above updated). I've iterated through the entries DataTable's rows to verify the table contains the expected data before assigning it as the DataGridView's DataSource:

foreach (DataRow row in entries.Rows)
{
    System.Diagnostics.Trace.WriteLine(
        String.Format("{0} {1} {2}", row[0], row[1], row[2]));
}

One of the columns of the DataGridView is a custom DataGridViewColumn that stylizes the event_type value. I apologize that I didn't mention this in the original post, but I wasn't aware it was important to the problem. I have temporarily converted this column to a standard DataGridViewTextBoxColumn control and am no longer experiencing the exception. The fields in the DataTable are appended after the columns pre-specified in the Design view of the DataGridView, and the records' values are being populated in these appended fields.
When the runtime attempts to render the cell, a null value is provided (the value that should be rendered appears a couple of columns over). In light of this, I am re-titling and re-tagging the question. I would still appreciate it if others who have experienced this could instruct me on how to bind the DataTable to the existing column definitions of the DataGridView.
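For the final question - binding the DataTable to columns that already exist in the designer - the standard pattern is to disable auto-generation and point each predefined column at a field via DataPropertyName. A short sketch (the column names here are assumptions; use whatever names the designer assigned):

dataGridView.AutoGenerateColumns = false;
dataGridView.Columns["colEventType"].DataPropertyName = "event_type";
dataGridView.Columns["colEventTime"].DataPropertyName = "event_time";
dataGridView.Columns["colEventDetail"].DataPropertyName = "event_detail";
dataGridView.DataSource = entries;

With that in place, the custom column should receive the event_type value instead of being handed a null, and the extra auto-appended columns disappear.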

    Read the article

  • Simple jQuery ajax call leaks memory in IE

    - by Thomas Lane
    I created a web page that makes an ajax call every second. In Internet Explorer 7, it leaks memory badly (20 MB in about 15 minutes). The program is very simple: it just runs a JavaScript function that makes an ajax call. The server returns an empty string, and the JavaScript does nothing with it. I use setTimeout to run the function every second, and I'm using Drip to watch the thing. Here is the source:

<html>
<head>
<script type="text/javascript" src="http://www.google.com/jsapi"></script>
<script type="text/javascript">
    google.load('jquery', '1.4.2');
    google.load('jqueryui', '1.7.2');
</script>
<script type="text/javascript">
    setTimeout('testJunk()', 1000);
    function testJunk() {
        $.ajax({
            url: 'http://xxxxxxxxxxxxxx/test', // The url returns an empty string
            dataType: 'html',
            success: function(data){}
        });
        setTimeout('testJunk()', 1000)
    }
</script>
</head>
<body>
Why is memory usage going up?
</body>
</html>

Anyone have an idea how to plug this leak? I have a real application that updates a large table this way, but left unattended it will eat up gigabytes of memory. Okay, so after some good suggestions, I modified the code to:

<html>
<head>
<script type="text/javascript" src="http://www.google.com/jsapi"></script>
<script type="text/javascript">
    google.load('jquery', '1.4.2');
    google.load('jqueryui', '1.7.2');
</script>
<script type="text/javascript">
    setTimeout(testJunk, 1000);
    function testJunk() {
        $.ajax({
            url: 'http://xxxxxxxxxxxxxx/test', // The url returns an empty string
            dataType: 'html',
            success: function(data){ setTimeout(testJunk, 1000) }
        });
    }
</script>
</head>
<body>
Why is memory usage going up?
</body>
</html>

It didn't seem to make any difference, though. I'm not doing anything with the DOM, and if I comment out the ajax call, the memory leak stops. So it looks like the leak is entirely in the ajax call. Does jQuery ajax inherently create some sort of circular reference, and if so, how can I free it? By the way, it doesn't leak in Firefox. Someone suggested running the test in another VM to see if the results are the same. Rather than setting up another VM, I found a laptop running XP Home with IE8; it exhibits the same problem. I tried some older versions of jQuery and got better results, but the problem didn't go away entirely until I abandoned ajax in jQuery and went with more traditional (and ugly) ajax.
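Since the post ends by falling back to "traditional" ajax, here is a minimal sketch of that pattern with the two classic IE leak-avoidance steps - nulling the handler and the request object so the JScript/COM circular reference can be collected (the URL is a placeholder):

function createXhr() {
    return window.XMLHttpRequest ? new XMLHttpRequest()
                                 : new ActiveXObject('Microsoft.XMLHTTP');
}

function poll() {
    var xhr = createXhr();
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            // response ignored, as in the test page above
            xhr.onreadystatechange = null; // break the circular reference
            xhr = null;                    // let IE reclaim the request object
            setTimeout(poll, 1000);        // schedule the next call
        }
    };
    xhr.open('GET', '/test', true);
    xhr.send(null);
}

poll();

Whether this fully plugs the leak on a given IE7 build is not guaranteed, but it removes both the string-eval setTimeout and the per-request closures.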

    Read the article

  • Ninject 3.0 MVC kernel.bind error Auto Registration

    - by user295734
    Getting an error on kernel.Bind(scanner = ... - "scanner" has the little error squiggle under it in VS 2010:

Cannot convert lambda expression to type 'System.Type[]' because it is not a delegate type

I am trying to auto-register like the old kernel.Scan in 2.0, and I cannot figure out what I am doing wrong. I have added and removed so many Ninject packages that I am completely lost; this is getting to be a big waste of time.

using System;
using System.Web;
using Microsoft.Web.Infrastructure.DynamicModuleHelper;
using Ninject;
using Ninject.Web.Common;
//using Ninject.Extensions.Conventions;
using Ninject.Web.WebApi;
using Ninject.Web.Mvc;
using CommonServiceLocator.NinjectAdapter;
using System.Reflection;
using System.IO;
using LR.Repository;
using LR.Repository.Interfaces;
using LR.Service.Interfaces;
using System.Web.Http;

public static class NinjectWebCommon
{
    private static readonly Bootstrapper bootstrapper = new Bootstrapper();

    /// <summary>
    /// Starts the application.
    /// </summary>
    public static void Start()
    {
        DynamicModuleUtility.RegisterModule(typeof(OnePerRequestHttpModule));
        DynamicModuleUtility.RegisterModule(typeof(NinjectHttpModule));
        bootstrapper.Initialize(CreateKernel);
    }

    /// <summary>
    /// Stops the application.
    /// </summary>
    public static void Stop()
    {
        bootstrapper.ShutDown();
    }

    /// <summary>
    /// Creates the kernel that will manage your application.
    /// </summary>
    /// <returns>The created kernel.</returns>
    private static IKernel CreateKernel()
    {
        var kernel = new StandardKernel();
        kernel.Bind<Func<IKernel>>().ToMethod(ctx => () => new Bootstrapper().Kernel);
        kernel.Bind<IHttpModule>().To<HttpApplicationInitializationHttpModule>();
        RegisterServices(kernel);
        return kernel;
    }

    /// <summary>
    /// Load your modules or register your services here!
    /// </summary>
    /// <param name="kernel">The kernel.</param>
    private static void RegisterServices(IKernel kernel)
    {
        kernel.Bind(scanner => scanner
            .FromAssembliesInPath(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location))
            .Select(IsServiceType)
            .BindToDefaultInterface()
            .Configure(binding => binding.InSingletonScope())
        );
    }

    private static bool IsServiceType(Type type)
    {
        // temp return true; .Any() is not recognized either.
        return true; // type.IsClass && type.GetInterfaces().Any(intface => intface.Name == "I" + type.Name);
    }
}
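For reference, the lambda overload of kernel.Bind comes from Ninject.Extensions.Conventions (3.x); with that using commented out, as above, the compiler can only see Bind(params Type[]) and produces exactly this lambda-conversion error. A sketch of the conventions-based registration, assuming the Ninject.Extensions.Conventions package is installed (note the conventions API spells these SelectAllClasses and BindDefaultInterface):

using Ninject.Extensions.Conventions; // brings the lambda Bind overload into scope

private static void RegisterServices(IKernel kernel)
{
    kernel.Bind(x => x
        .FromAssembliesInPath(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location))
        .SelectAllClasses()
        .BindDefaultInterface()
        .Configure(b => b.InSingletonScope()));
}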

    Read the article

  • JBoss logging issue

    - by balaji
    Our application is deployed on JBoss AS 4.0.x, and we face an issue with JBoss logging. Whenever the server is restarted, JBoss stops logging and there are no further updates to server.log. We then run touch on log4j.xml, and the log files are written again. Please help me fix this issue - we can't do a touch every time, and we see this on both nodes. I could not figure out where the problem is. For any other issue we can check the log files, but if the log itself is not being updated, how can we analyze issues without recent logs? Contents of log4j.xml, copied from the comments below:

<appender name="FILE" class="org.jboss.logging.appender.DailyRollingFileAppender">
  <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/>
  <param name="File" value="${jboss.server.log.dir}/server.log"/>
  <param name="Append" value="false"/>
  <param name="DatePattern" value="'.'yyyy-MM-dd"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
  </layout>
</appender>

<appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
  <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/>
  <param name="Target" value="System.out"/>
  <param name="Threshold" value="INFO"/>
  <layout class="org.apache.log4j.PatternLayout">
    <!-- The default pattern: Date Priority [Category] Message\n -->
    <param name="ConversionPattern" value="%d{ABSOLUTE} %-5p [%c{1}] %m%n"/>
  </layout>
</appender>

<root>
  <appender-ref ref="CONSOLE"/>
  <appender-ref ref="FILE"/>
</root>

<category name="org.apache">
  <priority value="INFO"/>
</category>
<category name="org.apache.axis">
  <priority value="INFO"/>
</category>
<category name="org.jgroups">
  <priority value="WARN"/>
</category>
<category name="jacorb">
  <priority value="WARN"/>
</category>
<category name="org.jboss.management">
  <priority value="INFO"/>
</category>
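One hedged suggestion, not a confirmed fix: touching log4j.xml forces JBoss's Log4jService to reload the configuration and recreate the FILE appender, which is consistent with the appender losing its file handle across a restart. Since Append is false, every reconfiguration also truncates server.log. It may be worth testing append mode, so a restart reopens the existing file instead:

<appender name="FILE" class="org.jboss.logging.appender.DailyRollingFileAppender">
  <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/>
  <param name="File" value="${jboss.server.log.dir}/server.log"/>
  <param name="Append" value="true"/> <!-- hypothetical change: was "false" -->
  <param name="DatePattern" value="'.'yyyy-MM-dd"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
  </layout>
</appender>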

    Read the article

  • Application stopped unexpectedly at launch

    - by Chris Stryker
    I've run this on a device and on the emulator, and the app stops unexpectedly on both; I currently have no clue what is wrong. It uses the Google Maps API, and I compiled against Google APIs level 7. I followed this tutorial http://developer.android.com/guide/tutorials/views/hello-mapview.html (made some alterations, clearly), and I did use the correct API key, which the final APK is signed with. This is the source (if you compile it yourself it shouldn't work as-is, because it is unsigned); this is the compiled, signed APK. Log:

03-21 00:30:38.912: INFO/ActivityManager(54): Starting activity: Intent { act=android.intent.action.MAIN flg=0x10000000 cmp=com.chris.stryker.worldly/.com.poppoob.WorldlyMap }
03-21 00:30:39.173: INFO/ActivityManager(54): Start proc com.chris.stryker.worldly for activity com.chris.stryker.worldly/.com.poppoob.WorldlyMap: pid=287 uid=10031 gids={3003, 1015}
03-21 00:30:39.532: DEBUG/ddm-heap(287): Got feature list request
03-21 00:30:40.185: WARN/dalvikvm(287): Unable to resolve superclass of Lcom/chris/stryker/worldly/com/poppoob/WorldlyMap; (17)
03-21 00:30:40.193: WARN/dalvikvm(287): Link of class 'Lcom/chris/stryker/worldly/com/poppoob/WorldlyMap;' failed
03-21 00:30:40.205: DEBUG/AndroidRuntime(287): Shutting down VM
03-21 00:30:40.223: WARN/dalvikvm(287): threadid=3: thread exiting with uncaught exception (group=0x4001b188)
03-21 00:30:40.223: ERROR/AndroidRuntime(287): Uncaught handler: thread main exiting due to uncaught exception
03-21 00:30:40.252: ERROR/AndroidRuntime(287): java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{com.chris.stryker.worldly/com.chris.stryker.worldly.com.poppoob.WorldlyMap}: java.lang.ClassNotFoundException: com.chris.stryker.worldly.com.poppoob.WorldlyMap in loader dalvik.system.PathClassLoader@45a13938
03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2417)
03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2512)
03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at android.app.ActivityThread.access$2200(ActivityThread.java:119)
03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1863)
03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at android.os.Handler.dispatchMessage(Handler.java:99)
03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at android.os.Looper.loop(Looper.java:123)
03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at android.app.ActivityThread.main(ActivityThread.java:4363)
03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at java.lang.reflect.Method.invokeNative(Native Method)
03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at java.lang.reflect.Method.invoke(Method.java:521)
03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at dalvik.system.NativeStart.main(Native Method)
03-21 00:30:40.252: ERROR/AndroidRuntime(287): Caused by: java.lang.ClassNotFoundException: com.chris.stryker.worldly.com.poppoob.WorldlyMap in loader dalvik.system.PathClassLoader@45a13938
03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at dalvik.system.PathClassLoader.findClass(PathClassLoader.java:243)
03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at java.lang.ClassLoader.loadClass(ClassLoader.java:573)
03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at java.lang.ClassLoader.loadClass(ClassLoader.java:532)
03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at android.app.Instrumentation.newActivity(Instrumentation.java:1021)
03-21 00:30:40.252: ERROR/AndroidRuntime(287):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2409)
03-21 00:30:40.252: ERROR/AndroidRuntime(287):     ... 11 more
03-21 00:30:40.300: INFO/Process(54): Sending signal. PID: 287 SIG: 3
03-21 00:30:40.312: INFO/dalvikvm(287): threadid=7: reacting to signal 3
03-21 00:30:40.396: INFO/dalvikvm(287): Wrote stack trace to '/data/anr/traces.txt'
03-21 00:30:49.002: WARN/ActivityManager(54): Launch timeout has expired, giving up wake lock!
03-21 00:30:49.685: WARN/ActivityManager(54): Activity idle timeout for HistoryRecord{458ab6d0 com.chris.stryker.worldly/.com.poppoob.WorldlyMap}
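Two clues in that log: "Unable to resolve superclass" is the classic symptom of a MapActivity subclass without the Maps add-on library declared, and the doubled package in com.chris.stryker.worldly.com.poppoob.WorldlyMap means the manifest gave the activity a relative name that got the package prepended. A hypothetical manifest excerpt addressing both (names taken from the log; the rest of the manifest is omitted):

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.chris.stryker.worldly">
    <application>
        <!-- required for com.google.android.maps.MapActivity subclasses -->
        <uses-library android:name="com.google.android.maps" />
        <!-- fully qualified, so the loader does not prepend the package again -->
        <activity android:name="com.poppoob.WorldlyMap" />
    </application>
</manifest>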

    Read the article

  • Odd tcp deadlock under windows

    - by John Robertson
    We are moving large amounts of data on a LAN, and it has to happen very rapidly and reliably. Currently we use Windows TCP as implemented in C++. Using large (synchronous) sends moves the data much faster than a bunch of smaller (synchronous) sends, but will frequently deadlock for large gaps of time (.15 seconds), causing the overall transfer rate to plummet. This deadlock happens in very particular circumstances, which makes me believe it should be preventable altogether. More importantly, if we don't really know the cause, we don't really know it won't happen some time with smaller sends anyway. Can anyone explain this deadlock?

Deadlock description (OK, zombie-locked - it isn't dead, but for .15 or so seconds it stops, then starts again):

1. The receiving side sends an ACK.
2. The sending side sends a packet containing the end of a message (push flag is set).
3. The call to socket.recv takes about .15 seconds(!) to return.
4. About the time the call returns, an ACK is sent by the receiving side.
5. The next packet from the sender is finally sent (why is it waiting? the TCP window is plenty big).

The odd thing about (3) is that typically that call doesn't take much time at all and receives exactly the same amount of data. On a 2 GHz machine that's 300 million instructions worth of time. I am assuming the call doesn't (heaven forbid) wait for the received data to be ACKed before it returns, so the ACK must be waiting for the call to return, or both must be delayed by something else.

The problem NEVER happens when there is a second packet of data (part of the same message) arriving between 1 and 2. That very much makes it sound like it has to do with the fact that Windows TCP will not send back a no-data ACK until either a second packet arrives or a 200 ms timer expires. However, the delay is less than 200 ms (it's more like 150 ms).

The third unseemly character (and to my mind the real culprit) is (5). Send is definitely being called well before that .15 seconds is up, but the data NEVER hits the wire before that ACK returns. That is the most bizarre part of this deadlock to me. It's not a TCP blockage, because the TCP window is plenty big, since we set SO_RCVBUF to something like 500*1460 (which is still under a meg). The data is coming in very fast (basically there is a loop spinning out data via send), so the buffer should fill almost immediately. According to MSDN, the buffer being full and at least one pending send should cause the data to be sent (though in another place it mentions that various "heuristics" are used in deciding when a send hits the wire). Anyway, why the sender doesn't actually send more data during that .15 second pause is the most bizarre part to me.

The information above was captured on the receiving side via Wireshark (except, of course, the socket.recv return times, which were logged in a text file). We tried changing the send buffer to zero and turning off Nagle on the sender (yes, I know Nagle is about not sending small packets - but we tried turning Nagle off in case that was part of the unstated "heuristics" affecting whether the message would be posted to the wire; technically Microsoft's Nagle is that a small packet isn't sent if the buffer is full and there is an outstanding ACK, so it seemed like a possibility).
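For anyone replaying these experiments: disabling Nagle is a per-socket option on the sender, while the receiver's 200 ms delayed-ACK timer is controlled by the TcpAckFrequency registry value on older Windows versions rather than by a socket option. A minimal sketch of the sender-side option, assuming Winsock is already initialized and s is a connected TCP socket (link against ws2_32.lib):

#include <winsock2.h>

// Disable Nagle so tail segments are not held back waiting for an ACK.
bool disableNagle(SOCKET s)
{
    BOOL flag = TRUE;
    return setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
                      reinterpret_cast<const char*>(&flag), sizeof(flag)) == 0;
}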

    Read the article

  • NSData to NSString to NSData

    - by Jayshree
    Hello everybody. I am calling a web service that returns data in XML format. The problem is that the XML data is not received in proper form: in place of "<" and ">" it returns HTML entities like &lt; and &gt;. So I assigned the XML data to an NSMutableString and replaced the escaped characters so that the format of the XML data is proper. Then I reassigned the NSMutableString to NSData so that I can parse the tags. But the problem is that the XML parser stops right where the replaced XML data is; it doesn't go further. Can anybody explain what is going on? This is the XML response from the web service that I called:

<?xml version="1.0" encoding="ISO-8859-1"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/">
  <SOAP-ENV:Body>
    <ns1:GetCustomerInfoResponse xmlns:ns1="http://aa.somewebservice.com/phpwebservice">
      <return xsi:type="xsd:string">
        &lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt;&lt;customer&gt;&lt;id&gt;1&lt;/id&gt;&lt;customername&gt;Hitesh&lt;/customername&gt;&lt;phonenumber&gt;98989898&lt;/phonenumber&gt;&lt;/customer&gt;
      </return>
    </ns1:GetCustomerInfoResponse>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

And this is after I replaced the tags:

<?xml version="1.0" encoding="ISO-8859-1"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/">
  <SOAP-ENV:Body>
    <ns1:GetCustomerInfoResponse xmlns:ns1="http://a.somewebservice.com/phpwebservice">
      <return xsi:type="xsd:string">
        <?xml version="1.0" encoding="utf-8"?>
        <customer><id>1</id><customername>Hitesh</customername><phonenumber>98989898</phonenumber></customer>
      </return>
    </ns1:GetCustomerInfoResponse>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

Now the problem is that while parsing, the parser only goes up to the return tag and is stuck there; it doesn't go any further. So what do I do? Anybody care to help me?
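One hedged way out: the <return> element carries a complete second XML document as text (its own <?xml?> declaration included), so a single NSXMLParser pass cannot descend into it, escaped or not. Instead of un-escaping in place, collect the text content of <return> in the delegate and run a second parser over just that string. A sketch, assuming the delegate exposes an insideReturn flag, a returnBuffer mutable string, and a second delegate for the inner document:

// NSXMLParserDelegate methods on a handler object (a sketch)
- (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName
  namespaceURI:(NSString *)nsURI qualifiedName:(NSString *)qName
    attributes:(NSDictionary *)attrs
{
    if ([elementName isEqualToString:@"return"]) {
        self.insideReturn = YES;
        [self.returnBuffer setString:@""];
    }
}

- (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string
{
    if (self.insideReturn) {
        [self.returnBuffer appendString:string]; // accumulates the inner XML text
    }
}

- (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName
  namespaceURI:(NSString *)nsURI qualifiedName:(NSString *)qName
{
    if ([elementName isEqualToString:@"return"]) {
        self.insideReturn = NO;
        // second pass: parse the embedded <customer> document on its own
        NSData *inner = [self.returnBuffer dataUsingEncoding:NSUTF8StringEncoding];
        NSXMLParser *innerParser = [[NSXMLParser alloc] initWithData:inner];
        [innerParser setDelegate:self.customerHandler]; // hypothetical inner delegate
        [innerParser parse];
        [innerParser release];
    }
}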

    Read the article

  • TinyMCE include crashes IE8

    - by dkris
    I am trying to open a popup onclick using a function call, as shown below:

<a id="forgotPasswordLink" href="#"
   onclick="openSupportPage(document.getElementById('forgotPasswordLink').innerHTML);">
  Some Text
</a>

I am creating the HTML for the popup page on the fly and also including the TinyMCE source file there. The code is as shown below:

<script type="text/javascript">
<!--
function openSupportPage(unsafeSupportText) {
    var features = "width=700,height=400,status=yes,toolbar=no,menubar=no,location=no,scrollbars=yes";
    var winId = window.open('', 'Test', features);
    winId.focus();
    winId.document.open('text/html', 'replace');
    winId.document.write('<html><head><title>' + document.title + '</title><link rel="stylesheet" href="./css/default.css" type="text/css">\n');
    winId.document.write('<script src="./js/tiny_mce/tiny_mce.js" type="text/javascript" language="javascript">Script_1<\/script>\n');
    winId.document.write('<script src="./js/support_page.js" type="text/javascript" language="javascript">Script_2<\/script>\n');
    winId.document.write('</head><body onload="inittextarea()">\n'); /* function call which will use the TinyMCE source file */
    winId.document.write(' \n');
    winId.document.write('<p>&#160;</p>');
    var hiddenFrameHTML = document.getElementById("HiddenFrame").innerHTML;
    hiddenFrameHTML = hiddenFrameHTML.replace(/&amp;/gi, "&");
    hiddenFrameHTML = hiddenFrameHTML.replace(/&lt;/gi, "<");
    hiddenFrameHTML = hiddenFrameHTML.replace(/&gt;/gi, ">");
    winId.document.write(hiddenFrameHTML);
    winId.document.write('<textarea id="content" rows="10" style="width:100%">\n');
    winId.document.write(document.getElementById(top.document.forms[0].id + ":supportStuff").innerHTML);
    winId.document.write('</textArea>\n');
    var hiddenFrameHTML2 = document.getElementById("HiddenFrame2").innerHTML;
    hiddenFrameHTML2 = hiddenFrameHTML2.replace(/&amp;/gi, "&").replace(/&lt;/gi, "<").replace(/&gt;/gi, ">");
    winId.document.write(hiddenFrameHTML2);
    winId.document.write('</body></html>\n');
    winId.document.close();
}
// -->
</script>

The support_page.js file contains the following:

function inittextarea() {
    tinyMCE.init({
        elements : "content",
        mode : "exact",
        theme : "advanced",
        readonly : true,
        setup : function(ed) {
            ed.onInit.add(function() {
                tinyMCE.activeEditor.execCommand("mceToggleVisualAid");
            });
        }
    });
}

The problem arises when the onclick event fires: as the popup opens, IE8 stops responding and seems to hang. It works fine in Chrome, Firefox and Safari. I believe the issue is the TinyMCE script inclusion, because when I comment out the lines that include tiny_mce.js and support_page.js, the popup renders with no issues. I am also using the latest TinyMCE release. Please let me know why this is happening and what the resolution could be.
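One thing worth trying (an assumption, not a verified fix): tiny_mce.js itself issues document.write calls while it loads its theme and plugins, and doing that inside a popup that was itself composed via document.write is a known way to wedge IE. Injecting the scripts through the DOM after winId.document.close() avoids the nested document.write:

// hedged workaround sketch: add the scripts via DOM nodes after close()
var head = winId.document.getElementsByTagName('head')[0];

var script = winId.document.createElement('script');
script.type = 'text/javascript';
script.src = './js/tiny_mce/tiny_mce.js';
head.appendChild(script);

var script2 = winId.document.createElement('script');
script2.type = 'text/javascript';
script2.src = './js/support_page.js';
head.appendChild(script2);

Since body onload may already have fired by then, inittextarea() would also need to be invoked explicitly once both scripts finish loading (e.g. from script2's onload/onreadystatechange handler).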

    Read the article

  • Flushing a queue using AMQP, Rabbit, and Ruby

    - by jeffshantz
    I'm developing a system in Ruby that is going to make use of RabbitMQ to send messages to a queue as it does some work. I am using:

Ruby 1.9.1 stable
RabbitMQ 1.7.2
AMQP gem v0.6.7 (http://github.com/tmm1/amqp)

Most of the examples I have seen of this gem have their publish calls in an EM.add_periodic_timer block. This doesn't work for what I suspect is a vast majority of use cases, and certainly not for mine: I need to publish a message as I complete some work, so putting a publish statement in an add_periodic_timer block doesn't suffice. So I'm trying to figure out how to publish a few messages to a queue and then "flush" it, so that any messages I've published are delivered to my subscribers. To give you an idea of what I mean, consider the following publisher code:

#!/usr/bin/ruby
require 'rubygems'
require 'mq'

MESSAGES = ["hello","goodbye","test"]

AMQP.start do
  queue = MQ.queue('testq')
  messages_published = 0
  while (messages_published < 50)
    if (rand() < 0.4)
      message = MESSAGES[rand(MESSAGES.size)]
      puts "#{Time.now.to_s}: Publishing: #{message}"
      queue.publish(message)
      messages_published += 1
    end
    sleep(0.1)
  end
  AMQP.stop do
    EM.stop
  end
end

This code simply loops, publishing a message with 40% probability on each iteration and then sleeping for 0.1 seconds, until 50 messages have been published; then it stops AMQP. Of course, this is just a proof of concept. Now, my subscriber code:

#!/usr/bin/ruby
require 'rubygems'
require 'mq'

AMQP.start do
  queue = MQ.queue('testq')
  queue.subscribe do |header, msg|
    puts "#{Time.now.to_s}: Received #{msg}"
  end
end

So we just subscribe to the queue and, for each message received, print it out. Great - except that the subscriber only receives all 50 messages when the publisher calls AMQP.stop. Here is the output from my publisher, truncated in the middle for brevity:

$ ruby publisher.rb
2010-04-14 21:45:42 -0400: Publishing: test
2010-04-14 21:45:42 -0400: Publishing: hello
2010-04-14 21:45:42 -0400: Publishing: test
2010-04-14 21:45:43 -0400: Publishing: test
2010-04-14 21:45:44 -0400: Publishing: test
2010-04-14 21:45:44 -0400: Publishing: goodbye
2010-04-14 21:45:45 -0400: Publishing: goodbye
2010-04-14 21:45:45 -0400: Publishing: test
2010-04-14 21:45:45 -0400: Publishing: test
...
2010-04-14 21:45:55 -0400: Publishing: test
2010-04-14 21:45:55 -0400: Publishing: test
2010-04-14 21:45:55 -0400: Publishing: test
2010-04-14 21:45:55 -0400: Publishing: goodbye

Next, the output from my subscriber:

$ ruby consumer.rb
2010-04-14 21:45:56 -0400: Received test
2010-04-14 21:45:56 -0400: Received hello
2010-04-14 21:45:56 -0400: Received test
2010-04-14 21:45:56 -0400: Received test
2010-04-14 21:45:56 -0400: Received test
2010-04-14 21:45:56 -0400: Received goodbye
2010-04-14 21:45:56 -0400: Received goodbye
2010-04-14 21:45:56 -0400: Received test
2010-04-14 21:45:56 -0400: Received test
...
2010-04-14 21:45:56 -0400: Received test
2010-04-14 21:45:56 -0400: Received test
2010-04-14 21:45:56 -0400: Received test
2010-04-14 21:45:56 -0400: Received goodbye

If you note the timestamps in the output, the subscriber only receives all of the messages once the publisher has stopped AMQP and exited. So, being an AMQP newb, how can I get my messages to deliver immediately? I tried putting AMQP.start and AMQP.stop in the body of the while loop of the publisher, but then only the first message gets delivered - though strangely, if I turn on logging, there are no error messages reported by the server, and the messages do get sent to the queue but never get received by the subscriber. Suggestions would be much appreciated. Thanks for reading.
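A hedged explanation that fits those timestamps: the AMQP gem does all its socket I/O on the EventMachine reactor, and sleep(0.1) inside AMQP.start blocks that reactor, so every publish is buffered in memory and only flushed when the loop ends and AMQP.stop lets the reactor run. A sketch of the same proof of concept restructured around a periodic timer, so the reactor stays live between publishes:

#!/usr/bin/ruby
require 'rubygems'
require 'mq'

MESSAGES = ["hello", "goodbye", "test"]

AMQP.start do
  queue = MQ.queue('testq')
  published = 0

  # no sleep: the reactor runs between ticks and can flush each publish
  timer = EM.add_periodic_timer(0.1) do
    if rand() < 0.4
      message = MESSAGES[rand(MESSAGES.size)]
      puts "#{Time.now.to_s}: Publishing: #{message}"
      queue.publish(message)
      published += 1
    end
    if published >= 50
      EM.cancel_timer(timer)
      AMQP.stop { EM.stop }
    end
  end
end

In a real worker, the equivalent move is to run the work itself off the reactor thread (e.g. EM.defer with a publish in the callback) rather than blocking the thread that AMQP.start owns.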

    Read the article

  • My VARCHAR(MAX) field is capping itself at 4000; what gives?

    - by eidylon
    Hello all... I have a table in one of my databases which is a queue of emails. Emails to certain addresses get accumulated into one email, which is done by a sproc. In the sproc I have a table variable which I use to build the accumulated bodies of the emails, and then I loop through it to send each email. In my table variable I have the body column defined as VARCHAR(MAX), seeing as there could be any number of emails accumulated for a given email address. It seems, though, that even though my column is defined as VARCHAR(MAX), it is behaving as if it were VARCHAR(4000) and truncating the data going into it - and it does NOT throw any exceptions, it just silently stops concatenating any more data after 4000 characters. The MERGE statement is where it builds the accumulated email body into @EMAILS.BODY, which is the field that truncates itself at 4000 characters. Below is the code of my sproc:

ALTER PROCEDURE [system].[SendAccumulatedEmails]
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @SENTS BIGINT = 0;

    DECLARE @ROWS TABLE (
        ROWID ROWID,
        DATED DATETIME,
        ADDRESS NAME,
        SUBJECT VARCHAR(1000),
        BODY VARCHAR(MAX)
    )

    INSERT INTO @ROWS
    SELECT ROWID, DATED, ADDRESS, SUBJECT, BODY
    FROM system.EMAILQUEUE
    WHERE ACCUMULATE = 1 AND SENT IS NULL
    ORDER BY ADDRESS, DATED

    DECLARE @EMAILS TABLE (
        ADDRESS NAME,
        ALLIDS VARCHAR(1000),
        BODY VARCHAR(MAX)
    )

    DECLARE @PRVRID ROWID = NULL, @CURRID ROWID = NULL

    SELECT @CURRID = MIN(ROWID) FROM @ROWS
    WHILE @CURRID IS NOT NULL
    BEGIN
        MERGE @EMAILS AS DST
        USING (SELECT * FROM @ROWS WHERE ROWID = @CURRID) AS SRC
           ON SRC.ADDRESS = DST.ADDRESS
        WHEN MATCHED THEN UPDATE SET
            DST.ALLIDS = DST.ALLIDS + ', ' + CONVERT(VARCHAR, ROWID),
            DST.BODY = DST.BODY + '<i>' + CONVERT(VARCHAR, SRC.DATED, 101) + ' '
                + CONVERT(VARCHAR, SRC.DATED, 8)
                + ':</i> <b>' + SRC.SUBJECT + '</b>' + CHAR(13) + SRC.BODY
                + ' (Message ID ' + CONVERT(VARCHAR, SRC.ROWID) + ')'
                + CHAR(13) + CHAR(13)
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (ADDRESS, ALLIDS, BODY)
            VALUES (
                SRC.ADDRESS,
                CONVERT(VARCHAR, ROWID),
                '<i>' + CONVERT(VARCHAR, SRC.DATED, 101) + ' '
                    + CONVERT(VARCHAR, SRC.DATED, 8) + ':</i> <b>'
                    + SRC.SUBJECT + '</b>' + CHAR(13) + SRC.BODY
                    + ' (Message ID ' + CONVERT(VARCHAR, SRC.ROWID) + ')'
                    + CHAR(13) + CHAR(13));

        SELECT @PRVRID = @CURRID, @CURRID = NULL
        SELECT @CURRID = MIN(ROWID) FROM @ROWS WHERE ROWID > @PRVRID
    END

    DECLARE @MAILFROM VARCHAR(100) = system.getOption('MAILFROM')
    DECLARE @SMTPHST VARCHAR(100) = system.getOption('SMTPSERVER')
    DECLARE @SMTPUSR VARCHAR(100) = system.getOption('SMTPUSER')
    DECLARE @SMTPPWD VARCHAR(100) = system.getOption('SMTPPASS')

    DECLARE @ADDRESS NAME, @BODY VARCHAR(MAX), @ADDL VARCHAR(MAX)
    DECLARE @SUBJECT VARCHAR(1000) = 'Accumulated Emails from LIJSL'
    DECLARE @PRVID NAME = NULL, @CURID NAME = NULL

    SELECT @CURID = MIN(ADDRESS) FROM @EMAILS
    WHILE @CURID IS NOT NULL
    BEGIN
        SELECT @ADDRESS = ADDRESS, @BODY = BODY FROM @EMAILS WHERE ADDRESS = @CURID

        SELECT @BODY = @BODY + 'This is an automated message sent from an unmonitored mailbox.'
            + CHAR(13) + 'Do not reply to this message; your message will not be read.'

        SELECT @BODY = '<style type="text/css">
            * {font-family: Tahoma, Arial, Verdana;}
            p {margin-top: 10px; padding-top: 10px; border-top: single 1px dimgray;}
            p:first-child {margin-top: 10px; padding-top: 0px; border-top: none 0px transparent;}
            </style>' + @BODY

        exec system.LogIt @SUBJECT, @BODY

        BEGIN TRY
            exec system.SendMail @SMTPHST, @SMTPUSR, @SMTPPWD, @MAILFROM, @ADDRESS, NULL, NULL, @SUBJECT, @BODY, 1
        END TRY
        BEGIN CATCH
            DECLARE @EMSG NVARCHAR(2048) = 'system.EMAILQUEUE.AI:' + ERROR_MESSAGE()
            SELECT @ADDL = 'TO:' + @ADDRESS + CHAR(13) + 'SUBJECT:' + @SUBJECT + CHAR(13) + 'BODY:' + @BODY
            exec system.LogIt @EMSG, @ADDL
        END CATCH

        SELECT @PRVID = @CURID, @CURID = NULL
        SELECT @CURID = MIN(ADDRESS) FROM @EMAILS WHERE ADDRESS > @PRVID
    END

    UPDATE system.EMAILQUEUE
    SET SENT = getdate()
    FROM system.EMAILQUEUE E, @ROWS R
    WHERE E.ROWID = R.ROWID
END
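For reference, in SQL Server the type of a string expression is determined by its operands, not by the column it lands in: concatenations built only from ordinary VARCHAR pieces cap at 8000 bytes, NVARCHAR at 4000 characters (a cap of exactly 4000, as above, usually means an NVARCHAR operand is in the mix), unless at least one operand is already (MAX). A small demonstration, plus the usual fix of anchoring the expression with a CAST:

DECLARE @short VARCHAR(100) = REPLICATE('x', 100);

-- capped: REPLICATE truncates at 8000 when its input is not (MAX)
DECLARE @capped VARCHAR(MAX) = REPLICATE(@short, 200);

-- grows freely: one operand is VARCHAR(MAX), so the whole expression is (MAX)
DECLARE @full VARCHAR(MAX) = REPLICATE(CAST(@short AS VARCHAR(MAX)), 200);

SELECT LEN(@capped) AS capped_len, LEN(@full) AS full_len; -- 8000 vs. 20000

Applied to the sproc above, casting the first piece of each concatenation - e.g. DST.BODY = CAST(DST.BODY AS VARCHAR(MAX)) + '<i>' + ... in the MATCHED branch, and likewise in the INSERT branch - should let BODY grow past the cap. Also note that a bare CONVERT(VARCHAR, x) defaults to VARCHAR(30).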

    Read the article

  • Any tool to make git build every commit to a branch in a separate repository?

    - by Wayne
    A git tool that meets the specs below is needed. Does one already exist? If not, I will create a script and make it available on GitHub for others to use or contribute to. Or is there a completely different and better way to solve the need to build/test every commit to a branch in a git repository - not just the latest, but each one back to a certain starting point?

Background: Our development environment uses a separate continuous integration server, which is wonderful. However, it is still necessary to do full builds locally on each developer's PC to make sure a commit won't "break the build" when pushed to the CI server. Unfortunately, with automatic unit tests, those builds force the developer to wait 10 or 15 minutes for a build every time. To solve this we have set up a "mirror" git repository on each developer PC: we develop in the main repository, and any time a local full build is needed we run a couple of commands in the mirror repository to fetch, check out the commit we want to build, and build. It works extremely lovely, so we can continue working in the main one with the build going in parallel. There's only one main concern now: we want to make sure every single commit builds and tests fine, but we often get busy and neglect to build several fresh commits. Then, if the build fails, you have to bisect or manually build each interim commit to figure out which one broke.

Requirements for this tool (a sketch of the core loop follows the list):

- The tool will look at another repo (origin by default), fetch, and compare all commits that are in branches against two lists of commits: one list holds successfully built commits, and the other holds commits that failed.
- It identifies any commits not yet in either list and begins to build them in a loop, in the order they were committed, stopping on the first one that fails.
- The tool adds each commit to the successful or failed list, as appropriate, after attempting to build it.
- The tool ignores any "legacy" commits prior to the oldest commit in the success list. This logic makes the starting point (next item) possible.
- Starting point: the tool can build a specific commit so that, if successful, it gets added to the success list. If it is the earliest commit in the success list, it becomes the "starting point", and none of the commits prior to it are examined for builds.
- Only linear tree support? Much like bisect, this tool works best on a commit tree which is, at least from its starting point, linear without any merges - that is, a tree built and updated entirely via rebase and fast-forward commits. If it fails on one commit in a branch, it stops without building the ones that follow; instead it just moves on to another branch, if any.
- The tool does these steps once by default, but a parameter makes it loop, with an option for how many seconds to wait between iterations. Other tools like Hudson or CruiseControl could layer fancier scheduling on top.
- The tool has good defaults but allows optional control. Which repo? origin by default. Which branches? all of them by default. What build tool? by default an executable provided by the user, named "buildtest", "buildtest.sh", "buildtest.cmd", or "buildtest.exe", in the root folder of the repository. Loop delay? run once by default, with an option to loop after a number of seconds between iterations.
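I don't know of an existing tool that does exactly this, but the bookkeeping is small enough to sketch. A minimal, hypothetical Python version of the core loop (the list-file names are made up, and the legacy-commit cutoff and per-branch handling are left out):

#!/usr/bin/env python
import subprocess

def git(*args):
    """Run a git command and return its stdout as text."""
    return subprocess.check_output(("git",) + args).decode().strip()

def load(path):
    try:
        with open(path) as f:
            return set(f.read().split())
    except IOError:
        return set()

def save(path, commits):
    with open(path, "w") as f:
        f.write("\n".join(sorted(commits)))

succeeded = load(".buildtest-ok")
failed = load(".buildtest-bad")

git("fetch", "origin")
# oldest-first list of commits reachable from origin's branches
for sha in git("rev-list", "--reverse", "--remotes=origin").split():
    if sha in succeeded or sha in failed:
        continue
    git("checkout", "--quiet", sha)
    ok = subprocess.call(["./buildtest"]) == 0  # the user-supplied build script
    (succeeded if ok else failed).add(sha)
    if not ok:
        break  # stop at the first failure, as specified

save(".buildtest-ok", succeeded)
save(".buildtest-bad", failed)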

    Read the article

  • NSMutableString leaks on append or replaceOccurrencesOfString

    - by John
    Hello folks, I know similar questions have been asked time and time again, but I ask that you please bear with me, as I cannot seem to find an answer that helps. My application has leaks that are driving me out of my mind. Actually, they are not reported as leaks by Leaks, but my net bytes in ObjectAlloc go up and up and never stop, eventually leading to a crash if it goes on long enough (not very long). The problem occurs with NSMutableStrings. I think there is either something fundamental I don't understand about them, or I am facing another problem that I am having difficulty tracking down but which keeps hiding behind the NSMutableStrings. Specifically, I notice that whenever I append to or perform a replace on an NSMutableString, ObjectAlloc reports what appear to be mismatched mallocs/frees behind the scenes when resizing the NSMutableString. I'm sorry to say this is the second time I'm facing this problem - the first time I messed around for hours and hours and finally the problem went away (magic!), but I don't really know why.

When I look at the code below (and believe me, I've stared at it for hours) I cannot see the problem. I look at the code and think I should be fine, because I'm releasing the only object for which I am responsible (aString), and NSMutableString should take care of cleaning up after any resizing it does. In the second example, in case it helps, the string being passed in comes from an ASIHTTPRequest object (it's the responseString) and I don't do anything at all with it. It's called simply like so ([self DoStuff2:[request responseString]]), and I don't free the request myself either - I'm using an ASINetworkQueue, and I assume the requests are destroyed for me (when I tried, it caused errors because the request was already being released somewhere else).

Also, I know it shouldn't do anything, but I even tried wrapping the code in autorelease pools, which of course did nothing. I should mention that this code is being run inside of an NSOperation. I thought that perhaps I was having problems because NSOperations should create an autorelease pool for themselves, but I've tried that to no avail. Not related to NSMutableString, but I find I also have similar problems using the NSString componentsSeparatedByString method: sometimes the memory used by the array that gets the separated components is never released. Hmmm... strings in general seem to be somewhat problematic for me.

I would appreciate ANY help anyone can provide. If you require more info, I'll be glad to add it. I promise you that I've struggled with this (and other problems) for weeks, and every problem I encounter I research long and hard until I find a solution - this is not an idle request but a true cry for help! I've written so much code, and now that I'm trying to seal some small leaks I notice this problem. Honestly, I cannot believe how memory management in Objective-C can stump me so at times... I've read Apple's memory management docs many times and thought I thoroughly understood them, and I try to be diligent about releasing objects I own, but sometimes I find myself wondering if I truly understand. I would like to put this to bed once and for all and make sure I understand this fully - to have this sort of question/problem after writing thousands of lines of code is more than a little scary/embarrassing/annoying.
- (void)DoStuff {
    NSString *aString = [[NSString alloc] initWithFormat:@"text %@ more text", self.strVariable];
    [self.someMutableStringVar replaceOccurrencesOfString:@"replace"
                                               withString:aString
                                                  options:NSCaseInsensitiveSearch
                                                    range:NSMakeRange(0, [self.someMutableStringVar length])];
    [aString release];
}

- (void)DoStuff2:(NSString *)aString {
    [self.someMutableStringVar appendString:aString];
}
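One pattern that fits these symptoms (a suggestion, not a diagnosis): inside an NSOperation, autoreleased temporaries - including strings the frameworks hand back, like responseString - accumulate until the surrounding pool drains, so ObjectAlloc shows net bytes climbing even though Leaks stays clean. Creating and draining a pool around each unit of work caps that growth. A sketch, where workItems stands in for however many iterations the operation performs:

- (void)main {
    for (NSUInteger i = 0; i < workItems; i++) {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        [self DoStuff]; // the append/replace work from above
        [pool drain];   // autoreleased temporaries die here, per iteration
    }
}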

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95  | Next Page >