Search Results

Search found 9715 results on 389 pages for 'servers'.


  • Best way to run remote VBScript in ASP.NET? WMI or PsExec?

    - by envinyater
    I am doing some research to find the best and most efficient method for this. I will need to execute remote scripts on a number of Windows servers/computers (while we are building them). I have a web application that is going to automate this task, and I currently have a working prototype that uses PsExec to execute remote scripts. This requires PsExec to be installed on the system. A colleague suggested I should use WMI instead. I did some research on WMI but couldn't find what I'm looking for. I want to either upload the script to the server, execute it, and read the results, or have the script already on the server, execute it, and read the results. I would prefer the first option, though! Which is the better fit, PsExec or WMI?

    For reference, this is my prototype PsExec code. It executes a small script that gathers the Windows OS and Service Pack info:

        Protected Sub windowsScript(ByVal COMPUTERNAME As String)
            ' Create an array to store VBScript results
            Dim winVariables(2) As String
            nameLabel.Text = Name.Text
            ' Use PsExec to execute remote scripts
            Dim Proc As New System.Diagnostics.Process
            ' Run PsExec locally
            Proc.StartInfo = New ProcessStartInfo("C:\Windows\psexec.exe")
            ' Pass in following arguments to PsExec
            Proc.StartInfo.Arguments = COMPUTERNAME & " -s cmd /C c:\systemInfo.vbs"
            Proc.StartInfo.RedirectStandardInput = True
            Proc.StartInfo.RedirectStandardOutput = True
            Proc.StartInfo.UseShellExecute = False
            Proc.Start()
            ' Pause for script to run
            System.Threading.Thread.Sleep(1500)
            Proc.Close()
            System.Threading.Thread.Sleep(2500) ' Allows the system a chance to finish with the process.
            Dim filePath As String = COMPUTERNAME & "\TTO\somefile.txt"
            ' Download file created by script on remote system to local system
            My.Computer.Network.DownloadFile(filePath, "C:\somefile.txt")
            System.Threading.Thread.Sleep(1000) ' Pause so file gets downloaded
            ' Import data from text file into variables
            textRead("C:\somefile.txt", winVariables)
            WindowsOSLbl.Text = winVariables(0).ToString()
            SvcPckLbl.Text = winVariables(1).ToString()
            System.Threading.Thread.Sleep(1000)
            ' Delete the file on server - we don't need it anymore
            Dim Proc2 As New System.Diagnostics.Process
            Proc2.StartInfo = New ProcessStartInfo("C:\Windows\psexec.exe")
            Proc2.StartInfo.Arguments = COMPUTERNAME & " -s cmd /C del c:\somefile.txt"
            Proc2.StartInfo.RedirectStandardInput = True
            Proc2.StartInfo.RedirectStandardOutput = True
            Proc2.StartInfo.UseShellExecute = False
            Proc2.Start()
            System.Threading.Thread.Sleep(500)
            Proc2.Close()
            ' Delete file locally
            File.Delete("C:\somefile.txt")
        End Sub
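    For comparison, here is a minimal sketch of the WMI route (in C#, though the same calls exist in VB.NET). TARGETMACHINE and the script path are placeholders; note that Win32_Process.Create starts the process remotely without PsExec on either machine, but it cannot capture stdout, so the write-results-to-file-and-download step above would still be needed:

        using System;
        using System.Management; // requires a reference to System.Management.dll

        class RemoteExec
        {
            static void Main()
            {
                // Connect to the WMI namespace on the remote machine
                ConnectionOptions options = new ConnectionOptions();
                options.Impersonation = ImpersonationLevel.Impersonate;
                options.EnablePrivileges = true;

                ManagementScope scope = new ManagementScope(@"\\TARGETMACHINE\root\cimv2", options);
                scope.Connect();

                // Invoke Win32_Process.Create to launch the script remotely
                ManagementClass processClass = new ManagementClass(scope,
                    new ManagementPath("Win32_Process"), new ObjectGetOptions());
                ManagementBaseObject inParams = processClass.GetMethodParameters("Create");
                inParams["CommandLine"] = @"cscript.exe //B c:\systemInfo.vbs";

                ManagementBaseObject outParams = processClass.InvokeMethod("Create", inParams, null);
                Console.WriteLine("ReturnValue: {0}, ProcessId: {1}",
                    outParams["ReturnValue"], outParams["ProcessId"]);
            }
        }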


  • Curl Wrapper Class does not return any data even though it worked previously?

    - by Scott Faisal
    We changed servers and installed all the necessary software, but we just cannot seem to pinpoint what is going on. A simple cURL request does not return anything, yet command-line cURL commands work just fine. We are using a wrapper for cURL utilizing streams. Do PHP streams require any out-of-the-ordinary configuration? We are using the latest LAMP stack. This is the var_dump:

        object(cURL_Response)#180 (14) {
          ["cURL:private"]= resource(288) of type (curl)
          ["data_stream:private"]= object(elTempStream)#178 (1) {
            ["fp"]= resource(290) of type (stream)
          }
          ["request_header:private"]= NULL
          ["response_header:private"]= object(cURL_Headers)#179 (1) {
            ["headers:private"]= string(0) ""
          }
          ["response_headers:private"]= array(1) {
            [0]= object(cURL_Headers)#179 (1) {
              ["headers:private"]= string(0) ""
            }
          }
          ["error:private"]= string(0) ""
          ["errno:private"]= int(0)
          ["info:private"]= array(21) {
            ["url"]= string(21) "http://www.yahoo.com/"
            ["content_type"]= string(23) "text/html;charset=utf-8"
            ["http_code"]= int(200)
            ["header_size"]= int(1195)
            ["request_size"]= int(1153)
            ["filetime"]= int(-1)
            ["ssl_verify_result"]= int(0)
            ["redirect_count"]= int(1)
            ["total_time"]= float(0.486924)
            ["namelookup_time"]= float(0.003692)
            ["connect_time"]= float(0.005709)
            ["pretransfer_time"]= float(0.005714)
            ["size_upload"]= float(0)
            ["size_download"]= float(28509)
            ["speed_download"]= float(58549)
            ["speed_upload"]= float(0)
            ["download_content_length"]= float(211)
            ["upload_content_length"]= float(0)
            ["starttransfer_time"]= float(0.149365)
            ["redirect_time"]= float(0.312743)
            ["request_header"]= string(973) "GET / HTTP/1.0
        User-Agent: cURL_ClientBase (PHP v/5.2.6-1+lenny4)
        Host: www.yahoo.com
        Accept: /
        Accept-Encoding: gzip, deflate, compress
        Referer: http://yahoo.com
        Cookie: B=e5iber15t7u05&b=3&s=ie; fpc_s=d=GGX6WCTIR29HWsjgLxFejKc_YJWxRqm3jYdEd6lu7W5ophpuAHBm6JGtNvhv97anG4VtaIMHQBPg3JAMOZGq59Lz_tRn_TFXgUT8T_at5HdCktVJLycy&v=2; fpt=d=nt1OT7HPe9wVIkHbMkpzQOgbP3.mQ3o1SPX7k5ztrFrWeeSWK5IgQooRY.8KtTeRMiaSEZ0kv3sO1MWtEsAzjVlRCDAZBoxqOs17v6PaZbPRqmDc92ivoMia.CqjufRs4_guOO4AyhRZ7_ml8rzxFrYeexpR2jLN0oPMyEWT0nbEf6Sdf._Bkh0HMfmI7KBnEx5uZBEEmV.wTfGRLG7zSd9sA4itOFv.r6AjP39CnogSn7NTJnqg_kEcKoiCM.lR5w_MqMc8IgWMBgSAZZgGEZpfmvxlQGnUzPwNh2pSpTe2wxFS3v1zPopDgoo2VsO3uzeyA3A_j7Hlk1P8T08DHbfr6ApDMUcr7d0QIt4pGYIxVV45XzfgpT7mgUdMei6VZrD9ozVQF0oqxrs1Ufri.XzPdB3NdQ--&v=1; fpc=d=sRPCfUfBTW96.RGiQn4hSkfi3p7WnPCAqYl5YoHecI7zjg7gH7PolscoPcq1Esm8dR.Rg1.AbQCpo2WBPXn1St96PpcjeCC.pj2.Upb3mKSRQkYPIVP1vQcL9nL7J8s9Z0VIXjiBFgSUcxyzDeUdP4us2YbVO3PbaVIwaIEfFsX3WI7YgiTbkrTGtwnFgoSYq6l8tnw-&v=2"
          }
          ["info_flagged:private"]= array(20) {
            [1048577]= string(21) "http://www.yahoo.com/"
            [2097154]= int(200)
            [2097166]= int(-1)
            [3145731]= float(0.486924)
            [3145732]= float(0.003692)
            [3145733]= float(0.005709)
            [3145734]= float(0.005714)
            [3145745]= float(0.149365)
            [3145747]= float(0.312743)
            [3145735]= float(0)
            [3145736]= float(28509)
            [3145737]= float(58549)
            [3145738]= float(0)
            [2097163]= int(1195)
            [2]= string(973) "GET / HTTP/1.0
        User-Agent: cURL_ClientBase (PHP v/5.2.6-1+lenny4)
        Host: www.yahoo.com
        Accept: /
        Accept-Encoding: gzip, deflate, compress
        Referer: http://yahoo.com
        Cookie: B=e5iber15t7u05&b=3&s=ie; fpc_s=d=GGX6WCTIR29HWsjgLxFejKc_YJWxRqm3jYdEd6lu7W5ophpuAHBm6JGtNvhv97anG4VtaIMHQBPg3JAMOZGq59Lz_tRn_TFXgUT8T_at5HdCktVJLycy&v=2; fpt=d=nt1OT7HPe9wVIkHbMkpzQOgbP3.mQ3o1SPX7k5ztrFrWeeSWK5IgQooRY.8KtTeRMiaSEZ0kv3sO1MWtEsAzjVlRCDAZBoxqOs17v6PaZbPRqmDc92ivoMia.CqjufRs4_guOO4AyhRZ7_ml8rzxFrYeexpR2jLN0oPMyEWT0nbEf6Sdf._Bkh0HMfmI7KBnEx5uZBEEmV.wTfGRLG7zSd9sA4itOFv.r6AjP39CnogSn7NTJnqg_kEcKoiCM.lR5w_MqMc8IgWMBgSAZZgGEZpfmvxlQGnUzPwNh2pSpTe2wxFS3v1zPopDgoo2VsO3uzeyA3A_j7Hlk1P8T08DHbfr6ApDMUcr7d0QIt4pGYIxVV45XzfgpT7mgUdMei6VZrD9ozVQF0oqxrs1Ufri.XzPdB3NdQ--&v=1; fpc=d=sRPCfUfBTW96.RGiQn4hSkfi3p7WnPCAqYl5YoHecI7zjg7gH7PolscoPcq1Esm8dR.Rg1.AbQCpo2WBPXn1St96PpcjeCC.pj2.Upb3mKSRQkYPIVP1vQcL9nL7J8s9Z0VIXjiBFgSUcxyzDeUdP4us2YbVO3PbaVIwaIEfFsX3WI7YgiTbkrTGtwnFgoSYq6l8tnw-&v=2"
            [2097164]= int(1153)
            [2097165]= int(0)
            [3145743]= float(211)
            [3145744]= float(0)
            [1048594]= string(23) "text/html;charset=utf-8"
          }
          ["request_url:private"]= string(16) "http://yahoo.com"
          ["response_url:private"]= string(21) "http://www.yahoo.com/"
          ["status_code:private"]= int(200)
          ["cookies:private"]= array(0) { }
          ["request_headers"]= string(973) "GET / HTTP/1.0
        User-Agent: cURL_ClientBase (PHP v/5.2.6-1+lenny4)
        Host: www.yahoo.com
        Accept: /
        Accept-Encoding: gzip, deflate, compress
        Referer: http://yahoo.com
        Cookie: B=e5iber15t7u05&b=3&s=ie; fpc_s=d=GGX6WCTIR29HWsjgLxFejKc_YJWxRqm3jYdEd6lu7W5ophpuAHBm6JGtNvhv97anG4VtaIMHQBPg3JAMOZGq59Lz_tRn_TFXgUT8T_at5HdCktVJLycy&v=2; fpt=d=nt1OT7HPe9wVIkHbMkpzQOgbP3.mQ3o1SPX7k5ztrFrWeeSWK5IgQooRY.8KtTeRMiaSEZ0kv3sO1MWtEsAzjVlRCDAZBoxqOs17v6PaZbPRqmDc92ivoMia.CqjufRs4_guOO4AyhRZ7_ml8rzxFrYeexpR2jLN0oPMyEWT0nbEf6Sdf._Bkh0HMfmI7KBnEx5uZBEEmV.wTfGRLG7zSd9sA4itOFv.r6AjP39CnogSn7NTJnqg_kEcKoiCM.lR5w_MqMc8IgWMBgSAZZgGEZpfmvxlQGnUzPwNh2pSpTe2wxFS3v1zPopDgoo2VsO3uzeyA3A_j7Hlk1P8T08DHbfr6ApDMUcr7d0QIt4pGYIxVV45XzfgpT7mgUdMei6VZrD9ozVQF0oqxrs1Ufri.XzPdB3NdQ--&v=1; fpc=d=sRPCfUfBTW96.RGiQn4hSkfi3p7WnPCAqYl5YoHecI7zjg7gH7PolscoPcq1Esm8dR.Rg1.AbQCpo2WBPXn1St96PpcjeCC.pj2.Upb3mKSRQkYPIVP1vQcL9nL7J8s9Z0VIXjiBFgSUcxyzDeUdP4us2YbVO3PbaVIwaIEfFsX3WI7YgiTbkrTGtwnFgoSYq6l8tnw-&v=2"
        }


  • using a connection string in web.config for crystal report

    - by zombiegx
    I'm having problems between two servers, which use different ODBC DSNs. My apps work great, but Crystal Reports keeps using the original ODBC connection. How can I fix this? I'm thinking of using the same connection string as in the web.config, but I don't know how. I found an article on this, but it is too confusing for me.

    This is an example of my code; it's an aspx page that loads as a PDF:

        protected void Page_Load(object sender, EventArgs e)
        {
            try
            {
                var par = Request.QueryString;
                int pidmun = 0;
                if (!string.IsNullOrEmpty(Request["id"]))
                {
                    pidmun = int.Parse(Request["id"]);
                }
                string pFechaIni = Request["fi"];
                string pFechaFin = Request["ff"];
                string pTipo = Request["t"];
                string pNombreMunicipio = Request["nm"];
                var pos = Request.Form;
                if (string.IsNullOrEmpty(pFechaIni)) { pFechaIni = "01/01/2010"; }
                if (string.IsNullOrEmpty(pFechaFin)) { pFechaFin = "01/01/2010"; }
                if (string.IsNullOrEmpty(pTipo)) { pTipo = "FOLIO"; }
                if (string.IsNullOrEmpty(pNombreMunicipio)) { pNombreMunicipio = "NombreMunicipio"; }

                ReporteIngresos report = new ReporteIngresos();
                TextObject nom;
                TextObject periodo;
                nom = (TextObject)report.ReportDefinition.ReportObjects["TxtNombreMunicipio"];
                periodo = (TextObject)report.ReportDefinition.ReportObjects["TxtPeriodo"];
                nom.Text = "Ingresos Municipio de " + pNombreMunicipio;
                periodo.Text = "Periodo del " + pFechaIni + " al " + pFechaFin;
                report.SetParameterValue("pidMun", pidmun);
                report.SetParameterValue("pFechaIni", pFechaIni);
                report.SetParameterValue("pFechaFin", pFechaFin);
                report.SetParameterValue("pTipo", pTipo);

                MemoryStream oStream;
                oStream = (MemoryStream)report.ExportToStream(CrystalDecisions.Shared.ExportFormatType.PortableDocFormat);
                Response.Clear();
                Response.Buffer = true;
                Response.AddHeader("CustomHeader", "ReporteIngresos");
                Response.CacheControl = "No-cache";
                Response.ContentType = "application/pdf";
                Response.BinaryWrite(oStream.ToArray());
                Response.End();
            }
            catch (Exception ex)
            {
                ExBi.log(ex);
                throw ex;
            }
        }

    Thanks.
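    One hedged possibility, assuming appSettings keys you would add to web.config yourself (the key names here are invented): ReportDocument.SetDatabaseLogon lets you override the connection the report was designed with at runtime, so each server can point the report at its own data source:

        using System.Configuration;

        // hypothetical appSettings entries in web.config:
        //   ReportServer, ReportDatabase, ReportUser, ReportPassword
        string server = ConfigurationManager.AppSettings["ReportServer"];
        string database = ConfigurationManager.AppSettings["ReportDatabase"];
        string user = ConfigurationManager.AppSettings["ReportUser"];
        string password = ConfigurationManager.AppSettings["ReportPassword"];

        ReporteIngresos report = new ReporteIngresos();
        // redirect the report away from the design-time ODBC connection
        report.SetDatabaseLogon(user, password, server, database);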


  • Nested Transaction issues within custom Windows Service

    - by pdwetz
    I have a custom Windows service I recently upgraded to use TransactionScope for nested transactions. It worked fine locally on my old dev machine (XP SP3) and on a test server (Server 2003). However, it fails on my new Windows 7 machine as well as on Server 2008. It was targeting the 2.0 framework; I tried targeting 3.5 instead, but it still fails.

    The strange part is really in how it fails: no exception is thrown. The service merely times out. I added tracing code, and it fails when opening the connection for database lookup #2 below. I also enabled tracing for System.Transactions; it literally cuts out partway while writing the block for the failed connection. We ran a SQL trace, but only the first lookup shows up. My code traces reach the line before the second lookup, but nothing after.

    I've had the same experience hitting two different SQL Servers (both SQL 2005 running on Server 2003). The connection string uses a SQL account (not Windows integration). All connections are against the same database in this case, but given the nature of the code the transaction is being escalated to MSDTC. Here's the basic code structure:

        TransactionOptions options = new TransactionOptions();
        options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
        using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew, options))
        {
            // Database lookup #1

            TransactionOptions options = new TransactionOptions();
            options.IsolationLevel = Transaction.Current != null
                ? Transaction.Current.IsolationLevel
                : System.Transactions.IsolationLevel.ReadCommitted;
            using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, options))
            {
                // Database lookup #2; fails on connection.Open()
                // Database save (never reached)
                scope.Complete();
            }
            scope.Complete();
        }

    My local firewall is disabled. The service normally runs as Network Service, but I also tried my user account (same results). The short of it is that I use the same general technique widely in my web applications and haven't had any issues. I pulled the code out and ran it fine within a local Windows Forms application. If anyone has any additional debugging ideas (or, even better, solutions) I'd love to hear them.
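    One diagnostic idea, sketched under the assumption that the hang is the ambient transaction timing out silently: giving the scope an explicit Timeout and catching TransactionAbortedException at least turns the stall into a traceable exception instead of a mute service:

        TransactionOptions options = new TransactionOptions();
        options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
        options.Timeout = TimeSpan.FromSeconds(30); // fail fast instead of hanging

        try
        {
            using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew, options))
            {
                // lookups and saves go here
                scope.Complete();
            }
        }
        catch (TransactionAbortedException ex)
        {
            // with an explicit timeout, the stall surfaces here and can be logged
            Console.WriteLine(ex.ToString());
        }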


  • Uploading files to varbinary(max) in SQL Server -- works on one server, not the other

    - by pjabbott
    I have some code that allows users to upload file attachments into a varbinary(max) column in SQL Server from their web browser. It had been working perfectly fine for almost two years, but all of a sudden it stopped working. And it stopped working only on the production database server -- it still works fine against the development server. I can only conclude that the code is fine and there is something up with the instance of SQL Server itself, but I have no idea how to isolate the problem.

    I insert a record into the ATTACHMENT table with only the non-binary data (title, content type, etc.), and then chunk-upload the posted file using the following code:

        // get the file stream
        System.IO.Stream fileStream = postedFile.InputStream;
        // make an upload buffer
        byte[] fileBuffer;
        fileBuffer = new byte[1024];
        // make an update command
        SqlCommand fileUpdateCommand = new SqlCommand(
            "update ATTACHMENT set ATTACHMENT_DATA.WRITE(@Data, NULL, NULL) where ATTACHMENT_ID = @ATTACHMENT_ID",
            sqlConnection, sqlTransaction);
        fileUpdateCommand.Parameters.Add("@Data", SqlDbType.Binary);
        fileUpdateCommand.Parameters.AddWithValue("@ATTACHMENT_ID", newId);
        while (fileStream.Read(fileBuffer, 0, fileBuffer.Length) > 0)
        {
            fileUpdateCommand.Parameters["@Data"].Value = fileBuffer;
            fileUpdateCommand.ExecuteNonQuery(); // <------ FAILS HERE
        }
        fileUpdateCommand.Dispose();
        fileStream.Close();

    Where it says "FAILS HERE", it sits for a while and then I get a SQL Server timeout error on the very first iteration through the loop. If I connect to the development database instead, everything works fine (it runs through the loop many, many times and the commit is successful). Both servers are identical (SQL Server 9.0.3042) and the schemas are identical as well.

    When I open Activity Monitor right after the timeout to see what's going on, it says the last command is

        (@Data binary(1024),@ATTACHMENT_ID decimal(4,0))update ATTACHMENT set ATTACHMENT_DATA.WRITE(@Data, NULL, NULL) where ATTACHMENT_ID = @ATTACHMENT_ID

    which I would expect, but it also shows a status of "Suspended" and a wait type of "PAGEIOLATCH_SH". I looked this up and it seems to be a bad thing, but I can't find anything specific to my situation. Ideas?
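    Unrelated to the timeout itself, but worth noting while in this code: Stream.Read returns the number of bytes actually read, and the loop ignores that count, so the final chunk appends stale bytes left over from the previous iteration. A hedged fix for the loop:

        byte[] fileBuffer = new byte[1024];
        int bytesRead;
        while ((bytesRead = fileStream.Read(fileBuffer, 0, fileBuffer.Length)) > 0)
        {
            byte[] chunk = fileBuffer;
            if (bytesRead < fileBuffer.Length)
            {
                // trim the last partial buffer so no leftover bytes are appended
                chunk = new byte[bytesRead];
                Array.Copy(fileBuffer, chunk, bytesRead);
            }
            fileUpdateCommand.Parameters["@Data"].Value = chunk;
            fileUpdateCommand.ExecuteNonQuery();
        }

    Raising fileUpdateCommand.CommandTimeout may also be worth trying if the production server is simply slower under disk I/O load, which the PAGEIOLATCH_SH wait suggests.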


  • SSL Authentication with Certificates: Should the Certificates have a hostname?

    - by sixtyfootersdude
    Summary

    JBoss allows clients and servers to authenticate using certificates and SSL. One thing that seems strange is that you are not required to put your hostname on the certificate. I think this means that if Server B is in your truststore, Server B can pretend to be any server it wants. (And likewise if Client B is in your truststore.) Am I missing something here?

    Authentication steps (summary of the Wikipedia page)

        1) Client Hello (encryption: none)
           - highest TLS protocol version supported
           - random number
           - list of cipher suites
           - compression methods
        2) Server Hello (encryption: none)
           - highest TLS protocol version supported
           - random number
           - chosen cipher suite
           - chosen compression method
        3) Certificate message from the server (encryption: none)
        4) ServerHelloDone (encryption: none)
        5) Certificate message from the client (encryption: none)
        6) ClientKeyExchange message (encrypted with the server's public key => only the server can read it => if the server can read it, he must own the certificate)
           - may contain a PreMasterSecret, a public key, or nothing (depends on the cipher)
        7) CertificateVerify message (signed with the client's private key)
           - its purpose is to prove to the server that the client owns the cert
        8) BOTH CLIENT AND SERVER: use the random numbers and the PreMasterSecret to compute a common secret
        9) Finished message from the client: contains a hash and MAC over the previous handshake messages (to ensure those unencrypted messages were not tampered with)
        10) Finished message from the server: same thing

    What the server knows:
    - The client has the private key for the sent certificate (step 7)
    - The client's certificate is valid because either it has been signed by a CA (e.g., VeriSign), or it is self-signed BUT is in the server's truststore
    - It is not a replay attack, because presumably the random number (step 1 or 2) is sent with each message

    What the client knows:
    - The server has the private key for the sent certificate (step 6 with step 8)
    - The server's certificate is valid because either it has been signed by a CA (e.g., VeriSign), or it is self-signed BUT is in the client's truststore
    - It is not a replay attack, because presumably the random number (step 1 or 2) is sent with each message

    Potential problem

    Suppose the client's truststore has two certs in it: Server A's and Server B's (malicious). Server A has hostname www.A.com; Server B has hostname www.B.com. Suppose the client tries to connect to Server A, but Server B launches a man-in-the-middle attack. Since Server B holds the private key for a certificate that will be sent to the client, and has a "valid certificate" (a cert in the truststore), and since certificates do not have a hostname field in them, it seems like Server B can easily pretend to be Server A. Is there something I am missing?
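    For what it's worth, mainstream TLS stacks do bind the certificate to a hostname during the client's validation step. A minimal C# illustration (the host name is the hypothetical www.A.com from the question): SslStream reports a mismatch between the target host and the certificate's CN/SAN as RemoteCertificateNameMismatch, which is exactly the check that would unmask Server B:

        using System.Net.Security;
        using System.Net.Sockets;

        TcpClient client = new TcpClient("www.A.com", 443);
        SslStream ssl = new SslStream(client.GetStream(), false,
            (sender, cert, chain, errors) =>
            {
                // errors includes RemoteCertificateNameMismatch when the
                // cert's subject does not match the requested host name
                return errors == SslPolicyErrors.None;
            });

        // the host name passed here is compared against the certificate
        ssl.AuthenticateAsClient("www.A.com");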


  • .Net Remote Log Querying

    - by jlafay
    I have a Windows service that I'm working on. It consists of the service itself, a WF service (using WorkflowServiceHost), a workflow (WorkflowApplication) that queries and processes data from a SQL Server DB, and a comm marshall class that handles data flow between the service and the workflow.

    The workflow does a lot of heavy data processing, and the original app (early VB6) logged all the processing and displayed the results on the screen of the host machine. Critical events will be written to the event log, because I strongly believe that should be common practice: admins naturally look there, and it already has support for remote viewing. The workflow will also need to write logging events as it processes and iterates according to our business logic, such as records queried, records returned, records processed, etc. The data is very critical, and we need to log actions as they occur. The logs are currently kept as text files on disk, and I think that is OK. Ideally I would like to record log events in XML, so they're easier to query, and because it is less costly than a DB, especially since our DB servers already do a lot of heavy processing.

    Since we are essentially replacing a VB6 application with a robust Windows service (taking advantage of WF 4.0), it has been requested that a remote client also be created. It subscribes to the service, is added to a collection of subscribers, and then receives callbacks from the service. Basic statistics and summaries are updated client-side as monitoring data arrives about what is going on in the service. We would also like to provide a way to drill into the details when something needs closer examination, because this is a long-running data-processing service and issues need to be addressed immediately.

    What is the best way to implement some type of query that is sent from the client to the service and returned to the client? Would it be efficient to expose another method on the service, have it pass the query off to some querying class/object that examines the XML files by whatever specification, and then return the results to the client? That's the main concern: I don't want the service's processing to bottleneck much while this occurs. It seems that WF already threads well for the most part, but I want to make sure this is the right way to go about it. Any suggestions or recommendations on how to architect and implement a small log-querying framework for a remote service would be awesome.
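    As a sketch of the querying side, assuming an XML log shaped roughly like <entry time="..." level="...">message</entry> (the element and attribute names are invented for illustration): the service method can filter with LINQ to XML and return only the matching slice, so the client never pulls whole files across the wire:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        public List<string> QueryLog(string logPath, DateTime from, DateTime to, string level)
        {
            // XDocument.Load reads the file outside the workflow's processing path;
            // running this on a worker thread keeps it from stalling the service
            XDocument doc = XDocument.Load(logPath);
            return doc.Descendants("entry")
                      .Where(e => (DateTime)e.Attribute("time") >= from &&
                                  (DateTime)e.Attribute("time") <= to &&
                                  (string)e.Attribute("level") == level)
                      .Select(e => e.Value)
                      .ToList();
        }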


  • Unable to read data from the transport connection: the connection was closed

    - by webdreamer
    The exception is a RemotingException - authentication failure. The detailed message says "Unable to read data from the transport connection: the connection was closed."

    I'm having trouble creating two simple servers that can communicate as remote objects in C#. ServerInfo is just a class I created that holds the IP and port and can give back the address. It works fine, as I have used it before, and I've debugged it. The server also starts just fine: no exception is thrown, and the channel is registered without problems. I'm using Forms for the interfaces, which call some of the methods on the server, and found no problems passing parameters from the Forms application to the server while debugging. All seems fine in that chapter.

        public ChordServerProgram()
        {
            RemotingServices.Marshal(this, "PADIBook");
            nodeInt = 0;
        }

        public void startServer()
        {
            try
            {
                serverChannel = new TcpChannel(serverInfo.Port);
                ChannelServices.RegisterChannel(serverChannel, true);
            }
            catch (Exception e)
            {
                Console.WriteLine(e.ToString());
            }
        }

    I run two instances of this program. Then startNode is called on one of the instances of the application. The port is fine, and the generated address is fine as well. As you can see, I'm using the localhost IP, since this server is just for testing purposes.

        public void startNode(String portStr)
        {
            IPAddress address = IPAddress.Parse("127.0.0.1");
            Int32 port = Int32.Parse(portStr);
            serverInfo = new ServerInfo(address, port);
            startServer();
            //node = new ChordNode(serverInfo, this);
        }

    Then, in the other instance, through the interface again, I call another startNode method, giving it a seed server to get information from. This is where it goes wrong: when it calls the method on the seedServer proxy it just obtained, a RemotingException is thrown due to an authentication failure. (The parameter I'll eventually want to get is the node; I'm just using the int to make sure the ChordNode class has nothing to do with this error.)

        public void startNode(String portStr, String seedStr)
        {
            IPAddress address = IPAddress.Parse("127.0.0.1");
            Int32 port = Int32.Parse(portStr);
            serverInfo = new ServerInfo(address, port);
            IPAddress addressSeed = IPAddress.Parse("127.0.0.1");
            Int32 portSeed = Int32.Parse(seedStr);
            ServerInfo seedInfo = new ServerInfo(addressSeed, portSeed);
            startServer();
            ChordServerProgram seedServer = (ChordServerProgram)Activator.GetObject(
                typeof(ChordServerProgram), seedInfo.GetFullAddress());
            // node = new ChordNode(serverInfo, this);
            int seedNode = seedServer.nodeInt;
            // node.chordJoin(seedNode.self);
        }
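    A guess worth testing, sketched below: the server registers its channel with ensureSecurity set to true, but the client calls Activator.GetObject without registering any channel, so the runtime supplies a default, non-secure one, and that mismatch can surface as exactly this kind of authentication failure. Registering a secure TcpChannel on the client first would rule it out:

        using System.Collections;
        using System.Runtime.Remoting.Channels;
        using System.Runtime.Remoting.Channels.Tcp;

        // client side, before Activator.GetObject:
        IDictionary props = new Hashtable();
        props["secure"] = true; // match the server's RegisterChannel(..., true)
        TcpChannel clientChannel = new TcpChannel(props, null, null);
        ChannelServices.RegisterChannel(clientChannel, true);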


  • Recommended ASP.NET Shared Hosting

    - by coffeeaddict
    OK, I have to admit I'm getting fed up with www.discountasp.net's pricing model, and this annoyance has built up over the past 8 years or so. I've been with them for years and absolutely love them on the technical side, but it's getting ridiculously expensive for how little you get. Here's my scenario:

    1) I am running 2 SQL Server databases, which cost me $10 each per month, so $20/month for the two, and I only get 500 MB of disk space, which is horrible.
    2) I am paying $10/month just for the hosting itself, which only includes 1 GB of disk space!
    3) I am simply running 2 small apps (ScrewTurn Wiki and Subtext Blog), so I don't really care whether uptime is 99% or not; it's not worth paying a total of $300 just to keep these 2 apps running on discountasp.net.

    Anyone else feel the same? Yes, I know they have great support and probably great servers running behind all this, but in the end I really don't care, as long as my site is up 95% of the time or better. Yes, the hosting toolset rocks, but I bet I can find a similar set somewhere else. I like how I can totally control IIS 7 at discountasp, including my own app pool; that's very powerful and essential. But does anyone have any good alternatives to discountasp that give me close to the same at a much more reasonable price point?

    I mean, http://www.m6.net/prices.aspx gives you 10 SQL databases and 200 gigs of disk space for $7! I don't know about their tools or support, but just looking at those numbers and some other hosts I've seen, I feel that discountasp.net is way out of line. They don't even offer any volume discounts; it would be nice if my 2nd SQL Server database were only $5/month instead of $10 -- stuff like this, to make it much more realistic and fair.

    Opinions (from people who use discountasp.net, people who have left them, or people who have another host they like)? But geez, $300 just to host a couple of DBs and lightweight open-source apps is not worth the price they are charging. I'm almost at a price point that gets me a decent dedicated server! And I really don't care about beta support; that's not a big deal to me.


  • sending and receiving with sockets in java?

    - by Darksole
    I am working on sending to and receiving from clients and servers in Java, and am stumped at the moment. The client socket is to contact a server at "localhost", port 4321. The client will receive a string from the server and then alternate spelling out the contents of this string with the server. For example, given the string "Bye Bye", the client (which always sends the first letter) sends "B", receives "y", sends "e", receives " ", sends "B", receives "y", sends "e", and finally receives "done!", which is the string that either client or server sends after the last letter of the original string has been exchanged. After "done!" is transmitted, both client and server close their connections.

    How would I go about getting the initial string, then going back and forth sending and receiving the letters that make up the string, and, when finished, either sending or receiving "done!"?

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.io.PrintWriter;
        import java.net.Socket;
        import java.net.UnknownHostException;
        import java.util.Scanner;

        public class Program2 {
            public static void goClient() throws UnknownHostException, IOException {
                String server = "localhost";
                int port = 4321;
                Socket socket = new Socket(server, port);
                InputStream inStream = socket.getInputStream();
                OutputStream outStream = socket.getOutputStream();
                Scanner in = new Scanner(inStream);
                PrintWriter out = new PrintWriter(outStream, true);
                String rec = "";
                if (in.hasNext()) {
                    rec = in.nextLine();
                }
                char[] array = new char[rec.length()];
                for (int i = 0; i < rec.length(); i++) {
                    array[i] = rec.charAt(i);
                }
                while (in.hasNext()) {
                    for (int x = 0; x < array.length + 1; x += 2) {
                        String str = in.nextLine();
                        str = Character.toString(array[x]);
                        out.println(str);
                    }
                    in.close();
                    socket.close();
                }
            }
        }


  • Can highlight the current menu item, but can't add the class to style unhighlighted menu items

    - by bradpotts
        <?php $activesidebar[$currentsidebar]="id=isactive"; ?>
        <div class="span3">
          <div class="well sidebar-nav hidden-phone">
            <ul class="nav nav-list">
              <li class="nav-header" <?php echo $activesidebar[1] ?>>Marketing Services</li>
              <li><a href="#">Marketing Technology</a></li>
              <li><a href="#">Generate More Sales</a></li>
              <li><a href="#">Direct Email Marketing</a></li>
              <li class="nav-header" <?php echo $activesidebar[2] ?>>Advertising Services</li>
              <li><a href="../services-advertising-mass-media-network.php">Traditional Medias</a></li>
              <li><a href="#">Online & Social Medias</a></li>
              <li><a href="#">Media Planing & Purchasing</a></li>
              <li class="nav-header" <?php echo $activesidebar[3] ?>>Technology Services</li>
              <li><a href="#">Managed Websites</a></li>
              <li><a href="#">Managed Web Servers</a></li>
              <li><a href="#">Managed Databases</a></li>
              <li class="nav-header" <?php echo $activesidebar[4] ?>>About Us</li>
              <li><a href="../aboutus-contactus.php">Contact Us</a></li>
            </ul>
          </div>

    This is added to the current page I want the highlight on:

        <?php $currentsidebar = 2; include('module-sidebar-navigation.php'); ?>

    I had programmed this menu individually on each page, but to make my website dynamic I moved it into one file and use a PHP include to load it. I can get the menu to highlight on the current page by assigning id="isactive". How can I assign id="notactive" to the other 3 menu headers that are not active on that page? Is there an else or elseif I have to include?


  • Feedback on Optimizing C# NET Code Block

    - by Brett Powell
    I just spent quite a few hours reading up on TCP servers and the protocol I was trying to implement, and finally got everything working great. I noticed the code looks like absolute bollocks (is that the correct usage? I'm not a Brit) and would like some feedback on optimizing it, mostly for reuse and readability. The packet format is always int, int, int, string, string.

        try
        {
            BinaryReader reader = new BinaryReader(clientStream);
            int packetsize = reader.ReadInt32();
            int requestid = reader.ReadInt32();
            int serverdata = reader.ReadInt32();
            Console.WriteLine("Packet Size: {0} RequestID: {1} ServerData: {2}", packetsize, requestid, serverdata);

            List<byte> str = new List<byte>();
            byte nextByte = reader.ReadByte();
            while (nextByte != 0)
            {
                str.Add(nextByte);
                nextByte = reader.ReadByte();
            }
            // Password sent to be authenticated
            string string1 = Encoding.UTF8.GetString(str.ToArray());

            str.Clear();
            nextByte = reader.ReadByte();
            while (nextByte != 0)
            {
                str.Add(nextByte);
                nextByte = reader.ReadByte();
            }
            // NULL string
            string string2 = Encoding.UTF8.GetString(str.ToArray());
            Console.WriteLine("String1: {0} String2: {1}", string1, string2);

            // Reply to authentication request
            MemoryStream stream = new MemoryStream();
            BinaryWriter writer = new BinaryWriter(stream);
            writer.Write((int)(1)); // Packet Size
            writer.Write((int)(requestid)); // Mirror RequestID if authenticated, -1 if failed
            byte[] buffer = stream.ToArray();
            clientStream.Write(buffer, 0, buffer.Length);
            clientStream.Flush();
        }

    I am going to be dealing with other packet types as well that are formatted the same way (int/int/int/str/str) but with different values. I could probably create a packet class, but this is a bit outside my scope of knowledge in terms of how to apply it to this scenario. If it makes any difference, this is the protocol I am implementing: http://developer.valvesoftware.com/wiki/Source_RCON_Protocol
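    Since every packet is int/int/int/string/string, the packet-class idea could look roughly like this (a sketch, not tested against real Source RCON traffic; the field names are guesses from the layout above):

        using System.Collections.Generic;
        using System.IO;
        using System.Text;

        class RconPacket
        {
            public int RequestId;
            public int Type;
            public string Body1 = "";
            public string Body2 = "";

            public static RconPacket Read(BinaryReader reader)
            {
                RconPacket p = new RconPacket();
                int size = reader.ReadInt32(); // size of the remainder of the packet
                p.RequestId = reader.ReadInt32();
                p.Type = reader.ReadInt32();
                p.Body1 = ReadNullTerminated(reader);
                p.Body2 = ReadNullTerminated(reader);
                return p;
            }

            public void Write(BinaryWriter writer)
            {
                byte[] b1 = Encoding.UTF8.GetBytes(Body1);
                byte[] b2 = Encoding.UTF8.GetBytes(Body2);
                writer.Write(4 + 4 + b1.Length + 1 + b2.Length + 1); // size field excludes itself
                writer.Write(RequestId);
                writer.Write(Type);
                writer.Write(b1); writer.Write((byte)0);
                writer.Write(b2); writer.Write((byte)0);
            }

            static string ReadNullTerminated(BinaryReader reader)
            {
                List<byte> bytes = new List<byte>();
                byte b;
                while ((b = reader.ReadByte()) != 0)
                    bytes.Add(b);
                return Encoding.UTF8.GetString(bytes.ToArray());
            }
        }

    With something like that in place, the handler body shrinks to one RconPacket.Read call per request and one Write call per response, regardless of packet type.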


  • .NET Windows Service with timer stops responding

    - by Biri
    I have a Windows service written in C#. It has a timer inside which fires some functions on a regular basis. The skeleton of my service:

        public partial class ArchiveService : ServiceBase
        {
            Timer tickTack;
            int interval = 10;
            ...

            protected override void OnStart(string[] args)
            {
                tickTack = new Timer(1000 * interval);
                tickTack.Elapsed += new ElapsedEventHandler(tickTack_Elapsed);
                tickTack.Start();
            }

            protected override void OnStop()
            {
                tickTack.Stop();
            }

            private void tickTack_Elapsed(object sender, ElapsedEventArgs e)
            {
                ...
            }
        }

    It works for some time (like 10-15 days), then it stops. I mean the service shows as running, but it does not do anything. I added some logging, and the problem seems to be the timer: after the interval it does not call the tickTack_Elapsed function.

    I was thinking about rewriting it without a timer, using an endless loop that blocks for the interval I set up. But that is not an elegant solution either, and I think it could have side effects regarding memory.

    The Timer is from the System.Timers namespace; the environment is Windows 2003. I used this approach in two different services on different servers, and both exhibit this behavior (which is why I suspected it is somehow connected to my code or the framework itself).

    Has somebody experienced this behavior? What could be wrong?

    Edit: I edited both services. One got a nice try-catch everywhere and more logging. The second got timer recreation on a regular basis. Neither has stopped since then, so if this situation holds for another week, I will close this question. Thank you, everyone, so far.

    Edit: I am closing this question because nothing more happened. I made some changes, but those changes are not really relevant in this matter, and both services have been running without any problem since then. Please mark it as "closed, not relevant anymore".
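    One thing worth knowing here: System.Timers.Timer silently swallows any exception thrown from an Elapsed handler, so a failure inside tickTack_Elapsed can kill the work with no visible error while the service keeps "running". Wrapping the handler is a cheap safeguard (DoWork and Log stand in for the real code):

        private void tickTack_Elapsed(object sender, ElapsedEventArgs e)
        {
            try
            {
                DoWork(); // the real processing
            }
            catch (Exception ex)
            {
                // without this, the exception disappears: Elapsed runs on a
                // ThreadPool thread and the timer suppresses what it throws
                Log(ex);
            }
        }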


  • How do you unit test the real world?

    - by Kim Sun-wu
    I'm primarily a C++ coder, and thus far have managed without really writing tests for all of my code. I've decided this is a Bad Idea(tm), after adding new features that subtly broke old features -- or, depending on how you wish to look at it, introduced some new "features" of their own.

    But unit testing seems to be an extremely brittle mechanism. You can test for something under "perfect" conditions, but you don't get to see how your code performs when stuff breaks. Take, for instance, a crawler; let's say it crawls a few specific sites for data X. Do you simply save sample pages, test against those, and hope the sites never change? That would work fine as regression tests, but what sort of tests would you write to constantly check those sites live and let you know when the application isn't doing its job because a site changed something that now causes your application to crash? Wouldn't you want your test suite to monitor the intent of the code?

    The above example is a bit contrived, and not something I've run into (in case you hadn't guessed). Let me pick something I have, though: how do you test that an application will do its job in the face of a degraded network stack? That is, say you have a moderate amount of packet loss, for one reason or another, and you have a function DoSomethingOverTheNetwork() which is supposed to degrade gracefully when the stack isn't performing as expected; but does it? The developer tests it personally by setting up a gateway that drops packets to simulate a bad network when he first writes it. A few months later, someone checks in some code that modifies something subtly, so the degradation isn't detected in time, or the application doesn't even recognize the degradation. This is never caught, because you can't run real-world tests like that using unit tests, can you?

    Further, how about file corruption? Let's say you're storing a list of servers in a file, and the checksum looks OK but the data isn't really valid. You want the code to handle that, and you write some code that you think does. How do you test that it does exactly that for the life of the application? Can you?

    Hence, brittleness. Unit tests seem to test the code only under perfect conditions (and this is promoted, with mock objects and such), not what it will face in the wild. Don't get me wrong: I think unit tests are great, but a test suite composed only of them seems to be a smart way to introduce subtle bugs into your code while feeling overconfident about its reliability.

    How do I address the above situations? If unit tests aren't the answer, what is? Thanks!
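    One common pattern for the degraded-network case, sketched here in C# (the interface and names are invented for illustration): hide the socket behind an interface, then inject a deliberately faulty implementation in tests, so DoSomethingOverTheNetwork() is exercised against repeatable "packet loss":

        using System;

        public interface ITransport
        {
            byte[] Receive();
        }

        // test double that fails a configurable fraction of calls
        public class FlakyTransport : ITransport
        {
            private readonly ITransport inner;
            private readonly Random rng = new Random(42); // fixed seed => repeatable failures
            private readonly double lossRate;

            public FlakyTransport(ITransport inner, double lossRate)
            {
                this.inner = inner;
                this.lossRate = lossRate;
            }

            public byte[] Receive()
            {
                if (rng.NextDouble() < lossRate)
                    throw new TimeoutException("simulated packet loss");
                return inner.Receive();
            }
        }

    The same trick covers the corrupt-file case: a stream wrapper that flips bytes after the checksum was computed.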


  • php connecting to mysql server(localhost) very slow

    - by Ahmad
    Actually it's a little complicated. Summary: the connection to the DB is very slow. The page rendering takes around 10 seconds, but the last statement on the page is an echo, and I can see its output while the page is loading in Firefox (IE is the same). In Google Chrome the output becomes visible only when loading finishes. Loading time is approximately the same across browsers.

    On debugging, I found that it is the DB connectivity that is creating the problem. The DB was on another machine, so to debug further I deployed the DB on my local machine; now the DB connection is at 127.0.0.1, but the connection still takes a long time. This means the issue is with Apache/PHP and not with MySQL. But then I deployed my code on another machine, which connects to the DB remotely, and everything seems fine there.

    The application uses a couple of mod_rewrite rules, but I removed all the .htaccess files and the slow connectivity issue remains. I installed another Apache on my machine with default settings; the connection was still very slow. I added the following statements to measure the execution time:

        $stime = microtime();
        $stime = explode(" ", $stime);
        $stime = $stime[1] + $stime[0];

        // my code -- it involves connection to the DB

        $mtime = microtime();
        $mtime = explode(" ", $mtime);
        $mtime = $mtime[1] + $mtime[0];

        $totaltime = ($mtime - $stime);
        echo $totaltime;

    The output is 0.0631899833679, but Firebug's Net panel shows a total loading time of 10-11 seconds; same with Google Chrome. I tried turning off the Windows firewall; connectivity is still slow, and I just can't find the reason. I've tried multiple DB servers and multiple Apaches; nothing seems to be working. Any idea what might be the problem?


  • What libraries provide cross-platform 3D and P2P support?

    - by uckelman
    I'm trying to find a constellation of libraries which, taken together, meet the following requirements:

    - Smooth scaling, rotation, panning (in two dimensions). I'll have a large bitmap (or SVG, in some cases), maybe up to 10000x10000 pixels, which serves as a map, with some middling number of small bitmaps (or, again, possibly SVG) that can be dragged around over it. I need to be able to zoom, rotate, and pan this scene; however, the view will always be normal to (i.e., looking head-on at) the large bitmap, so I'm not really using the depth dimension.
    - Peer-to-peer. I'd like multiple users to be able to connect in order to share one of the scenes mentioned above, preferably peer-to-peer, without much configuration by the user. I intend to have a server running for cases where users are unable to connect P2P; I'd like the failover to happen automatically, or possibly have some way of promoting capable clients to be servers themselves.
    - Synchronization. Once a user has started dragging one of the small bitmaps (a piece), no other user should be able to drag that piece until the drag stops. I haven't thought of exactly how to do this; there might be a simple solution, or this kind of synchronization might be something a library provides.
    - Cross(ish)-platform. I need to be able to run on Linux, Windows, and Mac OS. It would be nice to also be able to run on tablets. Having mostly the same code for all platforms is a plus, but not absolutely necessary.
    - (L)GPL compatible. I'm planning to release under the LGPL or GPL, preferably the latter, so I need libraries with compatible licenses.

    I'm not set on any particular language; I'd like to use whichever library or libraries make the work easiest, though my preference is to work in at most two languages for the project. (The model could potentially be in one language and the view in another, so they could talk to each other via some protocol I define, if that would get me a better selection of libraries.)

    Can anyone offer suggestions for what to use?


  • What do I need to distribute (keys, certs) for Python w/ SSL-socket connection?

    - by fandingo
    I'm trying to write a generic server-client application that will be able to exchange data among servers. I've read over quite a few OpenSSL documents, and I have successfully set up my own CA and created a cert (and private key) for testing purposes. I'm stuck with Python 2.3, so I can't use the standard "ssl" library. Instead, I'm stuck with PyOpenSSL, which doesn't seem bad, but there aren't many documents out there about it.

    My question isn't really about getting it working; I'm more confused about the certificates and where they need to go. Here are my two programs, which do work:

    Server:

        #!/bin/env python
        from OpenSSL import SSL
        import socket
        import pickle

        def verify_cb(conn, cert, errnum, depth, ok):
            print('Got cert: %s' % cert.get_subject())
            return ok

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.set_verify(SSL.VERIFY_PEER|SSL.VERIFY_FAIL_IF_NO_PEER_CERT, verify_cb)
        # ??????
        ctx.use_privatekey_file('./Dmgr-key.pem')
        ctx.use_certificate_file('Dmgr-cert.pem')
        # ??????
        ctx.load_verify_locations('./CAcert.pem')

        server = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        server.bind(('', 50000))
        server.listen(3)

        a, b = server.accept()
        c = a.recv(1024)
        print(c)

    Client:

        from OpenSSL import SSL
        import socket
        import pickle

        def verify_cb(conn, cert, errnum, depth, ok):
            print('Got cert: %s' % cert.get_subject())
            return ok

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.set_verify(SSL.VERIFY_PEER, verify_cb)
        # ??????????
        ctx.use_privatekey_file('/home/justin/code/work/CA/private/Dmgr-key.pem')
        ctx.use_certificate_file('/home/justin/code/work/CA/Dmgr-cert.pem')
        # ?????????
        ctx.load_verify_locations('/home/justin/code/work/CA/CAcert.pem')

        sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        sock.connect(('10.0.0.3', 50000))

        a = Tester(2, 2)
        b = pickle.dumps(a)
        sock.send("Hello, world")
        sock.flush()
        sock.send(b)
        sock.shutdown()
        sock.close()

    I found this information in ftp://ftp.pbone.net/mirror/ftp.pld-linux.org/dists/2.0/PLD/i586/PLD/RPMS/python-pyOpenSSL-examples-0.6-2.i586.rpm, which contains some example scripts. As you might gather, I don't fully understand the sections between the "# ????????" markers. I don't get why the certificate and private key are needed on both the client and the server. I'm not sure where each should go, but shouldn't I only need to distribute one part of the key (probably the public part)? It undermines the purpose of having asymmetric keys if you still need both on each server, right?

    I tried alternately removing either the pkey or the cert on either box, and I get the following error no matter which I remove:

        OpenSSL.SSL.Error: [('SSL routines', 'SSL3_READ_BYTES', 'sslv3 alert handshake failure'), ('SSL routines', 'SSL3_WRITE_BYTES', 'ssl handshake failure')]

    Could someone explain whether this is the expected behavior for SSL? Do I really need to distribute the private key and public cert to all my clients? I'm trying to avoid any huge security problems, and leaking private keys would tend to be a big one... Thanks for the help!


  • C# some sort of plugin system

    - by nLL
    Hi, I am a mobile web developer trying to monetize my traffic with mobile ad services, and I have a problem. To get the most out of your ads, you usually need to make server-side requests to the ad companies' servers, and there are quite a few ad services. The problem starts when you want to use several of them on one site: they all have different approaches to server-side calls, and maintaining and implementing those ad codes becomes a pain after a while.

    So I decided to write a class system where I can simply create a method for each company and upload it to my site. So far I have:

    - a public Advert class
    - a public AdPublisher class with a GetAd method that returns an Advert
    - an AdService class with the service names as an enum

    I have also converted the server-request code for every ad service I use into classes. It works OK, but I want to be able to create an ad-service class, upload it, and have the ASP.NET app import/recognize it automatically, like a plugin system. As I am new to .NET, I have no idea where to start or how to do it. To make things clear, here are my classes:

        namespace Mobile.Publisher
        {
            public class AdPublisher
            {
                public AdPublisher()
                {
                    IsTest = false;
                }

                public bool IsTest { get; set; }
                public HttpRequest CurrentVisitorRequestInfo { get; set; }

                public Advert GetAd(AdService service)
                {
                    Advert returnAd = new Advert();
                    returnAd.Success = true;
                    if (this.CurrentVisitorRequestInfo == null)
                    {
                        throw new Exception("CurrentVisitorRequestInfo for AdPublisher not set!");
                    }
                    if (service == null)
                    {
                        throw new Exception("AdService not set!");
                    }
                    if (service.ServiceName == AdServices.Admob)
                    {
                        returnAd.ReturnedAd = AdmobAds("000000");
                    }
                    return returnAd;
                }
            }

            public enum AdServices
            {
                Admob,
                ServiceB,
                ServiceC
            }

            public class Advert
            {
                public bool Success { get; set; }
                public string ReturnedAd { get; set; }
            }

            public partial class AdService
            {
                public AdServices ServiceName { get; set; }
                public string PublisherOrSiteId { get; set; }
                public string ZoneOrChannelId { get; set; }
            }

            private string AdmobAds(string publisherid)
            {
                //snip
                return "test";
            }
        }

    Basically, I want to be able to add another ad service with code like

        private string AdmobAds(string publisherid) { }

    so that it can be imported and recognized as an ad service. I hope I was clear enough.
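    A minimal sketch of the usual approach, with an invented IAdService interface standing in for the ad-service classes: put a shared interface in a common assembly, have each ad service implement it in its own DLL, and scan a plugins folder with reflection at startup:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Reflection;

        public interface IAdService
        {
            string ServiceName { get; }
            string GetAd(string publisherOrSiteId);
        }

        public static class PluginLoader
        {
            public static List<IAdService> LoadFrom(string folder)
            {
                List<IAdService> services = new List<IAdService>();
                foreach (string dll in Directory.GetFiles(folder, "*.dll"))
                {
                    Assembly asm = Assembly.LoadFrom(dll);
                    foreach (Type t in asm.GetTypes())
                    {
                        // pick up every concrete type implementing the interface
                        if (typeof(IAdService).IsAssignableFrom(t) && !t.IsAbstract && !t.IsInterface)
                            services.Add((IAdService)Activator.CreateInstance(t));
                    }
                }
                return services;
            }
        }

    GetAd could then dispatch to whichever loaded service matches, instead of switching on the enum; uploading a new ad service becomes dropping a DLL into the folder (in ASP.NET, an app-pool recycle would pick it up).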


  • Should I HttpCombine Google Jquery Hosted File?

    - by chobo2
    Hi, I am using something called HttpCombiner (http://code.msdn.microsoft.com/HttpCombiner): an HTTP handler that combines multiple CSS, JavaScript or URL resources into one response for faster page loads. It can combine, compress, and cache the response, which results in faster page loads and better scalability of a web application.

    It's a good practice to use many small Javascript and CSS files instead of one large Javascript/CSS file for better code maintainability, but bad in terms of website performance. Although you should write your Javascript code in small files and break large CSS files into small chunks, when the browser requests those javascript and css files, it makes one Http request per file. Every Http request results in a network roundtrip from your browser to the server, and the delay in reaching the server and coming back to the browser is called latency. So, if you have four javascripts and three css files loaded by a page, you are wasting time in seven network roundtrips. Within the USA, latency averages 70ms, so you waste 7x70 = 490ms, about half a second of delay. Outside the USA, average latency is around 200ms, so that means 1400ms of waiting. The browser cannot show the page properly until the CSS and Javascripts are fully loaded. So, the more latency you have, the slower the page loads. You can reduce the wait time by using a CDN (read my previous blog post about using a CDN). However, a better solution is to deliver multiple files over one request using an HttpHandler that combines several files and delivers them as one output. So, instead of putting many <script> or <link> tags, you just put one of each and point them to the HttpHandler. You tell the handler which files to combine, and it delivers those files in one response. This saves the browser from making many requests and eliminates the latency. The handler reads the file names defined in a configuration, combines all those files, and delivers them as one response. It delivers the response gzip-compressed to save bandwidth. Moreover, it generates proper cache headers to cache the response in the browser cache, so that the browser does not request it again on a future visit.

    Now I am wondering: since it can handle combining links, should I put the Google-hosted jQuery file into it? The reason I am not sure: if jQuery gets combined with my other files, I think I might lose the advantages of it being hosted on Google's servers, such as caching (my thinking is that if it gets combined, the URL will look different, so even if a user already has the Google copy in their cache, I am not sure the browser will use it). So should I combine it, or combine only the files that I am serving locally?


  • Nagios Creating lots of zombie process

    - by pradeepchhetri
    In my monitoring box, I have lots of zombie processes created by Nagios, and they get reaped quickly as well. I am using active checks to perform monitoring of my servers. I captured the defunct processes using the following command, which collects the output of top twenty times with a 0.25 s delay:

        $ top -d 0.25 -b -n 20 > topout.txt

    I then grepped topout.txt for the defunct processes:

        $ cat topout.txt | grep defunct

    I get the following output:

        8957 nagios 20 0 0 0 0 Z 6.0 0.0 0:00.02 nagios <defunct>
        8951 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct>
        8954 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct>
        8945 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        8946 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        8980 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9000 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.00 nagios <defunct>
        9024 nagios 20 0 0 0 0 Z 7.0 0.0 0:00.02 nagios <defunct>
        9025 nagios 20 0 0 0 0 Z 3.5 0.0 0:00.01 nagios <defunct>
        9040 nagios 20 0 0 0 0 Z 3.1 0.0 0:00.01 nagios <defunct>
        9086 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9087 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9123 nagios 20 0 0 0 0 Z 6.1 0.0 0:00.02 nagios <defunct>
        9126 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct>
        9131 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct>
        9091 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.05 nagios <defunct>
        9111 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9119 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9118 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9151 nagios 20 0 0 0 0 Z 2.9 0.0 0:00.02 nagios <defunct>
        9153 nagios 20 0 0 0 0 Z 2.9 0.0 0:00.02 nagios <defunct>
        9150 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9164 nagios 20 0 0 0 0 Z 3.5 0.0 0:00.02 nagios <defunct>
        9171 nagios 20 0 0 0 0 Z 3.5 0.0 0:00.02 nagios <defunct>
        9154 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9156 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9163 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9167 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9178 nagios 20 0 0 0 0 Z 3.8 0.0 0:00.02 nagios <defunct>
        9174 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9179 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9182 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>

    Can somebody help me find the reason for these zombie processes and how I can prevent them?


  • Strange DNS issue with internal Windows DNS

    - by Brady
    I've encountered a strange issue with our internal Windows DNS infrastructure. We have a website hosted on Amazon EC2, with the DNS running on Amazon Route 53. In the publicly facing DNS we have the wildcard record set up as an A record alias pointing to an AWS Elastic Load Balancer sitting in front of our EC2 instances. (For those who are not aware, the A record alias behaves like a CNAME record, but no extra lookup is required on the client side; see http://docs.amazonwebservices.com/Route53/latest/DeveloperGuide/CreatingAliasRRSets.html for more information.)

    We have a secondary domain whose www subdomain is a CNAME pointing to a subdomain on the primary domain, which resolves against the wildcard entry. For example, www.secondary.com is a CNAME to sub1.primary.com, but there is no explicit entry for sub1.primary.com, so it resolves to the wildcard record. This setup works without issue publicly.

    The issue comes in our internal DNS at our corporate office, where we use the same primary domain for some internal-only sites. In this setup we have two Active Directory DNS servers: one Server 2003 and one Server 2008 R2 instance. The zone is an AD-integrated zone, but it is not the AD domain. In the internal DNS we have the wildcard record pointing to a third external domain, also hosted on Route 53 with an A record alias pointing to the same ELB instance. For example, *.primary.com is a CNAME to tertiary.com, so in effect www.secondary.com is a CNAME to *.primary.com, which is a CNAME to tertiary.com.

    In this setup, attempting to resolve www.secondary.com fails. Clearing the cache on the Server 2003 instance will allow it to resolve once, but subsequent attempts fail. It fails even with a clean cache against the 2008 R2 server. It seems that only Windows clients are affected; a Mac running OS X Mountain Lion does not experience this issue. I'm even able to replicate the issue using nslookup.

    Against the 2003 server, with a freshly cleaned cache, I receive the appropriate response for www.secondary.com:

        Non-authoritative answer:
        Name:    subdomain.primary.com
        Address: x.x.x.x (Public IP)
        Aliases: www.secondary.com

    Subsequent checks simply return:

        Non-authoritative answer:
        Name:    www.secondary.com

    If you set the type to CNAME, you get the appropriate responses all the time. www.secondary.com gives you:

        Non-authoritative answer:
        www.secondary.com    canonical name = subdomain.primary.com

    And subdomain.primary.com gives you:

        subdomain.primary.com    canonical name = tertiary.com

    And setting the type back to A gives you the appropriate response for tertiary.com:

        Non-authoritative answer:
        Name:    tertiary.com
        Address: x.x.x.x (Public IP)

    Against the 2008 R2 server, things are a little different. Even with a clean cache, www.secondary.com returns just:

        Non-authoritative answer:
        Name:    www.secondary.com

    The CNAME records are returned appropriately. www.secondary.com returns:

        Non-authoritative answer:
        www.secondary.com    canonical name = subdomain.primary.com

    And subdomain.primary.com gives you:

        subdomain.primary.com    canonical name = tertiary.com
        tertiary.com    internet address = x.x.x.x (Public IP)
        tertiary.com    AAAA IPv6 address = x::x (Public IPv6)

    And setting the type back to A gives you the appropriate response for tertiary.com:

        Non-authoritative answer:
        Name:    tertiary.com
        Address: x.x.x.x (Public IP)

    Requests made directly against subdomain.primary.com work correctly.


  • Capistrano asks for SSH password when deploying from local machine to server

    - by GhostRider
When I try to SSH to a server, I'm able to log in because my id_rsa.pub key is added to the authorized keys on the server. But when I try to deploy my code via Capistrano to the server from my local project folder, the server asks for a password. I'm unable to understand what the issue could be if I'm able to SSH but unable to deploy to the same server.

$ cap deploy:setup
"no seed data"
  triggering start callbacks for `deploy:setup'
* 13:42:18 == Currently executing `multistage:ensure'
*** Defaulting to `development'
* 13:42:18 == Currently executing `development'
* 13:42:18 == Currently executing `deploy:setup'
  triggering before callbacks for `deploy:setup'
* 13:42:18 == Currently executing `db:configure_mongoid'
* executing "mkdir -p /home/deploy/apps/development/flyingbird/shared/config"
servers: ["dev1.noob.com", "176.9.24.217"]
Password:

Cap script:

# gem install capistrano capistrano-ext capistrano_colors
begin; require 'capistrano_colors'; rescue LoadError; end
require "bundler/capistrano"

# RVM bootstrap
# $:.unshift(File.expand_path('./lib', ENV['rvm_path']))
require 'rvm/capistrano'
set :rvm_ruby_string, 'ruby-1.9.2-p290'
set :rvm_type, :user # or :system

# Application setup
default_run_options[:pty] = true   # allow pseudo-terminals
ssh_options[:forward_agent] = true # forward SSH keys (this will use your SSH key to get the code from the git repository)
ssh_options[:port] = 22

set :ip, "dev1.noob.com"
set :application, "flyingbird"
set :repository, "repo-path"
set :scm, :git
set :branch, fetch(:branch, "master")
set :deploy_via, :remote_cache
set :rails_env, "production"
set :use_sudo, false
set :scm_username, "user"
set :user, "user1"
set(:database_username) { application }
set(:production_database)  { application + "_production" }
set(:staging_database)     { application + "_staging" }
set(:development_database) { application + "_development" }

role :web, ip                   # your HTTP server, Apache/etc.
role :app, ip                   # this may be the same as your web server
role :db,  ip, :primary => true # this is where Rails migrations will run

# Use multi-staging
require "capistrano/ext/multistage"
set :stages, ["development", "staging", "production"]
set :default_stage, rails_env

before "deploy:setup", "db:configure_mongoid"

# Uncomment if you use any of these databases
after "deploy:update_code", "db:symlink_mongoid"
after "deploy:update_code", "uploads:configure_shared"
after "uploads:configure_shared", "uploads:symlink"
after 'deploy:update_code', 'bundler:symlink_bundled_gems'
after 'deploy:update_code', 'bundler:install'
after "deploy:update_code", "rvm:trust_rvmrc"

# Use this to update crontab if you use the 'whenever' gem
# after "deploy:symlink", "deploy:update_crontab"

if ARGV.include?("seed_data")
  after "deploy", "db:seed"
else
  p "no seed data"
end

# Custom tasks to handle resque and redis restart
before "deploy", "deploy:stop_workers"
after  "deploy", "deploy:restart_redis"
after  "deploy", "deploy:start_workers"
after  "deploy", "deploy:cleanup"

namespace :uploads do
  desc 'Create symlink for public uploads'
  task :symlink do
    run <<-CMD
      rm -rf #{release_path}/public/uploads &&
      mkdir -p #{release_path}/public &&
      ln -nfs #{shared_path}/public/uploads #{release_path}/public/uploads
    CMD
  end

  task :configure_shared do
    run "mkdir -p #{shared_path}/public"
    run "mkdir -p #{shared_path}/public/uploads"
  end
end

namespace :rvm do
  desc 'Trust rvmrc file'
  task :trust_rvmrc do
    run "rvm rvmrc trust #{current_release}"
  end
end

namespace :db do
  desc "Create mongoid.yml in shared path"
  task :configure_mongoid do
    db_config = <<-EOF
defaults: &defaults
  host: localhost

production:
  <<: *defaults
  database: #{production_database}

staging:
  <<: *defaults
  database: #{staging_database}
EOF
    run "mkdir -p #{shared_path}/config"
    put db_config, "#{shared_path}/config/mongoid.yml"
  end

  desc "Make symlink for mongoid.yml"
  task :symlink_mongoid do
    run "ln -nfs #{shared_path}/config/mongoid.yml #{release_path}/config/mongoid.yml"
  end

  desc "Fill the database with seed data"
  task :seed do
    run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake db:seed"
  end
end

namespace :bundler do
  desc "Symlink bundled gems on each release"
  task :symlink_bundled_gems, :roles => :app do
    run "mkdir -p #{shared_path}/bundled_gems"
    run "ln -nfs #{shared_path}/bundled_gems #{release_path}/vendor/bundle"
  end

  desc "Install bundled gems"
  task :install, :roles => :app do
    run "cd #{release_path} && bundle install --deployment"
  end
end

namespace :deploy do
  task :start, :roles => :app do
    run "touch #{current_path}/tmp/restart.txt"
  end

  desc "Restart the app"
  task :restart, :roles => :app do
    run "touch #{current_path}/tmp/restart.txt"
  end

  desc "Stop the workers"
  task :stop_workers do
    run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:stop_workers"
  end

  desc "Restart Redis server"
  task :restart_redis do
    run "/etc/init.d/redis-server restart"
  end

  desc "Start the workers"
  task :start_workers do
    run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:start_workers"
  end
end
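One plausible first check (an assumption on my part, not stated in the question): the script relies on agent forwarding (ssh_options[:forward_agent] = true) and connects as user1 (set :user, "user1"), so a password prompt usually means the local ssh-agent holds no key, or the key isn't in user1's authorized_keys on the deploy hosts — the manual SSH test may have been done as a different account. A minimal diagnostic sketch, using the host and user taken from the Cap script above:

    ssh-add -l                        # is any key loaded into the local agent for Capistrano to forward?
    ssh-add ~/.ssh/id_rsa             # load the key if the list above is empty
    ssh -v user1@dev1.noob.com exit   # verbose output shows whether publickey or password auth succeeded

If the verbose output shows publickey being offered and rejected, compare the account you SSH to by hand against the :user Capistrano is configured to use.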

    Read the article

  • Roaming user profile issues on Server 2008

    - by Alicia White
I thought I had cleared a user's profile from 2008, but it keeps coming back. So I was looking for the best way to clear a roaming profile in Server 2008, but I have been unable to find anything. I did see the post here: http://serverfault.com/questions/18724/user-profile-keeps-loading-temp-profile

I wanted to add a comment to that post, but it was closed as not being related to sysadmin. I think it IS related, because I dealt with precisely this same problem on our Windows 2008 terminal server. Here was the issue: we had a user who was getting an "unable to load your roaming profile" type of error at logon in Windows 2008. Looking at the server, we could see her temp profile in the profile list while she was logged on (listed as a "temporary" and not a "roaming" profile). While she was logged on, a folder called C:\Users\Temp.DOMAIN existed in the Users folder, but it disappeared as soon as she logged out. When this happened in 2003, we would clear the contents of the roaming profile folder and delete the temp folder in C:\Documents and Settings. The thing is, 2008 behaves a bit differently:

- Server 2008 creates a new roaming profile folder in the roaming profile share: \\SERVER\ProfileShare\UserName.V2
- The local profile disappears from the profile list in System Properties, so there is no profile to clear.
- The local profile folder, C:\Users\Temp.DOMAIN, doesn't stay on the server when the user logs out, so we can't delete it as we normally would when this sort of thing happens in Windows 2003.

Despite all of this, every time the user logs back on, the frickin' temp profile always comes back. One of my team-mates, who is much more experienced with 2008, said I should check the registry for the user's profile in this key (the users are listed by SID): HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList

I saw the user's SID listed there, but it ended in .BAK. I checked several other servers where she is having the same profile errors: in all cases, her SID ended with .BAK. For example (xxx replacing the long SID): HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-xxxxx-xxxx.bak

On the server she was logged on to, there were two keys for her profile in the registry: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-xxxxx-xxxx and HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-xxxxx-xxxx.bak

So, here is how I cleared up the issue:
1. I had the user log off.
2. I deleted the apparently bad profile keys ending in .BAK from the ProfileList key on each server where they appeared.
3. I made sure her roaming profile folder was empty.
4. I made sure all the temp profile folders were gone.
The user logged back on: no more profile errors!

Anyway, I wanted to comment on that closed question, but I didn't see any way to re-open it. I would also like to know: is this the best practice for clearing out a bad roaming profile on Server 2008? I'm having a hard time finding any instructions online on how best to do this, but the method I used seemed to work. I'd like to find some documentation to give to our Level 1 support staff so they will know how to clear user profiles on 2008, since this seems to be more involved than clearing user profiles on Server 2003. Thanks, Alicia
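If this becomes a routine Level 1 task, the registry step described above can be done from an elevated command prompt. A sketch only, using the same ProfileList path quoted in the post; the SID here is a placeholder for the affected user's actual SID:

    rem Find any profile keys flagged with .bak under ProfileList:
    reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s /f ".bak" /k

    rem Delete the stale key for the affected user (have the user log off first):
    reg delete "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-xxxxx-xxxx.bak" /f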

    Read the article

  • iproute2 not functioning ("RTNETLINK answers: Operation not supported")

    - by James Watt
The command and error message:

gtwy ~ # ip rule add from 64.251.23.186 table t1
RTNETLINK answers: Operation not supported

An older article on the same problem, which did not help me: http://forums.gentoo.org/viewtopic-t-696982-start-0-postdays-0-postorder-asc-highlight-.html

I have looked on Google at great length to try to find a solution. It seems that my kernel configuration is missing something? Any help or ideas would be appreciated. My system/kernel is: 2.6.36-gentoo-r5 #3 SMP Thu Jan 13 10:49:06 EST 2011 x86_64 Intel(R) Xeon(R) CPU X3220 @ 2.40GHz GenuineIntel GNU/Linux. I am posting this on SuperUser since this system is used as a workstation and the problem is unrelated to specific tasks that are handled exclusively by servers.

iproute2 is installed:

gtwy etc # emerge --search iproute2
Searching...
[ Results for search key : iproute2 ]
[ Applications found : 1 ]
*  sys-apps/iproute2
      Latest version available: 2.6.35-r2
      Latest version installed: 2.6.35-r2
      Size of files: 378 kB
      Homepage:    http://www.linuxfoundation.org/collaborate/workgroups/networking/iproute2
      Description: kernel routing and traffic control utilities
      License:     GPL-2

A small snippet of my kernel .config (view entire .config):

gtwy linux # cat .config | grep NETLINK
CONFIG_NETFILTER_NETLINK=y
CONFIG_NETFILTER_NETLINK_QUEUE=y
CONFIG_NETFILTER_NETLINK_LOG=y
CONFIG_NF_CT_NETLINK=y
CONFIG_SCSI_NETLINK=y
gtwy linux # cat .config | grep IP_ADVANCED_ROUTER
CONFIG_IP_ADVANCED_ROUTER=y
gtwy linux # cat .config | grep INGRESS
CONFIG_NET_SCH_INGRESS=y
gtwy linux # cat .config | grep NET_SCHED
CONFIG_NET_SCHED=y

emerge --info:

Portage 2.1.9.25 (default/linux/amd64/10.0, gcc-4.1.2, glibc-2.10.1-r1, 2.6.36-gentoo-r5 x86_64)
=================================================================
System uname: Linux-2.6.36-gentoo-r5-x86_64-Intel-R-_Xeon-R-_CPU_X3220_@_2.40GHz-with-gentoo-1.12.13
Timestamp of tree: Thu, 13 Jan 2011 01:15:01 +0000
app-shells/bash:      4.0_p37
dev-java/java-config: 1.3.7-r1, 2.1.10
dev-lang/python:      2.4.6, 2.5.4-r4, 2.6.5-r2, 3.1.2-r3
sys-apps/baselayout:  1.12.13
sys-apps/sandbox:     1.6-r2
sys-devel/autoconf:   2.13, 2.65
sys-devel/automake:   1.9.6-r2::<unknown repository>, 1.10.2, 1.11.1
sys-devel/binutils:   2.20.1-r1
sys-devel/gcc:        4.1.2, 4.3.4, 4.4.3-r2
sys-devel/gcc-config: 1.4.1
sys-devel/libtool:    2.2.6b
sys-devel/make:       3.81
virtual/os-headers:   2.6.30-r1 (sys-kernel/linux-headers)
ACCEPT_KEYWORDS="amd64"
ACCEPT_LICENSE="*"
CBUILD="x86_64-pc-linux-gnu"
CFLAGS="-march=nocona -O2 -pipe"
CHOST="x86_64-pc-linux-gnu"
CONFIG_PROTECT="/etc /var/bind"
CONFIG_PROTECT_MASK="/etc/ca-certificates.conf /etc/env.d /etc/env.d/java/ /etc/fonts/fonts.conf /etc/gconf /etc/php/apache2-php5/ext-active/ /etc/php/cgi-php5/ext-active/ /etc/php/cli-php5/ext-active/ /etc/revdep-rebuild /etc/sandbox.d /etc/terminfo"
CXXFLAGS="-march=nocona -O2 -pipe"
DISTDIR="/usr/portage/distfiles"
FEATURES="assume-digests binpkg-logs distlocks fixlafiles fixpackages news parallel-fetch protect-owned sandbox sfperms strict unknown-features-warn unmerge-logs unmerge-orphans userfetch"
GENTOO_MIRRORS="http://gentoo.chem.wisc.edu/gentoo"
LC_ALL="en_US.UTF-8"
LDFLAGS="-Wl,-O1 -Wl,--as-needed"
LINGUAS="en"
MAKEOPTS="-j5"
PKGDIR="/usr/portage/packages"
PORTAGE_CONFIGROOT="/"
PORTAGE_RSYNC_OPTS="--recursive --links --safe-links --perms --times --compress --force --whole-file --delete --stats --timeout=180 --exclude=/distfiles --exclude=/local --exclude=/packages"
PORTAGE_TMPDIR="/var/tmp"
PORTDIR="/usr/portage"
PORTDIR_OVERLAY="/usr/local/portage"
SYNC="rsync://rsync.namerica.gentoo.org/gentoo-portage"
USE="acl amd64 apache2 berkdb bzip2 cli cracklib crypt ctype cups curl cxx dri fortran gdbm gpm iconv jpeg jpeg2k libwww mmx modules mudflap multilib mysql ncurses nls nptl nptlonly openmp pam pcre perl php png pppd python readline session sockets sse sse2 ssl symlink sysfs tcpd threads unicode vhosts xml xorg xsl zlib"
ALSA_CARDS="ali5451 als4000 atiixp atiixp-modem bt87x ca0106 cmipci emu10k1x ens1370 ens1371 es1938 es1968 fm801 hda-intel intel8x0 intel8x0m maestro3 trident usb-audio via82xx via82xx-modem ymfpci"
ALSA_PCM_PLUGINS="adpcm alaw asym copy dmix dshare dsnoop empty extplug file hooks iec958 ioplug ladspa lfloat linear meter mmap_emul mulaw multi null plug rate route share shm softvol"
APACHE2_MODULES="actions alias auth_basic authn_alias authn_anon authn_dbm authn_default authn_file authz_dbm authz_default authz_groupfile authz_host authz_owner authz_user autoindex cache cgi cgid dav dav_fs dav_lock deflate dir disk_cache env expires ext_filter file_cache filter headers include info log_config logio mem_cache mime mime_magic negotiation rewrite setenvif speling status unique_id userdir usertrack vhost_alias"
COLLECTD_PLUGINS="df interface irq load memory rrdtool swap syslog"
ELIBC="glibc"
GPSD_PROTOCOLS="ashtech aivdm earthmate evermore fv18 garmin garmintxt gpsclock itrax mtk3301 nmea ntrip navcom oceanserver oldstyle oncore rtcm104v2 rtcm104v3 sirf superstar2 timing tsip tripmate tnt ubx"
INPUT_DEVICES="keyboard mouse evdev"
KERNEL="linux"
LCD_DEVICES="bayrad cfontz cfontz633 glk hd44780 lb216 lcdm001 mtxorb ncurses text"
LINGUAS="en"
PHP_TARGETS="php5-3"
RUBY_TARGETS="ruby18"
USERLAND="GNU"
VIDEO_CARDS="fbdev glint intel mach64 mga neomagic nouveau nv r128 radeon savage sis tdfx trident vesa via vmware dummy v4l"
XTABLES_ADDONS="quota2 psd pknock lscan length2 ipv4options ipset ipp2p iface geoip fuzzy condition tee tarpit sysrq steal rawnat logmark ipmark dhcpmac delude chaos account"
Unset: CPPFLAGS, CTARGET, EMERGE_DEFAULT_OPTS, FFLAGS, INSTALL_MASK, LANG, PORTAGE_BUNZIP2_COMMAND, PORTAGE_COMPRESS, PORTAGE_COMPRESS_FLAGS, PORTAGE_RSYNC_EXTRA_OPTS

    Read the article

  • Debian, 6rd tunnel, and connection troubles

    - by Chris B
Long story short, I am having issues with IPv6 using a 6rd tunnel with my ISP, Charter Business. They offer a 6rd tunnel that I think I have properly set up, but the server doesn't reply to every IPv6 request. When the server's network interfaces have been idle with no traffic for about 10 minutes, IPv6 stops accepting inbound connections. To re-allow it, I must go into the server and make it open an outbound IPv6 connection (normally a ping) to start it back up. What's weird, though, is that if I run iptraf when it's not working, it still shows an inbound IPv6 packet... the server is just not replying, and I can't figure out why. Also, if I try to access my server over IPv6 from a house about a mile away on the same ISP, it is never able to connect; it always times out, but again iptraf shows an inbound IPv6 packet. Again, it just does not reply. To test if my server is accessible through IPv6, I always have to use my VZW 4G phone (they use IPv6) or ipv6proxy dot net.

Here is all of the configuration information my ISP gives on their tunnel server:

6rd Prefix = 2602:100::/32
Border Relay Address = 68.114.165.1
6rd prefix length = 32
IPv4 mask length = 0

Here is my /etc/network/interfaces for IPv6 (x's used to block real addresses):

auto charterv6
iface charterv6 inet6 v4tunnel
    address 2602:100:189f:xxxx::1
    netmask 32
    ttl 64
    gateway ::68.114.165.1
    endpoint 68.114.165.1
    local 24.159.218.xxx
    up ip link set mtu 1280 dev charterv6

Here is my iptables config:

*filter
:INPUT DROP [0:0]
:fail2ban-ssh - [0:0]
:OUTPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:hold - [0:0]
-A INPUT -p tcp -m tcp --dport 22 -j fail2ban-ssh
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m multiport --dports 80,443,25,465,110,995,143,993,587,465,22 -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 5900:5910 -j ACCEPT
-A fail2ban-ssh -j RETURN
-A INPUT -p icmp -j ACCEPT
COMMIT

And last, here is my ip6tables firewall config:

*filter
:INPUT DROP [1653:339023]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [60141:13757903]
:hold - [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m multiport --dports 80,443,25,465,110,995,143,993,587,465,22 -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 5900:5910 -j ACCEPT
-A INPUT -p ipv6-icmp -j ACCEPT
COMMIT

So, summary:
1. iptraf always shows IPv6 traffic, so it's always making it to the server.
2. The server stops replying on IPv6 after no traffic for a while (10 minutes-ish) until an outbound connection is made; then the process repeats.
3. The server is NEVER accessible via the same ISP (yet iptraf still shows the IPv6 request).

Notes: When I try to access it from the same ISP from across town, even with iptables and ip6tables allowing ALL inbound traffic, this is what iptraf shows:

IPv6 (92 bytes) from 97.92.18.xxx to 24.159.218.xxx on eth0
ICMP dest unrch (port) (120 bytes) from 24.159.218.xxx to 97.92.18.xxx on eth1

It's strange, like it's trying to forward to the LAN? (eth1 is LAN, eth0 is WAN.) This happens even with the IPv6 address set in the hosts file for the server's domain name. With iptables set up normally with the above configurations, it only shows this:

IPv6 (100 bytes) from 97.92.18.xxx to 24.159.218.xxx on eth0

I'm REALLY stuck on this, and any help would be GREATLY appreciated.
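A hedged workaround sketch, not a fix from the original post: if the 6rd border relay really is aging out tunnel state after roughly 10 idle minutes, a cron job that generates periodic outbound IPv6 traffic should keep inbound reachability alive while the root cause is chased with the ISP. The target address here is just a well-known public IPv6 resolver; any stable IPv6 host would do:

    # /etc/cron.d/ipv6-keepalive  (hypothetical; every 5 minutes, one ping over the tunnel)
    */5 * * * * root ping6 -c 1 2001:4860:4860::8888 >/dev/null 2>&1

This only masks the symptom; the cross-town failure on the same ISP still points at something on Charter's relay, since iptraf confirms the inbound packets arrive.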

    Read the article

< Previous Page | 366 367 368 369 370 371 372 373 374 375 376 377  | Next Page >