Search Results

Search found 21263 results on 851 pages for 'website deployment'.


  • Restricted Flow Of Power

    - by user13827
    I'm sure all is fine, but I need some reassurance. Last month my company consolidated two of its websites into one new website: www.fdmgroup.com and www.fdmacademy.com became a newly designed www.fdmgroup.com. Because the FDM Academy grew as its own brand, we decided not to simply forward its domain to the fdmgroup website, but instead to mirror the new FDM Group website on it and use canonical tags pointing to the FDM Group domain (so the link juice will pass to the FDM Group domain pages). The website has been live for nearly a month and I don't believe any power has passed down through the FDM Group website to its deeper pages, even though 301 redirects from the legacy group and academy domains are in place. I am also seeing the same problem on the FDM Academy domain, but I expect that there, as every page has a canonical to the same page on the FDM Group domain. Is there anything that could be restricting the flow of power through the site, or am I just being impatient? Thanks in advance. Jon

    Read the article

  • What is the reason why some websites are hacked? [closed]

    - by adietan63
    I just want to know: what is the reason why some websites are hacked? Is it the website itself, or is it the web server? I'm so curious about this because I want to develop my website, and I just want to know what things I need to do to protect it. Assume that I will start it from scratch. Please give me advice or other technical pointers that will open my mind to developing my website with security features. Thank you.
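    Both layers matter in practice: sites fall to application-level flaws (injection, weak authentication, etc.) as well as server-level ones (unpatched software, weak credentials). As one concrete application-level illustration only, a parameterized query is the standard defence against SQL injection. A minimal Java sketch (the users table and the connection source are placeholders, not anything from the question):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class UserLookup {
        // The SQL text and the user input stay separate, so input such as
        // "' OR '1'='1" is bound as plain data and never parsed as SQL.
        public static ResultSet findUser(Connection conn, String username) throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, email FROM users WHERE username = ?");
            ps.setString(1, username); // bound parameter, not string concatenation
            return ps.executeQuery();
        }
    }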

    Read the article

  • How to find and fix an issue with Pound and HAProxy

    - by javano
    Pound sits in front of HAProxy (on the same box) to perform SSL off-load. Requests are passed to 127.0.0.1:80, where HAProxy then balances the requests across backend servers for a hosted ASP.NET web app. A user is getting HTTP error 500 (Internal Server Error) returned to their browser this morning, and I can see it is coming from Pound. They see no entry in their web app (IIS) server logs, so it's not hitting the back-end servers. I think the problem is possibly with HAProxy. Let's review the logs. Initially the user (1.2.3.4) hits Pound on the load balancer:

    Nov 12 10:02:24 lb1 pound: a-website.com 1.2.3.4 - - [12/Nov/2012:10:02:23 +0000] "POST /eventmanagement/EditEvent.aspx?eventOid=623fc423-2329-4cab-8be5-72a97709570d HTTP/1.1" 200 155721 "https://a-website.com/eventmanagement/EditEvent.aspx?eventOid=623fc423-2329-4cab-8be5-72a97709570d" "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.96 Safari/537.4"
    Nov 12 10:02:24 lb1 pound: a-website.com 1.2.3.4 - - [12/Nov/2012:10:02:24 +0000] "GET /Controls/ReferringOrganisationLogoImageHandler.ashx HTTP/1.1" 200 142 "https://a-website.com/eventmanagement/EditEvent.aspx?eventOid=623fc423-2329-4cab-8be5-72a97709570d" "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.96 Safari/537.4"
    Nov 12 10:02:24 lb1 pound: a-website.com 1.2.3.4 - - [12/Nov/2012:10:02:24 +0000] "GET /eventmanagement/WebCoreModule.ashx?__ac=1&__ac_wcmid=RAWCIL&__ac_lib=Radactive.WebControls.ILoad&__ac_key=RAWVCO_11&__ac_sid=fnoz2hmvirfivb2btbubbw45&__ac_cn=&__ac_cp=BVDXDWFLDWFMHDFJBOEGBDFLFOD5EEFD&__ac_fr=634883113445054092&__ac_ssid= HTTP/1.1" 200 11206 "https://a-website.com/eventmanagement/EditEvent.aspx?eventOid=623fc423-2329-4cab-8be5-72a97709570d" "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.96 Safari/537.4"
    Nov 12 10:02:24 lb1 pound: a-website.com 1.2.3.4 - - [12/Nov/2012:10:02:24 +0000] "GET /eventmanagement/WebCoreModule.ashx?__ac=1&__ac_wcmid=RAWCIL&__ac_lib=Radactive.WebControls.ILoad&__ac_key=RAWCCIL_11&__ac_sid=fnoz2hmvirfivb2btbubbw45&__ac_cn=&__ac_cp=BVDXDWFLDWFMHDFJBOEGBDFLFOD5EEFD&__ac_fr=634883113445054092 HTTP/1.1" 200 43496 "https://a-website.com/eventmanagement/EditEvent.aspx?eventOid=623fc423-2329-4cab-8be5-72a97709570d" "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.96 Safari/537.4"
    Nov 12 10:02:42 lb1 pound: (7f819fff8700) e500 for 1.2.3.4 response error read from 127.0.0.1:80/POST /eventmanagement/EditEvent.aspx?eventOid=623fc423-2329-4cab-8be5-72a97709570d HTTP/1.1: Connection timed out (15.121 secs)

    Above we can see the request coming in from the user at IP address 1.2.3.4; eventually Pound returns error 500 with the message "Connection timed out (15.121 secs)". Running HAProxy in debug mode, we can see the request come in:

    user@box:/var/log$ sudo /etc/init.d/haproxy restart
    Restarting haproxy: haproxy[WARNING] 316/100042 (19218) : <debug> mode incompatible with <quiet> and <daemon>. Keeping <debug> only.
    Available polling systems :
    sepoll : pref=400, test result OK
    epoll : pref=300, test result OK
    poll : pref=200, test result OK
    select : pref=150, test result OK
    Total: 4 (4 usable), will use sepoll.
    Using sepoll() as the polling mechanism.
    .......
    00000008:iis-servers.srvrep[0008:0009]: HTTP/1.1 200 OK
    00000008:iis-servers.srvhdr[0008:0009]: Cache-Control: private
    00000008:iis-servers.srvhdr[0008:0009]: Pragma: no-cache
    00000008:iis-servers.srvhdr[0008:0009]: Content-Length: 22211
    00000008:iis-servers.srvhdr[0008:0009]: Content-Type: text/plain; charset=utf-8
    00000008:iis-servers.srvhdr[0008:0009]: Server: Microsoft-IIS/7.0
    00000008:iis-servers.srvhdr[0008:0009]: X-AspNet-Version: 2.0.50727
    00000008:iis-servers.srvhdr[0008:0009]: X-Powered-By: ASP.NET
    00000008:iis-servers.srvhdr[0008:0009]: Date: Mon, 12 Nov 2012 10:01:25 GMT
    00000009:iis-servers.accept(0004)=000a from [127.0.0.1:53556]
    00000009:iis-servers.clireq[000a:ffff]: GET /Logoff.aspx HTTP/1.1
    00000009:iis-servers.clihdr[000a:ffff]: Host: a-website.com
    00000009:iis-servers.clihdr[000a:ffff]: Connection: keep-alive
    00000009:iis-servers.clihdr[000a:ffff]: User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.96 Safari/537.4
    00000009:iis-servers.clihdr[000a:ffff]: Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    00000009:iis-servers.clihdr[000a:ffff]: Referer: https://a-website.com/eventmanagement/eventmanagement.aspx
    00000009:iis-servers.clihdr[000a:ffff]: Accept-Encoding: gzip,deflate,sdch
    00000009:iis-servers.clihdr[000a:ffff]: Accept-Language: en-GB,en;q=0.8,it;q=0.6
    00000009:iis-servers.clihdr[000a:ffff]: Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
    00000009:iis-servers.clihdr[000a:ffff]: Cookie: ASP.NET_SessionId=fnoz2hmvirfivb2btbubbw45; apps=apps2; AuthHint=true; __utma=190546871.552451749.1340295610.1352454675.1352711624.159; __utmb=190546871.2.10.1352711624; __utmc=190546871; __utmz=190546871.1349966519.143.3.utmcsr=en.wikipedia.org|utmccn=(referral)|utmcmd=referral|utmcct=/wiki/Single_transferable_vote; Sequence=162; SessionId=80e603f9-7e73-474b-8b7c-e198b2f11218; SecureSessionId=00000000-0000-0000-0000-000000000000; __utma=58336506.1016936529.1332752550.1352454680.1352711626.456; __utmb=58336506.28.10.1352711626; __utmc=58336506; __utmz=58336506.1352711626.456.155.utmcsr=a-website.com|utmccn=(referral)|utmcmd=referral|utmcct=/
    00000009:iis-servers.clihdr[000a:ffff]: X-SSL-cipher: RC4-SHA SSLv3 Kx=RSA Au=RSA Enc=RC4(128) Mac=SHA1
    00000009:iis-servers.clihdr[000a:ffff]: X-Forwarded-For: 1.2.3.4
    00000008:iis-servers.srvcls[0008:0009]
    00000008:iis-servers.clicls[0008:0009]
    00000008:iis-servers.closed[0008:0009]
    .......
    0000000e:iis-servers.srvrep[0008:0009]: HTTP/1.1 200 OK
    0000000e:iis-servers.srvhdr[0008:0009]: Cache-Control: no-cache
    0000000e:iis-servers.srvhdr[0008:0009]: Pragma: no-cache
    0000000e:iis-servers.srvhdr[0008:0009]: Content-Length: 12805
    0000000e:iis-servers.srvhdr[0008:0009]: Content-Type: text/html; charset=utf-8
    0000000e:iis-servers.srvhdr[0008:0009]: Server: Microsoft-IIS/7.0
    0000000e:iis-servers.srvhdr[0008:0009]: X-AspNet-Version: 2.0.50727
    0000000e:iis-servers.srvhdr[0008:0009]: X-Powered-By: ASP.NET
    0000000e:iis-servers.srvhdr[0008:0009]: Date: Mon, 12 Nov 2012 10:02:22 GMT
    0000000f:iis-servers.accept(0004)=000c from [127.0.0.1:53609]
    0000000f:iis-servers.clireq[000c:ffff]: GET /Controls/ReferringOrganisationLogoImageHandler.ashx HTTP/1.1
    0000000f:iis-servers.clihdr[000c:ffff]: Host: a-website.com
    0000000f:iis-servers.clihdr[000c:ffff]: Connection: keep-alive
    0000000f:iis-servers.clihdr[000c:ffff]: User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.96 Safari/537.4
    0000000f:iis-servers.clihdr[000c:ffff]: Accept: */*
    0000000f:iis-servers.clihdr[000c:ffff]: Referer: https://a-website.com/eventmanagement/EditEvent.aspx?eventOid=623fc423-2329-4cab-8be5-72a97709570d
    0000000f:iis-servers.clihdr[000c:ffff]: Accept-Encoding: gzip,deflate,sdch
    0000000f:iis-servers.clihdr[000c:ffff]: Accept-Language: en-GB,en;q=0.8,it;q=0.6
    0000000f:iis-servers.clihdr[000c:ffff]: Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
    0000000f:iis-servers.clihdr[000c:ffff]: Cookie: ASP.NET_SessionId=fnoz2hmvirfivb2btbubbw45; apps=apps2; __utma=190546871.552451749.1340295610.1352454675.1352711624.159; __utmb=190546871.2.10.1352711624; __utmc=190546871; __utmz=190546871.1349966519.143.3.utmcsr=en.wikipedia.org|utmccn=(referral)|utmcmd=referral|utmcct=/wiki/Single_transferable_vote; AuthHint=true; __utma=58336506.1016936529.1332752550.1352454680.1352711626.456; __utmb=58336506.33.10.1352711626; __utmc=58336506; __utmz=58336506.1352711626.456.155.utmcsr=a-website.com|utmccn=(referral)|utmcmd=referral|utmcct=/; SessionId=69cd415c-2f4e-4ace-b8f7-926d054f87c2; SecureSessionId=00000000-0000-0000-0000-000000000000; Sequence=170
    0000000f:iis-servers.clihdr[000c:ffff]: X-SSL-cipher: RC4-SHA SSLv3 Kx=RSA Au=RSA Enc=RC4(128) Mac=SHA1
    0000000f:iis-servers.clihdr[000c:ffff]: X-Forwarded-For: 1.2.3.4
    0000000f:iis-servers.srvrep[000c:000d]: HTTP/1.1 200 OK
    0000000f:iis-servers.srvhdr[000c:000d]: Cache-Control: private
    0000000f:iis-servers.srvhdr[000c:000d]: Content-Length: 142
    0000000f:iis-servers.srvhdr[000c:000d]: Content-Type: image/png
    0000000f:iis-servers.srvhdr[000c:000d]: Server: Microsoft-IIS/7.0
    0000000f:iis-servers.srvhdr[000c:000d]: X-AspNet-Version: 2.0.50727
    0000000f:iis-servers.srvhdr[000c:000d]: Set-Cookie: SessionId=69cd415c-2f4e-4ace-b8f7-926d054f87c2; path=/
    0000000f:iis-servers.srvhdr[000c:000d]: Set-Cookie: SecureSessionId=00000000-0000-0000-0000-000000000000; path=/; secure
    0000000f:iis-servers.srvhdr[000c:000d]: X-Powered-By: ASP.NET
    0000000f:iis-servers.srvhdr[000c:000d]: Date: Mon, 12 Nov 2012 10:02:25 GMT
    0000000e:iis-servers.srvcls[0008:0009]
    0000000e:iis-servers.clicls[0008:0009]
    0000000e:iis-servers.closed[0008:0009]
    0000000f:iis-servers.srvcls[000c:000d]
    0000000f:iis-servers.clicls[000c:000d]
    0000000f:iis-servers.closed[000c:000d]
    00000009:iis-servers.srvcls[000a:000b]
    00000009:iis-servers.clicls[000a:000b]
    00000009:iis-servers.closed[000a:000b]

    Where in the chain is the issue here?

    Read the article

  • Connection error in Java website; tnsping shows that the service is running

    - by user1439090
    I have a Java website application running on Windows 7 which uses an Oracle database for its functionality. The database has the default SID name orcl. When I use tnsping, I can see that the orcl service is active. Also, most of the application works fine except for one part. I was wondering if someone could help me with the following error:

    1. cause: message:null,java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.olat.course.statistic.StatisticAutoCreator.createController(StatisticAutoCreator.java:73)
    at org.olat.course.statistic.StatisticActionExtension.createController(StatisticActionExtension.java:40)
    at org.olat.course.statistic.StatisticMainController.createController(StatisticMainController.java:80)
    at org.olat.core.gui.control.generic.layout.GenericMainController.getContentCtr(GenericMainController.java:258)
    at org.olat.core.gui.control.generic.layout.GenericMainController.event(GenericMainController.java:221)
    at org.olat.core.gui.control.DefaultController.dispatchEvent(DefaultController.java:196)

    2. cause: message:Could not get JDBC Connection; nested exception is java.sql.SQLException: The Network Adapter could not establish the connection,org.springframework.jdbc.CannotGetJdbcConnectionException
    at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:80)
    at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:381)
    at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:455)
    at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:463)
    at org.springframework.jdbc.core.JdbcTemplate.queryForObject(JdbcTemplate.java:471)
    at org.springframework.jdbc.core.JdbcTemplate.queryForObject(JdbcTemplate.java:476)
    at org.springframework.jdbc.core.JdbcTemplate.queryForLong(JdbcTemplate.java:480)
    at org.olat.course.statistic.SimpleStatisticInfoHelper.doGetFirstLoggingTableCreationDate(SimpleStatisticInfoHelper.java:63)
    at org.olat.course.statistic.SimpleStatisticInfoHelper.getFirstLoggingTableCreationDate(SimpleStatisticInfoHelper.java:81)
    at org.olat.course.statistic.StatisticDisplayController.getStatsSinceStr(StatisticDisplayController.java:517)

    3. cause: message:The Network Adapter could not establish the connection,java.sql.SQLException
    at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
    at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:199)
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:480)
    at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:413)
    at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:508)
    at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:203)
    at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:33)
    at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:510)
    at java.sql.DriverManager.getConnection(DriverManager.java:582)

    4. cause: message:The Network Adapter could not establish the connection,oracle.net.ns.NetException
    at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:328)
    at oracle.net.resolver.AddrResolution.resolveAndExecute(AddrResolution.java:421)
    at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:634)
    at oracle.net.ns.NSProtocol.connect(NSProtocol.java:208)
    at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:966)
    at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:292)
    at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:508)
    at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:203)
    at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:33)
    at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:510)

    5. cause: message:Connection timed out: connect,java.net.ConnectException
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
    at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
    at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
    at java.net.Socket.connect(Socket.java:525)
    at java.net.Socket.connect(Socket.java:475)
    at java.net.Socket.<init>(Socket.java:372)
    at java.net.Socket.<init>(Socket.java:186)
    at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:127)
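    The bottom of this chain is a plain TCP connect timeout, so it can be worth testing the exact connection details the application uses from a standalone class, outside the web app. A minimal sketch, assuming a thin-driver URL against the default orcl SID (host, port and credentials below are placeholders, not values from the question):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class OracleConnectionTest {
        public static void main(String[] args) {
            // Thin-driver URL form implied by the T4CConnection frames above;
            // replace host, port, SID and credentials with the app's real values.
            String url = "jdbc:oracle:thin:@localhost:1521:orcl";
            try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                System.out.println("Connected: " + conn.getMetaData().getDatabaseProductVersion());
            } catch (SQLException e) {
                // "The Network Adapter could not establish the connection" here points
                // at host/port/listener reachability rather than the SQL layer.
                e.printStackTrace();
            }
        }
    }

    If this standalone test also times out, the problem is network-level (wrong host or port in the app's JDBC URL, or a firewall); note that tnsping only proves the listener answers at the address in tnsnames.ora, which is not necessarily the address the failing JDBC URL points at.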

    Read the article

  • How do you access a secure website within a SharePoint webpart?

    - by Bill
    How do you access a secure website from within a SharePoint webpart? The following code works fine as a console application, but if you run it in a webpart you get an access violation:

    WebRequest request = WebRequest.Create("https://somesecuresite.com");
    WebResponse firstResponse = null;
    try
    {
        firstResponse = request.GetResponse();
    }
    catch (WebException ex)
    {
        writer.WriteLine("Error: " + ex.ToString());
        return;
    }

    If you access a non-secure site, it also works. Any ideas?

    Error: System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive. ---> System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
    at System.Net.UnsafeNclNativeMethods.NativePKI.CertVerifyCertificateChainPolicy(IntPtr policy, SafeFreeCertChain chainContext, ChainPolicyParameter& cpp, ChainPolicyStatus& ps)
    at System.Net.PolicyWrapper.VerifyChainPolicy(SafeFreeCertChain chainContext, ChainPolicyParameter& cpp)
    at System.Net.Security.SecureChannel.VerifyRemoteCertificate(RemoteCertValidationCallback remoteCertValidationCallback)
    at System.Net.Security.SslState.CompleteHandshake()
    at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult)
    at System.Net.TlsStream.CallProcessAuthentication(Object state)
    at System.Threading.ExecutionContext.runTryCode(Object userData)
    at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
    at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
    at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
    at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result)
    at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size)
    at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size)
    at System.Net.ConnectStream.WriteHeaders(Boolean async)
    --- End of inner exception stack trace ---
    at System.Net.HttpWebRequest.GetResponse()

    Read the article

  • A Knight's Tale

    - by Phil Factor
    There are so many lessons to be learned from the story of Knight Capital losing nearly half a billion dollars as a result of a deployment gone wrong. The Knight Capital Group (KCG N) was an American global financial services firm engaging in market making, electronic execution, and institutional sales and trading. According to the recent order (File No. 3.15570) against Knight Capital by the U.S. Securities and Exchange Commission, Knight had, for many years, used some software which broke up incoming “parent” orders into smaller “child” orders that were then transmitted to various exchanges or trading venues for execution. A tracking ‘cumulative quantity’ function counted the number of ‘child’ orders and stopped the process once the total of child orders matched the ‘parent’, at which point the parent order had been completed. Back in the mists of time, some code had been added to it which was executed if a particular flag was set. It was called ‘Power Peg’ and seems to have had a similar design and purpose, but, one guesses, would have shared the same tracking function. This code had been abandoned in 2003, but never deleted. In 2005, the tracking function was moved to an earlier point in the main process. It would seem from the account that, from that point on, had that flag ever been set, the old ‘Power Peg’ would have been executed like Godzilla bursting from the ice, making child orders without limit and without any tracking function. It wasn’t, presumably because the software that set the flag was removed. In 2012, nearly a decade after ‘Power Peg’ was abandoned, Knight prepared a new module for their software to cope with the imminent Retail Liquidity Program (RLP) for the New York Stock Exchange. By this time, the flag had remained unused, and someone made the fateful decision to reuse it and to replace the old ‘Power Peg’ code with the new RLP code. Had the two actions been done together in a single automated deployment, and the new deployment tested, all would have been well. It wasn’t. To quote… “Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight’s technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review.” (para 15) “On August 1, Knight received orders from broker-dealers whose customers were eligible to participate in the RLP. The seven servers that received the new code processed these orders correctly. However, orders sent with the repurposed flag to the eighth server triggered the defective Power Peg code still present on that server. As a result, this server began sending child orders to certain trading centers for execution. Because the cumulative quantity function had been moved, this server continuously sent child orders, in rapid sequence, for each incoming parent order without regard to the number of share executions Knight had already received from trading centers.
    Although one part of Knight’s order handling system recognized that the parent orders had been filled, this information was not communicated to SMARS.” (para 16) SMARS routed millions of orders into the market over a 45-minute period, and obtained over 4 million executions in 154 stocks for more than 397 million shares. By the time that Knight stopped sending the orders, Knight had assumed a net long position in 80 stocks of approximately $3.5 billion and a net short position in 74 stocks of approximately $3.15 billion. Knight’s shares dropped more than 20% after traders saw extreme volume spikes in a number of stocks, including preferred shares of Wells Fargo (JWF) and semiconductor company Spansion (CODE). Both stocks, which see roughly 100,000 trades per day, had changed hands more than 4 million times by late morning. Ultimately, Knight lost over $460 million from this wild 45 minutes of trading. Obviously, I’m interested in all this because, at one time, I used to write trading systems for the City of London. Obviously, the US SEC is in a far better position than any of us to work out the failings of Knight’s IT department, and the report makes for painful reading. I can’t help observing, though, that even with the breathtaking mistakes all along the way, a robust automated deployment process that was ‘all-or-nothing’, and tested from soup to nuts, would have prevented the disaster. The report reads like a Greek tragedy. All the way along one wants to shout ‘No! Not that way!’ and ‘Aargh! Don’t do it!’. As the tragedy unfolds, the audience weeps for the players, trapped by a cruel fate. All application development and deployment requires defense in depth. All IT goes wrong occasionally, but if there is a culture of defensive programming throughout, the consequences are usually containable. For financial systems, these defenses are required by statute, and ignored only by the foolish. Knight’s mistakes weren’t made by just one hapless sysadmin, but were progressive errors by an IT culture spanning at least ten years. One can spell these out, but I think they’re obvious. One can only hope that the industry studies what happened in detail, learns from the mistakes, and draws the right conclusions.

    Read the article

  • How to organize pictures on a website using CSS? [on hold]

    - by user3624023
    Here is my website without any CSS: http://www.wmcicompsci.ca/cs20/students/theglowcloud/Bare%20bones%20website/classics_bare.html I am new to CSS and I would like to organize these pictures in this fashion: http://css-tricks.com/examples/SlideinCaptions/ I would just like this layout for the pictures, but I do not need the sliding of the captions (although I would like it, it does not work in my browser). I would like the captions to be like titles on top of the pictures. Here is my current HTML code:

    <!DOCTYPE html>
    <html>
    <head>
        <title>My favourite Fantasy books</title>
        <meta charset="utf-8">
        <link rel="stylesheet" href="css.css">
    </head>
    <body>
        <nav id="main_nav">
            <ul>
                <li><a href="homepage_css.html">Homepage</a></li>
                <li><a href="science_fiction_css.html">Science Fiction</a></li>
                <li><a href="classics_css.html">Classics</a></li>
                <li><a href="fantasy_css.html">Fantasy</a></li>
            </ul>
        </nav>
        <h1>Fantasy Genre</h1>
        <p>Here are my favourites:</p>
        <ul>
            <li>Goblet of Fire by J.K Rowling (4th book in the Harry Potter Series)</li>
            <li><img class="displayed" src="pics/fantasy/goblet_of_fire.jpg" width="200" alt="Goblet of Fire book cover"></li>
            <li>Graceling by Kristan Cashore</li>
            <li><img src="pics/fantasy/graceling.jpg" width="200" alt="Graceling book cover"></li>
            <li>Serpent's Shadow by Rick Riordan (3rd book in the Kane Chronicles)</li>
            <li><img src="pics/fantasy/serpents_shadow.jpg" width="200" alt="Serpent's Shadow book cover"></li>
            <li>The Hobbit by J.R.R. Tolkein</li>
            <li><img src="pics/fantasy/the_hobbit.jpg" width="200" alt="The Hobbit book cover"></li>
            <li>The False Prince by Jennifer Neilson (1st book in the Ascendance Triology)</li>
            <li><img src="pics/fantasy/the_false_prince.jpg" width="200" alt="The False Prince book cover"></li>
        </ul>

    Read the article

  • Photoshop Retro Vintage Design Tutorials

    - by Aditi
    Gone are the days when designers only wanted to create glossy, gradient-rich Web 2.0 website designs. Nowadays designers are coming up with rugged, retro and vintage themes for their website designs. Colorful or subtle, with that worn-out look, such a website can seem like a masterpiece. It is not hard to pick up the Photoshop techniques needed to master the art of making themes that are retro and vibrant. We have compiled a list of tutorials you may like to learn from; the rest is in your hands and your creativity.

    Photochrom Vintage Postcard Effect - Turn your high-definition photos into vintage postcards and use them in your website concepts. Learn More

    Add a Retro Look to Your Images - Give that 1970s retro look to your images and web concepts. It's a very easy process using patterns, brushes, colors or gradients, layer modes and variable opacity. Learn More

    Brushed Metal Effect, Just Like World War Airplane Texture - This is a one-of-a-kind Photoshop tutorial that teaches how to use noise and blur filters to create a brushed metal effect, unlike other gradient-based effects. It also covers a few layer styles to create an airplane graphic. Learn More

    Transform a New Image into an Illustration, Retro Poster Style - With the help of this tutorial you can create brilliant poster-style or illustrative images and concepts for your new website. This tutorial is a superb example of image enhancement and creative use of blending options in Photoshop. Learn More

    Retro Neon Style Text Tutorial - Just like the old days: the rainbow neon curvy text format that can be seen on many posters can now be made for use on your website. This tutorial gives you an easy step-by-step procedure. Learn More

    Retro Dotted Photo Tutorial - Find out how to make a dotted poster from your image, for a pure retro feel. Learn More

    Read the article

  • Securing server-to-server HTTP POST

    - by ad-inf
    The website is developed with JSF and Servlets, running on an Apache web server. On my website, I accept data submissions from a few restricted websites using the HTTP POST method. We exchange a secret key to ensure that the correct source is sending data. But is there any way to ensure that the data is submitted from a specific domain / IP address only? At the application level I can check request.getHeader("Referer"), but a proxy or firewall might hide the referer. Can this be configured at the firewall or web server level to authenticate server-to-server communication? E.g., say my website is a payment gateway website, integrated with www.abc.com. I want only abc.com to submit data. So a user using abc.com should be able to submit data to my website only through abc.com, and not through any other website.
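    Since the stack is servlet-based, one application-level complement to firewall rules is a filter that checks the TCP peer address rather than any spoofable header. A minimal sketch, under the assumption that the partner posts from a known, stable egress IP (the address below is a placeholder); note that if a proxy or load balancer terminates the connection in front of the app, getRemoteAddr() would return the proxy's address instead:

    import java.io.IOException;
    import java.util.Collections;
    import java.util.Set;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;

    // Rejects submissions whose TCP peer is not a known partner server.
    public class PartnerIpFilter implements Filter {
        // Hypothetical partner address; in practice load this from configuration.
        private static final Set<String> ALLOWED = Collections.singleton("203.0.113.10");

        public void init(FilterConfig config) { }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            // getRemoteAddr() is the actual TCP peer; unlike the Referer header,
            // it cannot be forged without controlling the network path.
            if (ALLOWED.contains(req.getRemoteAddr())) {
                chain.doFilter(req, res);
            } else {
                ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
            }
        }

        public void destroy() { }
    }

    This authenticates the sending server, not the end user: a user "submitting through abc.com" means abc.com's server must relay the data itself, which matches the payment-gateway scenario above.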

    Read the article

  • Recent improvements in Console Performance

    - by loren.konkus
    Recently, the WebLogic Server development and support organizations have worked with a number of customers to quantify and improve the performance of the Administration Console in large, distributed configurations where there is significant latency in the communications between the administration server and managed servers. These improvements fall into two categories: constraining the amount of time that the Console stalls waiting for communication, and reducing and streamlining the amount of data required for an update. A few releases ago, we added support for a configurable domain-wide mbean "Invocation Timeout" value on the Console's Configuration: General, Advanced section for a domain. The default value for this setting is 0, which means wait indefinitely, and was chosen for compatibility with the behavior of previous releases. This configuration setting applies to all mbean communications between the admin server and managed servers, and is the first line of defense against being blocked by a stalled or completely overloaded managed server. Each site should choose an appropriate timeout value for their environment and network latency. In the next release of WebLogic Server, we've added an additional console preference, "Management Operation Timeout", to the Console's shared preference page. This setting further constrains how long certain console pages will wait for slowly responding servers before returning partial results. While not all Console pages support this yet, key pages such as the Servers Configuration and Control table pages and the Deployments Control pages have been updated to support it. For example, if a user requests a Servers table page and a Management Operation Timeout occurs, the table is displayed with both local configuration and remote runtime information from the responding managed servers, and with only local configuration information for servers that did not yet respond. This means that a troublesome managed server does not impede your ability to manage your domain using the Console. To support these changes, these Console pages have been rewritten to use the Work Management feature of WebLogic Server to interact with each server or deployment concurrently, which further improves the responsiveness of these pages. The basic algorithm for these pages (sketched in code at the end of this post) is:

    1. For each configuration mbean (i.e., Servers), populate rows with configuration attributes from the fast, local mbean server.
    2. Find a WorkManager.
    3. For each server, create a Work instance to obtain runtime mbean attributes for the server, and schedule the Work instance in the WorkManager.
    4. Call WorkManager.waitForAll to wait for the WorkItems to finish, constrained by the Management Operation Timeout.
    5. For each WorkItem, if the runtime information obtained was not complete, add a message indicating which server has incomplete data.
    6. Display the collected data in the table.

    In addition to these changes to constrain how long the Console waits for communication, a number of other changes have been made to reduce the amount and scope of managed server interactions for key pages. For example, in previous releases the Deployments Control table looked at the status of a deployment on every managed server, even those servers that the deployment was not currently targeted on. (This was done to handle an edge case where a deployment's target configuration was changed while it remained running on previously targeted servers.)
    We decided supporting that edge case did not warrant the performance impact for all, and instead we look only at the status of a deployment on the servers it is targeted to. Comprehensive status continues to be available if a user clicks on the 'status' field for a deployment. Finally, changes have been made to the System Status portlet to reduce its impact on Console page display times. Obtaining health information for this display requires several mbean interactions with managed servers. In previous releases, this mbean interaction occurred with every display, and any delay or impediment in these interactions was reflected in the display time for every page. To reduce this impact, we've made several changes in this portlet:

    - Using Work Management to obtain health concurrently.
    - Applying the operation timeout configuration to constrain how long we will wait.
    - Caching health information to reduce the cost during rapid navigation from page to page, and only obtaining new health information if the previous information is over 30 seconds old.
    - Eliminating health collection if this portlet is minimized.

    Together, these Console changes have resulted in significant performance improvements for the customers with large configurations and high latency that we have worked with during their development, and some lesser performance improvements for those with small configurations and very fast networks. These changes will be included in the 11g Rel 1 Patch Set 2 (10.3.3.0) release of WebLogic Server.
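    The concurrent-collection algorithm above maps naturally onto the CommonJ Work Management API that WebLogic exposes to applications. The following is only a rough sketch of that shape, not the Console's actual code: the JNDI name is the standard default work manager binding, and the probe body is a placeholder.

    import java.util.ArrayList;
    import java.util.List;
    import javax.naming.InitialContext;
    import commonj.work.Work;
    import commonj.work.WorkItem;
    import commonj.work.WorkManager;

    public class RuntimeAttributeCollector {

        // One probe per server: fetches that server's runtime mbean attributes.
        static class ServerProbe implements Work {
            final String serverName;
            volatile boolean complete;
            ServerProbe(String serverName) { this.serverName = serverName; }
            public void run() {
                // ... query the server's runtime mbeans here (placeholder) ...
                complete = true;
            }
            public boolean isDaemon() { return false; }
            public void release() { }
        }

        // Returns the names of servers whose data was still incomplete at timeout.
        public List<String> collect(List<String> servers, long timeoutMillis) throws Exception {
            WorkManager wm = (WorkManager) new InitialContext().lookup("java:comp/env/wm/default");
            List<WorkItem> items = new ArrayList<WorkItem>();
            List<ServerProbe> probes = new ArrayList<ServerProbe>();
            for (String name : servers) {
                ServerProbe probe = new ServerProbe(name);
                probes.add(probe);
                items.add(wm.schedule(probe)); // probes run concurrently
            }
            // Wait for all probes, but never longer than the operation timeout.
            wm.waitForAll(items, timeoutMillis);
            List<String> incomplete = new ArrayList<String>();
            for (ServerProbe probe : probes) {
                if (!probe.complete) incomplete.add(probe.serverName);
            }
            return incomplete;
        }
    }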

    Read the article

  • How to change the Nginx default folder?

    - by Ido Bukin
    I set up a server with Nginx and I set my public_html to /home/user/public_html/website.com/public, but it always redirects to /usr/local/nginx/html/. How can I change this?

    nginx.conf:

    user www-data www-data;
    worker_processes 4;
    events {
        worker_connections 1024;
    }
    http {
        include mime.types;
        default_type application/octet-stream;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay off;
        keepalive_timeout 5;
        gzip on;
        gzip_comp_level 2;
        gzip_proxied any;
        gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        include /usr/local/nginx/sites-enabled/*;
    }

    /usr/local/nginx/sites-enabled/default:

    server {
        listen 80;
        server_name localhost;
        location / {
            root html;
            index index.php index.html index.htm;
        }
        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }

    /usr/local/nginx/sites-available/website.com:

    server {
        listen 80;
        server_name website.com;
        rewrite ^/(.*) http://www.website.com/$1 permanent;
    }
    server {
        listen 80;
        server_name www.website.com;
        access_log /home/user/public_html/website.com/log/access.log;
        error_log /home/user/public_html/website.com/log/error.log;
        location / {
            root /home/user/public_html/website.com/public/;
            index index.php index.html;
        }
        # pass the PHP scripts to FastCGI server listening on
        # 127.0.0.1:9000
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include /usr/local/nginx/conf/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME /home/user/public_html/website.com/public/$fastcgi_script_name;
        }
    }

    The error message I get is: Fatal error: require_once() [function.require]: Failed opening required '/usr/local/nginx/html/202-config/functions.php'. The server tries to find the file in the Nginx folder and not in my public_html.

    Read the article

  • How to get local business nationwide exposure? [closed]

    - by guisasso
    Here's the situation: this company offers local home services (construction, etc.), but also fabricates many custom items that can be shipped nationally, and even internationally. Since I started working on this website, it has ranked pretty well on Alexa both globally and locally, and I have made many SEO improvements that doubled the visits to the website in 6 months. The website is listed in many different directories (DMOZ, etc.), maps (Google Maps, etc.), business listing sites (Yelp, etc.), trade-specific websites (Angie's List, Houzz, etc.), state-specific business listings and so on; there are many links to pictures displayed on the website and links to the website itself, and I have a Google Analytics and Webmaster Tools account, with sitemaps, newsletters, a Facebook page... the list goes on and on. All of which has been working pretty well locally. We have had some success with doing business in other states and even other countries, but it is still a pretty small percentage of the market. I also advertise on Google AdWords locally, and since that would be the obvious answer, my question is: without paid advertisement, how can I improve the visibility of this local business website nationally to attract customers in all US states?

    Read the article

  • Users can benefit from Session Tracking

    I used to work for a large dental plan marketing company a few years ago; they had a large customer-driven website that sold dental plans to consumers. Their website started tracking users as soon as they hit the web servers, and then logged everything it could about each user. There are a lot of benefits to session tracking for both the user and the website. Users benefit from session tracking because a website can retain pertinent information for them so that they do not have to re-enter the same information repeatedly. In addition, websites can hold specific items in a cart for each user so that they can pay for all of their items at once when they are ready to complete their purchases. Websites also benefit from session tracking because they can determine where a specific user came from and which advertising partner generated a sale. This information is very useful when deciding where to spend an advertising budget. There is only one real disadvantage when it comes to session tracking: users cannot really control what is actually tracked by a website. Yes, they can disable cookies, and this will help, but that means that no tracking can be done at all. Most sites require users to have cookies enabled in order to make purchases or log in to their accounts.
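    As a concrete illustration of the cart behavior described above, here is a minimal servlet sketch using the standard HttpSession API; the "planId" parameter and the cart.jsp redirect are placeholders, not details from the original site:

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    public class AddToCartServlet extends HttpServlet {
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // getSession(true) creates the session (and its cookie) on first use,
            // which is why disabling cookies usually breaks carts and logins.
            HttpSession session = req.getSession(true);
            @SuppressWarnings("unchecked")
            List<String> cart = (List<String>) session.getAttribute("cart");
            if (cart == null) {
                cart = new ArrayList<String>();
                session.setAttribute("cart", cart);
            }
            cart.add(req.getParameter("planId")); // e.g. a dental plan id
            resp.sendRedirect("cart.jsp");        // the cart survives across requests
        }
    }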

    Read the article

  • MSI wrapper for msdeploy?

    - by Nate
    Just moved to VS2010 and found that deployment is quite different. The old way I did things (VS2008) was as follows: 1) I right-click on my web app and "Add web deployment project". 2) Start a web setup project, dump the web deployment project output in it, and add any custom installer actions (connection strings or other user input) as necessary using the custom installer class. Easy! Now that I don't have the "Add web deployment project" option (or can't find it!?!?), msdeploy seems to be the thing to do. I have successfully got this to work via the command line, but I still need some custom actions. In this example, we have separated our web services deployment from our UI deployment (Silverlight) so that they can be hosted separately or together. So... during/after install I need to ask the user where the web services are located. So I tried this: 1) Start a web setup project, and include all of the deploy package files (.deploy.cmd, /setparameters.xml, etc.). 2) Gather user input in my custom installer class, and shell-execute the deploy.cmd. Problem is... the deploy fails on deleting the zip file, since it's in use by another process (I assume it's my installer). Anyone have any ideas on how to get around this? Or is there a better way to accomplish this task? Any input would be appreciated! Nate

    Read the article

  • Deploying an ADF Secure Application using WLS Console

    - by juan.ruiz
    Last week I worked on a requirement from a customer who wanted to understand how to deploy to WLS an application with ADF Security without using JDeveloper. The main question was what steps were needed in order to set up Enterprise Roles, Security Policies and Application Credentials. In this entry I will explain the steps, taken using JDeveloper 11.1.1.2.0.

    Requirements: Instead of building a sample application from scratch, we can use Andrejus's sample application, which contains all the security pieces that we need. Open and migrate the project. Also make sure you adjust the database settings accordingly.

    Creating the EAR file: Review the security settings of the application by going into the Application -> Secure menu, and see that there are two enterprise roles as well as the ADF policies enforcing security on the main page. Make sure the Application Module uses the Data Source instead of a JDBC URL for its connection type; also take note of the data source name. In my case I have: java:comp/env/jdbc/HrDS. To facilitate access to this application once we deploy it, go to your ViewController project properties, select the Java EE Application category, and give a meaningful name to the context root as well as to the application name. Go to the ADFSecurityWL application properties -> Deployment and create a new EAR deployment profile. Uncheck "Auto generate and Synchronize weblogic-jdbc.xml Descriptors During Deployment". Deploy the application as an EAR file.

    Deploying the application to WLS using the WLS Console: On the WLS console, create a JNDI data source. This is the part of the whole exercise that I found trickiest, given that the name should match the AM's data source name; the naming convention that worked for me was jdbc.HrDS. Now, deploy the application manually by selecting Deployments -> Install, look for the EAR, and follow the default steps. If this is the first time you deploy the application, once the deployment finishes you will be asked to Activate Changes on the domain; these changes contain all the security policies and application roles inserted into the WLS instance.

    Creating roles and user groups for the application: To finish the after-deployment setup, we need to create the groups that are the equivalent of the Enterprise Roles of ADF Security. For our sample we have two Enterprise Roles: employeesApplication and managersApplication. After that, we create the application users and assign them into their respective groups. Now we can run the application and test the security constraints.

    Read the article

  • Open World Day 1 Continued

    - by Antony Reynolds
    A Day in the Life of an Oracle OpenWorld Attendee, Part II. A couple of things I forgot to mention about yesterday's OpenWorld. First, I attended a presentation on SOA Suite and virtualization which explained how Oracle Virtual Assembly Builder (OVAB) can be used to accelerate the deployment of an Enterprise Deployment Guide (EDG) compliant SOA Suite infrastructure. OVAB provides the ability to introspect a deployed software component such as WebLogic Server, SOA Suite or other components, extract the configuration, and package it up for rapid deployment into an Oracle Virtual Machine. OVAB allows multiple machines to be configured, and connections to be made between the machines and outside resources such as databases. That by itself is pretty cool and has been available for a while in OVAB. What is new is that Oracle has done this for an EDG-compliant installation and made it available as an OVAB assembly for customers to use, significantly accelerating the deployment of an EDG deployment. A real help for customers standing up EDG environments, particularly test, dev and QA environments. The other thing I forgot to mention was the most memorable demo I saw at OpenWorld. This was done by my co-author Matt Wright, who was showcasing the products of his company, Rubicon Red. They showed a really cool application called OneSpot, which puts all the information about a single user's business processes in one spot! Apparently a customer suggested the name. It allows business flows to be defined that map onto events. As events occur, the status of the business flow is updated to reflect the change. The interface is strongly reminiscent of social media sites and provides a graphical view of business flows. So how does this differ from BPEL and BPM process flows? The OneSpot process flow is more like a BAM process flow: it is based on events arriving from multiple sources, and is focused on the client's view of the process, not the actual business process. This is important because it allows an end user to get a view of where his current business flow is and what actions, if any, are required of him. This by itself is great, but better still is that OneSpot has a real-time updating view of events that have occurred (BAM style, no need to refresh the browser). This means that as new events occur, the end user can see them and jump to the business flow or take other appropriate actions. Under the covers, OneSpot makes use of Oracle Human Workflow to provide a forms interface, but this is not the HWF GUI you know! The HWF GUI screens are much prettier and have more of a social media feel about them, due to their use of images and pulling in of relevant related information. If you are at OOW, I strongly recommend you visit Matt or John at the Rubicon Red stand and ask, no, demand a demo of OneSpot!

    Read the article

  • Before the Summit of 2012

    - by Ajarn Mark Caldwell
    Today, Monday, was the first day of the PASS Summit preconference training events, but instead I spent the day at the free SQL in the City event put on by Red Gate. For me this was not a financial decision (pre-con sessions cost extra above the general Summit registration) but rather a matter of interest. I had already included money for pre-cons in this year's training budget, but none of them really stood out to me, so even if the Red Gate event were not going on at the same time, I probably would not have gone to any pre-cons this year. However, the topics being presented at the SQL in the City event were of great interest to me. There promised to be good information on continuous integration and automated deployment of database changes, which lately has been a real hot topic at my work. And indeed, Red Gate announced the release of a new tool (still in an Early Access Program… a.k.a. beta) called the Deployment Manager. Since we are in the middle of a TFS implementation project, it will be interesting to see how this plays out and compares to what we put together with the automated builds in TFS. But, as I understand it, the primary focus of Deployment Manager is not the build process (Red Gate uses JetBrains' TeamCity for that in their shop) but rather to aid in the deployment of those build packages, as well as providing easy rollback and a good visualization of which versions of software are in which environments. It looks promising, and I've already downloaded the installer package to play with it later. Overall, I was quite impressed with the SQL in the City event. Having heard many current and past members of the PASS Board of Directors describe the challenges of putting on a large conference, and the growing pains that the PASS Summit has gone through, I am even more impressed that the Red Gate event ran as smoothly as it did. And the amount of money that Red Gate must have spent is quite impressive, given that this was a no-charge event to attend, with a very nice hot lunch and an after-event drinks celebration. Well done, folks! Of course it was great to hear from a variety of speakers. Today I listened to some folks from Red Gate like Grant Fritchey (blog | @GFritchey) and David Atkinson (Product Manager for SQL Source Control and now the Deployment Manager tool set); and also Brent Ozar (blog | @BrentO) and Buck Woody (blog | @BuckWoody). By the way, if you have never seen either Brent or Buck speak, you really should. Different styles, but both are very entertaining and educational at the same time. I love Buck's sense of humor (here's a tip… don't be late to Buck's session or you'll become part of the presentation) and I praise Brent's slides. Brent's style very much reminds me of that espoused by Garr Reynolds on his Presentation Zen blog (and book), and I am impressed that he can make a technical presentation so engaging. It was a great day, a great way to kick off the week, and I am excited to get into the full Summit!

    Read the article

  • 2 Servers 1 Database - Can I use Redis?

    - by Aust
    Ok, I have a couple of questions here. First let me give you some background information. I'm starting a project where I have a node.js server running my application, and my website running on another, normal server. My application will allow multiple users simultaneous connections and updates to the database, so Redis seemed like a good fit there because of its speed and atomic operations. For someone to access my application, they have to log in with an account. To get an account, they have to sign up through my website. So my website needs a database, but it's not as important to have a database like Redis there, because it doesn't need one. Which leads me to my first question: 1. Can Redis even be used without node.js? It seems like it would be convenient if both of my servers were using the same database to keep track of information. In some cases, they will keep track of the same information (such as user information), and in other cases they will be keeping track of separate information. So even if the website wouldn't be taking full advantage of all that Redis has to offer, it seems like it would be more convenient. Assuming Redis could be used in this situation, that leads to my next question: 2. Since Redis is linked with JavaScript, how would I handle security for my website users? What would stop my website users from opening Firebug or Chrome's inspector and making changes to the database? Maybe I could design my site with a layout like this: apply.php -> update.php -> home.php, where after they submitted their form it would redirect them to the update page where the JavaScript would run, and then redirect them to the home page after the database updated. I don't really know; I'm just taking shots in the dark at this point. :) Maybe a better alternative would be to have my node.js application access its own Redis database and also have access to another MySQL database that my website also has access to. Or maybe there is another database better suited for this situation than Redis. Anyway, any direction on this matter would be greatly appreciated. :)
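    On question 1, for what it's worth: Redis is a standalone server that speaks a plain TCP protocol, and client libraries exist for most languages, not just JavaScript/node.js. A minimal sketch using the Jedis client for Java, purely as an illustration (a PHP or other website stack would use its own Redis client in the same way; host and key names below are placeholders):

    import redis.clients.jedis.Jedis;

    public class RedisFromJava {
        public static void main(String[] args) {
            // Plain TCP connection to the default Redis port; no node.js involved.
            Jedis jedis = new Jedis("localhost", 6379);
            jedis.set("user:1001:name", "alice");      // simple atomic write
            System.out.println(jedis.get("user:1001:name"));
            jedis.close();
        }
    }

    In every such setup it is server-side code that talks to Redis; browsers never connect to it directly, which is what keeps Firebug-style tampering away from the database itself.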

    Read the article

  • Transfer websites and domains to new server

    - by Albert
    We currently have around 40 websites and 80+ domains/sub-domains in a shared 1&1 hosting package, and we just acquired a managed dedicated server with 1&1 as well. Now it's time to start transferring everything over to the new server. Transferring just the websites and databases wouldn't be a problem; it would take time, but it's pretty straightforward. The problem comes when transferring the domains; let me explain why. Many of the websites we have are accessible via sub-domains of a parent domain. Ideally, we would like to transfer the sites one by one, in order to check for each one that everything works fine on the new server. However, since we also need to transfer each domain so it's managed on the new server, doing that means that all the websites using that domain need to already be on the new server before the domain is transferred, thus ruling out the "one by one" approach. Another issue is the downtime when transferring a domain, from the moment it stops working in the hosting package to when it becomes active on the new server. I believe there's nothing we can do about that. So my question is whether there's any way we can do the "one by one" transfer of the websites (and their corresponding sub-domains) in the circumstances described above. One idea I had would be:

    1. Let's say we have website A, which is accessible using subdomain.mydomain.com (and there are many other websites accessible via other sub-domains of mydomain.com).
    2. Transfer the files of website A to the new server.
    3. Point a test domain on the new server to website A's folder (the new server comes with a "test" domain).
    4. Test whether website A works with that "test" domain.
    5. In the old hosting, somehow point the real sub-domain (subdomain.mydomain.com) to the new location of website A, in a way that users always see the same URL as before.
    6. Repeat 2-5 for every website belonging to the same domain.
    7. Once all are working on the new server, do the actual transfer of the domain to the new server, and then re-create all the sub-domains and point them to their corresponding websites.

    That way, users wouldn't notice that there's been a change (except for a small downtime of the websites when doing the domain transfer). The part I'm not sure about is point 5 above. Is there any way to do that? I mean, do it in a way that users see the original domain all the time in their browser, even for internal pages (so not only for the home page, which would be subdomain.mydomain.com, but also, for example, for the contact page, which would be subdomain.mydomain.com/contact.php). Is there any way to do this? Or are we SOL and going to have to transfer everything at the same time?

    Read the article

  • Safely deploying changes to production servers

    - by oazabir
    When you deploy incremental changes to a production server, which is running and live all the time, you sometimes see error messages like "Compiler Error Message: The Type 'XXX' exists in both…". Sometimes you find the Application_Start event not firing although you shipped a new class, DLL or web.config. Sometimes you find static variables not getting initialized, and so on. So many weird things happen on web servers when you incrementally deploy changes to a server that has been up and running for several weeks. So, I came up with a foolproof set of housekeeping steps that we always follow whenever we deploy an incremental change to our websites. These steps ensure that the websites are properly recycled, caches are cleared, and all the data stored at Application level is initialized. First of all, you should have multiple web servers behind a load balancer. This way you can take one server out of the production traffic, do your deployment and housekeeping tasks like restarting IIS, and then put it back. Then you can do it for the second server, and so on. This ensures there's no outage for customers. If you can do it reasonably fast, hopefully customers won't notice the discrepancy between the servers, some having new code and some having old code. You should only do this when your changes aren't drastic. For example, you aren't delivering a completely revamped UI. In that case, some users hitting server1 with the latest UI would suddenly get a completely different experience, and then on the next page refresh they might hit server2 with the old code and get a totally different experience. This works for incremental, non-dramatic changes only. During deployment you should follow these steps:

    1. Take server X out of the load balancer so that it does not get any traffic.
    2. Stop all Windows services on the server.
    3. Stop IIS.
    4. Delete the Temporary ASP.NET Files folders of all .NET versions, in case you have multiple .NET versions running. You can follow this link.
    5. Deploy the changes.
    6. Flush any distributed cache you have, for example Velocity or Memcached.
    7. Start IIS.
    8. Start the Windows services on the server.
    9. Warm up all websites by hitting major URLs on the websites. You should have some automated script to do this (a sketch follows below). You can use tinyget to hit some major URLs, especially pages that take a lot of time to compile. Read my post on keeping websites warm with zero coding.
    10. Put server X back into the load balancer so that it starts receiving traffic.

    That's it. It should give you a clean deployment and prevent unexpected errors. You should print these steps and hang them on the desks of your deployment guys so that they never forget under deployment pressure.
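    For the warm-up step, any small script that requests the expensive pages will do. Here is a minimal Java sketch as one option alongside tinyget; the URLs are placeholders for each site's slow-to-compile pages:

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SiteWarmer {
        public static void main(String[] args) throws Exception {
            // Placeholder list: the major URLs of each website on the server.
            String[] urls = {
                "http://server-x/",
                "http://server-x/reports/summary.aspx"
            };
            for (String u : urls) {
                HttpURLConnection conn = (HttpURLConnection) new URL(u).openConnection();
                conn.setConnectTimeout(10000);
                conn.setReadTimeout(120000); // the first hit may trigger compilation, so be generous
                int status = conn.getResponseCode();
                try (InputStream in = conn.getInputStream()) {
                    while (in.read() != -1) { /* drain the response so the page fully renders */ }
                }
                System.out.println(u + " -> " + status);
            }
        }
    }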

    Read the article

  • DNS with name.com and Amazon S3

    - by aledalgrande
    I have a website on a bucket in Amazon S3, and recently started to get emails from Google: "Googlebot can't access your site". When I go to Webmaster Tools and try to fetch, it in fact doesn't work. Also, people in locations different from mine sometimes reported they could not access the website. Now, out of curiosity, I tried from my terminal:

        $ host xxx
        xxx is an alias for xxx.s3-website-us-west-1.amazonaws.com.
        xxx.s3-website-us-west-1.amazonaws.com is an alias for s3-website-us-west-1.amazonaws.com.
        s3-website-us-west-1.amazonaws.com has address yyy.yyy.yyy.yyy

    And when I try with dig:

        $ dig xxx
        ; <<>> DiG 9.8.3-P1 <<>> xxx
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17860
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;xxx.                            IN      A

        ;; ANSWER SECTION:
        xxx.                     300     IN      CNAME   xxx.s3-website-us-west-1.amazonaws.com.
        xxx.s3-website-us-west-1.amazonaws.com. 60 IN CNAME s3-website-us-west-1.amazonaws.com.
        s3-website-us-west-1.amazonaws.com. 60  IN      A       yyy

        ;; Query time: 1514 msec
        ;; SERVER: 75.75.75.75#53(75.75.75.75)
        ;; WHEN: Fri Aug 22 12:32:13 2014
        ;; MSG SIZE  rcvd: 127

    It seems OK to me. Why would Google tell me there is a DNS error?

    UPDATE: Google also cannot fetch robots.txt, but I can fetch it from my browser.

    UPDATE 2: I have a forwarding on the root to the www.* hostname:

        $ dig thenifty.me
        ; <<>> DiG 9.8.3-P1 <<>> thenifty.me
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49286
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;thenifty.me.                    IN      A

        ;; AUTHORITY SECTION:
        thenifty.me.             300     IN      SOA     ns1hwy.name.com. support.name.com. 1 10800 3600 604800 300

        ;; Query time: 148 msec
        ;; SERVER: 75.75.75.75#53(75.75.75.75)
        ;; WHEN: Fri Aug 22 13:32:56 2014
        ;; MSG SIZE  rcvd: 88
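
    A cross-check that may help here (an assumption about a sensible next step, not part of the original post): the queries above went through the resolver at 75.75.75.75, while Googlebot resolves through Google's own infrastructure, so querying a public resolver such as 8.8.8.8, for both the bare root name and the www host, can show whether the outside world sees the same records:

        $ dig @8.8.8.8 thenifty.me A +short
        $ dig @8.8.8.8 www.thenifty.me A +short

    If the first query returns nothing while the second resolves, the root name has no A/CNAME answer (consistent with the SOA-only response in UPDATE 2), and a crawler asked to fetch the root would report exactly that as a DNS error.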

    Read the article

  • Alternative to google map api, so that I can use it on an HTTPS/SSL encrypted website.

    - by Zeeshan Rang
    I did find a solution for this on the Google Maps API page, and I made the following changes as mentioned in it:

    1. Use Google Maps API for Flash version 1.9a or later.
    2. Add the following to your Flash application before the map is instantiated: Security.allowInsecureDomain("maps.googleapis.com");

    Ref: http://code.google.com/apis/maps/faq.html#flash_ssl

    My code looks like this, after the changes:

        <mx:TitleWindow verticalAlign="middle" horizontalAlign="center"
            xmlns:mx="http://www.adobe.com/2006/mxml"
            xmlns:maps="com.google.maps.*"
            width="1000" height="600" layout="absolute"
            backgroundAlpha="0" borderAlpha="0" borderThickness="0"
            showCloseButton="true" close="PopUpManager.removePopUp(this);">

            <mx:VBox width="70%" height="100%">
                <maps:Map id="map"
                    key="ABQIAAAA0L1JEoR6rWjh-BBQnLMtMBSVuZ5VlaqlIqiYPFMK_I5M2UTmHhSq_BJxLHiYcTDW9RxSF6HewNY7uA"
                    mapevent_mapready="onMapReady(event)"
                    width="100%" height="100%" />
            </mx:VBox>

            <mx:Script>
                <![CDATA[
                    //import flashx.textLayout.formats.Direction;
                    import mx.effects.AddItemAction;
                    //import flashx.textLayout.factory.TruncationOptions;
                    import mx.controls.Alert;
                    import mx.managers.PopUpManager;
                    import mx.rpc.events.ResultEvent;
                    import com.adobe.serialization.json.JSON;
                    import flash.events.Event;
                    import com.google.maps.*;
                    import com.google.maps.overlays.*;
                    import com.google.maps.services.*;
                    import com.google.maps.controls.ZoomControl;
                    import com.google.maps.controls.PositionControl;
                    import com.google.maps.controls.MapTypeControl;
                    import com.google.maps.services.ClientGeocoderOptions;
                    import com.google.maps.LatLng;
                    import com.google.maps.Map;
                    import com.google.maps.MapEvent;
                    import com.google.maps.MapMouseEvent;
                    import com.google.maps.MapType;
                    import com.google.maps.services.ClientGeocoder;
                    import com.google.maps.services.GeocodingEvent;
                    import com.google.maps.overlays.Marker;
                    import com.google.maps.overlays.MarkerOptions;
                    import com.google.maps.InfoWindowOptions;

                    private function onMapReady(event:MapEvent):void {
                        Security.allowInsecureDomain("maps.googleapis.com");
                        map.setCenter(new LatLng(41.651505,-72.094455), 13, MapType.NORMAL_MAP_TYPE);
                        map.addControl(new ZoomControl());
                        map.addControl(new PositionControl());
                        map.addControl(new MapTypeControl());
                        map.enableScrollWheelZoom();
                        map.enableContinuousZoom();
                    }
                ]]>
            </mx:Script>
        </mx:TitleWindow>

    But I still get the following error using this:

        The requested URL /mapsapi/publicapi?file=flashapi&url=https%3A%2F%2Fvirtual.c7beta.com%2Findex_cloud.swf&key=ABQIAAAA0L1JEoR6rWjh-BBQnLMtMBTW_Qkp6J0z76Etz3qzo8Hg3HdUQhSnD6lqp53NB0UrBmg5Xm2DlazWqA&v=1.18&flc=xt was not found on this server.

    Any suggestions as to what I am doing wrong here, and what I should do to make this work?

    Regards, zee
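
    One discrepancy visible in the snippet itself (an observation, not a confirmed fix): the FAQ step quoted above says to call Security.allowInsecureDomain before the map is instantiated, but here the call sits inside onMapReady, which only runs after the <maps:Map> tag has already created the map. A sketch of moving it to the component's preinitialize phase, everything else unchanged:

        <mx:TitleWindow ... preinitialize="onPreinit()" ...>
            <mx:Script>
                <![CDATA[
                    import flash.system.Security;

                    // Runs before child components (including maps:Map) are created.
                    private function onPreinit():void {
                        Security.allowInsecureDomain("maps.googleapis.com");
                    }
                ]]>
            </mx:Script>
            ...
        </mx:TitleWindow>

    Whether this resolves the 404 from /mapsapi/publicapi is uncertain (the error could equally involve the key or the flc parameter), but it at least satisfies the FAQ's "before the map is instantiated" requirement.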

    Read the article

  • Why this strange behavior of sqlbulkcopy in an asp.net website running under iis?

    - by Pandiya Chendur
    I'm using SqlClient.SqlBulkCopy to try to bulk copy a csv file into a database. I am getting the following error after calling the WriteToServer method: "The given value of type String from the data source cannot be converted to type bit of the specified target column."

    Here is my code:

        dt.Columns.Add("IsDeleted", typeof(byte));
        dt.Columns.Add(new DataColumn("CreatedDate", typeof(DateTime)));
        foreach (DataRow dr in dt.Rows)
        {
            if (dr["MobileNo2"] == "" && dr["DriverName2"] == "")
            {
                dr["MobileNo2"] = null;
                dr["DriverName2"] = "";
            }
            dr["IsDeleted"] = Convert.ToByte(0);
            dr["CreatedDate"] = Convert.ToDateTime(System.DateTime.Now.ToString());
        }
        string connectionString = System.Configuration.ConfigurationManager.
            ConnectionStrings["connectionString"].ConnectionString;
        SqlBulkCopy sbc = new SqlBulkCopy(connectionString);
        sbc.DestinationTableName = "DailySchedule";
        sbc.ColumnMappings.Add("WirelessId", "WirelessId");
        sbc.ColumnMappings.Add("RegNo", "RegNo");
        sbc.ColumnMappings.Add("DriverName1", "DriverName1");
        sbc.ColumnMappings.Add("MobileNo1", "MobileNo1");
        sbc.ColumnMappings.Add("DriverName2", "DriverName2");
        sbc.ColumnMappings.Add("MobileNo2", "MobileNo2");
        sbc.ColumnMappings.Add("IsDeleted", "IsDeleted");
        sbc.ColumnMappings.Add("CreatedDate", "CreatedDate");
        sbc.WriteToServer(dt);
        sbc.Close();

    There is no error when running under the Visual Studio development server, but it gives me an error when running under IIS.

    Here are my SQL Server table details:

        [Id] [int] IDENTITY(1,1) NOT NULL,
        [WirelessId] [int] NULL,
        [RegNo] [nvarchar](50) NULL,
        [DriverName1] [nvarchar](50) NULL,
        [MobileNo1] [numeric](18, 0) NULL,
        [DriverName2] [nvarchar](50) NULL,
        [MobileNo2] [numeric](18, 0) NULL,
        [IsDeleted] [tinyint] NULL,
        [CreatedDate] [datetime] NULL,
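
    A side note on the snippet (an assumption about a contributing cause, not a confirmed diagnosis): dr["MobileNo2"] == "" compares an object reference against a string, so it will usually be false even when the cell holds an empty string, and empty strings then reach SqlBulkCopy unconverted; assigning C# null to a DataRow cell also normally throws ("use DBNull instead"). A defensive variant of the loop:

        foreach (DataRow dr in dt.Rows)
        {
            // Compare by value, not by reference.
            if (Convert.ToString(dr["MobileNo2"]) == "" && Convert.ToString(dr["DriverName2"]) == "")
            {
                dr["MobileNo2"] = DBNull.Value;   // DataRow cells take DBNull.Value, not null
                dr["DriverName2"] = "";
            }
            dr["IsDeleted"] = Convert.ToByte(0);
            dr["CreatedDate"] = DateTime.Now;
        }

    This would not explain a bit-typed target on its own; the error message mentions type bit while the table above declares IsDeleted as tinyint, so it may also be worth checking that the IIS site points at the same database and schema as the development server.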

    Read the article

  • log4Net EventlogAppender does not work for Asp.Net 2.0 WebSite?

    - by Amitabh
    I have configured the log4Net EventLogAppender for ASP.NET 2.0. However, it does not log anything. I have the following in my Web.config:

        <log4net>
          <appender name="EventLogAppender" type="log4net.Appender.EventLogAppender">
            <param name="LogName" value="Test Log" />
            <param name="ApplicationName" value="Test-Web" />
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline" />
            </layout>
          </appender>
          <root>
            <priority value="ERROR"/>
            <appender-ref ref="EventLogAppender"/>
          </root>
          <logger name="NHibernate">
            <level value="ERROR" />
            <appender-ref ref="EventLogAppender" />
          </logger>
        </log4net>

    I already have the Test-Log Event Log created, and the AspNet user has permission on the Event Log registry entry. I also have log4Net configured in Global.asax Application_Start:

        log4net.Config.XmlConfigurator.Configure();
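
    A frequent cause with EventLogAppender (an assumption here, since the post doesn't say whether the source exists) is that the event source named by ApplicationName, "Test-Web", was never registered, and the ASP.NET worker process usually lacks the rights to create it on the fly. A one-time sketch, run with administrative rights (for example from a small console app), using the names from the config above:

        using System.Diagnostics;

        // Registers the "Test-Web" source against the "Test Log" log,
        // so log4net's EventLogAppender can write without creating it at runtime.
        if (!EventLog.SourceExists("Test-Web"))
        {
            EventLog.CreateEventSource(new EventSourceCreationData("Test-Web", "Test Log"));
        }

    Also worth noting: the text mentions a "Test-Log" log while the config's LogName is "Test Log" (with a space); if those really differ, the appender is pointing at a log that was never created.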

    Read the article

  • Having a hard time implementing jquery serialScroll on a website. Need help!

    - by Martin Defatte
    Here's the page I am attempting to implement it on: www.yosoh.com/2010/advertising/beyond/

    I have a custom script which figures out the width of all the images, adds it up, then sets the width of that page... I'd like to be able to set the arrows at the top to scroll to the next div (div.portfolioImage). I've followed Ariel Flesler's demo and documentation as best I can... but something keeps escaping me. I've finally gotten some "movement" on page load... but to tell you the truth, I can't figure out if it's my HTML structure, CSS styles, or implementation of serialScroll causing the issue.

    Here's the code for the buttons:

        <ul id="portfolioNav">
          <li><a href="" id="prev">&larr;</a></li>
          <li><a href="" id="next">&rarr;</a></li>
        </ul>

    Here's the script, as it is right now:

        $('#mainContent').css('overflow', 'hidden');
        $('#mainContent').serialScroll({
          items: '.portfolioItem',
          prev: 'a#prev',
          next: 'a#next',
          axis: 'x',
          duration: 1200,
          force: true,
          stop: true,
          lock: false,
          easing: 'easeOutQuart',
          jump: true
        });
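
    One mismatch worth flagging (an assumption based only on the text above, since the live page can't be checked here): the prose says the scrollable panes are div.portfolioImage, but the items option targets .portfolioItem. If the markup really uses portfolioImage, a sketch with the selector aligned, everything else as posted:

        $('#mainContent').serialScroll({
          items: '.portfolioImage',  // match the actual pane class from the markup
          prev: 'a#prev',
          next: 'a#next',
          axis: 'x',
          duration: 1200,
          force: true,
          stop: true,
          lock: false,
          easing: 'easeOutQuart',
          jump: true
        });

    The empty href="" on the arrows is also worth a look: if serialScroll's click handler ever fails to bind, clicking them reloads the page, which can masquerade as "movement" on page load.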

    Read the article

< Previous Page | 197 198 199 200 201 202 203 204 205 206 207 208  | Next Page >