Search Results

Search found 9596 results on 384 pages for 'remote assistance'.


  • Backing up my locally hosted rails apps in preparation for OS upgrade

    - by stephen murdoch
    I have some apps running on Heroku. I will be upgrading my OS in two weeks. The last time I upgraded (6 months ago) I ran into some problems. Here's what I did:
      1. copied all my rails apps onto DVD
      2. upgraded the OS
      3. transferred the rails apps from DVD to the new OS
    Then, after setting up new SSH keys, I tried to push to some of my heroku apps and, whilst I can't remember the exact error message off-hand, it more or less amounted to "fatal exception the remote end hung up", so I know that I'm doing something wrong here. First of all, is there any need for me to be putting my heroku-hosted rails apps onto DVD? Would I be better just pulling all my apps from their heroku repos once I've done the upgrade? What do others do here? The reason I stuck them on DVD is because I tend to push a specific production branch to Heroku and sometimes omit large development files from it... Secondly, was this problem caused by SSH keys? Should I have backed up the old keys and transferred them from my old OS to my new one too, or is Heroku perfectly happy to let you change OSes like that? My solution in the end was to just create new heroku apps and reassign the custom domain names in the Heroku add-ons menu... I never actually thought of pulling from the heroku repos, as I tend to push a specific branch to heroku and that branch doesn't always have all the development files in it... I realise that the error message I mentioned doesn't particularly help anyone, but I didn't think to remember it 6 months ago. Any advice would be appreciated. PS - when I say upgrade, I mean a full install of the new version with a full format of the HDD.

    Read the article

  • Why can't I create a database in an empty ASP MVC 2 project using Project->Add->New Item->SQL Server

    - by Dr Dork
    I'm diving head first into ASP MVC and am playing around with creating and manipulating a database. I did a search and found this tutorial for creating a database, however when I follow it, I get this error when trying to add a new database to my fresh, empty ASP MVC 2 project...

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

    The only requirement the tutorial mentioned was SQL Server Express, but when I went to download it, it said it was already installed. I'm assuming it was part of the VS 2010 RC I installed and am running, so I don't know what else I need or if I am missing something. This is all new to me, so I'm sure I'm missing something obvious here, and after I'm done posting this question I plan to do some more research into the topic of databases and how they work with ASP MVC. In the meantime, I was hoping you could help me answer a couple of high-level questions...
      1. What am I missing/forgetting to do that is causing this error?
      2. Any suggestions for good resources/tutorials that focus on using databases with ASP MVC?
    I've done a lot of database programming in the past, so I'm familiar with the concepts of relational databases and the SQL language. I wish I could find a good resource for learning how to work with them in an ASP dev environment, as well as a good breakdown of all the related technologies used for working with them (i.e. LINQ to SQL). Thanks so much in advance for all your help! I'm going to start researching these questions right now.

    Read the article

  • Automation Error when exporting Excel data to SQL Server

    - by brohjoe
    I'm getting an Automation error upon running VBA code in Excel 2007. I'm attempting to connect to a remote SQL Server DB and load data from Excel to SQL Server. The error I get is, "Run-time error '-2147217843(80040e4d)': Automation error". I checked out the MSDN site and it suggested that this may be due to a bug associated with the sqloledb provider and one way to mitigate this is to use ODBC. Well, I changed the connection string to reflect the ODBC provider and associated parameters and I'm still getting the same error. Here is the code with ODBC as the provider:

        Dim cnt As ADODB.Connection
        Dim rst As ADODB.Recordset
        Dim stSQL As String
        Dim wbBook As Workbook
        Dim wsSheet As Worksheet
        Dim rnStart As Range

        Public Sub loadData()
            'This was set up using Microsoft ActiveX Data Components version 6.0.
            'Create ADODB connection object, open connection and construct the connection string object.
            Set cnt = New ADODB.Connection
            cnt.ConnectionString = "Driver={SQL Server}; Server=onlineSQLServer2010.foo.com; Database=fooDB;Uid=logonalready;Pwd='helpmeOB1';"
            cnt.Open
            On Error GoTo ErrorHandler
            'Open Excel and run query to export data to SQL Server.
            strSQL = "SELECT * INTO SalesOrders FROM OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0'," & _
                "'Data Source=C:\Database.xlsx; Extended Properties=Excel 12.0')...[SalesOrders$]"
            cnt.Execute (strSQL)
            'Error handling.
        ErrorExit:
            'Reclaim memory from the connection objects
            Set rst = Nothing
            Set cnt = Nothing
            Exit Sub
        ErrorHandler:
            MsgBox Err.Description, vbCritical
            Resume ErrorExit
            'clean up and reclaim memory resources.
            If CBool(cnt.State And adStateOpen) Then
                Set rst = Nothing
                Set cnt = Nothing
                cnt.Close
            End If
        End Sub

    Read the article

  • SecurityNegotiationException in WCF Service Hosted on IIS

    - by Ram
    Hi, I have hosted a WCF service on IIS. The configuration file is as follows <?xml version="1.0"?> <!-- Note: As an alternative to hand editing this file you can use the web admin tool to configure settings for your application. Use the Website->Asp.Net Configuration option in Visual Studio. A full list of settings and comments can be found in machine.config.comments usually located in \Windows\Microsoft.Net\Framework\v2.x\Config --> <configuration> <configSections> <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere" /> <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> </sectionGroup> </sectionGroup> </sectionGroup> </configSections> <appSettings/> <connectionStrings/> <system.web> <!-- Set compilation debug="true" to insert debugging symbols into the compiled page. Because this affects performance, set this value to true only during development. --> <compilation debug="false"> <assemblies> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </assemblies> </compilation> <!-- The <authentication> section enables configuration of the security authentication mode used by ASP.NET to identify an incoming user. --> <authentication mode="Windows" /> <!-- The <customErrors> section enables configuration of what to do if/when an unhandled error occurs during the execution of a request. Specifically, it enables developers to configure html error pages to be displayed in place of a error stack trace. 
<customErrors mode="RemoteOnly" defaultRedirect="GenericErrorPage.htm"> <error statusCode="403" redirect="NoAccess.htm" /> <error statusCode="404" redirect="FileNotFound.htm" /> </customErrors> --> <pages> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </controls> </pages> <httpHandlers> <remove verb="*" path="*.asmx"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false"/> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </httpModules> </system.web> <system.codedom> <compilers> <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <providerOption name="CompilerVersion" value="v3.5"/> <providerOption name="WarnAsError" value="false"/> </compiler> </compilers> </system.codedom> <system.web.extensions> <scripting> <webServices> <!-- Uncomment this section to enable the authentication service. Include requireSSL="true" if appropriate. <authenticationService enabled="true" requireSSL = "true|false"/> --> <!-- Uncomment these lines to enable the profile service, and to choose the profile properties that can be retrieved and modified in ASP.NET AJAX applications. <profileService enabled="true" readAccessProperties="propertyname1,propertyname2" writeAccessProperties="propertyname1,propertyname2" /> --> <!-- Uncomment this section to enable the role service. <roleService enabled="true"/> --> </webServices> <!-- <scriptResourceHandler enableCompression="true" enableCaching="true" /> --> </scripting> </system.web.extensions> <!-- The system.webServer section is required for running ASP.NET AJAX under Internet Information Services 7.0. It is not necessary for previous version of IIS. 
--> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <modules> <add name="ScriptModule" preCondition="integratedMode" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </modules> <handlers> <remove name="WebServiceHandlerFactory-Integrated"/> <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </handlers> </system.webServer> <system.serviceModel> <services> <service name="IISTest2.Service1" behaviorConfiguration="IISTest2.Service1Behavior"> <!-- Service Endpoints --> <endpoint address="" binding="wsHttpBinding" contract="IISTest2.IService1"> <!-- Upon deployment, the following identity element should be removed or replaced to reflect the identity under which the deployed service runs. If removed, WCF will infer an appropriate identity automatically. --> <identity> <dns value="localhost"/> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/> </service> </services> <behaviors> <serviceBehaviors> <behavior name="IISTest2.Service1Behavior"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. 
Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="false"/> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> </configuration> The client configuration file is as follows <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <bindings> <wsHttpBinding> <binding name="WSHttpBinding_IService1" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Message"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" establishSecurityContext="true" /> </security> </binding> </wsHttpBinding> </bindings> <client> <endpoint address="http://yyy.zzz.xxx.net/IISTest2/Service1.svc" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IService1" contract="ServTest.IService1" name="WSHttpBinding_IService1"> <identity> <dns value="localhost" /> </identity> </endpoint> </client> </system.serviceModel> </configuration> When I tried to access the service from client application, I got SecurityNegotiationException and details are Secure channel cannot be opened because security negotiation with the remote endpoint has failed. This may be due to absent or incorrectly specified EndpointIdentity in the EndpointAddress used to create the channel. Please verify the EndpointIdentity specified or implied by the EndpointAddress correctly identifies the remote endpoint. If I host the service on ASP .NET Dev server, it work well but if I host on IIS above mentioned error occurs. Thanks, Ram

    Read the article

  • icacls giving the error "The system cannot find the file specified" when executed on a folder

    - by someshine
    I have a remote share on Windows 7 and I am accessing the same share from Windows 2008. I have a small PHP script which denies the read permission on a folder in the share, then grants full permissions back on that folder and deletes it. The problem is that denying the read permission works fine, but assigning full permission back on that folder in the share does not. I am using icacls for assigning all the permissions. When I try to assign full permission on that folder in the share, it gives the error message "The system cannot find the file specified". When we output the command from the script and execute it on the command line, it works fine. Why is it not working from my PHP script? Please advise.

        $dir_name = "testdir";
        $share_path = "\\SERVERABCD\myshare";
        mkdir($share_path."\$dir_name");
        $path_escaped = '"'.$path.'"';
        $user = getenv('USERNAME');
        $cmd = 'icacls ' . $path_escaped . ' /deny ' . $user . ':(OI)(CI)(R)';
        exec($cmd);
        //From this statement onwards it gives the error message "The system cannot find the file specified"
        $cmd = 'icacls ' . $path_escaped . ' /remove:d' .' '. $user;
        exec($cmd);
        $cmd = 'icacls ' . $path_escaped . ' /grant ' . $user . ':(OI)(CI)(F)';
        exec($cmd);
        if(file_exists($share_path."\$dir_name")){
            rmdir($share_path."\$dir_name");
        }

    This script works fine for files on shares, whereas it says "The system cannot find the file specified" for folders from the second $cmd statement onwards.

    Read the article

  • How to estimate size of data to transfer when using DbCommand.ExecuteXXX?

    - by Yadyn
    I want to show the user detailed progress information when performing potentially lengthy database operations; specifically, when inserting/updating data that may be on the order of hundreds of KB or MB. Currently, I'm using in-memory DataTables and DataRows which are then synced with the database via TableAdapter.Update calls. This works fine and dandy, but the single call leaves little opportunity to glean any kind of progress info to show to the user. I have no idea how much data is passing through the network to the remote DB or its progress. Basically, all I know is when Update returns and it is assumed complete (barring any errors or exceptions). But this means all I can show is 0%, then a pause, and then 100%.
    I can count the number of rows, even going so far as to count how many are actually Modified or Added, and I could even maybe calculate per DataRow its estimated size based on the datatype of each column, using sizeof for value types like int and checking length for things like strings or byte arrays. With that, I could probably determine, before updating, an estimated total transfer size, but I'm still stuck without any progress info once Update is called on the TableAdapter. Am I stuck just using an indeterminate progress bar or mouse waiting cursor? Would I need to radically change our data access layer to be able to hook into this kind of information? Even if I can't get it down to the precise KB transferred (like a web browser file download progress bar), could I at least know when each DataRow/DataTable finishes or something? How do you best show this kind of progress info using ADO.NET?

    Read the article

  • Where are the real risks in network security?

    - by Barry Brown
    Anytime username/password authentication is used, the common wisdom is to protect the transport of that data using encryption (SSL, HTTPS, etc). But that leaves the end points potentially vulnerable. Realistically, which is at greater risk of intrusion?
      - Transport layer: compromised via wireless packet sniffing, malicious wiretapping, etc.
      - Transport devices: risks include ISPs and Internet backbone operators sniffing data.
      - End-user device: vulnerable to spyware, key loggers, shoulder surfing, and so forth.
      - Remote server: many uncontrollable vulnerabilities, including malicious operators, break-ins resulting in stolen data, physically heisting servers, backups kept in insecure places, and much more.
    My gut reaction is that although the transport layer is relatively easy to protect via SSL, the risks in the other areas are much, much greater, especially at the end points. For example, at home my computer connects directly to my router; from there it goes straight to my ISP's routers and onto the Internet. I would estimate the risks at the transport level (both software and hardware) as low to non-existent. But what security does the server I'm connected to have? Has it been hacked into? Is the operator collecting usernames and passwords, knowing that most people use the same information at other websites? Likewise, has my computer been compromised by malware? Those seem like much greater risks. What do you think?

    Read the article

  • Amazon API ItemSearch returns (400) Bad Request.

    - by BuzzBubba
    I'm using a simple example from the Amazon documentation for ItemSearch and I get a strange error: "The remote server returned an unexpected response: (400) Bad Request." This is the code:

        public static void Main()
        {
            // Remember to create an instance of the amazon service, including your Access ID.
            AWSECommerceServicePortTypeClient service = new AWSECommerceServicePortTypeClient(new BasicHttpBinding(),
                new EndpointAddress("http://webservices.amazon.com/onca/soap?Service=AWSECommerceService"));
            AWSECommerceServicePortTypeClient client = new AWSECommerceServicePortTypeClient(new BasicHttpBinding(),
                new EndpointAddress("http://webservices.amazon.com/onca/soap?Service=AWSECommerceService"));

            // prepare an ItemSearch request
            ItemSearchRequest request = new ItemSearchRequest();
            request.SearchIndex = "Books";
            request.Title = "Harry+Potter";
            request.ResponseGroup = new string[] { "Small" };

            ItemSearch itemSearch = new ItemSearch();
            itemSearch.Request = new ItemSearchRequest[] { request };
            itemSearch.AWSAccessKeyId = accessKeyId;

            // issue the ItemSearch request
            try
            {
                ItemSearchResponse response = client.ItemSearch(itemSearch);
                // write out the results
                foreach (var item in response.Items[0].Item)
                {
                    Console.WriteLine(item.ItemAttributes.Title);
                }
            }
            catch (Exception e)
            {
                Console.ForegroundColor = ConsoleColor.Red;
                Console.WriteLine(e.Message);
                Console.ForegroundColor = ConsoleColor.White;
                Console.WriteLine("Press any key to quit...");
                Clipboard.SetText(e.Message);
            }
            Console.ReadKey();
        }

    What is wrong?

    Read the article

  • Why isn't module_filters filtering Mail::IspMailGate in CPAN::Mini?

    - by user304122
    Edited - Ummm - I now have a module in schwigon giving the same problem!! I am on a corporate PC that forces mcshield on everything that moves. I get blocked when trying to mirror:

        authors/id/J/JV/JV/EekBoek-2.00.01.tar.gz ... updated
        authors/id/J/JV/JV/CHECKSUMS ... updated
        Could not stat tmpfile '/cygdrive/t/cpan_mirror/authors/id/J/JW/JWIED/Mail-IspMailGate-1.1013.tar.gz-4712': No such file or directory at /usr/lib/perl5/site_perl/5.10/LWP/UserAgent.pm line 851.
        authors/id/J/JW/JWIED/Mail-IspMailGate-1.1013.tar.gz

    At this point the virus scanner mcshield sticks its oar in. To maintain my mirror I execute:

        #!/usr/bin/perl
        CPAN::Mini->update_mirror(
            remote => "http://mirror.eunet.fi/CPAN",
            local  => "/cygdrive/t/cpan_mirror/",
            trace  => 1,
            errors => 1,
            module_filters => [
                qr/kjkjhkjhkjkj/i,
                qr/clamav/i,
                qr/ispmailgate/i,
                qr/IspMailGate/,
                qr/Mail-IspMailGate/,
                qr/mail-ispmailgate/i,
            ],
            path_filters => [
                qr/ZZYYZZ/,
                #qr/WIED/,
                #qr/RJBS/,
            ]
        );

    It skips OK if I enable the path_filter WIED. I just cannot get it to skip the failing module so that the other WIED modules can complete. Any ideas?

    Read the article

  • RemoteRef.invoke implementation

    - by phill
    Just finished a basic implementation of RMI for a class project, and now I am interested in how it is actually done. Sun is kind enough to provide the source for the majority of the Java classes with the JDK; however, an implementation of RemoteRef doesn't seem to be there. I have the source for the interface RemoteRef along with the ServerRef interface and one implementation, ProxyRef, which just calls invoke on another RemoteRef, but none of the classes that implement actual code, ActivatableRef or UnicastRef for example, are included. I mention ActivatableRef and UnicastRef because I believe these have proper implementations of invoke, thanks to the wonder that is Eclipse and its class file editor showing that it is more than just a method declaration. Although I can tell that it is more than a declaration, I can't get much more out of it: building a string here, a thrown exception there, but nothing about the process that is taking place to send the remote method call. Would anyone here happen to know where I can get this code, or if it's even available? If it is not available, would anyone happen to know what the message being sent to the server looks like? phill
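
    The actual UnicastRef source isn't quoted above, but conceptually a JRMP-style call comes down to writing an object ID, a method identifier and marshalled arguments to a socket, then reading the marshalled result back. The following is only a rough, hypothetical sketch of that wire exchange using plain sockets and object streams; the class and field names are invented for illustration and do not match Sun's internal implementation:

        import java.io.ObjectInputStream;
        import java.io.ObjectOutputStream;
        import java.io.Serializable;
        import java.net.Socket;

        // Hypothetical, simplified remote reference: NOT the real sun.rmi code.
        class SimpleRemoteRef implements Serializable {
            private final String host;   // endpoint the stub points at
            private final int port;
            private final long objectId; // identifies the exported object on the server

            SimpleRemoteRef(String host, int port, long objectId) {
                this.host = host;
                this.port = port;
                this.objectId = objectId;
            }

            // Roughly what a stub does for each remote method call.
            Object invoke(long methodHash, Object[] args) throws Exception {
                try (Socket s = new Socket(host, port);
                     ObjectOutputStream out = new ObjectOutputStream(s.getOutputStream())) {
                    out.writeLong(objectId);    // which exported object
                    out.writeLong(methodHash);  // which method on that object
                    out.writeObject(args);      // marshalled arguments (must be serializable)
                    out.flush();
                    try (ObjectInputStream in = new ObjectInputStream(s.getInputStream())) {
                        Object result = in.readObject();
                        if (result instanceof Exception) {
                            throw (Exception) result; // server-side failure is re-thrown locally
                        }
                        return result;              // marshalled return value
                    }
                }
            }
        }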

    Read the article

  • java RMI connection to server

    - by user85116
    I have a very simple rmi client / server application. I don't use the "rmiregistry" application though; I use this to create the server:

        server = new RemoteServer();
        registry = LocateRegistry.createRegistry(PORT);
        registry.bind("RemoteServer", server);

    The client part is:

        registry = LocateRegistry.getRegistry(IPADDRESS, PORT);
        remote = (IRemoteServer) registry.lookup("RemoteServer");

    Here is the fascinating problem: the application works perfectly when both server and client are running in my (private) local network. As soon as I put the server on a public server, the application hangs for a minute, then gives me the following error:

        java.rmi.ServerException: RemoteException occurred in server thread; nested exception is:
            java.rmi.ConnectException: Connection refused to host: 192.168.x.y; nested exception is:
            java.net.ConnectException: Connection timed out: connect
            at sun.rmi.server.UnicastServerRef.dispatch(Unknown Source)
            at sun.rmi.transport.Transport$1.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at sun.rmi.transport.Transport.serviceCall(Unknown Source)
            at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source)
            at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown Source)
            ... (the rest is truncated)

    The key, I think, is that the client (running on my private network) cannot connect to itself (my address is 192.168.x.y, where x.y are some other numbers, but the real error message shows my IP address listed there). If I kill the rmi server on the public internet, then I instantly get a "connection refused to host: a.b.c.d" message, so I know that something at the server end is at least working. Any suggestions? EDIT: just to make this a little more clear: 192.168.x.y is the client address, a.b.c.d is the server address. The stacktrace shows the client cannot connect to the client.
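
    Not part of the original post, but the symptom described (the stub advertising a private 192.168.x.y address) is usually addressed by telling the server-side RMI runtime which address to embed in the stubs it hands out. A minimal sketch, assuming the server's public address is a.b.c.d as in the post and reusing the post's RemoteServer class:

        import java.rmi.registry.LocateRegistry;
        import java.rmi.registry.Registry;

        public class ServerBootstrap {
            public static void main(String[] args) throws Exception {
                // Tell RMI to put the public address into exported stubs instead of the
                // machine's private/LAN address (can also be passed as -Djava.rmi.server.hostname=...).
                System.setProperty("java.rmi.server.hostname", "a.b.c.d"); // placeholder public address

                RemoteServer server = new RemoteServer();          // same class as in the post
                Registry registry = LocateRegistry.createRegistry(1099);
                registry.bind("RemoteServer", server);
                System.out.println("Server ready");
            }
        }

    Any firewall or NAT in front of the server must also forward the registry port and the port the remote object is exported on.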

    Read the article

  • Having trouble coming up with a good architecture for a client/server application

    - by rmw1985
    I am writing a remote backup service meant to support 1000+ users. It is going to use librsync to store reverse diffs (like rdiff-backup) and make data transfer efficient. My trouble is that I do not know the "best" way to implement the client/server model. I have thought of doing it like rsync/rdiff-backup do it, by having the client open an SSH connection, run a server executable and communicate across pipes. Another alternative would be to write a server which would handle authentication and communicate with the client via SSL. The reason I have thought of this is that there is "state" information, like how many backup jobs are set up, etc., that must be maintained. Another alternative that I have thought about is running a "web service" using Pylons or Django to handle the authentication, but I do not know how to bridge that to the "storage" side. Since I am using librsync, I cannot use "dumb" storage. Is there a way to pipe data through Pylons or Django to a server-side handler that would do the rsync calculation? This seems to me like maybe a dumb question but I am sort of lost. Any tips or suggestions from more experienced developers would be extremely helpful.

    Read the article

  • Locating memory leak in Apache httpd process, PHP/Doctrine-based application

    - by Sam
    I have a PHP application using these components:
      - Apache 2.2.3-31 on Centos 5.4
      - PHP 5.2.10
      - Xdebug 2.0.5 with Remote Debugging enabled
      - APC 3.0.19
      - Doctrine ORM for PHP 1.2.1 using Query Caching and Results Caching via APC
      - MySQL 5.0.77 using Query Caching
    I've noticed that when I start up Apache, I eventually end up with 10 child processes. As time goes on, each process will grow in memory until each one approaches 10% of available memory, which begins to slow the server to a crawl since together they grow to take up 100% of memory. Here is a snapshot of my top output:

        PID   USER    PR  NI  VIRT   RES  SHR  S  %CPU %MEM   TIME+  COMMAND
        1471  apache  16   0  626m  201m  18m  S   0.0 10.2  1:11.02 httpd
        1470  apache  16   0  622m  198m  18m  S   0.0 10.1  1:14.49 httpd
        1469  apache  16   0  619m  197m  18m  S   0.0 10.0  1:11.98 httpd
        1462  apache  18   0  622m  197m  18m  S   0.0 10.0  1:11.27 httpd
        1460  apache  15   0  622m  195m  18m  S   0.0 10.0  1:12.73 httpd
        1459  apache  16   0  618m  191m  18m  S   0.0  9.7  1:13.00 httpd
        1461  apache  18   0  616m  190m  18m  S   0.0  9.7  1:14.09 httpd
        1468  apache  18   0  613m  190m  18m  S   0.0  9.7  1:12.67 httpd
        7919  apache  18   0  116m   75m  15m  S   0.0  3.8  0:19.86 httpd
        9486  apache  16   0  97.7m  56m  14m  S   0.0  2.9  0:13.51 httpd

    I have no long-running scripts (they all terminate eventually, the longest being maybe 2 minutes long), and I am working under the assumption that once each script terminates, the memory it uses gets deallocated. (Maybe someone can correct me on that.) My hunch is that it could be APC, since it stores data between requests, but at the same time, it seems weird that it would store data inside the httpd process. How can I track down which part of my app is causing the memory leak? What tools can I use to see how the memory usage is growing inside the httpd process and what is contributing to it?

    Read the article

  • Why do I get EXC_BAD_ACCESS in cases when the object is not nil?

    - by DixieFlatline
    I have an app that receives remote notifications. My view controller that is shown after the push has a tableview. The app crashes very randomly (1 in 20 tries) at the line setting the frame:

        if (!myTableView) {
            NSLog(@"self.myTableView is nil");
        }
        myTableView.frame = CGRectMake(0, 70, 320, 376);

    This only happens when I open the app, then open some other apps and then receive the push notification. I guess it has something to do with memory. I use ARC (iOS 5). The strange thing is that the NSLog is not displayed, so the tableview is not nil. Crash log:

        Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
        Exception Codes: KERN_INVALID_ADDRESS at 0x522d580c
        Crashed Thread:  0
        Thread 0 name:  Dispatch queue: com.apple.main-thread
        Thread 0 Crashed:
        0   libobjc.A.dylib  0x352b1f7e objc_msgSend + 22
        1   Foundation       0x37dc174c NSKVOPendingNotificationCreate + 216
        2   Foundation       0x37dc1652 NSKeyValuePushPendingNotificationPerThread + 62
        3   Foundation       0x37db3744 NSKeyValueWillChange + 408
        4   Foundation       0x37d8a848 -[NSObject(NSKeyValueObserverNotification) willChangeValueForKey:] + 176
        5   Foundation       0x37e0ca14 _NSSetPointValueAndNotify + 76
        6   UIKit            0x312af25a -[UIScrollView(Static) _adjustContentOffsetIfNecessary] + 1890
        7   UIKit            0x312cca54 -[UIScrollView setFrame:] + 548
        8   UIKit            0x312cc802 -[UITableView setFrame:] + 182
        9   POViO            0x000913cc -[FeedVC viewWillAppear:] (FeedVC.m:303)

    Dealloc is not called, because it is not logged:

        - (void)dealloc {
            NSLog(@"dealloc");
        }

    Read the article

  • unable to place breakpoints in eclipse

    - by anjanb
    I am using eclipse europa (3.5) on windows vista home premium 64-bit using JDK 1.6.0_18 (32 BIT). Normally, I am able to put breakpoints just fine; However, for a particular class which is NOT part of the project (this class is inside a .JAR file (.JAR file is part of the project) ), although I have attached a source directory to this .JAR file, I am unable to place a breakpoint in this class. If I double-click on the breakpoint pane(left border), I notice that a class breakpoint is placed. I was wondering if there was NO debug info; However, found that this particular class was compiled using ant/javac task using debug="true" and debuglevel="lines,vars,source". I even ran jad on this class to confirm that it indeed contained the debug info. So, why is eclipse preventing me from placing a breakpoint ? EDIT : Just so everyone understands the context, this is a webapp running under tomcat 6.0. I am remote debugging the application from eclipse after having started tomcat outside. The application is working just fine. I am trying to understand the behavior of the above class which I'm unable to do since eclipse is not letting me set a BP. P.S : I saw a few threads here talking about BPs not being hit but in my case, I am unable to place the BP! P.P.S : I tried JDK 1.6.0_16 before trying out 1.6.0_18. Thanks for any pointers.

    Read the article

  • Jsch how to list files along with directories

    - by Rajeev
    In the following code, when the command ls -a is executed, why is it that I see only the list of directories and not any files that are on the remote Linux server?

        JSch jsch = new JSch();
        Session session = jsch.getSession(username1, ip1, 22);
        java.util.Properties config = new java.util.Properties();
        config.put("StrictHostKeyChecking", "no");
        session.setConfig(config);
        //session.setPassword(password.getText().toString());
        session.setPassword(password1);
        session.connect();

        final ListView mainlist = (ListView)findViewById(R.id.list);
        final RelativeLayout rl = (RelativeLayout)findViewById(R.id.rl);
        mainlist.setVisibility(View.VISIBLE);
        rl.setVisibility(View.INVISIBLE);

        String command = "ls -a";
        Channel channel = session.openChannel("exec");
        ((ChannelExec) channel).setCommand(command);
        channel.setInputStream(null);
        ((ChannelExec) channel).setErrStream(System.err);
        InputStream in = channel.getInputStream();
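
    The excerpt stops before the exec channel is connected or its output read, so it isn't shown how the listing is consumed. For reference, a minimal sketch of running a command over JSch and printing everything it returns (files and directories alike) might look like the following; the host, user and password values are placeholders:

        import com.jcraft.jsch.ChannelExec;
        import com.jcraft.jsch.JSch;
        import com.jcraft.jsch.Session;
        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class JschListing {
            public static void main(String[] args) throws Exception {
                JSch jsch = new JSch();
                Session session = jsch.getSession("user", "host.example.com", 22); // placeholders
                java.util.Properties config = new java.util.Properties();
                config.put("StrictHostKeyChecking", "no");
                session.setConfig(config);
                session.setPassword("secret"); // placeholder
                session.connect();

                ChannelExec channel = (ChannelExec) session.openChannel("exec");
                channel.setCommand("ls -a");          // ls prints files and directories alike
                channel.setErrStream(System.err);
                BufferedReader reader = new BufferedReader(new InputStreamReader(channel.getInputStream()));
                channel.connect();                    // easy to forget: without this the command never runs

                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);         // one entry per line, exactly as ls prints it
                }
                channel.disconnect();
                session.disconnect();
            }
        }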

    Read the article

  • Integrating Magento with SAP ECC 6.0 Backend

    - by ikarous
    I'm a freshly graduated (read: inexperienced) developer who's been tasked with determining the feasibility of integrating Magento with an SAP-based backend. No developer at our company has any experience working with either SAP or Magento, so I was hoping that the Stack Overflow community may be able to point me in the right direction with my research. We're a small company (four full-time developers) and the timeline on this project would be tight, so I'm trying to gather as much information as possible. The client has a tiered pricing structure, tax calculation logic, promotional deals, and automatic freight determination all implemented in an SAP ECC 6.0 system. They would like to migrate all their online stores over to Magento while continuing to utilize all existing functionality in SAP. The idea is to accomplish this by overriding certain modules in Magento to place remote calls to SAP BAPIs. I've investigated SAPRFC, which looks promising but relatively stale in terms of update frequency. Do any developers have experience using SAPRFC with SAP ECC 6.0 (with or without Magento integration)? If so, what were your experiences, and what were the biggest risk factors involved? Any comments, suggestions, or links to resources would be greatly appreciated.

    Read the article

  • Problem with relative path to image in XAML?

    - by Giri
    I am trying to reference a PNG file in my application's working directory through XAML with the following:

        <Image Name="contactImage">
            <Image.Source>
                <BitmapImage UriSource="/Images/contact.png" />
            </Image.Source>
        </Image>

    Now in my code-behind I try to get the height of the image with contactImage.Source.Height. This fails with System.IOException - cannot locate resource 'images/contact.png'. If I use something like

        PngBitmapDecoder p = new PngBitmapDecoder(new Uri("./Images/contact.png", UriKind.Relative),
            BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.Default);

    everything is happy. How can I reference an image in XAML with a path relative to the working directory of the app? BTW - this is being run on a remote machine (if that makes a difference). I have tried "./Images/contact.png" and ".\Images\contact.png" and several other combinations of back/forward slashes and dots. Here is the primary difference: any time the file is referenced in XAML, it shows up as pack://application:,,, blah blah blah; when I use the PngBitmapDecoder, it shows up correctly as "./Images/contact.png". How do I reference the image file in XAML and get it to show a source of "./Images/contact.png" instead of pack://application,,, blah blah blah?

    Read the article

  • Unable to upload large files on FTP using Apache commons-net-3.1

    - by Nitin
    I am trying to upload one large file (more than 8 MB) using the storeFile(remote, local) method of FTPClient, but it returns false. The file gets uploaded with some extra bytes. Following is the code with output:

        public class Main {
            public static void main(String[] args) {
                FTPClient client = new FTPClient();
                FileInputStream fis = null;
                try {
                    client.connect("208.106.181.143");
                    client.setFileTransferMode(client.BINARY_FILE_TYPE);
                    client.login("abc", "java");
                    int reply = client.getReplyCode();
                    System.out.println("Received Reply from FTP Connection:" + reply);
                    if(FTPReply.isPositiveCompletion(reply)){
                        System.out.println("Connected Success");
                    }
                    client.changeWorkingDirectory("/"+"Everbest"+"/");
                    client.makeDirectory("ETPSupplyChain5.3-EvbstSP3");
                    client.changeWorkingDirectory("/"+"Everbest"+"/"+"ETPSupplyChain5.3-EvbstSP3"+"/");
                    FTPFile[] names = client.listFiles();
                    String filename = "E:\\Nitin\\D-Drive\\Installer.rar";
                    fis = new FileInputStream(filename);
                    boolean result = client.storeFile("Installer.rar", fis);
                    int replyAfterupload = client.getReplyCode();
                    System.out.println("Received Reply from FTP Connection replyAfterupload:" + replyAfterupload);
                    System.out.println("result:"+result);
                    for (FTPFile name : names) {
                        System.out.println("Name = " + name);
                    }
                    client.logout();
                    fis.close();
                    client.disconnect();
                } catch (SocketException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                } catch (IOException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
        }

    Output:

        Received Reply from FTP Connection:230
        Connected Success
        32
        /Everbest/ETPSupplyChain5.3-EvbstSP3
        Received Reply from FTP Connection replyAfterupload:150
        result:false
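
    Not from the original post: one detail that stands out in the snippet above is that setFileTransferMode is passed BINARY_FILE_TYPE, which is a file-type constant rather than a transfer-mode constant, so the transfer may still run in the default ASCII type. A hedged sketch of the usual commons-net setup (binary file type via setFileType, local passive mode, and inspecting the final reply), reusing the post's host, credentials and paths as placeholders:

        import java.io.FileInputStream;
        import java.io.InputStream;
        import org.apache.commons.net.ftp.FTP;
        import org.apache.commons.net.ftp.FTPClient;
        import org.apache.commons.net.ftp.FTPReply;

        public class LargeUpload {
            public static void main(String[] args) throws Exception {
                FTPClient client = new FTPClient();
                client.connect("208.106.181.143");                 // host from the post
                if (!FTPReply.isPositiveCompletion(client.getReplyCode())) {
                    client.disconnect();
                    throw new IllegalStateException("FTP server refused connection");
                }
                client.login("abc", "java");                       // credentials from the post
                client.setFileType(FTP.BINARY_FILE_TYPE);          // binary file type, not transfer mode
                client.enterLocalPassiveMode();                    // often needed behind firewalls/NAT
                client.changeWorkingDirectory("/Everbest/ETPSupplyChain5.3-EvbstSP3/");

                try (InputStream in = new FileInputStream("E:\\Nitin\\D-Drive\\Installer.rar")) {
                    boolean stored = client.storeFile("Installer.rar", in);
                    // inspect the final reply only after storeFile has returned
                    System.out.println("storeFile returned: " + stored
                            + ", final reply: " + client.getReplyString());
                }
                client.logout();
                client.disconnect();
            }
        }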

    Read the article

  • Elmah sends error mail on development server, but not on production

    - by Adrian Grigore
    Hi, I am trying to set up Elmah so that it sends me an e-mail when a new error occurs. This works fine on my development server, but on the production server no e-mail is sent. The exception is logged on the production server; it's just the e-mail that does not get sent. Here are my elmah configuration settings:

        <elmah>
          <security allowRemoteAccess="yes"/>
          <errorMail from="<MYGOOGLELOGIN>@googlemail.com"
                     to="<MYGOOGLELOGIN>@googlemail.com"
                     subject="ERROR From Elmah"
                     async="false"
                     smtpPort="587"
                     useSsl="true"
                     smtpServer="smtp.gmail.com"
                     userName="<MYGOOGLELOGIN>@googlemail.com"
                     password="<MYGOOGLEPASSWORD>" />
        </elmah>

    I've tried different mail servers, both local and remote, and I tried both synchronous and asynchronous mail sending but to no avail. Now I don't have the slightest idea how to proceed (apart from debugging Elmah on my production server, which seems like a lot of effort to set up). Please help! Thanks, Adrian Edit: I might also add that I tried switching off the firewall on the production server, but that did not make any difference either.

    Read the article

  • Solution for cleaning an image cache directory on the SD card

    - by synic
    I've got an app that is heavily based on remote images. They are usually displayed alongside some data in a ListView. A lot of these images are new, and a lot of the old ones will never be seen again. I'm currently storing all of these images on the SD card in a custom cache directory (ala evancharlton's magnatune app). I noticed that after about 10 days, the directory totals ~30MB. This is quite a bit more than I expected, and it leads me to believe that I need to come up with a good solution for cleaning out old files... and I just can't think of a great one. Maybe you can help. These are the ideas that I've had:
      1. Delete old files. When the app starts, start a background thread, and delete all files older than X days. This seems to pose a problem, though, in that, if the user actively uses the app, this could make the device sluggish if there are hundreds of files to delete.
      2. After creating the files on the SD card, call new File("/path/to/file").deleteOnExit(); This will cause all files to be deleted when the VM exits (I don't even know if this method works on Android). This is acceptable, because, even though the files need to be cached for the session, they don't need to be cached for the next session. It seems like this will also slow the device down if there are a lot of files to be deleted when the VM exits.
      3. Delete old files, up to a max number of files. Same as #1, but only delete N number of files at a time. I don't really like this idea, and if the user was very active, it may never be able to catch up and keep the cache directory clean.
    That's about all I've got. Any suggestions would be appreciated.
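
    None of the following appears in the post; it is just a minimal sketch of idea #1 (a background sweep that deletes cache files older than X days). The cache path and the 10-day threshold are made-up values for illustration:

        import java.io.File;
        import java.util.concurrent.TimeUnit;

        public class CacheSweeper {
            private static final long MAX_AGE_MS = TimeUnit.DAYS.toMillis(10); // arbitrary threshold

            // Run this from a background thread at app startup so the UI stays responsive.
            public static void sweep(File cacheDir) {
                File[] files = cacheDir.listFiles();
                if (files == null) {
                    return; // directory missing or unreadable
                }
                long cutoff = System.currentTimeMillis() - MAX_AGE_MS;
                for (File f : files) {
                    if (f.isFile() && f.lastModified() < cutoff) {
                        f.delete(); // ignore failures; the next sweep will retry
                    }
                }
            }

            public static void main(String[] args) {
                // Hypothetical cache location, for illustration only.
                final File cacheDir = new File("/sdcard/mycache");
                new Thread(new Runnable() {
                    public void run() {
                        sweep(cacheDir);
                    }
                }).start();
            }
        }

    Capping the number of files kept (idea #3) only changes the loop: sort the array by lastModified() and delete oldest-first until the directory is back under the chosen limit.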

    Read the article

  • GWT serialization issue

    - by Eddy
    I am having a heck of a time returning an ArrayList of objects that implement IsSerializable via RPC. The IsSerializable pojo contains one variable, a String, and has a 0 parameter constructor. I have removed the .gwt.rpc file from my war and still I get:

        com.google.gwt.user.client.rpc.SerializationException: Type 'com.test.myApp.client.model.Test' was not included in the set of types which can be serialized by this SerializationPolicy or its Class object could not be loaded. For security purposes, this type will not be serialized.: instance = com.test.myApp.client.model.Test@17a9692
            at com.google.gwt.user.server.rpc.impl.ServerSerializationStreamWriter.serialize(ServerSerializationStreamWriter.java:610)

    I am using GWT 2.0.2 with jdk 1.6.0_18. Any ideas on what might be going on or what I am doing wrong? Here is the code for the Test class, and the remote method is returning ArrayList. I even modified the code for it to just return one instance of Test with the same result: the exception above.

        package com.test.myApp.client.model;

        import com.google.gwt.user.client.rpc.IsSerializable;

        public class Test implements IsSerializable {
            private String s;

            public Test() {}

            public Test(String s) {
                this.s = s;
            }

            public String getS() {
                return s;
            }

            public void setS(String s) {
                this.s = s;
            }
        }

    Greatly appreciate the help! Eddy
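
    Not part of the post, but for context: the *.gwt.rpc serialization policy is generated from the types reachable through the GWT-RPC service interface, so a stale policy file in the deployed war, or a service signature that only exposes a raw or overly general type, are commonly suggested things to check for this exact message. A hedged sketch of a service pair that makes Test explicitly reachable follows; the interface names and the relative path are invented for illustration:

        import java.util.ArrayList;
        import com.google.gwt.user.client.rpc.AsyncCallback;
        import com.google.gwt.user.client.rpc.RemoteService;
        import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;
        import com.test.myApp.client.model.Test;

        // Synchronous interface: declaring ArrayList<Test> (rather than a raw List or Object)
        // keeps Test in the set of types the generated serialization policy covers.
        @RemoteServiceRelativePath("testService")
        interface TestService extends RemoteService {
            ArrayList<Test> getTests();
        }

        // Matching async interface used by the client.
        interface TestServiceAsync {
            void getTests(AsyncCallback<ArrayList<Test>> callback);
        }

    After changing the service or model classes, redeploying the freshly compiled war (so the new .gwt.rpc file replaces the old one) is usually part of the same fix.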

    Read the article

  • Set up Gitosis, but can't clone

    - by Tim Rupe
    I've set up Gitosis on a remote Ubuntu box which I will refer to as linuxserver as my host in the following commands. I'm also connecting from a Windows box using Cygwin. I followed the instructions according to: http://scie.nti.st/2007/11/14/hosting-git-repositories-the-easy-and-secure-way I had no problems up until I needed to clone the gitosis-admin repository to my local machine:

        git clone git@linuxserver:gitosis-admin.git

    When I do this, the command executes, but hangs there displaying nothing until I ctrl-c to get back to a command prompt. No messages are displayed at all. I'm pretty sure I have my ssh keys set up properly, because logging in using "ssh linuxserver" into my regular account works perfectly without asking for a password. Edit: Over the weekend I set up a near identical Ubuntu box at home, and had no problem setting up Gitosis. The only difference was that I was connecting from OSX instead of Cygwin. Edit: I've also discovered that when using the Bash Shell provided with "Git Extensions", I have no problems, so the issue definitely seems to be some kind of Cygwin conflict. Edit: Just an update, but about a month after posting this question, I switched to Mercurial, and found that I prefer it much more than git. Thanks for the suggestions, but I don't plan on going back to git to try any of them out.

    Read the article

  • Deadlock in ThreadPoolExecutor

    - by Vitaly
    I encountered a situation where ThreadPoolExecutor is parked in the execute(Runnable) method while all the thread pool threads are waiting in getTask and the workQueue is empty. Does anybody have any ideas? The ThreadPoolExecutor is created with an ArrayBlockingQueue, and corePoolSize == maximumPoolSize = 4.
    [Edit] To be more precise, the thread is blocked in ThreadPoolExecutor.execute(Runnable command). It has the task to execute, but doesn't do it.
    [Edit2] The executor is blocked somewhere inside the working queue (ArrayBlockingQueue).
    [Edit3] The callstack:

        thread = front_end(224)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:747)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:778)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1114)
        at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
        at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
        at java.util.concurrent.ArrayBlockingQueue.offer(ArrayBlockingQueue.java:224)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:653)
        at net.listenThread.WorkersPool.execute(WorkersPool.java:45)

    At the same time the workQueue is empty (checked using remote debug).
    [Edit4] Code working with ThreadPoolExecutor:

        public WorkersPool(int size) {
          pool = new ThreadPoolExecutor(size, size, IDLE_WORKER_THREAD_TIMEOUT, TimeUnit.SECONDS,
              new ArrayBlockingQueue<Runnable>(WORK_QUEUE_CAPACITY),
              new ThreadFactory() {
                @NotNull
                private final AtomicInteger threadsCount = new AtomicInteger(0);

                @NotNull
                public Thread newThread(@NotNull Runnable r) {
                  final Thread thread = new Thread(r);
                  thread.setName("net_worker_" + threadsCount.incrementAndGet());
                  return thread;
                }
              },
              new RejectedExecutionHandler() {
                public void rejectedExecution(@Nullable Runnable r, @Nullable ThreadPoolExecutor executor) {
                  Verify.warning("new task " + r + " is discarded");
                }
              });
        }

        public void execute(@NotNull Runnable task) {
          pool.execute(task);
        }

        public void stopWorkers() throws WorkersTerminationFailedException {
          pool.shutdownNow();
          try {
            pool.awaitTermination(THREAD_TERMINATION_WAIT_TIME, TimeUnit.SECONDS);
          } catch (InterruptedException e) {
            throw new WorkersTerminationFailedException("Workers-pool termination failed", e);
          }
        }
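
    Not from the original post, but when execute() appears stuck on the queue's internal lock, one low-effort way to confirm (or rule out) an actual JVM-level deadlock is to ask the runtime directly while the hang is happening. A small diagnostic sketch using the standard management API:

        import java.lang.management.ManagementFactory;
        import java.lang.management.ThreadInfo;
        import java.lang.management.ThreadMXBean;

        public class DeadlockProbe {
            // Call this, e.g. from a watchdog thread, while execute() appears stuck.
            public static void dumpDeadlocks() {
                ThreadMXBean mx = ManagementFactory.getThreadMXBean();
                long[] ids = mx.findDeadlockedThreads(); // covers monitors and j.u.c ownable synchronizers
                if (ids == null) {
                    System.out.println("No deadlocked threads reported by the JVM");
                    return;
                }
                for (ThreadInfo info : mx.getThreadInfo(ids, true, true)) {
                    System.out.println(info); // shows which thread owns the lock the parked thread waits on
                }
            }
        }

    A full thread dump (jstack, or kill -3 on the JVM) taken at the same moment would show the same ownership information for every thread, including the worker threads parked in getTask.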

    Read the article

  • YUI "Get" utility to parse JSON response?

    - by Sean
    The documentation page for the YUI "Get" utility says: Get Utility is ideal for loading your own scripts or CSS progressively (lazy-loading) or for retrieving cross-domain JSON data from sources in which you have total trust. ...but doesn't have any actual examples for how to do so. Their one example doesn't actually request a JSON document from a remote server, but instead a document containing actual JavaScript along with the JSON data. I'm just interested in the JSON response from the Google Maps API HTTP (REST) interface. Because I can't do cross-site scripting with the "Connect" utility, I am trying the "Get" utility. But merely inserting some JSON data into the page isn't going to do anything, of course. I have to assign it to a variable. But how? Also, just inserting JSON data into the page makes Firefox complain that there's a JavaScript error. And understandably! Plain ol' JSON data isn't going to parse as valid JavaScript. Any ideas?

    Read the article
