Search Results

Search found 73679 results on 2948 pages for 'get http client info'.

Page 85/2948 | < Previous Page | 81 82 83 84 85 86 87 88 89 90 91 92  | Next Page >

  • Is there a way to test HTTP Live Streaming via an iSight camera?

    - by bpapa
    I'm working on an iPhone app that will use HTTP Live Streaming. Using Apple's provided tools (particularly mediafilesegmenter), I'm able to successfully segment and serve an archived video. Now I want to test the live-streaming side. I don't own any sort of camcorder; I just have the iSight built into my Mac. Is there a way to leverage this camera to test live streaming? Run the iSight from the command line, maybe? If so, I need a port number to point mediastreamsegmenter at.
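
    For illustration only, a rough sketch of one way to feed the iSight into the segmenter, assuming ffmpeg is installed with AVFoundation capture support; the device index, bitrate, and port are illustrative, and mediastreamsegmenter's exact flags should be double-checked against its man page:

        # Capture the built-in iSight and push an MPEG-TS stream to a local UDP port
        ffmpeg -f avfoundation -framerate 30 -i "0" \
               -c:v libx264 -b:v 500k -f mpegts udp://127.0.0.1:9121

        # In another shell, have mediastreamsegmenter read from that same port
        mediastreamsegmenter -f /Library/WebServer/Documents/stream 127.0.0.1:9121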

    Read the article

  • HTTP through a proxy server is not allowed

    - by jidma
    When I try to connect to my Tomcat server on http://<servername>:8080 it works fine, but from another ISP I get the following error: "HTTP through a proxy server is not allowed." Some ISPs apparently don't allow HTTP over port 8080, because they assume the client is using a proxy. I also have httpd running on port 80 for my website. So, to avoid the proxy error, I would like to set up the following routing: if the user connects to http://<servername>, the website is served by Apache; if the user connects to http://<servername>/AppName, the request is routed to port 8080, without the client (or their ISP) knowing. Is that possible (using iptables or something else)? Thank you
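
    As a sketch of how the Apache side of that routing is often done, assuming mod_proxy and mod_proxy_http are enabled and Tomcat listens on localhost:8080; the /AppName path is the one from the question, everything else is illustrative:

        <VirtualHost *:80>
            ServerName servername
            # The existing website keeps being served by Apache
            DocumentRoot /var/www/html

            # Forward only /AppName to Tomcat so the client never sees port 8080
            ProxyPass        /AppName http://localhost:8080/AppName
            ProxyPassReverse /AppName http://localhost:8080/AppName
        </VirtualHost>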

    Read the article

  • How to resolve a "driver failure" error in the Cisco VPN client connecting from a Windows 7 client

    - by JosephStyons
    I have recently upgraded my laptop from Windows Vista SP1 to Windows 7 Professional. After the upgrade, if I try to use the Cisco VPN client to connect to a network, I get this message: Secure VPN Connection terminated locally by the Client. Reason 440: Driver Failure. Prior to the upgrade, I was able to connect with no problems. The version of the client I am using is 5.0.05.0290.

    Read the article

  • How to upgrade all dependencies to a specific version

    - by Calm Storm
    Hi, I ran mvn dependency:tree and got a tree of dependencies. My question is: my project depends on many modules which internally depend on many Spring artifacts, and there are a few version clashes. I want to upgrade all Spring-related libraries to, say, the latest one (2.6.x or above). What is the preferred way to do this? Should I declare all the dependencies (spring-context, spring-support, and 10 other artifacts) in my pom.xml and point them to 2.6.x? Is there a better method?

        [INFO] +- com.xxxx:yyy-jar:jar:1.0-SNAPSHOT:compile
        [INFO] |  +- com.xxxx:zzz-commons:jar:1.0-SNAPSHOT:compile
        [INFO] |  |  +- org.springframework:spring-dao:jar:2.0.7:compile
        [INFO] |  |  +- org.springframework:spring-jdbc:jar:2.0.7:compile
        [INFO] |  |  +- org.springframework:spring-web:jar:2.0.7:compile
        [INFO] |  |  +- org.springframework:spring-support:jar:2.0.7:compile
        [INFO] |  |  +- net.sf.ehcache:ehcache:jar:1.2:compile
        [INFO] |  |  +- commons-collections:commons-collections:jar:3.2:compile
        [INFO] |  |  +- aspectj:aspectjweaver:jar:1.5.3:compile
        [INFO] |  |  +- betex-commons:betex-commons:jar:5.5.1-2:compile
        [INFO] |  |  \- javax.servlet:servlet-api:jar:2.4:compile
        [INFO] |  +- org.springframework:spring-beans:jar:2.0.7:compile
        [INFO] |  +- org.springframework:spring-jmx:jar:2.0.7:compile
        [INFO] |  +- org.springframework:spring-remoting:jar:2.0.7:compile
        [INFO] |  +- org.apache.cxf:cxf-rt-core:jar:2.0.2-incubator:compile
        [INFO] |  |  +- org.apache.cxf:cxf-api:jar:2.0.2-incubator:compile
        [INFO] |  |  |  +- org.apache.geronimo.specs:geronimo-activation_1.1_spec:jar:1.0-M1:compile
        [INFO] |  |  |  +- org.codehaus.woodstox:wstx-asl:jar:3.2.1:compile
        [INFO] |  |  |  +- org.apache.neethi:neethi:jar:2.0.2:compile
        [INFO] |  |  |  \- org.apache.cxf:cxf-common-schemas:jar:2.0.2-incubator:compile

    UPDATE: I have removed the extra question about "\-", so my question is now what the subject asks for :)
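
    As an illustration of the dependencyManagement approach alluded to above, a minimal sketch that pins every Spring artifact to a single version; the version number and the artifact shown are placeholders, not values from the original post:

        <properties>
            <spring.version>2.5.6</spring.version>
        </properties>

        <dependencyManagement>
            <dependencies>
                <!-- Repeat for each Spring artifact that shows up in the tree
                     (spring-jdbc, spring-web, spring-support, ...) -->
                <dependency>
                    <groupId>org.springframework</groupId>
                    <artifactId>spring-beans</artifactId>
                    <version>${spring.version}</version>
                </dependency>
            </dependencies>
        </dependencyManagement>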

    Read the article

  • Server http://www.myopenid.com/server responds that the 'check_authentication' call is not valid

    - by viatropos
    I've been struggling with this for a few days now and haven't pinpointed the problem. I am trying to get OpenID to work in Rails 2.3 and Rails 3, using ruby-openid, rack-openid, and open_id_authentication. I am logging in using my viatropos.myopenid.com account, but it consistently returns this error: "Server http://www.myopenid.com/server responds that the 'check_authentication' call is not valid". What could that be from? It's not a very descriptive error... Does it have to do with something Ruby-specific, or is this entirely on the OpenID protocol side of things? More specifically, I am using Authlogic and ActiveRecord, so could this be a problem with my User or UserSession models somehow? Or is it more to do with the header or request? The response I'm getting in Ruby (from a puts inside ruby-openid) is:

        #<OpenID::Consumer::FailureResponse:0x25e282c @reference=nil, @endpoint=#<OpenID::OpenIDServiceEndpoint:0x2601984 @local_id="http://viatropos.myopenid.com/", @display_identifier=nil, @type_uris=["http://specs.openid.net/auth/2.0/signon", "http://openid.net/sreg/1.0", "http://openid.net/extensions/sreg/1.1", "http://schemas.openid.net/pape/policies/2007/06/phishing-resistant", "http://openid.net/srv/ax/1.0"], @used_yadis=true, @server_url="http://www.myopenid.com/server", @canonical_id=nil, @claimed_id="http://viatropos.myopenid.com/">, @message="Server http://www.myopenid.com/server responds that the 'check_authentication' call is not valid", @contact=nil>

    Any tips would be greatly appreciated. Thanks

    Read the article

  • Maven doesn't see my <repository> in <distributionManagement>

    - by Ondra Žižka
    To make Maven "deploy" to a directory, I use this:

        <distributionManagement>
          <downloadUrl>http://code.google.com/p/junitdiff/downloads/list</downloadUrl>
          <repository>
            <id>local-hack-repo</id>
            <name>LocalDir</name>
            <url>file://${project.basedir}/dist-maven</url>
          </repository>
          <snapshotRepository>
            <id>jboss-snapshots-repository</id>
            <name>JBoss Snapshots Repository</name>
            <!-- <url>https://repository.jboss.org/nexus/content/repositories/snapshots</url> -->
            <url>file://${project.basedir}/dist-maven</url>
          </snapshotRepository>
        </distributionManagement>

    This appears in the effective POM:

        ...
        <distributionManagement>
          <repository>
            <id>local-hack-repo</id>
            <name>LocalDir</name>
            <url>file:///home/ondra/work/TOOLS/JUnitDiff/github/dist-maven</url>
          </repository>
          <snapshotRepository>
            <id>jboss-snapshots-repository</id>
            <name>JBoss Snapshots Repository</name>
            <url>file:///home/ondra/work/TOOLS/JUnitDiff/github/dist-maven</url>
          </snapshotRepository>
          <downloadUrl>http://code.google.com/p/junitdiff/downloads/list</downloadUrl>
        </distributionManagement>

    But still, Maven insists that it's not there:

        [INFO] [ERROR] Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy (default-deploy) on project JUnitDiff: Deployment failed: repository element was not specified in the POM inside distributionManagement element or in -DaltDeploymentRepository=id::layout::url parameter -> [Help 1]
        [INFO] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy (default-deploy) on project JUnitDiff: Deployment failed: repository element was not specified in the POM inside distributionManagement element or in -DaltDeploymentRepository=id::layout::url parameter
        [INFO]     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217)
        [INFO]     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
        [INFO]     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
        [INFO]     at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
        [INFO]     at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
        [INFO]     at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
        [INFO]     at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
        [INFO]     at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
        [INFO]     at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
        [INFO]     at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
        [INFO]     at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
        [INFO]     at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
        [INFO]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        [INFO]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        [INFO]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        [INFO]     at java.lang.reflect.Method.invoke(Method.java:601)
        [INFO]     at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
        [INFO]     at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
        [INFO]     at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
        [INFO]     at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
        [INFO] Caused by: org.apache.maven.plugin.MojoExecutionException: Deployment failed: repository element was not specified in the POM inside distributionManagement element or in -DaltDeploymentRepository=id::layout::url parameter
        [INFO]     at org.apache.maven.plugin.deploy.DeployMojo.getDeploymentRepository(DeployMojo.java:235)
        [INFO]     at org.apache.maven.plugin.deploy.DeployMojo.execute(DeployMojo.java:118)
        [INFO]     at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
        [INFO]     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
        [INFO] ... 19 more

    I am using it through the maven-release-plugin. What's wrong?
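
    As the error message itself hints, the target repository can also be supplied on the command line; a sketch using the same id and directory as the POM above, with "default" as the layout in the id::layout::url form the plugin expects (worth verifying in this setup, since maven-release-plugin forks a separate deploy step):

        mvn deploy -DaltDeploymentRepository=local-hack-repo::default::file:///home/ondra/work/TOOLS/JUnitDiff/github/dist-maven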

    Read the article

  • Unable to map to web folder using WebDAV client on Windows Server 2008 R2

    - by user74989
    I have a client running Windows Server 2008 R2 on several servers. One of the servers is also running SharePoint 3.0, and my client has created a web folder to map to. I can map to the web folder from all Server 2008 R2 boxes that have the WebDAV client (part of the Desktop Experience feature) installed, except for the server the folder resides on. When I attempt to map to the web folder on the server on which the folder resides, I am repeatedly prompted to enter my credentials. I am using the same account that I used to map the web folder on the other servers. I have also tried mapping from the command line and receive 'Access Denied'. What may be causing the problem? I would think that if I can map to the drive from one server, I should be able to map the drive from the rest as long as the WebDAV client is installed, especially on the server where the folder is located. Jesse
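
    One commonly cited cause when a server fails to authenticate against a share hosted on itself is the Windows loopback check; a sketch of the frequently referenced registry workaround, offered as an assumption rather than a confirmed fix for this case (it has security implications worth reviewing first):

        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f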

    Read the article

  • List of social networks which allow developers to find out friend of friends info

    - by Jack
    I have been working on social application development for some time now. I now need to build an application which makes use of friend-of-friends data. Any info like friend count, interests, location, etc. would be helpful. Here's the list I have so far: 1) networks where you can find info about your friends of friends: Twitter, Digg; 2) the complement of the above: Facebook, MySpace, Orkut. I am more interested in the latter category. Any help will be appreciated.

    Read the article

  • Do I need to use http redirect code 302 or 307?

    - by Iain Fraser
    I am working on a CMS that uses a search facility to output a list of content items. You can use this facility as a search engine, but in this instance I am using it to output the current month's Media Releases from an archive of all Media Releases. The default parameters for these "Data Lists", as they are called, don't allow you to specify "current month" or "current year" for publication date - only "last x days" or "from dateA to dateB". The search facility will accept query-string parameters, though, so I intend to code around it like this:
    1. Page loads.
    2. How many days into the current month are we?
    3. Do we have a query string that asks for a list including this many days?
    4. If no, redirect the client back to this page with the appropriate query string included.
    5. If yes, allow the CMS to process the query.
    Now here's the rub. Suppose the spider from your favourite search engine comes along and tries to index your main Media Releases page. If you were to use a 301 redirect to the default query page, the spider would assume the main page was defunct and would add the query page to its index instead of the main page. Now, I see that 302 and 307 indicate that a page has been moved temporarily; if I use one of these, are spiders likely to put the main page into their index like I want them to? Thanks very much in advance for your help and advice. Kind regards, Iain
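
    A minimal PHP sketch of the redirect step described above, assuming the CMS pages can run PHP and that the list accepts a hypothetical "days" query-string parameter at a hypothetical /media-releases path (both are assumptions for illustration):

        <?php
        // Day of the month = how many days into the current month we are
        $days = (int) date('j');

        // If the query string doesn't already ask for that many days,
        // bounce the client back with it, using a temporary (302) redirect
        if (!isset($_GET['days']) || (int) $_GET['days'] !== $days) {
            header('Location: /media-releases?days=' . $days, true, 302);
            exit;
        }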

    Read the article

  • NFS Client reports Permission Denied, Server reports Permission Granted

    - by VxJasonxV
    I have two RedHat 4 servers. The client is 4.6, the server is 4.5. I'm attempting to mount a share from the server onto the client via NFS. The /etc/exports configuration is as follows:

        /opt/data/config bkup(rw,no_root_squash,async)
        /opt/data/db     bkup(rw,no_root_squash,async)

    exportfs returns these (among other) shares, and nfs is running according to ps output. I've been attempting to use autofs on the client, but have opted to just mount the share manually considering the issues I'm having. So, I issue the mount request:

        mount dist:/opt/data/config /mnt/config
        mount: dist:/opt/data/config failed, reason given by server: Permission denied

    OK, so let's see what the server has to say for itself:

        May 6 23:17:55 dist mountd[3782]: authenticated mount request from bkup:662 for /opt/data/config (/opt/data/config)

    It says it allowed the mount to take place. How can I diagnose why the client and server are disagreeing on the result?
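
    A couple of diagnostic commands that are often useful here, as a sketch; dist and bkup are the host names from the question, and this assumes the mismatch may come from how the server resolves the client's name against the export list:

        # On the server: re-read /etc/exports and list what is actually exported, and to whom
        exportfs -ra
        exportfs -v

        # On the client: ask the server for its export list as this client sees it
        showmount -e dist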

    Read the article

  • WCF 3.5 Service and multiple http bindings

    - by mortenvpdk
    Hi, I can't get my WCF service to work with more than one HTTP binding. In IIS 7 I have two bindings, http://service and http://service.test, both on port 80. In my web.config I have added the baseAddressPrefixFilters, but I can't add more than one:

        <serviceHostingEnvironment>
          <baseAddressPrefixFilters>
            <add prefix="http://service"/>
            <add prefix="http://service.test"/>
          </baseAddressPrefixFilters>
        </serviceHostingEnvironment>

    This gives almost the same error ("This collection already contains an address with scheme http. There can be at most one address per scheme in this collection.") as if no filters were specified at all ("This collection already contains an address with scheme http. There can be at most one address per scheme in this collection. Parameter name: item"). If I add only one filter, then the service works but only responds on the added filter address. I've also tried specifying multiple endpoints (with only one filter), like:

        <endpoint address="http://service.test" binding="basicHttpBinding" bindingConfiguration="" contract="IService" />
        <endpoint address="http://service" binding="basicHttpBinding" bindingConfiguration="" contract="IService" />

    Then still only the address also specified in the filter works, and the other returns this error: Server Error in Application "ISPSERVICE" - HTTP Error 400.0 - Bad Request. Regards, Morten

    Read the article

  • Need to add an array into another array at a specified key value

    - by sologhost
    Ok, I have an array like so, but it's not guaranteed to be laid out in this order all of the time...

        $array = array(
            'sadness' => array(
                'info' => 'some info',
                'info2' => 'more info',
                'value' => 'value',
            ),
            'happiness' => array(
                'info' => 'some info',
                'info2' => 'more info',
                'value' => 'the value',
            ),
            'peace' => array(
                'info' => 'some info',
                'info2' => 'more info',
                'value' => 'the value',
            )
        );

    Ok, and I'd like to throw in another array right after the 'happiness' key is defined. I can't use the 'peace' key as the anchor, since the new entry must go directly after 'happiness', and 'peace' might not come after 'happiness' as this array changes. So here's what I need to add directly after 'happiness'...

        $another_array['love'] = array(
            'info' => 'some info',
            'info2' => 'more info',
            'value' => 'the value of love'
        );

    So the final output, after it gets inserted directly after 'happiness', should look like this:

        $array = array(
            'sadness' => array(
                'info' => 'some info',
                'info2' => 'more info',
                'value' => 'value',
            ),
            'happiness' => array(
                'info' => 'some info',
                'info2' => 'more info',
                'value' => 'the value',
            ),
            'love' => array(
                'info' => 'some info',
                'info2' => 'more info',
                'value' => 'the value of love',
            ),
            'peace' => array(
                'info' => 'some info',
                'info2' => 'more info',
                'value' => 'the value',
            )
        );

    Can someone please give me a hand with this? Using array_shift, array_pop, or array_merge doesn't help me at all, since those work at the beginning and at the end of the array. I need to place it directly after a KEY position within $array. Thanks :)
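
    A minimal sketch of one way to do this, assuming the string keys must be preserved (array_splice would not keep the inserted 'love' key, so the array is split and re-merged instead):

        <?php
        // Position right after 'happiness' in the current key order
        $offset = array_search('happiness', array_keys($array)) + 1;

        $array = array_merge(
            array_slice($array, 0, $offset, true),    // everything up to and including 'happiness'
            $another_array,                           // the 'love' entry
            array_slice($array, $offset, null, true)  // the rest ('peace', ...)
        );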

    Read the article

  • What does the \- mean in the mvn dependency tree output

    - by Calm Storm
    Hi, I ran mvn dependency:tree and got a tree of dependencies; the output looks like the listing below. I want to know what the "\-" symbol that is shown at times means, versus the "+-" symbol shown for other dependencies (it doesn't seem to be the scope). My actual question is: my project depends on many modules which internally depend on many Spring artifacts, and there are a few version clashes. I want to upgrade all Spring-related libraries to, say, the latest one (2.6.x or above). What is the preferred way to do this? Should I declare all the dependencies (spring-context, spring-support, and 10 other artifacts) in my pom.xml and point them to 2.6.x? Is there a better method?

        [INFO] +- com.xxxx:yyy-jar:jar:1.0-SNAPSHOT:compile
        [INFO] |  +- com.xxxx:zzz-commons:jar:1.0-SNAPSHOT:compile
        [INFO] |  |  +- org.springframework:spring-dao:jar:2.0.7:compile
        [INFO] |  |  +- org.springframework:spring-jdbc:jar:2.0.7:compile
        [INFO] |  |  +- org.springframework:spring-web:jar:2.0.7:compile
        [INFO] |  |  +- org.springframework:spring-support:jar:2.0.7:compile
        [INFO] |  |  +- net.sf.ehcache:ehcache:jar:1.2:compile
        [INFO] |  |  +- commons-collections:commons-collections:jar:3.2:compile
        [INFO] |  |  +- aspectj:aspectjweaver:jar:1.5.3:compile
        [INFO] |  |  +- betex-commons:betex-commons:jar:5.5.1-2:compile
        [INFO] |  |  \- javax.servlet:servlet-api:jar:2.4:compile
        [INFO] |  +- org.springframework:spring-beans:jar:2.0.7:compile
        [INFO] |  +- org.springframework:spring-jmx:jar:2.0.7:compile
        [INFO] |  +- org.springframework:spring-remoting:jar:2.0.7:compile
        [INFO] |  +- org.apache.cxf:cxf-rt-core:jar:2.0.2-incubator:compile
        [INFO] |  |  +- org.apache.cxf:cxf-api:jar:2.0.2-incubator:compile
        [INFO] |  |  |  +- org.apache.geronimo.specs:geronimo-activation_1.1_spec:jar:1.0-M1:compile
        [INFO] |  |  |  +- org.codehaus.woodstox:wstx-asl:jar:3.2.1:compile
        [INFO] |  |  |  +- org.apache.neethi:neethi:jar:2.0.2:compile
        [INFO] |  |  |  \- org.apache.cxf:cxf-common-schemas:jar:2.0.2-incubator:compile

    Read the article

  • Adobe Reader Wants Sensitive Email Details

    - by KDM
    When I run Adobe Reader, it tells me: "Either there is no default mail client or the current mail client cannot fulfill the messaging request. Please run Microsoft Outlook and set it as the default mail client." I have a couple of issues with this:
    1) It presupposes everyone has Microsoft Office installed. Not all home users have the budget or inclination for this.
    2) It presupposes everyone wants Microsoft Outlook to be their default mail client.
    3) I have Microsoft Office (incl. Outlook) installed and set as my default mail client. Even if I make it the default mail client from within the Adobe Reader preferences, that doesn't stop the dialog appearing.
    4) I thought I'd give Adobe Reader a new email address in the preferences, just to get it to stop bugging me. I notice, though, that it wants the SMTP and POP addresses and the account password. They have got to be kidding; I just want to view PDF files.
    How do I get the message to go away without telling Adobe my life story, giving them my mother's maiden name, my favourite movie, my place of birth, and the name of my first goldfish, and emptying the contents of my wallet for them?

    Read the article

  • Windows 7 VPN Client Default IPsec Configuration?

    - by bwerks
    As far as I can tell, the Windows VPN client doesn't provide a lot of flexibility in its IPsec settings. Assuming full configurability on the site end of a client-to-site VPN configuration, does anyone know how to configure the site to match the Windows client? Bonus points: how would I discover these settings for myself?

    Read the article

  • VMWare Guest Info - Wrong IP Returned

    - by Jon Bailey
    We're running a VDI environment with vSphere 4.0 and Oracle VDI 3.2.2 and are having a bit of a problem with users that connect to an IPsec VPN from within their VM. For some reason, once connected to the VPN, the VMware API returns GuestInfo.ipAddress as the VPN IP rather than the primary IP of the only NIC on the system. The IP address shown in net[0].ipAddress is the correct address and is what the vSphere client is reporting. Is there any way to get VMware Tools to report the net[0].ipAddress as GuestInfo.ipAddress? Below is sample output from the guestinfo.pl script; 172.16.1.2 is the example "bad" VPN address that our VDI software is seeing.

        VMXFLEX01 guestFamily: windowsGuest
        VMXFLEX01 guestFullName: Microsoft Windows XP Professional (32-bit)
        VMXFLEX01 guestId: winXPProGuest
        VMXFLEX01 guestState: running
        VMXFLEX01 hostName: VMXFLEX01
        VMXFLEX01 ipAddress: 172.16.1.2
        VMXFLEX01 toolsStatus: VMware Tools is running and the version is current.
        VMXFLEX01 toolsVersion: 8194
        VMXFLEX01 Screen - Height: 600
        VMXFLEX01 Screen - Width: 800
        VMXFLEX01 Disk[0]: Capacity 42935926784
        VMXFLEX01 Disk[0]: Path : C:\
        VMXFLEX01 Disk[0]: freespace : 33272619008
        VMXFLEX01 net[0] - connected : 1
        VMXFLEX01 net[0] - deviceConfigId : 4000
        VMXFLEX01 net[0] - macAddress : 00:50:56:95:1f:c9
        VMXFLEX01 net[0] - network : VM Network
        VMXFLEX01 net[0] - ipAddress : 10.0.0.2

    Read the article

  • Log - Server kernel: INFO: task httpd:000000 blocked for more than 120 seconds

    - by valter
    Almost every day my server crashes due to high server load, and even restarting Apache or MySQL doesn't solve the problem; I need to reboot the server, or it crashes again under the high load. The system log records something like this when it crashes:

        Aug 11 18:33:53 server kernel: INFO: task httpd:20008 blocked for more than 120 seconds.
        Aug 11 18:33:53 server kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        Aug 11 18:33:53 server kernel: httpd D ffffffff801538ac 0 20008 5816 20066 19809 (NOTLB)
        Aug 11 18:33:53 server kernel: ffff81025a299dc8 0000000000000082 ffff81033b4c0740 ffffffff80009a14
        Aug 11 18:33:53 server kernel: ffff8101063f8d80 0000000000000009 ffff8100b758f7e0 ffff8101c57187e0
        Aug 11 18:33:53 server kernel: 00009436d4100b6c 000000000001d50f ffff8100b758f9c8 000000083b531588
        Aug 11 18:33:53 server kernel: Call Trace:
        Aug 11 18:33:53 server kernel: [<ffffffff80009a14>] __link_path_walk+0x173/0xfb9
        Aug 11 18:33:53 server kernel: [<ffffffff8002cc16>] mntput_no_expire+0x19/0x89
        Aug 11 18:33:53 server kernel: [<ffffffff80063c4f>] __mutex_lock_slowpath+0x60/0x9b
        Aug 11 18:33:53 server kernel: [<ffffffff80023908>] __path_lookup_intent_open+0x56/0x97
        Aug 11 18:33:53 server kernel: [<ffffffff80063c99>] .text.lock.mutex+0xf/0x14
        Aug 11 18:33:53 server kernel: [<ffffffff8001b21f>] open_namei+0xea/0x712
        Aug 11 18:33:54 server kernel: [<ffffffff8002768a>] do_filp_open+0x1c/0x38
        Aug 11 18:33:54 server kernel: Firewall: *UDP_IN Blocked* IN=eth1 OUT= MAC=ff:ff:ff:ff:ff:ff:00:30:48:9e:6e:99:08:00 SRC=208.43.135.158 DST=255.255.255.255 LEN=151 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=38354 DPT=6112 LEN=131
        Aug 11 18:33:54 server kernel: [<ffffffff8001a061>] do_sys_open+0x44/0xbe
        Aug 11 18:33:54 server kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0

    I googled a lot trying to find a solution, but it looks like the solution is to update the kernel or the disk driver, things I don't know how to do. At http://bugs.centos.org/view.php?id=4515 a lot of people report similar problems, except that theirs are not related to httpd like mine. According to one member, one solution would be to add "elevator=noop" to /etc/grub.conf, as in this example:

        title CentOS (2.6.18-238.12.1.el5xen)
                root (hd0,0)
                kernel /vmlinuz-2.6.18-238.12.1.el5xen ro root=/dev/VolGroup00/LogVol00 elevator=noop
                initrd /initrd-2.6.18-238.12.1.el5xen.img

    Would this really solve the problem? My disks are working in RAID. Can this cause some problem on my server? Is there any other solution?

    Read the article

  • use SVN @ client end and CM synergy @ server side

    - by Ravisha
    We are using the CM Synergy client for version control, but we find it very complicated. We are mostly biased towards the SVN client tools. Is there a way to configure SVN on the client end but keep CM Synergy on the server side? This would help us a lot, because merging and conflict resolution are very simple in SVN. I do not know how to go about this; any initial help will be very helpful.
    CM Synergy: www.windriver.com/cgi-bin/partnerships/directory/viewProd.cgi?id=1451
    SVN: tortoisesvn.tigris.org/

    Read the article

  • Run Rails server on client's local machine for single tenancy [on hold]

    - by rigyt
    We are building a Rails application for a client that wants to run the app locally, close to their data center, for best performance. I had assumed we would use a standard web host. What requirements should we demand of the client's infrastructure? All I can think of so far is remote access for developers and a Linux machine. We don't have server admin expertise, so we would need to contract this out, and I doubt the client has the expertise either. Does this sound like a recipe for disaster?!

    Read the article

  • Log website performance client side.

    - by NitroxDM
    I have a client that is having an issue (it's slow) with a website on one of my servers. The server is a Windows 2003 box running WebSphere 6.1. I can't find anything in the logs that would indicate an issue. Is there some free software out there that can help me figure out why the site is slow on the client's end? The client has an IT department, but I need to be sure it's not my side of things.

    Read the article

  • Why does NX Client for Windows silently close after connection?

    - by pavel
    Hey! I connect remotely to my Ubuntu server from a Vista machine. Now I need to run a GUI application on the server (Wireshark), so I decided to use a FreeNX server/client to view the Ubuntu GUI on Vista. I have successfully installed FreeNX on Ubuntu and the NX Client on Vista. I was following this guide. Unfortunately, I now find myself stuck with the following problem: at the client, the !M logo window appears, but after a few seconds that window just closes, without even showing an error message. Guys, I'm really stuck, please help! Maybe I should have installed some graphical environment on the server? These are the details from the NX client; it seems there are no errors.

        -----------------
        Info: Display running with pid '7768' and handler '0x670d24'.
        NXPROXY - Version 3.4.0
        Copyright (C) 2001, 2007 NoMachine.
        See http://www.nomachine.com/ for more information.
        Info: Proxy running in client mode with pid '2168'.
        Session: Starting session at 'Sat Dec 19 10:58:35 2009'.
        Warning: Connected to remote version 3.3.0 with local version 3.4.0.
        Info: Connection with remote proxy completed.
        Info: Using WAN link parameters 768/24/1/0.
        Info: Using cache parameters 4/4096KB/16384KB/16384KB.
        Info: Using pack method 'adaptive-9' with session 'kde'.
        Info: Using ZLIB data compression 1/1/32.
        Info: Using ZLIB stream compression 1/1.
        Info: No suitable cache file found.
        Info: Forwarding X11 connections to display ':0'.
        Info: Listening to font server connections on port '11000'.
        Session: Session started at 'Sat Dec 19 10:58:35 2009'.
        Info: Established X server connection.
        Info: Using shared memory parameters 0/0K.
        Session: Terminating session at 'Sat Dec 19 10:58:37 2009'.
        Session: Session terminated at 'Sat Dec 19 10:58:37 2009'.
        -----------

    Read the article

  • Multiple client connecting to master MySQL over SSL

    - by Bastien974
    I successfully configured MySQL replication over SSL between two servers across the internet. Now I want a second server, in the same location as the replication slave, to open a connection to the master DB over SSL. I used the same command found here http://dev.mysql.com/doc/refman/5.1/en/secure-create-certs.html to generate a new set of client-cert.pem and client-key.pem with the same master DB ca-cert/key.pem, and I also used a different Common Name. When I try to initiate a connection between this new server and the master DB, it fails:

        mysql -hmasterdb -utestssl -p --ssl-ca=/var/lib/mysql/newcerts/ca-cert.pem --ssl-cert=/var/lib/mysql/newcerts/client-cert.pem --ssl-key=/var/lib/mysql/newcerts/client-key.pem
        ERROR 2026 (HY000): SSL connection error

    It works without SSL.
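
    Two quick checks that often help with this error, as a sketch; the paths and host name are taken from the question, and it assumes the new client certificate was meant to chain to the same CA the master uses:

        # Does the new client certificate actually verify against the CA the master trusts?
        openssl verify -CAfile /var/lib/mysql/newcerts/ca-cert.pem /var/lib/mysql/newcerts/client-cert.pem

        # Is SSL actually enabled on the master?
        mysql -hmasterdb -utestssl -p -e "SHOW VARIABLES LIKE 'have_ssl';"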

    Read the article

  • Content Length and Transfer Encoding Chunked nginx, node-http-proxy

    - by rampr
    I have the following setup: node-http-proxy acts as a reverse proxy, forwarding all requests to nginx or socket.io as necessary. My problem is this: when I send an HTTP DELETE request from the browser, node-http-proxy adds a "Transfer-Encoding: chunked" header because the request from the browser had no Content-Length. The request from the browser had no Content-Length because it had no body. nginx doesn't like the Transfer-Encoding: chunked header and throws a 411 asking for a Content-Length. The problem goes away when I send dummy data as part of the DELETE request: then there is a Content-Length, node-http-proxy doesn't add the Transfer-Encoding: chunked header, and nginx is happy. I want to understand whether node-http-proxy is working as expected when it adds a Transfer-Encoding: chunked header just because Content-Length is missing due to there being no request body.

    Read the article

  • NetBackup's bplist doesn't get user/group info for Windows files

    - by Gnustavo
    I'm trying to get information about storage consumption from NetBackup's bplist output. I'm running NBU 6.0MP5 on a RHEL 3 server. The server is backing up several Solaris, Linux, and Windows machines. When I use bplist to get information about files backed up on any UNIX machine I get something like this:

        # bplist -C unixclient -R 99 -l -s 01/28/2006 -e 01/29/2006 /
        drwxr-xr-x test ccase 0 Nov 16 09:28 /l/home2/test/
        -rw------- test ccase 4737 Jan 06 17:54 /l/home2/test/.bash_history
        -rw-rw-r-- test ccase 104 Nov 11 2004 /l/home2/test/.bashrc

    However, when I use it to list files backed up on any Windows client I can't get the user and group information. They both always appear as 'root'. Like this:

        # bplist -C winclient -t 13 -R 99 -l -s 02/20/2006 /
        drwx------ root root 0 Feb 20 14:26 /C/temp/
        -rwx------ root root 41 Feb 20 14:26 /C/temp/asdf.txt
        drwx------ root root 0 May 25 2004 /C/temp/CTRMNGR/

    Does anyone know why bplist doesn't show the correct user/group for Windows files? If it can't, is there a way to get that information using another command? Thanks. Gustavo.

    Read the article

  • Need only to change links from https to http to access files with no SSL?

    - by spirytus
    I have SSL enabled for subdomain.mydomain.com, so I can access files via https://subdomain.mydomain.com. Now please tell me if I'm right: if I have a file somewhere in subdomain.mydomain.com called index.php, I can securely access it via https://subdomain.mydomain.com/someFolder/index.php, but I can also access it via http://subdomain.mydomain.com/someFolder/index.php - this time the communication won't be encrypted, though. So does it come down to the links alone whether I access files on subdomain.mydomain.com securely or not? I will have another related question (and many more, probably), but will post it as a separate topic to keep things clean :)
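
    If the goal is ever to force the secure variant regardless of which link was followed, a common Apache sketch, assuming mod_rewrite is available and .htaccess overrides are allowed for the subdomain (illustrative only, not something the question asked for):

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]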

    Read the article
