Search Results

Search found 52872 results on 2115 pages for 'does the htmleditor control raise any client side events'.


  • Control-Backspace (unix-kill-rubout) for readline?

    - by Xepoch
    In readline(3) I should be able to map Control-Backspace to the same function as Control-W (unix-kill-rubout). Regardless of what I put in ~/.inputrc, I'm unable to get this recognized. For instance, \C-\b: unix-kill-rubout does not work. Can I map Control-Backspace to unix-kill-rubout in readline?
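
    A minimal ~/.inputrc sketch that may work, assuming your terminal emits ^H for Control-Backspace (press Ctrl-V followed by Control-Backspace at a prompt to see the actual sequence); if your readline build doesn't know unix-kill-rubout, the stock function names are unix-word-rubout (the usual C-w binding) or backward-kill-word:

        # guessing the terminal sends ^H for Control-Backspace; adjust to
        # whatever Ctrl-V shows you
        "\C-h": unix-word-rubout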

    Read the article

  • Shaping outbound Traffic to Control Download Speeds with Linux

    - by Kyle Brandt
    I have a situation where a server makes lots of requests to big webservers all at the same time. Currently, I have no control over the number or rate of requests from the application that does this. The responses from these webservers are more than the internet line can handle (basically, we are launching a DoS on ourselves). I am going to push to get this fixed at the application level, but for the time being, is there any way I can use traffic shaping on the Linux server to control this? I know I can only shape outbound traffic, but maybe there is a way I can slow the TCP responses so the other side detects congestion and backs off? If there is anything like this with tc, what might the configuration look like? The idea is that traffic control might help me decide which packets get dropped before they reach my router.
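
    A rough sketch of one approach worth testing (the interface name and rate are placeholders to adjust): an ingress policer on the server that simply drops inbound packets above a set rate, so the senders' TCP stacks see the loss and back off:

        # attach an ingress qdisc to the interface facing the internet (eth0 is a guess)
        tc qdisc add dev eth0 handle ffff: ingress

        # police all inbound IP traffic to ~8 Mbit/s, dropping the excess
        tc filter add dev eth0 parent ffff: protocol ip u32 \
            match u32 0 0 \
            police rate 8mbit burst 100k drop flowid :1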

    Read the article

  • OpenWRT + OpenVPN client forwarding from lan to vpn not working

    - by Dariusz Górecki
    I've an OpenWRT router with Backfire 10.03.1-rc3 (arch: brcm, 2.6 kernel). I've set up an OpenVPN client connecting my router to the workplace LAN, and it works nicely: I can connect from the router to several networks at work. My OpenVPN client uci config looks like:

        config 'openvpn' 'stream_client'
            option 'nobind' '1'
            option 'float' '1'
            option 'client' '1'
            option 'reneg_sec' '0'
            option 'management' '127.0.0.1 31194'
            option 'explicit_exit_notify' '1'
            option 'verb' '3'
            option 'persist_tun' '1'
            option 'persist_key' '1'
            list 'remote' 'remote.address.cutted'
            option 'ca' '/lib/uci/upload/cbid.openvpn.stream_client.ca'
            option 'key' '/lib/uci/upload/cbid.openvpn.stream_client.key'
            option 'cert' '/lib/uci/upload/cbid.openvpn.stream_client.cert'
            option 'enable' '1'
            option 'dev' 'tun1'

    I've set the 'STREAM_VPN' zone to allow in/out traffic, and I've added rules for zone-to-zone forwarding lan<-vpn and vpn<-lan:

        config 'zone'
            option 'name' 'stream_vpn'
            option 'network' 'stream_vpn'
            option 'input' 'ACCEPT'
            option 'output' 'ACCEPT'
            option 'forward' 'REJECT'

        config 'forwarding'
            option 'src' 'lan'
            option 'dest' 'stream_vpn'

        config 'forwarding'
            option 'src' 'stream_vpn'
            option 'dest' 'lan'

    And the interface config:

        config 'interface' 'stream_vpn'
            option 'proto' 'none'
            option 'ifname' 'tun1'
            option 'defaultroute' '0'
            option 'peerdns' '0'

    Now, from the router itself everything works nicely; the problem is that I cannot connect from a computer inside the LAN to hosts in the networks provided by the VPN connection. What have I missed, or what am I doing wrong? And how can I force a specified DNS server to be used when connected to the VPN? (I know the server should use the push DNS option, but it pushes only routes.)
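
    One sketch that may be worth testing, assuming the remote site has no route back to your LAN subnet: masquerade LAN traffic as it enters the tunnel so replies come back to the router, and hand out a DNS server explicitly (the DNS address is a placeholder, and whether 'list dns' is honored on a proto 'none' interface is worth checking; otherwise set it in the dhcp/dnsmasq config):

        config 'zone'
            option 'name' 'stream_vpn'
            option 'network' 'stream_vpn'
            option 'input' 'ACCEPT'
            option 'output' 'ACCEPT'
            option 'forward' 'REJECT'
            # hide LAN source addresses behind the tun1 address
            option 'masq' '1'

        config 'interface' 'stream_vpn'
            option 'proto' 'none'
            option 'ifname' 'tun1'
            option 'defaultroute' '0'
            option 'peerdns' '0'
            # hypothetical workplace DNS server; replace with the real one
            list 'dns' '10.0.0.53'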

    Read the article

  • SSH client not showing prompt after successful login

    - by user431949
    I'm having problems with my SSH client on Ubuntu 10.10. When I switch on my computer, open a Terminal and run ssh user@host, I get a password prompt; after I enter the right password, I get a prompt to execute my commands on the remote computer. Now the problem: after a little while (probably around 10 minutes), the terminal window stops accepting commands (no matter what I type, nothing shows). Once this happens, I close the Terminal window and try to start all over again by opening another one. But this time, after entering the right password, I don't get a welcome message or prompt; the cursor just keeps blinking on a new line. I ran the ssh command with the -v parameter, and the messages I get after a successful login are:

        debug1: Authentication succeeded (password).
        debug1: channel 0: new [client-session]
        debug1: Entering interactive session.
        debug1: Sending environment.
        debug1: Sending env LANG = en_GB.utf8

    Still the cursor keeps blinking on a new line without a prompt. However, the PuTTY SSH client works perfectly on the same machine. Thank you very much for your time; your help would be greatly appreciated.
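
    The freeze after roughly 10 idle minutes looks like a NAT device or firewall dropping the idle connection; a sketch of a client-side keepalive in ~/.ssh/config that may help (the Host pattern is a placeholder):

        Host *
            # send an application-level keepalive every 60s so stateful
            # firewalls don't expire the idle session
            ServerAliveInterval 60
            ServerAliveCountMax 3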

    Read the article

  • Getting 401 when using client certificate with IIS 7.5

    - by Jacob
    I'm trying to configure a web site hosted under IIS 7.5 so that requests to a specific location require client certificate authentication. With my current setup, I still get a "401 - Unauthorized: Access is denied due to invalid credentials" when accessing the location with my client cert. Here's the web.config fragment that sets things up:

        <location path="MyWebService.asmx">
          <system.webServer>
            <security>
              <access sslFlags="Ssl, SslNegotiateCert"/>
              <authentication>
                <windowsAuthentication enabled="false"/>
                <anonymousAuthentication enabled="false"/>
                <digestAuthentication enabled="false"/>
                <basicAuthentication enabled="false"/>
                <iisClientCertificateMappingAuthentication enabled="true" oneToOneCertificateMappingsEnabled="true">
                  <oneToOneMappings>
                    <add enabled="true" certificate="MIICFDCCAYGgAwIBAgIQ+I0z6z8OWqpBIJt2lJHi6jAJBgUrDgMCHQUAMCQxIjAgBgNVBAMTGURldiBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwHhcNMTAxMjI5MjI1ODE0WhcNMzkxMjMxMjM1OTU5WjAaMRgwFgYDVQQDEw9kZXYgY2xpZW50IGNlcnQwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBANJi10hI+Zt0OuNr6eduiUe6WwPtyMxh+hZtr/7eY3YezeJHC95Z+NqJCAW0n+ODHOsbkd3DuyK1YV+nKzyeGAJBDSFNdaMSnMtR6hQG47xKgtUphPFBKe64XXTG+ueQHkzOHmGuyHHD1fSli62i2V+NMG1SQqW9ed8NBN+lmqWZAgMBAAGjWTBXMFUGA1UdAQROMEyAENGUhUP+dENeJJ1nw3gR0NahJjAkMSIwIAYDVQQDExlEZXYgQ2VydGlmaWNhdGUgQXV0aG9yaXR5ghB6CLh2g6i5ikrpVODj8CpBMAkGBSsOAwIdBQADgYEAwwHjpVNWddgEY17i1kyG4gKxSTq0F3CMf1AdWVRUbNvJc+O68vcRaWEBZDo99MESIUjmNhjXxk4LDuvV1buPpwQmPbhb6mkm0BNIISapVP/cK0Htu4bbjYAraT6JP5Km5qZCc0iHZQJZuch7Uy6G9kXQXaweJMiHL06+GHx355Y="/>
                  </oneToOneMappings>
                </iisClientCertificateMappingAuthentication>
              </authentication>
            </security>
          </system.webServer>
        </location>

    The client certificate I'm using in my web browser matches what I've placed in the web.config. What am I doing wrong here?
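
    One common culprit with one-to-one mappings is that the certificate attribute isn't byte-for-byte the base64 of the certificate blob the client actually presents; a sketch to dump the exact value for comparison against web.config (the subject filter is a placeholder):

        # PowerShell: dump the base64 of the client cert as IIS expects it
        $cert = Get-ChildItem Cert:\CurrentUser\My |
            Where-Object { $_.Subject -like '*dev client cert*' }   # placeholder filter
        [Convert]::ToBase64String($cert.RawData)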

    Read the article

  • Setting Outlook Web Access as default mail client in Firefox

    - by Barton Chittenden
    The company that I work for is nice enough to let me use Linux for work, but they use Outlook. I've been using Outlook Web Access (OWA) as my mail client, which is more or less acceptable. The only problem is that whenever I click on a mailto link or use the "Send Link" menu option in Firefox, I'm prompted to use Evolution. Since connecting to an Exchange server through Evolution seems to be sketchy at best, I would like to set OWA as my default mail client. I'm using Firefox 3.6.13. Here's what I've found so far: the default mail client can be set at Edit Menu -> Preferences -> Applications Tab -> mailto. When I click on the drop-down menu, one of the options is "Application Details". This shows two options by default, Google and Yahoo! Mail, and each of these shows how to launch that service:

        For Gmail:  https://mail.google.com/mail/?extsrc=mailto&url=%s
        For Yahoo!: http://compose.mail.yahoo.com/?To=%s

    I presume that Outlook Web Access has something similar. Based on the googling that I've done so far, I think that this should look something like https://<server name>/owa/?cmd=compose... A little experimentation on my part shows that the following will compose a message:

        https://<email server>/owa/?ae=Item&a=New&t=IPM.Note

    but I still don't know how to specify the recipient, subject or body of the email to be composed. What I want to know is: a) does anyone know the URL parameters to compose a mailto in Outlook Web Access, including subject, recipient and body? else b) can someone give me a decent pointer for where to get this information?
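
    A sketch of one direction that might work, with the caveat that the extra OWA compose parameter is a guess to verify against your OWA version (the to=%s part is an assumption, and the OWA host below is a placeholder): register OWA as a mailto handler from the browser itself; Firefox of this era gates registering a handler for a different host behind the about:config pref gecko.handlerService.allowRegisterFromDifferentHost:

        // run from a page on the OWA host (or flip the about:config pref first);
        // to=%s is an assumption, confirm it against your OWA build
        navigator.registerProtocolHandler(
          "mailto",
          "https://mail.example.com/owa/?ae=Item&a=New&t=IPM.Note&to=%s",
          "Outlook Web Access");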

    Read the article

  • 530 5.7.1 Client was not authenticated Exchange 2010 for some computers within mask

    - by user1636309
    We have a classic problem with "Client was not authenticated", but with a specific twist:

        - We have an Exchange 2010 cluster, let's say EX01 and EX02; the connection is always to smtp.acme.com and is then switched through a load balancer.
        - We have an application server, call it APP01.
        - There are clients connected to APP01.
        - There is a need for anonymous mail relay from both the clients and APP01.

    The Anonymous Users setting on the Exchange connector is DISABLED, but the specific computers - APP01 and the clients by mask, let's say 192.168.2.* - are enabled. For internal relay, a "Send Connector" is created, and then the above IP addresses are added to the connector to allow computers, servers, or any other device such as a copy machine to use the Exchange server to relay email to recipients. The problem is that the relay works for APP01 and some clients, but not others (we get "Client not Authenticated") - all inside the same network and the same mask. This is basically what we do to test it outside of our application: http://smtp25.blogspot.sk/2009/04/530-571-client-was-not-authenticated.html So, I am looking for ideas: What can be the reason for such strange behaviour? Where can I see a trace of what's going on at the Exchange side?
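
    For the "where can I see a trace" part, a sketch of the usual Exchange-side checks (the connector name is a placeholder, and note that the IP-based relay whitelist normally lives on a Receive Connector rather than a Send Connector); verbose protocol logging shows which connector each client lands on and why authentication fails:

        # list receive connectors with the IP ranges and permission groups they allow
        Get-ReceiveConnector | Format-List Name,Bindings,RemoteIPRanges,PermissionGroups,AuthMechanism

        # turn on verbose protocol logging for the relay connector (name is a placeholder);
        # logs then appear under the transport role's ProtocolLog\SmtpReceive folder on each server
        Set-ReceiveConnector "EX01\Anonymous Relay" -ProtocolLoggingLevel Verbose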

    Read the article

  • Getting prompted for password accessing a page through script even when client and server are in the same domain

    - by Munawar
    I'm trying to pull up an internal webpage in an automated fashion using the methods in InternetExplorer.Application from VBScript, but I'm getting prompted for a password, although the client and the server are both in the same domain. Predictably, when I manually try to access the web page, I don't have any problem; only when I try using cscript.exe or iexplore.exe do I get prompted. I'm trying to automate some of the smoke tests we do after a new build is deployed, but this password prompt is getting in the way. The system specs:

        Client machine - IE 7.0, OS is Windows Server 2003
        Server machine - Windows Server 2008
        Both are in the same domain.

    So far I've unsuccessfully tried the following to automate the password input:

        system.diagnostics.process.start

        var WinHttpReq = new ActiveXObject("WinHttp.WinHttpRequest.5.1");
        WinHttpReq.Open("GET", "http://website", false);
        WinHttpReq.SetCredentials("username", "password", 0);

    Nothing seems to work. I checked in IIS; we have only anonymous and forms authentication enabled. Is there any configuration setting on the client machine that can be tweaked to bypass this? Although I'd hate to do it, since you step on the toes of twenty people trying to do that. The preferable way would be to input it programmatically, if that's possible. Also, if you can suggest a more appropriate forum, that'd be great too. Please help.
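
    One thing that may be worth a try with the WinHttpRequest approach you already have (a sketch; the URL is your placeholder): tell WinHTTP to send the current Windows credentials automatically instead of calling SetCredentials:

        var WinHttpReq = new ActiveXObject("WinHttp.WinHttpRequest.5.1");
        WinHttpReq.Open("GET", "http://website", false);
        WinHttpReq.SetAutoLogonPolicy(0);   // 0 = always send the caller's Windows credentials
        WinHttpReq.Send();
        WScript.Echo(WinHttpReq.Status + " " + WinHttpReq.StatusText);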

    Read the article

  • "Network Error - 53" while trying to mount NFS share in Windows Server 2008 client

    - by Mike B
    CentOS | Windows 2008. I've got a CentOS 5.5 server running nfsd. On the Windows side, I'm running Windows Server 2008 R2 Enterprise. I have the "File Services" server role enabled and both Client for NFS and Server for NFS are on. I'm able to successfully connect to and mount the CentOS NFS share from other Linux systems, but am experiencing errors connecting to it from Windows. When I try to connect, I get the following:

        C:\Users\fooadmin>mount -o anon 10.10.10.10:/share/ z:
        Network Error - 53

        Type 'NET HELPMSG 53' for more information.

    (IP and share name have been changed to protect the innocent :-) ) Additional information:

        - I've verified low-level network connectivity between the Windows client and the NFS server with telnet (to NFS on TCP/2049), so I know the port is open.
        - I've further confirmed that inbound and outbound firewall ports are present and enabled.
        - I came across a Microsoft tech note that suggested changing the "Provider Order" so "NFS Network" is above other items like Microsoft Windows Network. I changed this and restarted the NFS client - no luck.
        - I've confirmed that the share folder on the NFS server is readable/writable by all (777).
        - I've tried other variations of the mount command, like:
              mount 10.10.10.10:/share/ z:
              mount 10.10.10.10:/share z:
              mount -o anon mtype=hard \\10.10.10.10:/share *
          No luck.

    As per the command output, I tried typing NET HELPMSG 53, but that doesn't tell me much - just "The network path was not found". I'm lost on how to proceed with troubleshooting. Any ideas?
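
    Something that may narrow it down (a sketch; it assumes the Services for NFS command-line tools are installed on the Windows box, and the IP is the placeholder from above): an NFSv3 mount also needs the portmapper (111) and mountd ports reachable, not just 2049, and these two commands show whether the Windows client can actually reach them and see the export:

        :: list exports as the Windows NFS client sees them
        showmount -e 10.10.10.10

        :: confirm portmapper/mountd/nfs services are registered and reachable from Windows
        rpcinfo -p 10.10.10.10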

    Read the article

  • control + enter in browser takes only .com?

    - by Abhilash M
    I have this strange problem: how exactly does Control+Enter decide the domain of a website? If I type stackoverflow, then hit Control+Enter, it works and takes me to the homepage; but if I type ubuntuforums, then hit Control+Enter, it does not recognise that it's ubuntuforums.org and goes to ubuntuforums.com instead. How exactly does this work? If I need to change this behaviour, how should I do it?
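
    Assuming this is Firefox (a guess), Control+Enter doesn't look the domain up anywhere; it just wraps what you typed in a fixed prefix and suffix. The related about:config prefs are sketched below, though whether they also drive the Ctrl+Enter shortcut (as opposed to the failed-lookup fixup) is worth verifying on your version:

        browser.fixup.alternate.enabled   true
        browser.fixup.alternate.prefix    "www."
        browser.fixup.alternate.suffix    ".com"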

    Read the article

  • SQuirreL Client: Opening up a table in a separate tab

    - by Dalin Seivewright
    I've switched to SQuirrel Client to view an Oracle database with a variety of tables. At any given time, I need to look at the contents of several related tables at the same time. The problem is that, at least by default, SQuirrel Client does not open up multiple tables at a time. When you click on a different Table object, the main view is refreshed with the newly selected table's data. Oracle's SQLDeveloper (which I am trying to move away from) did this by default if I recall, but there was an option to "freeze" panes. Does a similar option exist in SQuirrel Client? I do not require the ability to view the contents of two different tables on the same screen (i.e. a split view), but I would like the ability to have tabs for each table so I can quickly switch between each table view rather than having to find the table in the table list over and over again. Note: I am using this in a professional capacity, but if it does not "belong" here then I suppose it could be moved to superuser.

    Read the article

  • Nginx Cache-Control

    - by optixx
    I am serving my static content with nginx:

        location /static {
            alias /opt/static/blog/;
            access_log off;
            etags on;
            etag_hash on;
            etag_hash_method md5;
            expires 1d;
            add_header Pragma "public";
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }

    The resulting header looks like this:

        Cache-Control: public, must-revalidate, proxy-revalidate
        Cache-Control: max-age=86400
        Connection: close
        Content-Encoding: gzip
        Content-Type: application/x-javascript; charset=utf-8
        Date: Tue, 11 Sep 2012 08:39:05 GMT
        Etag: e2266fb151337fc1996218fafcf3bcee
        Expires: Wed, 12 Sep 2012 08:39:05 GMT
        Last-Modified: Tue, 11 Sep 2012 06:22:41 GMT
        Pragma: public
        Server: nginx/1.2.2
        Transfer-Encoding: chunked
        Vary: Accept-Encoding

    Why is nginx sending 2 Cache-Control entries, and could this be a problem for the clients?
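
    The second Cache-Control line almost certainly comes from the expires 1d directive, which emits its own Expires and Cache-Control: max-age headers on top of the one set by add_header. A sketch of a single merged header (the max-age value assumed to match the 1d intent):

        location /static {
            alias /opt/static/blog/;
            access_log off;
            # drop "expires 1d;" and fold max-age into one explicit header instead
            add_header Pragma "public";
            add_header Cache-Control "public, max-age=86400, must-revalidate, proxy-revalidate";
        }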

    Read the article

  • Client unable to access OWA website after temporarily changing SSL certificate on the server

    - by Lorenz Meyer
    I have the following issue: one client computer (Windows XP) cannot access the OWA website. All other client computers can (except another one in the same remote office). How this happened: I temporarily changed the SSL certificate on the Exchange server yesterday. After a few minutes I reverted back, and now the same certificate that had been installed for years is back again. During those few minutes, they were in OWA on this computer and got a certificate error. What exactly happens: Internet Explorer displays the error "Internet Explorer cannot display the webpage", Firefox displays "The connection was reset" and Chrome shows "This webpage is not available. The connection to ... was interrupted." What I have already done to try to get this working:

        - Restarted the client computer
        - Restarted the Exchange server
        - Deleted Internet Explorer browsing history
        - In IE, Internet Options, Content tab, under Certificates, cleared the SSL state
        - Restored Internet Explorer to the default settings
        - Looked into certmgr.msc, but did not find a certificate related to the issue

    What else could I do to narrow down the origin of this problem (or better: resolve it)? Can you give any advice?
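
    One more client-side thing that may be worth trying on the affected XP box (a sketch; both are stock Windows commands): flush the DNS cache and the cached certificate/CRL URL objects left over from the temporary certificate, then retry OWA:

        ipconfig /flushdns
        certutil -urlcache * delete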

    Read the article

  • Apache load balancer with https real servers and client certificates

    - by Jack Scheible
    Our network requirements state that ALL network traffic must be encrypted. The network configuration looks like this:

                                                       /-- https --> | server 1 |
        | Client | --- https --> | Load Balancer | ------- https --> | server 2 |
                                                       \-- https --> | server 3 |

    And it has to pass client certificates. I've got a config that can do load balancing with in-the-clear real servers:

        <VirtualHost *:8666>
            DocumentRoot "/usr/local/apache/ssl_html"
            ServerName vmbigip1
            ServerAdmin [email protected]
            DirectoryIndex index.html

            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>

            SSLEngine on
            SSLProxyEngine On
            SSLCertificateFile /usr/local/apache/conf/server.crt
            SSLCertificateKeyFile /usr/local/apache/conf/server.key

            <Proxy balancer://mycluster>
                BalancerMember http://1.2.3.1:80
                BalancerMember http://1.2.3.2:80
                # technically we aren't blocking anyone, but could here
                Order Deny,Allow
                Deny from none
                Allow from all
                # Load Balancer Settings
                # A simple Round Robin load balancer.
                ProxySet lbmethod=byrequests
            </Proxy>

            # balancer-manager
            # This tool is built into the mod_proxy_balancer module allows you
            # to do simple mods to the balanced group via a gui web interface.
            <Location /balancer-manager>
                SetHandler balancer-manager
                Order deny,allow
                Allow from all
            </Location>

            ProxyRequests Off
            ProxyPreserveHost On

            # Point of Balance
            # Allows you to explicitly name the location in the site to be
            # balanced, here we will balance "/" or everything in the site.
            ProxyPass /balancer-manager !
            ProxyPass / balancer://mycluster/ stickysession=JSESSIONID
        </VirtualHost>

    What I need is for the servers in my load balancer to be:

        BalancerMember https://1.2.3.1:443
        BalancerMember https://1.2.3.2:443

    But that does not work; I get SSL negotiation errors. Even when I do get that to work, I will need to pass client certificates. Any help would be appreciated.
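
    A sketch of the direction that may help for the https members, with the caveat that the certificate paths are placeholders and that a balancer which terminates TLS cannot relay the browser's client certificate to the back ends; the balancer has to present its own client certificate to them (or you fall back to TCP-level passthrough so the TLS session runs end to end):

        SSLProxyEngine On
        # trust the CA that signed the back-end server certificates (placeholder path)
        SSLProxyCACertificateFile /usr/local/apache/conf/backend-ca.crt
        SSLProxyVerify require
        # client certificate (cert + key PEM) the proxy itself presents to the back ends (placeholder)
        SSLProxyMachineCertificateFile /usr/local/apache/conf/proxy-client.pem

        <Proxy balancer://mycluster>
            BalancerMember https://1.2.3.1:443
            BalancerMember https://1.2.3.2:443
            ProxySet lbmethod=byrequests
        </Proxy>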

    Read the article

  • Accessing ActiveX control through web server

    - by user847455
    I have developed an ActiveX control and registered it with a common CLSID. Using that CLSID, I access the ActiveX control in Internet Explorer (as a web page) with the following object tag in the .html file:

        <OBJECT id="GlobasysActiveX" width="1000" height="480" runat="server"
                classid="CLSID:E86A9038-368D-4e8f-B389-FDEF38935B2F">

    I want to access this web page through a web server. I have placed the web page in the virtual directory and accessed it using localhost\my.html, and it works. But when I access it from a LAN computer, it will not load the ActiveX control from my computer. How can I embed the ActiveX control, or have it downloaded from my computer to the LAN computer through the web server? Thanks in advance.
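
    A sketch of the classic approach, assuming you can package and sign the control: it only loads on machines where it is registered, so you publish it as a .cab on the web server and add a codebase attribute so IE on the LAN machines can download and install it (the .cab path and version below are placeholders):

        <OBJECT id="GlobasysActiveX" width="1000" height="480"
                classid="CLSID:E86A9038-368D-4e8f-B389-FDEF38935B2F"
                codebase="http://yourserver/controls/GlobasysActiveX.cab#version=1,0,0,0">
        </OBJECT>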

    Read the article

  • Windows VPN client connect on different port

    - by John Gardeniers
    Scenario: Two Windows Server 2003 machines running RRAS VPNs. The firewall port forwards 1723 to one of those machines for normal remote access. I'd like to find a way to connect to the second machine as well; not because I need to, but just because it's the sort of thing I reckon should be possible but can't figure out how to do. Is it possible to have the Windows PPTP VPN client (on XP in this instance) connect on a port other than 1723? If so, I can simply port forward another port to the second server. I've done a fair bit of Googling over the last few days and have only found others asking the same question, but no answers. I have of course tried to add a port number to the host name or IP in the connection box, in various formats, but to no avail. While this might be possible with a third-party client, I'm really only interested in whether or not it can be done with the Windows built-in client, and if so, how. Perhaps there's a registry hack I'm not aware of?

    Read the article

  • Problems setting up VLC Server/client streaming

    - by Ayos
    I'm trying to set up a Linux machine as the server and a Windows XP machine as the client. Both machines are connected to the same local network via a Wi-Fi router. I set up the stream with the following properties, and not much else:

        - HTTP stream
        - port 8080
        - play locally

    There is no firewall on the Windows client (Windows Firewall is disabled). When I try to open the network stream from the client machine (using VLC or Windows Media Player), I get the following errors:

        Media Player error code: 0xC00D11B3: Encountered a network problem.

        VLC console:
        main warning: connection timed out
        access_mms error: cannot connect to 192.168.1.3:8080
        main debug: no access module matching "http" could be loaded
        main debug: TIMER module_need() : 12625.810 ms - Total 12625.810 ms / 1 intvls (Avg 12625.809 ms)
        main error: open of `http://192.168.1.3:8080' failed
        main debug: dead input
        main debug: repeating item
        main debug: starting playback of the new playlist item
        main debug: resyncing on http://192.168.1.3:8080
        main debug: http://192.168.1.3:8080 is at 0
        main debug: creating new input thread
        main debug: Creating an input for 'http://192.168.1.3:8080'
        main debug: using timeshift granularity of 50 MiB, in path 'C:\DOCUME~1\Accer\LOCALS~1\Temp'
        main debug: `http://192.168.1.3:8080' gives access `http' demux `' path `192.168.1.3:8080'
        main debug: creating demux: access='http' demux='' location='192.168.1.3:8080' file='\\192.168.1.3:8080'
        main debug: looking for access_demux module: 0 candidates
        main debug: no access_demux module matched "http"
        main debug: TIMER module_need() : 0.461 ms - Total 0.461 ms / 1 intvls (Avg 0.461 ms)
        main debug: creating access 'http' location='192.168.1.3:8080', path='\\192.168.1.3:8080'
        main debug: looking for access module: 2 candidates
        access_http debug: http: server='192.168.1.3' port=8080 file=''
        main debug: net: connecting to 192.168.1.3 port 8080
        qt4 debug: IM: Deleting the input
        main debug: TIMER input launching for 'http://192.168.1.3:8080' : 13397.979 ms - Total 13397.979 ms / 1 intvls (Avg 13397.978 ms)
        qt4 debug: IM: Setting an input

    Need help. Thanks in advance.
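
    Since the client simply times out connecting to 192.168.1.3:8080, it may be worth confirming the server is actually listening and streaming over HTTP; a command-line sketch for both ends (the file name is a placeholder, the IP is taken from the log above):

        # on the Linux server: stream a file over HTTP on port 8080
        cvlc video.mp4 --sout '#standard{access=http,mux=ts,dst=:8080}'

        # still on the server: confirm something is listening on 8080
        netstat -tlnp | grep 8080

        # on the Windows XP client (or via Media > Open Network Stream)
        vlc http://192.168.1.3:8080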

    Read the article

  • Bizarre client IP switch-up on VPN

    - by B. VB.
    Let A.B.C.D be the public IP of my VPN server, and W.X.Y.Z the IP of the client before it connects to the VPN. My VPN server's IP address on the LAN is 10.8.0.1, and the client is 10.8.0.6. I also run a webserver on the same machine hosting the VPN; on it is a simple webpage that does exactly the same thing as whatismyip.org (i.e., it simply prints the IP of the requester). Let me illustrate the scenario. In a Chrome window I have three tabs; what I have in parentheses is the URL:

        Tab 1 (http://whatismyip.org): A.B.C.D
        This is what I expect to see: the public IP of the VPN server.

        Tab 2 (http://10.8.0.1): 10.8.0.6
        OK, looks as expected; they are behind the same LAN now.

        Tab 3 (http://A.B.C.D): W.X.Y.Z
        WTF??

    Basically, if I access the webserver while tunneled, it shows the IP address of my machine PRIOR to tunnelling! Remember, tab 2 and tab 3 are the same webpage. Why does tab 3 not show the client IP as its own IP (i.e., show A.B.C.D)? I hope this question is clear. Thanks in advance!
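
    A sketch of the likely explanation, which one command can verify (the address is your placeholder): only the 10.8.0.0/24 range is routed into the tunnel, so a request to A.B.C.D leaves via your normal default route and reaches the server from W.X.Y.Z, exactly as before connecting:

        # on the client: show which interface/gateway a packet to the public IP would use;
        # if it names the physical interface rather than the tun device, the request bypasses the VPN
        ip route get A.B.C.D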

    Read the article

  • Remap control key in gnome-terminal?

    - by Colin
    I just installed Ubuntu to get more familiar with it since I'll be using it in a new job shortly. I use Macs at home and in my current job, so I'd like to make it as Mac-like as possible. I've remapped the Command and Control keys using the following .xmodmap:

        remove control = Control_L Control_R
        remove mod4 = Super_L Super_R
        add control = Super_L Super_R
        add mod4 = Control_L Control_R

    This works great for everything except the terminal, since Ctrl-C is now mapped to Cmd-C and still conflicts with what I'd like to use to copy. Is there any way I can remap the Control key just for the terminal? I'm willing to consider gnome-terminal alternatives if required.
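
    Rather than remapping X-wide, gnome-terminal's own copy/paste shortcuts can be rebound (Edit > Keyboard Shortcuts in the GNOME 2 terminal). A gconf sketch of the same idea, with the caveat that the key path and whether <Super> is accepted there are assumptions to check on your version:

        # rebind Copy/Paste for gnome-terminal only (GNOME 2 / gconf era)
        gconftool-2 --type string --set /apps/gnome-terminal/keybindings/copy  '<Super>c'
        gconftool-2 --type string --set /apps/gnome-terminal/keybindings/paste '<Super>v'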

    Read the article

  • Terminal/Putty showing control characters (^M) after updating

    - by jaycee48
    I updated my system yesterday afternoon using the recommended updates from Update Manager. After it completed, I shut down my system and went home for the day. I came in this morning and I am getting control characters displayed when using vi in the standard terminal emulator, in PuTTY, and in PuTTY inside the Windows virtual machine that I run with VirtualBox. I have made no other system changes and I cannot figure out how this occurred. It's as if every text file I have was created in DOS. I've searched the forums and I haven't found any answers. I am using xterm as my emulator, and I checked with 3 of my coworkers and none of them are having this problem, so we do not believe it is a server-side issue, especially since I've checked 3 different servers. There's nothing in my .profile other than PATH variables, so I'm using the same terminal settings as everyone else. Some files are fine (I can open and read both /etc/environment and my .profile) but most server-generated log files are trash. Running cat or head on the same file displays the contents without the characters. Any ideas would be appreciated. Thanks.
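
    A quick triage sketch that may help tell whether the files themselves carry CR line endings or whether it's a display setting (the log path is a placeholder):

        # show line endings explicitly: CRLF files print a ^M before each line-ending $
        cat -A /var/log/some.log | head

        # ask file(1) directly; it reports "with CRLF line terminators" when relevant
        file /var/log/some.log

        # inside vi/vim: check and force the detected file format
        :set ff?
        :e ++ff=unix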

    Read the article

  • Is Visual Source Safe (The latest Version) really that bad? Why? What's the Best Alternative? Why? [closed]

    - by hanzolo
    Over the years I've constantly heard horror stories, had people say "Real Programmers Don't Use VSS", and so on. BUT, then in the workplace I've worked at two companies: one a very well known, public-facing, high-traffic website, and another a high-end Financial Services "web-based" hosted solution catering to some very large, very well known companies, which is where I currently reside, and everything's working just fine (KNOCK KNOCK!!). I'm constantly interfacing with EXTREMELY old technology at some of these financial institutions... OLD LIKE YOU WOULDN'T BELIEVE... which leads me to the conclusion that if it works, "LEAVE IT", and that maybe there's some value in old technology? At least enough value to overrule a rewrite, right?

    Is there something fundamentally flawed with the underlying technology that VSS uses? I have a feeling that if I said "someone said VSS sucks" they would beg to differ, most likely give me this look like I don't know -ish, and I'd never gain back their respect and my credibility (well, that'll be hard to blow... lol). BUT, give me an argument that I can take to someone who's been coding for 30 years, who builds platforms that leverage current technology (.NET 3.5 / SQL 2008 R2), writes their own ORM with scaffolding, and is able to provide a quality platform that supports thousands of concurrent users on a multi-tenant hosted solution, yet does not agree with any benefits from having source control integrated, and still uses the infamous Visual Source Safe.

    I have extensive experience with TFS up to 2010, and honestly I think it's great when a team (beyond developers) can embrace it. I've worked side by side with someone who's a die-hard SVN'er, and from a purist standpoint I see the beauty in it (I need a bit more out of my source control, but it surely suffices). So, why are such smarties not running away from Visual Source Safe? Surely if it was so bad, it would've been realized by now, and I would not be sitting here with this simple old check-in, check-out, version-resistant, label-intensive system. But here I am... I would love to drop an argument that would be the end-all argument, but if it's a matter of opinion and personal experience, there seems to be too much leeway for keeping VSS.

    UPDATE: I guess the best case is to have the VSS supporters check other people's experiences and draw from that until we (please no) experience the breaking factor ourselves. Until then, I won't be engaging in a discussion to migrate off of VSS.

    UPDATE 11-2012: So I was able to convince everyone at my workplace that since MS is sunsetting Visual Source Safe, it might be time to migrate over to TFS. I was able to convince them and have recently upgraded our team to Visual Studio 2012 and TFS 2012. The migration was fairly painless: I had to run analyze.exe, which found a bunch of errors (not sure they'll ever affect the project), and then manually run VSSConverter.exe. Again, painless, except it took 16 hours to migrate 5 years' worth of everything... and now we're on TFS. Much more integrated, much cooler. So all in all, VSS served its purpose for years without a hiccup. There were no horror stories, and Visual Source Safe as source control worked just fine. So to all the naysayers (me included): there's nothing wrong with using VSS. I wouldn't start a new project with it, and I would definitely consider migrating to TFS (it's really not super difficult, and a new "wizard" type converter is due out any day now, so migrating should be painless). But from my experience, it worked just fine and got the job done.

    Read the article

  • Do Distributed Version Control Systems promote poor backup habits?

    - by John
    In a DVCS, each developer has an entire repository on their workstation, to which they can commit all their changes. Then they can merge their repo with someone else's, or clone it, or whatever (as I understand it; I'm not a DVCS user). To me that flags a side effect of being more vulnerable to forgetting to back up. In a traditional centralised system, both you as a developer and the people in charge know that if you commit something, it's held on a central server which can have a decent backup solution in place. But using a DVCS, it seems you only have to push your work to a server when you feel like sharing it. It's all very well having the repo locally so you can work on your feature branch for a month without bothering anyone, but it means (I think) that checking your code into the repo is not enough; you have to remember to do regular pushes to a backed-up server. It also means, doesn't it, that a team lead can't see all those nice SVN commit emails to keep a rough idea of what's going on in the code-base? Is any of this a real issue?

    Read the article

  • WCF: SecurityNegotiationException when using client

    - by bradhe
    So I've been trying to set up certificate authentication for my clients and services. The eventual goal is to partition data based on the certificate a client connects with (i.e. the certificate becomes their credentials in to the greater system and their data is partitioned based on these credentials). I have been able to set it up successfully on both the client and the server side. I have created a certificate and a private key, installed them on my computer, and set up my server such that 1) it has a certificate-based service credential and 2) if a client connects without providing a certificate-based credential an exception is thrown. What I then did was create a simple client and add a certificate credential to the configuration and try to call a simple operation on the service. It looks like the client connects OK, and it looks like the certificate is accepted by the server, but I do get this:

        SecurityNegotiationException: "The caller was not authenticated by the service."

    That seems rather ambiguous to me. Note that I am using wsHttpBinding, which supposedly defaults to Windows auth for transport security... but all of these processes are being run as my user account as I'm running in my debug environment. Here is my server configuration:

        <system.serviceModel>
          <bindings>
            <wsHttpBinding>
              <binding name="MyServiceBinding">
                <security mode="Message">
                  <transport clientCredentialType="None"/>
                  <message clientCredentialType="Certificate"/>
                </security>
              </binding>
            </wsHttpBinding>
          </bindings>
          <services>
            <service behaviorConfiguration="MyServiceBehavior" name="MyService">
              <endpoint binding="wsHttpBinding" bindingConfiguration="MyServiceBinding" contract="IMyContract"/>
              <endpoint binding="mexHttpBinding" address="mex" contract="IMetadataExchange">
                <identity>
                  <dns value="localhost"/>
                </identity>
              </endpoint>
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="MyServiceBehavior">
                <serviceMetadata httpGetEnabled="true" policyVersion="Policy15" />
                <serviceDebug includeExceptionDetailInFaults="false" />
                <serviceCredentials>
                  <serviceCertificate storeLocation="CurrentUser" storeName="My" x509FindType="FindBySubjectName" findValue="tmp123"/>
                </serviceCredentials>
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
        </system.serviceModel>

    Here is my client config -- note that I'm using the same cert for the client that I use on the service:

        <system.serviceModel>
          <bindings>
            <wsHttpBinding>
              <binding name="WSHttpBinding_IMyService" closeTimeout="00:01:00" openTimeout="00:01:00"
                       receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false"
                       transactionFlow="false" hostNameComparisonMode="StrongWildcard"
                       maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text"
                       textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false">
                <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                              maxBytesPerRead="4096" maxNameTableCharCount="16384"/>
                <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false"/>
                <security mode="Message">
                  <!--<transport clientCredentialType="Windows" proxyCredentialType="None" realm=""/>-->
                  <message clientCredentialType="Certificate" negotiateServiceCredential="true" algorithmSuite="Default"/>
                </security>
              </binding>
            </wsHttpBinding>
          </bindings>
          <client>
            <endpoint address="http://localhost:50120/UserServices.svc" binding="wsHttpBinding"
                      bindingConfiguration="WSHttpBinding_IMyService" behaviorConfiguration="IMyService_Behavior"
                      contract="UserServices.IUserServices" name="WSHttpBinding_IMyService">
              <identity>
                <certificate encodedValue="Some RSA stuff"/>
              </identity>
            </endpoint>
          </client>
          <behaviors>
            <endpointBehaviors>
              <behavior name="IMyService_Behavior">
                <clientCredentials>
                  <clientCertificate storeLocation="CurrentUser" storeName="My" x509FindType="FindBySubjectName" findValue="tmp123"/>
                </clientCredentials>
              </behavior>
            </endpointBehaviors>
          </behaviors>
        </system.serviceModel>

    Can anyone please help provide some insight as to what might be up here? Thanks,

    Read the article

  • Which network protocol to use for lightweight notification of remote apps (Delphi 2005)

    - by Chris Thornton
    I have this situation... Client-initiated SOAP 1.1 communication between one server and, let's say, tens of thousands of clients. Clients are external, coming in through our firewall, authenticated by certificate, https, etc. They can be anywhere, and usually have their own firewalls, NAT routers, etc. They're truly external, not just remote corporate offices; they could be in a corporate/campus network, on DSL/cable, even dialup. Currently, clients push new data to the server and pull new data from the server on a 15-minute polling loop. The server currently does not push data: the client hits the "messagecount" method to see if there is new data to pull. If 0, it sleeps for another 15 minutes and checks again. We're trying to get that down to 7 seconds.

    If this were an internal app, with one or just a few dozen clients, we'd write a client "listener" SOAP service and push data to it. But since they're external, sit behind their own firewalls, and sometimes behind private networks with NAT routers, this is not practical. So we're left with polling on a much quicker loop. 10K clients, each checking their message count every 10 seconds, is going to be 1000 messages/sec that will mostly just waste bandwidth, server, firewall, and authenticator resources. So I'm trying to design something better than what would amount to a self-inflicted DoS attack. I don't think it's practical to have the server send SOAP messages to the client (push), as this would require too much configuration at the client end. But I think there are alternatives that I don't know about. Such as:

    1) Is there a way for the client to make a request for GetMessageCount() via SOAP 1.1, get the response, and then perhaps "stay on the line" for 5-10 minutes to get additional responses in case new data arrives? I.e. the server says "0", then a minute later, in response to some SQL trigger (the server is C# on SQL Server, btw), knows that this client is still "on the line" and sends the updated message count of "5"?

    2) Is there some other protocol that we could use to "ping" the client, using information gathered from their last GetMessageCount() request?

    3) I don't even know. I guess I'm looking for some magic protocol where the client can send a GetMessageCount() request which would include info for "oh, by the way, in case the answer changes in the next hour, ping me at this address...".

    Also, I'm assuming that any of these "keep the line open" schemes would seriously impact the server sizing, as it would need to keep many thousands of connections open simultaneously. That would likely impact the firewalls too, I think. Is there anything out there like that? Or am I pretty much stuck with polling? TIA, Chris

    Read the article

  • Are there any frameworks for data subscription and update?

    - by Timothy Pratley
    There is one server with multiple clients. The clients are viewing subsets of the server's entire data. If the data that a client is viewing changes, the client should be informed of the changes so that it displays the current data. Example: two clients are viewing a list of users in an administration screen. One client adds a new user to the list and modifies the permissions of another user. The other client sees the changes propagated to their view.

    Read the article
