Search Results

Search found 17955 results on 719 pages for 'sub domain'.


  • RequestBuilder timeouts and browser connection limits per domain.

    - by WesleyJohnson
    This is specifically about GWT's RequestBuilder, but it should apply to general XHR as well. My company is having me build a near-realtime chat application over HTTP. Yes, I do realize there are better ways to do chat applications, but this is what they want. Eventually we want it working on the iPad/iPhone as well, so Flash is out, which rules out WebSockets and Comet too, I think? Anyway, I'm running into issues where I've set GWT's RequestBuilder timeout to 10 seconds and we get very random and sporadic timeouts. We've got error handling and emailing on the server side and never get any errors, which suggests the underlying XHR request that RequestBuilder is built on never reaches the server and times out after 10 seconds. We're using these requests to poll the server for new messages fairly often, to send new messages to the server, and to poll (less frequently) for other parts of the application. What I'm afraid of is that we're running into the browser's limit on concurrent connections to the same domain (2 for IE by default?). Now my question is: if I construct a RequestBuilder and call its send() method, and the browser blocks the request until one of the 2 connections per domain is free, does the timeout clock start while the request is being blocked, or does it only start once the browser actually releases the underlying XHR? I hope that's clear; if not, please let me know and I'll try to explain more.

    Read the article

  • Two references to the same domain/entity model

    - by Sbossb
    Problem: I want to save the attributes of a model that have changed when a user edits them. Here's what I want to do:

    1. Retrieve edited view model
    2. Get domain model and map back updated value
    3. Call the update method on repository
    4. Get the "old" domain model and compare values of the fields
    5. Store the changed values (in JSON) into a table

    However I am having trouble with step number 4. It seems that the Entity Framework doesn't want to hit the database again to get the model with the old values. It just returns the same entity I have.

    Attempted Solutions: I have tried using the Find() and the SingleOrDefault() methods, but they just return the model I currently have.

    Example Code:

        private string ArchiveChanges(T updatedEntity)
        {
            // Here is the problem!
            // oldEntity is the same as updatedEntity
            T oldEntity = DbSet.SingleOrDefault(x => x.ID == updatedEntity.ID);

            Dictionary<string, object> changed = new Dictionary<string, object>();

            foreach (var propertyInfo in typeof(T).GetProperties())
            {
                var property = typeof(T).GetProperty(propertyInfo.Name);

                // Get the old value and the new value from the models
                var newValue = property.GetValue(updatedEntity, null);
                var oldValue = property.GetValue(oldEntity, null);

                // Check to see if the values are equal
                if (!object.Equals(newValue, oldValue))
                {
                    // Values have changed ... log it
                    changed.Add(propertyInfo.Name, newValue);
                }
            }

            var ser = new System.Web.Script.Serialization.JavaScriptSerializer();
            return ser.Serialize(changed);
        }

        public override void Update(T entityToUpdate)
        {
            // Do something with this
            string json = ArchiveChanges(entityToUpdate);

            entityToUpdate.AuditInfo.Updated = DateTime.Now;
            entityToUpdate.AuditInfo.UpdatedBy = Thread.CurrentPrincipal.Identity.Name;

            base.Update(entityToUpdate);
        }
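    One way to get an untracked copy of the entity for the comparison in step 4 is AsNoTracking(), available on DbSet in the EF 4.1+ DbContext API. This is only a minimal sketch: DbSet, ID and updatedEntity are the names used in the question, and whether it fits depends on how the repository creates its context.

        // AsNoTracking() bypasses the context's identity map, so the query is
        // materialized from the current database values instead of returning
        // the already-tracked (and already-modified) updatedEntity instance.
        T oldEntity = DbSet.AsNoTracking()
                           .SingleOrDefault(x => x.ID == updatedEntity.ID);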

    Read the article

  • Run PHP code on a specific domain only

    - by curtismchale
    I need to echo out some specific PHP code only on the sub-domain of a site. This is where I am so far:

        <?php if($_SERVER['SERVER_NAME'] != "http://support.demo.com") echo "<?php bb_head(); ?>"; ?>

    Of course, if this worked I'd not be asking a question. Help is appreciated.

    Read the article

  • USB Flash Drive not Detected on 12.10 x64

    - by Falguni Roy
    My MediaTek USB flash drive is not getting detected. The output of lsusb:

        falguni@falguni-M61PME-S2P:~$ lsusb
        Bus 002 Device 002: ID 0e8d:0003 MediaTek Inc.
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    and the output of usb-devices:

        falguni@falguni-M61PME-S2P:~$ usb-devices
        T: Bus=01 Lev=00 Prnt=00 Port=00 Cnt=00 Dev#= 1 Spd=480 MxCh=10
        D: Ver= 2.00 Cls=09(hub ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
        P: Vendor=1d6b ProdID=0002 Rev=03.05
        S: Manufacturer=Linux 3.5.0-18-generic ehci_hcd
        S: Product=EHCI Host Controller
        S: SerialNumber=0000:00:02.1
        C: #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=0mA
        I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub

        T: Bus=02 Lev=00 Prnt=00 Port=00 Cnt=00 Dev#= 1 Spd=12 MxCh=10
        D: Ver= 1.10 Cls=09(hub ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
        P: Vendor=1d6b ProdID=0001 Rev=03.05
        S: Manufacturer=Linux 3.5.0-18-generic ohci_hcd
        S: Product=OHCI Host Controller
        S: SerialNumber=0000:00:02.0
        C: #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=0mA
        I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub

    But in 12.04, the output of usb-devices was:

        falguni@falguni-M61PME-S2P:~$ usb-devices
        T: Bus=01 Lev=00 Prnt=00 Port=00 Cnt=00 Dev#= 1 Spd=480 MxCh=10
        D: Ver= 2.00 Cls=09(hub ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
        P: Vendor=1d6b ProdID=0002 Rev=03.05
        S: Manufacturer=Linux 3.5.0-18-generic ehci_hcd
        S: Product=EHCI Host Controller
        S: SerialNumber=0000:00:02.1
        C: #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=0mA
        I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub

        T: Bus=02 Lev=00 Prnt=00 Port=00 Cnt=00 Dev#= 1 Spd=12 MxCh=10
        D: Ver= 1.10 Cls=09(hub ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
        P: Vendor=1d6b ProdID=0001 Rev=03.05
        S: Manufacturer=Linux 3.5.0-18-generic ohci_hcd
        S: Product=OHCI Host Controller
        S: SerialNumber=0000:00:02.0
        C: #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=0mA
        I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub

        T: Bus=02 Lev=01 Prnt=01 Port=04 Cnt=01 Dev#= 2 Spd=12 MxCh= 0
        D: Ver= 2.00 Cls=02(commc) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
        P: Vendor=0e8d ProdID=0003 Rev=02.00
        S: Manufacturer=MediaTek Inc
        S: Product=MT6235
        C: #Ifs= 2 Cfg#= 1 Atr=80 MxPwr=500mA
        I: If#= 0 Alt= 0 #EPs= 2 Cls=0a(data ) Sub=00 Prot=00 Driver=cdc_acm
        I: If#= 1 Alt= 0 #EPs= 1 Cls=02(commc) Sub=02 Prot=01 Driver=cdc_acm

    It was working fine in 12.04; after upgrading to 12.10 the problem started. Where is the problem and how do I solve it?

    Read the article

  • Google Chrome is always searching in the local Google domain instead of Google.com

    - by Pablo
    I have changed the search engine preference to google.com, but when I search from the address bar (instant or non-instant) it still goes to google.co.kr. Even though I select "Google.com in English", it's still the same... The only way is to open the google.com website first and then search from there. So the question: is there any way to force Chrome to search on google.com instead of google.co.kr? I understand there is some geolocation checking/redirecting, but there must be some way to force it...

    Read the article

  • Lync CMS replication is failing for all Domain Computers

    - by Ravi Kanneganti
    I have Lync Server 2010 and Active Directory installed on 2 different Windows Server 2008 R2 machines. I have added a Windows 7 PC to AD, added this computer to the Trusted Application Servers Pool and published the topology. I want to build a UCMA application to extend Lync Server functionality, and have installed the UCMA 3.0 SDK on the same computer where Lync Server is residing. But CMS replication isn't happening, and "Get-CsManagementStoreReplicationStatus" always shows UpToDate as "False" for my Windows 7 PC. I have even tried "Invoke-CsManagementStoreReplication" but nothing changed. Also, this is the error message that I can see in the log file:

        TL_WARN(TF_COMPONENT) [2]0500.07B8::04/05/2012-14:55:07.296.00000f85 (XDS_Replica_Replicator,FileDistributeTask.Execute:filedistributetask.cs(165))(000000000043B3FA)Could not distribute the file. Exception: [System.IO.IOException: The process cannot access the file because it is being used by another process. at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) at System.IO.File.Move(String sourceFileName, String destFileName) at Microsoft.Rtc.Xds.Replication.Replicator.Common.FileDistributeTask.Execute()].
        TL_NOISE(TF_DIAG) [2]0500.07B8::04/05/2012-14:55:07.296.00000f86 (XDS_Replica_Replicator,ReplicaTaskContainer<T>.OnError:replicataskcontainer.cs(166))(00000000005C39D4)Enter.
        TL_INFO(TF_COMPONENT) [2]0500.07B8::04/05/2012-14:55:07.296.00000f87 (XDS_Replica_Replicator,ReplicaTaskContainer<T>.OnError:replicataskcontainer.cs(171))(00000000005C39D4)Task error callback is about to be called.
        TL_VERBOSE(TF_DIAG) [2]0500.07B8::04/05/2012-14:55:07.296.00000f88 (XDS_Replica_Replicator,PerReplicaTaskManager<T>.HandleTaskError:perreplicataskmanager.cs(230))(000000000385E79C)Enter.
        TL_INFO(TF_COMPONENT) [2]0500.07B8::04/05/2012-14:55:07.296.00000f89 (XDS_Replica_Replicator,PerReplicaTaskManager<T>.HandleTaskError:perreplicataskmanager.cs(234))(000000000385E79C)Task encountered an error: [ReplicaTaskContainer<FileDistributeTask>{FileDistributeTask{E:\RtcReplicaRoot\xds-replica\from-master\data.zip, E:\RtcReplicaRoot\xds-replica\working\replication\from-master\data.zip, Access failed. (E:\RtcReplicaRoot\xds-replica\from-master\data.zip)}, FileDistributeTask{E:\RtcReplicaRoot\xds-replica\from-master\data.zip, E:\RtcReplicaRoot\xds-replica\working\replication\from-master\data.zip, }}]

    Read the article

  • Redirect request from https domain to https subdomain with only one certificate

    - by Sean K.
    I'm trying to redirect users to a subdomain on server2 if they make an HTTPS request to server1. I only have one certificate, and that's installed on server2. So, for instance, from (server1) https://www.example.com to (server2) https://ssl.example.com. My best guess is that I will need a certificate for https://www.example.com as well, since the hostname is encrypted inside the HTTP request, so my server won't know to redirect until it's decrypted. However, I'm curious whether this is possible without two certificates.
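    A sketch of what the redirect on server1 would look like, assuming Apache with mod_ssl (the question doesn't name the web server, and the certificate paths are placeholders). The TLS handshake has to complete before any HTTP redirect can be sent, which is why some certificate valid for www.example.com ends up being needed on server1:

        <VirtualHost *:443>
            ServerName www.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/www.example.com.crt
            SSLCertificateKeyFile /etc/ssl/private/www.example.com.key
            # Send every HTTPS request for www over to the subdomain on server2
            Redirect permanent / https://ssl.example.com/
        </VirtualHost>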

    Read the article

  • Changing default gateway on workstations connected to Windows Domain SBS server

    - by Gary B2312321321
    We have XP workstations connected to a Small Business Server acting as Active Directory/ISA firewall/proxy (no DHCP). Is there a reason why, after installing a 2nd firewall on the network (same subnet etc.), changing the default gateway on the workstations isn't sufficient to route internet traffic through the new firewall? A freshly set up Linux box connects straight onto the alternate firewall with just an IP, default gateway and DNS settings. Will having ISA still active on the network confuse the process? Are there further config settings deeper down in Windows that need attention? Any ideas or pointers on this would be appreciated. Other info: firewalls tried: Smoothwall and IPCop; small Ethernet network of 40 PCs; can ping the new firewalls from workstations; activating the web proxy on the new firewall and reconfiguring the workstation browser works fine. The point of the 2nd firewall is the lack of some necessary features in ISA for a Linux app; it would be nice to have some redundancy too.

    Read the article

  • Samba Domain Controller corrupts Windows workstations profiles?

    - by MrZombie
    Oooooook, so here's my problem. I have a Mac OS X Server 10.5 to which Windows XP workstations are bound. I happened upon some errors and warnings in my log from Userenv, namely errors 1504 and 1509. The warning complains about a setting on the share related to offline caching. I found some guides to correct this if the problem were on a Windows server, but since these are Samba shares, those guides of course don't apply. Does anyone know what to do so that my profiles don't get corrupted, while still using roaming profiles so that they're backed up by the server?
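    For reference, a sketch of how the offline-caching warning is usually addressed on the Samba side, assuming the profiles live on a share defined in smb.conf (the share name and path are placeholders, and on OS X Server the file may be managed by Server Admin rather than edited directly):

        [Profiles]
            path = /Shared/Profiles          ; placeholder path
            read only = no
            ; Disable client-side (offline) caching for this share -- the usual
            ; recommendation for roaming-profile shares, and the setting the
            ; Userenv warning is complaining about.
            csc policy = disable
            ; Let Windows manage the NT ACLs it puts on profile folders.
            profile acls = yes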

    Read the article

  • Printer Management in AD Domain

    - by Untalented
    Hello, I normally push out all my printers via Group Policy Preferences. However, the new copy machines I have use some strange drivers, and I cannot install x64 and x86 drivers on the same machine for my clients to pull drivers from. So now I have two machines set up with the printer so clients can pull drivers. On top of this, there are specific driver configuration settings that need to be set, such as requiring the user to enter an access code to print. Once the printer is installed via GPP, everything is put back to the defaults, such as color mode and other custom settings we like. I considered just using a Windows print server for this, but I do not know a way to push/delete these from clients like I can with GPP. Does anyone know how I can have GPP copy the custom configuration I have set in the driver, or have any recommendations?

    Read the article

  • Setting up multiple servers for one domain

    - by Joseph Torraca
    So I am starting up a new website and I was wondering how to set up 5 servers to host the site. I have already purchased 5 Apple Xserves; one will be used as a test server and the other 4 will be for the live site. I have read some websites on the internet and they all reference using one server, installing load-balancing software onto it, and having that server do the load balancing. I have also read that you could use a hardware, rack-mounted system and plug the servers into that; the load balancer would then distribute the load. So I have a few questions about each:

    1) How do you set up the software version and have the other servers as "slaves" and one "master" to direct traffic?
    2) Which of the two options above is more reliable, and better suited for a startup that doesn't have many users per month, yet (hopefully)?
    3) Is there a theoretical max limit of servers that can be connected to a software load-balancing system? Obviously this will change from software to software, but in terms of the server being able to handle it?
    4) In your own opinion, what are you using for your sites? Have you had any problems setting up that system or operating it once it's running? Are there any things you would stay away from if you had to start over?
    5) I also purchased an Apple RAID system, so if you are familiar with it, is there any way to connect it to multiple Xserves so they all serve the same data? I'm a little confused on this, so thanks for all your help and being patient with me.

    Note: Take it easy on me, I am learning this as I go along, so I may have used terms incorrectly or explained things that don't really make sense. Sorry. P.S. If you need me to supply the specs on the servers to determine which system makes the most sense, I can post them for you.
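    A minimal software load-balancer sketch for the first option, assuming nginx on the box that receives the traffic; the backend addresses are placeholders for the four live Xserves:

        # Round-robin the four application servers behind one public name.
        upstream app_servers {
            server 192.168.1.11;
            server 192.168.1.12;
            server 192.168.1.13;
            server 192.168.1.14;
        }

        server {
            listen 80;
            server_name www.example.com;

            location / {
                proxy_pass http://app_servers;       # round-robin by default
                proxy_set_header Host $host;         # pass the original host header through
            }
        }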

    Read the article

  • Have to enter google sites through second-level domain

    - by Anton Geraschenko
    I'm having the same problem as this guy. I own two domains hosted on Google Sites, mydomain.com and mydomain.net. When I go to mydomain.com, it redirects me to the site located at www.mydomain.com (this is the desired behavior). This used to work on mydomain.net as well, but now when I go to mydomain.net I get a Google 404. To see the content, I have to go to www.mydomain.net. As far as I can tell, the DNS settings and Google Apps settings for both domains are identical. Does anybody have any idea what could be happening?

    Read the article

  • Transition domain to new web host without waiting for DNS propagation

    - by jcmoney
    I was considering switching to Amazon EC2 to host my website to handle more traffic. It seems like I would have to update DNS records to point to the new server, but I was wondering if there was a way to avoid having to wait for the new DNS record to propagate. Putting the code on both hosts would not work for me, since the app writes to a database pretty frequently. I thought about just using a meta redirect or PHP redirect on the old host to redirect to the new host's IP, but was wondering if there's a better, more accepted way of doing this.
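    One commonly used alternative to a redirect is to reverse-proxy the old host to the new one for the duration of the DNS change, so both IPs serve the same application and database. A sketch, assuming Apache with mod_proxy/mod_proxy_http on the old host; 203.0.113.10 is a placeholder for the new EC2 instance's address:

        <VirtualHost *:80>
            ServerName www.example.com
            # Forward every request that still arrives at the old IP to the new server.
            ProxyPreserveHost On
            ProxyPass        / http://203.0.113.10/
            ProxyPassReverse / http://203.0.113.10/
        </VirtualHost>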

    Read the article

  • Map a domain to another subdomain - Rackspace

    - by Gorgi Rankovski
    I am using a subdomain as a parameter to an ASP.NET MVC 4 application, and it's working well. Now I need to test my approach, so I have the application hosted on AppHarbor. It works well with subdomains there too. Our DNS records are on Rackspace, but I have no control over them; another guy is responsible for that. So, myapp.apphb.com can be accessed through myapp.com, and abc.myapp.com is working as expected (I use abc as a parameter). Now I want abc.com to be mapped in those Rackspace DNS records to abc.myapp.com. Is this possible at all? Can you explain how to do that? Will I have any problems with it? Anything I should be aware of? Please bear in mind that I am a newbie when it comes to DNS, and I have no experience with Rackspace. Thanks.
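    A sketch of the records the person managing the Rackspace DNS could add, written in BIND-style zone syntax (Rackspace's control panel uses the same record types, just a different UI; the A-record address is a placeholder):

        ; www can simply be an alias for the existing name.
        www.abc.com.   IN  CNAME  abc.myapp.com.
        ; The zone apex (abc.com itself) cannot be a CNAME, so it needs an A record
        ; pointing at the host that serves abc.myapp.com, or an HTTP redirect service.
        abc.com.       IN  A      203.0.113.30

    One thing to watch: the application will then receive Host: abc.com rather than abc.myapp.com, so the subdomain-as-parameter routing has to account for that.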

    Read the article

  • Assigning static IP and domain name mapping to local server in LAN

    - by yashbinani
    I have developed a web application which will be deployed in a LAN environment. Clients will be computers, Android tablets and iPads. For communication between the clients and the local server I need to:

    1) assign a static IP to the local server;
    2) have a domain name mapping to that IP address in the local environment;
    3) make sure the router assigns the same static IP if it gets restarted, etc.

    I am using a Windows XP machine as the local server OS. Do I need to look for particular router capabilities before buying one, or will all routers be able to do this? I am not a network specialist, so sorry if this question sounds stupid. Thanks
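    One way to cover all three points from a single box is dnsmasq (which many routers run, or which can run on any always-on machine on the LAN). A sketch; the MAC address, IP and name are placeholders:

        # /etc/dnsmasq.conf
        # 1 & 3: always hand the server the same address via a DHCP reservation.
        dhcp-host=00:11:22:33:44:55,192.168.1.10
        # 2: resolve a local name to that address for every DHCP client on the LAN.
        address=/myapp.lan/192.168.1.10

    If the router cannot do local DNS, a plain hosts-file entry works for the PCs, but tablets and iPads are awkward to configure that way, which is why DNS on the router tends to be the easier route.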

    Read the article

  • Setup Domain Keys / DKIM on Exchange 2003

    - by Campo
    I need some suggestions for setting up DKIM on my Exchange Server 2003. We already use SPF, but a lot of email providers seem to use the DKIM method as well, and I would like to utilize both systems. This site was the best I could find with step-by-step instructions. If anyone could provide more detail, that would be excellent. Let me know if you need more info.
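    Worth noting as background: Exchange 2003 has no built-in DKIM signing, so a third-party signing agent or gateway normally does the signing. Whatever ends up signing, the DNS side looks roughly like this sketch ("mail" is a placeholder selector and the p= value is a truncated placeholder public key):

        mail._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEB..."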

    Read the article

  • Redirect absolutely anything to new domain with .htaccess

    - by John Hunt
    Ok, so I'm in need of a simple redirect:

        Redirect 301 / http://www.new.com/

    Similar to that, except I want it to catch anything, such as:

        www.old.com/blah/blah/?xyz=123&aaaaabbbb=erewr3ttt#ewtjhirhjerh

    and send the user to www.new.com. Should be easy, right? Finding out how to do this is not so easy. Using the above rule we're still getting 404s for things that aren't there, rather than the Redirect rule just catching everything.
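    A mod_rewrite variant for the .htaccess on the old site, as a sketch (the #fragment part never reaches the server, so nothing server-side can act on it anyway):

        RewriteEngine On
        # "^" matches every request path; the trailing "?" on the target discards
        # the old query string, so everything lands on the bare new domain.
        RewriteRule ^ http://www.new.com/? [R=301,L]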

    Read the article

  • Set up internal domain to use external SMTP in Exchange 2007

    - by Geoffrey
    I'm moving to Google Apps and have set up dual delivery. Everything is fine, but for mail sent internally (from [email protected] to [email protected]), Exchange is not using the send connectors I have pointing to Google's servers. I believe my question is similar to this one: How to force internal email through an SMTP connector in Exchange 2007. Again, if a user is connected to the Exchange server and tries to send to [email protected], it works just fine, but I cannot seem to force *@mydomain.com to route correctly. This should be fairly simple, but according to this: google.com/support/forum/p/Google+Apps/thread?tid=30b6ad03baa57289&hl=en (can't post two links due to spam prevention) it does not seem possible. Any ideas?

    Read the article

  • domain is pointing to default static page on server but settings look correct

    - by Cues
    I have edited my Apache vhost file in /etc/apache2/sites-enabled to add the following:

        <VirtualHost *:80>
            ServerName www.mysite.cn
            ServerAlias mysite.cn *.mysite.cn
            DocumentRoot /home/user/static/mysite/cn
        </VirtualHost>

    It still points to the default site on the server when I browse to mysite.cn, but when I enter anything along the lines of ww3.mysite.cn it points to the new, correct document root. Any clues as to what the problem could be? I am lost.
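    Two quick checks that usually narrow this down, as a sketch (both commands exist on a stock Debian/Ubuntu Apache install):

        # Show how Apache parsed the vhosts: which one is the default for *:80
        # and whether this ServerName/ServerAlias set was loaded at all.
        apache2ctl -S

        # Compare what the bare domain and the working name actually resolve to --
        # if mysite.cn points at a different IP than ww3.mysite.cn, requests for the
        # bare domain never reach this vhost and fall through to another default site.
        dig +short mysite.cn
        dig +short ww3.mysite.cn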

    Read the article

  • PC version of Google Chrome doesn't recognize ".local" domain name

    - by prosseek
    With Bonjour installed on the PC, I can access my Mac server using ".local" names. For example, I can reach my Mac with the name "prosseek.local". The problem is that Chrome on the PC doesn't treat ".local" as a host name and opens the search page instead of accessing the Mac server. This issue doesn't happen with other web browsers (Internet Explorer/Firefox) on the PC. What is even weirder is that Chrome seems to recognize ".local" sometimes, but not always. How can I solve this issue? Or, how can I teach Chrome that ".local" is part of a host name so that it doesn't send me to the search page?

    Read the article

  • Can SPF records contain domain name wildcards?

    - by deltanovember
    Part of my SPF record contains:

        include:google.com

    I'm still getting a soft fail because the actual e-mail is delivered by the following:

        Received: from mail-yx0-f172.google.com (mail-yx0-f172.google.com [209.85.213.172]

    which has a completely different IP from google.com. However, I don't want to put in mail-yx0-f172.google.com because it might be dynamic. Is there some equivalent of *.google.com that I can use in the record?
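    SPF mechanisms don't take wildcard hostnames, but Google publishes its outbound ranges behind a dedicated include, _spf.google.com, which covers the individual mail-*.google.com senders. A sketch of what the TXT record usually looks like (example.com and the other mechanisms are placeholders):

        example.com.  IN  TXT  "v=spf1 mx include:_spf.google.com ~all"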

    Read the article

  • How to serve static files for multiple Django projects via nginx to same domain

    - by thanley
    I am trying to set up my nginx conf so that I can serve the relevant files for my multiple Django projects. Ultimately I want each app to be available at www.example.com/app1, www.example.com/app2 etc. They all serve static files from a 'static-files' directory located in their respective project root.

    The project structure:

        Home Ubuntu Web www.example.com ref logs app app1 app1 static bower_components templatetags app1_project templates static-files app2 app2 static templates templatetags app2_project static-files app3 tests templates static-files static app3_project app3 venv

    When I use the conf below, there are no problems serving the static-files for the app that I designate in the /static/ location. I can also access the different apps at their locations. However, I cannot figure out how to serve all of the static files for all the apps at the same time. I have looked into using the 'try_files' command for the static location, but cannot figure out how to see if it is working or not.

    Nginx conf - only serving static files for one app:

        server {
            listen 80;
            server_name example.com;
            server_name www.example.com;

            access_log /home/ubuntu/web/www.example.com/logs/access.log;
            error_log /home/ubuntu/web/www.example.com/logs/error.log;

            root /home/ubuntu/web/www.example.com/;

            location /static/ {
                alias /home/ubuntu/web/www.example.com/app/app1/static-files/;
            }

            location /media/ {
                alias /home/ubuntu/web/www.example.com/media/;
            }

            location /app1/ {
                include uwsgi_params;
                uwsgi_param SCRIPT_NAME /app1;
                uwsgi_modifier1 30;
                uwsgi_pass unix:///home/ubuntu/web/www.example.com/app1.sock;
            }

            location /app2/ {
                include uwsgi_params;
                uwsgi_param SCRIPT_NAME /app2;
                uwsgi_modifier1 30;
                uwsgi_pass unix:///home/ubuntu/web/www.example.com/app2.sock;
            }

            location /app3/ {
                include uwsgi_params;
                uwsgi_param SCRIPT_NAME /app3;
                uwsgi_modifier1 30;
                uwsgi_pass unix:///home/ubuntu/web/www.example.com/app3.sock;
            }

            # what to serve if upstream is not available or crashes
            error_page 400 /static/400.html;
            error_page 403 /static/403.html;
            error_page 404 /static/404.html;
            error_page 500 502 503 504 /static/500.html;

            # Compression
            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 5;
            gzip_proxied any;
            gzip_min_length 1100;
            gzip_buffers 16 8k;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            # Some version of IE 6 don't handle compression well on some mime-types,
            # so just disable for them
            gzip_disable "MSIE [1-6].(?!.*SV1)";

            # Set a vary header so downstream proxies don't send cached gzipped
            # content to IE6
            gzip_vary on;
        }

    Essentially I want to have something like (I know this won't work):

        location /static/ {
            alias /home/ubuntu/web/www.example.com/app/app1/static-files/;
            alias /home/ubuntu/web/www.example.com/app/app2/static-files/;
            alias /home/ubuntu/web/www.example.com/app/app3/static-files/;
        }

    or (where it can serve the static files based on the URI):

        location /static/ {
            try_files $uri $uri/ =404;
        }

    So basically, if I use try_files like above, is the problem in my project directory structure? Or am I totally off base on this and do I need to put each app in a subdomain instead of going this route? Thanks for any suggestions.

    TL;DR: I want to go to www.example.com/APP_NAME_HERE and have nginx serve the static location /home/ubuntu/web/www.example.com/app/APP_NAME_HERE/static-files/;
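    A sketch of one way to serve each project's static-files directory from a single location block, using a regex capture (this assumes each Django project's STATIC_URL is set to "/<app_name>/static/", which the question doesn't state):

        # Capture the app name and the rest of the path, then rebuild the
        # filesystem path from both captures; alias may use captures inside
        # a regex location.
        location ~ ^/(app1|app2|app3)/static/(.*)$ {
            alias /home/ubuntu/web/www.example.com/app/$1/static-files/$2;
        }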

    Read the article

  • How to configure DNS server to forward queries about particular domain AND all of its subdomains

    - by user71061
    I have a DNS server (a Linux box with BIND 9) which is authoritative for some domains and forwards all other queries to the external DNS server of my ISP. So far no problem. Now I want queries about some specific domains to be forwarded to my internal DNS server, e.g.:

        zone "some_domain" {
            type forward;
            forwarders { some_internal_dns_ip; };
        };

    So far still no problem, all works OK. But then I also want to forward some reverse DNS queries to my internal DNS, so I have added:

        zone "16.172.in-addr.arpa" {
            type forward;
            forwarders { some_internal_dns_ip; };
        };

    And this doesn't work as I expect. Queries about "16.172.in-addr.arpa" (for example 1.16.172.in-addr.arpa) are resolved correctly, but reverse queries about a full address (for example 1.1.16.172.in-addr.arpa) are not. I understand that my server should use some recursive query here, but I could not configure it. I have already tried adding the following options:

        recursion yes;
        allow-recursion { 127.0.0.1; };
        allow-recursion-on { 127.0.0.1; };

    but with no success. (I have used the loopback address here because I need this functionality only for my DNS host, and not for its clients.) Any suggestions?
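    One variation worth trying, as a sketch: BIND accepts a "forward only;" statement inside a forward zone, which keeps the server from falling back to its own iterative lookup for names under that zone when the forwarder's answer isn't what it expected.

        zone "16.172.in-addr.arpa" {
            type forward;
            forward only;
            forwarders { some_internal_dns_ip; };
        };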

    Read the article
