Search Results

Search found 28014 results on 1121 pages for 'local domain'.

Page 22/1121 | < Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • Using an old penalized domain for a new website

    - by MiladSafaei
    I had a website with 2 domains like these: firstdomain.com and first-domain.com. The main domain was first-domain.com, and the other one was 301-redirected to it. The main domain got a Google Penguin penalty some months ago. I uploaded the site to a new domain and removed the old domain from Google's index using the Remove URLs tool in Webmaster Tools. Now, I want to use firstdomain.com (which was redirected to the penalized domain) for a new and fresh website with new and perfect content. Is it probable that the history of this domain will affect the new website and harm its ranking?

    Read the article

  • Where can I safely search domain whois without worrying about the search provider parking the domain immediately after the search?

    - by Evan Plaice
    There are a lot of companies that provide domain whois lookups, but I've heard of a lot of people who had bad experiences where the domain was bought soon after the whois search and the price was increased dramatically. Where can I gain access to a domain whois where I don't have to worry about that happening? Update: Apparently, the official name for this practice is Domain Front Running, and some sites go as far as to create explicit policies stating that they don't do it. This is where a domain registrar or an intermediary (like a domain lookup site) mines the searches for possibly attractive domains and then either sells the data to a third party, or goes ahead and registers the name themselves ahead of you. In one case a registrar took advantage of what's known as the "grace period" and registered every single domain users looked up through them, holding on to them for 5 days before releasing them back into the pool at no cost to themselves. Source: domainwarning.com And apparently, after ICANN was notified of the practice, they wrote it off as a coincidence of random 'domain tasting'. Source: See for yourself

    Read the article

  • View Public Key in Domain Key for a Domain

    - by Josh
    Using Jeff's blog post I'm creating domain keys for my account. I wanted to verify the setup using the Get or Host command with BIND for Windows, but I'm lost on one of the commands. I can view the _domainkey TXT record with this command: host -t txt _domainkey.stackoverflow.com but I'm at a loss as to how I'd find the selector record. Jeff points out it can be anything before the period in "._domainkey.domain.com", but how would I list all records if I didn't know the exact query name? Is there a wildcard I could use to view all TXT or all records under this section?
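
    Ordinary DNS queries cannot enumerate names, so unless the server permits a zone transfer there is no wildcard that lists every selector; you have to query a specific selector name. A minimal PHP sketch, assuming a hypothetical selector called "mail" (substitute whatever selector the signer actually uses):

        <?php
        // Hypothetical selector and domain: replace with the real values.
        $selector = 'mail';
        $domain   = 'stackoverflow.com';

        // DomainKeys/DKIM publishes the public key at selector._domainkey.domain
        $name    = $selector . '._domainkey.' . $domain;
        $records = dns_get_record($name, DNS_TXT);

        if (empty($records)) {
            echo "No TXT record found for $name\n";
        } else {
            foreach ($records as $record) {
                echo $record['txt'], "\n";   // the published key data (p=...)
            }
        }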

    Read the article

  • Resolving host names to their domain name in an internal BIND domain

    - by Adam Plumb
    I'm setting up a domain on my home network for learning purposes, using BIND on CentOS to act as the name server. I've got the name server up and running as type master for my internal domain (plumbnicoll.family), and can do forward and reverse lookups from other computers in my LAN. For example, host office2.plumbnicoll.family correctly returns office2.plumbnicoll.family has address 192.168.1.3. What I'd like is to be able to resolve just office2 to its address, without needing to put .plumbnicoll.family at the end. Is this possible, or even desirable to do? I'm running a mixed environment at home with both Linux and Windows computers.

    Read the article

  • Downloading content greater than 2000 bytes from local network hangs in browser on Windows XP

    - by artplastika
    We have a web application that runs under Tomcat on a local network. Our customers experience a strange problem using this web application. Let's say the Tomcat server runs on host1 and we open the webapp URL in a browser on host2. Any browser on host2 starts opening the page, and downloading of content "hangs" for hours. We've run a bunch of experiments and found that any content larger than 2000 bytes makes the browser request hang. Tried in Internet Explorer 8, Opera 12 and Firefox. At the same time, if a user opens the website from the internet, it works. Opening the webapp from host1 itself, where Tomcat is running, also works normally. The local network is organized with a D-Link DGS-3120-48TC switch. Additional info: during the experiments we noticed XP Tweaker installed on the hosts. Network settings from that tool: MTU is manually set to 1500; RWIN = 14600; support for TCP frames larger than 64 KB is on; Time to Live = 32; SACK is on.

    Read the article

  • Phantom Local Disks appearing in my drive list

    - by Paul
    I seem to have several phantom Local Disks, mapped to different letters, that are 0 bytes in size. Strangely, they do not show up when I view my drives through Windows Explorer, but if I open an application such as ACDSee Pro or MS Word and then go to open a file, I can see all these Local Disks mapped to different letters. This means that when I plug in my external hard disk it ends up mapped to letter R instead of its usual G, which messes up any programs I have pointing to it by default. How did they get there and, more importantly, how do I get rid of them? I'm on a Windows 7 Home Premium 32-bit machine.

    Read the article

  • Access device with local IP over internet

    - by Joe Perrin
    I apologize up front if this is the wrong place to post this question; it seemed like the best fit. I have a device on my local network that gets an IP of 192.168.1.10 from my router. Additionally, I use a Windows 7 machine (192.168.1.5) that runs some software called DirectUpdate, which keeps my domain (example.com) pointed at that machine - basic dynamic DNS updating. I'd like to access the device via example.com, but I am unsure how to do this, as I don't have any way to install DirectUpdate (or any software) on the device itself to make it available to the internet. Any insight here would be appreciated. Thank you.

    Read the article

  • Set up a way of getting mysite.$domain

    - by jose silva
    Hi, I have several domains, only one website, and one database table for each domain. Example: website.us - data from the USA goes to the database table main_usa; website.co.uk - data from the UK goes to the database table main_uk. I only have one database, named after the website. I have just one site structure, with variables like $sql="select * from main_".$countrycode." where bla..bla... and many other variables to catch the domain extension, and so on. Now, instead of having one full website for each domain, how can I set up a script, and where do I put it, in order to detect the domain the user is on? In my server root, do I create something like website.$domain? Something like the OLX website, but for different purposes. I hope I made myself clear. Thank you.
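
    A sketch of the single-copy approach (not a drop-in script): point every domain at the same document root and let PHP derive the table suffix from the Host header at request time. The host-to-suffix map below is an assumption based on the domains in the question:

        <?php
        // Map each domain the site answers on to its database table suffix.
        $countryByHost = array(
            'website.us'        => 'usa',
            'www.website.us'    => 'usa',
            'website.co.uk'     => 'uk',
            'www.website.co.uk' => 'uk',
        );

        // Which domain did the visitor actually use?
        $host = strtolower($_SERVER['HTTP_HOST']);

        // Fall back to a default table if the host is not recognised.
        $countrycode = isset($countryByHost[$host]) ? $countryByHost[$host] : 'usa';

        // The rest of the site can keep building queries as in the question.
        $sql = "SELECT * FROM main_" . $countrycode . " WHERE ...";

    With this in place there is no need for a website.$domain copy per domain; the web server just needs each domain configured as an alias of the same site.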

    Read the article

  • Move site to new domain divided by language across subdomains

    - by mark
    I managed to find a nice domain for a fairly fledgling site of mine that actually hasn't been parked by scumbag squatters. Given the upcoming move, I'm thinking I'd take the opportunity to split the content across subdomains according to language, much like Wikipedia, for example. Current: www.old-domain.com/en/subject (English) and www.old-domain.com/subjecto (Spanish, the default, so no locale in the URL). Proposed: en.new-domain.com/subject and es.new-domain.com/subjecto. The advantage of doing this is that one of my keywords is fairly competitive, so I may wish to put a copy of my application on a Spanish slice in order to gain a few SERPs. Also pure vanity. Google's Webmaster Tools allows me to move to the new domain, and I can add the root domain and the subdomains, but forward to only one. I'll 301 from the old domain appropriately, but is there anything I should know about Webmaster Tools in this respect, where effectively I'm moving to two addresses? (Feel free to dissuade me from doing this if it's a bad idea in comments.) I've now asked this same question on Google's forums.

    Read the article

  • 301 redirect from a country specific domain

    - by Raj
    I originally started using a .do domain extension for my site, but later realized that this country specific domain would prevent us from appearing in search results for places outside of the Dominican Republic. We started using a .co domain extension and redirected all requests to the new domain using an HTTP 301. The "Crawl Stats" in Google Webmaster Tools shows me that the .co domain is being crawled, but the "Index Status" shows the number of pages indexed at 0. The "Crawl Stats" for the .do domain says that it's being crawled and the "Index Status" shows a number greater than 0. I also set a "Change Of Address" in Google Webmaster Tools to have the .do domain point to the new .co domain. We're still not appearing in search results at all even for very specific strings where I would expect to find us. Am I doing something wrong?

    Read the article

  • Amazon AWS EC2 instance, Elastic IP, domain name from external domain seller, and Google Apps for Email

    - by Sid
    We are hosting our site on an EC2 instance. Our Elastic IP is w.x.y.z and the public DNS is ec2-w-x-y-z.compute-1.amazonaws.com. We've bought a domain name, domainname.com, from a lesser-known domain name seller. We added an A record pointing domainname.com to w.x.y.z. Will this work, or do we also need a CNAME record pointing to the same address? We wanted to use Google Apps for email, so we adjusted the TXT/MX records according to the Google Apps instructions to be able to send/receive email using @domainname.com addresses. Have we got it right? More importantly, we came across issues relating to email sent from ec2-w-x-y-z.compute-1.amazonaws.com (our users can send email from their on-site accounts) going to spam, because reverse DNS points not to domainname.com but to ec2-w-x-y-z.compute-1.amazonaws.com. How can we fix this? We came across SPF records; do they provide a complete solution? We aren't sure how to use them. Can you help please? Thank you, Sid
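
    Two quick checks (a sketch with placeholder values, not a fix in itself) help confirm what receiving mail servers will see: what the Elastic IP reverse-resolves to, and whether an SPF policy is actually published on the domain. SPF lets receivers verify the sending host is authorised for @domainname.com addresses, but it does not change the reverse DNS; for that, the rDNS entry for the Elastic IP has to be set up through Amazon.

        <?php
        // Placeholder values: substitute the real Elastic IP and domain.
        $elasticIp = '203.0.113.10';
        $domain    = 'domainname.com';

        // Until rDNS is set up for the Elastic IP, this still returns the
        // ec2-...compute-1.amazonaws.com name that spam filters complain about.
        echo gethostbyaddr($elasticIp), "\n";

        // Is an SPF policy visible as a TXT record on the domain?
        $txtRecords = dns_get_record($domain, DNS_TXT);
        if ($txtRecords) {
            foreach ($txtRecords as $record) {
                if (strpos($record['txt'], 'v=spf1') === 0) {
                    echo "SPF record: ", $record['txt'], "\n";
                }
            }
        }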

    Read the article

  • Domain joined computer unable to access servers through VPN

    - by kscott
    Our servers are in a virtual off-site hosting center; our office has a VPN connection to the data center, but for reasons I don't understand we also have to connect with the Citrix Access Gateway (CAG) client in order to access the servers. I am a programmer with rather limited ops knowledge, including a weak grasp of networking and terminology, so bear with me. I was just given a new laptop, which is a 64-bit Windows 7 system, unlike my previous 32-bit Windows XP desktop, which was able to connect without issue. My laptop has been joined to the domain so that I log in with my AD credentials. I am able to connect to the CAG and get authenticated, and after doing this I can ping our servers and they resolve to the correct internal IP addresses, but I am unable to use Remote Desktop to the servers, connect to SQL servers through my local SQL Management Studio, navigate to them through the file system, or view any of our internal intranet websites - all of which I was able to do previously. I have tried turning off my Windows firewall and the problem remains, the DNS servers are set to the correct IPs of our domain controllers, and the ops guys here are a little stumped. Does anyone have any suggestions?

    Read the article

  • Why does sendmail resolve to the ISP domain?

    - by digital illusion
    I wish to set up a local mail server for debugging purposes using Fedora 15. I set up sendmail, but there is a problem. When I'm not connected to the internet, the local mail server delivers correctly (to localhost), and in /var/log/mail I see that I correctly delivered a mail to [email protected]: Jun 21 18:24:56 PowersourceII sendmail[6019]: p5LGOttt006019: [email protected], size=328, class=0, nrcpts=1, msgid=<[email protected]>, relay=adriano@localhost Jun 21 18:24:56 PowersourceII sendmail[6020]: p5LGOuSV006020: from=<[email protected]>, size=506, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=PowersourceII.localdomain [127.0.0.1] Jun 21 18:24:56 PowersourceII sendmail[6019]: p5LGOttt006019: [email protected], [email protected] (500/500), delay=00:00:01, xdelay=00:00:00, mailer=relay, pri=30328, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (p5LGOuSV006020 Message accepted for delivery) When I connect, NetworkManager fills in /etc/resolv.conf with: domain fastwebnet.it search fastwebnet.it localdomain nameserver 62.101.93.101 nameserver 83.103.25.250 Now sendmail no longer works and tries to send messages to my ISP domain, as seen in the log: Jun 21 18:40:02 PowersourceII sendmail[6348]: p5LGe1LV006348: [email protected], [email protected] (500/500), delay=00:00:01, xdelay=00:00:01, mailer=relay, pri=30327, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (p5LGe10n006352 Message accepted for delivery) Jun 21 18:40:02 PowersourceII sendmail[6354]: p5LGe10n006352: to=<[email protected]>, delay=00:00:01, xdelay=00:00:00, mailer=esmtp, pri=120651, relay=mx3.fastwebnet.it. [85.18.95.21], dsn=5.1.1, stat=User unknown As you can see, it tries to deliver a mail to [email protected] and fails. The setup works under other ISPs. How can I avoid the fastweb ISP DNS relay? Thank you

    Read the article

  • Domain names timing out after VPS IP change

    - by Fourjays
    I rent a CentOS 5 VPS from a UK-based provider, with DirectAdmin also installed. On Thursday night, they carried out planned maintenance to change the two IPs I had been assigned to two new ones. On Friday, after the change had taken place, I updated my domain name records to reflect the IP change. Since then, all of the domains pointing to the VPS are timing out. Additionally, DirectAdmin was also not responding, but that was resolved by running the ipswap scripts as found in the DirectAdmin knowledgebase. It did not fix my domains, though. I have contacted the VPS provider, but I have been waiting for a response for some time now. I have checked again and again, and all the IPs referenced in DirectAdmin are correct. If I go to the server IP in my browser it responds with "Apache is functioning normally." Email accounts on the server are also functioning correctly, but if I access a domain itself, it times out. Running a ping and a DNS lookup, I can confirm the nameserver IPs are correct. If I run a traceroute it reaches an IP that is similar to my VPS IPs (the last two blocks are different) before timing out (it never shows my server IP). I am relatively new to VPS management, so I don't have a vast wealth of experience with troubleshooting problems on them. I have checked all of the httpd configuration files, which don't seem to have any IP references in them at all. Looking in the Apache error logs, what errors there are do not coincide with times I have tried to access the site. Is this issue at my provider's end? Is there anything else I can check or test to rule out post-IP-change problems with my server configuration? It was all running fine prior to the IP change.

    Read the article

  • Domain Controller died, now get authentication boxes in IE for SDL Tridion 2009

    - by Rob Stevenson-Leggett
    We had a major network issue where our secondary domain controller (responsible for Win2k3 boxes) died and had to be rebuilt (I believe this is what happened; I am a developer, not a network admin). Anyway, I am working remotely via VPN at the moment, and since this happened I get an authentication box when trying to access certain areas of SDL Tridion via IE (Tridion 2009 SP1 is IE only). It seems like somewhere my credentials are not being passed correctly, or the ones cached on my laptop do not match the ones the Domain Controller has. This only seems to affect Windows 2003 servers. Our IT support thinks the only way to sort it out is to connect my laptop directly to the network. I am not planning to go to the office for a few weeks at least, and this issue means I have to work with Tridion via Remote Desktop. We thought changing the password on my account might work, but this didn't help. So basically my question is: is there any way I can reset my credential cache without having to reconnect to the network? Or is it perhaps IE that is causing the problem, since I can RDP to servers and use Tridion 2011 instances in other browsers fine? I am on Windows 7, using the SonicWall VPN client.

    Read the article

  • Authenticate domain-user credentials on unjoined virtual machine?

    - by bwerks
    Hi all, this question may sound silly, and perhaps a bit insane, but is there any way to run a process on a machine not joined to a domain using credentials from a user in that domain? In my case, I'm running virtual machines installed with release binaries from our build process, as well as Visual Studio. Visual Studio is there to debug our release binaries; however, it's being executed with VM-local user credentials. This means that it can't authenticate to our TFS deployment when executing "tf.exe view" to utilize our Source Server for debugging. Team Explorer manages to authenticate to TFS using a UI prompt, but I suspect that's because we supply it with the TFS deployment's URI and it's designed to display a prompt to facilitate workgroup scenarios; i.e. it's not like we're getting it for free. My instincts tell me the only way to authenticate on this VM is to join it to the domain or somehow form a one-way trust or something, but is there an easier way? For automation we're going to want to script this eventually, but I'm first surveying the feasibility of the thing.

    Read the article

  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands-down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently. Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent. When you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the cookie storage limitation of 4KB, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now. Why use HTML5 Local Storage? I use HTML5 Local Storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (You can read more about the application and download the source code for the application here). When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries). All of the entries are stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser. The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches, by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application. Interacting with data on the client is almost always faster than interacting with the same data on the server. Retrieving Data from the Server In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps: Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:   using System.Data.Services; using System.Data.Services.Common; using System.Web; using JavaScriptReference.Models; namespace JavaScriptReference.Services { [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class EntryService : DataService<ReferenceDBEntities> { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { config.UseVerboseErrors = true; config.SetEntitySetAccessRule("*", EntitySetRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } // Define a change interceptor for the Products entity set. 
[ChangeInterceptor("Entries")] public void OnChangeEntries(Entry entry, UpdateOperations operations) { if (!HttpContext.Current.Request.IsAuthenticated) { throw new DataServiceException("Cannot update reference unless authenticated."); } } } }     The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceEntitites>. Because it derives from DataService<ReferenceEntities>, the data service exposes the contents of the ReferenceEntitiesDB database. In the code above, I defined a ChangeInterceptor to prevent un-authenticated users from making changes to the database. Anyone can retrieve data through the service, but only authenticated users are allowed to make changes. After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service: $.ajax({ dataType: "json", url: “/Services/EntryService.svc/Entries”, success: function (result) { var data = callback(result["d"]); } });     Notice that you must unwrap the data using result[“d”]. After you unwrap the data, you have a JavaScript array of the entries. I’m transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment: { "d" : [ { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)", "type": "ReferenceDBModel.Entry" }, "Id": 1, "Name": "Global", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "object", "ShortDescription": "Contains global variables and functions", "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n", "LastUpdated": "634298578273756641", "IsDeleted": false, "OwnerId": null }, { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)", "type": "ReferenceDBModel.Entry" }, "Id": 2, "Name": "eval(string)", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "function", "ShortDescription": "Evaluates and executes JavaScript code dynamically", "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>", "LastUpdated": "634298580913817644", "IsDeleted": false, "OwnerId": 1 } … ]} I worried about the amount of time that it would take to transfer the records. 
According to Google Chome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. 5 seconds is a small price to pay to avoid performing any server fetches of the data in the future. And here are the estimated times using different types of connections using Fiddler: Notice that using a modem, it takes 33 seconds to download the database. 33 seconds is a significant chunk of time. So, I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem. Adding Data to HTML5 Local Storage After the JavaScript entries are retrieved from the server, the entries are stored in HTML5 local storage. Here’s the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx You access local storage by accessing the windows.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script>   You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT I in Chrome) to view items added to local storage: After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script>   You only can add strings to local storage and not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this: function Storage() { this.get = function (name) { return JSON.parse(window.localStorage.getItem(name)); }; this.set = function (name, value) { window.localStorage.setItem(name, JSON.stringify(value)); }; this.clear = function () { window.localStorage.clear(); }; }   If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this: var store = new Storage(); // Add array to storage var products = [ {name:"Fish", price:2.33}, {name:"Bacon", price:1.33} ]; store.set("products", products); // Retrieve items from storage var products = store.get("products");   Modern browsers support the JSON object natively. If you need the script above to work with older browsers then you should download the JSON2.js library from: https://github.com/douglascrockford/JSON-js The JSON2 library will use the native JSON object if a browser already supports JSON. Merging Server Changes with Browser Local Storage When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser. Two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server. Only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser. 
The OData query to get the latest updates looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L) If you remove URL encoding, the query looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L) This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:   merge: function (oldEntries, newEntries) { // concat (this performs the add) oldEntries = oldEntries || []; var mergedEntries = oldEntries.concat(newEntries); // sort this.sortByIdThenLastUpdated(mergedEntries); // prune duplicates (this performs the update) mergedEntries = this.pruneDuplicates(mergedEntries); // delete mergedEntries = this.removeIsDeleted(mergedEntries); // Sort this.sortByName(mergedEntries); return mergedEntries; },   The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected). I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method. I wrote the unit tests using server-side JavaScript. I describe this approach to writing unit tests in this blog entry. The unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item. I quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column. However, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this: public override int SaveChanges(SaveOptions options) { var changes = this.ObjectStateManager.GetObjectStateEntries( EntityState.Modified | EntityState.Added | EntityState.Deleted); foreach (var change in changes) { var entity = change.Entity as IEntityTracking; if (entity != null) { entity.LastUpdated = DateTime.Now.Ticks; } } return base.SaveChanges(options); }   Notice that I assign Date.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted. Summary After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data then I recommend that you take advantage of this new feature included in the HTML5 standard.

    Read the article

  • PHP OOP: Avoid Singleton/Static Methods in Domain Model Pattern

    - by sunwukung
    I understand the importance of Dependency Injection and its role in unit testing, which is why the following issue is giving me pause. One area where I struggle not to use the Singleton is the Identity Map/Unit of Work pattern (which keeps tabs on Domain Object state): //Not actual code, but it should demonstrate the point class Monitor{//singleton construction omitted for brevity static $members = array();//keeps record of all objects static $dirty = array();//keeps record of all modified objects static $clean = array();//keeps record of all clean objects } class Mapper{//queries database, maps values to object fields public function find($id){ if(isset(Monitor::$members[$id])){ return Monitor::$members[$id]; } $values = $this->selectStmt($id); //field mapping process omitted for brevity $Object = new Object($values); Monitor::$members[$id] = $Object; return $Object; } } $User = $UserMapper->find(1);//domain object is registered in Id Map $User->changePropertyX();//object is marked "dirty" in UoW // at this point, I can save by passing the Domain Object back to the Mapper $UserMapper->save($User);//object is marked clean in UoW //but a nicer API would be something like this $User->save(); //but if I want to do this - it has to make a call to the mapper/db somehow $User->getBlogPosts(); //or else have to generate specific collection/object-graphing methods in the mapper $UserPosts = $UserMapper->getBlogPosts(); $User->setPosts($UserPosts); Any advice on how you might handle this situation? I would be loath to pass/generate instances of the mapper/database access into the Domain Object itself to satisfy DI; at the same time, avoiding that results in lots of calls within the Domain Object to external static methods. Although I guess if I want "save" to be part of its behaviour then a facility to do so is required in its construction. Perhaps it's a problem with responsibility - the Domain Object shouldn't be burdened with saving. It's just quite a neat feature from the Active Record pattern; it would be nice to implement it in some way.
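
    One way to keep the convenient $User->save() API without reaching for static Monitor/Mapper calls is constructor injection: the mapper (which owns the identity map / unit of work) is handed to the domain object when it is built, so tests can substitute a mock. A rough sketch of the idea, reusing names from the question; whether a domain object should know how to persist itself at all is the Active Record versus Data Mapper trade-off the question touches on:

        <?php
        // Rough sketch only: the mapper is injected, not fetched statically.
        interface UserMapperInterface
        {
            public function markDirty($object);
            public function save($object);
            public function getBlogPosts($user);
        }

        class User
        {
            private $mapper;
            private $properties;

            public function __construct(UserMapperInterface $mapper, array $properties)
            {
                $this->mapper     = $mapper;
                $this->properties = $properties;
            }

            public function changePropertyX($value)
            {
                $this->properties['x'] = $value;
                $this->mapper->markDirty($this);   // unit-of-work bookkeeping stays in the mapper
            }

            public function save()
            {
                $this->mapper->save($this);        // nice API, still no static call
            }

            public function getBlogPosts()
            {
                return $this->mapper->getBlogPosts($this);
            }
        }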

    Read the article

  • ajax on parked domain

    - by Daryl
    I'm currently writing this jQuery, and for some reason (I don't know why) it works on the normal domain, but on the parked domain it doesn't. Normal domain - http://www.thefinishedbox.com Parked domain - http://www.tfbox.com If you scroll down to the colony news and hit the "click me" link you'll see it will retrieve data via jQuery ajax on the normal domain, but on the parked domain it won't. Here is the jQuery code I have so far (it's pretty standard): $(function() { $.ajaxSetup({ cache: false }); var ajax_load = "Load me plz"; // load() functions var loadUrl = "http://thefinishedbox.com/wp-content/themes/tfbox-beta/test.php"; $('.overlay').css({ opacity: '0' }); $('.toggle').click(function() { $('.overlay').css({ display: 'block' }).animate({ opacity: '1' }, 300); $(".overlay .content").html(ajax_load).load(loadUrl); return false; }); $('.close').click(function() { $('.overlay').animate({ opacity: '0' }, 300); $('.overlay').queue(function() { $(this).css({ display: 'none' }); $(this).dequeue(); }); return false; }); }); I'm a complete noob when it comes to ajax, so any help would be massively appreciated.
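
    One likely culprit: loadUrl is hard-coded to http://thefinishedbox.com/..., so when the page is served from the parked tfbox.com the .load() call becomes a cross-domain request and is blocked by the browser's same-origin policy. A sketch of a fix, assuming the parked domain serves the same WordPress install, is to let PHP emit a URL relative to whichever host served the page (the theme path is taken from the question):

        <?php
        // Root-relative path: the browser resolves it against the current host,
        // so the same markup works on thefinishedbox.com and on tfbox.com.
        $loadUrl = '/wp-content/themes/tfbox-beta/test.php';
        ?>
        <script type="text/javascript">
            var loadUrl = "<?php echo $loadUrl; ?>";   // used by the .load(loadUrl) call in the question
        </script>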

    Read the article

  • Sniff UNIX domain socket

    - by gonvaled
    I know that some process is writing to a certain unix domain socket (/var/run/asterisk/asterisk.ctl), but I do not know the PID of the sender. How can I find out who is writing to the socket? I have tried: sudo lsof /var/run/asterisk/asterisk.ctl but it just lists the owner of the socket. I would like to know who is writing to / reading from this socket, and I would also like to sniff the data. Is this possible?

    Read the article

  • How much time do I have to remove a domain controller that was lost using metadata cleanup?

    - by dasko
    I am wondering how long after a Domain Controller has been permanently lost from the domain (it died from a hardware perspective) I have before I need to clean up references to it using metadata cleanup. I am planning on doing it soon but am not sure if there is anything I should be concerned about. This dead or lost domain controller had no roles other than being a Domain Controller and hosting integrated Active Directory DNS. It also held no FSMO roles; it was just an older replica Domain Controller.

    Read the article

  • Using UK domain names on US hosting

    - by Steve Cooper
    Hi, all. I'm thinking of transferring my UK websites to a US hosting company, and they assure me they can host UK domains. However, as a bit of a n00b I don't understand the relationship between UK domain registration and US hosting. If anyone can explain this relationship I'd be very grateful. What pitfalls and problems should I be alert to? Many thanks.

    Read the article

  • How to reference a Domain Controller outside the local network?

    - by Adrian
    We have multiple servers scattered over different hosting providers. For learning, experimenting and, ultimately, production purposes, I set up one of them as a Domain Controller. That went well; most of our services are now authenticating via AD, which helps us a lot. What I want to do now is to simplify authentication for the multiple servers by making each of them look at the Domain Controller. This way, our devs can log into (Remote Desktop) the multiple servers with the same credentials from AD. I know I have to configure each server to look at the Domain Controller, but when I try to join one of the servers to the domain, it cannot find the Domain Controller, although the Domain Controller address is a valid, reachable internet sub-domain (as in "ad.ourcompany.com"). This is the detailed error message: Note: This information is intended for a network administrator. If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt. The following error occurred when DNS was queried for the service location (SRV) resource record used to locate an Active Directory Domain Controller for domain ad.ourcompany.com: The error was: "DNS name does not exist." (error code 0x0000232B RCODE_NAME_ERROR) The query was for the SRV record for _ldap._tcp.dc._msdcs.ad.ourcompany.com Common causes of this error include the following: - The DNS SRV records required to locate a AD DC for the domain are not registered in DNS. These records are registered with a DNS server automatically when a AD DC is added to a domain. They are updated by the AD DC at set intervals. This computer is configured to use DNS servers with the following IP addresses: 109.188.207.9 109.188.207.10 - One or more of the following zones do not include delegation to its child zone: ad.ourcompany.com ourcompany.com com . (the root zone) For information about correcting this problem, click Help. What am I missing? I'm an experienced dev, but a newbie sysadmin experimenting with new stuff. Disclaimer: All IP addresses and domains/subdomains were changed to preserve security. If by any chance you can still see private information, please let me know so that I can change it.
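
    The error text says the SRV lookup for _ldap._tcp.dc._msdcs.ad.ourcompany.com failed against the configured DNS servers (109.188.207.9 and 109.188.207.10), which evidently do not hold the records for the ad.ourcompany.com zone. A quick way to test from one of the member servers whether its configured resolver can actually locate the DC (a hedged sketch; the record name is taken straight from the error message):

        <?php
        // Ask this machine's resolver for the AD domain-controller locator record.
        $srvName = '_ldap._tcp.dc._msdcs.ad.ourcompany.com';

        $records = dns_get_record($srvName, DNS_SRV);

        if (empty($records)) {
            echo "No SRV record for $srvName: this host's DNS cannot locate the DC\n";
        } else {
            foreach ($records as $record) {
                // target and port identify the domain controller to contact.
                echo $record['target'], ':', $record['port'], "\n";
            }
        }

    If the lookup fails, pointing the member servers' DNS at a server that hosts the ad.ourcompany.com zone (typically the domain controller itself) is the usual prerequisite for joining them.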

    Read the article

  • The Active Directory integrated DNS zone _msdcs.COMPANY.LOCAL was not found.

    - by MadBoy
    Recently we renamed our domain from the single-label domain name COMPANY to COMPANY.LOCAL due to multiple problems. However, now I get this information from the BPA. Issue: The Active Directory integrated DNS zone _msdcs.COMPANY.LOCAL was not found. Impact: DNS queries for the Active Directory integrated zone _msdcs.COMPANY.LOCAL might fail. Resolution: Restore the Active Directory integrated DNS zone _msdcs.COMPANY.LOCAL. Clearly there is no _msdcs.COMPANY.LOCAL zone, only the old _msdcs.COMPANY; however, when I check under the COMPANY zone there is no _msdcs subdomain, but there is one inside COMPANY.LOCAL. So it seems to me that _msdcs.COMPANY.LOCAL should use the one that is inside COMPANY.LOCAL, should it not? Should I try to recreate it by hand (since it wasn't created on the domain rename)?

    Read the article
