Search Results

Search found 18475 results on 739 pages for 'auth to local'.

Page 3/739 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Cakephp Auth Flash Messages losing style

    - by Michael
    My Auth flash messages were working earlier -- a bright green background with text in it, such as "You have successfully logged out". However, I have made quite a few changes to the site since then, and this green background has disappeared. What are the possible causes of this? (I've run my CSS through a validator -- so that doesn't seem to be the issue). Any other ideas? Thanks!

    Read the article

  • removing the .local in local network name serving

    - by Paul Nathan
    I have several local machines; I use an OS X 10.6 machine to do most of the serving. Annoyingly, it suffixes its network name with .local. How would I set up the system so that I could access it by its bare hostname? Server: Apache 2 (httpd), default install; I am accessing it with a web browser (surprisingly). Also, when I ping my OS X machine as name it doesn't work, but ping name.local does work.

    Read the article

  • configuring linux server to send traffic to local machines using local IP address

    - by gkdsp
    Two Linux servers, server1 and server2, are on the same local network (they also have access to an external network). Server2 has a local IP of 192.168.0.2 and a host name of host2.mydomain.com.

    Question 1: If an application on server1 sends traffic to server2 using the host name host2.mydomain.com, what determines whether this traffic is routed to server2 over the local or the external network?

    Question 2: To ensure that all traffic sent from server1 to server2 always uses the local network, could I simply include the following in the server1 /etc/hosts file?

        192.168.0.2 host2.mydomain.com

    The thinking being: if the servers are always on the same network, there should never be a need for server1 to send traffic to server2 via the external network (that I can think of, anyway). Is this done in practice, or is some other method preferred?
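
    A small sketch for checking what server1 will actually do (generic commands, not from the question): confirm that name resolution consults /etc/hosts before DNS, then see which address and route the name maps to:

        grep '^hosts' /etc/nsswitch.conf      # 'files' should come before 'dns'
        getent hosts host2.mydomain.com       # the address server1 will use for that name
        ip route get 192.168.0.2              # the interface and route actually taken

    If getent returns 192.168.0.2 and the route goes out the LAN interface, traffic to host2.mydomain.com stays on the local network; pinning it with a hosts entry like the one above is a common and perfectly acceptable approach.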

    Read the article

  • gpresult for local users on local machine?

    - by Jonas
    I would like to list the group policies for local users on a machine I'm setting up. However, when I run gpresult /v /u localmachine\user I get an error saying that I have to specify a server name, and when I run gpresult /v /s 127.0.0.1 /u localmachine\user I get the message that user credentials for the local system are ignored, and the group policies for the local administrator are returned instead. How do I get the settings for those users?

    Read the article

  • Kohana 3: Auth module

    - by Thomas
    Hi, I'm trying to learn Kohana's Auth module, but the login method always returns false.

    Controller:

        <?php defined('SYSPATH') OR die('No Direct Script Access');

        class Controller_Auth extends Controller {

            public function action_index()
            {
                if ($_POST) {
                    $this->login();
                }
                $this->template = View::factory('login');
                echo $this->template;
            }

            private function login()
            {
                $user = ORM::factory('user');
                $data = array('username' => 'wilson', 'password' => '123');
                if (!$user->login($data)) {
                    echo 'FAILED!';
                }
            }

            private function logout()
            {
            }
        }
        ?>

    Model:

        <?php defined('SYSPATH') or die('No direct script access.');

        class Model_User extends Model_Auth_User {
        }
        ?>

    Read the article

  • Trying to expand the contrib.auth.user model and add a "relationships" manager

    - by dotty
    I have the following model setup:

        from django.db import models
        from django.contrib.auth.models import User

        class SomeManager(models.Manager):
            def friends(self):
                # return friends bla bla bla
                pass

        class Relationship(models.Model):
            """(Relationship description)"""
            from_user = models.ForeignKey(User, related_name='from_user')
            to_user = models.ForeignKey(User, related_name='to_user')
            has_requested_friendship = models.BooleanField(default=True)
            is_friend = models.BooleanField(default=False)

            objects = SomeManager()

        relationships = models.ManyToManyField(User, through=Relationship, symmetrical=False)
        relationships.contribute_to_class(User, 'relationships')

    Here I take the User object and use contribute_to_class to add 'relationships' to it. The relationship shows up, but calling User.relationships.friends, which should run the friends() method, fails. Any ideas how I would do this? Thanks
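
    For what it's worth, a sketch of an alternative (the get_friends helper below is hypothetical, not part of the original code): the related manager that the ManyToManyField creates on User is built by Django and never sees SomeManager, which is why User.relationships.friends doesn't resolve. Querying the Relationship table directly and attaching a helper with add_to_class sidesteps that:

        # hypothetical helper, not part of the original models
        def get_friends(user):
            friend_ids = Relationship.objects.filter(
                from_user=user, is_friend=True
            ).values_list('to_user', flat=True)
            return User.objects.filter(id__in=friend_ids)

        User.add_to_class('get_friends', get_friends)

    After that, some_user.get_friends() returns the users this user has confirmed friendships with.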

    Read the article

  • Having problems using haml and rails3

    - by Victor Rodrigues
    After installing rails3, I'm experiencing problems when trying to use haml with it. I have the updated gem installed, and after rails PROJECT_NAME , I did haml --rails in its root. It apparently had worked fine, since I have haml folder inside plugins, init.rb, as expected. But when I try to rake, or rails server, I get: rake aborted! no such file to load -- haml With --trace I get this: ** Invoke default (first_time) ** Invoke test (first_time) ** Execute test ** Invoke test:units (first_time) ** Invoke db:test:prepare (first_time) ** Invoke db:abort_if_pending_migrations (first_time) ** Invoke environment (first_time) ** Execute environment rake aborted! no such file to load -- haml /usr/local/lib/ruby/gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:167:in `require' /usr/local/lib/ruby/gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:167:in `require' /usr/local/lib/ruby/gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:537:in `new_constants_in' /usr/local/lib/ruby/gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:167:in `require' RAILS_PROJECT_ROOT/vendor/plugins/haml/init.rb:5 /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/plugin.rb:49 /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/initializable.rb:25:in `instance_exec' /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/initializable.rb:25:in `run' /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/initializable.rb:55:in `run_initializers' /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/initializable.rb:54:in `each' /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/initializable.rb:54:in `run_initializers' /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/application.rb:71:in `initialize!' 
/usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/application.rb:112:in `initialize_tasks' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain' /usr/local/lib/ruby/1.8/monitor.rb:242:in `synchronize' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:607:in `invoke_prerequisites' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `each' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `invoke_prerequisites' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:596:in `invoke_with_call_chain' /usr/local/lib/ruby/1.8/monitor.rb:242:in `synchronize' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:607:in `invoke_prerequisites' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `each' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `invoke_prerequisites' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:596:in `invoke_with_call_chain' /usr/local/lib/ruby/1.8/monitor.rb:242:in `synchronize' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:607:in `invoke_prerequisites' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `each' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `invoke_prerequisites' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:596:in `invoke_with_call_chain' /usr/local/lib/ruby/1.8/monitor.rb:242:in `synchronize' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke' /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/test_unit/testing.rake:45 /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/test_unit/testing.rake:43:in `collect' /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/test_unit/testing.rake:43 /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain' /usr/local/lib/ruby/1.8/monitor.rb:242:in `synchronize' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:607:in `invoke_prerequisites' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `each' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `invoke_prerequisites' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:596:in `invoke_with_call_chain' /usr/local/lib/ruby/1.8/monitor.rb:242:in `synchronize' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke' 
/usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31 /usr/local/bin/rake:19:in `load' /usr/local/bin/rake:19
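
    A guess at the cause rather than a confirmed fix: Rails 3 loads gems through Bundler, so the vendored plugin's init.rb can only require 'haml' if the gem is declared in the Gemfile. A minimal sketch:

        # Gemfile
        gem 'haml'

    Then run bundle install; once the gem is on the load path, the vendor/plugins/haml install from haml --rails is generally unnecessary on Rails 3.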

    Read the article

  • /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID

    - by user1495181
    using ubuntu with net-snmp snmp work but in sys.log i see a lot of errors about snmpd.conf snmpd.conf: rwcommunity community 10.0.0.1 rwcommunity community 10.0.0.2 agentAddress udp:10.0.0.1:161 view systemonly included .1.3.6.1.2.1.1 view systemonly included .1.3.6.1.2.1.25.1 # Default access to basic system info rocommunity public default -V systemonly rouser authOnlyUser sysLocation Sitting on the Dock of the Bay sysContact Me <[email protected]> sysServices 72 proc mountd proc ntalkd 4 proc sendmail 10 1 disk / 10000 disk /var 5% includeAllDisks 10% load 12 10 5 trapsink localhost public iquerySecName internalUser rouser internalUser defaultMonitors yes linkUpDownNotifications yes master agentx errors: Sep 12 16:35:00 test snmpd[5485]: payload OID: prNames Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: prNames Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: payload OID: prErrMessage Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: prErrMessage Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: trigger OID: prErrorFlag Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID Sep 12 16:35:00 test snmpd[5485]: payload OID: memErrorName Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: memErrorName Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: payload OID: memSwapErrorMsg Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: memSwapErrorMsg Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: trigger OID: memSwapError Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID Sep 12 16:35:00 test snmpd[5485]: payload OID: extNames Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: extNames Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: payload OID: extOutput Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: extOutput Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: trigger OID: extResult Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID Sep 12 16:35:00 test snmpd[5485]: payload OID: dskPath Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: dskPath Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: 
payload OID: dskErrorMsg Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: dskErrorMsg Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: trigger OID: dskErrorFlag Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID Sep 12 16:35:00 test snmpd[5485]: payload OID: laNames Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: laNames Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: payload OID: laErrMessage Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: laErrMessage Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: trigger OID: laErrorFlag Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID Sep 12 16:35:00 test snmpd[5485]: payload OID: fileName Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: fileName Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: payload OID: fileErrorMsg Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: fileErrorMsg Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: trigger OID: fileErrorFlag Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID Sep 12 16:35:00 test snmpd[5485]: payload OID: snmperrErrMessage Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: snmperrErrMessage Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: trigger OID: snmperrErrorFlag Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID Sep 12 16:35:00 test snmpd[5485]: Turning on AgentX master support. Sep 12 16:35:00 test snmpd[5485]: net-snmp: 33 error(s) in config file(s)

    Read the article

  • Apache 403 Forbidden Error when accessing local web server using local IP address

    - by amjo324
    I have an odd problem when attempting to browse to pages stored on a local web server (Apache 2.2). The pages are served as expected when I browse to localhost or 127.0.0.1 on port 80. Yet when I attempt to browse to the same pages by referencing the local IP address (192.168.x.x), I receive an HTTP 403 (Forbidden) error. In essence, http://localhost:80 works but 192.168.x.x:80 doesn't, even though I'm specifying the IP of the local machine.

    You may be thinking "who cares? just use localhost". However, this is the first step in troubleshooting why I cannot remotely access these pages from different hosts on my LAN. I'm presuming this can't be a firewall issue as I'm only connecting to the local machine. Even so, I verified there were no iptables rules that could be having an effect.

    I've checked the Apache error logs and the corresponding line of relevance is:

        [Sat Oct 19 07:38:35 2013] [error] [client 192.168.x.x] client denied by server configuration: /var/www/

    I've inspected most of the Apache config files and they don't appear to differ from what you would expect with a default install. I can't see anything in apache2.conf that would be a problem, and httpd.conf is an empty file. This is an excerpt from /etc/apache2/sites-enabled/000-default:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

    Any insight as to where I can look next to find a solution? Thanks in advance.
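
    A small diagnostic sketch (generic commands, not from the post): since "client denied by server configuration" means some access rule matched, list the configuration Apache actually loads and every access directive in it; another enabled file may be overriding the excerpt above:

        apache2ctl -S                                            # which vhosts and config files are active
        grep -rniE 'allow from|deny from|order ' /etc/apache2/   # every access rule in the enabled config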

    Read the article

  • How to generate the right password format for Apache2 authentication in use with DBD and MySQL 5.1?

    - by Walkman
    I want to authenticate users for a folder from a MySQL 5.1 database with AuthType Basic. The passwords are stored in plain text (they are not really passwords, so it doesn't matter). The password format for Apache, however, only allows SHA1 or MD5 on Linux systems, as described here. How could I generate the right format with an SQL query? It seems the Apache format is based on the raw 20-byte binary digest, but MySQL's SHA1() function returns a 40-character hex string. My SQL query is something like this:

        SELECT CONCAT('{SHA}', BASE64_ENCODE(SHA1(access_key)))
        FROM user_access_keys
        INNER JOIN users ON user_access_keys.user_id = users.id
        WHERE name = %s

    where BASE64_ENCODE is a stored function (MySQL 5.1 doesn't have TO_BASE64 yet). This query returns a 61-byte BLOB, which is not the same format that Apache uses. How could I generate the same format? You can suggest other methods for this too. The point is that I want to authenticate users from a MySQL 5.1 database using plain text as the password.
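
    A likely fix, offered as a sketch rather than a verified answer: the {SHA} scheme expects base64 of the raw 20-byte digest, so convert MySQL's hex output back to binary with UNHEX() before base64-encoding it (assuming the stored BASE64_ENCODE function accepts binary input):

        SELECT CONCAT('{SHA}', BASE64_ENCODE(UNHEX(SHA1(access_key))))
        FROM user_access_keys
        INNER JOIN users ON user_access_keys.user_id = users.id
        WHERE name = %s;

    That should produce the same 33-character {SHA}... value that htpasswd -s generates for the same input.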

    Read the article

  • Local and public IPs on the same switch?

    - by Andrew
    It's all pretty much in the title. Is it possible to assign both local and public IPs to different nodes connected to the same switch? I have 4 servers with 2 gigabit ethernet ports each. I want one of each to have a public IP, and the remaining ports to have local IPs for server-to-server traffic. Sorry if it's a dumb question. I didn't see anything about it on here already.

    Read the article

  • Sendmail to local domain ignoring MX records (part 2)

    - by FractalizeR
    Hello. I have the exact problem, like in this post: http://serverfault.com/questions/25068/sendmail-to-local-domain-ignoring-mx-records I am also using email provider like GMail For Your Domain (which stores your mail and manages it). I am sending mail from my server directly, but receiving mail is done via Yandex (email provider). Since the server hosts forum, I prefer to send mail directly from it because using another mail provider can slow things. Also, when I send 300.000 emails to my subscribers, email provider will surely block me thinking I send spam. My DNS zone now is: ; ; GSMFORUM.RU ; $TTL 1H gsmforum.ru. SOA ns1.hc.ru. support.hc.ru. ( 2009122268 ; Serial 1H ; Refresh 30M ; Retry 1W ; Expire 1H ) ; Minimum gsmforum.ru. NS ns1.hc.ru. gsmforum.ru. NS ns2.hc.ru. @ A 79.174.68.223 *.gsmforum.ru. CNAME @ ns1 A 79.174.68.223 ns2 A 79.174.68.224 @ MX 10 mx.yandex.ru. mail CNAME domain.mail.yandex.net. yamail-xxxxxxxxx CNAME mail.yandex.ru. Server hostname is server.gsmforum.ru. May be this is the cause? Can someone explain the reason of the matter (the rules that make sendmail consider domain to be local)? Can I easily change *.gsmforum.ru. CNAME @ into *.gsmforum.ru. A 79.174.68.224 to solve this problem? [root@server ~]# cat /etc/mail/local-host-names localhost localhost.localdomain This server hosts gsmforum.ru so I cannot put it into another domain like David Mackintosh suggests. Putting domain in mailertable doesn't solve the problem also. sendmail -bt still shows, that address is local. DontProbeInterfaces is also set to true at sendmail config. M4 file follows: divert(-1)dnl dnl # dnl # This is the sendmail macro config file for m4. If you make changes to dnl # /etc/mail/sendmail.mc, you will need to regenerate the dnl # /etc/mail/sendmail.cf file by confirming that the sendmail-cf package is dnl # installed and then performing a dnl # dnl # make -C /etc/mail dnl # include(`/usr/share/sendmail-cf/m4/cf.m4')dnl VERSIONID(`setup for linux')dnl OSTYPE(`linux')dnl dnl # dnl # Do not advertize sendmail version. dnl # dnl define(`confSMTP_LOGIN_MSG', `$j Sendmail; $b')dnl dnl # dnl # default logging level is 9, you might want to set it higher to dnl # debug the configuration dnl # dnl define(`confLOG_LEVEL', `9')dnl dnl # dnl # Uncomment and edit the following line if your outgoing mail needs to dnl # be sent out through an external mail server: dnl # dnl define(`SMART_HOST', `smtp.your.provider')dnl dnl # define(`confDEF_USER_ID', ``8:12'')dnl dnl define(`confAUTO_REBUILD')dnl define(`confTO_CONNECT', `1m')dnl define(`confTRY_NULL_MX_LIST', `True')dnl define(`confDONT_PROBE_INTERFACES',`True') define(`PROCMAIL_MAILER_PATH', `/usr/bin/procmail')dnl define(`ALIAS_FILE', `/etc/aliases')dnl define(`STATUS_FILE', `/var/log/mail/statistics')dnl define(`UUCP_MAILER_MAX', `2000000')dnl define(`confUSERDB_SPEC', `/etc/mail/userdb.db')dnl define(`confPRIVACY_FLAGS', `authwarnings,novrfy,noexpn,restrictqrun')dnl define(`confAUTH_OPTIONS', `A')dnl dnl # dnl # The following allows relaying if the user authenticates, and disallows dnl # plaintext authentication (PLAIN/LOGIN) on non-TLS links dnl # dnl define(`confAUTH_OPTIONS', `A p')dnl dnl # dnl # PLAIN is the preferred plaintext authentication method and used by dnl # Mozilla Mail and Evolution, though Outlook Express and other MUAs do dnl # use LOGIN. Other mechanisms should be used if the connection is not dnl # guaranteed secure. dnl # Please remember that saslauthd needs to be running for AUTH. 
dnl # dnl TRUST_AUTH_MECH(`EXTERNAL DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl dnl define(`confAUTH_MECHANISMS', `EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl dnl # dnl # Rudimentary information on creating certificates for sendmail TLS: dnl # cd /usr/share/ssl/certs; make sendmail.pem dnl # Complete usage: dnl # make -C /usr/share/ssl/certs usage dnl # dnl define(`confCACERT_PATH', `/etc/pki/tls/certs')dnl dnl define(`confCACERT', `/etc/pki/tls/certs/ca-bundle.crt')dnl dnl define(`confSERVER_CERT', `/etc/pki/tls/certs/sendmail.pem')dnl dnl define(`confSERVER_KEY', `/etc/pki/tls/certs/sendmail.pem')dnl dnl # dnl # This allows sendmail to use a keyfile that is shared with OpenLDAP's dnl # slapd, which requires the file to be readble by group ldap dnl # dnl define(`confDONT_BLAME_SENDMAIL', `groupreadablekeyfile')dnl dnl # dnl define(`confTO_QUEUEWARN', `4h')dnl dnl define(`confTO_QUEUERETURN', `5d')dnl dnl define(`confQUEUE_LA', `12')dnl dnl define(`confREFUSE_LA', `18')dnl define(`confTO_IDENT', `0')dnl dnl FEATURE(delay_checks)dnl FEATURE(`no_default_msa', `dnl')dnl FEATURE(`smrsh', `/usr/sbin/smrsh')dnl FEATURE(`mailertable', `hash -o /etc/mail/mailertable.db')dnl FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl FEATURE(redirect)dnl FEATURE(always_add_domain)dnl FEATURE(use_cw_file)dnl FEATURE(use_ct_file)dnl dnl # dnl # The following limits the number of processes sendmail can fork to accept dnl # incoming messages or process its message queues to 20.) sendmail refuses dnl # to accept connections once it has reached its quota of child processes. dnl # dnl define(`confMAX_DAEMON_CHILDREN', `20')dnl dnl # dnl # Limits the number of new connections per second. This caps the overhead dnl # incurred due to forking new sendmail processes. May be useful against dnl # DoS attacks or barrages of spam. (As mentioned below, a per-IP address dnl # limit would be useful but is not available as an option at this writing.) dnl # dnl define(`confCONNECTION_RATE_THROTTLE', `3')dnl dnl # dnl # The -t option will retry delivery if e.g. the user runs over his quota. dnl # FEATURE(local_procmail, `', `procmail -t -Y -a $h -d $u')dnl FEATURE(`access_db', `hash -T<TMPF> -o /etc/mail/access.db')dnl FEATURE(`blacklist_recipients')dnl EXPOSED_USER(`root')dnl dnl # dnl # For using Cyrus-IMAPd as POP3/IMAP server through LMTP delivery uncomment dnl # the following 2 definitions and activate below in the MAILER section the dnl # cyrusv2 mailer. dnl # dnl define(`confLOCAL_MAILER', `cyrusv2')dnl dnl define(`CYRUSV2_MAILER_ARGS', `FILE /var/lib/imap/socket/lmtp')dnl dnl # dnl # The following causes sendmail to only listen on the IPv4 loopback address dnl # 127.0.0.1 and not on any other network devices. Remove the loopback dnl # address restriction to accept email from the internet or intranet. dnl # DAEMON_OPTIONS(`Name=MTA,Port=smtp') dnl # dnl # The following causes sendmail to additionally listen to port 587 for dnl # mail from MUAs that authenticate. Roaming users who can't reach their dnl # preferred sendmail daemon due to port 25 being blocked or redirected find dnl # this useful. dnl # dnl DAEMON_OPTIONS(`Port=submission, Name=MSA, M=Ea')dnl dnl # dnl # The following causes sendmail to additionally listen to port 465, but dnl # starting immediately in TLS mode upon connecting. Port 25 or 587 followed dnl # by STARTTLS is preferred, but roaming clients using Outlook Express can't dnl # do STARTTLS on ports other than 25. 
Mozilla Mail can ONLY use STARTTLS dnl # and doesn't support the deprecated smtps; Evolution <1.1.1 uses smtps dnl # when SSL is enabled-- STARTTLS support is available in version 1.1.1. dnl # dnl # For this to work your OpenSSL certificates must be configured. dnl # dnl DAEMON_OPTIONS(`Port=smtps, Name=TLSMTA, M=s')dnl dnl # dnl # The following causes sendmail to additionally listen on the IPv6 loopback dnl # device. Remove the loopback address restriction listen to the network. dnl # dnl DAEMON_OPTIONS(`port=smtp,Addr=::1, Name=MTA-v6, Family=inet6')dnl dnl # dnl # enable both ipv6 and ipv4 in sendmail: dnl # dnl DAEMON_OPTIONS(`Name=MTA-v4, Family=inet, Name=MTA-v6, Family=inet6') dnl # dnl # We strongly recommend not accepting unresolvable domains if you want to dnl # protect yourself from spam. However, the laptop and users on computers dnl # that do not have 24x7 DNS do need this. dnl # FEATURE(`accept_unresolvable_domains')dnl dnl # dnl FEATURE(`relay_based_on_MX')dnl dnl # dnl # Also accept email sent to "localhost.localdomain" as local email. dnl # LOCAL_DOMAIN(`localhost.localdomain')dnl dnl # dnl # The following example makes mail from this host and any additional dnl # specified domains appear to be sent from mydomain.com dnl # dnl MASQUERADE_AS(`mydomain.com')dnl dnl # dnl # masquerade not just the headers, but the envelope as well dnl # dnl FEATURE(masquerade_envelope)dnl dnl # dnl # masquerade not just @mydomainalias.com, but @*.mydomainalias.com as well dnl # dnl FEATURE(masquerade_entire_domain)dnl dnl # dnl MASQUERADE_DOMAIN(localhost)dnl dnl MASQUERADE_DOMAIN(localhost.localdomain)dnl dnl MASQUERADE_DOMAIN(mydomainalias.com)dnl dnl MASQUERADE_DOMAIN(mydomain.lan)dnl MAILER(smtp)dnl MAILER(procmail)dnl dnl MAILER(cyrusv2)dnl FEATURE(`dnsbl',`zen.spamhaus.org',`Rejected - your IP is blacklisted by http://www.spamhaus.org')

    Read the article

  • Enable Claims based Auth on a SP2010 website, after it has been provisioned

    - by Sahil Malik
    When you provision a web app in SP2010, you can choose to use Claims Based Auth or Classic Auth right through the GUI. However, after you have provisioned a web app, there is no GUI to switch from Classic to Claims based. The PowerShell script below will let you convert a SP2010 website to claims based auth after it has been provisioned:

        $w = Get-SPWebApplication "http://sp2010"
        $w.UseClaimsAuthentication = "True";
        $w.Update()

    The user running the above script should be a member of the SharePoint_Shell_Access role on the config DB, and a member of the WSS_ADMIN_WPG local group.

    Read the article

  • Clone remote CentOS server to local test server?

    - by dannymcc
    We have a dedicated server running CentOS 5.5. The server runs our Magento store and a basic PHP website with MySQL. I have a spare rack server in my office (HP ProLiant DL360 G4) that has more than enough storage space to hold a duplicate of our dedicated server. I would like to clone the dedicated server entirely and have a local duplicate. It wouldn't need to be kept in sync, because I can do that with Git. The reason I want to do this is simple: to learn more about the dedicated server and CentOS. Is this possible? I have SSH access to the dedicated server and obviously complete access to the local server.
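
    One possible approach, sketched with placeholder names (the hostname and target path are assumptions): install a minimal CentOS 5.5 on the local box, then pull the dedicated server's filesystem over SSH with rsync, skipping pseudo-filesystems:

        rsync -av --numeric-ids \
            --exclude=/proc --exclude=/sys --exclude=/dev \
            --exclude=/tmp --exclude=/mnt --exclude=/media \
            root@dedicated.example.com:/ /srv/dedicated-clone/

    Afterwards, fix up /etc/fstab, the network configuration and the boot loader on the copy before trying to boot from it.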

    Read the article

  • Dropbox alternative with local sync support?

    - by srid
    I am currently using Dropbox, and I've just decided to sync my huge (about 5 GB) iTunes library (music collection) in Dropbox. For that I must subscribe to their paid account, but before I do so, I'd like to evaluate the alternatives. Is there an alternative that does this?

    Local LAN sync (e.g. sync my huge music collection across computers on the local network without uploading/downloading it to the internet).

    The following would be nice (but not required): a native Android client, so music is available in the Android music app / on the SDHC card; selective sync, i.e. sync particular folders / exclude certain folders on certain computers, e.g. excluding the porn folder on work computers ;-)

    Just like Dropbox, it MUST work on 64-bit Windows, Linux and Mac. Know of any? (I am currently evaluating SpiderOak. Boy, was it complicated to use.)

    Read the article

  • Testing domains on intranet/local network?

    - by meder
    This may sound like a very silly question, but how could I set up domains (e.g. www.foo.com) on my local network? I know that all a domain is, is just a name registered with a name server; that name server has a zone record, and in the zone record there are several records, of which the A record is the most important in dictating where the lookup goes, i.e. which machine it should point to. I basically want to make it so that I can refer to my other computer/webserver as 'www.foo.com' and make my local sites accessible by that, and mess with virtual host records in Apache and zone records for the domain, except locally, so I can explore, fiddle around and learn instead of having to rely on the domains I own with a public registrar that I can only access through the internet. Once again I apologize if this is a silly question, or if I'm completely thinking backwards. Background information: my OS is Debian, and I'm a novice at Linux. I've done very small edits to zone records on a BIND9 server, but that's the extent of my networking experience.
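
    A minimal sketch of the simplest setup (the address and paths are assumptions): point the made-up name at the web server in /etc/hosts on every machine that should resolve it, and let Apache pick the site by name:

        # /etc/hosts on each client (and on the server itself)
        192.168.1.10    www.foo.com foo.com

        # /etc/apache2/sites-available/foo  (enable with a2ensite foo)
        <VirtualHost *:80>
            ServerName www.foo.com
            DocumentRoot /var/www/foo
        </VirtualHost>

    Once that works, a small BIND9 or dnsmasq zone on the Debian box gives you the same thing without editing every hosts file, and lets you experiment with real zone records.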

    Read the article

  • WDS - Access to network share via local user

    - by Kenny Bones
    When installing Windows 7 using WDS, a local user account is used during the setup after the main image of Win7 is installed. I've got an application that lives on a network share, not on the deployment share itself, that I want to install. But logically I get no access to that share via the local user account. Is it possible to do this somehow? Or do I have to move the share to the deployment share? Or possibly map the share to a separate drive or something?
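
    One possible workaround, sketched with placeholder names (server, share, account and package are all assumptions): map the share with explicit credentials from the task sequence or a first-logon script, run the install, then drop the mapping:

        net use Z: \\fileserver\apps /user:MYDOMAIN\deployinstall P@ssw0rd
        msiexec /i Z:\SomeApp\setup.msi /qn
        net use Z: /delete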

    Read the article

  • Windows 7, network shares, and authentication via local group instead of local user

    - by Donovan
    I have been doing some troubleshooting of my home network lately and have come to an odd conclusion that I was hoping to get some clarification on. I'm used to managing share permissions in a domain environment via groups instead of individual user accounts. I have a box at home running Windows 7 Ultimate and I decided to share some directories on that machine. I set it up to disallow guest access and require specifically granted permissions (password mode?). Anyway, after a whole bunch of time I figured out that even though the shares I created were allowed via a local group, I could not access them until I gave specific allowance to the intended user. I just didn't think I would have to do that. So here is the breakdown:

    Network is a Windows workgroup, not a homegroup or NT domain.
    PC_1 - Win 7 Ultimate - sharing in classic mode - user BOB - groups: Admins
    PC_2 - Win 7 Starter - client - user BOB - groups: Admins
    PC_3 - Win XP Pro - client - user BOB - groups: Admins

    The share on PC_1 granted permission only to the local group Administrators. Local user BOB on PC_1 was a member of Administrators. Both PC_2 and PC_3 could not browse the intended share on PC_1 because they were denied access. Also, no challenge was presented; they were simply denied. After adding BOB specifically to the intended share, everything works just fine. Remember, it's not an NT domain, just a workgroup. But still, shouldn't I be able to manage share permissions via groups instead of individual user accounts? D.
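
    One possible explanation, offered as an assumption rather than a confirmed diagnosis: with UAC enabled, Windows 7 hands out a filtered token to local accounts connecting over the network, so membership in the Administrators group is stripped from remote connections while an explicit grant to the user still matches. The documented switch for that behaviour is a registry value on the sharing machine (PC_1):

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f

    Granting the share to a custom local group that is not Administrators would be another way to test whether token filtering is what's biting here.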

    Read the article

  • Exim4 Disable local delivery?

    - by Robert Ross
    Hey all, I'm running Exim4 as my MTA and it works great for sending emails to addresses outside my hostname's domain. When I send an email to my Gmail address via the command line (sendmail [email protected], etc.) it works fine. When I send an email to my website's domain, which is also the hostname of the server, I'm assuming it just does local delivery... which won't work, because my email is received by another server (Google Apps). So how do I disable local delivery in Exim4? dpkg-reconfigure exim4-config did not give any real results.
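
    A sketch of the usual Debian-style fix, assuming the domain ended up listed as a local domain during dpkg-reconfigure: remove it from dc_other_hostnames in /etc/exim4/update-exim4.conf.conf (leave only names that really should be delivered locally), then regenerate the configuration:

        # /etc/exim4/update-exim4.conf.conf
        dc_other_hostnames=''

        update-exim4.conf
        invoke-rc.d exim4 restart

    With the domain no longer classed as local, Exim routes it by MX like any other remote domain, so the mail ends up at Google Apps.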

    Read the article

  • How to connect externally to local network from local network?

    - by ventrilobug
    I have a weird problem with a Ventrilo server running on my Linux machine. If I connect to the server from inside my local network, messages keep lagging, etc., but users joining the server from outside my network do not experience any problems. So the question is: how can I trick the software inside the local network into making me appear as if I'm joining from an external network? Do I need some proxy running on a machine in an external network? Does someone offer this kind of service? All the problems started when I changed my router, so I guess it has something to do with the new router, but I haven't found a solution by changing settings in my router, so I'm trying to fix this some other way.

    Read the article

  • Postfix Vacation.pl with local users

    - by Simiyu
    Hi, I am trying to set up the vacation.pl script on a mail server which has local users only (since there are only 10 users). I have installed the SquirrelMail plugin and the autorespond option is available to the users, but when an email is sent to the addresses, no auto-reply email is sent to the sender. There are also no logs in the /var/log/vacation folder which I created, nor in the normal log files. Most of the examples online refer to virtual users; can it work with local users, and if so, how? Regards, Arthur

    Read the article

  • Downloading content greater than 2000 bytes from local network hangs in browser on Windows XP

    - by artplastika
    We have a web application that runs under Tomcat on a local network. Our customers experience a strange problem using this web application. Let's say the Tomcat server runs on host1 and we open the webapp URL in a browser on host2. Any browser on host2 starts opening the page, and downloading of content "hangs" for hours. We've run a bunch of experiments and found that any content larger than 2000 bytes makes the browser request hang. Tried in Internet Explorer 8, Opera 12 and Firefox. At the same time, if a user opens the website from the internet, it works. Opening the webapp from the same host1 where Tomcat is running works normally. The local network is organized with a D-Link DGS-3120-48TC switch.

    Additional info: during the experiments we noticed XP Tweaker installed on the hosts. Network settings from that tool:

    MTU is manually set to 1500
    RWIN = 14600
    Support of TCP frames larger than 64 KB is on
    Time to Live = 32
    SACK is on
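
    A small diagnostic sketch, purely a suggestion (host names as in the question): the ~2000-byte threshold looks like an MTU/MSS problem, and a do-not-fragment ping from host2 shows whether full-size frames actually reach the server:

        ping -f -l 1472 host1

    On Windows, -f sets the don't-fragment bit and -l sets the payload size; 1472 bytes of payload plus 28 bytes of headers exactly fills a 1500-byte MTU. If that probe fails while smaller sizes succeed, something between the hosts (the switch configuration or the "TCP frames larger than 64 KB" tweak) is dropping full-size packets.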

    Read the article

  • Phantom Local Disks appearing in my drive list

    - by Paul
    I seem to have several phantom local disks mapped to different letters that are 0 bytes in size. Strangely, they do not show up when I view my drives through Windows Explorer, but if I open an application such as ACDSee Pro or MS Word and then go to open a file, I can see all these local disks mapped to different letters. This means that when I plug in my external hard disk it ends up mapped to letter R instead of its usual G, which messes up any programs I have pointing to it by default. How did they get there and, more importantly, how do I get rid of them? I'm on a Windows 7 Home Premium 32-bit machine.
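
    A quick way to see where the extra letters come from, offered as a suggestion: list substituted and mapped drives from a command prompt running as the same user (and elevation level) the applications run as:

        subst
        net use
        wmic logicaldisk get deviceid,drivetype,size

    Stale subst mappings (often left behind by an installer or login script, and only visible at one elevation level) can behave exactly like this; they can be removed with subst X: /d, and network mappings with net use X: /delete.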

    Read the article

  • Access device with local ip over internet

    - by Joe Perrin
    I apologize up front if this is the wrong place to post this question; it seemed like the best fit. I have a device connected to my local network which has an IP of 192.168.1.10 from my router. Additionally, I use a Windows 7 machine that runs some software called DirectUpdate, which makes the local IP of the Windows 7 machine (192.168.1.5) accessible from the internet via my domain (example.com) - basic dynamic DNS updating. I'd like to access the device from example.com. I am unsure how to do this, as I don't have any way to install DirectUpdate (or any software) on the device to make it available to the internet. Any insight here would be appreciated. Thank you.
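
    Two sketches, both assumptions rather than anything stated in the question. The simplest is a port-forwarding rule on the router that sends, say, external port 8080 to 192.168.1.10:80. If the router can't do that, the Windows 7 machine (already reachable as example.com) can relay the traffic itself:

        netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=8080 connectaddress=192.168.1.10 connectport=80

    After that, http://example.com:8080 reaches the device's port 80, provided the Windows firewall allows inbound connections on 8080 and the device actually speaks HTTP there.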

    Read the article

  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands-down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently. Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent. When you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the cookie storage limitation of 4KB, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now. Why use HTML5 Local Storage? I use HTML5 Local Storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (You can read more about the application and download the source code for the application here). When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries). All of the entries are stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser. The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches, by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application. Interacting with data on the client is almost always faster than interacting with the same data on the server. Retrieving Data from the Server In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps: Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:   using System.Data.Services; using System.Data.Services.Common; using System.Web; using JavaScriptReference.Models; namespace JavaScriptReference.Services { [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class EntryService : DataService<ReferenceDBEntities> { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { config.UseVerboseErrors = true; config.SetEntitySetAccessRule("*", EntitySetRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } // Define a change interceptor for the Products entity set. 
[ChangeInterceptor("Entries")] public void OnChangeEntries(Entry entry, UpdateOperations operations) { if (!HttpContext.Current.Request.IsAuthenticated) { throw new DataServiceException("Cannot update reference unless authenticated."); } } } }     The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceEntitites>. Because it derives from DataService<ReferenceEntities>, the data service exposes the contents of the ReferenceEntitiesDB database. In the code above, I defined a ChangeInterceptor to prevent un-authenticated users from making changes to the database. Anyone can retrieve data through the service, but only authenticated users are allowed to make changes. After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service: $.ajax({ dataType: "json", url: “/Services/EntryService.svc/Entries”, success: function (result) { var data = callback(result["d"]); } });     Notice that you must unwrap the data using result[“d”]. After you unwrap the data, you have a JavaScript array of the entries. I’m transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment: { "d" : [ { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)", "type": "ReferenceDBModel.Entry" }, "Id": 1, "Name": "Global", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "object", "ShortDescription": "Contains global variables and functions", "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n", "LastUpdated": "634298578273756641", "IsDeleted": false, "OwnerId": null }, { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)", "type": "ReferenceDBModel.Entry" }, "Id": 2, "Name": "eval(string)", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "function", "ShortDescription": "Evaluates and executes JavaScript code dynamically", "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>", "LastUpdated": "634298580913817644", "IsDeleted": false, "OwnerId": 1 } … ]} I worried about the amount of time that it would take to transfer the records. 
According to Google Chome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. 5 seconds is a small price to pay to avoid performing any server fetches of the data in the future. And here are the estimated times using different types of connections using Fiddler: Notice that using a modem, it takes 33 seconds to download the database. 33 seconds is a significant chunk of time. So, I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem. Adding Data to HTML5 Local Storage After the JavaScript entries are retrieved from the server, the entries are stored in HTML5 local storage. Here’s the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx You access local storage by accessing the windows.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script>   You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT I in Chrome) to view items added to local storage: After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script>   You only can add strings to local storage and not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this: function Storage() { this.get = function (name) { return JSON.parse(window.localStorage.getItem(name)); }; this.set = function (name, value) { window.localStorage.setItem(name, JSON.stringify(value)); }; this.clear = function () { window.localStorage.clear(); }; }   If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this: var store = new Storage(); // Add array to storage var products = [ {name:"Fish", price:2.33}, {name:"Bacon", price:1.33} ]; store.set("products", products); // Retrieve items from storage var products = store.get("products");   Modern browsers support the JSON object natively. If you need the script above to work with older browsers then you should download the JSON2.js library from: https://github.com/douglascrockford/JSON-js The JSON2 library will use the native JSON object if a browser already supports JSON. Merging Server Changes with Browser Local Storage When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser. Two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server. Only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser. 
The OData query to get the latest updates looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L) If you remove URL encoding, the query looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L) This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:   merge: function (oldEntries, newEntries) { // concat (this performs the add) oldEntries = oldEntries || []; var mergedEntries = oldEntries.concat(newEntries); // sort this.sortByIdThenLastUpdated(mergedEntries); // prune duplicates (this performs the update) mergedEntries = this.pruneDuplicates(mergedEntries); // delete mergedEntries = this.removeIsDeleted(mergedEntries); // Sort this.sortByName(mergedEntries); return mergedEntries; },   The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected). I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method. I wrote the unit tests using server-side JavaScript. I describe this approach to writing unit tests in this blog entry. The unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item. I quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column. However, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this: public override int SaveChanges(SaveOptions options) { var changes = this.ObjectStateManager.GetObjectStateEntries( EntityState.Modified | EntityState.Added | EntityState.Deleted); foreach (var change in changes) { var entity = change.Entity as IEntityTracking; if (entity != null) { entity.LastUpdated = DateTime.Now.Ticks; } } return base.SaveChanges(options); }   Notice that I assign Date.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted. Summary After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data then I recommend that you take advantage of this new feature included in the HTML5 standard.

    Read the article
