Search Results

Search found 17958 results on 719 pages for 'local delivery'.

Page 279 of 719

  • db4o Replication System: NullReferenceException?

    - by virtualmic
    Hi, I am trying to do standard bi-directional replication as follows, but I get a NullReferenceException. This is a separate replication project; I did import the classes involved in the original project (such as Item, Category, etc.) into this replication project. What am I doing wrong? (If I debug using VS, I can see that changedObjects does contain all the changed objects; the problem seems to be inside the Replicate call.)

        IObjectContainer local = Db4oFactory.OpenFile(@"G:\Work\School\MIS\VINMIS\Inventory\bin\Debug\vin.db4o");
        IObjectContainer far = Db4oFactory.OpenFile(@"\\crs-lap\c$\vinmis\vin.db4o");
        IReplicationSession replication = Replication.Begin(local, far);
        IObjectSet changedObjects = replication.ProviderA().ObjectsChangedSinceLastReplication();
        while (changedObjects.HasNext())
            replication.Replicate(changedObjects.Next()); // Exception!!!
        replication.Commit();
        changedObjects = replication.ProviderB().ObjectsChangedSinceLastReplication();
        while (changedObjects.HasNext())
            replication.Replicate(changedObjects.Next());
        replication.Commit();

    Regards, Saurabh.
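
    For comparison, here is a minimal sketch of the same loop restructured so that both directions are replicated before a single commit, with a null guard around each changed object. It uses only the dRS calls already shown in the question; the null guard and the single final commit are assumptions about the failure, not a confirmed fix:

        // Sketch: same dRS session as above, both directions before one commit.
        IReplicationSession session = Replication.Begin(local, far);
        foreach (var provider in new[] { session.ProviderA(), session.ProviderB() })
        {
            IObjectSet changed = provider.ObjectsChangedSinceLastReplication();
            while (changed.HasNext())
            {
                object obj = changed.Next();
                if (obj == null) continue; // guard against the NRE seen above
                session.Replicate(obj);
            }
        }
        session.Commit(); // commit once, after both directions

    If the guard is what keeps this from throwing, the interesting question becomes why ObjectsChangedSinceLastReplication is handing back a null entry in the first place.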

  • Postfix aliases and duplicate e-mails, how to fix?

    - by macke
    I have aliases set up in Postfix, such as the following:

        [email protected]: [email protected], [email protected]

    When an email is sent to [email protected] and any of the recipients in that alias is cc:ed, which is quite common (i.e. "Reply all"), the e-mail is delivered in duplicate. For instance, if an e-mail is sent to [email protected] and [email protected] is cc:ed, it'll be delivered twice. According to the Postfix FAQ this is by design: Postfix sends e-mail in parallel without expanding the groups, which makes it faster than sendmail. Now that's all fine and dandy, but is it possible to configure Postfix to actually remove duplicate recipients before sending the e-mail? I've found a lot of posts from people all over the net who have the same problem, but I have yet to find an answer. If this is not possible to do in Postfix, is it possible to do somewhere along the way? I've tried educating my users, but it's rather futile, I'm afraid... I'm running Postfix on Mac OS X Server 10.6, with amavis set as content_filter and dovecot set as mailbox_command. I've tried setting up procmail as a content_filter for smtp delivery (as per the suggestion below), but I can't seem to get it right. For various reasons I can't replace the standard OS X configuration, meaning postfix, amavis and dovecot stay put. I can however add to it if I wish.

  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR.)

    A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server. Then there's the backup server: a Dell R715 with 2x 16-core AMD 62xx CPUs and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards and an Intel X520-SR dual-port 10GbE NIC. We were also sold CommVault Backup (non-NDMP).

    Here's where it gets really complicated. Spectra Logic and CommVault both sent respective engineers, who set up the library and the software. CommVault was running fine, in so far as the controller was working fine. The Dell server runs Ubuntu 12.04 Server with the MediaAgent for CommVault, and mounts our BlueArc NAS over NFS to a few mountpoints, like /home and some stuff in /mnt. When backing up from the NFS mountpoints, we were seeing ~290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), and the rated maximum write speed on the tape devices is 140MB/s per drive, so that's 492GB/hr (or double it, for the total throughput).

    So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), something like 1.5-2.5TB/hr write, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that 343GB/hr is the theoretical maximum for read performance on the NAS; we should then in theory be able to get that performance out of a) CommVault, and b) any other backup agent. Not the case: CommVault only ever gives me 200-250GB/hr throughput, so out of experimentation I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than CommVault, then we'd be able to say "**$.$ Refunds Plz $.$**".

    #-#-# Alas, I found a different problem with Bacula. CommVault seems pretty happy to read from one part of the mountpoint with one thread and stream that to a tape device, whilst reading from some other directory with the other thread and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried:

    - Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons
    - Setting Prefer Mounted Volumes = no in the Job Definition
    - Setting multiple devices in the Autochanger resource

    The documentation seems to be very single-drive-centric, and we feel a little like we've strapped a rocket to a hamster with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern tape solution.
    Configuration Files:

    *bacula-dir.conf*

        # Default Bacula Director Configuration file
        Director {                          # define myself
          Name = backuphost-1-dir
          DIRport = 9101                    # where we listen for UA connections
          QueryFile = "/etc/bacula/scripts/query.sql"
          WorkingDirectory = "/var/lib/bacula"
          PidDirectory = "/var/run/bacula"
          Maximum Concurrent Jobs = 20
          Password = "yourekiddingright"    # Console password
          Messages = Daemon
          DirAddress = 0.0.0.0
          #DirAddress = 127.0.0.1
        }

        JobDefs {
          Name = "DefaultFileJob"
          Type = Backup
          Level = Incremental
          Client = backuphost-1-fd
          FileSet = "Full Set"
          Schedule = "WeeklyCycle"
          Storage = File
          Messages = Standard
          Pool = File
          Priority = 10
          Write Bootstrap = "/var/lib/bacula/%c.bsr"
        }

        JobDefs {
          Name = "DefaultTapeJob"
          Type = Backup
          Level = Incremental
          Client = backuphost-1-fd
          FileSet = "Full Set"
          Schedule = "WeeklyCycle"
          Storage = "SpectraLogic"
          Messages = Standard
          Pool = AllTapes
          Priority = 10
          Write Bootstrap = "/var/lib/bacula/%c.bsr"
          Prefer Mounted Volumes = no
        }

        # Define the main nightly save backup job
        #   By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir
        Job {
          Name = "BackupClient1"
          JobDefs = "DefaultFileJob"
        }

        Job {
          Name = "BackupThisVolume"
          JobDefs = "DefaultTapeJob"
          FileSet = "SpecialVolume"
        }

        #Job {
        #  Name = "BackupClient2"
        #  Client = backuphost-12-fd
        #  JobDefs = "DefaultJob"
        #}

        # Backup the catalog database (after the nightly save)
        Job {
          Name = "BackupCatalog"
          JobDefs = "DefaultFileJob"
          Level = Full
          FileSet = "Catalog"
          Schedule = "WeeklyCycleAfterBackup"
          # This creates an ASCII copy of the catalog
          # Arguments to make_catalog_backup.pl are:
          #   make_catalog_backup.pl <catalog-name>
          RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
          # This deletes the copy of the catalog
          RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup"
          Write Bootstrap = "/var/lib/bacula/%n.bsr"
          Priority = 11                     # run after main backup
        }

        # Standard Restore template, to be changed by Console program
        #   Only one such job is needed for all Jobs/Clients/Storage ...
        Job {
          Name = "RestoreFiles"
          Type = Restore
          Client = backuphost-1-fd
          FileSet = "Full Set"
          Storage = File
          Pool = Default
          Messages = Standard
          Where = /srv/bacula/restore
        }

        FileSet {
          Name = "SpecialVolume"
          Include {
            Options {
              signature = MD5
            }
            File = /mnt/SpecialVolume
          }
          Exclude {
            File = /var/lib/bacula
            File = /nonexistant/path/to/file/archive/dir
            File = /proc
            File = /tmp
            File = /.journal
            File = /.fsck
          }
        }

        # List of files to be backed up
        FileSet {
          Name = "Full Set"
          Include {
            Options {
              signature = MD5
            }
            File = /usr/sbin
          }
          Exclude {
            File = /var/lib/bacula
            File = /nonexistant/path/to/file/archive/dir
            File = /proc
            File = /tmp
            File = /.journal
            File = /.fsck
          }
        }

        Schedule {
          Name = "WeeklyCycle"
          Run = Full 1st sun at 23:05
          Run = Differential 2nd-5th sun at 23:05
          Run = Incremental mon-sat at 23:05
        }

        # This schedule does the catalog. It starts after the WeeklyCycle
        Schedule {
          Name = "WeeklyCycleAfterBackup"
          Run = Full sun-sat at 23:10
        }

        # This is the backup of the catalog
        FileSet {
          Name = "Catalog"
          Include {
            Options {
              signature = MD5
            }
            File = "/var/lib/bacula/bacula.sql"
          }
        }

        # Client (File Services) to backup
        Client {
          Name = backuphost-1-fd
          Address = localhost
          FDPort = 9102
          Catalog = MyCatalog
          Password = "surelyyourejoking"    # password for FileDaemon
          File Retention = 30 days          # 30 days
          Job Retention = 6 months          # six months
          AutoPrune = yes                   # Prune expired Jobs/Files
        }

        # Second Client (File Services) to backup
        #   You should change Name, Address, and Password before using
        #Client {
        #  Name = backuphost-12-fd
        #  Address = localhost2
        #  FDPort = 9102
        #  Catalog = MyCatalog
        #  Password = "i'mnotjokinganddontcallmeshirley"   # password for FileDaemon 2
        #  File Retention = 30 days         # 30 days
        #  Job Retention = 6 months         # six months
        #  AutoPrune = yes                  # Prune expired Jobs/Files
        #}

        # Definition of file storage device
        Storage {
          Name = File
          # Do not use "localhost" here
          Address = localhost               # N.B. Use a fully qualified name here
          SDPort = 9103
          Password = "lalalalala"
          Device = FileStorage
          Media Type = File
        }

        Storage {
          Name = "SpectraLogic"
          Address = localhost
          SDPort = 9103
          Password = "linkedinmakethebestpasswords"
          Device = Drive-1
          Device = Drive-2
          Media Type = LTO5
          Autochanger = yes
        }

        # Generic catalog service
        Catalog {
          Name = MyCatalog
          # Uncomment the following line if you want the dbi driver
          # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport =
          dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63"
        }

        # Reasonable message delivery -- send most everything to email address
        #   and to the console
        Messages {
          Name = Standard
          mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
          operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
          mail = root@localhost = all, !skipped
          operator = root@localhost = mount
          console = all, !skipped, !saved
          # WARNING! the following will create a file that you must cycle from
          #   time to time as it will grow indefinitely. However, it will
          #   also keep all your messages if they scroll off the console.
          append = "/var/lib/bacula/log" = all, !skipped
          catalog = all
        }

        # Message delivery for daemon messages (no job).
        Messages {
          Name = Daemon
          mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r"
          mail = root@localhost = all, !skipped
          console = all, !skipped, !saved
          append = "/var/lib/bacula/log" = all, !skipped
        }

        # Default pool definition
        Pool {
          Name = Default
          Pool Type = Backup
          Recycle = yes                     # Bacula can automatically recycle Volumes
          AutoPrune = yes                   # Prune expired volumes
          Volume Retention = 365 days       # one year
        }

        # File Pool definition
        Pool {
          Name = File
          Pool Type = Backup
          Recycle = yes                     # Bacula can automatically recycle Volumes
          AutoPrune = yes                   # Prune expired volumes
          Volume Retention = 365 days       # one year
          Maximum Volume Bytes = 50G        # Limit Volume size to something reasonable
          Maximum Volumes = 100             # Limit number of Volumes in Pool
        }

        Pool {
          Name = AllTapes
          Pool Type = Backup
          Recycle = yes
          AutoPrune = yes                   # Prune expired volumes
          Volume Retention = 31 days        # one month
        }

        # Scratch pool definition
        Pool {
          Name = Scratch
          Pool Type = Backup
        }

        # Restricted console used by tray-monitor to get the status of the director
        Console {
          Name = backuphost-1-mon
          Password = "LastFMalsostorePasswordsLikeThis"
          CommandACL = status, .status
        }

    *bacula-sd.conf*

        # Default Bacula Storage Daemon Configuration file
        Storage {                           # definition of myself
          Name = backuphost-1-sd
          SDPort = 9103                     # Director's port
          WorkingDirectory = "/var/lib/bacula"
          Pid Directory = "/var/run/bacula"
          Maximum Concurrent Jobs = 20
          SDAddress = 0.0.0.0
          # SDAddress = 127.0.0.1
        }

        # List Directors who are permitted to contact Storage daemon
        Director {
          Name = backuphost-1-dir
          Password = "passwordslinplaintext"
        }

        # Restricted Director, used by tray-monitor to get the
        #   status of the storage daemon
        Director {
          Name = backuphost-1-mon
          Password = "totalinsecurityabound"
          Monitor = yes
        }

        Device {
          Name = FileStorage
          Media Type = File
          Archive Device = /srv/bacula/archive
          LabelMedia = yes;                 # lets Bacula label unlabeled media
          Random Access = Yes;
          AutomaticMount = yes;             # when device opened, read it
          RemovableMedia = no;
          AlwaysOpen = no;
        }

        Autochanger {
          Name = SpectraLogic
          Device = Drive-1
          Device = Drive-2
          Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
          Changer Device = /dev/sg4
        }

        Device {
          Name = Drive-1
          Drive Index = 0
          Archive Device = /dev/nst0
          Changer Device = /dev/sg4
          Media Type = LTO5
          AutoChanger = yes
          RemovableMedia = yes;
          AutomaticMount = yes;
          AlwaysOpen = yes;
          RandomAccess = no;
          LabelMedia = yes
        }

        Device {
          Name = Drive-2
          Drive Index = 1
          Archive Device = /dev/nst1
          Changer Device = /dev/sg4
          Media Type = LTO5
          AutoChanger = yes
          RemovableMedia = yes;
          AutomaticMount = yes;
          AlwaysOpen = yes;
          RandomAccess = no;
          LabelMedia = yes
        }

        # Send all messages to the Director,
        #   mount messages also are sent to the email address
        Messages {
          Name = Standard
          director = backuphost-1-dir = all
        }

    *bacula-fd.conf*

        # Default Bacula File Daemon Configuration file

        # List Directors who are permitted to contact this File daemon
        Director {
          Name = backuphost-1-dir
          Password = "hahahahahaha"
        }

        # Restricted Director, used by tray-monitor to get the
        #   status of the file daemon
        Director {
          Name = backuphost-1-mon
          Password = "hohohohohho"
          Monitor = yes
        }

        # "Global" File daemon configuration specifications
        FileDaemon {                        # this is me
          Name = backuphost-1-fd
          FDport = 9102                     # where we listen for the director
          WorkingDirectory = /var/lib/bacula
          Pid Directory = /var/run/bacula
          Maximum Concurrent Jobs = 20
          #FDAddress = 127.0.0.1
          FDAddress = 0.0.0.0
        }

        # Send all messages except skipped files back to Director
        Messages {
          Name = Standard
          director = backuphost-1-dir = all, !skipped, !restored
        }

  • emacs tramp performance

    - by Oleg Pavliv
    Is there a way to improve Emacs TRAMP performance? For me it's faster to open an external FTP client (FileZilla), transfer files to the local disk and open them in an external editor (Notepad) than to open them with Emacs. I use Emacs 23.1 under Windows XP. I tried different values of tramp-default-method (telnet, pscp, ftp); all of them have the same performance. Profiling results with elp-instrument-package are the following (I opened 3 remote files of 1.5 MB each):

        tramp-file-name-handler              1461   350.41599999    0.2398466803
        tramp-sh-file-name-handler           1461   350.02699999    0.2395804243
        tramp-send-command                    227   179.63400000    0.7913392070
        tramp-send-command-and-check          205   177.77600000    0.8672000000
        tramp-wait-for-regexp                 227   176.47800000    0.7774361233
        tramp-wait-for-output                 226   176.40000000    0.7805309734
        tramp-barf-unless-okay                 18   133.46699999    7.4148333333
        tramp-handle-insert-file-contents       3   132.046        44.015333333
        tramp-handle-file-local-copy            3   131.281        43.760333333
        tramp-accept-process-output          2375   112.95100000    0.0475583157

    So the actual file transfer takes 132 sec, about 1/3 of the total time. Why does it spend so much time in tramp-sh-file-name-handler? I tried to advise the function tramp-sh-file-name-handler to store and return cached results, but it does not work; probably this function has some side effects. Any ideas how to improve TRAMP performance?

  • emacs: Can I set compilation-error-regexp-alist in a mode hook fn?

    - by Cheeso
    I am trying to set compilation-error-regexp-alist in a function that I add as a mode hook:

        (defun cheeso-javascript-mode-fn ()
          (turn-on-font-lock)
          ;; ...bunch of other stuff...
          ;; for JSLINT
          (make-local-variable 'compilation-error-regexp-alist)
          (setq compilation-error-regexp-alist
                '(("^[ \t]*\\([A-Za-z.0-9_: \\-]+\\)(\\([0-9]+\\)[,]\\( *[0-9]+\\))\\( Microsoft JScript runtime error\\| JSLINT\\): \\(.+\\)$"
                   1 2 3)))
          ;;(make-local-variable 'compile-command)
          (setq compile-command
                (let ((file (file-name-nondirectory buffer-file-name)))
                  (concat "%windir%\\system32\\cscript.exe \\cheeso\\bin\\jslint.js " file))))

        (add-hook 'javascript-mode-hook 'cheeso-javascript-mode-fn)

    The mode hook runs, and the various things I set in the mode hook work; compile-command gets set. But for some reason the compilation-error-regexp-alist value doesn't take effect. If I later do M-x describe-variable on compilation-error-regexp-alist, it shows me the value I think it should have. But the errors in the compilation buffer don't get highlighted, and M-x next-error does not work. If I add the error regexp to compilation-error-regexp-alist via setq-default, like this:

        (setq-default compilation-error-regexp-alist
                      '(
                        ;; ... jslint regexp here ...
                        ;; ... many other regexps here ...
                        ))

    ...then it works: the errors in the compilation buffer get properly highlighted and M-x next-error functions as expected.

  • Updating multiple related tables in SQLite

    - by PerryJ
    Just some background, sorry it's so long-winded. I'm using the System.Data.SQLite ADO.NET adapter to create a local SQLite database, and this will be the only process hitting the database, so I don't need to worry about concurrency. I'm building the database from various sources and don't want to build this all in memory using DataSets or DataAdapters or anything like that; I want to do it using SQL (DbCommands). I'm not very good with SQL and a complete noob in SQLite. I'm basically using SQLite as a local database / save-file structure. The database has a lot of related tables, and the data has nothing to do with people or regions or districts, but to use a simple analogy, imagine:

    - Region table with auto-increment RegionID, RegionName column and various optional columns.
    - District table with auto-increment DistrictID, DistrictName, RegionID, and various optional columns.
    - Person table with auto-increment PersonID, PersonName, DistrictID, and various optional columns.

    So I get some data representing RegionName, DistrictName, PersonName, and other Person-related data. The Region, District and/or Person may or may not have been created at this point. Once again, not being the greatest with this, my thoughts would be something like:

    - Check to see if the Region exists; if so, get the RegionID, else create it and get the RegionID.
    - Check to see if the District exists; if so, get the DistrictID, else create it (adding in the RegionID from above) and get the DistrictID.
    - Check to see if the Person exists; if so, get the PersonID, else create it (adding in the DistrictID from above) and get the PersonID.
    - Update the Person with the rest of the data.

    In MS SQL Server I would create a stored procedure to handle all this. The only way I can see to do this with SQLite is a lot of commands. So I'm sure I'm not getting this. I've spent hours looking around on various sites but just don't feel like I'm going down the right road. Any suggestions would be greatly appreciated.
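
    For what it's worth, here is a minimal C# sketch of the get-or-create step for one level of the hierarchy, using standard System.Data.SQLite calls. The table and column names follow the analogy above, the GetOrCreateDistrict helper name is made up, and it assumes DistrictName carries a UNIQUE constraint so that INSERT OR IGNORE is safe:

        // Sketch: returns the DistrictID, inserting the row first if needed.
        // Assumes DistrictName is declared UNIQUE in the schema.
        static long GetOrCreateDistrict(SQLiteConnection conn, string name, long regionId)
        {
            using (var insert = new SQLiteCommand(
                "INSERT OR IGNORE INTO District (DistrictName, RegionID) VALUES (@name, @region)", conn))
            {
                insert.Parameters.AddWithValue("@name", name);
                insert.Parameters.AddWithValue("@region", regionId);
                insert.ExecuteNonQuery();
            }
            using (var select = new SQLiteCommand(
                "SELECT DistrictID FROM District WHERE DistrictName = @name", conn))
            {
                select.Parameters.AddWithValue("@name", name);
                return (long)select.ExecuteScalar();
            }
        }

    Chaining three such helpers (Region, then District, then Person) inside a single SQLiteTransaction keeps the whole cascade down to a handful of short commands, which is roughly what the stored procedure would have done in SQL Server.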

  • How to run autoconf on OS X linking to a specified OS SDK

    - by kroko
    Hello! I have written a small command line tool that includes some GPL code. Everything runs smoothly, using OS X 10.6. The external code used has a config.h header file, made by calling autoconf. I'd like to deploy the tool to different OS versions, thus config.h could look like:

        // config.h
        #if MAC_OS_X_VERSION_MAX_ALLOWED == MAC_OS_X_VERSION_10_4
        // autoconf-created config.h content for 10.4 comes here
        #elif MAC_OS_X_VERSION_MAX_ALLOWED == MAC_OS_X_VERSION_10_5
        // autoconf-created config.h content for 10.5 comes here
        #elif MAC_OS_X_VERSION_MAX_ALLOWED == MAC_OS_X_VERSION_10_6
        // autoconf-created config.h content for 10.6 comes here
        #else
        #error "muahahaha"
        #endif

    What is the way to tell autoconf to use /Developer/SDKs/MacOSX10.XXXX.sdk/usr/ while generating config.h? In order to test it, I ran the following before calling ./configure on OS 10.6:

        #!/bin/bash
        # for 10.6
        export CC="/usr/bin/gcc-4.2"
        export CXX="/usr/bin/g++-4.2"
        export MACOSX_DEPLOYMENT_TARGET="10.6"
        export OSX_SDK="/Developer/SDKs/MacOSX10.6.sdk"
        export OSX_CFLAGS="-isysroot $OSX_SDK -arch x86_64 -arch i386"
        export OSX_LDFLAGS="-Wl,-syslibroot,$OSX_SDK -arch x86_64 -arch i386"
        export CFLAGS=$OSX_CFLAGS
        export CXXFLAGS=$OSX_CFLAGS
        export LDFLAGS=$OSX_LDFLAGS

    I know that the configure script looks for libintl.h, which is not in the out-of-the-box 10.6 SDK, but is present on the local machine under /usr/local. The config.h header file produced with the method described above says that libintl.h is on the system, thus "linking" autoconf only to the SDK has failed. Is it happening because... "we don't have a crystal ball"? :) Or is it an incorrect setup / flag export before running autoconf, which I hope is the case? If so, then what would be the correct way to set up the environment variables? Many thanks in advance.

  • What does this svn2git error mean?

    - by Hisham
    I am trying to import my repository from SVN to Git using svn2git, but it seems to fail when it hits a branch. What's the problem?

        Found possible branch point: https://s.aaa.com/repo/trunk/project => https://s.aaa.com/repo/branches/project-beta1.0, 128
        Use of uninitialized value in substitution (s///) at /opt/local/libexec/git-core/git-svn line 1728.
        Use of uninitialized value in concatenation (.) or string at /opt/local/libexec/git-core/git-svn line 1728.
        refs/remotes/trunk: 'https://s.aaa.com/repo' not found in ''
        Running command: git branch -l --no-color
        * master
        Running command: git branch -r --no-color
          trunk
        Running command: git checkout trunk
        Note: checking out 'trunk'.

        You are in 'detached HEAD' state. You can look around, make experimental
        changes and commit them, and you can discard any commits you make in this
        state without impacting any branches by performing another checkout.

        If you want to create a new branch to retain commits you create, you may
        do so (now or later) by using -b with the checkout command again. Example:

          git checkout -b new_branch_name

        HEAD is now at f4e6268... Changing svn repository in cap files
        Running command: git branch -D master
        Deleted branch master (was f4e6268).
        Running command: git checkout -f -b master
        Switched to a new branch 'master'
        Running command: git gc
        Counting objects: 450, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (368/368), done.
        Writing objects: 100% (450/450), done.
        Total 450 (delta 63), reused 450 (delta 63)

  • Emails going to Junk for Hotmail recipients

    - by David George
    We send daily mass emails to our customers (30,000+ emails per day). We have problems with Hotmail users receiving our emails. Sometimes the email goes to the Junk folder, but often it will go to their inbox with the content blocked, so the user sees a message saying "This email was blocked and may be dangerous". If an email is sent to Gmail it is usually not blocked, but it does show up as from "Unknown" instead of the company. Please be advised I've done the following:

    1. No RBLs (checked on http://multirbl.valli.org/)
    2. We do have SPF records published
    3. We do have reverse DNS set up
    4. Our company even signed up for the Junk Mail Reports Program at Hotmail

    Here is a sample header. I've noticed the X-SID-Result and the X-AUTH-Result both FAIL every time at Hotmail:

        X-Message-Delivery: Vj0xLjE7dXM9MDtsPTA7YT0wO0Q9MTtTQ0w9MQ==
        X-Message-Status: n:0
        X-SID-Result: Fail
        X-AUTH-Result: FAIL
        X-Message-Info: JGTYoYF78jFqAaC29fBlDlD/ZI36+S6WoFmkQN10UxWFe1xLHhP+rDthGRZM87uHYM926hUBS+s0q46Yx9y6jdurhN6fx0bK
        Received: from privatecompany.com ([WanIPAddress]) by col0-mc3-f30.Col0.hotmail.com with Microsoft SMTPSVC(6.0.3790.4675); Wed, 5 May 2010 08:41:27 -0700
        X-AuditID: ac10fe93-000013bc00000534-46-4be191a1618e
        Received: from INTERNAL-Email-SERVER ([InternalIPAddress]) by privatecompany.com with Microsoft SMTPSVC(6.0.3790.4675); Wed, 5 May 2010 11:41:21 -0400
        From: Private Company, Inc. <[email protected]>
        To: [email protected]
        Message-Id: <[email protected]>
        Subject:
        Date: Wed, 5 May 2010 11:42:46 -0400
        MIME-Version: 1.0
        Reply-To: [email protected]
        Content-Type: text/plain; charset="ISO-8859-1"
        Content-Transfer-Encoding: 8bit
        X-Brightmail-Tracker: AAAAAA==
        Return-Path: [email protected]
        X-OriginalArrivalTime: 05 May 2010 15:41:27.0837 (UTC) FILETIME=[6D06E4D0:01CAEC69]

  • Passenger: "Missing these required gems redgreen"

    - by Michael Stum
    Hello, total Ruby newbie here, trying to set up a Rails/MongoDB application on Mac OS X Snow Leopard. I installed Ruby 1.9.1 and RubyGems 1.3.7; which ruby and which gem point to the same directory. I'm using the Snow Leopard built-in Apache and Passenger 2.2.11, with the Rails template from the mongo site, which seems to work okay overall. The exact error that Passenger gives me is:

        /Users/User/Sites/feuerapp/vendor/rails/railties/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement

        **Notice: C extension not loaded. This is required for optimum MongoDB Ruby driver performance.
        You can install the extension as follows:
          gem install bson_ext

        If you continue to receive this message after installing, make sure that the
        bson_ext gem is in your load path and that the bson_ext and mongo gems are of the same version.

        Missing these required gems:
          redgreen

        You're running:
          ruby 1.9.1.376 at /usr/local/bin/ruby
          rubygems 1.3.7 at /Users/User/.gem/ruby/1.9.1, /usr/local/lib/ruby/gems/1.9.1

        Run "rake gems:install" to install the missing gems.

    The weird thing is that redgreen is installed and looks fine to me:

        Dahlia:feuerapp User$ ls -la vendor/gems/
        total 0
        drwxr-xr-x   7 User  staff  238 May 18 22:56 .
        drwxr-xr-x   5 User  staff  170 May 18 23:00 ..
        drwxr-xr-x  11 User  staff  374 May 18 22:56 factory_girl-1.2.4
        drwxr-xr-x  11 User  staff  374 May 18 22:56 mocha-0.9.8
        drwxr-xr-x   7 User  staff  238 May 18 22:56 mongo_mapper-0.7.6
        drwxr-xr-x   7 User  staff  238 May 18 22:56 redgreen-1.2.2
        drwxr-xr-x  11 User  staff  374 May 18 22:56 shoulda-2.10.3

    Commenting out this line in environment.rb "solves" the issue, but that's not really what I want:

        config.gem 'redgreen'

    I don't understand anything of gems yet, but from my limited understanding, redgreen should be there and found?

  • iPhone - UIImage imageScaledToSize Memory Issue

    - by bbullis21
    I have done research and tried several times to release the UIImage memory and have been unsuccessful. I saw one other post on the internet where someone else was having this same issue. Every time imageScaledToSize is called, the ObjectAlloc count continues to climb. In the following code I am pulling a local image from my resource directory and resizing it with some blur. Can someone provide some help on how to release the memory of the UIImages called scaledImage and labelImage? This is the chunk of code where iPhone Instruments has shown the ObjectAlloc build-up; it is called several times with an NSTimer.

        // Get local image from inside resources
        NSString *fileLocation = [[NSBundle mainBundle] pathForResource:imgMain ofType:@"jpg"];
        NSData *imageData = [NSData dataWithContentsOfFile:fileLocation];
        UIImage *blurMe = [UIImage imageWithData:imageData];

        // Resize and blur image
        UIImage *scaledImage = [blurMe _imageScaledToSize:CGSizeMake(blurMe.size.width / dblBlurLevel, blurMe.size.width / dblBlurLevel) interpolationQuality:3.0];
        UIImage *labelImage = [scaledImage _imageScaledToSize:blurMe.size interpolationQuality:3.0];
        imgView.image = labelImage;

  • Remote DLL Registration without access to HKEY_CLASSES_ROOT

    - by mohlsen
    We have a legacy VB6 application that updates itself on startup by pulling down the latest files and registering the COM components. This works for both local (regsvr32) ActiveX COM components and remote (clireg32) ActiveX COM components registered in COM+ on another machine. New requirements prevent us from writing to HKEY_LOCAL_MACHINE (HKLM) for security reasons, which is what obviously happens by default when calling regsvr32 and clireg32. We have come up with a way to register the local COM component under HKEY_CURRENT_USER\Software\Classes (HKCU) using the RegOverridePredefKey Windows API function. This works by redirecting the registry writes to the HKCU location; when the COM components are instantiated, Windows first looks in HKCU before looking for component information in HKLM. This replaces what regsvr32 was doing. The problem we are experiencing is that when we attempt to register a VBR/TLB using clireg32, the registration also writes keys to HKEY_LOCAL_MACHINE. Is there a way to redirect clireg32.exe to register components under HKEY_CURRENT_USER? Are there any other methods that would allow us to register these COM+ components on client machines with limited security access? Our only solution at this time would be to manually write the registration information to the registry, but that is not ideal and would be a maintenance issue.
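
    For reference, a rough C# sketch of the regsvr32-replacement technique described above: redirect HKEY_CLASSES_ROOT to HKCU\Software\Classes with RegOverridePredefKey, then invoke the DLL's own DllRegisterServer entry point. The P/Invoke declarations are the standard advapi32/kernel32 ones; error handling, RegCloseKey and FreeLibrary are omitted, and this only illustrates the local (regsvr32) half, not the open clireg32 question:

        // Illustrative only; treat as a sketch, not production code.
        using System;
        using System.Runtime.InteropServices;

        static class UserModeRegistration
        {
            static readonly IntPtr HKEY_CLASSES_ROOT = new IntPtr(unchecked((int)0x80000000));
            static readonly IntPtr HKEY_CURRENT_USER = new IntPtr(unchecked((int)0x80000001));

            [DllImport("advapi32.dll")]
            static extern int RegOverridePredefKey(IntPtr hKey, IntPtr hNewHKey);
            [DllImport("advapi32.dll", CharSet = CharSet.Auto)]
            static extern int RegCreateKey(IntPtr hKey, string subKey, out IntPtr result);
            [DllImport("kernel32.dll", CharSet = CharSet.Auto)]
            static extern IntPtr LoadLibrary(string path);
            [DllImport("kernel32.dll")]
            static extern IntPtr GetProcAddress(IntPtr module, string name);

            [UnmanagedFunctionPointer(CallingConvention.StdCall)]
            delegate int DllRegisterServer();

            public static void Register(string comDllPath)
            {
                IntPtr classes;
                RegCreateKey(HKEY_CURRENT_USER, @"Software\Classes", out classes);
                RegOverridePredefKey(HKEY_CLASSES_ROOT, classes);     // redirect HKCR writes to HKCU
                IntPtr dll = LoadLibrary(comDllPath);
                var register = (DllRegisterServer)Marshal.GetDelegateForFunctionPointer(
                    GetProcAddress(dll, "DllRegisterServer"), typeof(DllRegisterServer));
                register();                                           // same entry point regsvr32 calls
                RegOverridePredefKey(HKEY_CLASSES_ROOT, IntPtr.Zero); // restore the default mapping
            }
        }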

  • VS2010 compiles solution without errors, msbuild fails: "fatal error CS0002: Unable to load message string from resources"

    - by Nathan Ridley
    I'm having a lot of trouble trying to track down the cause of this error message. I have a large Visual Studio 2010 solution which compiles without error on my local machine, but on the build server msbuild fails on one of the projects with the error:

        fatal error CS0002: Unable to load message string from resources

    Here's the red error section at the end:

        Build FAILED.
        "C:\TeamCity\buildAgent\work\85eff164854b9e67\Libraries\Domainface.Proxy.Common\Domainface.Proxy.Common.csproj" (default target) (9) ->
        (CoreCompile target) ->
          CSC : fatal error CS0002: Unable to load message string from resources. [C:\TeamCity\buildAgent\work\85eff164854b9e67\Libraries\Domainface.Proxy.Common\Domainface.Proxy.Common.csproj]
            0 Warning(s)
            1 Error(s)

    The entire msbuild output from the build server is here: http://pastie.org/3660842

    What does the error generally refer to that would cause it to build locally but not on the build server?

    UPDATE: I have just run msbuild /version on both machines, and it turns out the .NET framework versions are very slightly different. The local machine is 4.0.30319.488 and the build server is 4.0.30319.1. I'm about to run Windows Update on the server to allow it to install some updates, as several seem to be .NET framework-related, so I'll see if that makes a difference.

    UPDATE: Installing the updates didn't help. Just remembered I copied up csc.exe from the async preview a little while ago in order to facilitate async compilation (the actual async preview had failed to install on the server due to Visual Studio not being there, but installing visual studio team viewer seems to have fixed that), so I've just run the proper async CTP3 installer to see if that makes a difference.

  • Maven Eclipse plugin and classpath issues in Eclipse

    - by subferno
    I am using Eclipse 3.5.1, Maven 2.0.9 and Java 1.4, with the Maven Eclipse plugin 2.7 and WTP 2.0 (not m2eclipse). I have a flat multi-module project that is set up as follows: a Parent module with the parent POM, and modules A and B which depend on module C. Importing the four modules for the first time shows compile errors, as expected, since I have not yet run the Eclipse plugin. With my local repository empty, running the plugin's eclipse:clean and eclipse:eclipse goals resolves all compile errors and dependencies within my local workspace. If I then make some minor code changes to module B and run the Eclipse plugin again, compile errors show up in modules A and B: errors about classes that cannot be found. It's like module C is no longer in the classpath for A and B to see. I looked at the .classpath file and it's definitely pointing at the right modules in the Eclipse workspace. If I delete the Maven repository and run eclipse:clean again, the compile errors about unresolved classes are fixed. Also, if I run eclipse:clean with the useProjectReferences flag set to false and then rerun it with true, Eclipse rebuilds my workspace and the errors go away. What's going on?

  • Compile MonoDevelop 4.2.3

    - by user2942643
    I need help; I'm trying to compile the MonoDevelop code, but when I run ./configure it tells me that I need a version of Mono installed, even though I have one:

        [raven@localhost ~]$ mono -V
        Mono JIT compiler version 3.2.8 (tarball Fri May 30 08:15:47 CDT 2014)
        Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com
                TLS:           __thread
                SIGSEGV:       altstack
                Notifications: epoll
                Architecture:  amd64
                Disabled:      none
                Misc:          softdebug
                LLVM:          supported, not enabled.
                GC:            sgen
        [raven@localhost ~]$ cd /home/raven/Downloads/monodevelop-4.2.3
        [raven@localhost monodevelop-4.2.3]$ ./configure
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
        checking for gawk... gawk
        checking whether make sets $(MAKE)... yes
        checking how to create a ustar tar archive... gnutar
        checking whether to enable maintainer-specific portions of Makefiles... no
        checking for mono... /usr/local/bin/mono
        checking for gmcs... /usr/local/bin/gmcs
        checking for pkg-config... /usr/bin/pkg-config
        configure: error: You need mono 3.0.4 or newer
        [raven@localhost monodevelop-4.2.3]$

  • custom view on iPhone's native media player (MPMoviePlayerController)

    - by sneha
    I am building an application that implements a custom view on top of the iPhone's native media player, and I would like help deciding how to approach this. So far I have found that the iPhone SDK doesn't provide APIs to customize the media player. I need these things in the player:

    - Custom views: I want to change all the control buttons on the player (Play/Pause, seek bar, etc.). The background of the player will also need to be different.
    - The player has to play an audio or video file from a local/remote location.

    Can I use MPMoviePlayerController if it can be customized (and how would I do it)? Any other third-party player approved by Apple which has the ability to download and play a media file from a local/remote location would also be fine. It would also be great to have access to the media player's buffer so that it can be encrypted.

    I have the following questions:

    1. Any help in building/customizing the player?
    2. Do you see issues in signing of the application?
    3. Does Apple have any restrictions on customizing the media player?
    4. Any sample iPhone application where the media player is customized?

    Any help in this regard is highly appreciated.

  • Get coordinates of google maps markers

    - by App_beginner
    I am creating a database containing the names and coordinates of all bus stops in my local area. I have all the names stored in my database, and now I need to add the coordinates. I am trying to get these off a website that shows them all as placemarks on a Google map. It seems to me like they are being generated by a local server and then added to the map; however, I am unable to find exactly where the server is queried for the coordinates. I am hoping to collect these coordinates with a screen scraper, but unless I can find where in the source code the coordinates are created, this seems to be impossible. I can of course look up all these placemarks manually, but that would be very time-consuming, so I am hoping that someone in here can help me. This is the website I am trying to scrape (the placemarks are marked by the blue bus sign): http://reiseplanlegger.skyss.no/scripts/travelmagic/TravelMagicWE.dll/?from=Brimnes+ferjekai+%28Eidfjord%29&to= You can also get the coordinates of a single placemark by writing the name of a stop in the search field and pushing the "Vis i kart" button. I hope someone can help me with this.
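
    In case it helps a future reader: the usual first step is to watch the browser's network traffic (e.g. with Firebug's Net panel) to find the request that returns the placemark data, then replay that request from the scraper. A rough C# sketch of such a replay follows, where the query URL and the coordinate pattern are pure placeholders; the real endpoint and response format have to be discovered from the site itself:

        // All names and patterns here are hypothetical; substitute what the
        // browser's network inspector actually shows for the placemark request.
        using System;
        using System.Net;
        using System.Text.RegularExpressions;

        class StopScraper
        {
            static void Main()
            {
                var client = new WebClient();
                string page = client.DownloadString(
                    "http://example.invalid/placemarks?stop=Brimnes+ferjekai"); // placeholder URL
                // Placeholder pattern: a "lat,lng" pair of decimals somewhere in the response.
                foreach (Match m in Regex.Matches(page, @"(\d+\.\d+)\s*,\s*(\d+\.\d+)"))
                    Console.WriteLine("lat={0} lng={1}", m.Groups[1].Value, m.Groups[2].Value);
            }
        }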

  • WinForms Load Event / Static Initialization Strangeness

    - by Eric J.
    Background

    I'm troubleshooting a WinForms 2.0 program that's already been burned to CD for distribution to an internet-challenged target audience. Some users are experiencing a fatal error that I can reproduce locally.

    Reproducing the Error

    I get the fatal error when I log into my Vista box as a standard user that I just created, even if I run the program as administrator. I do not get the fatal error when I log in as local administrator. I'm not sure that being administrator is necessarily the trigger (since runas did not help). I have reproduced this half a dozen times under each account with consistent results.

    The faulty code

    Base.cs (base class for several user controls, only one of which is shown on the first screen):

        private void BaseWindow_Load(object sender, EventArgs e)
        {
            // This message shown once in both cases
            MessageBox.Show("BaseWindow_Load for " + this.GetType().FullName);
            SkinManager.ApplySkin(this);
        }

    SkinManager.cs:

        private static Skin skin = null;

        public static void ApplySkin(UserControl applyTo)
        {
            if (skin == null)
            {
                skin = new Skin(SkinsDirectory, "Default");
            }
        }

    Skin.cs:

        internal Skin(string skinPath, string skinName)
        {
            config = SkinConfig.Load(path);
        }

    SkinConfig.cs:

        public static SkinConfig Load(string path)
        {
            // This message shown only once running as Admin but twice running as standard user
            System.Windows.Forms.MessageBox.Show("@1");
            // !!! LOCK path HERE !!!
        }

    A user control loads on the first form, which triggers a call to SkinManager.ApplySkin, which checks whether skin is null and, if so, assigns it (without thread synchronization or recursion protection), which ultimately causes a file to be opened. When logged in as local admin, that sequence completes just fine. When logged in as my test standard user, ApplySkin is always called a second time while skin is still null, causing a second attempt to load, which locks the file on the second attempt. The error handling is draconian at this point, and the program terminates.

    The Question

    While this code can be easily fixed, I would like to understand why the error is happening only in some cases.
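
    However the second call arrives, a guard along these lines makes the initialization safe against both a second thread and same-thread re-entrancy. This is a sketch of one possible hardening of ApplySkin, not an explanation of the observed difference between accounts:

        private static Skin skin = null;
        private static readonly object skinLock = new object();
        private static bool loading = false;   // guards same-thread re-entrancy

        public static void ApplySkin(UserControl applyTo)
        {
            lock (skinLock)                    // guards cross-thread races
            {
                // Monitor locks are re-entrant on the same thread, so a re-entrant
                // call (e.g. pumped while a message box is up) gets past the lock;
                // the flag turns it into a no-op instead of a second file open.
                if (skin == null && !loading)
                {
                    loading = true;
                    try { skin = new Skin(SkinsDirectory, "Default"); }
                    finally { loading = false; }
                }
            }
            // ... apply 'skin' to applyTo here ...
        }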

  • Using IsolatedStorage on an IIS server

    - by JoeBilly
    I'm a bit confused about the use of Isolated Storage on an IIS server. I understand the goal of Isolated Storage: it provides a safe place to store data with no worry about how and where that place is. Since Isolated Storage has a by-user and by-assembly approach, I'm not too wild about using it on an IIS server, where applications almost have their own identity. I haven't really seen the point of impersonating a web application, and I have almost never seen impersonated web applications myself, but this is my point of view. Using Isolated Storage on a server means:

    - Using isolated stores in \Documents and Settings\<user>\
    - Which means \Documents and Settings\Default User\ when the application pool is owned by Local System or Network Service, I guess
    - Which also means write rights on this folder for Local System or Network Service
    - Using impersonation

    Regarding a web application's logic, these ideas confuse me... Documents and Settings? Default User? Enable impersonation just for storage? No control over storage on the server? Uh? And so I face a dilemma: use System.IO.Packaging (with Isolated Storage inside) in web applications, or find an alternative? Am I wrong in my approach? Did I miss something? Any point of view is appreciated, and an explanation of the Isolated-Storage-with-IIS philosophy would be an answer. Thanks!
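
    One avenue worth checking (a sketch, not a recommendation; whether it suits System.IO.Packaging's internal use of isolated storage is exactly the open question here): .NET 2.0 added machine-scoped stores, which avoid the per-user profile directories the list above worries about:

        // Minimal sketch: a machine-scoped store is keyed to the assembly,
        // not to the identity of the worker process or an impersonated user.
        using System.IO;
        using System.IO.IsolatedStorage;

        using (IsolatedStorageFile store = IsolatedStorageFile.GetMachineStoreForAssembly())
        using (var stream = new IsolatedStorageFileStream("settings.txt", FileMode.Create, store))
        using (var writer = new StreamWriter(stream))
        {
            writer.WriteLine("written without touching Documents and Settings\\<user>");
        }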

  • Eclipse / Aptana File Sync Solutions

    - by Brad
    Our development team uses Eclipse + Aptana to do their web development work. Currently, most of them map their Eclipse projects directly to the web server. I'd rather have them create a local project and use that to sync to the web server directory they are working on. The issue is that there aren't any good solutions for this, which is just appalling given the popularity of the two tools.

    - The FileSync plugin for Eclipse is only one-way: if another developer makes a change to a file on the server, other devs aren't even notified and could overwrite the change.
    - The File Transfer option in Aptana 2.0 doesn't support any sort of sync, just manual uploading/downloading of files.
    - The Sync option in Aptana 1.5.1 doesn't allow you to merge files when they differ; you can only update one or the other. It does let you view a diff (but only if you right-click and select it), and in that diff you can't make any changes.

    I did find a way to have files uploaded to their sync repositories in Aptana using Eclipse Monkey; however, it doesn't work if a user saves multiple files at once ('Save All'; again, it doesn't work). Additionally, there is no notification if a user opens a local file that has an updated copy on the server. I tried to add one using Eclipse Monkey, but I couldn't find any sort of listener in the Eclipse API to do it, and Eclipse Monkey documentation is few and far between. My only solution at this point is to let them continue mapping directly to the server, or to ask them to do a manual download before they do any work (but again, what if someone uploads a change right after they do that?). Anyone have any ideas?

  • Magento install errors

    - by nXqd
    I'm trying to install the new Magento version 1.4.xx, but after the config step I get the following error (this happens after I copy local.example.xml to local.xml):

        Error in file: "E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Catalog\sql\catalog_setup\mysql4-install-1.4.0.0.0.php" - SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry '4-page_layout' for key 'entity_type_id'

        Trace:
        #0 E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Core\Model\Resource\Setup.php(374): Mage::exception('Mage_Core', 'Error in file: ...')
        #1 E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Core\Model\Resource\Setup.php(260): Mage_Core_Model_Resource_Setup->_modifyResourceDb('install', '', '1.4.0.0.21')
        #2 E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Core\Model\Resource\Setup.php(224): Mage_Core_Model_Resource_Setup->_installResourceDb('1.4.0.0.21')
        #3 E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Core\Model\Resource\Setup.php(153): Mage_Core_Model_Resource_Setup->applyUpdates()
        #4 E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Core\Model\App.php(363): Mage_Core_Model_Resource_Setup::applyAllUpdates()
        #5 E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Core\Model\App.php(295): Mage_Core_Model_App->_initModules()
        #6 E:\Soft\Programming\xampp\htdocs\magento\app\Mage.php(596): Mage_Core_Model_App->run(Array)
        #7 E:\Soft\Programming\xampp\htdocs\magento\index.php(78): Mage::run('', 'store')
        #8 {main}

    I really need your help, thanks so much :)

  • LogonUser -> CreateProcessAsUser from a system service

    - by Josh K
    I've read all the posts on Stack Overflow about CreateProcessAsUser and there are very few resolved questions, so I'm not holding my breath on this one. But it seems like I'm definitely missing something, so it might be easy. The target OS is Windows XP. I have a service running as "Local System" from which I want to create a process running as a different user. For that user, I have the username and password, so LogonUser goes fine and I get a token for the user (in this case, an Administrator account.) I then try to use that token to call CreateProcessAsUser, but it fails because that token does not come with SeAssignPrimaryTokenPrivilege - however, it does have SeIncreaseQuotaPrivilege. (I used GetTokenInformation to dump all the privileges associated with that token.) According to the MSDN page for CreateProcessAsUser, you need both privileges in order to call CreateProcessAsUser successfully. It also says you don't need the SeAssignPrimaryTokenPrivilege if the token you pass in to CreateProcessAsUser() is a "restricted version of the calling process' primary token", which I can create with CreateRestrictedToken(), but then it will be associated with the Local System user and not the target user I'm trying to run the process as. So how would I create a logon token that is both a restricted version of the calling process' primary token, AND is associated with a different user? Thanks! Note that there is no need for user interaction in here - it's all unattended - so there's no need to do stuff like grabbing WINSTA0, etc.
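
    One detail worth double-checking, since the privilege dump described above was of the new user's token: as far as I know, the SeAssignPrimaryTokenPrivilege check is made against the calling process' own token (which LocalSystem normally holds by default), not against the token being passed in. A compact C# sketch of the usual LogonUser to CreateProcessAsUser sequence follows; the P/Invoke declarations are the standard ones, the user name, password and target executable are placeholders, and error handling is minimal:

        using System;
        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
        struct STARTUPINFO
        {
            public int cb;
            public string lpReserved, lpDesktop, lpTitle;
            public int dwX, dwY, dwXSize, dwYSize, dwXCountChars, dwYCountChars, dwFillAttribute, dwFlags;
            public short wShowWindow, cbReserved2;
            public IntPtr lpReserved2, hStdInput, hStdOutput, hStdError;
        }

        [StructLayout(LayoutKind.Sequential)]
        struct PROCESS_INFORMATION
        {
            public IntPtr hProcess, hThread;
            public int dwProcessId, dwThreadId;
        }

        class Launcher
        {
            [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Auto)]
            static extern bool LogonUser(string user, string domain, string password,
                int logonType, int logonProvider, out IntPtr token);

            [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Auto)]
            static extern bool CreateProcessAsUser(IntPtr token, string appName, string cmdLine,
                IntPtr procAttrs, IntPtr threadAttrs, bool inheritHandles, int creationFlags,
                IntPtr environment, string currentDir, ref STARTUPINFO si, out PROCESS_INFORMATION pi);

            const int LOGON32_LOGON_INTERACTIVE = 2;
            const int LOGON32_PROVIDER_DEFAULT = 0;

            static void Main()
            {
                IntPtr token;
                if (!LogonUser("someuser", ".", "password",
                        LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token))
                    throw new System.ComponentModel.Win32Exception();

                var si = new STARTUPINFO();
                si.cb = Marshal.SizeOf(typeof(STARTUPINFO));
                PROCESS_INFORMATION pi;
                // The SeAssignPrimaryTokenPrivilege check happens against *this*
                // process' token (LocalSystem), not against 'token'.
                if (!CreateProcessAsUser(token, null, @"C:\Windows\notepad.exe",
                        IntPtr.Zero, IntPtr.Zero, false, 0, IntPtr.Zero, null, ref si, out pi))
                    throw new System.ComponentModel.Win32Exception();
            }
        }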

  • Email test deferred (mail transport unavailable) with ClamAV

    - by dirt
    I'm trying to set up a simple new mail server. When I send a test email to the server, the email gets hung up during delivery (the user mapping is found) and never appears in /home/user/Maildir/new. Here is my maillog after a fresh reboot and test email; there are a few warnings I am unfamiliar with. Can you please point me in the right direction?

        Oct 25 14:54:57 loki dovecot: master: Dovecot v2.0.9 starting up (core dumps disabled)
        Oct 25 14:54:58 loki postfix/postfix-script[1369]: starting the Postfix mail system
        Oct 25 14:54:58 loki postfix/master[1370]: daemon started -- version 2.6.6, configuration /etc/postfix
        Oct 25 14:56:00 loki postfix/tlsmgr[1457]: warning: request to update table btree:/etc/postfix/smtpd_scache in non-postfix directory /etc/postfix
        Oct 25 14:56:00 loki postfix/tlsmgr[1457]: warning: redirecting the request to postfix-owned data_directory /var/lib/postfix
        Oct 25 14:56:00 loki postfix/smtpd[1455]: connect from mail-ob0-f180.google.com[209.85.214.180]
        Oct 25 14:56:01 loki postfix/smtpd[1455]: 1CF5E20A8B: client=mail-ob0-f180.google.com[209.85.214.180]
        Oct 25 14:56:01 loki postfix/cleanup[1461]: 1CF5E20A8B: message-id=
        Oct 25 14:56:01 loki postfix/qmgr[1379]: 1CF5E20A8B: from=, size=1788, nrcpt=1 (queue active)
        Oct 25 14:56:01 loki postfix/qmgr[1379]: warning: connect to transport private/scan: No such file or directory
        Oct 25 14:56:01 loki postfix/error[1462]: 1CF5E20A8B: to=, orig_to=, relay=none, delay=0.18, delays=0.15/0.02/0/0.01, dsn=4.3.0, status=deferred (mail transport unavailable)
        Oct 25 14:56:01 loki postfix/smtpd[1455]: disconnect from mail-ob0-f180.google.com[209.85.214.180]

    master.cf snippets:

        # ==========================================================================
        # service type  private unpriv  chroot  wakeup  maxproc command + args
        #               (yes)   (yes)   (yes)   (never) (100)
        # ==========================================================================
        smtp       inet  n       -       n       -       -       smtpd
        submission inet  n       -       n       -       -       smtpd
          -o smtpd_tls_security_level=encrypt
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        smtps      inet  n       -       n       -       -       smtpd
          -o smtpd_tls_wrappermode=yes
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        scan       unix  -       -       n       -       16      smtp
          -o smtp_data_done_timeout=1200
          -o smtp_send_xforward_command=yes
          -o disable_dns_lookups=yes
        127.0.0.1:10026 inet n   -       n       -       16      smtpd
          -o content_filter=
          -o local_recipient_maps=
          -o relay_recipient_maps=
          -o smtpd_restriction_classes=
          -o smtpd_client_restrictions=
          -o smtpd_helo_restrictions=
          -o smtpd_sender_restrictions=
          -o smtpd_recipient_restrictions=permit_mynetworks,reject
          -o mynetworks_style=host
          -o smtpd_authorized_xforward_hosts=127.0.0.0/8

  • Applications on the Web/Cloud: the way to go over Desktop apps?

    - by jiewmeng
    I am currently mainly a web developer, but I'm quite attracted to the performance and great OS integration (e.g. Windows 7 jump lists, taskbar thumbnails, etc.) that something like WPF/C# can provide to the user, improving workflow and productivity. Privacy and performance seem like major downsides of web/cloud apps compared to desktop apps.

    Applications on the cloud/web:
    - work on the go, given the increased popularity of smartphones/netbooks
    - the majority of users may not benefit as much from the increased performance of desktop apps (e.g. internet surfing, word processing); they probably benefit more from decreased startup times, lower costs, and data in the cloud

    Desktop applications:
    - increased performance benefits power users, e.g. 3D rendering, HD video/photo editing, gamers (though I wonder if such processing may be offloaded to cloud processing)
    - integration with the OS increases productivity (maybe such features can be adapted to a web version? maybe with a local desktop app that works with a web-app API?)
    - more control over privacy (maybe fixed by encryption?)
    - local data access (especially large files) is guaranteed and fast (though YouTube HD is fast enough most of the time)
    - work is not affected by intermittent/slow/unavailable internet connections (I know this is changing, though)

    What do you think?

  • Overriding routes on OpenVPN client, iproute2, iptables

    - by sarvavijJana
    I am looking for a way to route packets to either my regular internet connection or an established OpenVPN tunnel based on their destination ports. This is my configuration:

    - OpenVPN server (I have no control over it)
    - OpenVPN client running Ubuntu
    - wlan0: 192.168.1.111, the internet-connected interface

    Several routes are applied on connection to OpenVPN from the server:

        /sbin/route add -net 207.126.92.3 netmask 255.255.255.255 gw 192.168.1.1
        /sbin/route add -net 0.0.0.0 netmask 128.0.0.0 gw 5.5.0.1
        /sbin/route add -net 128.0.0.0 netmask 128.0.0.0 gw 5.5.0.1

    I need to route packets according to their destination ports, for example 80 and 443 into the VPN, and everything else directly to the ISP connection via 192.168.1.1. What I have used during my attempts:

        iptables -A OUTPUT -t mangle -p tcp -m multiport ! --dports 80,443 -j MARK --set-xmark 0x1/0xffffffff
        ip rule add fwmark 0x1 table 100
        ip route add default via 192.168.1.1 table 100

    I was trying to apply these settings using the up/down options of the OpenVPN client configuration. All my attempts resulted in successful packet delivery and response only via the VPN tunnel. For packets routed to bypass the VPN, I used SNAT to get a proper source address:

        iptables -A POSTROUTING -t nat -o $IF -p tcp -m multiport --dports 80,443 -j SNAT --to $IF_IP

    but they failed at the SYN-ACK stage:

        "70","192.168.1.111","X.X.X.X","TCP","34314 > 81 [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=18664016 TSER=0 WS=7"
        "71","X.X.X.X","192.168.1.111","TCP","81 > 34314 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1428 TSV=531584430 TSER=18654692 WS=5"
        "72","X.X.X.X","192.168.1.111","TCP","81 > 34314 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1428 TSV=531584779 TSER=18654692 WS=5"
        "73","192.168.1.111","X.X.X.X","TCP","34343 > 81 [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=18673732 TSER=0 WS=7"

    I hope someone has already overcome such a situation or knows a better approach to fulfill these requirements. Please kindly give me good advice or a working solution.
