
  • Validating Time & Date To Be At Least A Certain Amount Of Time In The Future

    - by MJH
    I've built a reservation form for a taxi company which works fine, but I'm having an issue with users making reservations that are due too soon in the future. Since the entire form is kind of long, I first want to make sure the user is not trying to make a reservation for less than an hour ahead of time, without them having to fill out the whole form. This is what I have come up with so far, but it's just not working:

        <?php
        // Set local time zone.
        date_default_timezone_set('America/New_York');
        // Get current date and time.
        $current_time = date('Y-m-d H:i:s');
        // Set reservation time variable.
        $res_datetime = $_POST['res_datetime'];
        // Set event time.
        $event_time = strtotime($res_datetime);
        ?>
        <!doctype html>
        <html>
        <head>
        <meta charset="utf-8">
        <title>Check Date and Time</title>
        </head>
        <?php
        // Check to be sure reservation time is at least one hour in the future.
        if (($current_time - $event_time) <= (3600)) {
            echo "You must make a reservation at least one hour ahead of time.";
        }
        ?>
        <form name="datetime" action="" method="post">
        <input name="res_datetime" type="datetime-local" id="res_datetime">
        <input type="submit">
        </form>
        <body>
        </body>
        </html>

    How can I create a validation check to make sure the date and time of the reservation is at least one hour ahead of time?
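
    One likely culprit: the comparison mixes a formatted date string ($current_time) with a Unix timestamp ($event_time). A minimal sketch of the check using two timestamps instead (not the original author's final code; the echoed messages are illustrative):

        <?php
        date_default_timezone_set('America/New_York');

        if (isset($_POST['res_datetime'])) {
            // strtotime() returns false when the value cannot be parsed.
            $event_time = strtotime($_POST['res_datetime']);
            if ($event_time === false) {
                echo "Please enter a valid date and time.";
            } elseif ($event_time - time() < 3600) {
                echo "You must make a reservation at least one hour ahead of time.";
            } else {
                echo "Reservation time accepted.";
            }
        }
        ?>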


  • Unable to Uninstall Exchange 2010 ("Internet Newsgroups" public folder)

    - by helplessITguy
    I am trying to uninstall Exchange 2010 before installing a new instance of Exchange 2010 SP1 on a different server. (Our production Exchange server is 2003.) We have met all of the Mailbox uninstall prerequisites except for the following:

        Error: Uninstall cannot continue. Database 'Public Folder Database 1579722947': The public folder database "Public Folder Database 1579722947" contains folder replicas. Before deleting the public folder database, remove the folders or move the replicas to another public folder database. For detailed instructions about how to remove a public folder database, see http://go.microsoft.com/fwlink/?linkid=81409&clcid=0x409. Recommended Action:

    We have been able to delete all public folders in the 2010 storage group except for the one (previously replicated) folder, "Internet Newsgroups". How can I delete this folder without impacting public folders on the production Exchange 2003 server? We have:

    - verified permissions to the public folder
    - removed replication for the folder (on the Exch 2010 server)
    - tried the PowerShell scripts:

        RemoveReplicaFromPFRecursive
        Get-PublicFolder -Server "\" -Recurse -ResultSize:Unlimited |
            Remove-PublicFolder -Server -Recurse -ErrorAction:SilentlyContinue
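
    One hedged possibility, sketched below with a placeholder server name ("EX2010" is not from the original post): target the stubborn folder directly and remove it recursively against the 2010 server only, leaving the Exchange 2003 replicas untouched:

        Get-PublicFolder "\Internet Newsgroups" -Server "EX2010" -Recurse -ResultSize Unlimited |
            Remove-PublicFolder -Server "EX2010" -Recurse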


  • How to set default permissions for automounted FAT drives in Ubuntu

    - by piman
    I've got many FAT32 drives that I'd like to mount in Ubuntu such that they have permission mode 700 for directories and 600 for all other files. By default, they get 755 for all files, which is not particularly useful, since almost no non-directories should be executable, and it screws up version control repos hosted on the drives. "Back in the day" I would have listed the drives in /etc/fstab with the umask/dmask I want, and there was no such thing as a default. These days, drives automount under their volume names, which is great, except that now I have no idea how to set the default. I have tried changing the /system/storage/default_options/vfat/mount_options gconf key with no apparent effect. It was 077 initially, but the mounted drive reflected a default of 022; changing it and re-inserting the drives resulted in the files still having permission bits of 755.
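
    For comparison, a sketch of the old-style static approach (device name, mount point, and uid are placeholders, not from the original): an /etc/fstab entry using fmask/dmask so directories come out 700 and everything else 600:

        # /etc/fstab entry; device, mount point and uid are assumptions
        /dev/sdb1  /media/usbdisk  vfat  user,uid=1000,fmask=0177,dmask=0077  0  0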


  • Which software to use for RAMDISK on Windows 2008?

    - by Tony_Henrich
    I am building a server machine with lots of RAM, at least 16 GB. I am planning to keep my frequently read and written data in RAM, so I am looking for software for creating RAM disks. This is for Windows Server 2008 R2 Standard 64-bit. Any recommendations? I would like one where I can flush the disk image to persistent storage on demand, for example when Windows shuts down. (I am aware of all the consequences of data loss when power is lost.)


  • Rails runner command not saving to cache

    - by mark
    Hi, I'm having a bit of a problem with a cron task generated by the Rails whenever plugin; it should store remote data in the Rails cache for display. What I have is this schedule.rb:

        set :path, '/var/www/apps/tuexplore/current'
        every 1.hour do
          runner "Weather.cache_remote", :environment => :production
        end

    which calls this model:

        class Weather
          def self.cache_remote
            Rails.cache.write('weather_data',
              Net::HTTP.get_response(URI.parse(WEATHER_URL)).body)
          end
        end

    Calling whenever returns this:

        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/deploy/.gem/ruby/1.8/bin
        0 * * * * /var/www/apps/tuexplore/current/script/runner -e production "Weather.cache_remote"

    This doesn't work. Calling the Weather model method from a controller works fine, but I need to schedule it hourly. The cron task causes a "Cache write: weather_data" entry to appear in the production log, but the data isn't stored or output into the page. Additional information: I can log into the production console, run Weather.cache_remote, and then read the data from the Rails cache. I'd be really appreciative if someone could point out the error of my ways. If further explanation is needed, please ask. Thanks in advance for any pointers.
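
    One thing worth checking (an assumption, since the cache store isn't shown above): Rails' default memory store is per-process, so a value written by the cron-spawned runner process would never be visible to the web processes, even though the "Cache write" log line appears. A store shared across processes avoids this; a sketch for config/environments/production.rb, with the path as a placeholder:

        # any cross-process store (file, memcached) would do here
        config.cache_store = :file_store, "/var/www/apps/tuexplore/shared/cache"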


  • Why Did My Hi-Tech Chemical Compound Lose Its Effectiveness?

    - by Boris_yo
    I bought this thing a month ago. It is by far the best dust cleaner I have ever used, and all was well until I left it out of its package for half a day. Then it became stiff and could be torn apart, less sticky and flexible, which I videoed here. Can this item be restored to the fully functioning state it was in before? Maybe it just dried out and I should put it in a place with moisture? Since everything was in Chinese, I also do not know the storage conditions that must be met.


  • RAID-0 with two SSD drives only: should I use on-board/software RAID or a RAID card/controller?

    - by Wes
    I am looking at going with a two-drive-only SSD RAID-0 configuration, and was wondering if I would get better performance/speed from the use of a RAID controller card versus just using the software RAID on my motherboard. I have heard conflicting reports. Again, I only plan on running two SSD drives in a RAID-0 config. I have no problem spending the extra money for a good controller, but only if I am going to benefit performance-wise; otherwise, if there is no notable gain, I will just use the software RAID that my HP-180-T came with (Intel 3.33 GHz, 6-core, 12 GB of DDR3). I have a huge external drive for all storage and am not concerned about data loss, just looking for pure speed. And if a controller will benefit my performance, what type of card would one suggest?


  • Completely replacing (upgrading) a RAID 5 array of disks on an ESXi server

    - by jshin47
    I have a development server that runs several VMs on ESXi 5. It has an array of disks in a RAID 5 configuration, where all of the disks are currently the same size. I would like to greatly expand storage on this box, but I am not sure what the smartest way to go about it would be. My current plan is to:

    1. Turn off all VMs
    2. Copy the VM folders from the server to another location
    3. Verify that I can mount all the VMs in the new location (i.e. that the copy went OK)
    4. Replace all the disks with new, bigger ones
    5. Reinstall ESXi 5
    6. Copy the VMs back over

    This seems like it might take a while to accomplish and is not terribly slick, especially since I will have to reconfigure ESXi 5, but is there a smarter alternative?


  • Any way I can trick Carbonite into backing up an external hard drive?

    - by Brian
    I use Carbonite to back up my PC (Windows XP). We were running low on disk space on our home PC (down to 15 GB), so I went out and purchased an external hard drive. However, Carbonite will not back it up. I just want the external drive to be extra disk space. From their FAQ:

        The current version of Carbonite backs up only the files that reside on permanent hard drives on your computer. It will not back up network drives, external drives, and NAS (network attached storage) drives. If there are files on a remote drive that you wish to include in your Carbonite backup, you should copy the files to a folder on your local hard drive. If the files are on a shared network drive, you could install Carbonite on the computer on which the network shared drive physically exists, and back the files up directly from that computer. Check back soon for a Carbonite service plan that will allow you to back up your external drives.


  • lightweight/portable VCS for server-hopping DBA?

    - by Aaron
    I'm looking for a VCS that'll help me keep all of my work scripts in sync. Requirements:

    - Portable (as in flash drive, not code-level)
    - Runs on Windows XP and Server 2003+
    - No installation dependencies (Cygwin, perl, Python)

    I use Mercurial on my work machine for version control of the various T-SQL, ksh, perl, and CMD/BAT scripts that I maintain as an MS SQL Server DBA and Unix sysadmin. So far, hg has worked for my AIX boxes: I mount my home directory as I log in, and deal with the repo as if it were local. I haven't been able to find a similar solution for the Windows machines I use. On most of them I do not have Local Admin rights; even if I did, I'd rather not install (and maintain) Python + Mercurial on all of them. I can't get to my home directory on them remotely, which leaves a client running on each machine as the only option. Bonus points for an answer that would let me use a single repo for both the Windows and Unix machines. :) I'm running WinXP, with heavy use of Cygwin and a CrunchBang VM.


  • Using MongoDB + Redis + Apache on the same server in production?

    - by Dayson
    I intend to launch my web app on an 8 GB VPS. It uses MongoDB + Redis for storage/caching and Apache + PHP-FPM for serving requests. Could there be any issues with running Mongo + Redis + Apache on the same server? Would it make more sense to set up 2 x 4 GB VPS servers and keep Mongo on one and Redis + Apache on the other? Should I just start with one server and worry about scaling horizontally later, by dedicating the existing server to Mongo in the future (due to its large RAM) and moving the web servers onto multiple smaller VPSes?


  • What is the specific advantage of a blade server for virtualisation?

    - by ChrisZZ
    We are planning to implement a VDI solution. We had some discussions about blade vs. rack servers. As we are only planning to support 75-100 clients, we calculated that we would need 2 servers with dual 8-core processors, plus a shared storage server. This calculation is based on a paper by Oracle that suggests 12 active virtual machines per core. Now, for buying two servers, a blade does not scale financially. But the blade has some other advantages: a) the interconnect between the blades is super-fast; b) I/O virtualisation. Are there other advantages that we should consider that would make up for the price, and are these advantages so important that we should think about investing in the blade?


  • Storing date/times as UTC in database

    - by James
    I am storing date/times in the database as UTC and converting them inside my application back to local time based on the specific timezone. Say, for example, I have the following date/time:

        01/04/2010 00:00

    Say it is for a country, e.g. the UK, which observes DST (Daylight Saving Time), and at this particular time we are in daylight saving. When I convert this date to UTC and store it in the database, it is actually stored as:

        31/03/2010 23:00

    as the date is adjusted -1 hour for DST. This works fine when you are observing DST at the time of submission. However, what happens when the clocks are adjusted back? When I pull that date from the database and convert it to local time, that particular datetime would be seen as 31/03/2010 23:00, when in reality it was processed as 01/04/2010 00:00. Correct me if I am wrong, but isn't this a bit of a flaw when storing times as UTC?

    Example of timezone conversion: basically, what I am doing is storing the date/times of when information is submitted to my system, in order to allow users to run a range report. Here is how I am storing the date/times:

        public DateTime LocalDateTime(string timeZoneId)
        {
            var tzi = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);
            return TimeZoneInfo.ConvertTimeFromUtc(DateTime.UtcNow, tzi).ToLocalTime();
        }

    Storing as UTC:

        var localDateTime = LocalDateTime("AUS Eastern Standard Time");
        WriteToDB(localDateTime.ToUniversalTime());
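
    One detail worth flagging in the snippet above: TimeZoneInfo.ConvertTimeFromUtc generally returns a DateTime whose Kind is Unspecified, and calling ToLocalTime() on such a value treats it as UTC and converts it a second time, using the server's own time zone. A sketch without the extra conversion (an illustration, not the original author's final code):

        // Convert once; don't call ToLocalTime() on the result.
        public static DateTime LocalDateTime(string timeZoneId)
        {
            var tzi = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);
            return TimeZoneInfo.ConvertTimeFromUtc(DateTime.UtcNow, tzi);
        }

    And for storage, writing DateTime.UtcNow to the database directly avoids the local/UTC round-trip altogether.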


  • Is this a valid backup strategy for MongoDB?

    - by James Simpson
    I've got a single dedicated server with a MongoDB database of around 10 GB. I need to do daily backups, but I can't have downtime on the database. Is it possible to use a replica set on a single disk (with 2 instances of mongod running on different ports), and simply take the secondary one offline and back the data files up to offsite storage such as S3 (journaling is turned on)? Or would using master/slave be better than a replica set? Is this viable, and if so, what potential problems could I have? If not, how should I conceptualize this to make it work?
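
    If the approach is viable, the mechanics might look like this rough sketch (paths, port, replica set name, and the S3 tool are all assumptions):

        # shut the secondary down cleanly, archive its files, restart it
        mongod --shutdown --dbpath /data/secondary
        tar czf mongo-backup-$(date +%F).tar.gz /data/secondary
        s3cmd put mongo-backup-$(date +%F).tar.gz s3://example-backup-bucket/
        mongod --port 27018 --dbpath /data/secondary --replSet rs0 \
            --journal --fork --logpath /var/log/mongod-secondary.log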


  • Parallel.ForEach loop creating multiple db connections throws connection errors?

    - by shawn.mek
    Login failed. The login is from an untrusted domain and cannot be used with Windows authentication.

    I wanted to get my code running in parallel, so I changed my foreach loop to a Parallel.ForEach loop. It seemed simple enough. Each iteration connects to the database, looks up some stuff, performs some logic, adds some stuff, and closes the connection. But I get the above error. I'm using my local SQL Server and Entity Framework (each iteration uses its own context). Is there some problem with connecting multiple times using the same local login or something? How do I get around this? Before trying to convert to a Parallel.ForEach loop, I had split my list of objects into four groups (separate csv files) and run four concurrent instances of my program (which ran faster overall than just one, hence the idea to go parallel). So it seems connecting to the db shouldn't be a problem? Any ideas?

    EDIT: Here's before:

        var gtgGenerator = new CustomGtgGenerator();
        var connectionString = ConfigurationManager.ConnectionStrings["BioEntities"].ConnectionString;
        var allAccessionsFromObs = _GetAccessionListFromDataFiles(collectionId);
        foreach (var cloneIdAndAccessions in allAccessionsFromObs)
            DoWork(gtgGenerator, taxonId, organismId, cloneIdAndAccessions, connectionString);

    and after:

        var gtgGenerator = new CustomGtgGenerator();
        var connectionString = ConfigurationManager.ConnectionStrings["BioEntities"].ConnectionString;
        var allAccessionsFromObs = _GetAccessionListFromDataFiles(collectionId);
        Parallel.ForEach(allAccessionsFromObs, cloneIdAndAccessions =>
            DoWork(gtgGenerator, taxonId, organismId, cloneIdAndAccessions, connectionString));

    Inside DoWork I use the BioEntities context:

        using (var bioEntities = new BioEntities(connectionString)) {...}


  • Does ZFS cache Compressed or Uncompressed data in a ZFS file-system with compression turned on?

    - by George Bailey
    ZFS supports file-system compression, and it also caches frequently or recently accessed data. If a system has lots of CPU but the underlying data storage is slow, it is possible that ZFS would perform better with compression turned on. This can easily be tested when writing files, by measuring CPU and disk usage and throughput (of course some latency may exist, but this would not be an issue for large files). But what about the cache? If data has to be decompressed every time it is read, then this is probably less of a good idea. Is the cached data compressed? Does anybody have some information on this?


  • How to configure HA iSCSI for Solaris 10

    - by Noah
    BACKGROUND: We have a StarWind NAS that we are currently using for high-availability storage on our Windows network. StarWind has mirrored drives and multiple IP paths, which the Windows server combines into one HA disk store. QUESTION: How do I accomplish the same thing under Solaris 10? I've looked at ZFS, but the documentation seems to indicate that ZFS wants to do its own RAID/mirroring. I can also attach via iSCSI from Solaris and am presented with both drives being served by the StarWind NAS. So, how do I configure Solaris so that disks M1 and M2 are treated as a single fault-tolerant drive?
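
    Letting ZFS do the mirroring over the two iSCSI LUNs may in fact be the direct equivalent of the Windows setup. A sketch (IQNs, IP addresses, and device names are placeholders):

        # make both StarWind targets visible to Solaris
        iscsiadm add static-config iqn.2008-08.com.starwindsoftware:m1,192.168.1.10
        iscsiadm add static-config iqn.2008-08.com.starwindsoftware:m2,192.168.1.11
        iscsiadm modify discovery --static enable
        # mirror the two LUNs into a single fault-tolerant pool
        zpool create hapool mirror c2t1d0 c3t1d0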


  • Running SQL 2008 on a VM

    - by chris.w.mclean
    We are pondering trying to set up a SQL 2008 instance inside a VM for a production environment. All our SQL instances use iSCSI over gigabit Ethernet to talk to a NAS, as would this new instance. Any reason this is a bad idea, or any considerations to make this work well? The VM would be running in Xen 5.5, or we could set it up in Hyper-V if there's a compelling case for that. And the VM's VHD would be stored on a different NAS than the one the SQL storage is on.


  • How to use symbolic links in Windows Server 2008 R2 across the network (mklink)

    - by server info
    I have one server (Srv1) which holds data on file shares, and its storage is full. Now I have a second server (Srv2) which has a lot more space. I would like to transfer all the data from Srv1 to Srv2 and leave links at the old locations pointing to the new destination. I found mklink very useful here, but unfortunately it does not work over the network, as the documentation points out. People rely heavily on the paths, so it would be helpful if someone has a pointer for me on how to handle symbolic links across the network with Windows servers. I am running Windows Server 2008. Thanks for any help.
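
    For what it's worth, a directory symbolic link can point at a UNC path; what usually blocks it is the remote-to-remote symlink evaluation policy, which is off by default. A sketch, with share and path names as placeholders:

        :: allow symlinks to be followed across machines
        fsutil behavior set SymlinkEvaluation L2L:1 L2R:1 R2L:1 R2R:1
        :: create the link on Srv1 pointing at the data moved to Srv2
        mklink /D D:\Shares\Data \\Srv2\Data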


  • glusterfs to replicate files to other servers

    - by sbrattla
    I've got multiple servers which all need to have the same content in /home. In other words, if the file /home/user1/test.txt is updated on server A, this needs to be replicated to all other servers in the cluster. Is it possible to use GlusterFS for this purpose? That is, let each server have a full copy of all data locally, which that server will be working on, and use GlusterFS solely to take care of replicating this data to the other servers? I'm not interested in combined storage; rather, I want all data on all machines, and GlusterFS only to replicate it to the other machines.
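
    That is essentially what a replicated (as opposed to distributed) GlusterFS volume does: with one brick per server and a replica count equal to the number of servers, every machine holds a full copy. A sketch with placeholder hostnames and brick paths:

        # one brick per server, replica count equal to the server count
        gluster volume create homevol replica 3 \
            serverA:/bricks/home serverB:/bricks/home serverC:/bricks/home
        gluster volume start homevol
        # each server then mounts the volume locally; reads are served
        # from the local brick where possible
        mount -t glusterfs localhost:/homevol /home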


  • What postgresql client version should I build against, if server is 8.x?

    - by Ben Voigt
    I'm planning updates to a system that is currently running with an 8.x server on Windows, an 8.x client on Windows, and an 8.x client on Linux. Obviously that seems like a bad choice of platform in a mixed environment, but the Linux machine has no persistent writable storage (as an anti-rootkit measure). I'm concerned with compatibility between versions right now. Can a Linux PostgreSQL 9.0.x client connect to a Windows 8.x server? The server is using some third-party binary extensions, so upgrading it is a more involved task and will be done later. If combining a 9.0.x client and an 8.x server is discouraged, would the latest 8.x clients still be able to connect if I did upgrade the server first? META: What tag is appropriate for backward-compatibility questions?


  • Is it possible to install and use a printer in Azure Worker Roles?

    - by lurkerbelow
    I'd like to know if it's possible to create and use a printer in Azure Worker Roles. I do know that I can install printers with basic batch commands, so I could define a batch script that would run as a startup task, something like:

        rundll32 printui.dll,PrintUIEntry /if /b "printer" /f %windir%\inf\ntprint.inf /r "file:" /m "printername"

    Question: Can I use a printer to print to file, maybe to local storage? I need to be able to print to file, or at least to have a printer installed, because I have to get the PCL output from different installed printers. Sadly, I cannot test it by myself; I do not have a credit card to join the 90-day trial.


  • OWA no longer accessing 1 backend exchange server

    - by Morchuboo
    We have IIS hosting OWA as the web frontend to 3 backend Exchange servers. Yesterday we got a lot of event 9791 warnings: "Cleanup of the DeliveredTo table for database 'Second Storage Group\Mailbox Store EUROPE 2' was pre-empted because the database engine's version store was growing too large. 0 entries were purged." At this point the server was crawling. Our mail admin is currently away and not contactable, so we rebooted the server. Everything seems OK when reading mail from Outlook and evolution-mapi clients, but OWA and ActiveSync connections cannot get through. When logging into OWA, users whose mailboxes are not on this backend server are fine, but users on this server can reach the OWA frontend, and once they submit their credentials the page returns a 503 Service Unavailable error. We have since restarted the affected Exchange server and the IIS server, as well as run iisreset /noforce, but the problem persists. Can anyone suggest what we should look at?


  • Lost in dates and timezones

    - by Sebastien
    I'm working on an application that stores conferences with their start and end dates. Up until now, I was developing in Belgium and my server is in France, so everything was in the same timezone: no problem. But today I'm in San Francisco, my server is in France, and I noticed I have a bug. I'm setting dates from a Flex client (ActionScript automatically adapts the date display to the client's local timezone, which is GMT-8 for me today). My server runs on Hibernate and MySQL in France (GMT+1). So when I look at my database using phpMyAdmin, I see a date set to "2010-06-07 00:00:01", but in my Flex client it displays "2010-06-06 15:00:01". Ultimately, what I want is for dates to be displayed in the local timezone of the event, which is the timezone I set them in. So when I'm in Belgium and I set the start date of an event to "2010-06-07 00:00:01", I want to retrieve it that way. But I'm lost as to what layer adapts what. Is timezone information stored in MySQL DATETIME columns (I can't see it in MySQL)? Does Hibernate do anything to it when it transfers it to a java.util.Date, which has timezone information? And ultimately, what is the best way to solve this mess?


  • Debian Raid5 LVM

    - by Develman
    I want to set up a Debian server mainly used as data storage. I have 4 devices:

    - /dev/sda (160 GB) - installed Debian on it
    - /dev/sdb, /dev/sdc, /dev/sdd (all 500 GB) - created a RAID5 array with them

    Now I am not sure how to go on. Is it useful to create an LVM on the RAID5 device /dev/md0? How can I do this? Is there a good HowTo? Or can I just create a filesystem on the RAID5 array and create different partitions?
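
    Layering LVM on top of /dev/md0 is a common choice, since it makes resizing and carving out separate volumes easy later. A minimal sketch (volume group name, logical volume name, and sizes are placeholders):

        pvcreate /dev/md0                  # mark the array as an LVM physical volume
        vgcreate datavg /dev/md0           # volume group spanning the array
        lvcreate -L 200G -n share datavg   # carve out a logical volume
        mkfs.ext3 /dev/datavg/share        # filesystem, then mount as usual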

