Search Results

Search found 116 results on 5 pages for 'zac altman'.

Page 2/5 | < Previous Page | 1 2 3 4 5  | Next Page >

  • How to connect a VM running on an ESXi host to that host via a VMKernel NIC?

    - by Zac B
    Say I have an ESXi (5.0) host that runs a Linux guest which hosts iSCSI targets containing the disk images for the other VMs that the same host will run. When it's used, I'll start the host first, then the iSCSI server, and then refresh all storage targets/HBAs in order to see the provided shares as online. I know it's a strange puzzle-box solution, but I was told to implement it. The ESXi host itself has a gigabit NIC which connects to the outside world. The guest OS (CentOS) supports VMXNET3, however, and if I can, I'd like to use its VMXNET3 NIC to serve iSCSI to the ESXi host. How should I go about doing this? I went to create a new virtual network and selected "VMkernel", since it suggested that type of network for SAN traffic, but it apparently isn't set up for "self-hosted" SAN hosts: the new network did not appear as an option to attach the CentOS box's VMXNET3 NIC to. How should I best connect an iSCSI host back to its "parent" ESXi host if I need (a) a 10 Gb connection and, optionally, (b) a VMkernel network for it?

    Read the article

  • Save workspace environment in Windows?

    - by zac
    Is there a way in Windows to save the layout of the workspace, or is there other software that will let me have multiple programs start up and open into windows at fixed resolutions/positions on my displays? I envision something like how Photoshop works, where you can save the position and type of the menus that are open as different workspaces.

    Read the article

  • Mac Mini drive problems but SMART verified: bad hard drive or controller?

    - by Zac Thompson
    I have a 3-year-old Intel Mac Mini at home. About a month ago, it stopped booting from the hard drive (internal, SATA, 80GB). I tried booting from the Install Disc to repair the filesystem but Disk Utility was unable to do so ("invalid node structure"). I was also unable to use the hard drive in the Terminal from the Install Disc or from an Ubuntu boot CD ("DRDY err"). I could see the contents of some directories, but others would give an error and I would get failures when trying to copy files. At this point I was sure the filesystem was hosed and I'd want to reformat at least. DiskWarrior was able to let me retrieve the data files I was interested in, which are now copied to an external hard drive, but it reported a high number of problems ("speed reduced by disk malfunction" count was over 2000) when in the process of trying to rebuild the directory for the drive. It also would not let me use the rebuilt directory to replace the one on the drive; it claimed the disk errors prevented recovery in this way. Under normal circumstances I would now assume that the drive itself was going bad: DiskWarrior's "disk malfunction" error above is supposed to imply hardware problems. My initial plan was to buy a replacement for the internal 2.5" drive. However: Disk Utility, command-line tools and DiskWarrior had reported all along that the SMART status of the drive was okay/Verified. So I'm now worried that the drive hardware is actually fine, and that the problems were due to a disk controller that has gone "bad" somehow. If this is the case, I'll probably just replace the whole computer. Any advice on how I can tell what is to blame? I don't have a lot of extra hardware sitting around, so I don't have the option of simply dropping the drive in another machine or popping another hard drive inside the Mini.

    Read the article

  • How to set up a centralized backup server with lots of offsite workstations, intermittent internet connectivity, and stubborn users?

    - by Zac B
    This might be an impossible question. Context: We have around 1,000 users with company computers; about 900 of them work, most of the time, out of a centralized office. Most of the computers are laptops that come on and off the network for hours at a time, and users often take them home and do a lot of work from there. In addition, a handful of users work elsewhere in the country and are offline (no internet connection whatsoever) for more than half of the time they use their machines. All of the machines run Windows 7 or XP.
    Problem: People are always losing data. One day someone accidentally deletes a bunch of files; the next day someone else installs a bad driver or messes with something in system32 and needs a personal-data backup and a reinstall of Windows. Because so many of our business operations happen without an internet connection, and computers come on- and offline so frequently, it isn't feasible to make users keep all of their data on network storage. We tried giving them Dropboxes, and they stored their files elsewhere. We bought and deployed Altiris, and they uninstalled it and blamed us when they couldn't get back files they accidentally deleted while offline, not having taken a backup in months. We tried teaching them backup best practices, and using scheduled sync tools to upload things to the network drives, and they turned the tools off because they "looked like viruses". It doesn't help that many of these users are pretty high up in the business and are not amenable to any sort of "you need to do something regularly because we say so" solution.
    Question: Other than finding another job where IT is treated differently and users are willing to follow best practices, how would people recommend I implement a file backup solution that supports the following?
      - Backs up to a centralized server over LAN or WAN whenever a network link becomes available, or on a schedule.
      - Supports interrupted/resumed backups (and hopefully file-delta-only backups), since connections to the network (WAN or LAN) are often slow and only open for half an hour or so.
      - Supports relatively rapid, "I accidentally deleted the TPS reports! Oh no!" single-file recovery, ideally administered from the central backup server rather than the client PC.
      - Supports local-to-local file-delta backup on a schedule, so that users without a network connection for a few days can still retrieve accidental deletions or whatnot. Ideally, the locally stored backups would be pushed up to the server whenever a network link is available.
      - Isn't configurable on the clients without certain credentials, because the CFOs (who won't give up their admin rights on the domain) will disable it if they can.
      - Backs up the entire hard drive. There are people who are self-righteous about storing things in C:\, or in the recycle bin, or in the C:\Windows dir (yes, I know).
    I'm fine integrating multiple products/solutions, or scripting different programs together myself (I'm a somewhat competent programmer), but I've been drawing a blank on where to start. Dropbox is folder-specific, Altiris doesn't cope with LAN outages or interrupted/resumed backups, and Volume Shadow Copy is great as a local-to-local solution, but I don't know how to push days of stored shadow copies up to a server in a two-hour window of network access. The company is fine with spending decent money on this: thousands (USD) on a server, and hundreds on clients, if necessary.
    I want to emphasize that this isn't a shopping-list request. While I wish there were a program out there that did what I want, I've looked pretty hard and not found anything that fits the bill. Instead, I'm hoping for ideas on where to start hacking things together from scratch or from different technologies to make something stable that works (a rough starting point is sketched below). Cheers!
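    As one possible starting point for the hack-it-together route (not something from the original post), here is a minimal Python sketch of a client-side job that waits until the backup server is reachable and then runs a restartable robocopy pass to a per-machine share. The server name, share layout, source folder, and polling interval are all made-up placeholders.

        # Sketch: "sync whenever the network shows up" client job built on robocopy.
        # backupsrv, the share layout, and C:\Users are placeholders; /Z makes the
        # copy restartable/resumable and /MIR mirrors the source tree.
        import os
        import socket
        import subprocess
        import time

        SERVER = "backupsrv"                              # placeholder backup server
        SHARE = r"\\backupsrv\backups\%COMPUTERNAME%"     # placeholder per-machine share
        SOURCE = r"C:\Users"                              # or whatever policy dictates

        def server_reachable(host, port=445, timeout=3):
            # True if the file-sharing port on the backup server answers.
            try:
                with socket.create_connection((host, port), timeout):
                    return True
            except OSError:
                return False

        while not server_reachable(SERVER):
            time.sleep(300)                               # re-check every five minutes

        destination = os.path.expandvars(SHARE)
        subprocess.run(["robocopy", SOURCE, destination,
                        "/MIR", "/Z", "/R:1", "/W:1", "/XJ",
                        "/LOG+:C:\\backup-sync.log"])

    Scheduled via Task Scheduler, something like this covers the "back up whenever a link appears" requirement; it would still need pairing with local shadow copies or similar for the fully offline recovery cases.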

    Read the article

  • CentOS disable filesystem check: superblock last mount time is in the future

    - by Zac B
    I'm persistently getting the "Superblock last mount time is in the future" error when booting CentOS 6. I've seen other questions which ask how to resolve this error, but I know exactly why it's occurring: our development/testing VMs regularly have their dates set to times far from the present and then have all of their filesystems remounted. What I want to know is: how do I disable all consistency checking of the superblock mount time in CentOS? I've tried tune2fs -i 0 <device> and setting buggy_init_scripts=1 in /etc/e2fsck.conf, and neither has worked; the problem persists.

    Read the article

  • Understanding the Linux Root

    - by Zac
    I've been using Linux (Ubuntu) for about two weeks now and am still struggling with some basic concepts surrounding the root user:
    (1) Some terminal operations (such as making subdirectories inside an FHS directory like /opt) require me to prefix the command with sudo. Why? What I'm choking on is: if I'm already logged in as a valid system user, why do I have to be a superuser/root in order to modify things that the sysadmin has already deemed me worthy of accessing?
    (2) Is there a GUI (GNOME, KDE) equivalent to sudo? Is there a way to assume a superuser role through a graphical context, rather than from inside a new shell?
    (3) I can't access the /root directory logged in as myself... but I installed the system to begin with and was never asked to create a root account! How do I log in as root and gain access to /root?
    Thanks for all feedback & input!

    Read the article

  • How can I force a merge of all WAL files in pg_xlog back into my base "data" directory?

    - by Zac B
    Question: Is there a way to tell Postgres (9.2) to "merge all WAL files in pg_xlog back into the non-WAL data files, and then delete all WAL files successfully merged"? I would like to be able to "force" this operation; i.e. checkpoint_segments or archiving settings should be ignored. The filesystem WAL buffer (pg_xlog) directory should be emptied, or nearly emptied. It's fine if some or all of the space consumed by the pg_xlog directory is then consumed by the data directory; our DBA has asked for database backups without any backlogged WALs, but space consumption is not a concern. Having near-zero WAL activity during this operation is a fine constraint. I can ensure that the database server is either shut down or not connectible (zero user-generated transaction load) during this process. Essentially, I'd like Postgres to ignore archiving/checkpoint retention policies temporarily and flush all WAL activity to the core database files, leaving pg_xlog in the same state as if the database were recently created, with very few WAL files.
    What I've Tried: I know that the pg_basebackup utility performs something like this (it generates an almost-all-WALs-merged copy of a Postgres instance's data directory), but we aren't ready to use it on all our systems yet, as we are still testing replication settings; I'm hoping for a more short-term solution. I've tried issuing CHECKPOINT commands, but they just recycle one WAL file and replace it with another (that is, if they do anything at all; if I issue them during database idle time, they do nothing). pg_switch_xlog() similarly just forces a switch to the next log segment; it doesn't flush all queued/buffered segments. I've also played with the pg_resetxlog utility. That utility sort of does what I want, but all of its usage docs seem to indicate that it destroys (rather than flushing out of the transaction log and into the main data files) some or all of the WAL data. Is that impression accurate? If not, can I use pg_resetxlog during a zero-WAL-activity period to force a flush of all queued WAL data to non-WAL data? If the answer to that is negative, how can I achieve this goal? Thanks!
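    For concreteness, here is a small Python sketch (my own illustration, not from the original question) of the CHECKPOINT / pg_switch_xlog() attempt described above, with a file count on pg_xlog afterwards to show that the directory does not actually shrink. The connection string and data-directory path are placeholders, and psycopg2 is assumed to be available.

        # Sketch: reproduce the "CHECKPOINT then pg_switch_xlog()" attempt from the
        # question and count what is left in pg_xlog. Paths/credentials are placeholders.
        import os
        import psycopg2

        PG_XLOG = "/var/lib/pgsql/9.2/data/pg_xlog"   # placeholder data directory

        conn = psycopg2.connect("dbname=postgres user=postgres")
        conn.autocommit = True                        # apply each statement immediately
        cur = conn.cursor()

        before = len(os.listdir(PG_XLOG))
        cur.execute("CHECKPOINT;")
        cur.execute("SELECT pg_switch_xlog();")       # 9.x name; renamed pg_switch_wal() in 10+
        after = len(os.listdir(PG_XLOG))

        print(f"pg_xlog entries before: {before}, after: {after}")
        conn.close()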

    Read the article

  • Is there a way to determine the original size or file count of a 7-zip archive?

    - by Zac B
    I know that when I compress an archive with the 7za utility, it gives me stats like the number of files processed and the number of bytes processed (the original size of the data). Is it possible, using the command line (on Linux) or some programming language, to determine:
      - the original size of an archive, before it was compressed?
      - the number of files/directories contained within an archive?
    The answer might be "no, just decompress the whole archive and do the counting/sizing then", but it would be useful to know if there is a faster, less space-greedy way.
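    One approach worth checking (a sketch of mine, not part of the question): 7-Zip can list an archive without extracting it, and "7za l -slt" prints a per-entry technical record that includes each uncompressed size. A minimal Python wrapper might look like the following; the archive name is a placeholder, and the count below includes directory entries as well as files.

        # Sketch: pull total uncompressed size and entry count out of "7za l -slt"
        # output without extracting anything. Assumes 7za is on PATH; "backup.7z"
        # is a placeholder. Directory entries are counted along with files.
        import subprocess

        def archive_stats(path):
            out = subprocess.run(["7za", "l", "-slt", path],
                                 capture_output=True, text=True, check=True).stdout
            in_entries = False
            entries, total_size = 0, 0
            for line in out.splitlines():
                if line.startswith("----------"):     # separator before per-entry records
                    in_entries = True
                elif in_entries and line.startswith("Path = "):
                    entries += 1
                elif in_entries and line.startswith("Size = "):
                    value = line.split("=", 1)[1].strip()
                    if value.isdigit():
                        total_size += int(value)
            return entries, total_size

        entries, total_size = archive_stats("backup.7z")
        print(entries, "entries,", total_size, "bytes uncompressed")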

    Read the article

  • GUID Partition Table & Linux

    - by Zac
    (1) Is it true that the new GUID Partition Table scheme allows a user to partition a drive however he or she likes, outside of the traditional MBR "4 primaries or 3 primaries + 1 extended" paradigm? If so, are there any limitations to the GPT? If my assumption is wrong, what are its advantages over the MBR model?
    (2) I'm getting a new laptop this week and will be installing Ubuntu (and, more generally, Linux) for the first time ever. Does Ubuntu come pre-configured with MBR as a default? If so, how do I get Ubuntu with GPT? If not, how do I specify GPT over MBR? Thanks!

    Read the article

  • SSL Certificate Stops Working after Server Reboot on IIS7, W2K8

    - by Zac
    We recently upgraded from W2K3/IIS6 to W2K8/IIS7 and have been having problems with our SSL certificate (Thawte 123 SSL certificate) ceasing to work after rebooting. Initially, the intermediate certificates would stop working and we could repair the problem by reinstalling all of them after the reboot (annoying, but not the end of the world). Unfortunately, this is no longer working. The certificate chain has been double-checked by several tools and people with decent knowledge, but no one has been able to identify the cause of the problem. The bindings in IIS have been checked as well, and the cert itself is also still valid.
    NOTE 1: I have seen THIS question, which seems to be very similar, but there is no satisfactory answer in that post and it's a year old, so it's not likely to get one any time soon.
    NOTE 2: I'm asking this on behalf of a co-worker, so I won't be able to provide instant feedback to any questions/suggestions, but I will pass it on.
    The URL is: http://www.flirtalike.com / https://www.flirtalike.com (screenshots were included in the original post).

    Read the article

  • Windows XP refuses to install on a Dell Mini 10

    - by Zac
    I have a Dell Inspiron Mini 10. It boots from an external CD-ROM drive and goes through the first part of the Windows XP Home install with no problem, but on reboot it loses the CD-ROM drive when it goes into sysprep. Any ideas on how to get around this to complete the install?

    Read the article

  • Partitioning & Linux

    - by Zac
    Every tutorial on Linux-based partitioning schemes (or just partitioning in general) will tell you that a PC can have either 4 primary partitions, or 3 primaries and 1 extended. They will all also tell you that Linux (in my case, Ubuntu) can be installed on either. It's also come to my attention that it is not too atypical for FHS directories, such as usr/, tmp/, etc/, home/ or var/, to be mounted separately on other partitions. There are several questions I am unable to find the answers to, purely for my own edification:
    (1) By "PC", are we really talking about common PC disk types, like IDE or SATA? I guess I'm wondering why PC users are limited to 4 primaries or 3 primaries + 1 extended.
    (2) I'm choking on some basic OS concepts: it is said that a partition can be mounted by a file system or an OS. So I assume this means I can somehow instruct Ubuntu to mount to 1 partition, and then any part of, say, ReiserFS, to be mounted to another partition? How?
    (3)(a) What about creating swap partitions? Is there too much of a good thing with swap partitioning? If I have 4 GB RAM over a 320 GB disk, what should my swap partition size be, and why?
    (3)(b) Are swap files the only way to create swap partitions? Wouldn't a Linux partitioning utility allow me to define a partition as being for virtual memory only?
    (4) Why are partitions limited to being "mounted" by just OSes and file systems? Why couldn't I write a program to take up its own, say, 512 MB partition, and then have it invoked or used by an OS installed on another partition?
    Thanks for shedding any light here... it's not critical that I know this stuff, but it's got me thinking incessantly. And when I think incessantly, I...can't......sleep....

    Read the article

  • How to make the jump from consumer support to enterprise support?

    - by Zac Cramer
    I am currently a high-level consumer break/fix technician responsible for about 300-400 repairs a month. I am good at my job, but bored, and I want to move into the enterprise side of my company, dealing with Server 2008 R2, Exchange, and switches and routers that cost more than I make in a month. How do I make this transition? What's the best thing to learn first? Is there a standard trajectory for making this leap from consumer to business? I am employed full time, so going back to school is not a great option, but I have no life, so spending my nights and weekends reading and practicing is totally within my realm. I am basically overwhelmed by the number of things to learn, and I'm looking for any advice you may have on the best way to proceed. PS - I apologize if this is not quite the right forum for this; I know it's not a technical question exactly, but I also know the sorts of people I want to answer this question are reading this website.

    Read the article

  • How do I (robustly) remotely execute tasks on Windows workstations in a domain?

    - by Zac B
    I'm not even sure if "robustly" is a word. Anyway.
    Context: We have a few hundred Windows 7 workstations on a LAN. We use AD/GPO management pretty heavily, but there are a lot of periodic and/or manual maintenance tasks we need to do that can't be done via GPO or a scheduled task. For example, say I want to execute program X (which runs silently, in the background, and doesn't bother the user) on workstation Y, or say I want to execute task A on workstation group B, either on a schedule or on demand. Kicking the users off of their computers to do this (i.e. using RDP) is a no-no, and doesn't work on groups anyway.
    Question: What's the best way to do this that is robust enough that, after setup, I could give it to beginner support people (read: people who are phobic of the command line, and get confused by GUI interfaces more complicated than Firefox)? I'm a competent programmer, and if there is a robust set of tools or a framework out there for this type of task, I'd consider hacking something together myself if it didn't take too long. If there's some combination of tools or techniques that others use to make remote workstation administration doable by beginners, I have yet to find it.
    For those who care about the "why": I'm mid-level IT, and was told to implement a remote management solution that allows arbitrary/scheduled remote execution, with confirmation that programs actually ran remotely, and the ability to view what they returned. "Why?" I asked. "Can't I just use PsExec and the task scheduler on a dispatcher machine?" "No," I was told, "'Joe' the second-week tech is going to be in charge of this one, and he needs something simple with a GUI."
    What I've tried: I've played with making a bunch of one-clickable "transfer files to the remote computer and run them with PsExec" batch/VB scripts, but those tend to break down and don't easily support running on customizable groups. I've played a little bit with the Windows version of Puppet, but it doesn't support arbitrary-time remote execution (its ability to group computers into a tree/node structure is really nice, though). I've used an older version of Altiris, and while it does a lot of what I want, its interface is awful, it's slow, it crashes a lot, and it's probably too expensive for management. SwiftWater's DMS solution does some of what I want, but it's very underdeveloped, closed-source (not a deal breaker but not ideal), and I get the impression that support and reliability are lacking.
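    For what it's worth, the PsExec-on-a-dispatcher idea mentioned above can be scripted so that an operator only runs one file and reads one results log. The Python sketch below is purely illustrative (not from the question, and not a product recommendation); the host list, the command, and the log path are placeholders, PsExec must be on PATH, and the account running it needs admin rights on the targets.

        # Sketch: dispatcher-style remote execution with PsExec, recording exit codes
        # so a less technical operator only has to read the results file afterwards.
        # HOSTS, COMMAND, and LOGFILE are placeholders.
        import datetime
        import subprocess

        HOSTS = ["WKSTN-001", "WKSTN-002", "WKSTN-003"]   # placeholder workstation group
        COMMAND = [r"C:\Tools\maintenance.exe"]           # placeholder silent task
        LOGFILE = r"C:\Tools\remote_run_results.csv"

        with open(LOGFILE, "a") as log:
            for host in HOSTS:
                result = subprocess.run(
                    ["psexec", rf"\\{host}", "-s", "-n", "30"] + COMMAND,
                    capture_output=True, text=True)
                # A nonzero return code flags a host that needs a second look.
                log.write("{},{},{}\n".format(
                    datetime.datetime.now().isoformat(), host, result.returncode))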

    Read the article

  • What is the max supported number of SATA devices (using cable adapters) on a Dell SAS 6/iR adapter?

    - by Zac B
    I've got a Dell SAS 6/iR PCI-E adapter. I don't have a multiplier backplane. I'm planning on connecting SATA (non SAS) drives. If I buy cable adapters only (ones that split a SAS connector on the card to a certain number of SATA cables), how many drives can I connect to this card? The way I see it, there are two limitations: a limitation imposed by the theoretical max number of devices supported on the card (which I've dug through the specs to find, but haven't seen yet), and a limitation imposed by the number of SAS plugs on the card multiplied by the number of SATA cables that come out of the highest-multiplying splitter I can buy. The answer to my question would be the minimum of those two limitations. I've seen 4x SATA coming out of some splitters; are there any that have more? Alternatively, if this is an RTFM question, does anyone have a good link to a "this is how SAS works, this is how you figure out the max number of devices, and this is how the concepts of 'ports', 'lanes', 'endpoint devices', and 'connectors' all relate in SAS-land" document? I've looked around on the Dell docs, but haven't found anything that explains this to someone at my level of understanding of SAN/enterprise storage technologies. Cheers!

    Read the article

  • MD5 and SHA1 checksum uses for downloading

    - by Zac
    I noticed that when downloading a lot of open source tools (Eclipse, etc.) there are links for MD5 and SHA1 checksums, and I didn't know what these were or what their purpose was. I know these are hashing algorithms, and I do understand hashing, so my only guess is that these are used to hash some component of the download targets and to compare the result with "official" hash strings stored server-side. Perhaps that way it can be determined whether or not the targets have been modified from their correct version (for security and other purposes). Am I close or completely wrong, and if wrong, what are they? Thanks!
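    That is the general idea: the published digest lets you confirm that the bytes you downloaded match the bytes the project released. As a minimal illustration (mine, not from the question), hashing the file locally and comparing it to the published string might look like this in Python; the file name and the expected digest are placeholders.

        # Sketch: verify a download against a published checksum. The file name and
        # EXPECTED digest are placeholders; swap in the real download and the string
        # shown next to the MD5/SHA1 link on the mirror.
        import hashlib

        EXPECTED = "9b74c9897bac770ffc029102a200c5de"   # placeholder MD5 string

        def file_digest(path, algorithm="md5", chunk_size=1024 * 1024):
            digest = hashlib.new(algorithm)
            with open(path, "rb") as f:
                while True:
                    block = f.read(chunk_size)
                    if not block:
                        break
                    digest.update(block)
            return digest.hexdigest()

        actual = file_digest("eclipse-java.tar.gz", "md5")
        print("OK" if actual == EXPECTED else "MISMATCH: got " + actual)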

    Read the article

  • Linux Best Practices

    - by Zac
    I'm a life-long Windows developer switching over to Linux for the first time, and I'm starting off with Ubuntu to ease the learning curve. My new laptop will primarily be a development machine: 6 GB RAM, 320 GB HD. I'd like there to be 2 non-root users: (a) Development, which will always be me, and (b) Guest, for anyone else. I assume the root user is added by default, like System Administrator in Windows.
    (1) I'd like to mount /home to its own partition, but how does this work if I have two user accounts (Development and Guest)? Are there 2 separate /home directories, or do they get shared? Is it possible to allocate more space for Development and only a tiny bit of space for Guest in GRUB2? How?
    (2) I'm assuming that it's okay that all of my development tools (Eclipse & plugins, SVN, JUnit, ant, etc.) and Java will end up getting installed in non-/home directories such as /usr and /opt, but that my Eclipse/SVN workspace will live under my /home directory on a separate partition... any problems, issues, or concerns with that?
    (3) As far as partitioning schemes go, nothing too complicated, but not plain Jane either:
      - Boot partition, 512 MB, in case I want to install other OSes
      - Ubuntu & non-/home file system, 187.5 GB
      - Swap partition, 12 GB (= RAM x 2)
      - /home partition, 120 GB
    I don't have any bulky media data (I don't have music or video libraries; this is a lean and mean dev machine), so having 320 GB is like winning the lottery and not knowing what to do with all this space. I figured I'd give a little extra space to the OS/FS partition since I'll be running JEE containers locally and doing a lot of file IO, logging and other memory-intensive operations. Any issues, problems, concerns, or suggestions?
    (4) I was thinking about using ext4; it seems to have good filestamping without any space ceiling for me to hit. Any other suggestions for a dev machine?
    (5) I read somewhere that you need to be careful when you install software as the root user, but I can't remember why. What general caveats do I need to be aware of when doing things (installing packages, making system configurations, etc.) as root vs the "Development" user? Thanks!

    Read the article

  • ASP.Net Session Timing Out Rapidly

    - by Zac
    We have an ASP.NET 3.5 website running on Windows Server 2008 with IIS7. The session timeout period for this site is configured to be 20 minutes; however, it is currently lasting for between 40 and 50 seconds. After researching the problem we investigated several configuration values which could be involved in the timeout period, but none of them are set to less than 20 minutes. The areas we looked at are as follows:
      - web.config system.web/sessionState element (20 minutes).
      - web.config system.web/authentication/forms element (not present, defaults to 30 minutes).
      - Sites/{website}/ASP/Session Properties/Time-out (20 minutes).
      - Application Pools/{appPool}/Advanced Settings/Process Model/Idle Time-out (20 minutes).
    We've also noted that the CPU is staying around 0% and that RAM usage is flat-lining around 1.07 GB (of 8 GB available), so there is no performance-based reason for IIS to be recycling the application pool as far as we can tell. Are there any settings we've overlooked which could cause the sessions to expire so quickly?
    EDIT: A couple of additional points: this is not occurring in development, only on the server, and the session is not sliding (i.e. if we refresh the page a few times it still times out approximately 40-50 seconds after the session was created).

    Read the article

  • Java Development in Linux

    - by Zac
    I'm a developer and am brand new to Linux (Ubuntu). I'm wondering what best practices dictate about which FHS directories to install various tools into. Things I'll be installing:
      - Eclipse & plugins
      - GlassFish
      - SVN
      - ...etc.
    I see that /opt is for holding additional ("optional") software packages, but I also see /usr described as a place for utils and apps. In another post a user recommended I create an entire partition for /srv alone and do my staging there (I assume he meant that /srv is where GlassFish and other servers should go?). So basically: which FHS directories do Linux developers use for which types of tools? Thanks for any input.

    Read the article

  • Log connections to program

    - by Zac
    Besides using iptables to log incoming connections, is there a way to log established inbound connections to a service that you don't have the source to (supposing the service doesn't log stuff like this on its own)? What I want to do is gather some information based on who's connecting, to be able to tell things like what times of the day the service is used the most, where in the world the main user base is, etc. I am aware I can use netstat and just hook it up to a cron script, but that might not be accurate, since the script could only run as frequently as once a minute. Here is what I am thinking right now:
      - Write a program that constantly polls netstat, looking for established connections that didn't appear in the previous poll. This idea seems like such a waste of CPU time, though, since there may not be a new connection.
      - Write a wrapper program that accepts inbound connections on whatever port the service runs on, but then I wouldn't know how to pass that connection along to the real service.
    Edit: it just occurred to me that this question might be better for Stack Overflow, though I am not certain. Sorry if this is the wrong place.
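    As a rough illustration of the first idea (my own sketch, not part of the question), a polling loop can diff the set of established connections between passes and log only the new peers. The port number and interval are placeholders, and the parsing assumes the classic "netstat -tn" column layout; it would need adjusting for IPv6 addresses.

        # Sketch: poll netstat for ESTABLISHED connections to a given local port and
        # log remote addresses as they first appear. PORT and the 1-second interval
        # are placeholders; column parsing assumes "netstat -tn" output.
        import datetime
        import subprocess
        import time

        PORT = 8080            # placeholder: the service's listening port

        def established_peers():
            out = subprocess.run(["netstat", "-tn"],
                                 capture_output=True, text=True).stdout
            peers = set()
            for line in out.splitlines():
                parts = line.split()
                if len(parts) >= 6 and parts[-1] == "ESTABLISHED":
                    local_addr, remote_addr = parts[3], parts[4]
                    if local_addr.rsplit(":", 1)[-1] == str(PORT):
                        peers.add(remote_addr)
            return peers

        seen = set()
        while True:
            current = established_peers()
            for peer in current - seen:
                print(datetime.datetime.now().isoformat(), "new connection from", peer)
            seen = current
            time.sleep(1)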

    Read the article

  • How to use AD/GPO/Print Services to "push out" a new printer driver to replace a broken one? How did my server get a broken driver?

    - by Zac B
    Context: We have an AD/GPO-managed corporate network with a little over a hundred PCs running Windows 7 x64, and a few managed printers. Our Server 2008 R2 primary domain controller is configured as a print server for them all.
    Problem: After a recent Windows update and restart on the DC (no printer driver updates were included), a particular shared printer (Lexmark T650) has begun exhibiting some strange behavior. First, it prints a preceding and a following blank page for almost every document on jobs submitted by about half of the client machines (no separator page is configured on the server or on any of the clients I've seen). Second, whenever someone tries to access "Printing Preferences" on any client, they receive an error message (shown as a screenshot in the original post); this happens everywhere, 100% of the time, and didn't happen before the update on the DC. Once they click "OK", the prefs screen appears (with no separator page selected) and everything seems fine. I'm not even sure if these two issues are related, but everyone seems affected by one or both of them.
    What I've tried: I've been hesitant to un-deploy the problem printer or remove it via GPO, as it's pretty heavily used. I've tried updating (via MS Update and our internal WSUS server) the client machines and the DC. No printer driver updates have appeared, and no number of updates or restarts on the server or the clients seems to have achieved anything other than my boss getting grumpy that I'm bouncing the domain controller so often. I've tried deleting the drivers on the server and re-installing them from the original source that has worked for the past year... no change. I've tried selecting "New Driver" for one of the shared printers on a client machine, running as domain admin, and pushing the latest driver found by MS Update back up to the DC. This changed the version number of the driver recorded in the print server manager, but caused no change on the client I pushed from or on any other; the error still appears.
    Question: Why the heck is this happening? Obviously I got a bad driver from somewhere, but how do I get rid of it? I don't know of any "roll back driver" functionality for centrally managed print drivers like Windows offers for other devices. How would I (a) get this issue resolved on a client, and (b) push the fix to the other members of the domain?

    Read the article

  • How do I get the current date according to an NTP server without setting it locally?

    - by Zac B
    I want to get the current date and time according to a remote NTP server, using Linux. I don't want to change the local time as a result; I just want to get the remote date, adjusted for the local time zone, printed out. The date returned must comply with the following criteria:
      - It needs to be reasonably accurate.
      - It needs to be adjusted for the time zone of the local system making the request.
      - It needs to be formatted in an easily readable or interpretable way (a standard date format, or seconds since the epoch).
    What I've tried: I can call ntpdate -q my.ntp.server and get the offset between the local time and the server's time, but that doesn't return the date according to the NTP server; it just returns the offset and the local date. Is there some easy way or command I can use to say: "Print out the date according to a given NTP server, adjusted for my current timezone"?
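    One way to get this without touching the system clock (a sketch of mine, not from the question) is to ask the server directly with a plain SNTP request and convert the returned timestamp locally. The server name below is a placeholder.

        # Sketch: query an NTP server for its time (simple SNTP, UDP port 123) and
        # print it converted to the local time zone, without changing the system
        # clock. "my.ntp.server" is a placeholder hostname.
        import datetime
        import socket
        import struct

        NTP_SERVER = "my.ntp.server"
        NTP_TO_UNIX = 2208988800   # seconds between the 1900 NTP epoch and the 1970 Unix epoch

        request = b"\x1b" + 47 * b"\x00"          # LI=0, version 3, mode 3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(5)
            sock.sendto(request, (NTP_SERVER, 123))
            response, _ = sock.recvfrom(1024)

        # The transmit timestamp starts at byte 40: 32-bit seconds, 32-bit fraction.
        seconds, fraction = struct.unpack("!II", response[40:48])
        unix_time = seconds - NTP_TO_UNIX + fraction / 2**32
        remote_now = datetime.datetime.fromtimestamp(unix_time)   # local-time-zone view
        print(remote_now.strftime("%Y-%m-%d %H:%M:%S"))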

    Read the article

  • Proper Rules For SSL Redirect For Subdomains

    - by Zac Cleaves
    This is what I have been using:
    RewriteCond %{HTTP_HOST} ^(.*\.)*subexample.example.com$ [NC]
    RewriteCond %{SERVER_PORT} !^443$
    RewriteRule ^(.*)$ https://subexample.example.com/$1 [R]
    It works as long as I go to a specific page, like subexample.example.com/orders.php. But if you try to go to the root of the subdomain, it adds an extra "/example" at the end. Any suggestion on a set of rules that will work? Thank you so much for your responses! Actually, this is what I am trying to do: http://support.mydomain.net >> https://support.mydomain.net AND(!) http://support.mydomain.net/anypage* >> https://support.mydomain.net/anypage*
    RewriteCond %{HTTPS} off
    RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    works, except I need it to only do it for support.mydomain.net. With the above set up, you get a certificate warning if you try to go to just mydomain.net, which I do not have or need an SSL certificate installed on.
    UPDATE! The other issue with the rule I have written above is that if you try to go to the root of the subdomain (i.e. support.mydomain.net) it goes to https://support.mydomain.net/support. This is driving me crazy, help! =) Any help would be greatly appreciated!

    Read the article

  • Data Formatters temporarily unavailable

    - by Zac Altman
    I'm trying to use a date formatter (NSDateFormatter), but I keep getting this error: Program received signal: “EXC_BAD_ACCESS”. Data Formatters temporarily unavailable, will re-try after a 'continue'. (Unknown error loading shared library "/Developer/usr/lib/libXcodeDebuggerSupport.dylib")

    Read the article
