Search Results

Search found 2442 results on 98 pages for 'dan ryan'.


  • What breaks in a Windows domain if a member has a high time skew?

    - by Ryan Ries
    It's taken for granted by most IT people that in a Windows domain, if a member server's clock is off by more than 5 minutes (or however many minutes you've configured it for) from that of its domain controller, logons and authentications will fail. But that is not necessarily true - at least not for all authentication processes on all versions of Windows. For instance, I can set the time on my Windows 7 client to be skewed all to heck, and logoff/logon still works fine. What happens is that my client sends an AS_REQ (with its time stamp) to the domain controller, and the DC responds with KRB_AP_ERR_SKEW. But the magic is that when the DC responds with the aforementioned Kerberos error, it also includes its own time stamp, which the client in turn uses to adjust its own time before resubmitting the AS_REQ, which is then approved. This behavior is not considered a security threat because encryption and secrets are still being used in the communication. Nor is this just a Microsoft thing: RFC 4120 describes this behavior. So my question is: does anyone know when this changed? And why do other things still fail? For instance, Office Communicator kicks me off if my clock starts drifting too far out. I'd really like more detail on this.

    edit: Here's the bit from RFC 4120 that I'm talking about: "If the server clock and the client clock are off by more than the policy-determined clock skew limit (usually 5 minutes), the server MUST return a KRB_AP_ERR_SKEW. The optional client's time in the KRB-ERROR SHOULD be filled out. If the server protects the error by adding the Cksum field and returning the correct client's time, the client SHOULD compute the difference (in seconds) between the two clocks based upon the client and server time contained in the KRB-ERROR message. The client SHOULD store this clock difference and use it to adjust its clock in subsequent messages. If the error is not protected, the client MUST NOT use the difference to adjust subsequent messages, because doing so would allow an attacker to construct authenticators that can be used to mount replay attacks."
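    (A quick way to watch this skew behavior in practice - a sketch, with the DC name as a placeholder - is w32tm, which charts the offset between the local clock and a given time source and can force a resync:)

        C:\> w32tm /stripchart /computer:dc01.example.com /dataonly /samples:5
        C:\> w32tm /resync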


  • What is the best way to configure Apache or AWS to support a Rails multi-tenancy application that allows each customer to have their own domain name?

    - by Ryan Arneson
    I'm building a Rails 3 SaaS site that allows for multi-tenancy. When a customer signs up they put in their own domain name, e.g. example.com. I need example.com to point to my SaaS application and serve them their content. My questions are as follows: Do I need to create an Apache vhost for each customer using their own domain? Is there an easier way with CNAMEs to just have the customer point to the IP address of my server(s), which then forwards the request on to my application through some catch-all vhost? Would I be able to create the CNAME record for the customer so they don't have to do any setup? Would this be a case better suited to Amazon Web Services? Any help, explanation, or corrections on my understanding of DNS would be appreciated. I'm a developer, so the server ops portion of this is a bit cloudy.
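    (One common pattern - a sketch only, with hostnames and the app port as placeholders - is a single catch-all vhost that proxies every Host header through to the Rails app, while each customer points a CNAME at one stable hostname you control; the app then picks the tenant from request.host:)

        # requires mod_proxy / mod_proxy_http
        <VirtualHost *:80>
            ServerName  app.yoursaas.example
            ServerAlias *
            ProxyPreserveHost On
            ProxyPass        / http://127.0.0.1:3000/
            ProxyPassReverse / http://127.0.0.1:3000/
        </VirtualHost>

    (One DNS caveat: a CNAME can't live at a zone apex, so a customer's bare example.com needs an A record pointing at your IP; only names like www.example.com can be CNAMEs.)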


  • Mounted HDD not having enough permissions from Apache/PHP

    - by Dan
    Piwigo gallery, on Apache and PHP. The root file system is a 128GB RAID, and /var/www/html is on it. I mounted the 320GB HDD to /var/www/html/320 using defaults; it's an ext4 fs. I put a symlink to it at /var/www/html/galleries, which is read by the gallery script so I can upload images there, then click sync. It gives me the error:

        [./galleries/] PWG-ERROR-NO-FS (File/directory read error)
        PWG-ERROR-NO-FS: The file or directory cannot be accessed (either it does not exist or the access is denied)

    chmod 777 is set on /dev/sdb1, /var/www/html, and /var/www/html/320, as well as the galleries symlink - all recursive. chown apache:apache on everything too. PHP just can't read/write to it. I've tried with and without the symlink; I've tried everything I can think of. Nothing. Any ideas how I can give Apache/PHP permission to read/write to this drive? With 777 permissions all around it should already be able to.
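    (For reference, the setup described is roughly the following - a sketch, using the device and paths from the question; the last command is an extra check worth running, since on a Red Hat-family box with SELinux enforcing, Apache can be denied access no matter what the mode bits say:)

        mount -t ext4 /dev/sdb1 /var/www/html/320
        ln -s /var/www/html/320 /var/www/html/galleries
        chown -R apache:apache /var/www/html/320
        chmod -R 777 /var/www/html/320
        getenforce    # "Enforcing" means SELinux may be the real gatekeeper here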


  • What is the best package repository for Debian?

    - by Dan Sosedoff
    Hello everybody. I have a question regarding package repositories in Debian (Lenny). A new server at the Rackspace cloud service comes with nothing (plain system), so I had to upgrade the system and install all necessary packages myself. But the problem is that the default repositories do not provide newer versions of the software I need (PHP is 5.2.6, compared to 5.2.13). In CentOS I used to get all new packages from third-party sources, which has worked really well until now. I have talked to the guys from Rackspace about it and it looks like they can't provide anything except kernel headers. And I don't really want to build from source. So, what's the way to get newer software? Any good third-party sources? Thanks.
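    (For Lenny specifically, the usual route at the time was backports - a sketch; verify the archive and its keyring before relying on it:)

        # /etc/apt/sources.list addition (assumption: backports.org still serves lenny-backports)
        deb http://www.backports.org/debian lenny-backports main contrib non-free

        apt-get update
        apt-get -t lenny-backports install php5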


  • Windows 8 slow after refreshing and rebooting

    - by Dan Drews
    First, I apologize if this is in the wrong place. I use S/O a lot but not the other sites much. I have an HP Split X2 that has been very choppy as of late (takes several seconds to respond to any form of input), so I went ahead and did a system refresh. After the refresh everything ran very fast as it should, then when I went to download my old apps I needed to reboot. After rebooting, it went back to the choppiness. Does anybody have any thoughts on what this could be?


  • Arguments passed by the shell to a command in Unix

    - by Ryan Brown
    I've been going over this question and I can't for the life of me figure out why the answer is what it is. How many arguments are passed to the command by the shell on this command line:

        <pig pig -x " " -z -r" " >pig pig pig

    a. 8   b. 6   c. 5   d. 7   e. 9

    I looked at this question and said ok... arguments... not options. So: the 2nd pig, then " ", then -r" ", the 4th pig and the 5th pig; -z and -x are options, so I count 5. The answer is b. 6. Where is the 6th argument that's being passed?
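    (One way to settle it empirically - a sketch; the stand-in script below is hypothetical, and reports to stderr so the >pig redirection can't swallow the output:)

        $ cat > nargs <<'EOF'
        #!/bin/sh
        echo "$# arguments" >&2
        EOF
        $ chmod +x nargs
        $ touch pig                                  # <pig needs the file to exist
        $ <pig ./nargs -x " " -z -r" " >pig pig pig
        6 arguments

    (The shell strips both redirections before the program ever runs, so the six arguments it passes are -x, " ", -z, "-r ", pig, and pig - options count as arguments too.)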


  • Why does Process Explorer cause highly targeted failure of some applications / basic UI functions in a high-power EC2 Windows instance?

    - by Dan Nissenbaum
    Update: I have determined that Process Explorer itself - the program I am using to debug a performance issue - seems to be the cause of the issue. See note, with updated question, at end.

    I am running a high-power (cc2.8xlarge) Amazon AWS EC2 Windows instance off of a boot EBS volume, provisioned at 2500 PIOPS, which was created from a snapshot of a previous boot volume. My purpose with the instance is to use it as a development workstation with many developer tools installed, such as Visual Studio, a local XAMPP stack, etc. I have upwards of 40 programs installed on the machine. The usability of the instance as a development machine often works quite well. The RDP lag is adequately small. I have used it for hours on end without problems for some of my most intense development tasks. As a result, I have just purchased a reserved instance, and I opted to rebuild my development machine starting from scratch with a Windows Server 2012 AMI.

    After having installed all of my desired/required applications for development over this past week, again the machine seems to often work well, and I have worked for up to an hour at a time without problems doing heavy development work. However, I continue to run into catastrophic OS usability issues that may prevent me from being able to rely on this machine as a development machine. I would like to track down the source of the problem, if there is an easily identifiable source. (Update: I have tracked down the source to be Process Explorer, the very program I was using to debug the problem. See update at end.)

    The issues are as follows (these are some primary examples):

    Some applications, after a period of adequate responsiveness, suddenly begin to respond very, very slowly to basic user interface actions such as clicking on menus and pressing Ctrl-Tab to switch between open documents. Two examples are UltraEdit and PhpEd. It typically takes ~2 seconds for a menu to appear, and ~4 seconds to switch between open documents. Additionally, insertion point motion in the editor is lagged by upwards of ~2 seconds.

    Process Explorer, which I am using to help debug the problem, seems to run acceptably for a couple of minutes, but on multiple occasions Process Explorer itself hangs completely. It hangs at the same time as the problems noted above. When it hangs, it is 100% unresponsive. Clicking on its taskbar icon neither brings it to the top nor sends it behind, and its viewable area is filled with nothing but a region partially containing pure white and partially containing incomplete window widgets that are unreadable, and that never change. Waiting 10 minutes does not clear the problem. Attempting to force-quit Process Explorer by right-clicking on its taskbar icon and choosing "Close Window" takes about 5 full minutes to exit (Process Explorer itself can't be used to exit Process Explorer, and it is registered as a Task Manager substitute).

    Other programs work just fine during this time. For example, Chrome tabs flip very quickly back and forth, menus pop open instantly, web pages load quickly, and typing in forms/web applications inside the browser works promptly. Another example of an application that works crisply is FileMaker - its menus open instantly, and switching views occurs promptly. Other applications also work without issue, and switching between applications occurs promptly as well. It is only a handful of applications that exhibit the problem, with some primary examples given above.
    At first I thought that EBS IOPS might be a problem. Therefore, I ran Performance Monitor and watched the "Disk Transfers/sec" counter in real time. At no point did this measure come anywhere close to hitting the 2500 PIOPS provisioned for the EBS volume. RAM was also well under the limit (~10 GB used out of 60 GB). I did notice that one CPU core (out of 32 logical cores) was thrashing at 100% (i.e., ~3.1% of total CPU) during the problematic periods. This seems to indicate that a single CPU core is handling the menus / flipping between open documents (for some applications only) / managing the Process Explorer user interface, and that this single core was hosed for some reason during the problematic periods.

    Also note that I have a desktop workstation (Windows 7) that I also use as a development machine, via a remote connection, with a nearly identical set of programs installed, and this desktop workstation does not exhibit any of the problems I've discussed above. I have been using it heavily for well over a year now. Any suggestions regarding either the source of the problem, or steps I might take to investigate it, would be appreciated. Thanks.

    Note: After extensive testing and investigation, I have noticed that when I quit Process Explorer, the problem vanishes and system performance returns to normal, then reappears quickly when I run Process Explorer again (again, the performance problems only appear for a subset of applications - other applications work perfectly fine during the same period). My question is therefore (thankfully) more specific: Why does Process Explorer cause highly targeted failure of some applications (including itself) and basic UI functions, in a high-power EC2 Windows instance?
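    (For anyone chasing similar symptoms, a way to watch for a single pegged core in real time - a PowerShell sketch; the 90% threshold and sample counts are arbitrary:)

        # Sample per-core CPU every 2 seconds for a minute; print any core above 90%
        Get-Counter '\Processor(*)\% Processor Time' -SampleInterval 2 -MaxSamples 30 |
            ForEach-Object {
                $_.CounterSamples |
                    Where-Object { $_.InstanceName -ne '_total' -and $_.CookedValue -gt 90 } |
                    ForEach-Object { '{0}  core {1}: {2:N0}%' -f $_.Timestamp, $_.InstanceName, $_.CookedValue }
            }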


  • Reporting Services 2008: Virtual directories not visible in IIS7

    - by Ryan Barrett
    I'm having some problems with Reporting Services on Windows Server 2008 Standard. I've installed Server 2008 as a standalone web server (with the roles/features of a web application server). On top of that, I've installed SQL Server 2008 Standard with Reporting Services (and the rest of the BI tools). The problem is, I want to modify the rights on the virtual directories, but the virtual directories aren't appearing in the IIS 7 management tool. I can connect to Reporting Services, albeit only with the local Windows admin account. I can download Report Builder fine from a session on the server (but not from any clients). I've tried removing the default website from IIS, and that stops the Reporting Services website from working. The machine (a VM) isn't for production use - it's used on a closed network internally for testing and development purposes. I need to be able to let my fellow developers log in without a password, and they must be able to install Report Builder 2.0. It must not be linked to a domain or Active Directory in any form. Google isn't much help; the results suggest I modify the virtual directories. Does anyone have any suggestions?
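    (Worth noting for anyone hitting this: Reporting Services 2008 dropped the IIS dependency entirely and registers its URLs directly with HTTP.SYS via the Reporting Services Configuration Manager, so there are no virtual directories for IIS Manager to show. The HTTP.SYS reservations themselves can be listed like this - a sketch; the findstr filter is just illustrative:)

        C:\> netsh http show urlacl | findstr /i "Reports ReportServer"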


  • Per-machine decentralised DNS caching - nscd/lwresd/etc

    - by Dan Carley
    Preface: We have caching resolvers at each of our geographic network locations. These are clustered for resiliency, and their locality reduces the latency of internal requests generated by our servers. This works well - except that a vast quantity of the requests seen over the wire are lookups for the same records, generated by applications which don't perform any DNS caching of their own. Questions: Is there a significant benefit to running lightweight caching daemons on the individual servers in order to reduce repeated requests from hitting the network? Does anyone have experience of using [u]nscd, lwresd or dnscache to do such a thing? Are there any other packages worth looking at? Any caveats to beware of, besides the obvious ones of serving stale results from the cache and from the negative cache?
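    (For reference, a minimal sketch of the nscd variant - stock /etc/nscd.conf keys, with the TTLs as placeholder values:)

        # /etc/nscd.conf - cache host lookups on the local machine
        enable-cache            hosts   yes
        positive-time-to-live   hosts   600     # seconds to keep good answers
        negative-time-to-live   hosts   20      # seconds to keep NXDOMAIN answers

    (nscd -g then prints hit/miss statistics, which would answer the "significant benefit" question empirically.)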


  • Apache logging issues

    - by Dan
    I'm trying to parse Apache log files, but I'm finding some strange results and I'm not sure what they mean. Hopefully someone can provide some insight. (All of the IP addresses were altered; none actually start with 192. I didn't figure the search engines mattered, though.) In the first example, multiple IP addresses are showing up in the host field:

        192.249.71.25 - - [04/Aug/2009:04:21:44 -0500] "GET /publications/example.pdf HTTP/1.1" 200 2738
        192.0.100.93, 192.20.31.86 - - [04/Aug/2009:04:21:22 -0500] "GET /docs/another.pdf HTTP/1.0" 206 371469

    What causes this? Does it have to do with proxy servers? Is there a way to have Apache only log one? In the second example, a bunch of information is just completely missing! What would cause that?

        msnbot-65-55-207-50.search.msn.com - - [29/Dec/2009:15:45:16 -0600] "GET /publications/example.pdf HTTP/1.1" 200 3470073 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)" 266 3476792
        - - - - "-" - - "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; InfoPath.1)" 285 594
        - - - - "-" - - "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; InfoPath.1)" 285 4195
        - - - - "-" - - "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; InfoPath.1)" 299 109218
        crawl-17c.cuil.com - - [29/Dec/2009:15:45:46 -0600] "GET /publications/another.pdf HTTP/1.0" 200 101481 "-" "Mozilla/5.0 (Twiceler-0.9 http://www.cuil.com/twiceler/robot.html)" 253 101704

    My CustomLog configuration says:

        LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-agent}i\" %I %O" common
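    (If the extra addresses turn out to be X-Forwarded-For values injected by an upstream proxy, one option is to log the direct peer and the forwarded chain as separate fields so parsing stays unambiguous - a sketch, with an arbitrary format nickname:)

        LogFormat "%h \"%{X-Forwarded-For}i\" %l %u %t \"%r\" %>s %b" proxyaware
        CustomLog logs/access_log proxyaware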


  • How to move SharePoint authentication from AD to LDAP without breaking user profiles?

    - by Dan
    We have a bunch of users in a local Active Directory OU that access the SharePoint portal. We've just added LDAP authentication and pointed it at the organisation's global LDAP server, so our AD accounts are now redundant. Is there a way to re-map the authentication for a SharePoint (MOSS 2007) user/profile? That is, can we manually change a lot of users so that they log in with their LDAP credentials and get the same SharePoint MySite, groups, etc. as when they were authenticating via AD?
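    (One mechanism sometimes used for exactly this in MOSS 2007 is stsadm's migrateuser operation - a sketch, with the logins and the membership-provider name as placeholders; test against a non-production copy first:)

        stsadm -o migrateuser -oldlogin CORP\jsmith -newlogin ldapmembership:jsmith -ignoresidhistory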


  • Make Google Chrome's address bar prefer page titles to domain names when offering completions?

    - by Ryan Thompson
    I've recently switched from Firefox to Chrome, and the thing I miss most from Firefox is the "Awesome Bar" that suggests completions for what I type primarily based on page titles, and then secondarily based on domain names. Chrome offers both matching URLs and titles, just like Firefox, but Chrome seems to always prefer a matching domain name over a matching page title or a match to another part of the URL (besides the domain), no matter how many times I pass over the former for the latter. In fact, Chrome also prefers to suggest a search rather than matching anything other than a domain name. So is there any hidden preference I can change to tell Chrome that I care more about page titles than domain names?

    Example: I want to go to Google Reader, so I press Control+L and begin typing "reader". The URL for Google Reader is http://www.google.com/reader/view/#overview-page, so the domain name is www.google.com, which does not contain the word "reader". So the first option that Chrome suggests is either another site that has "reader" as part of the domain, or a search for "reader" with the default search engine. No matter how many times I scroll down and select Google Reader, Chrome never "learns" that that's what I want.


  • Is it possible to shutdown a remote computer running Windows 7 via Telnet?

    - by Ryan Shripat
    I've successfully connected to my Windows 7 desktop over wifi via Telnet from an XP Home netbook. To log in, I use the following command:

        telnet -l "win7desktop\win7user" win7desktop

    win7user in this case is an Administrator on win7desktop and is also a member of the Telnet Clients group. The problem I have is that when I attempt to shut down win7desktop by issuing the following command at the Telnet prompt:

        shutdown /s

    ...I get an Access Denied error:

        Access is denied.(5)

    Is it possible to shut down a remote computer running Windows 7 via Telnet? If so, what do I need to do to get around the Access is denied error?
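    (Two variants sometimes worth trying - a sketch; whether they work depends on UAC policy and the Telnet session's token: a forced immediate shutdown from inside the session, or a remote shutdown issued from the XP box instead:)

        REM inside the telnet session on win7desktop
        shutdown /s /f /t 0

        REM or from the XP netbook, using the remote-machine switch
        shutdown /s /m \\win7desktop /t 0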


  • Running scheduled web scripts in a Windows Server environment

    - by Dan Murfitt
    I'm trying to get a scheduled web script running on a Windows Server, and so far the only way I've managed to automate the process is by using Task Scheduler to open Internet Explorer with the web address as a parameter, then creating a separate task that runs just afterwards to close Internet Explorer (otherwise the first task never completes). Is there a better way of doing this? I've also managed to run the script by calling the web address through a Telnet connection to the web server (GET /web/address/here), but I haven't found a way of automating that on a scheduled basis. Any ideas appreciated.
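    (A lighter-weight approach than driving Internet Explorer - a sketch, with the URL, script path, and schedule as placeholders - is to have Task Scheduler run a command-line HTTP fetch that exits on its own:)

        REM C:\scripts\fetch.cmd - request the page and exit when the response arrives
        powershell -Command "(New-Object System.Net.WebClient).DownloadString('http://example.com/web/address/here')"

        REM wire it into Task Scheduler
        schtasks /create /tn "RunWebScript" /sc daily /st 02:00 /tr "C:\scripts\fetch.cmd"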


  • How do I set advanced file associations in Windows 7?

    - by Dan O
    It used to be in Windows XP that I could make Warcraft III files load automatically into the game by double-clicking on them. This association was made by going to the file associations ADVANCED area and using this line:

        "C:\Program Files\Warcraft III\War3.exe" -loadfile "%1"

    Note that it takes an argument and an option. However, in Windows 7, the "Default Programs" "Set Associations" area doesn't seem to have this advanced area. Can I still get these files to open automatically?
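    (The command-line equivalents of that advanced area still exist in Windows 7 - a sketch from an elevated prompt, with the .w3g extension and the ProgID as illustrative placeholders:)

        C:\> assoc .w3g=Warcraft3.Replay
        C:\> ftype Warcraft3.Replay="C:\Program Files\Warcraft III\War3.exe" -loadfile "%1"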


  • Send an Email at a future date

    - by Ryan
    I'd like to write up an email that gets sent out in a few days. I'd prefer to use Gmail, but I could use some other client if necessary. It doesn't look like Gmail has this feature in their labs anywhere, but it could just be hiding somewhere. Any ideas? EDIT: a bit more backstory on my particular situation. My wife is out of town for three weeks, and I've decided to email her every day while she's out. Unfortunately, I myself am going camping this weekend, so I wanted to pre-record a message that gets sent while I'm out. Unfortunately, FutureMail and FutureMe both are for sending email to yourself, probably for anti-spam reasons. I guess the best solution is to use thunderbird on my laptop (so it's shielded from power outages). Seems a little excessive to keep a computer running just to send a few emails, but whatever gets the job done :).
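    (For anyone landing here with a shell on an always-on box, a one-shot at(1) job is about the minimal version of this - a sketch, with the address, filename, and time spec as placeholders; at's accepted time syntax varies by implementation:)

        $ echo 'mail -s "Day 3" wife@example.com < day3.txt' | at 08:00 tomorrow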


  • Why is a FLAC encoded from a decoded MP3 bigger than the MP3?

    - by Ryan Thompson
    To be more precise than in the title, suppose I have an MP3 file that is 320 kbps. If I decompress it, then logically, all the data except for roughly 320 kilobits out of each second of audio should be redundant data, able to be compressed away. So, when I encode the decompressed file to FLAC, or any other lossless codec, why is it so much larger? On a related note, is it theoretically possible to losslessly recover the source mp3 audio from a decompressed wav? (I know the mp3 itself is lossy. I'm asking if it's possible to re-encode without any further loss.)

    EDIT: Let me clarify the related question, and the rationale behind it. Suppose I have a wav that was decompressed from an MP3 file (and assume I don't have the mp3 itself for some reason). If I don't want to lose any more quality, I can re-encode it with FLAC or any other lossless encoder and get a larger file just to maintain the same quality. Or, I can re-encode it to mp3 again and get the same size as the original but lose more data. Obviously, neither of these cases is ideal. I can either have the original size or the original quality, but not both (I mean the quality of the original mp3, not the original lossless source). My question is: can we get both? Is it theoretically possible to recover the lossy compressed data from the lossy decompressed data, without losing even more? If it is possible, I could imagine a lossless compression algorithm that compresses the audio with FLAC, then also scans the audio for any signs of previous lossy compression, and if detected, recompresses it losslessly to the original lossy file, keeping whichever file is smaller.
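    (For concreteness, the pipeline under discussion - a sketch, with placeholder filenames:)

        $ ffmpeg -i song.mp3 song.wav    # decode: the 320 kbps lossy stream becomes ~1411 kbps raw PCM
        $ flac --best song.wav           # lossless repack of that PCM - typically still far larger than the mp3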


  • Connection timeout when trying to SSH

    - by dan
    The other day I tried to connect to my remote server via SSH as I always have, but now when I try to connect it just times out after about 60 seconds. I run:

        service ssh start

    which tells me "Job is already running: ssh". I then ran:

        $ netstat -tnlp
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address      Foreign Address  State   PID/Program name
        tcp   0      0      0.0.0.0:993        0.0.0.0:*        LISTEN  1972/dovecot
        tcp   0      0      0.0.0.0:995        0.0.0.0:*        LISTEN  1972/dovecot
        tcp   0      0      127.0.0.1:3306     0.0.0.0:*        LISTEN  2030/mysqld
        tcp   0      0      0.0.0.0:110        0.0.0.0:*        LISTEN  1972/dovecot
        tcp   0      0      0.0.0.0:143        0.0.0.0:*        LISTEN  1972/dovecot
        tcp   0      0      0.0.0.0:10000      0.0.0.0:*        LISTEN  2157/perl
        tcp   0      0      0.0.0.0:22         0.0.0.0:*        LISTEN  3028/sshd
        tcp   0      0      0.0.0.0:25         0.0.0.0:*        LISTEN  2273/master
        tcp6  0      0      :::80              :::*             LISTEN  2618/apache2
        tcp6  0      0      :::21              :::*             LISTEN  2291/proftpd: (acce
        tcp6  0      0      :::22              :::*             LISTEN  3028/sshd

    I am able to access subdomains on my site, and FTP, but I can't SSH or even ping remotely. Any thoughts?
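    (Since sshd is clearly listening on :22, the usual next steps are to watch the client-side handshake and check the host firewall - a sketch, with the hostname as a placeholder:)

        $ ssh -vvv user@myserver.example.com     # shows whether the TCP connect itself is what stalls
        # on the server console:
        $ iptables -L -n --line-numbers          # look for a DROP/REJECT rule matching dpt:22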


  • Linux: need to discover local SATA mirror before HBA-attached SCSI

    - by Ryan
    (None of the machines mentioned are in production.) Hello, I'm trying to install CentOS 5.4, which wants to put the boot loader on either the boot sector of the boot drive (a local SATA mirror, recognized second as sdb) or the MBR of an HBA-attached SCSI array (recognized first as sda). There's a LILO install already in the MBR of sdb, which keeps trying to boot first. If I zero out the MBR of sdb, would the boot loader at sdb1 be found and booted? I was thinking of that as plan B; mostly I'm thinking of coaxing CentOS into finding the local mirror first and bringing it up as sda, but I haven't found info on how to do this anywhere.
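    (With GRUB legacy on CentOS 5, the BIOS disk order the boot loader assumes can be pinned in device.map - a sketch, assuming the SATA mirror really is /dev/sdb; adjust to taste:)

        # /boot/grub/device.map - treat the local SATA mirror as the first BIOS disk
        (hd0)   /dev/sdb
        (hd1)   /dev/sda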


  • Accidentally mounted a ReiserFS drive as MBR on my Windows box - how do I recover?

    - by Ryan
    I had a WD NetCenter with a 160GB drive that kept dropping off the network. I opened up the enclosure, removed the hard drive, and connected it to a Windows box without knowing the drive used ReiserFS. When mounting on the Windows box, I chose "MBR" as the filesystem. 70GB of data is corrupted; 90% of the data is Word documents, Excel spreadsheets, and JPGs - all mission critical. I attempted recovery on a Linux box (Ubuntu) using TestDisk: I could see the container, but couldn't get anything out - according to TestDisk this was because I chose "none" as the filesystem. I then attempted recovery using Nucleus Kernel Recovery for Windows: 98% of what was recovered is incomplete and/or unusable. I need to know if a way exists to recover or rebuild the original ReiserFS MBR, or what tools/techniques might give me the best results in recovering the data. I found a Windows version of TestDisk and ran it yesterday - here are the results:

        TestDisk 6.14-WIP, Data Recovery Utility, May 2012
        Christophe GRENIER <[email protected]>
        http://www.cgsecurity.org

        Disk /dev/sda - 160 GB / 149 GiB - CHS 19457 255 63
        The harddisk (160 GB / 149 GiB) seems too small! (< 519 GB / 483 GiB)
        Check the harddisk size: HD jumpers settings, BIOS detection...

        The following partitions can't be recovered:
             Partition                Start         End      Size in sectors
        >  ReiserFS 3.6            62 241  8   19458   0 18    311581568
           ReiserFS 3.6            62 248 55   19458   8  2    311581568
           ReiserFS 3.6            62 254 37   19458  13 47    311581568
           ReiserFS 3.6            63   6 28   19458  20 38    311581568
           ReiserFS 3.6            63  13 11   19458  27 21    311581568
           ReiserFS 3.6            63  21 43   19458  35 53    311581568
           ReiserFS 3.6            63  27 41   19458  41 51    311581568
           ReiserFS 3.6            63  37 35   19458  51 45    311581568
           ReiserFS 3.6            63  54 20   19458  68 30    311581568
           ReiserFS 3.6            63  76 26   19458  90 36    311581568
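    (Whatever route is taken next, the usual precaution is to work on an image rather than the disk itself, and then let the ReiserFS tools attempt a superblock/tree rebuild - a sketch, with the device and paths as placeholders; --rebuild-tree is destructive, so only ever run it against the copy:)

        ddrescue /dev/sda1 /mnt/backup/part.img /mnt/backup/part.log   # raw copy first
        reiserfsck --rebuild-sb   /mnt/backup/part.img
        reiserfsck --rebuild-tree /mnt/backup/part.img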


  • Is it generally better to compress content on the proxy server or the app server?

    - by Dan
    We're using an F5 for load balancing and SSL proxying. Behind it we're serving up Java applications with Tomcat instances. These are fairly small applications - hundreds of concurrent users. I'd like to compress some of the content, and I'm looking for advice on choosing to configure compression on the F5 or on the Tomcat instances. Are there any big factors in the decision, or is it six of one, half a dozen of the other?
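    (For comparison, enabling it on the Tomcat side is a one-connector change in server.xml - a sketch, with illustrative attribute values:)

        <!-- gzip responses above 2 KB for common text types -->
        <Connector port="8080" protocol="HTTP/1.1"
                   compression="on"
                   compressionMinSize="2048"
                   compressableMimeType="text/html,text/css,application/javascript,application/json"/>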

