Search Results

Search found 1098 results on 44 pages for 'don corley'.

Page 7 of 44

  • IIS URL Rewrite working inconsistently?

    - by Don Jones
    I'm having some oddness with the URL rewriting in IIS 7. Here's my Web.config (below). You'll see "imported rule 3," which grabs attempts to access /sitemap.xml and redirects them to /sitemap/index. That rule works great. Right below it is imported rule 4, which grabs attempts to access /wlwmanifest.xml and redirects them to /mwapi/wlwmanifest. That rule does NOT work. (BTW, I do know it's "rewriting" not "redirecting" - that's what I want.) So... why would two identically-configured rules not work the same way? Order makes no difference; Imported Rule 4 doesn't work even if it's in the first position. Thanks for any advice!

    EDIT: Let me represent the rules in .htaccess format so they don't get eaten :)

        RewriteEngine On
        # skip existing files and folders
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        # get special XML files
        RewriteRule ^(.*)sitemap.xml$ /sitemap/index [NC]
        RewriteRule ^(.*)wlwmanifest.xml$ /mwapi/index [NC]
        # send everything to index
        RewriteRule ^.*$ index.php [NC,L]

    The "sitemap" rewrite rule works fine; the 'wlwmanifest' rule returns a "not found." Weird.
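
    Since the Web.config itself got eaten, here is a reconstruction of what the two imported rules would look like in IIS URL Rewrite syntax, inside <system.webServer> (rule names and patterns are assumed from the .htaccess version above):

        <rewrite>
          <rules>
            <!-- Imported Rule 3: reported working -->
            <rule name="Imported Rule 3">
              <match url="^(.*)sitemap\.xml$" ignoreCase="true" />
              <action type="Rewrite" url="/sitemap/index" />
            </rule>
            <!-- Imported Rule 4: reported returning "not found" -->
            <rule name="Imported Rule 4">
              <match url="^(.*)wlwmanifest\.xml$" ignoreCase="true" />
              <action type="Rewrite" url="/mwapi/index" />
            </rule>
          </rules>
        </rewrite>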

    Read the article

  • Can I use dmraid instead of md (mdadm) to make software RAID-1 and RAID-1+0 volumes?

    - by Don MacAskill
    On a related question about SSDs and TRIM (see: Possible to get SSD TRIM (discard) working on ext4 + LVM + software RAID in Linux?), it turns out that dmraid may now (or shortly) support TRIM on RAID-1. Typically, we've used md (via mdadm) to create our RAID-1 volumes, then used LVM to create volume groups, then formatted with the file system of our choice (ext4 lately). We've been doing this for years, and Google & ServerFault searches seem to confirm this is the most common way of doing software RAID with volume management. Google searches seem to suggest that dmraid is used for so-called 'fakeRAID' configurations where there's some level of hardware 'help' in the form of RAID BIOS in the controller, which we don't have (and don't want to use - we'd like a fully software solution). Since we'd like to use TRIM on our SSDs, and since md doesn't seem to (yet?) support TRIM, I'm wondering if it's possible to use dmraid instead of md to create RAID-1 (and RAID-1+0) volumes in software, with no hardware support (i.e., just plugged into a dumb SATA/SAS bus)?
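
    For reference, a minimal sketch of the md + LVM + ext4 stack described above (device names and sizes are examples):

        # build the RAID-1 mirror
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        # layer LVM on top, then the filesystem
        pvcreate /dev/md0
        vgcreate vg0 /dev/md0
        lvcreate -n data -L 100G vg0
        mkfs.ext4 /dev/vg0/data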

    Read the article

  • How do I keep multiple copies of Outlook in sync when using RPC over HTTP?

    - by Don
    I use Outlook 2007 at work with our Exchange 2003 server. I just set up my home system with Outlook 2007 so that I could use RPC over HTTP to access Exchange without having to use a VPN. It works fine. I can get mail, send mail, etc. What it doesn't seem to be doing is staying in sync. For example, I read a few messages at home, moved them into different folders from the Inbox, etc. That all seemed fine. When I log in to my work machine and look at the copy of Outlook there, the mail is still unread and nothing has been moved. Am I missing something simple here? I would have to assume that my home machine should be telling Exchange where these messages belong and that they've been read. Both machines are running Windows 7, if that matters. Ideas?

    Read the article

  • command prompt DIR with wildcard returns unexpected results

    - by Don Dickinson
    I am running 2003 server (latest service pack). When I type dir 2010* (or dir 2010*.*) on the command line, I receive this as the result:

        02/01/2011  02:34 PM             2,460 2011-02-01-14-34-23-807.mdn
        02/02/2011  08:59 AM             3,757 2011-02-02-08-59-32-604.req
        02/01/2011  09:16 AM               235 2011-02-01-09-16-35-104.dat
        02/02/2011  05:06 PM               460 2011-02-02-17-06-05-166.log
        02/01/2011  03:31 PM            66,570 2011-02-01-15-31-27-838.dat
        02/01/2011  03:16 PM               145 2011-02-01-15-16-51-135.log
        02/01/2011  08:52 PM         1,608,916 2011-02-01-20-52-57-416.req
                       7 File(s)      1,682,543 bytes
                       0 Dir(s)  42,891,452,416 bytes free

    Can anyone tell me why? I was expecting to see a list of only files that begin with "2010". There are no such files in the directory, so I wasn't expecting to see anything. I must either misunderstand how DIR handles wildcards or I'm doing something stupid.
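
    One thing worth ruling out (an assumption, not confirmed in the thread): DIR's wildcards are matched against the auto-generated 8.3 short names as well as the long names, and a long name like 2011-02-01-14-34-23-807.mdn can end up with a short alias that happens to begin with 2010. The /X switch shows the short names:

        rem wildcards also match the auto-generated 8.3 short names; /X displays them
        dir /x 2010*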

    Read the article

  • RD Gateway reporting features/capabilities

    - by Don
    We have just implemented RD Gateway for our own department in preparation for a push to the whole agency for telecommuting. It is all set up and working great, but I was trying to figure out how best to go about monitoring/reporting of users. I see third-party software out there that will do it, but is there anything built-in, or via PowerShell/scripting, that would give me a report of the daily activity of users? Something that says, "User A connected at this time, was on for this long, sent/received this much data" - basically some of the same stuff you can see in Event Viewer. Ideally I'd like to have this set up so that once a day it emails me the daily usage; then, when a supervisor asks whether their person is actually working (or at least online, sending and receiving x amount of data), I'll have some metrics to give them. I realize that actual work output is more of a managerial issue, but I would like to be able to offer as much as I can from my end when asked. Thoughts? Thanks!
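
    One scriptable source is the gateway's operational event log, which records connects and disconnects including bytes sent/received. A sketch (the log name is the standard one on Server 2008+, but the 302/303 connect/disconnect event IDs here are assumptions - verify them in Event Viewer first):

        # pull recent RD Gateway connect/disconnect events (IDs 302/303 assumed)
        Get-WinEvent -LogName 'Microsoft-Windows-TerminalServices-Gateway/Operational' -MaxEvents 500 |
            Where-Object { $_.Id -eq 302 -or $_.Id -eq 303 } |
            Select-Object TimeCreated, Id, Message |
            Format-Table -AutoSize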

    Read the article

  • How to import a text file into PowerShell and email it, formatted as HTML

    - by Don
    I'm trying to get a list of all Exchange accounts, format them in descending order from largest mailbox, and put that data into an email in HTML format to email to myself. So far I can get the data, push it to a text file, as well as create an email and send it to myself. I just can't seem to get it all put together. I've been trying to use ConvertTo-Html but it just seems to return data via email like "pageFooterEntry" and "Microsoft.PowerShell.Commands.Internal.Format.AutosizeInfo" versus the actual data. I can get it to send me the right data if I don't tell it to ConvertTo-Html, just have it pipe the data to a text file and pull from it, but it's all run together with no formatting. I don't need to save the file, I'd just like to run the command, get the data, put it in HTML and mail it to myself. Here's what I have currently:

        # Connects to database and returns information on all users, organized by Total Item Size, User
        $body = Get-MailboxStatistics -Database "Mailbox Database 0846468905" |
            where {$_.ObjectClass -eq "Mailbox"} |
            Sort-Object TotalItemSize -Descending |
            ft @{label="User";expression={$_.DisplayName}},@{label="Total Size (MB)";expression={$_.TotalItemSize.Value.ToMB()}} -auto |
            ConvertTo-Html

        # Pause for 5 seconds for Exchange
        write-host -foregroundcolor Green "Pausing for 5 seconds for Exchange"
        Start-Sleep -s 5

        $toemail = "[email protected]"      # emails report to this address
        $fromemail = "[email protected]"    # emails from this address
        $server = "Exchange.company.com"    # Exchange server - SMTP

        # Email the report
        $email = New-Object System.Net.Mail.MailMessage
        $email.IsBodyHtml = $True
        $email.To.Add($toemail)
        $email.From = $fromemail
        $email.Subject = "Exchange Mailbox Sizes"
        $email.Body = $body
        $client = New-Object System.Net.Mail.SmtpClient $server
        $client.UseDefaultCredentials = $true
        $client.Send($email)

    Any thoughts would be helpful, thanks!
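
    For what it's worth, the "AutosizeInfo" objects described are characteristic of piping Format-Table (ft) into ConvertTo-Html: ConvertTo-Html needs the raw objects, not formatting records. A sketch of the usual correction (swap ft for Select-Object and join the HTML into one string; everything else unchanged):

        $body = Get-MailboxStatistics -Database "Mailbox Database 0846468905" |
            where {$_.ObjectClass -eq "Mailbox"} |
            Sort-Object TotalItemSize -Descending |
            Select-Object @{label="User";expression={$_.DisplayName}},
                          @{label="Total Size (MB)";expression={$_.TotalItemSize.Value.ToMB()}} |
            ConvertTo-Html | Out-String   # MailMessage.Body wants a single string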

    Read the article

  • How long does badblocks take on a 1TB drive?

    - by Steven Don
    I'm running badblocks (or rather "e2fsck -c") on a 1TB drive, and if the progress indicator is any indication (no pun intended), it's going to take almost forever to complete. Right now it says 0.01% done, 30:20 elapsed, which would mean the thing would take 17 weeks or so to complete, which seems rather excessive in my book. Is that a normal amount of time for such a check to take, or is it simply that my suspicions are correct and the drive is failing, thus causing the check to take only slightly shorter than eternity? I found this question here, but that pertains to the number of passes done.

    Read the article

  • When can an FTP server close its passive connections?

    - by Don Kirkby
    Does the FTP protocol allow the server to close any of its passive connections while the client is still connected? Can it tell when the client is finished receiving and then close the connection? I'm including an FTP server in my application using the pyftpdlib Python project. I've got it to work in active and passive mode, but I'm a bit concerned about when it closes its passive connections. I've tried connecting to it with both FileZilla and the default ftp command in Ubuntu, and in both cases, I get a new passive port for every request. That is, if I sit in the root folder and type ls 10 times, I use up 10 ports. This means that I have to allocate a big block of passive ports for the FTP server to use so it won't run out. As soon as the client disconnects, the server releases all the passive connections associated with that client and those ports can be reused. However, a long-running connection could use up a lot of ports.
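
    As a stopgap for the port-allocation concern, pyftpdlib does let you pin the passive pool to a fixed block, so at least the block's size is under your control. A minimal sketch (module layout per current pyftpdlib releases; the addresses, port range, and anonymous share are examples):

        from pyftpdlib.authorizers import DummyAuthorizer
        from pyftpdlib.handlers import FTPHandler
        from pyftpdlib.servers import FTPServer

        authorizer = DummyAuthorizer()
        authorizer.add_anonymous('.')                  # example: anonymous read-only share

        handler = FTPHandler
        handler.authorizer = authorizer
        handler.passive_ports = range(60000, 60100)    # pool handed out in PASV replies

        server = FTPServer(('0.0.0.0', 2121), handler)
        server.serve_forever()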

    Read the article

  • What is the cleanest way to upgrade Fedora and also my individual installs while keeping /home?

    - by Don
    I am a professional programmer, using Fedora 10 (and a host of other packages individually installed). I use my system to telecommute. Every year or so, I go through the ritual dance, usually with a second computer and a KVM switch as I don't have office space for two monitors, to build the next version of Fedora and install all my favorite apps. Is there a better way? At least a nice way to keep track of what I need to 'add on' so that I don't have to manually install my app collection? Also, I keep /home on a separate raid-ed drive set so I can also fall prey to 'old-config-file-itis'.
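
    On the "keeping track" half, a low-tech sketch using the stock rpm/yum tooling (the file name is arbitrary, and packages renamed between releases will still need hand-editing):

        # before the upgrade: snapshot everything installed
        rpm -qa --qf '%{NAME}\n' | sort > packages.txt
        # after the fresh install: pull the collection back in
        yum -y install $(cat packages.txt)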

    Read the article

  • Untar with date filter

    - by Don
    Is there any way to untar and extract only those files that are newer than a certain date, keeping the directory structure? I restored a backup on a play server, but it was a few days old. However, I have a tar archive of the entire structure that is more up to date and healthy, so now I want to extract all files (including directory structure) based on a date filter on the files, if possible.
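
    For what it's worth, GNU tar's -N/--newer filter applies when creating an archive, not when extracting, so one workaround is a few lines of Python's tarfile module - a sketch (archive name and cutoff date are examples):

        import tarfile, time

        cutoff = time.mktime(time.strptime('2010-05-01', '%Y-%m-%d'))

        tar = tarfile.open('backup.tar')        # also handles .tar.gz / .tar.bz2
        newer = [m for m in tar.getmembers() if m.mtime >= cutoff]
        tar.extractall(members=newer)           # paths inside the archive are preserved
        tar.close()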

    Read the article

  • How would I recognize the "spoon-feeding problem" on a dynamic webapp server?

    - by Don Spaulding
    The "spoon-feeding problem", as it was recently explained to me, happens when connections to your application server are tied up feeding data across slow network connections to your clients. This makes sense to me and now I understand the importance of putting a highly-concurrent proxy in front of my app servers. My question is, how did the first person to recognize this problem figure it out? What *nix tools and troubleshooting techniques would help me to recognize this problem if I hadn't had it explained to me?

    Read the article

  • Why is this MySQL FULLTEXT query returning 0 rows when matching rows are present?

    - by Don MacAskill
    I have a MySQL table with 200M rows which has a FULLTEXT index on two columns (Title, Body). When I do a simple FULLTEXT query in the default NATURAL LANGUAGE mode for some popular results (they'd return 2M+ rows), I'm getting zero rows back:

        SELECT COUNT(*) FROM itemsearch WHERE MATCH (Title, Body) AGAINST ('fubar');

    But when I do a FULLTEXT query in BOOLEAN mode, I can see the rows in question do exist (I get 2M+ back, depending):

        SELECT COUNT(*) FROM itemsearch WHERE MATCH (Title, Body) AGAINST ('+fubar' IN BOOLEAN MODE);

    I have some queries which return ~500K rows which are working fine in either mode, so if it's result size related, it seems to crop up somewhere between 500K and a little north of 2M. I've tried playing with the various buffer size variables, to no avail. It's clearly not the 50% threshold, since we're not getting 100M rows back for any result. Any ideas?

    Read the article

  • Ubuntu 9.10 Download Speed Very Slow

    - by Don
    I'm running Ubuntu 9.10 desktop and I'm new to the Linux world, so bear with me. I'm on a corporate network of 3 T1s shared across 50-60 users. I typically get about 300 KB/sec for downloads, but for whatever reason, the Linux box will start out in that range, then drop to less than 1 KB/sec sometimes. Doesn't seem to matter where I'm downloading from. Right now I'm trying to get Eclipse for PHP and it's running at 3-6 KB/sec. Getting the updates for the system will also drop to very slow rates. Our IT person has set up the machine to get the same 10.0.0.x address when it starts, and moved this IP to bypass our proxy/firewall going out, so that shouldn't be the issue. Can anyone recommend something I can try to better diagnose the problem? Again, I'm new to the Linux world and the hardware/OS setup side in general (coming from more of a coding background). Thanks for any advice.
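
    A couple of first-pass checks that fit this symptom (suggestions only; the interface name and gateway address are assumptions):

        # speed/duplex mismatch is a classic cause of transfers that start fast and stall
        sudo ethtool eth0 | grep -E 'Speed|Duplex'
        # sustained packet loss to the gateway points at the LAN rather than the proxy
        ping -c 20 10.0.0.1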

    Read the article

  • Ubuntu Studio 10.4 boots to terminal mode only

    - by Don
    I did a clean install from an ISO DVD. It boots only to a command line asking for login. After I log in and enter startx, the desktop appears and some of the stuff actually works, but there are a lot of problems:

      • there is no way to reboot or shut down except to stop X/log off and enter shutdown
      • the sound system doesn't respond
      • PulseAudio volume control gives "connection failed, connection refused"
      • DVD drives are not available
      • the GDebi package installer is grayed out so I can't use it (but Synaptic package manager works OK)
      • Software Center won't start when clicked - it just stops trying
      • D-Bus can't run because it says /usr/local/var/run/dbus/system_bus_socket file not found (also /var/run/dbus/system_bus_socket file not found)

    There's just a lot of things wrong, and I can't help but think something is missing or was mis-coded in the distro, so that there are typos in some script somewhere. If anyone can tell me where to begin to untangle this mess I'd appreciate it, but I think it all begins with the fact that it can't start the GUI automatically and I need to enter startx.

    Read the article

  • nginx not returning 304 on cached content

    - by Don H
    I'm using nginx as a reverse proxy with an Apache back-end handling some PHP files. The files return the right expiry headers and proxy_cache does a good job of caching them, but I've noticed that the cached content returns a 200 on every refresh, when it might be more efficient to return a 304 on the cached files. The files in question are generated by PHP. The urls do not have .php in them as they've been prettified. Any idea why nginx might not be returning 304 on repeated visits to a cached PHP output? To clarify: It's using proxy_cache for caching dynamic PHP pages (not static html pages generated by PHP). I'm setting expires headers in the PHP file of time + 24 hours. With that in mind, I was hoping nginx would be able to then return 304s on its cached versions during that 24 hour window.
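
    One common explanation (an assumption - the thread doesn't confirm it): a 304 needs a validator to compare against, so if the PHP pages emit only an Expires header and no Last-Modified or ETag, nginx has nothing to match an If-Modified-Since request against and answers 200 from cache every time. A sketch of the relevant pieces (upstream and cache-zone names are examples; the zone itself would be defined elsewhere with proxy_cache_path):

        location / {
            proxy_pass        http://apache_backend;
            proxy_cache       app_cache;
            proxy_cache_valid 200 24h;
        }
        # and in the PHP, alongside the Expires header (assumed addition):
        # header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $mtime) . ' GMT');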

    Read the article

  • Set up Ubuntu in VirtualBox to have a static IP

    - by Don H
    I frequently work in different locations, and need to have a VirtualBox version of Ubuntu Server running locally. While I was at home getting it set up, I was able to ssh into the server using the locally allocated IP address. However, now that I'm elsewhere, ifconfig is still showing the old 10.0.x.x IP address, but instead of being in the 10.0.x.x space, my laptop's IP starts with 192.168.x.x. With that in mind, is there a straightforward way to set up the VirtualBox Ubuntu server in such a way that I can just connect using "ssh servername" regardless of its IP address?
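
    One approach that survives changing locations (a sketch; the VM name, adapter number, and addresses are examples): give the VM a second, host-only adapter with a fixed address - that network exists only between the laptop and the VM, so it never changes with the outside LAN - and map the name in /etc/hosts:

        # on the host: create the host-only network and attach it as a second NIC
        VBoxManage hostonlyif create                 # typically creates vboxnet0
        VBoxManage modifyvm "ubuntu-server" --nic2 hostonly --hostonlyadapter2 vboxnet0
        # inside the guest: give eth1 a static address such as 192.168.56.10, then on the host:
        echo "192.168.56.10  servername" | sudo tee -a /etc/hosts
        # after which "ssh servername" works anywhere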

    Read the article

  • Upgrading to Java 7u65 breaks my Deployment Rule Set for Oracle applications

    - by Don Atreides
    My company uses an older version of an Oracle application that requires Java 6u45. Naturally we want to be secure, so we use a Deployment Rule Set to specify 6u45 for that internal application and let other applications use 7u60. Now that we're ready to upgrade the Java 7 half to 7u67, the Oracle application breaks with "Deployment Rule Set required version 1.6.0_45 not available." Of course it is available, it just can't find it for some reason. As a test, I specified that JavaTester.org should use 6u45 also, and it works fine with no issues. But when I try to use the same configuration (7u67 and 6u45) against the Oracle application, it fails every time. If I downgrade to 7u60, it works. 7u65 or higher, it breaks. The Oracle application hasn't changed, so it must be something different in how 7u65+ handles Deployment Rule Sets or pathing or something. I'm at a complete loss. ruleset.xml:

        <?xml version="1.0"?>
        <ruleset version="1.0+">
          <rule>
            <id location="*.mycorp.com" />
            <action version="1.6.0_45" permission="run" />
          </rule>
          <rule>
            <id location="http://javatester.org" />
            <action version="1.6.0_45" permission="run" />
          </rule>
        </ruleset>

    Read the article

  • Network tries to re-identify itself now and then

    - by Don Young
    When the computer starts up it connects itself to the router (ZTE 4G router) automatically, but after I have surfed the web for a while, it tries to identify itself again, meaning you get that little blue circle next to the little screen icon in the corner. When browsing it does not cause any problems, but if I'm streaming a video, the video will stop and I'll have to refresh the page to make it start again. And if I'm playing a game, LoL for example, the game will freeze for about 2 seconds and then continue again (due to the lost connection to the internet). I have no virus on my computer, although I did before. I have reset my router, restarted my computer, updated my ethernet driver, checked that IPv4 is set to automatic, and tried different router channels. I have had this router for a few months and the problem just started recently. Here is the ipconfig screen:

    Read the article

  • How to play multiple videos side-by-side synchronized?

    - by Don Salva
    I've got 3 videos, all with the same running time and the same number of frames; they differ only in encoding quality. Now I need them to run side-by-side in synchronized fashion for evaluation purposes, meaning that when I press "play", ALL the videos should start, and analogously for stop, forward, and backward. Does anyone know a player capable of doing that - playing more than one video side-by-side in sync? Platform: Win7

    Read the article

  • How to select a user and remove all groups they are a member of using PowerShell (with Quest)?

    - by Don
    I've read quite a bit online about this and thought I had found a solution, but it doesn't seem to be working like I would expect. I want to get a user based on the username I input, then remove all groups that it is a member of. Basically the same thing as going into ADUC, selecting the user, selecting the Member Of tab, highlighting everything (except Domain Users, of course) and selecting remove. Here's the command I'm trying to use:

        Get-QADUser -Name $username | Remove-QADMemberOf -RemoveAll

    Others have said online that it works for them, but so far it hasn't for me. It doesn't give an error, it accepts the command just fine, but when I look in ADUC, the groups are still there for the user. Any suggestions as to what I may be doing wrong? Executing from Windows 7 with domain admin rights, Exchange cmdlets and Quest snap-in loaded. Thanks!
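
    If -RemoveAll keeps silently doing nothing, a loop over the memberships is a more verbose but checkable alternative - a sketch using the same Quest snap-in (cmdlet names are the documented ones; the Domain Users guard is there because a primary group can't be removed this way):

        $user = Get-QADUser -Identity $username
        foreach ($groupDN in $user.MemberOf) {
            $group = Get-QADGroup -Identity $groupDN
            if ($group.Name -ne 'Domain Users') {
                Remove-QADGroupMember -Identity $group -Member $user
            }
        }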

    Read the article

  • VLAN across a router to give wireless access to remote sites?

    - by Don
    I've been looking online for this answer, but getting conflicting information. I was under the impression that you couldn't use a VLAN across a router, but maybe it's possible (according to some documentation I see online)? I was hoping someone could clear it up for me. Here's what I'm working with: We have a remote site with a handful of users. We recently gave them an access point (Cisco 1142n) for internal wireless. It's plugged into a switch and working fine (getting IPs from the same DHCP scope as the wired users are getting). Private wireless is set on VL50. At the home office we have private wireless for our internal network working and on VL50, with a test VLAN set up for VL60, which points to our DSL line for the time being. Both private and public wireless work fine internally (not crossing a router). VL50 is named the same at both sites for consistency in naming. If we wanted to give the remote site access to the public wireless (VL60), would that be possible across the routers? For more information, the site is currently connected to the home office via a T1 connection, with Cisco routers on both ends. I didn't think it was possible due to the nature of VLANs being layer 2, but I am far from an expert on this and would appreciate any instruction as to the actual truth of the matter. The end result I'm going for is: how do we get our remote sites access to a public (outside) connection along with their private connection, without actually having a DSL (or similar type line) dropped at their location? Thanks in advance for your thoughts.
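
    For what it's worth, a sketch of the usual shape of the answer (assumed, not from the thread): a VLAN itself stops at the router because it is layer 2, but the public segment can be carried to the remote site as a routed subnet, with the remote switch using a locally defined VLAN 60 trunked to its router. Illustrative IOS fragment (numbers and names are examples):

        ! on the remote-site router, trunk VLAN 60 to the local switch (router-on-a-stick)
        interface FastEthernet0/0.60
         encapsulation dot1Q 60
         ip address 192.168.60.1 255.255.255.0
        ! traffic for the public segment is then routed, not bridged, across the T1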

    Read the article

  • IHRIM's Latest Workforce Solutions Review Focuses on Risk!

    - by Jay Richey, HCM Product Marketing
    IHRIM's latest edition of the Workforce Solutions Review magazine (in print and online) has some really compelling features and articles focused on HCM risk and compliance management. Check out this line-up, and sign up if you aren't already a member - it's well worth it: http://www.ihrimpublications.com/WSR_about.php

      • Three to Watch: HR's Growing Compliance Responsibilities for Data Security, Genetic Nondiscrimination, and Anti-Bribery Laws, by W. Scott Blackmer and Richard Santalesa, InfoLawGroup, LLP
      • Global HR and International Background Check Best Practices, by Terry Corley, Aletheia Consulting Group
      • Compliance: Old Wine in New Wineskins?, by Ursula Christina Fellberg, Ph.D., UCF-StrategieBeraterin

    Join the HR/HR technology professionals who have subscribed for so many years to IHRIM's publications and become a reader today by visiting http://www.ihrimpublications.com/amember/signup.php.

    Read the article

  • Using DEBUG Mode in Oracle SQL Developer to Log SQL

    - by thatjeffsmith
    Curious how we’re getting the data you see in SQL Developer when you click on something? While many of the dialogs provide a ‘SQL’ panel that shows you the SQL ABOUT to be generated, I’d rather see the SQL AS it’s executed. True, you could set a TRACE or fire up a Monitor Sessions report, but both of those solutions leave me hungry for more. Did you know that SQL Developer has a ‘debug’ mode? It slows the tool down a bit and spits out a lot of information you don’t care about, but it ALSO shows you ALL the SQL that is sent to the database, as you click around the tool! See ALL the SQL that SQL Developer sends to the database on your behalf.

    Enable DEBUG Mode

    When you see the splash screen as SQL Developer fires up, frantically hit Up, Up, Down, Down, Left, Right, Left, Right, B, A, SELECT, Start. Wait, wrong game. No, all you need to do is go to your SQL Developer directory and navigate down to the ‘bin’ directory:

        Install Directory > sqldeveloper > bin > sqldeveloper.conf

    Open it with a text editor. Find this line:

        IncludeConfFile sqldeveloper-nondebug.conf

    And replace it with this line:

        IncludeConfFile sqldeveloper-debug.conf

    Save the file. Start up SQL Developer.

    Observe the Logging Page – Log Panel for the SQL

    There’s going to be more than just SQL here. You’ll actually see a LOT of other information. If you’re having general problems with the tool and you want to see the nitty-gritty of what’s going on, then this is a good place to satisfy your curiosity and might help us diagnose your issue if you post to the forums or open a ticket with My Oracle Support. You’ll find ‘INFO’ entries that look a little something like this (the screenshot shows the query used to populate your Tables list in the connection tree). You can double-click on the SQL text and get a pop-up window that’s much easier to read. See all that typing we’re saving you? I don’t recommend running in DEBUG mode all the time. Capturing this information and displaying it is more expensive than not doing so. And it provides a lot of information you don’t normally need to see. But when you DO want to know what’s going on and why, this is an excellent way of getting that information. When you’re ready to go back to ‘normal’ mode, just close SQL Developer, go back to your .conf file, and add the ‘nondebug’ bit back.

    Read the article
