Search Results

Search found 5176 results on 208 pages for 'max fraser'.

Page 40 of 208

  • Rewriting Apache URLs to use only paths and set response headers

    - by jabley
    I have Apache httpd in front of an application running in Tomcat. The application exposes URLs of the form:

        /path/to/images?id={an-image-id}

    The entities returned by such URLs are images (even though URIs are opaque, I find human-friendly ones are easier to work with!). The application does not set caching directives on the image response, so I've added that via Apache:

        # LocationMatch to set caching directives on image responses
        <LocationMatch "^/path/to/images$">
            # Can't have Set-Cookie on the response, otherwise the downstream
            # caching proxy won't cache!
            Header unset Set-Cookie
            # Mark the response as cacheable.
            Header append Cache-Control "max-age=8640000"
        </LocationMatch>

    Note that I can't use ExpiresByType, since not all images served by the app have versioned URIs. I know that the ones served by the /path/to/images resource handler are versioned URIs, though, which don't perform any sort of content negotiation and thus are ripe for Far Future Expires management. This is working well for us.

    Now a requirement has come up to put something else in front of the app (in this case, Amazon CloudFront) to further distribute and cache some of the content. Amazon CloudFront will not pass query string parameters through to my origin server. I thought I would be able to work around this by changing my Apache config appropriately:

        # Rewrite to map new Amazon CloudFront friendly URIs to the application resources
        RewriteRule ^/new/path/to/images/([0-9]+) /path/to/images?id=$1 [PT]

        # LocationMatch to set caching directives on image responses
        <LocationMatch "^/path/to/images$">
            # Can't have Set-Cookie on the response, otherwise the downstream
            # caching proxy won't cache!
            Header unset Set-Cookie
            # Mark the response as cacheable.
            Header append Cache-Control "max-age=8640000"
        </LocationMatch>

    This works fine in terms of serving the content, but there are no longer caching directives on the response. I've tried playing around with [PT] and [P] for the RewriteRule, and adding a new LocationMatch directive:

        # Rewrite to map new Amazon CloudFront friendly URIs to the application resources
        # /new/path/to/images/12345 -> /path/to/images?id=12345
        RewriteRule ^/new/path/to/images/([0-9]+) /path/to/images?id=$1 [PT]

        # LocationMatch to set caching directives on image responses
        <LocationMatch "^/path/to/images$">
            # Can't have Set-Cookie on the response, otherwise the downstream
            # caching proxy won't cache!
            Header unset Set-Cookie
            # Mark the response as cacheable.
            Header append Cache-Control "max-age=8640000"
        </LocationMatch>

        <LocationMatch "^/new/path/to/images/">
            # Can't have Set-Cookie on the response, otherwise the downstream
            # caching proxy won't cache!
            Header unset Set-Cookie
            # Mark the response as cacheable.
            Header append Cache-Control "max-age=8640000"
        </LocationMatch>

    Unfortunately, I'm still unable to get the Cache-Control header added to the response with the new URL format. Please point out what I'm missing to get /new/path/to/images/12345 returning a 200 response with a Cache-Control: max-age=8640000 header. Pointers on how to debug Apache for something like this would be appreciated as well!
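
    One approach to try (a sketch only, not from the original question): have the RewriteRule tag matching requests with an environment variable and make the Header directives conditional on it, so the caching headers travel with the rewritten request no matter which Location ends up handling it. The variable name IMG_CACHE is arbitrary.

        # Hypothetical variant: mark rewritten image requests with an env var...
        RewriteRule ^/new/path/to/images/([0-9]+) /path/to/images?id=$1 [PT,E=IMG_CACHE:1]

        # ...and apply the headers only when that variable is present.
        Header unset  Set-Cookie                      env=IMG_CACHE
        Header append Cache-Control "max-age=8640000" env=IMG_CACHE

    One caveat: after an internal redirect Apache renames such variables with a REDIRECT_ prefix, so env=REDIRECT_IMG_CACHE may be the condition that actually matches. For debugging, mod_rewrite's RewriteLog/RewriteLogLevel directives (Apache 2.2) show how each request is being rewritten.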

    Read the article

  • Windows service running as network service - how does it authenticate? Breaking change in W2K8?

    - by Max
    A Windows service running as "Network Service" talks to services on other machines (here: SQL Server and Analysis Services) using Windows authentication. For authentication, we have to grant permissions to the machine account of the service. E.g. if the service runs on server MYSERVER in domain MYDOMAIN, it'll authenticate itself as "MYDOMAIN\MYSERVER$". Am I correct so far?

    Now here's my question: does this still apply when talking to a service on the SAME machine? Or will it authenticate with something like "NT AUTHORITY\Network Service" instead when connecting to a local service? And: is there any chance this is a breaking change from Windows 2003 to Windows 2008?

    We're having an actual issue in our system where the account was able to connect to local services with only the machine account having permissions in W2K3. In W2K8, this doesn't seem to work anymore: authentication to local services now fails, but still works to remote machines.
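
    As a side check (not from the original question), one way to see exactly which identity and authentication scheme a connection presents to SQL Server is to run the following from within the service's own connection; SUSER_SNAME() and sys.dm_exec_connections are standard SQL Server objects:

        -- Shows the login this session authenticated as, and whether the
        -- connection used NTLM or Kerberos.
        SELECT SUSER_SNAME()  AS login_name,
               c.auth_scheme  AS auth_scheme
        FROM   sys.dm_exec_connections AS c
        WHERE  c.session_id = @@SPID;

    Comparing the output for a local connection against a remote one would show directly whether the local case presents NT AUTHORITY\NETWORK SERVICE instead of MYDOMAIN\MYSERVER$.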

    Read the article

  • How to retrieve all MySQL settings?

    - by Max Kielland
    I have a configured MySQL server (MySQL 5.1.47-community) that works perfectly. I installed a second server (MySQL 5.5.15-community) to see if the new version of MySQL would work with my application before upgrading. When I run the application against the new server it behaves differently; when I run it against the old server (MySQL 5.1.47-community) everything works perfectly. I remember that I set some parameters through the MySQL prompt to accept larger result sets and some other stuff, but now I can't remember what I did. So my question is: is there a way to transfer all the MySQL settings from one server to another? Thanks.
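
    Not from the original post, but one low-tech way to compare the two servers is to dump the runtime settings from each and diff them (host names and credentials below are placeholders):

        mysql -h old-server -u root -p -e "SHOW GLOBAL VARIABLES" > vars-5.1.txt
        mysql -h new-server -u root -p -e "SHOW GLOBAL VARIABLES" > vars-5.5.txt
        diff vars-5.1.txt vars-5.5.txt

    Keep in mind that anything changed at the prompt with SET GLOBAL is lost on restart unless it is also written into my.cnf / my.ini, which may be why the old values are hard to track down now.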

    Read the article

  • Is this a hacker or normal Apache logs?

    - by Max
    Hey, I just checked my Apache logs and stumbled upon this entry: "Client denied by server configuration". What I found weird are the different phpMyAdmin versions. The IP is in the Czech Republic: http://whois.domaintools.com/188.120.221.206 Am I just being overcautious?

    Read the article

  • Why won't FreeBSD reboot after a kernel crash?

    - by Max Kosyakov
    Once in a while, my server running FreeBSD 8.0 amd64 fails due to bad memory modules (incompatible with the motherboard). Each time it happens, the box stalls with the last message saying that it will automatically reboot in 15 seconds, but it never does. How do I fix this? I need the computer to reboot after a kernel crash, unattended. (Please do not recommend replacing the memory; as soon as I get the new modules I will, but I need a quicker solution that will not require me to stand next to the box just to press the reset button each time it crashes.)
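
    A sketch of settings sometimes used for unattended recovery; treat the names as assumptions to verify against sysctl -a on the box rather than a known fix for this exact hang:

        # /etc/sysctl.conf
        debug.debugger_on_panic=0   # don't drop into the DDB kernel debugger on panic
        kern.sync_on_panic=0        # skip the filesystem sync that can stall a panicking box

    If the "automatic reboot" countdown prints but the machine never resets, the stall may also be in the crash-dump write or in hardware that ignores the soft reset; in that case a board-level watchdog driven by watchdogd(8) is the usual unattended fallback.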

    Read the article

  • How can I get Windows 7 to work with two Nvidia graphics cards with different drivers?

    - by Max
    This is similar to this question, but I am using more similar cards with Windows 7. I just purchased a Zotac Nvidia GeForce 7200 GS. I have a motherboard with two PCI Express x16 slots. There is already an MSI Nvidia GeForce 8800 GTS being used as the primary card, driving two LCD monitors. I would like the Zotac to output to a TV via DVI-out.

    Unfortunately, when Windows detects the Zotac and installs its drivers, or I manually install them, Windows stops being able to boot up. If I remove them and re-install the MSI 8800 drivers, I can boot again, but Windows can no longer see the Zotac 7200--it shows up as a yellow triangle in Device Manager.

    I've read conflicting reports about this. Some people claim that Windows 7 will support multiple heterogeneous graphics card drivers, as long as they are all using the same driver API ("WDDM?"). Others say that they have to be using the exact same driver, or it won't work. Others claim that you have to use the exact same card. Which is it, exactly?

    I know I can run the MSI 8800 in SLI if I purchase another, but I don't need that kind of power--I just need HD-out to my television. I read somewhere that running two cards in SLI precludes you from using 100% of their output ports, so I'm not sure if that's an option. I suppose I could also run two MSI 8800s without SLI, but again, that's more power than I need (and more money than I'd like to spend). Also, I don't think this exact model is even manufactured anymore. Any ideas?

    Read the article

  • Differential backup missing moved folders (flawed archive attribute logic)

    - by Max
    Recently I've discovered that my backup system is flawed: there are situations where various files/folders are missed. I back up from a local disk to a network NAS. I use Cobian Backup, and I have set up the backup software to create one full backup every week and one differential backup every day.

    Now, the backup software (to my knowledge, any backup software works this way) decides which files go into the differential backup by looking at the file's archive attribute. If the attribute is set, the file goes into the backup. When you move a file to a new location on Windows systems, the archive attribute gets set and the file is included in the backup, and that's fine... but when you move an entire folder, no archive attribute is set, neither on the folder nor on any files inside the folder, so the moved folder isn't included in the differential backup!

    So, if you have a full backup plus a differential backup and you moved folders around, it's impossible to reconstruct the original file/folder structure starting from the full+differential backup, because the backup software didn't include the moved folders in the differential backup. So my differential backups are useless...

    Why does Windows set the archive attribute when moving a file, but not when moving a folder? How can I deal with this issue? Is there a way to create a differential backup that works as it's supposed to? Doing a full backup every day is not practical, because the changed data is about 0.1% per day (by using a differential backup I can keep 4 weeks of file history without using too much disk space).
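
    As a stopgap (not part of the original question), the archive bit can be re-set by hand on a folder's contents right after moving it, so that the next differential picks everything up; the path is only an example:

        :: Flag every file and subfolder under the moved folder as changed
        :: so the next differential backup includes it.
        attrib +A "D:\Data\MovedFolder\*" /S /D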

    Read the article

  • Duplicate data from another sheet in Excel

    - by Max
    I have a rather large Excel document with a lot of separate sheets in it. There is some info (email, last name, first name) that has to be the first three columns on each sheet. In order to be sure that no mistakes are made, I created a "Person" sheet that only contains those three columns. On the other sheets, I want to get the info from that Person sheet. I can get the email column in several ways (right now, I have =Person[Email] in that column), and then I use that to get the last name and first name. So, there isn't a problem getting the data into those other sheets; but now, I want to sort by last name or first name (this is all in a table). What happens is that if I sort by Name, then you can see a flash where it re-orders the entire table, but then the =Person[Email] gets run again and the first column resets back to the order that is in the Person sheet. So this is even worse--not only can't I sort properly, but now the entire table is messed up because all of the data is in name ascending order except for the email addresses which are in the default order. Is there a way to get the email column to replicate in all other sheets, but then stop updating so I can sort/etc? Thanks in advance
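
    For reference (the name-column formulas here are assumptions, not taken from the workbook), the other sheets presumably use a structured reference for the email plus lookups along these lines for the names:

        =Person[Email]
        =INDEX(Person[Last name], MATCH([@Email], Person[Email], 0))
        =INDEX(Person[First name], MATCH([@Email], Person[Email], 0))

    Formulas like these always re-evaluate against the Person sheet, which is why the sort snaps back; converting the results to static values (copy, then Paste Special > Values) is the usual way to freeze them before sorting.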

    Read the article

  • How do large blobs affect SQL delete performance, and how can I mitigate the impact?

    - by Max Pollack
    I'm currently experiencing a strange issue that my understanding of SQL Server doesn't quite mesh with. We use SQL as the file storage for our internal storage service, and our database has about half a million rows in it. Most of the files (86%) are 1 MB or under, but even on fresh copies of our database, where we simply populate the table with data for the purposes of a test, it appears that rows with large amounts of data stored in a BLOB frequently cause timeouts when our SQL Server is under load.

    My understanding of how SQL Server deletes rows is that it's a garbage collection process, i.e. the row is marked as a ghost and is later deleted by the ghost cleanup process after the changes are copied to the transaction log. This suggests to me that, regardless of the size of the data in the blob, row deletion should be close to instantaneous. However, when deleting these rows we are definitely experiencing large numbers of timeouts and astoundingly low performance. In our test data set, it's files over 30 MB that cause the issue. This is an edge case; we don't frequently encounter these, and even though we're looking into SQL filestream as a solution to some of our problems, we're trying to narrow down where these issues are originating from.

    We ARE performing our deletes inside a transaction. We're also performing updates to metadata such as file size stats, but these exist in a separate table away from the file data itself. Hierarchy data is stored in the table that contains the file information. Really, in the end it's not so much what we're doing around the deletes that matters; we just can't find any references to low delete performance on rows that contain a large amount of data in a BLOB. We are trying to determine if this is even an avenue worth exploring, or if it has to be one of our processes around the delete that's causing the issue.

    Are there any situations in which this could occur? Is it common for a database server to come to the point of complete timeouts when many of these deletes are occurring simultaneously? Is there a way to combat this issue if it exists? (Cross-posted from StackOverflow.)
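
    Not from the original post, but one thing that can be checked directly is whether ghost cleanup is keeping up. The DMV below is standard; the table name dbo.Files is an assumption:

        -- How many deleted rows are still sitting as ghost records per index.
        SELECT index_id,
               ghost_record_count,
               version_ghost_record_count
        FROM   sys.dm_db_index_physical_stats(
                   DB_ID(), OBJECT_ID('dbo.Files'), NULL, NULL, 'DETAILED');

    Large and growing counts while the deletes are running would point at cleanup/IO pressure from the LOB pages rather than at the application-side logic around the deletes.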

    Read the article

  • MS Dynamics CRM users disappear

    - by Max Kosyakov
    Recently we came across quite a weird issue. The administrators say that once in a while they notice that user accounts in MS Dynamics CRM are lost. When a new user is added to the system, the administrators add him/her to Active Directory first. Then they go to the Dynamics CRM interface (System Configuration -> Administration -> Users), add the new user to the CRM, add roles to this user, and grant them the relevant permissions. The user is then able to use a custom application, which connects to the Dynamics CRM via WCF.

    After a while (a few weeks or months) the user is unable to use the custom application because Dynamics CRM cannot authorise this user. When the administrators open the Dynamics CRM user management interface (Configuration -> Administration -> Users) and browse through the list of CRM users, they cannot find the user in the list. When they try to add the user back to Dynamics CRM, the CRM fails with the error message "User already exists". Moreover, the user still exists in Active Directory.

    The admins are very sure the user had been added to the CRM before he/she started to work, and the fact that the user was able to use the custom application normally says that the user had indeed been registered in the CRM. How come the user is not listed in the CRM user management interface at all? Has anyone faced any issues like this? Seen or heard of disappearing CRM users somewhere? Any help is appreciated. Where can one start digging?
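
    For an on-premise deployment, a read-only query against the organization database can show whether a "missing" account is merely disabled rather than deleted; the table and column names below are from the standard CRM schema, but treat them as assumptions for your version:

        SELECT FullName, DomainName, IsDisabled
        FROM   dbo.SystemUserBase
        WHERE  DomainName = 'MYDOMAIN\some.user';   -- example account name

    A disabled user drops out of the default (enabled-only) user views yet still blocks re-creation, which would be consistent with the "User already exists" error.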

    Read the article

  • Show PHP error messages on IIS 7

    - by Max
    I am using IIS as a web server on my development machine for PHP web development. Or at least, I am trying to. When there is a syntax error in a PHP script and I open that file in my web browser, I just get a 503 "internal server error" and the default IIS error page for this error. Some browsers don't open the page at all, possibly because of the 503 HTTP response header. I would like IIS to act in this case just like the Apache web server does: serve the output of the PHP file anyway, so that the error message gets printed out. How can this be done?

    EDIT: PHP settings: display_errors is on and error_reporting is set to E_ALL.
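
    One setting to check (a sketch, not from the original question): out of the box, IIS 7 replaces error-status responses with its own error page, which hides whatever PHP wrote to the body. Letting the original response through is configured per site in web.config:

        <!-- Pass the response body PHP produced through instead of
             substituting the IIS error page. -->
        <configuration>
          <system.webServer>
            <httpErrors existingResponse="PassThrough" />
          </system.webServer>
        </configuration>

    A parse error in PHP itself normally surfaces as a 500 (or a 200 with the message when display_errors is on); a 503 tends to indicate the FastCGI process failing, so the FastCGI and application pool health may be worth a look as well.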

    Read the article

  • lighttpd silently stops logging

    - by Max Cantor
    I'm on a Slicehost 256MB VPS with Ubuntu 9.04 (Jaunty). lighttpd is the only web server process running; it listens on port 80. My lighttpd.conf can be found here. I'm using Ubuntu's default logrotate setup for lighty. At seemingly random times, lighttpd will stop logging. It is not correlated with log rotation--that is, the errors do not occur when logrotate kicks in. What happens is, I will verify that the server is serving files by hitting a URL with my browser, and I will verify that it is not logging by checking access.log and seeing that the GET request I just made is not there. Using init.d to restart the process starts logging again, without truncating or rotating the log file. That is, new requests will be logged at the end of the existing access.log file. There are no cron jobs running on this box. Any ideas?
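
    Not from the original post, but a quick check for the next time it happens is whether the running lighttpd still holds an open descriptor on the access log, and whether that descriptor points at the current file or a deleted one:

        # PID of the running lighttpd, then its open handles on the access log
        pgrep lighttpd
        lsof -p "$(pgrep -o lighttpd)" | grep access.log

    If lsof reports the descriptor as pointing at a deleted file, something replaced the log out from under the server without a reload, and the logrotate configuration for lighttpd (its postrotate/reload step) would be the place to look.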

    Read the article

  • Improving performance by using an additional static file server

    - by Max
    Hello there, I'm planning for a large website that includes many static assets (JS, CSS, images and thumbnails) in the generated pages. That website will use TYPO3 as CMS (it is a customer requirement). I guess I could seriously improve performance / page load times by using a two-server setup: one server where the main application (PHP) runs, and another one where the static files sit, served by a trimmed-down version of Apache or something like lighttpd.

    Including e.g. JS or CSS files from the file server is of course no big deal. Just use an absolute URL like http://static.example.com/js/main.js and be done with it. But: that website will have pages with MANY thumbnails of e.g. product images on them. So I see two problems when the main application tries to create a thumbnail of some image: the original image like products/some.jpg is uploaded to the static file server and therefore not on the same server as the PHP application which tries to create the thumbnail. TYPO3 writes created thumbnails to a temp directory which is expected to be on the same server. Therefore, hundreds of thumbnails will be written to and served from that temp directory, which is on the same server as the main application - the static file server is in that case basically useless; all thumbnails will be requested from the server of the main application.

    So, my question is: how to overcome these shortcomings? Is it possible to "symlink" some directories to another server? So, for example, if PHP tries to open the original product image for thumbnail creation with imagecreate("products/some.jpg"), the products folder actually "points" to the products folder on the static image server? I know something like this can be done with .htaccess, but is it possible at the file system level?
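
    Not part of the original question: at the file-system level this is normally solved with a network mount rather than a symlink, so the application server sees the originals under the local path TYPO3 expects. A sketch with placeholder host names and paths:

        # On the application server: mount the static host's products directory locally.
        mount -t nfs static.example.com:/var/www/static/products /var/www/site/products

    sshfs is an alternative where NFS is not available; either way, the generated thumbnails can then be written to a mounted (or periodically rsync'ed) directory so that the static host ends up serving them too.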

    Read the article

  • Unable to connect to remote MS SQL Server 2008 Express SP3 instance by name

    - by Max
    I am trying to connect to a remote MS SQL Server 2008 SP3 x86 instance using its name. At first glance all seems to work well (e.g. it is possible to connect to the server locally, and to successfully telnet its port remotely), but there is a thing I can't understand... This line should connect us to the default instance of the remote SQL Server:

        osql -S ServerIP -d MyDatabase /U sa -P MyPassword

    and it does the trick. However, the next one:

        osql -S ServerIP\MyInstance -d MyDatabase /U sa -P MyPassword

    ends up with the following error:

        [SQL Native Client]SQL Network Interfaces: Error Locating Server/Instance Specified [xFFFFFFFF].
        [SQL Native Client]Login timeout expired
        [SQL Native Client]An error has occurred while establishing a connection to the server.
        When connecting to SQL Server 2005, this failure may be caused by the fact that under
        the default settings SQL Server does not allow remote connections.

    The only instance running on the server is MyInstance, which is (I guess) the default one. Could you please put some time into explaining the issue?
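
    A couple of checks worth trying (not from the original post; the port and service name shown are the defaults): connecting by instance name relies on the SQL Server Browser service answering on UDP 1434, whereas connecting by IP and TCP port goes straight to the engine.

        :: Bypass instance-name resolution by giving the TCP port explicitly
        osql -S ServerIP,1433 -d MyDatabase -U sa -P MyPassword

        :: On the server itself: is the SQL Server Browser service running?
        sc query SQLBrowser

    If the port form works but ServerIP\MyInstance does not, the Browser service or UDP 1434 through the firewall is the usual culprit. Note also that an Express install typically creates a named instance (SQLEXPRESS) rather than the default instance, so "the only instance" is not necessarily the default one.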

    Read the article

  • How can I get a list of created and deleted files on the server?

    - by max
    I have an image-sharing website; users log in and upload images. Last night I lost about 30 consecutive, newly uploaded images... I mean, they have been uploaded... apparently... they are in the database, but the actual images on the server are gone! The error log doesn't show anything, so I thought my best option is to check a list of created and deleted files, if there is such a thing. Is there a log file for created and deleted files on the server? I'm using DirectAdmin.
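
    Not from the original question: a stock Linux server does not log file creations or deletions by default, so there is usually nothing to look back at after the fact. Going forward, something like inotifywait from inotify-tools can watch the upload directory (paths below are placeholders):

        # Append a timestamped line for every create/delete under the images directory.
        inotifywait -m -r -e create -e delete \
            --timefmt '%F %T' --format '%T %e %w%f' \
            /home/user/domains/example.com/public_html/images \
            >> /var/log/image-watch.log &

    auditd rules are the heavier-weight alternative when a tamper-resistant audit trail is needed.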

    Read the article

  • Windows XP freezes completely

    - by Max
    Lately my Windows XP SP3 machine has started causing problems. From time to time it freezes completely, which means that the system does not react to mouse or keyboard input. The keyboard LED indicators also do not react to the Caps Lock, Scroll Lock or Num Lock keys. The problem is that I don't understand what causes this behavior, and it seems to happen randomly. The system event log does not contain any clues either. I'm thinking this could be some driver/hardware problem, but I don't know which. Are there any tools that would help me figure out the cause of this problem? Does anybody have any clue how I can fix this?

    Read the article

  • Backup XAMPP (Htdocs & MySQL)

    - by Max
    I have a development server, and would like to back up everything at least daily to a remote location. I would like to back up the htdocs folder and the MySQL databases, but if possible also the settings of the server and anything else relevant. At the moment I am using Dropbox for the htdocs, but this is not ideal. I have looked into Git, Dropbox and simple daily copy-and-paste. I was wondering what advice anyone has. For example, how hard would it be to set it up as a cloud-based system? Any and all advice is greatly appreciated.
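
    A minimal cron-able sketch of the usual approach (not from the original question; paths assume a Linux XAMPP install under /opt/lampp, and the remote host is a placeholder):

        #!/bin/sh
        # Dump all databases, archive htdocs plus the server config, then push off-site.
        STAMP=$(date +%F)
        /opt/lampp/bin/mysqldump --all-databases -u root -p"$MYSQL_PW" > /backups/mysql-$STAMP.sql
        tar czf /backups/htdocs-$STAMP.tar.gz /opt/lampp/htdocs /opt/lampp/etc
        rsync -av /backups/ backupuser@remote.example.com:/backups/xampp/

    Running this from cron gives the daily cadence; keeping a few dated copies on the remote side covers the history that a plain Dropbox sync does not.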

    Read the article

  • Simplest way to host a domain name just so I can forward it to a WordPress blog?

    - by Max Williams
    Hi all. I have a domain name and a WordPress account. The domain name isn't hosted anywhere at the moment; I just want to set it up so that my WordPress blog uses the domain. According to the WordPress help page on this issue, I have to do the following:

        Update your domain's name servers to the following. Make sure to remove any
        existing name servers that are already there.
        NS1.WORDPRESS.COM
        NS2.WORDPRESS.COM
        NS3.WORDPRESS.COM

    So, this is the only thing that I need a server for - if I even need a server at all? Is there a really simple way I can do this for free? Or for very cheap? I'm a bit ignorant about this stuff: I do Rails coding but don't get involved in the really technical stuff to do with servers, DNS and what have you.

    Read the article

  • Where are my date ranges in Analytics coming from?

    - by Jeffrey McDaniel
    In the P6 Reporting Database there are two main tables to consider when viewing time - W_DAY_D and W_Calendar_FS. W_DAY_D is populated internally during the ETL process and provides a row for every day in the given time range. Each row contains aspects of that day such as calendar year, month, week, quarter, etc., allowing it to be used in the time element when creating requests in Analytics to group data into these time granularities. W_Calendar_FS is used for calculations such as spreads, but is based on the same date range.

    The min and max day_dt (W_DAY_D) and daydate (W_Calendar_FS) are related to the date range defined, which is a start date plus a rolling interval - generally the start date plus 3 years. In P6 Reporting Database 2.0 this date range was defined in the Configuration utility. As of P6 Reporting Database 3.0, with the introduction of the Extended Schema, this date range is set in the P6 web application. The Extended Schema uses this date range to calculate the data for near real-time reporting in P6, and the same date range is validated and used for the P6 Reporting Database.

    The rolling date range means that if today is April 1, 2010 and the rolling interval is set to three years, the min date will be 1/1/2010 and the max date will be 4/1/2013. 1/1/2010 will be the min date because we always back-fill to the beginning of the year. On April 2nd, the Extended Schema services run and the date range is adjusted there to move the max date forward to 4/2/2013. When the ETL process runs, the Reporting Database picks up this change and also adjusts the max date in W_DAY_D and W_Calendar_FS. There are scenarios where date ranges affecting areas like resource limit may not be adjusted until a change occurs to cause a recalculation, but based on general system usage the dates in these tables will progress forward with the rolling interval.

    Choosing a large date range can have an effect on the ETL process for the P6 Reporting Database. The extract portion of the process pulls spread data over into the STAR. The date range defines how long activity and resource assignment spread data is spread out in these tables. If an activity lasts 5 days, it will have 5 days of spread data. If a project lasts 5 years and the date range is 3 years, the spread data beyond that 3-year date range is bucketed into the last day of the date range. For the overall project, and even at the activity level, you will still see the correct total values; you just would not be able to see the daily spread 5 years from now.

    This is an important question when choosing your date range: do you really need to see spread data down to the day 5 years in the future? Generally that level of granularity years in the future is not needed. Remember, all those values 5, 10, 15, 20 years in the future are still available to report on; they would just be in more of a summary format at the activity or project level. The data is always there; the level of granularity is the decision.
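
    As a quick sanity check of the effective range in a given STAR schema, the two tables mentioned above can be queried directly (adjust the schema/owner to your installation):

        SELECT MIN(day_dt)  AS min_day,  MAX(day_dt)  AS max_day  FROM w_day_d;
        SELECT MIN(daydate) AS min_date, MAX(daydate) AS max_date FROM w_calendar_fs;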

    Read the article

  • Can't burn 8.1G iso onto 8.4GB DVD - "Media does not have enough free space"

    - by Max Williams
    I'm trying to burn a DVD on a Mac with an external (FireWire-connected) DVD drive. I'm checking the size of the ISO like this:

        DVD-4:dvd_files macbook$ ls -l /tmp/hybrid.iso
        -rw-r--r--  1 macbook  wheel  8700884992 Aug 22 10:57 /tmp/hybrid.iso
        DVD-4:dvd_files macbook$ ls -lh /tmp/hybrid.iso
        -rw-r--r--  1 macbook  wheel  8.1G Aug 22 10:57 /tmp/hybrid.iso

    The "human-readable" size is 8.1 gig, but when I try to burn onto an 8.4 GB dual-layer DVD, it says "Media does not have enough free space". The definition of a "gigabyte" according to Wikipedia is 1 billion bytes, so the ISO size should actually be 8.7 gig by that definition, in which case the disc definitely isn't big enough and it's just that the -h option to ls is misleading. Is the discrepancy just due to the ls command using a different definition of "G" (e.g. 1024 meg, aka 1.07 gig)? That comes out as 8.103, which fits what ls is displaying.
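
    For reference, the arithmetic, using the byte count from the ls output above (the dual-layer capacity used here is the nominal DVD+R DL figure, so treat it as approximate):

        # What ls -lh reports (binary gigabytes, GiB):   8700884992 / 2^30  = 8.10 (approx.)
        # Decimal gigabytes (GB), the unit disc capacities are quoted in:
        #                                                8700884992 / 10^9  = 8.70 (approx.)
        # A dual-layer disc holds roughly 8.5 * 10^9 bytes (about 7.96 GiB).
        echo "8700884992 / 2^30" | bc -l
        echo "8700884992 / 10^9" | bc -l
        echo "8700884992 > 8.5 * 10^9" | bc -l   # prints 1: the image exceeds the disc

    So the two readings agree once converted; the image really is bigger than a dual-layer disc, and the 8.1G from ls is simply GiB rather than the decimal GB used on the packaging.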

    Read the article

  • Do large corporations block jQuery content on web pages?

    - by Max Vernon
    We are currently redesigning our website. The company we've hired to do the redesign is advocating the use of jQuery to render the pages dynamically. Our SEO specialist is under the impression that many larger corporations may have jQuery blocked in their proxies to prevent their users from visiting sites like Facebook. Is this something you are aware of? Forgive me if this is off topic for SF.SE!

    Read the article
