Search Results

Search found 73851 results on 2955 pages for 'time machine'.


  • Mathematics for AI/Machine Learning?

    - by Ankur Gupta
    I intend to build a simple recommendation system for fun. I read a little on the net and figured that being good at math would enable one to build a good recommendation system. My math skills are not good, but I am willing to put considerable effort and time into learning. Can you please tell me which mathematics topics I should cover? Also, if any of you folks can point me to some online material to learn from, that would be great. I am aware of MIT OCW and books like Collective Intelligence. A list of math topics to cover, and where to read about them, would really help.
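
    For a concrete taste of the math involved (an illustrative aside, not part of the original question): most collaborative-filtering recommenders rank items by comparing user rating vectors, which is plain linear algebra. For instance, the cosine similarity between two users' rating vectors u and v is

    \[
    \operatorname{sim}(u, v) \;=\; \frac{u \cdot v}{\lVert u \rVert \, \lVert v \rVert}
    \;=\; \frac{\sum_i u_i v_i}{\sqrt{\sum_i u_i^2} \, \sqrt{\sum_i v_i^2}}
    \]

    so linear algebra (vectors, dot products, matrix factorization), basic probability and statistics, and a little calculus for optimization cover most of what a simple recommender needs.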

    Read the article

  • Agile: User Stories for Machine Learning Project?

    - by benjismith
    I've just finished a prototype implementation of a supervised learning algorithm that automatically assigns categorical tags to all the items in our company database (roughly 5 million items). The results look good, and I've been given the go-ahead to plan the production implementation project.

    I've done this kind of work before, so I know what the functional components of the software will be. I need a collection of web crawlers to fetch data. I need to extract features from the crawled documents. Those documents need to be segregated into a "training set" and a "classification set", and feature-vectors need to be extracted from each document. Those feature vectors are self-organized into clusters, and the clusters are passed through a series of rebalancing operations. Etc etc etc etc.

    So I put together a plan with about 30 unique development/deployment tasks, each with time estimates. The first stage of development -- ignoring some advanced features that we'd like to have in the long term, but which aren't high enough priority to make it into the development schedule yet -- is slated for about two months' worth of work. (Keep in mind that I already have a working prototype, so the final implementation is significantly simpler than if the project were starting from scratch.)

    My manager said the plan looked good to him, but he asked if I could reorganize the tasks into user stories, for a few reasons: (1) our project management software is totally organized around user stories; (2) all of our scheduling is based on fitting entire user stories into sprints, rather than individually scheduling tasks; (3) other teams -- like the web developers -- have made great use of agile methodologies, and they've benefited from modelling all the software features as user stories.

    So I created a user story at the top level of the project: As a user of the system, I want to search for items by category, so that I can easily find the most relevant items within a huge, complex database. Or maybe a better top-level story for this feature would be: As a content editor, I want to automatically create categorical designations for the items in our database, so that customers can easily find high-value data within our huge, complex database.

    But that's not the real problem. The tricky part, for me, is figuring out how to create subordinate user stories for the rest of the machine learning architecture. Case in point... I know that the algorithm requires two major architectural subdivisions: (A) training, and (B) classification. And I know that the training portion of the architecture requires construction of a cluster-space.

    All the agile development literature I've read seems to indicate that a user story should be the "smallest possible implementation that provides any business value". And that makes a lot of sense when designing a piece of end-user software. Start small, and then incrementally add value when users demand additional functionality. But a cluster-space, in and of itself, provides zero business value. Nor does a crawler, or a feature-extractor. There's no business value (not for the end-user, or for any of the roles internal to the company) in a partial system. A trained cluster-space is only possible with the crawler and feature extractor, and only relevant if we also develop an accompanying classifier.

    I suppose it would be possible to create user stories where the subordinate components of the system act as the users in the stories: As a supervised-learning cluster-space construction routine, I want to consume data from a feature extractor, so that I can exist. But that seems really weird. What benefit does it provide me as the developer (or our users, or any other stakeholders, for that matter) to model my user stories like that?

    Although the main story can be easily divided along architectural-component boundaries (crawler, trainer, classifier, etc), I can't think of any useful decomposition from a user's perspective. What do you guys think? How do you plan agile user stories for sophisticated, indivisible, non-user-facing components?

    Read the article

  • MySql Check if NOW() falls within a weekday/time range

    - by Niall
    I have a table as follows:

    CREATE TABLE `zonetimes` (
      `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
      `zone_id` int(10) unsigned NOT NULL,
      `active_from_day` tinyint(1) unsigned NOT NULL DEFAULT '2',
      `active_to_day` tinyint(1) unsigned NOT NULL DEFAULT '2',
      `active_from` time NOT NULL,
      `active_to` time NOT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=MyISAM;

    So a user could add a time entry starting on a particular day and time and ending on a particular day and time, e.g. between Monday 08:00 and Friday 18:00, or between Thursday 15:00 and Tuesday 15:00 (note the crossover at the end of the week). I need to query this data and determine whether a zone is currently active (NOW(), DAYOFWEEK(), etc)... This is turning out to be quite tricky. If I didn't have ranges that wrap around the end of the week, e.g. from Wednesday 8pm to Tuesday 4am, or from Thursday 4pm to Tuesday 4pm, this would be easy with BETWEEN. Also, I need to allow a user to add an entry for the entire week, e.g. Monday 8am - Monday 8am (this should be easy enough, e.g. WHERE (active_from_day=active_to_day AND active_from=active_to) OR ...). Any ideas? Note: I found a similar question here, Timespan - Check for weekday and time of day in mysql, but it didn't get an answer. One of the suggestions was to store each day as a separate row. I would much rather store one time span for multiple days though.
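
    As an illustrative aside, one way to tame the wrap-around case is to map each (day, time) pair onto a single "minute of the week" and branch on whether the range wraps. A minimal sketch of that logic in Python (the function and argument names are inventions for illustration; the same two-branch test translates into a WHERE clause using DAYOFWEEK(NOW()) and CURTIME()):

        from datetime import datetime

        def minute_of_week(weekday, hour, minute):
            # weekday follows MySQL's DAYOFWEEK(): 1 = Sunday ... 7 = Saturday
            return ((weekday - 1) * 24 + hour) * 60 + minute

        def zone_active(from_day, from_time, to_day, to_time, now):
            """True if `now` falls inside the range, handling week wrap-around."""
            # Python's weekday() is Monday=0..Sunday=6; convert to MySQL's numbering
            dow = (now.weekday() + 1) % 7 + 1
            start = minute_of_week(from_day, *from_time)
            end = minute_of_week(to_day, *to_time)
            cur = minute_of_week(dow, now.hour, now.minute)
            if start <= end:                   # plain range, e.g. Mon 08:00 - Fri 18:00
                return start <= cur <= end
            return cur >= start or cur <= end  # wraps, e.g. Thu 15:00 - Tue 15:00

        # Thursday 15:00 through Tuesday 15:00, checked on a Sunday at noon
        print(zone_active(5, (15, 0), 3, (15, 0), datetime(2024, 1, 7, 12, 0)))  # True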

    Read the article

  • Real Time Sound Leveler

    - by Soldier.moth
    Lately I've been annoyed with Hulu, as the commercials are significantly louder than the actual show. This has caused me to wonder whether there exists any application, either generic or specific to Hulu or Firefox, to reduce the difference in sound volume between the show and the commercials.

    Read the article

  • Real-time offline folder-to-folder backup application needed (Windows)

    - by niktech
    I recently started using the Intel Matrix Storage RAID solution, which allowed me to use my 5 1TB drives for two RAID volumes: first a 1TB RAID 0 striped across all 5 drives, and second a RAID 5 across the rest of the free space on all drives (around 2.85TB usable). The RAID 0 I use for the OS, applications and games, while the RAID 5 I use as more-permanent storage (photos, etc). Now I do realize that running the OS and applications on RAID 0 across 5 drives is very dangerous, which is what brings up the following question. Is there a reliable freeware real-time backup application that can back up a set of folders from one drive to another drive (no online backups needed)? I've already tried a few (Mozy, Yadis, Comodo Backup, GFI Backup, Idoo, Crash Plan) but none meet my requirements:

    - Low CPU and RAM usage.
    - Real-time backups: as soon as a file is modified in the source folder, it is added to a backup queue that is processed at the lowest priority when the CPU is idle. This backup queue should persist across computer restarts (i.e. the source and destination folders should always have the same set of files, except for the ones waiting in the queue).
    - Incremental backups: if only 10 bytes changed in a 1GB file, the app should only copy those 10 new bytes.
    - Ability to back up locked and open files (some apps, like Yadis, can't back up critical files like browser favorites).
    - Ability to run as a service (no need for any user to log in to have the app started).

    Optional requirements:

    - Compression of the destination into a well-known format (RAR, Zip) that can be read directly without the application.
    - Preset source folders (such as browser favorites, game saves, application settings, etc).

    The idea is to use the RAID 0 array as "semi-persistent RAM-like" storage which, in case of a failure, can be quickly rebuilt by reinstalling the OS, apps and games and copying over the settings, saves and favorites from the RAID 5. I'm also thinking of taking this RAID-0-as-RAM idea to the extreme with SSDs (as soon as we get some nice 6Gb/s SATA III SSDs out there), where a couple of SSDs chained in RAID 0 will work as yet another semi-persistent cache layer sitting between the RAM and the HD. I'm just hoping there already exists an application that satisfies these requirements... otherwise I'll have to write one myself, which I would prefer not to do.

    Read the article

  • Replacing sick NTP server source and re-synching (with internal time currently 2 minutes late)

    - by l0c0b0x
    One of the external NTP servers we're using as a source (currently the primary one) seems to have stopped responding to NTP requests. Unfortunately, our core router (a Cisco 6509) has not failed over to the secondary external NTP server as expected. As a result, the core router, which is pretty much our main internal NTP source, is 2 minutes behind. I'm planning to fix the issue by making the working external server the primary NTP source. I'm wondering: how much will a 2-minute change affect my users and services? Especially since, these days, we rely heavily on certificate-based authentication. We're a Windows/Cisco shop.

    Internal NTP setup:

    - Core Router 1 (Cisco 6509): syncs with two external NTP servers (of which the primary is not responding to NTP requests)
    - Core Router 2: syncs with Core Router 1 (primary), working external server (secondary)
    - Other Cisco network devices: sync with Core Router 1 (primary), Core Router 2 (secondary)
    - Domain controller(s): sync with Core Router 1
    - All Windows clients/servers: sync with the domain controllers

    Read the article

  • Real-time threat finder

    - by Rohit
    I want to make a small program that is capable of downloading files from the cloud onto my system. As each file reaches my system, another program on my system will analyze it and try to find suspicious behaviors in it. I want to make a system similar to ThreatExpert (www.threatexpert.com). The suspicious data gathered by my program will be sent to anti-virus companies for analysis. I want to know whether this program can be written in .NET or as a PHP website. I have no experience with cloud computing. How do I retrieve files from the cloud?

    Read the article

  • Extracting RTD (Real Time Data) from Excel file

    - by Mat0930912
    Hi everyone. I have an Excel 2010 file containing auto-updating cells with RTD. Example of a cell: =RTD("xxx";"yyy"). I need to extract those cells' values into a .txt file every X minutes, and the .txt file MUST contain the updated values. I tried a macro that exports a .txt snapshot of the Excel file every X minutes. The problem is that while the macro is running, the cells don't update: the values remain the same as they were before the macro was launched. It looks like the macro blocks the updating. What can I do? Thank you.

    Read the article

  • Windows Azure Virtual Machine Readiness and Capacity Assessment for SQL Server

    - by SQLOS Team
    Windows Azure Virtual Machine Readiness and Capacity Assessment for Windows Server Machines Running SQL Server

    With the release of MAP Toolkit 8.0 Beta, we have added a new scenario to assess your Windows Azure Virtual Machine readiness. The MAP 8.0 Beta performs a comprehensive assessment of Windows Servers running SQL Server to determine your level of readiness to migrate an on-premises physical or virtual machine to Windows Azure Virtual Machines. The MAP Toolkit then offers suggested changes to prepare the machines for migration, such as upgrading the operating system or SQL Server.

    MAP Toolkit 8.0 Beta is available for download here.

    Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Now, let's walk through the MAP Toolkit tasks for completing the Windows Azure Virtual Machine assessment and capacity planning. The tasks include the following:

    - Perform an inventory
    - View the Windows Azure VM Readiness results and report
    - Collect performance data to determine VM sizing
    - View the Windows Azure Capacity results and report

    Perform an inventory:

    1. To perform an inventory against a single machine or across a complete environment, choose Perform an Inventory to launch the Inventory and Assessment Wizard.

    2. After the Inventory and Assessment Wizard launches, select either the Windows computers or SQL Server scenario to inventory Windows machines. HINT: if you don't care about completely inventorying a machine, just select the SQL Server scenario. Click Next to continue.

    3. On the Discovery Methods page, select how you want to discover computers and then click Next to continue. The discovery methods are:

       - Use Active Directory Domain Services -- This method allows you to query a domain controller via the Lightweight Directory Access Protocol (LDAP) and select computers in all or specific domains, containers, or OUs. Use this method if all computers and devices are in AD DS.
       - Windows networking protocols -- This method uses the WIN32 LAN Manager application programming interfaces to query the Computer Browser service for computers in workgroups and Windows NT 4.0-based domains. If the computers on the network are not joined to an Active Directory domain, use only this option to find computers.
       - System Center Configuration Manager (SCCM) -- This method enables you to inventory computers managed by SCCM. You need to provide credentials for the SCCM server in order to inventory the managed computers. When you select this option, the MAP Toolkit will query SCCM for a list of computers and then connect to those computers.
       - Scan an IP address range -- This method allows you to specify the starting and ending addresses of an IP address range. The wizard will then scan all IP addresses in the range and inventory only those computers. Note: this option can perform poorly if many IP addresses in the range aren't in use.
       - Manually enter computer names and credentials -- Use this method if you want to inventory a small number of specific computers.
       - Import computer names from a file -- Using this method, you can create a text file with a list of computer names that will be inventoried.

    4. On the All Computers Credentials page, enter the accounts that have administrator rights to connect to the discovered machines. This does not need to be a domain account, but it does need to be a local administrator. I have entered my domain account, which is an administrator on my local machine. Click Next after one or more accounts have been added. NOTE: the MAP Toolkit primarily uses Windows Management Instrumentation (WMI) to collect hardware, device, and software information from the remote computers. In order for the MAP Toolkit to successfully connect to and inventory computers in your environment, you have to configure your machines for inventory through WMI and also configure your firewall to allow remote access through WMI. The MAP Toolkit also requires remote registry access for certain assessments. In addition to enabling WMI, you need accounts with administrative privileges to access desktops and servers in your environment.

    5. On the Credentials Order page, select the order in which you want the MAP Toolkit to connect to the machine and SQL Server. Generally, just accept the defaults and click Next.

    6. On the Enter Computers Manually page, click Create to pull up a dialog to enter one or more computer names.

    7. On the Summary page, confirm your settings and then click Finish. After clicking Finish, the inventory process will start.

    Windows Azure VM Readiness results and report

    After the inventory has completed, you can review the results under the Database scenario. On the tile, you will see the number of Windows Server machines with SQL Server that were analyzed, the number of machines that are ready to move without changes, and the number of machines that require further changes. If you click the Azure VM Readiness tile, you will see additional details and can generate the Windows Azure VM Readiness Report. After the report is generated, select View | Saved Reports and Proposals to view the location of the report. Open the WindowsAzureVMReadiness* report in Excel. On the Windows tab, you can see the results of the assessment. This report has a column for the operating system and SQL Server assessment, and provides a recommendation on how to resolve the issue if a component is not supported.

    Collect performance data

    Launch the Performance Wizard to collect performance information for the Windows Server machines for which you would like the MAP Toolkit to suggest a Windows Azure VM size.

    Windows Azure Capacity results and report

    After the performance metrics are collected, the Azure VM Capacity tile will display the number of virtual machine sizes that are suggested for the Windows Server and Linux machines that were analyzed. You can then click the Azure VM Capacity tile to see the capacity details and generate the Windows Azure VM Capacity Report. Within this report, you can view the performance data that was collected and the suggested virtual machine sizes.

    MAP Toolkit 8.0 Beta is available for download here.

    Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Useful references:

    - Windows Azure Homepage
    - How-to guides for Windows Azure Virtual Machines
    - Provisioning a SQL Server Virtual Machine on Windows Azure
    - Windows Azure Pricing

    Peter Saddow
    Senior Program Manager -- MAP Toolkit Team

    Read the article

  • Splitting a tetris game apart - where to put time-management?

    - by nightcracker
    I am creating a tetris game in C++ & SDL, and I'm trying to do it "right" by keeping it object-oriented and keeping scopes small. So far I have the following structure:

    - A main with some low-level SDL setup and input handling
    - A game class that keeps track of the score and provides the interface for main (move block down, etc)
    - A map class that keeps track of the current game field and which blocks are where; used by the game class
    - A block class representing the currently falling block; used by game
    - A renderer class abstracting low-level SDL to a format where you render "tetris blocks"; used by map and block

    Now I'm having a tough time deciding where to place the time-management of this game. For example, where should it be decided, once a block bumps the bottom of the field, how long it takes until the current block locks in place and a new block spawns? I also have another, unrelated question: is there some place where you can find standard data on tetris, like standard score tables, rulesets, timings, etc?
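
    One common arrangement, offered as an aside: let the game class own all timing, since it already owns the rules; map and block stay passive, and main just feeds in each frame's elapsed time from SDL's tick counter. A minimal sketch in Python (the class, constants and event names are inventions for illustration; the 0.5-second lock delay is an assumption, pick whatever your ruleset uses):

        class GameTimers:
            """Game-owned timing: the map and block classes never see the clock."""
            GRAVITY_INTERVAL = 1.0  # seconds per one-row drop (assumed; varies by level)
            LOCK_DELAY = 0.5        # grace period after the block touches the stack (assumed)

            def __init__(self):
                self.gravity_clock = 0.0
                self.lock_clock = None  # None until the falling block is resting

            def update(self, dt, block_resting):
                """Called once per frame with elapsed seconds; returns rule events."""
                events = []
                self.gravity_clock += dt
                while self.gravity_clock >= self.GRAVITY_INTERVAL:
                    self.gravity_clock -= self.GRAVITY_INTERVAL
                    events.append("move_block_down")
                if block_resting:
                    self.lock_clock = (self.lock_clock or 0.0) + dt
                    if self.lock_clock >= self.LOCK_DELAY:
                        events.append("lock_block_and_spawn")
                        self.lock_clock = None
                else:
                    self.lock_clock = None  # block lifted off the stack: reset the delay
                return events

    The game class would translate the returned events into calls on map and block, so the timing policy lives in exactly one place.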

    Read the article

  • How to speed up/slow down application's 'time'

    - by overboming
    I guess I am not saying it right in the title. What I intend to do is to hook some system API, like the timer interrupt that fires every fixed amount of time (which is how every application in the operating system perceives time), and make an application's calls to this API return a bigger or smaller result. So from the point of view of an application, time has been sped up or slowed down. I have found some applications on Windows that do this; can anyone give me some pointers on how to implement this on Mac OS X?

    Read the article

  • Find time difference

    - by RUBAN
    Hi, I have three columns. Column A has a start time and column B an end time, e.g. A = 2.43, B = 2.45, and C should show 00:02:00. I need the time difference between the start and end times in column C, formatted like HH:MM:SS. Can somebody help me please?
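
    For illustration, if the values are clock times written as hours.minutes (so 2.43 means 02:43), the whole trick is parsing them and formatting the difference. A minimal sketch in Python (in Excel, the equivalent is subtracting the two cells and formatting the result as hh:mm:ss, assuming the cells hold real time values):

        from datetime import datetime

        def hhmmss_diff(start, end):
            """Difference between two 'H.MM'-style clock times, as HH:MM:SS."""
            t0 = datetime.strptime(start, "%H.%M")
            t1 = datetime.strptime(end, "%H.%M")
            total = int((t1 - t0).total_seconds())
            hours, rem = divmod(total, 3600)
            minutes, seconds = divmod(rem, 60)
            return "%02d:%02d:%02d" % (hours, minutes, seconds)

        print(hhmmss_diff("2.43", "2.45"))  # 00:02:00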

    Read the article

  • Time Calculation in Python

    - by user343934
    Hi everyone, I have two Python CGI pages (index, display), and what I need to do is calculate the time frame between executions. Workflow: index.py has a submit; after submitting, it redirects to the display.py page. The display.py page has a class with execute(), display() and init_main() functions; in init_main() I create the class object which has access to the above functions. I need to calculate the time interval between the submit (index.py) and the end of execute(), so I can show the user how much time files of different weights take in real time. Hoping to see your suggestions. Thanks
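
    Since the timer has to survive the redirect from index.py to display.py, one approach (an aside, not the only way) is to stamp the submit time and carry it across the redirect as a query parameter, hidden field, or session value, then subtract at the end of execute(). A minimal sketch, where the start_ts parameter name is an invention for illustration:

        import cgi
        import time

        # index.py -- when handling the submit, stamp the time and pass it along
        start_ts = time.time()
        redirect_url = "display.py?start_ts=%f" % start_ts  # a hidden field works too

        # display.py -- at the end of execute(), read the stamp back and subtract
        form = cgi.FieldStorage()
        start_ts = float(form.getfirst("start_ts", time.time()))
        elapsed = time.time() - start_ts
        print("Processing took %.2f seconds" % elapsed)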

    Read the article

  • Convert GMT time to local time

    - by Dibish
    I am getting a GMT time from my server in this format: Fri, 18 Oct 2013 11:38:23 GMT. My requirement is to convert this time to local time using JavaScript. E.g. if the user is in India, I first need to take the time-zone offset (+5:30), add it to my server time, and convert the time string to the following format: 2013-10-18 16:37:06. I tried the following code, but it is not working:

    var date = new Date('Fri, 18 Oct 2013 11:38:23 GMT');
    date.toString();

    Please help me to solve this issue. Thanks in advance
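
    For what it's worth, here is the shape of the logic sketched in Python (in JavaScript the idea is the same: new Date('Fri, 18 Oct 2013 11:38:23 GMT') already represents the instant, and the local getFullYear()/getMonth()/getDate()/... accessors give the pieces to format): parse the RFC 1123 string, let the library apply the local offset, then reformat:

        from email.utils import parsedate_to_datetime

        server_time = "Fri, 18 Oct 2013 11:38:23 GMT"
        dt = parsedate_to_datetime(server_time)  # timezone-aware, still in GMT
        local = dt.astimezone()                  # converted to the machine's local zone
        print(local.strftime("%Y-%m-%d %H:%M:%S"))  # e.g. 2013-10-18 17:08:23 in IST (+5:30)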

    Read the article

  • MySQL - Extracting Time in correct format

    - by mouthpiec
    Hi, I have a table with a column of type "time", and the values in this column are stored as follows: 20:45:00, 18:00:00, etc. Now when displaying the result, I am not getting the minutes, just 00. I am using the following to get the time:

    SELECT TIME_FORMAT(time, '%h:%m') as time FROM ... etc
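
    Almost certainly the culprit is the format specifier: in MySQL's date/time format strings, %m is the month (which renders as 00 for a pure TIME value), while minutes are %i, so TIME_FORMAT(time, '%H:%i') should give 20:45 as expected. Python's strftime has the mirror-image trap (%M is minutes, %m is month), shown here for contrast:

        from datetime import time

        t = time(20, 45)
        print(t.strftime("%H:%M"))  # 20:45 -- %M means minutes in strftime
        print(t.strftime("%H:%m"))  # 20:01 -- %m means month (1, from the 1900-01-01 default)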

    Read the article

  • What do programmers do in their spare time at jobs?

    - by Shashank Jain
    Well, with no prior job experience, I am completely ignorant of how things happen at software companies. I want to know what programmers are made to do when there is nothing to do. Let's consider Facebook or Twitter. It is quite improbable that the Facebook people always have some feature or other in mind waiting to be implemented, so software developers can be expected to have some time when there is nothing to do. Are they free to do anything during this time?

    Read the article

  • Rails timezone differences between Time and DateTime

    - by kjs3
    I have the timezone set:

    config.time_zone = 'Mountain Time (US & Canada)'

    Creating a Christmas event from the console:

    c = Event.new(:title => "Christmas")

    Using Time:

    c.start = Time.new(2012,12,25)
    # => 2012-12-25 00:00:00 -0700  (has the correct offset)
    c.end = Time.new(2012,12,25).end_of_day
    # => 2012-12-25 23:59:59 -0700  (same deal)

    Using DateTime:

    c.start = DateTime.new(2012,12,25)
    # => Tue, 25 Dec 2012 00:00:00 +0000  (no offset)
    c.end = DateTime.new(2012,12,25).end_of_day
    # => Tue, 25 Dec 2012 23:59:59 +0000  (same)

    I've carelessly been using DateTime, thinking that input was assumed to be in config.time_zone, but there's no conversion when this gets saved to the db; it's stored just the same as the return values (formatted for the db). Using Time is really no big deal, but do I need to manually offset any time I'm using DateTime and want it to be in the correct zone?
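
    The underlying distinction, as an aside, is naive versus zone-aware values: DateTime.new builds a value with no zone conversion (hence the +0000), and in Rails the usual remedy is to build times through Time.zone (e.g. Time.zone.local) so config.time_zone is respected. The same naive-versus-aware split, sketched in Python for contrast (the IANA zone name for Mountain Time is an assumption; zoneinfo requires Python 3.9+):

        from datetime import datetime
        from zoneinfo import ZoneInfo

        tz = ZoneInfo("America/Denver")  # Mountain Time (US & Canada)

        naive = datetime(2012, 12, 25)             # no zone attached, like DateTime.new
        aware = datetime(2012, 12, 25, tzinfo=tz)  # zone-aware, like Time.zone.local

        print(naive.isoformat())  # 2012-12-25T00:00:00        (no offset)
        print(aware.isoformat())  # 2012-12-25T00:00:00-07:00  (correct offset)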

    Read the article

  • Making the user change the time in Android

    - by Casebash
    Android doesn't appear to provide a way for a user application to change the system time. What I would like to do instead is get the user to change the time. It is easy to open the Date & Time settings:

    startActivity(new Intent(android.provider.Settings.ACTION_DATE_SETTINGS));

    What I would like to know is:

    - Is it possible to link directly to the set-time option?
    - Is it possible to check that the user set the time correctly? I am aware of the TIME_CHANGED broadcast message, but I can't find any documentation on it.

    Read the article

  • Have set Expiration time: Still getting "Query string present but no explicit expiration time"

    - by oligofren
    I have a local Apache instance running with mod_cache (plus the disk and mem backends) enabled, and it seems to cache content from my app server fine. My app server sets Expires and Last-Modified headers. Yet, when deploying on a production server with the same modules enabled, I am getting the following error in my logs:

    blablabla not cached. Reason: Query string present but no explicit expiration time

    Any clues on why Apache is not caching content? The only difference is the Apache version; locally I am running 2.2. This is from my config:

    CacheRoot "/var/cache/apache2/"
    CacheEnable disk /

    This is example output:

    < HTTP/1.1 200 OK
    < Date: Mon, 19 Nov 2012 16:09:13 GMT
    < Server: Sun GlassFish Enterprise Server v2.1.1
    < X-Powered-By: Servlet/2.5
    < Expires: Tue Nov 20 05:00:00 CET 2012
    < Last-Modified: Mon Nov 19 17:09:13 CET 2012
    < Cache-Control: no-transform
    < Content-Type: application/x-javascript
    < Transfer-Encoding: chunked
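
    One observation worth checking (a hypothesis, not a confirmed diagnosis): the Expires value in the trace, "Tue Nov 20 05:00:00 CET 2012", is not in the RFC 1123 format HTTP requires (e.g. "Tue, 20 Nov 2012 04:00:00 GMT"), so mod_cache may be discarding it as unparseable and then refusing to cache a URL that carries a query string. A minimal sketch of producing a valid value, in Python, assuming the app server lets you set the header directly:

        import time
        from email.utils import formatdate

        # RFC 1123 date, always expressed in GMT, here twelve hours from now
        expires = formatdate(time.time() + 12 * 3600, usegmt=True)
        print("Expires: " + expires)  # e.g. Expires: Tue, 20 Nov 2012 04:00:00 GMT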

    Read the article

  • Issues with Nginx + Passenger production setup - loading time/request time delay

    - by Dani Cela
    I'm having a bit of an issue relating to request time. I have NGINX as a proxy server for a Ruby on Rails app running Passenger. I also have a PostgreSQL database server running on its own VM, separate from my nginx/application server. My issue is that when I try to access my products page, which does a lot of database queries, the request takes maybe 3-4 seconds. The second I flood the web server with requests, I choke it out and requests take almost 20-30 seconds to process. The Rails server and database server do not crash, and their usage is not that high: each server has more than enough memory, and even CPU usage on the Rails server isn't more than 85%; that's high, but it's not maxed out. Is my problem related to my nginx proxy server? I don't really know how to fully explain this, so if you have a question, please ask it and I can clarify what I mean. EDIT: to see exactly what I mean regarding the database query, see http://207.245.4.215/products

    Read the article

  • Which components should I invest in for a backup machine?

    - by Senthil
    I am a freelance developer. I have a PC, a laptop and an old testing and file server machine, and I might add one or two more in future. I want to have an on-site backup machine that can handle backups of ALL these machines: file backups, MySQL backups, backups of the Subversion repository, etc. When building the machine, which components should I invest more in? For example, the cabinet should have lots of room for expansion, and hard disk size should be large, but I guess hard disk speed need not be high (?). For the other components, like RAM, PSU, processor, network card, cooling, etc., how much relative importance do they have in a backup machine? Which of these components should be high-end or large, and which ones need not be? Some idea of the load: there will be TBs of data; file and Subversion repository backups will be done at least daily, MySQL backups weekly. Assume 3 machines at the moment and somewhere around 10 machines in the future.

    Read the article

  • HTML5 time tag not recognized by IE8 when cloning

    - by matsientst
    I have been having trouble getting IE to recognize the new time tag in this context (this all works great in FF). Here is the code:

    var origComment = $('.articleComment:first div');
    if (origComment.length > 0) {
        var commentHtml = origComment.clone(true);
        commentHtml.find('time').text('today');
        var html = '<article class="' + ((side == 'LEFT') ? '' : 'that') + '">'
                 + commentHtml.html() + '</article>';
        $(html).insertAfter('.articleComment:last');
    }

    The HTML looks something like this:

    <article class="articleComment that">
      <div id="156" class="parent">
        <div class="byline">
          <p>Posted <time pubdate="pubdate" datetime="2010-05-07T09:11:08">today</time> by<br/>
            <a class="username" href="/u/matt">matt</a>
          </p>
          <p class="report"><a href="#">Report?</a></p>
        </div>
        <div class="comment">left</div>
      </div>
    </article>

    IE can find the time tag, but it returns a collection of 2 elements (I assume the opening and closing tags). However, I cannot access the element to modify it. I have tried val(), html() and text(). I also can't drop down to the actual HTMLElement: I can't .get(0).innerHTML. But if I .get(0).tagName, it actually is the time tag I've got. Any ideas? I hope this makes sense.

    Read the article
