Search Results

Search found 1776 results on 72 pages for 'cached'.

Page 12/72 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • Mako "Missing parentheses in %def"

    - by michaelmior
    In trying to add a cached section to a Mako template, I get the error listed in the above question. Adding () to the end gets rid of the error, but I see no content on my page. Any help is appreciated! <%def name="test" cached="True" cache_timeout="60" cache_type="file"> Test </%def>
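    A minimal sketch of the two changes that usually resolve both symptoms (it assumes Mako with the Beaker cache backend installed; the cache_dir value is only an illustrative choice): the %def needs a call signature, i.e. name="test()", and a %def only produces output where it is called, so the page must also invoke ${test()} somewhere.

    # Sketch only: Mako + Beaker assumed; cache_dir is an arbitrary example path.
    from mako.template import Template

    tmpl = Template("""
    <%def name="test()" cached="True" cache_timeout="60" cache_type="file">
    Test
    </%def>
    ${test()}
    """, cache_dir="/tmp/mako_cache")

    # First render fills the file cache; renders within the next 60 seconds reuse it.
    print(tmpl.render())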

    Read the article

  • php-common causing conflict

    - by Cornwell
    I'm trying to install php-pdo but it always fails because of php-common bash-3.2# yum install php-pdo Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * epel: fedora-epel.mirror.lstn.net Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package php-pdo.i386 0:5.1.6-40.el5_9 set to be updated --> Processing Dependency: php-common = 5.1.6-40.el5_9 for package: php-pdo --> Finished Dependency Resolution php-pdo-5.1.6-40.el5_9.i386 from base has depsolving problems --> Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base) Error: Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest The program package-cleanup is found in the yum-utils package. Installing php-common I get: bash-3.2# yum install php-common Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * epel: fedora-epel.mirror.lstn.net Setting up Install Process Package matching php-common-5.1.6-40.el5_9.i386 already installed. Checking for update. Nothing to do Searched here and google but couldn't find anything that worked EDIT: Added new repositories: bash-3.2# yum install php-common Loaded plugins: fastestmirror Repository base is listed more than once in the configuration Repository addons is listed more than once in the configuration Repository extras is listed more than once in the configuration Repository centosplus is listed more than once in the configuration Repository contrib is listed more than once in the configuration Loading mirror speeds from cached hostfile * addons: mirror.raystedman.net * base: mirror.5ninesolutions.com * centosplus: mirror.anl.gov * contrib: yum.singlehop.com * epel: fedora-epel.mirror.lstn.net * extras: centos.unmeteredvps.net * update: mirror.team-cymru.org addons | 1.9 kB 00:00 base | 1.1 kB 00:00 centosplus | 1.9 kB 00:00 centosplus/primary_db | 53 kB 00:01 contrib | 1.9 kB 00:00 contrib/primary_db | 1.1 kB 00:00 epel | 3.6 kB 00:00 extras | 2.1 kB 00:00 nginx | 2.5 kB 00:00 update | 1.9 kB 00:00 update/primary_db | 84 kB 00:04 updates | 1.9 kB 00:00 Setting up Install Process Package matching php-common-5.1.6-40.el5_9.i386 already installed. Checking for update. Nothing to do And then... 
bash-3.2# yum install php-pdo Loaded plugins: fastestmirror Repository base is listed more than once in the configuration Repository addons is listed more than once in the configuration Repository extras is listed more than once in the configuration Repository centosplus is listed more than once in the configuration Repository contrib is listed more than once in the configuration Loading mirror speeds from cached hostfile * addons: mirror.raystedman.net * base: centos.mirror.lstn.net * centosplus: mirror.anl.gov * contrib: yum.singlehop.com * epel: fedora-epel.mirror.lstn.net * extras: centos.unmeteredvps.net * update: mirrors.loosefoot.com Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package php-pdo.i386 0:5.1.6-40.el5_9 set to be updated --> Processing Dependency: php-common = 5.1.6-40.el5_9 for package: php-pdo --> Finished Dependency Resolution php-pdo-5.1.6-40.el5_9.i386 from base has depsolving problems --> Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base) Error: Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest The program package-cleanup is found in the yum-utils package.

    Read the article

  • Inequality joins, Asynchronous transformations and Lookups : SSIS

    - by jamiet
    It is pretty much accepted by SQL Server Integration Services (SSIS) developers that synchronous transformations are generally quicker than asynchronous transformations (for a description of synchronous and asynchronous transformations go read Asynchronous and synchronous data flow components). Notice I said “generally” and not “always”; there are circumstances where using asynchronous transformations can be beneficial and in this blog post I’ll demonstrate such a scenario, one that is pretty common when building data warehouses. Imagine I have a [Customer] dimension table that manages information about all of my customers as a slowly-changing dimension. If that is a type 2 slowly changing dimension then you will likely have multiple rows per customer in that table. Furthermore you might also have datetime fields that indicate the effective time period of each member record. Here is such a table that contains data for four dimension members {Terry, Max, Henry, Horace}: Notice that we have multiple records per customer and that the [SCDStartDate] of a record is equivalent to the [SCDEndDate] of the record that preceded it (if there was one). (Note that I am on record as saying I am not a fan of this technique of storing an [SCDEndDate] but for the purposes of clarity I have included it here.) Anyway, the idea here is that we will have some incoming data containing [CustomerName] & [EffectiveDate] and we need to use those values to lookup [Customer].[CustomerId]. The logic will be: Lookup a [CustomerId] WHERE [CustomerName]=[CustomerName] AND [SCDStartDate] <= [EffectiveDate] AND [EffectiveDate] <= [SCDEndDate] The conventional approach to this would be to use a full cached lookup but that isn’t an option here because we are using inequality conditions. The obvious next step then is to use a non-cached lookup which enables us to change the SQL statement to use inequality operators: Let’s take a look at the dataflow: Notice these are all synchronous components. This approach works just fine however it does have the limitation that it has to issue a SQL statement against your lookup set for every row thus we can expect the execution time of our dataflow to increase linearly in line with the number of rows in our dataflow; that’s not good. OK, that’s the obvious method. Let’s now look at a different way of achieving this using an asynchronous Merge Join transform coupled with a Conditional Split. I’ve shown it post-execution so that I can include the row counts which help to illustrate what is going on here: Notice that there are more rows output from our Merge Join component than on the input. That is because we are joining on [CustomerName] and, as we know, we have multiple records per [CustomerName] in our lookup set. Notice also that there are two asynchronous components in here (the Sort and the Merge Join). I have embedded a video below that compares the execution times for each of these two methods. The video is just over 8minutes long. View on Vimeo  For those that can’t be bothered watching the video I’ll tell you the results here. The dataflow that used the Lookup transform took 36 seconds whereas the dataflow that used the Merge Join took less than two seconds. An illustration in case it is needed: Pretty conclusive proof that in some scenarios it may be quicker to use an asynchronous component than a synchronous one. Your mileage may of course vary. The scenario outlined here is analogous to performance tuning procedural SQL that uses cursors. 
It is common to eliminate cursors by converting them to set-based operations and that is effectively what we have done here. Our non-cached lookup is performing a discrete operation for every single row of data, exactly like a cursor does. By eliminating this cursor-in-disguise we have dramatically sped up our dataflow. I hope all of that proves useful. You can download the package that I demonstrated in the video from my SkyDrive at http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100514/20100514%20Lookups%20and%20Merge%20Joins.zip Comments are welcome as always. @Jamiet
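    As an aside, the cursor-versus-set-based point is easy to see outside SSIS too. Below is a rough Python analogy (not SSIS code; the record layout and names are made up): the first function probes the dimension once per incoming row, like the non-cached Lookup, while the second groups the sorted dimension once and resolves each row in a single pass plus a range filter, like the Sort + Merge Join + Conditional Split pattern.

    # Rough analogy only: per-row lookup vs. a sorted, set-based join.
    from itertools import groupby
    from operator import itemgetter

    dim = [  # [Customer] dimension: several rows per name, each with a validity range
        {"name": "Terry", "start": 1, "end": 5,  "id": 101},
        {"name": "Terry", "start": 5, "end": 99, "id": 102},
        {"name": "Max",   "start": 1, "end": 99, "id": 201},
    ]
    incoming = [{"name": "Terry", "effective": 7}, {"name": "Max", "effective": 2}]

    def per_row_lookup(rows):
        # Like the non-cached Lookup: one scan of the dimension per incoming row.
        return [dict(r, id=next(d["id"] for d in dim
                                if d["name"] == r["name"] and d["start"] <= r["effective"] <= d["end"]))
                for r in rows]

    def merge_join_style(rows):
        # Like Sort + Merge Join + Conditional Split: join on name, then filter the range.
        dim_by_name = {k: list(g) for k, g in groupby(sorted(dim, key=itemgetter("name")),
                                                      key=itemgetter("name"))}
        out = []
        for r in sorted(rows, key=itemgetter("name")):
            for d in dim_by_name.get(r["name"], []):          # the join emits extra rows...
                if d["start"] <= r["effective"] <= d["end"]:  # ...the "split" keeps the right one
                    out.append(dict(r, id=d["id"]))
        return out

    print(per_row_lookup(incoming))   # [{'name': 'Terry', 'effective': 7, 'id': 102}, ...]
    print(merge_join_style(incoming))

    The second shape does more up-front work (the sorts) but touches the lookup set a bounded number of times, which is the same trade the Merge Join dataflow makes.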

    Read the article

  • OSB and Coherence Integration

    - by mark.ms.smith
    Anyone who has tried to manage Coherence nodes or tried to cache results in OSB will appreciate the new functionality now available. As of WebLogic Server 10.3.4, you can use the WebLogic Administration Server, via the Administration Console or WLST, and the java-based Node Manager to manage and monitor the life cycle of stand-alone Coherence cache servers. This is a great step forward as the previous options mainly involved writing your own scripts to do this. You can find an excellent description of how this works at James Bayer's blog. You can also find the WebLogic documentation here. As of Oracle Service Bus 11gR1 (11.1.1.3.0), OSB now supports service result caching for Business Services with Coherence. If you use Business Services that return somewhat static results that do not change often, you can configure those Business Services to cache results. For Business Services that use result caching, you can control the time to live for the cached result. After the cached result expires, the next Business Service call results in invoking the back-end service to get the result. This result is then stored in the cache for future requests to access. I'm thinking that this caching functionality would be perfect for some sort of cross-reference data that was refreshed nightly by batch. You can find the OSB Business Service documentation here.

    Result Caching in a dedicated JVM: This example demonstrates these new features by configuring an OSB Business Service to cache results in a separate Coherence JVM managed by WebLogic. The reason why you may want to use a separate, dedicated JVM is that the result cache data could potentially be quite large and you may want to protect your OSB java heap. In this example, the client will call an OSB Proxy Service to get Employee data based on an Employee Id. Using a Business Service, OSB calls an external system. The results are automatically cached and when called again, the respective results are retrieved from the cache rather than the external system.

    Step 1 – Set up your Coherence Server: Via the OSB Administration Server Console, create your Coherence Server to be used as the results cache. Here are the configured Coherence Server arguments from the Server Start tab. Note that I'm using the default Cache Config and Override files in the domain: -Xms256m -Xmx512m -XX:PermSize=128m -XX:MaxPermSize=256m -Dtangosol.coherence.override=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-override.xml -Dtangosol.coherence.cluster=OSB-cluster -Dtangosol.coherence.cacheconfig=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-cache-config.xml -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dcom.sun.management.jmxremote Just in case you need it, here is my Coherence Server classpath: /app/middleware/jdev_11.1.1.4/oracle_common/modules/oracle.coherence_3.6/coherence.jar: /app/middleware/jdev_11.1.1.4/modules/features/weblogic.server.modules.coherence.server_10.3.4.0.jar: /app/middleware/jdev_11.1.1.4/oracle_osb/lib/osb-coherence-client.jar By default, OSB will try to create a local result cache instance. You need to disable this by adding the following JVM parameters to each of the OSB Managed Servers: -Dtangosol.coherence.distributed.localstorage=false -DOSB.coherence.cluster=OSB-cluster If you need more information on configuring a remote result cache, have a look at the configuration documentation under the heading Using an Out-of-Process Coherence Cache Server.

    Step 2 – Configure your Business Service: Under the respective Business Service Message Handling Configuration (Advanced Properties), you need to enable "Result Caching". Additionally, you need to determine what the cache data will be keyed on. In the example below, I'm keying it on the unique Employee Id.

    The Results: As this test was on my laptop, the actual timings are just an indication that there is a benefit to caching results. Using my test harness, I sent 10,000 requests to OSB, all with the same Employee Id. In this case, I had result caching disabled. You can see that this caused the back-end Business Service (BS_GetEmployeeData) to be called for each request. Then after enabling result caching, I sent the same number of identical requests. You can now see the Business Service was only invoked once on the first request. All subsequent requests used the Results Cache.

    Read the article

  • Free RAM disappears - Memory leak?

    - by Izzy
    On a fresh started system, free reports about 1.5G used RAM (8G RAM alltogether, Ubuntu 12.04 with lightdm and plasma desktop, one konsole window started). Having the apps running I use, it still consumes not more than 2G. However, having the system running for a couple of days, more and more of my free RAM disappears -- without showing up in the list of used apps: while smem --pie=name reports less than 20% used (and 80% being available), everything else says differently. free -m for example reports on about day 7: total used free shared buffers cached Mem: 7459 7013 446 0 178 997 -/+ buffers/cache: 5836 1623 Swap: 9536 296 9240 (so you can see, it's not the buffers or the cache). Today this finally ended with the system crashing completely: the windows manager being gone, apps "hanging in the air" (frameless) -- and a popup notifying me about "too many open files". Syslog reports: kernel: [856738.020829] VFS: file-max limit 752838 reached So I closed those applications I was able to close, and killed X using Ctrl-Alt-backspace. X tried to come up again after that with failsafeX, but was unable to do so as it could no longer detect its configuration. So I switched to a console using Ctrl-Alt-F2, captured all information I could think of (vmstat, free, smem, proc/meminfo, lsof, ps aux), and finally rebooted. X again came up with failsafeX; this time I told it to "recover from my backed-up configuration", then switched to a console and successfully used startx to bring up the graphical environment. I have no real clue to what is causing this issue -- though it must have to do either with X itself, or with some user processes running on X -- as after killing X, free -m output looked like this: total used free shared buffers cached Mem: 7459 2677 4781 0 62 419 -/+ buffers/cache: 2195 5263 Swap: 9536 59 9477 (~3.5GB being freed) -- to compare with the output after a fresh start: total used free shared buffers cached Mem: 7459 1483 5975 0 63 730 -/+ buffers/cache: 689 6769 Swap: 9536 0 9536 Two more helpful outputs are provided by memstat -u. Shortly before the crash: User Count Swap USS PSS RSS mail 1 0 200 207 616 whoopsie 1 764 740 817 2300 colord 1 3200 836 894 2156 root 62 70404 352996 382260 569920 izzy 80 177508 1465416 1519266 1851840 After having X killed: User Count Swap USS PSS RSS mail 1 0 184 188 356 izzy 1 1400 708 739 1080 whoopsie 1 848 668 826 1772 colord 1 3204 804 888 1728 root 62 54876 131708 149950 267860 And after a restart, back in X: User Count Swap USS PSS RSS mail 1 0 212 217 628 whoopsie 1 0 1536 1880 5096 colord 1 0 3740 4217 7936 root 54 0 148668 180911 345132 izzy 47 0 370928 437562 915056 Edit: Just added two graphs from my monitoring system. Interesting to see: everytime when there's a "jump" in memory consumption, CPU peaks as well. Just found this right now -- and it reminds me of another indicator pointing to X itself: Often when returning to my machine and unlocking the screen, I found something doing heavvy work on my CPU. Checking with top, it always turned out to be /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none. So after this long explanation, finally my questions: What could be the possible causes? How can I better identify involved processes/applications? What steps could be taken to avoid this behaviour -- short from rebooting the machine all X days? 
    I was running 8.04 (Hardy) for about 5 years on my old machine, never having experienced the like (always more than 100 days uptime, before rebooting for e.g. kernel updates). This is now a completely new machine with a fresh install of 12.04. In case it matters, some specs: AMD A4-3400 APU with Radeon(tm) HD Graphics, using the open-source ati/radeon driver (so no fglrx installed), 8GB RAM, WDC WD1002FAEX-0 hdd (1TB), Asus F1A75-V Evo mainboard. Ubuntu 12.04 64-bit with KDE4/Plasma. Apps that are usually open more or less permanently include Evolution, Firefox, konsole (with Midnight Commander running inside, about 4 tabs), and LibreOffice -- plus occasionally Calibre, Gimp and Moneyplex (banking software I'm already using for almost 20 years now, in a version which did fine on Hardy).
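    To help with the second question above (how to better identify the processes involved), one practical step is to log memory accounting at intervals and compare it over time. Below is a rough monitoring sketch assuming a standard Linux /proc layout; the 60-second interval, the top-5 cut-off and the output format are arbitrary choices. It records "used" memory from /proc/meminfo (total minus free, buffers and cache) next to the sum of per-process RSS; a gap that grows while no single process grows points away from an ordinary application leak. Note that summing RSS over-counts shared pages (smem's PSS avoids that), so treat the gap as a trend rather than an exact number.

    #!/usr/bin/env python
    # Rough sketch: periodically log overall memory use vs. per-process RSS.
    import os, time

    def meminfo():
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":", 1)
                info[key] = int(value.split()[0])     # values are reported in kB
        return info

    def process_rss():
        rss = {}
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open("/proc/%s/status" % pid) as f:
                    fields = dict(l.split(":", 1) for l in f if ":" in l)
                rss[(pid, fields["Name"].strip())] = int(fields["VmRSS"].split()[0])
            except (IOError, OSError, KeyError):      # kernel threads lack VmRSS; pids may vanish
                continue
        return rss

    while True:
        mi = meminfo()
        used = mi["MemTotal"] - mi["MemFree"] - mi["Buffers"] - mi["Cached"]
        procs = process_rss()
        top = sorted(procs.items(), key=lambda kv: kv[1], reverse=True)[:5]
        print("%s used=%d kB sum(RSS)=%d kB | top: %s" % (
            time.strftime("%F %T"), used, sum(procs.values()),
            ", ".join("%s(%s)=%d kB" % (name, pid, kb) for (pid, name), kb in top)))
        time.sleep(60)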

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (no statistics based on a past existence), but this is not true if you have fewer than 6 updates to the temporary table. This might lead to poor performance of queries which are sensitive to the content of temporary tables. I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue but rather a temporary table statistics caching issue, which was part of the temporary object caching feature that was introduced in SQL Server 2005 and is also present in SQL Server 2008 and SQL Server 2012. In this customer case we implemented a workaround to avoid this issue (see below for example workarounds). When temporary tables are cached, the statistics are not newly created but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal. We can work around this issue by disabling temporary table caching by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around this is to create an index. I think there might be many customers in such a situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution is to have a more aggressive statistics update when the temporary table has a small number of rows and temporary table caching is used. I will open a Connect item to report this issue. Meanwhile you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter: SQL Server: General Statistics->Active Temp Tables.
    The script to understand the issue and the workaround is listed below:
    set nocount on
    set statistics time off
    set statistics io off
    drop table tab7
    go
    create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
    go
    create index test on tab7(c2, c1, c3)
    go
    begin tran
    declare @i int
    set @i = 1
    while @i <= 50000
    begin
    insert into tab7 values (@i, 1, 'a')
    set @i = @i + 1
    end
    commit tran
    go
    insert into tab7 values (50001, 1, 'a')
    go
    checkpoint
    go
    drop proc test_slow
    go
    create proc test_slow @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_slow 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    exec test_slow 2
    go
    drop proc test_with_recompile
    go
    create proc test_with_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    --low reads on 3rd execution as expected for parameter '2'
    exec test_with_recompile 2
    go
    drop proc test_with_alter_table_recompile
    go
    create proc test_with_alter_table_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    --to avoid caching of temporary tables one can create a constraint
    --but this might lead to duplicate constraint name error on concurrent usage
    alter table #temp1 add constraint test123 unique(c1)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_alter_table_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_alter_table_recompile 2
    go
    drop proc test_with_index_recompile
    go
    create proc test_with_index_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    --to avoid caching of temporary tables one can create an index
    create index test on #temp1(c1)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    set statistics time on
    set statistics io on
    dbcc dropcleanbuffers
    go
    --high reads as expected for parameter '1'
    exec test_with_index_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_index_recompile 2
    go

    Read the article

  • Caching factory design

    - by max
    I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable:

    # module xfactory.py
    import x

    class XFactory:
        _registry = {}

        def get_x(self, arg1, arg2, use_cache=True):
            if use_cache:
                hash_id = hash((arg1, arg2))
                if hash_id in self._registry:
                    return self._registry[hash_id]
                obj = x.X(arg1, arg2)
                self._registry[hash_id] = obj
                return obj
            return x.X(arg1, arg2)  # no-cache path

    # module x.py
    class X:
        # ...
        pass

    Is it a good pattern? (I know it's not the actual Factory Pattern.) Is there anything I should change? Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose, and store as values in the _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving pickle files the filenames that contain hash_id). Except now the validity of the cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created these objects. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies, and reloads it while the program is running. So far I have ignored this danger since it seems unlikely for my application. But I certainly cannot ignore it when my objects are cached to persistent storage. What can I do? I suppose I could make the hash_id more robust by calculating the hash of a tuple that contains arguments arg1 and arg2, as well as the filename and last modified date for x.py and every module and data file that it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd add to the _registry the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe, since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. If I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc. In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation. Thus, I figured I might as well give up some safety, and only invalidate the cache when there is an obvious mismatch. This means that class X would have a class-level cache validation identifier that should be changed whenever the developer believes a change happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry. Since developers may forget to update the validation identifier or not realize that they invalidated existing cache, it would seem better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation will include the traits as well. I can write this code, but I am afraid that I'm missing something important; and I'm also wondering if perhaps there's a framework or package that can do all of this stuff already. Ideally, I'd like to combine in-memory and disk-based caching.
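    One way to realize the disk-cache variant sketched above, with a key that also reflects the code version, is to fold the modification times of the relevant source files into the hash, roughly as below. This is only an illustrative sketch: the cache directory name, the DEPENDS_ON list and the pickle-per-key layout are assumptions, and listing dependencies by hand has exactly the blind spots described above (dynamic imports, external data). Note also that Python's built-in hash() is not stable across processes for strings (hash randomization), so a digest such as SHA-1 over repr() is a safer on-disk key than hash((arg1, arg2)).

    import hashlib
    import os
    import pickle

    CACHE_DIR = "x_cache"              # illustrative location for pickled objects
    DEPENDS_ON = ["x.py"]              # assumption: files whose changes should invalidate the cache

    def cache_key(arg1, arg2):
        # Fold the argument values and the source-file mtimes into one stable key.
        # Assumes arg1/arg2 have deterministic repr()s.
        h = hashlib.sha1(repr((arg1, arg2)).encode())
        for path in DEPENDS_ON:
            h.update(("%s:%s" % (path, os.path.getmtime(path))).encode())
        return h.hexdigest()

    def get_x_cached(arg1, arg2, build):
        """build is the expensive constructor, e.g. lambda: x.X(arg1, arg2)."""
        os.makedirs(CACHE_DIR, exist_ok=True)
        path = os.path.join(CACHE_DIR, cache_key(arg1, arg2) + ".pkl")
        if os.path.exists(path):
            with open(path, "rb") as f:
                return pickle.load(f)           # cache hit: reuse the pickled object
        obj = build()
        with open(path, "wb") as f:
            pickle.dump(obj, f)                 # cache miss: build once, then persist
        return obj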

    Read the article

  • OpenXML SDK: Make Excel recalculate formula

    - by chiccodoro
    I update some cells of an Excel spreadsheet through the Microsoft Office OpenXML SDK 2.0. Changing the values makes all cells containing formulas that depend on the changed cells invalid. However, due to the cached values Excel does not recalculate the formulas, even if the user clicks on "Calculate now". What is the best way to invalidate all dependent cells of the whole workbook through the SDK? So far, I've found the following code snippet at http://cdonner.com/introduction-to-microsofts-open-xml-format-sdk-20-with-a-focus-on-excel-documents.htm:

    public static void ClearAllValuesInSheet(SpreadsheetDocument spreadSheet, string sheetName)
    {
        WorksheetPart worksheetPart = GetWorksheetPartByName(spreadSheet, sheetName);
        foreach (Row row in worksheetPart.Worksheet.GetFirstChild<SheetData>().Elements<Row>())
        {
            foreach (Cell cell in row.Elements<Cell>())
            {
                if (cell.CellFormula != null && cell.CellValue != null)
                {
                    cell.CellValue.Remove();
                }
            }
        }
        worksheetPart.Worksheet.Save();
    }

    Besides the fact that this snippet does not compile for me, it has two limitations: it only invalidates a single sheet, although other sheets might contain dependent formulas, and it does not take into account any dependencies. I am looking for a way that is efficient (in particular, only invalidates cells that depend on a certain cell's value), and takes all sheets into account. Update: In the meantime I have managed to make the code compile & run, and to remove the cached values on all sheets of the workbook. (See answers.) Still I am interested in better/alternative solutions, in particular how to only delete cached values of the cells that actually depend on the updated cell.

    Read the article

  • Remove file from git repository (history)

    - by Devenv
    (solved, see bottom of the question body) Looking for this for a long time now, what I have till now is: http://dound.com/2009/04/git-forever-remove-files-or-folders-from-history/ and http://progit.org/book/ch9-7.html Pretty much the same method, but both of them leave objects in pack files... Stuck. What I tried: git filter-branch --index-filter 'git rm --cached --ignore-unmatch file_name' rm -Rf .git/refs/original rm -Rf .git/logs/ git gc Still have files in the pack, and this is how I know it: git verify-pack -v .git/objects/pack/pack-3f8c0...bb.idx | sort -k 3 -n | tail -3 And this: git filter-branch --index-filter "git rm -rf --cached --ignore-unmatch file_name" HEAD rm -rf .git/refs/original/ && git reflog expire --all && git gc --aggressive --prune The same... Tried git clone trick, it removed some of the files (~3000 of them) but the largest files are still there... I have some large legacy files in the repository, ~200M, and I really don't want them there... And I don't want to reset the repository to 0 :( SOLUTION: This is the shortest way to get rid of the files: check .git/packed-refs - my problem was that I had there a refs/remotes/origin/master line for a remote repository, delete it, otherwise git won't remove those files (optional) git verify-pack -v .git/objects/pack/#{pack-name}.idx | sort -k 3 -n | tail -5 - to check for the largest files (optional) git rev-list --objects --all | grep a0d770a97ff0fac0be1d777b32cc67fe69eb9a98 - to check what files those are git filter-branch --index-filter 'git rm --cached --ignore-unmatch file_names' - to remove the file from all revisions rm -rf .git/refs/original/ - to remove git's backup git reflog expire --all --expire='0 days' - to expire all the loose objects (optional) git fsck --full --unreachable - to check if there are any loose objects git repack -A -d - repacking the pack git prune - to finally remove those objects

    Read the article

  • Wix Burn issue: Uninstall fails saying "Found dependent"

    - by vivek chaurasiya
    I have made a burn bundle which encapsulates 2 msi (msi1 , msi2) . In the UI I use checkboxes to ask the user to select which MSI to install. Now if user selects one of the msi to install, installation goes fine. But during Uninstall action, the burn log file says : [][:15]: Detected package: Netfx4Full, state: Present, cached: None [][:15]: Detected package: DummyInstallationPackageId3, state: **Absent**, cached: None [][:15]: Detected package: msi2.msi, state: **Present**, cached: Complete [][:15]: Detect complete, result: 0x0 [][:16]: Plan 3 packages, action: Uninstall [][:16]: Will not uninstall package: msi2.msi, found dependents: 1 [][:16]: Found dependent: {08e74372-83f2-4594-833b-e924b418b360}, name: My Test Application In the install scenario, I chose to install msi2 and NOT msi1. My bundle code looks like: <Bundle Name="My Test Application" Version="1.0.0.0" Manufacturer="Bryan" UpgradeCode="CC2A383C-751A-43B8-90BF-A250F7BC2863"> <Chain> <PackageGroupRef Id='Netfx4Full' /> <MsiPackage Id="DummyInstallationPackageId3" SourceFile="msi1.msi" ForcePerMachine="yes" InstallCondition="var1 = 1" > </MsiPackage> <MsiPackage SourceFile="msi2.msi" Vital="yes" Cache="yes" Visible="no" ForcePerMachine="yes" InstallCondition="var2 = 2" > </MsiPackage> </Chain> My OnDetectPackageComplete() looks like: private void OnDetectPackageComplete(object sender, DetectPackageCompleteEventArgs e) { if (e.PackageId == "DummyInstallationPackageId3" ) { if (e.State == PackageState.Absent) InstallEnabled = true; else if (e.State == PackageState.Present) UninstallEnabled = true; } } What should I do so that the burn bundle is freely able to uninstall the msi which the user selected at the time of install. Besides, If I select both msi to install, then uninstall is working fine. IMO, there is some problem b/w the relation of bundle and the 2 msi. Please help me as I am stuck with this problem.

    Read the article

  • CSS Caching and User-Customized CSS - Any Suggestions?

    - by Joe
    I'm just trying to figure out the best approach here... my consideration is that I'd like my users to be able to customize the colors of certain elements on the page. These are not pre-made css they can choose from, but rather styles that can be edited using javascript to change the style, so they cannot be "pre-made" into separate style sheets. I am also generating the css into a cache directory since I am generating the css through a PHP script. I am thinking that, perhaps I should: 1) If cached css file doesn't exist, create css file using the website default style settings. 2) If cached css file does exist, check if user has custom settings, if so, edit the cache file before displaying it. Btw, when I refer to "cached file" I mean the PHP generated css document. My goal is to prevent the need to have PHP re-generate the css file each time, whilst still allowing users, when logged in, to have their customized css settings applied. I will store these settings in a database most likely so when they return it is saved for them when they login.
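    A rough sketch of that flow (written in Python purely for illustration; the real implementation would be the PHP script described above, and all names here are made up). One variant of step 2 that avoids rewriting the cached file per user is to serve the cached default stylesheet and append the logged-in user's overrides after it, since later CSS rules win:

    import os

    CACHE_DIR = "cache/css"                                        # made-up path
    DEFAULTS = {"header_bg": "#336699", "link_color": "#993333"}   # illustrative site defaults

    def render_css(colors):
        return (".header { background: %(header_bg)s; }\n"
                "a { color: %(link_color)s; }\n" % colors)

    def serve_css(user_settings=None):
        """Return the stylesheet: cached default, plus per-user overrides appended."""
        os.makedirs(CACHE_DIR, exist_ok=True)
        cached = os.path.join(CACHE_DIR, "site.css")
        if not os.path.exists(cached):              # 1) generate and cache the default CSS once
            with open(cached, "w") as f:
                f.write(render_css(DEFAULTS))
        with open(cached) as f:
            css = f.read()
        if user_settings:                           # 2) logged-in user: layer their colors on top
            css += render_css(dict(DEFAULTS, **user_settings))
        return css

    print(serve_css({"link_color": "#cc6600"}))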

    Read the article

  • How to cache L2E entity without attach/detach?

    - by Eran Betzalel
    The following code will select a key/value table in the DB and will save the result to the cache: using (var db = new TestEntities()) { if(Cache["locName_" + inventoryLocationName] != null) return Cache["locName_" + inventoryLocationName]; var location = db.InventoryLocationsSet.FirstOrDefault( i => i.InventoryLocationName.Equals( inventoryLocationName, StringComparison.CurrentCultureIgnoreCase)); db.Detach(location); // ???? Cache["locName_" + inventoryLocationName] = location; return location; } But it doesn't work well when I'm trying to use the cached object. Of course, the problem is the different ObjectContext, so I use Attach/Detach and then the problem solves but with a codding horror price: db.Attach(locationFromCache); // ???? try { // Use location as foreign key db.SaveChanges(); } finally { db.Detach(locationFromCache); // ???? } So, how can I use cached objects without using attach/detach methods? What happens if more than one user will have to use this cached object? Should I put the whole thing in a CriticalSection?!

    Read the article

  • Security implications of writing files using PHP

    - by susmits
    I'm currently trying to create a CMS using PHP, purely in the interest of education. I want the administrators to be able to create content, which will be parsed and saved on the server storage in pure HTML form to avoid the overhead that executing PHP script would incur. Unfortunately, I could only think of a few ways of doing so: Setting write permission on every directory where the CMS should want to write a file. This sounds like quite a bad idea. Setting write permissions on a single cached directory. A PHP script could then include or fopen/fread/echo the content from a file in the cached directory at request-time. This could perhaps be carried out in a Mediawiki-esque fashion: something like index.php?page=xyz could read and echo content from cached/xyz.html at runtime. However, I'll need to ensure the sanity of $_GET['page'] to prevent nasty variations like index.php?page=http://www.bad-site.org/malicious-script.js. I'm personally not too thrilled by the second idea, but the first one sounds very insecure. Could someone please suggest a good way of getting this done?
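    For the second approach, the usual way to keep index.php?page=xyz safe is to never treat the parameter as a path or URL at all: validate it against a strict slug pattern (or an explicit whitelist of known pages) and only then resolve it inside the cache directory. A rough sketch of that check, written in Python for illustration (the same logic maps directly onto PHP's preg_match and realpath):

    import os, re

    CACHE_DIR = os.path.abspath("cached")            # where the pre-rendered HTML lives (illustrative)
    SLUG = re.compile(r"^[A-Za-z0-9_-]{1,64}$")      # whitelist of characters allowed in ?page=

    def cached_page_path(page):
        """Return the path of the cached page, or None if the request looks unsafe."""
        if not SLUG.match(page):                     # rejects 'http://...', '../../etc/passwd', etc.
            return None
        path = os.path.realpath(os.path.join(CACHE_DIR, page + ".html"))
        if not path.startswith(CACHE_DIR + os.sep):  # belt and braces: must stay inside the cache dir
            return None
        return path if os.path.exists(path) else None

    print(cached_page_path("xyz"))                           # .../cached/xyz.html if it exists
    print(cached_page_path("http://www.bad-site.org/x.js"))  # None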

    Read the article

  • Why does presentModalViewController not always work?

    - by E-Madd
    My application requires data from a server in order to run. The first thing it does is displays a view controller (LoadingViewController) that is responsible for checking if the data is saved to my PersistentStoreCoordinator. If the data isn't cached locally, it gets it from my server and caches it, and posts a notification that the LoadingViewController is listening for. When that notification comes through, LoadingViewController presents the application's MainViewController using the presentModalViewController with a flip animation. So far, so good... no errors. However, if the application loads and determines the data IS cached - the presentModalViewController does not work and the main application view never appears. No errors. I've even gone as far as adding a button to the Loading view that executes the same code when pressed and the damn thing works. I'm suspicious it has something to do with the timing of it all but I'm clueless as to what I can do to ensure the view is displayed with that flipping animation if the data is already cached locally. Any suggestions?

    Read the article

  • iOS LocationManager is not updating location (Titanium Appcelerator module)

    - by vale4674
    I've made an Appcelerator Titanium module for fetching the device's rotation and location. The source can be found on GitHub. The problem is that it fetches only one cached location, but the device motion data is OK and it is refreshing. I don't use a delegate; I pull that data in my Titanium JavaScript code. If I set "City Run" in Simulator - Debug - Location, nothing happens; the same cached location keeps being returned. Pulling the location is OK because I tried with a native app which does this: textView.text = [NSString stringWithFormat:@"%f %f\n%@", locationManager.location.coordinate.longitude, locationManager.location.coordinate.latitude, textView.text]; And it is working in the simulator and on the device. But the same code, as you can see on GitHub, is not working as a Titanium module. Any ideas? EDIT: I am looking at the GeolocationModule src and I see nothing special there. As I said, the code in my module has to work since it is working in the native app. The "only" problem is that it is not updating the location and it always returns that cached location.

    Read the article

  • ExpertPDF and Caching of URLs

    - by Josh
    We are using ExpertPDF to take URLs and turn them into PDFs. Everything we do is through memory, so we build up the request and then read the stream into ExpertPDF and then write the bits to file. All the files we have been requesting so far are just plain HTML documents. Our designers update CSS files or change the HTML and rerequest the documents as PDFs, but often times, things are getting cached. Take, for example, if I rename the only CSS file and view the HTML page through a web browser, the page looks broke because the CSS doesn't exist. But if I request that page through the PDF Generator, it still looks ok, which means somewhere the CSS is cached. Here's the relevant PDF creation code: // Create a request HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(url); request.UserAgent = "IE 8.0"; request.ContentType = "application/x-www-form-urlencoded"; request.Method = "GET"; // Send the request HttpWebResponse resp = (HttpWebResponse)request.GetResponse(); if (resp.IsFromCache) { System.Web.HttpContext.Current.Trace.Write("FROM THE CACHE!!!"); } else { System.Web.HttpContext.Current.Trace.Write("not from cache"); } // Read the response pdf.SavePdfFromHtmlStream(resp.GetResponseStream(), System.Text.Encoding.UTF8, "Output.pdf"); When I check the trace file, nothing is being loaded from cache. I checked the IIS log file and found a 200 response coming from the request, even after a file had been updated (I would expect a 302). We've tried putting the No-Cache attribute on all HTML pages, but still no luck. I even turned off all caching at the IIS level. Is there anything in ExpertPDF that might be caching somewhere or something I can do to the request object to do a hard refresh of all resources? UPDATE I put ?foo at the end of my style href links and this updates the CSS everytime. Is there a setting someplace that can prevent stylesheets from being cached so I don't have to do this inelegant solution?

    Read the article

  • In Rails, a Sweeper isn't getting called in a Model-only setup

    - by charliepark
    I'm working on a Rails app, where I'm using page caching to store static html output. The caching works fine. I'm having trouble expiring the caches, though. I believe my problem is, in part, because I'm not expiring the cache from my controller. All of the actions necessary for this are being handled within the model. This seems like it should be doable, but all of the references to Model-based cache expiration that I'm finding seem to be out of date, or are otherwise not working. In my environment.rb file, I'm calling config.load_paths += %W( #{RAILS_ROOT}/app/sweepers ) And I have, in the /sweepers folder, a LinkSweeper file: class LinkSweeper < ActionController::Caching::Sweeper observe Link def after_update(link) clear_links_cache(link) end def clear_links_cache(link) # expire_page :controller => 'links', :action => 'show', :md5 => link.md5 expire_page '/l/'+ link.md5 + '.html' end end So ... why isn't it deleting the cached page when I update the model? (Process: using script/console, I'm selecting items from the database and saving them, but their corresponding pages aren't deleting from the cache), and I'm also calling the specific method in the Link model that would normally invoke the sweeper. Neither works. If it matters, the cached file is an md5 hash off a key value in the Links table. The cached page is getting stored as something like /l/45ed4aade64d427...99919cba2bd90f.html. Essentially, it seems as though the Sweeper isn't actually observing the Link. I also read (here) that it might be possible to simply add the sweeper to config.active_record.observers in environment.rb, but that didn't seem to do it (and I wasn't sure if the load_path of app/sweepers in environment.rb obviated that).

    Read the article

  • CakePHP cache only caching one file per action

    - by Jamesz
    Hi, I have a songs controller. Within the songs controller I have a 'view' action which gets passed an id, e.g. /songs/view/1 /songs/view/5 /songs/view/500 When a user visits /songs/view/1, the file is cached correctly and saved as 'songs_view_1.php'. Now for the problem: when a user hits a different song, e.g. /songs/view/2, the 'songs_view_1.php' is deleted and 'songs_view_2.php' is in its place. The cached files will stay there for a day if I don't visit a different URL, and visiting a different action will not affect any other action's cached file. I've tried replacing my 'cake' folder (from 1.2 to 1.2.6), but that didn't do anything. I get no error messages at all and nothing in the logs. Here's my code; I've tried umpteen variations, all ending up with the same problem. var $helpers = array('Cache'); var $cacheAction = array( 'view/' => '+1 day' ); Any ideas? EDIT: After some more testing, this code var $cacheAction = array( 'view/1' => "1 day", 'view/2' => "1 day" ); will cache 'view/1' or 'view/2', but delete the previous page as before. If I visit '/view/3' it will delete the cached page from before... sigh EDIT: Having the same issue on another server with the same code...

    Read the article

  • CXF code first service, WSDL generation; soap:address changes?

    - by jcalvert
    I have a simple Java interface/implementation I am exposing via CXF. I have a jaxws element in my Spring configuration file like this: <jaxws:endpoint id="managementServiceJaxws" implementor="#managementService" address="/jaxws/ManagementService" > </jaxws:endpoint> It generates the WSDL from my annotated interface and exposes the service. Then when I hit http://myhostname/cxf/jaxws/ManagementService?wsdl I get a lovely WSDL. At the bottom in the wsdl:service element, I'll see <soap:address location="http://myhostname/cxf/jaxws/ManagementService"/> However, some time a day or so later, with no application restart, hitting that same URL produces a soap:address that points at localhost instead. This causes a number of problems, but what I really want is to fix it. Right now, there's a particular client to the webservice that sets the endpoint to localhost, because it runs on the same machine. Is it possible the wsdl is getting regenerated and cached and then exposing the 'localhost' version? In part I don't know the exact mechanism by which one goes from a ?wsdl request in CXF to the response. It seems almost certain that it's retrieving some cached version, given that it's supposed to be determining the address by asking the servlet container (Jetty). For reference, I know a stopgap solution is using the hostname on the client and making sure an alias is in place so that it goes over the loopback. EDIT: For reference, I confirmed that if I bring my application up and first hit it over localhost, then querying for the wsdl via the hostname shows the address as localhost. Conversely, first hitting it over the hostname causes localhost requests to show the hostname. So obviously something is getting cached here.

    Read the article

  • iOS - is it possible to cache CGContextDrawImage?

    - by woot586
    I used the timing profile tool to identify that 95% of the time is spent calling the function CGContextDrawImage. In my app there are a lot of duplicate images repeatably being chopped from a sprite map and drawn to the screen. I was wondering if it was possible to cache the output of CGContextDrawImage in an NSMutableDictionay, then if the same sprite is requested again it can be just pull it from the cache rather than doing all the work of clipping and rendering it again. This is what i’ve got but I have not been to successful: Definitions if(cache == NULL) cache = [[NSMutableDictionary alloc]init]; //Identifier based on the name of the sprite and location within the sprite. NSString* identifier = [NSString stringWithFormat:@"%@-%d",filename,frame]; Adding to cache CGRect clippedRect = CGRectMake(0, 0, clipRect.size.width, clipRect.size.height); CGContextClipToRect( context, clippedRect); //create a rect equivalent to the full size of the image //offset the rect by the X and Y we want to start the crop //from in order to cut off anything before them CGRect drawRect = CGRectMake(clipRect.origin.x * -1, clipRect.origin.y * -1, atlas.size.width, atlas.size.height); //draw the image to our clipped context using our offset rect CGContextDrawImage(context, drawRect, atlas.CGImage); [cache setValue:UIGraphicsGetImageFromCurrentImageContext() forKey:identifier]; UIGraphicsEndImageContext(); Rendering a cached sprite There is probably a better way to render CGImage which is my ultimate caching goal but at the moment I’m just looking to successfully render the cached image out however this has not been successful. UIImage* cachedImage = [cache objectForKey:identifier]; if(cachedImage){ NSLog(@"Cached %@",identifier); CGRect imageRect = CGRectMake(0, 0, cachedImage.size.width, cachedImage.size.height); if (NULL != UIGraphicsBeginImageContextWithOptions) UIGraphicsBeginImageContextWithOptions(imageRect.size, NO, 0); else UIGraphicsBeginImageContext(imageRect.size); //Use draw for now just to see if the image renders out ok CGContextDrawImage(context, imageRect, cachedImage.CGImage); UIGraphicsEndImageContext(); }

    Read the article

  • Limiting object allocation over multiple threads

    - by John
    I have an application which retrieves and caches the results of a client's query. The client then requests different chunks of data and the application sends the relevant results and removes them from the cache. A new requirement for this application is that there needs to be a run-time configurable maximum number of results which may be cached. I've taken the naive approach and implemented this by using a counter under a lock which is incremented every time a result is cached and decremented whenever a result is removed from the cache. Unfortunately, this has drastically reduced the application's performance when processing a large number of concurrent requests. I have tried both a critical section lock and a spin-lock; the performance improves a bit with a spin-lock, but is still unacceptably slow. Is there a better way to solve this problem which may improve performance? Right now I have a thread pool that services requests and each request is tied to a Request object which stores the cached results for that particular request. Here is a simplified pseudo-code version of my current implementation:

    void ResultCallback( Result result, Request *request )
    {
        lock totalResultsCached
        lock cachedLimit
        if( totalResultsCached + 1 > cachedLimit )
        {
            unlock cachedLimit
            unlock totalResultsCached
            //cancel the request
            return;
        }
        ++totalResultsCached;
        unlock cachedLimit
        unlock totalResultsCached
        request.add(result)
    }

    void SendResults( int resultsToSend, Request *request )
    {
        while ( resultsToSend > 0 )
        {
            send(request.remove())
            lock totalResultsCached
            --totalResultsCached
            unlock totalResultsCached
            --resultsToSend;
        }
    }
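    One common alternative to a counter guarded by two locks is a counting semaphore: acquire one slot per cached result and release the slot when the result is sent, so threads coordinate around a single primitive and the "over the limit" case becomes a failed try-acquire. The sketch below shows the shape of that design in Python purely for brevity (the question's code is C++-style pseudocode; a C++ version could use a counting semaphore or a condition variable, and making the limit adjustable at run time needs extra bookkeeping in either language). Whether this actually helps depends on how contended the limit is in the real application.

    import threading
    from collections import deque

    def send(result):
        print("sent", result)                    # stand-in for the real network send

    class Request:
        def __init__(self):
            self._results = deque()
        def add(self, result):
            self._results.append(result)
        def remove(self):
            return self._results.popleft()
        def cancel(self):
            print("request cancelled: cache limit reached")

    class ResultCache:
        def __init__(self, cache_limit):
            # One slot per cacheable result; contention is confined to this primitive.
            self._slots = threading.Semaphore(cache_limit)

        def result_callback(self, result, request):
            # Non-blocking acquire mirrors the original "cancel the request" branch.
            if not self._slots.acquire(blocking=False):
                request.cancel()
                return
            request.add(result)

        def send_results(self, results_to_send, request):
            for _ in range(results_to_send):
                send(request.remove())
                self._slots.release()            # frees one slot for future results

    cache = ResultCache(cache_limit=2)
    req = Request()
    for r in ("a", "b", "c"):                    # the third result hits the limit and is cancelled
        cache.result_callback(r, req)
    cache.send_results(2, req)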

    Read the article

  • Apache hanging with MaxClients is reached

    - by Ash White
    My Apache 2.2 (prefork MPM) is hanging when MaxClients is reached, rather than queueing up requests and serving them when child processes become free. When this happens, the web server is totally unresponsive until it is manually restarted. The server stack is Ubuntu 8, MySQL 5, PHP 5. Hardware is dual Xeons (2.8 GHz) with 2GB of RAM. It serves 30,000 - 50,000 pageviews per day. Static images, CSS, and JS are offloaded to a separate server and PHP is cached using eAccelerator. The HTML output of many pages is cached to the filesystem. Relevant Apache directives:
    KeepAlive On
    MaxKeepAliveRequests 50
    KeepAliveTimeout 2
    StartServers 2
    MaxClients 150
    MinSpareThreads 25
    MaxSpareThreads 75
    ThreadsPerChild 25
    MaxRequestsPerChild 2000

    Read the article

  • Stop squid caching 302 and 307 with deny_info

    - by 0xception
    TLDR: 302, 307 and Error pages are being cached. Need to force a refresh of the content. Long version: I've setup a very minimal squid instance running on a gateway which shouldn't not cache ANYTHING but needs to be solely used as a domain based web filter. I'm using another application which redirects un-authenticated users to the proxy which then uses the deny_info option redirects any non-whitelisted request to the login page. After the user has authenticated the firewall rule gets placed so they no longer get sent to the proxy. The problem is that when a user hits a website (xkcd.com) they are unauthenticated so they get redirected via the firewall: iptables -A unknown-user -t nat -p tcp --dport 80 -j REDIRECT --to-port 39135 to the proxy at this point squid redirects the user to the login page using a 302 (i've also tried 307, and i've also make sure the headers are set to no-cache and/or no-store for Cache-Control and Pragma). Then when the user logs into the system they get firewall rule which no longer directs them to the squid proxy. But if they go to xkcd.com again they will have the original redirection page cached and will once again get the login page. Any idea how to force these redirects to NOT be cached by the browser? Perhaps this is a problem w/ the browsers and not squid, but not sure how to get around it. Full squid config below. # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 ::1 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 acl localnet src 192.168.182.0/23 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl https port 443 acl http port 80 acl CONNECT method CONNECT # # Disable Cache # cache deny all via off negative_ttl 0 seconds refresh_all_ims on #error_default_language en # Allow manager access only from localhost http_access allow manager localhost http_access deny manager # Deny access to anything other then http http_access deny !http # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !https visible_hostname gate.ovatn.net # Disable memory pooling memory_pools off # Never use neigh cache objects for cgi-bin scripts hierarchy_stoplist cgi-bin ? # # URL rewrite Test Settings # #acl whitelist dstdomain "/etc/squid/domains-pre.lst" #url_rewrite_program /usr/lib/squid/redirector #url_rewrite_access allow !whitelist #url_rewrite_children 5 startup=0 idle=1 concurrency=0 #http_access allow all # # Deny Info Error Test # acl whitelist dstdomain "/etc/squid/domains-pre.lst" deny_info http://login.domain.com/ whitelist #deny_info ERR_ACCESS_DENIED whitelist http_access deny !whitelist http_access allow whitelist http_port 39135 transparent ## Debug Values access_log /var/log/squid/access-pre.log cache_log /var/log/squid/cache-pre.log # Production Values #access_log /dev/null #cache_log /dev/null # Set PID file pid_filename /var/run/gatekeeper-pre.pid SOLUTION: I believe I might have found a solution to this. After days and days trying to figure it out, only through a random stumble I found client_persistent_connections off server_persistent_connections off This did the trick. So it wasn't so much cache as it was a single persistent connection messing things up. W000T!

    Read the article

  • No package rrdtool-perl available

    - by Pentium10
    On a CentOS release 5.10 (Final) I am trying to install rrdtool to get RRDs.pm but I have no luck. yum install rrdtool-perl Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * rpmforge: mirror.team-cymru.org Excluding Packages in global exclude list Finished Setting up Install Process No package rrdtool-perl available. Nothing to do I tried also librrds-perl but that was not found either. 2. I tried: yum whatprovides "*/RRDs.pm" Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * rpmforge: mirror.team-cymru.org Excluding Packages in global exclude list Finished cpanel-perl-514-Log-Log4perl-1.37-1.cp1136.x86_64 : CPAN module - Log4j implementation for Perl Repo : installed Matched from: Filename : /usr/local/cpanel/3rdparty/perl/514/lib64/perl5/cpanel_lib/Log/Log4perl/Appender/RRDs.pm cpanel-perl-514-RRDs-v1.4.7-1.cp1136.x86_64 : CPAN module - unknown Repo : installed Matched from: Filename : /usr/local/cpanel/3rdparty/perl/514/lib64/perl5/cpanel_lib/x86_64-linux-64int/RRDs.pm then I tried installing but I got: No package cpanel-perl available and the variants (tried with full name, tried both repos listed)

    Read the article

  • moving from Exchange 2003 to Exchange 2010

    - by pcampbell
    Consider a small-medium business' deployment of Exchange 2003. The question is around migrating to Exchange 2010. Here's a bit about the landscape: Current state is 50-100 users/mailboxes with the majority using Outlook 2007 OWA enabled desktop users are NOT running in Cached Exchange Mode laptops users ARE running in Cached Exchange Mode a single Exchange server with modest or reasonable specs for the day (3gz, multi-core, 4gb, Win 2003 32-bit) Questions Do you have any suggestions for the admin team regarding the upgrade path/steps from Exchange 2003 to 2010? Considering the requirement of a 64 bit OS, consider a new separate machine as ready to go with Win 2008. Have I missed any details? Where might virtualization help in this project? Any lessons learned in previous upgrades (2007 or 2010) would be appreciated!

    Read the article
