Search Results

Search found 8125 results on 325 pages for 'metadata cache'.


  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is an MCITP Database Administrator and Developer who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in T-SQL, in late 2006 he transitioned to the role of SQL database administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting.

    On a personal note, I think Jonathan is an extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something that needs to be improved, he has contacted me without hesitation and guided me to improve, change and learn. Through all of this, he has never lost his focus on helping the larger community. I am honored that he has agreed to share his views on the complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events.

    Here is the guest blog post by Jonathan:

    SQL Server troubleshooting is all about correlating related pieces of information together to identify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, "So-and-so application is slow, what's wrong with the SQL Server?" One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call the problem isn't related to SQL Server at all, but every now and then my initial quick checks turn up something that makes me start looking at things further.

    The SQL Server is slow; we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_exec_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() shows a high write stall rate for tempdb, while our user databases show high read stall rates on the data files. A quick check of some performance counters shows Page Life Expectancy bouncing up and down in the 50-150 range, the Free Pages counter consistently hitting zero, and the Free List Stalls/sec counter repeatedly jumping over 10, yet Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem?

    In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system specs shows a server with two dual-core processors and 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max server memory is configured at 6GB, and we think that this should be enough to handle the workload; or is it? This is a unique scenario because there are a couple of things happening inside this system, and they all relate to the root cause of the performance problem.
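    A minimal sketch of that file-stall check (the columns exist in sys.dm_io_virtual_file_stats on SQL Server 2005 and later; the ordering is illustrative):

        -- Average stall per read/write for every database file; high read stalls
        -- on user data files plus high write stalls on tempdb match the pattern
        -- described above.
        SELECT  DB_NAME(vfs.database_id) AS database_name,
                vfs.file_id,
                CASE WHEN vfs.num_of_reads = 0 THEN 0
                     ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_stall_ms,
                CASE WHEN vfs.num_of_writes = 0 THEN 0
                     ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_stall_ms
        FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        ORDER BY avg_read_stall_ms DESC;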
    If we were to query sys.dm_exec_query_stats for the top 10 queries by max_physical_reads, max_logical_reads, and max_worker_time, we might find some queries that were using excessive I/O, and possibly CPU, against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() to see the statement text, and CROSS APPLY to sys.dm_exec_query_plan() to get the execution plan stored in cache. OK, quick check: the plans are pretty big, I see some large index seeks that estimate 2.8GB of data movement between operators, but everything looks like it is optimized as well as it can be. Nothing really stands out in the code, the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem, right? Not exactly!

    If we were to look at how much memory the plan cache is taking by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks, we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I'll give you a second to go back and read that again. Yes, you read it correctly: it says 75% of the memory under 8GB. But you don't have to take my word for it; you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2. In this scenario the application uses an entirely adhoc workload against SQL Server, and this leads to plan cache bloat: up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to completely flush the buffer cache, not just once initially, but a second time during the query's execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember that the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well.

    The real clue here is the memory counters for the instance: Page Life Expectancy, Free Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is constantly fluctuating between 50 and 150 is a sign that the buffer cache is experiencing a complete churn of data once every one to two and a half minutes. Add to that the consistent bottoming out of Free Pages and the Free List Stalls/sec spikes over 10, and you have the perfect memory pressure scenario. All of a sudden it may be that our disk subsystem is not the problem at all, but is instead an innocent bystander and victim.
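    For reference, a minimal sketch of the worst-execution and plan cache checks described above (DMV and column names as they exist in SQL Server 2005; TOP 10 and the ordering are illustrative):

        -- Worst single-execution queries by physical I/O, with text and plan
        SELECT TOP 10
                qs.max_physical_reads, qs.max_logical_reads, qs.max_worker_time,
                st.text, qp.query_plan
        FROM    sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
        ORDER BY qs.max_physical_reads DESC;

        -- Memory taken by the SQL and object plan cache stores
        -- (single_pages_kb/multi_pages_kb became pages_kb in SQL Server 2012)
        SELECT  type,
                SUM(single_pages_kb + multi_pages_kb) / 1024 AS cache_size_mb
        FROM    sys.dm_os_memory_clerks
        WHERE   type IN ('CACHESTORE_SQLCP', 'CACHESTORE_OBJCP')
        GROUP BY type;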
    Side note: the Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. Books Online and a number of other references will tell you that this counter should remain on average above 300, which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems, and most published documents pre-date the predominance of 64-bit computing and the easy availability of larger amounts of memory in SQL Servers. As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were posted. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade.

    Back to our problem and its investigation. There are two things really wrong with this server: first, the plan cache is bloated and consuming excessive memory, and we need to look at that; second, we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on, there were a lot of single-use plans found in sys.dm_exec_cached_plans (where usecounts = 1). Single-use plans waste space in the plan cache, especially when they are adhoc plans for statements with concatenated filter criteria that are not likely to recur with any frequency. SQL Server 2005 doesn't natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide, in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single-use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses properly parameterized calls to the database.

    You didn't write the app, and you can't change its design? OK, well, you could try to force parameterization to occur by creating and keeping plan guides in place, or you could try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED, and that might help. If neither of these helps, we could periodically dump the plan cache for that database, though, as Kalen's blog post referenced above discusses, that has problems of its own; not an ideal scenario.

    The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB the plan cache could use at most 9GB [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache. If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8GB*.75)+(20GB*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we now have SQL Server 2005 Service Packs 2, 3, and 4, which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post.

    In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our server admin and SAN admin, and there wasn't much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn't until I had this same problem occur on another system that I actually figured out how to troubleshoot this down to the root cause. I couldn't believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it.
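    A minimal sketch of the single-use plan check mentioned above (columns from sys.dm_exec_cached_plans; the Adhoc filter is an assumption about the workload):

        -- How many single-use plans are bloating the cache, and how big they are
        SELECT  COUNT(*) AS single_use_plans,
                SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb
        FROM    sys.dm_exec_cached_plans
        WHERE   usecounts = 1
          AND   objtype = 'Adhoc';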
    SQL Server is constantly telling a story to anyone that will listen. As the DBA, you have to sit back and listen to everything it's telling you, then evaluate the big picture and how all the data you can gather from SQL Server about performance relates together. One of the greatest tools out there is actually free, in the form of the Diagnostic Scripts for SQL Server 2005 and 2008 created by MVP Glenn Alan Berry. Glenn's scripts collect the majority of the information that SQL Server has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you.

    When I read Pinal's blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation of why checking memory counters is so important when looking at an I/O-related wait type. I thought I'd chat with him briefly on Google Talk/Twitter DM to point this out, and offer a couple of other points I noted, so that he could add the information to his blog post if he found it useful. Instead he asked that I write a guest blog post about it. I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running and where it may be having problems; it is up to us to play detective and find out how all that information comes together to tell us what's really the problem.

    This blog post is written by Jonathan Kehayias (Blog | Twitter).

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

  • Syntactic sugar in PHP with static functions

    - by Anna
    The dilemma I'm facing is: should I use static classes for the components of an application just to get a nicer-looking API?

    Example, the "normal" way:

        // example component
        abstract class Cache {
            abstract function get($k);
            abstract function set($k, $v);
        }

        class APCCache extends Cache { ... }

        class Application {
            function __construct() {
                $this->cache = new APCCache();
            }

            function whatever() {
                $this->cache->set('blabla', 5);
                print $this->cache->get('blabla');
            }
        }

    Notice how ugly $this->cache->... is. And it gets way uglier when you try to make the application extensible through plugins, because then you have to pass the application instance to its plugins, and you end up with $this->application->cache->...

    With static functions:

        interface CacheAdapter {
            function get($k);
            function set($k, $v);
        }

        class Cache {
            public static $ad;

            public static function setAdapter(CacheAdapter $ad) {
                static::$ad = $ad;
            }

            public static function get($k) {
                return static::$ad->get($k);
            }
            // ...
        }

        class APCCache implements CacheAdapter { ... }

        class Application {
            function __construct() {
                Cache::setAdapter(new APCCache());
            }

            function whatever() {
                Cache::set('blabla', 5);
                print Cache::get('blabla');
            }
        }

    Here it looks nicer because you just call Cache::get() everywhere. The disadvantage is that I lose the ability to extend this class easily, though I've added a setAdapter() method to make the class extensible to some point. I'm relying on the fact that I won't ever need to rewrite or replace the cache wrapper, and that I won't need to run multiple application instances simultaneously (it's basically a site, and nobody works with two sites at the same time).

    So, am I doing it wrong?

  • ESX 3.5 refuses to update

    - by Speeddymon
    I have a set of ESX 3.5 servers in two different datacenters: one is DR, one is production. They are on the same VLAN, so I can access any of them on the private network from my vCenter server. Last month, as a learning experience (I hadn't dealt with ESX much before), I updated the DR server. Other than finding out that a couple of bundles had to be installed manually in order to get the rest to install from vCenter, it went off without a hitch.

    Now I'm trying to do the same for our production servers and it is not working. I've googled around for the error I get during the scan and investigated loads of different solutions (editing the integrity file, checking DNS, etc.) -- I did already install the two bundles that had to be installed manually -- but scanning from vCenter is just not working. Side note: I did just scan the DR server again and that scan works fine, so it shouldn't be a problem with vCenter that has cropped up recently; it has to be something else. The error I get is:

        Patch metadata for (servername) missing. Please download updates metadata first.
        Failed to scan (servername) for updates.

    I'm all out of ideas on how to make this work, so any help would be hugely appreciated.

  • Getting table schema from a query

    - by Appu
    As per MSDN, SqlDataReader.GetSchemaTable returns column metadata for the query executed. I am wondering: is there a similar method that will give table metadata for a given query? I mean, which tables are involved and what aliases they have.

    In my application, I get the query and I need to append the WHERE clause programmatically. Using GetSchemaTable(), I can get the column metadata and the table each column belongs to. But even though the table has an alias, it still returns the real table name. Is there a way to get the alias name for that table? The following code shows getting the column metadata:

        const string connectionString = "your_connection_string";
        string sql = "select c.id as s, c.firstname from contact as c";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(sql, connection))
        {
            connection.Open();
            SqlDataReader reader = command.ExecuteReader(CommandBehavior.KeyInfo);
            DataTable schema = reader.GetSchemaTable();
            foreach (DataRow row in schema.Rows)
            {
                foreach (DataColumn column in schema.Columns)
                {
                    Console.WriteLine(column.ColumnName + " = " + row[column]);
                }
                Console.WriteLine("----------------------------------------");
            }
            Console.Read();
        }

    This gives me the details of the columns correctly. But when I look at BaseTableName for the column id, it gives contact rather than the alias name c. Is there any way to get the table schema and aliases from a query like the above? Any help would be great!

  • InPlaceBitmapMetadataWriter.TrySave() returns true but does nothing

    - by mephisto123
    On some .JPG files (EPS previews generated by Adobe Illustrator), on Windows 7, InPlaceBitmapMetadataWriter.TrySave() returns true after some SetQuery() calls but does nothing. Code sample:

        BitmapDecoder decoder;
        BitmapFrame frame;
        BitmapMetadata metadata;
        InPlaceBitmapMetadataWriter writer;

        decoder = BitmapDecoder.Create(s,
            BitmapCreateOptions.PreservePixelFormat | BitmapCreateOptions.IgnoreColorProfile,
            BitmapCacheOption.Default);
        frame = decoder.Frames[0];
        metadata = frame.Metadata as BitmapMetadata;
        writer = frame.CreateInPlaceBitmapMetadataWriter();
        try
        {
            writer.SetQuery("System.Title", title);
            writer.SetQuery(@"/app1/ifd/{ushort=" + exiftagids[0] + "} ", (title + '\0').ToCharArray());
            writer.SetQuery(@"/app13/irb/8bimiptc/iptc/object name", title);
            return writer.TrySave();
        }
        catch
        {
            return false;
        }

    You can reproduce the problem (if you have Windows 7) by downloading the image sample and using this code to set the title on that image. The image has enough room for metadata, and this code sample works fine on my WinXP machine. The same code also works fine on Win7 with other .JPG files. Any ideas are welcome :)

  • How to clear Windows disk read cache?

    - by Sebastiaan Megens
    For performance testing I need to clear Windows' disk read cache. I tried googling, but I couldn't find anything other than rebooting or other manual approaches. Before I give in and do that, I'd like to know if anyone knows of a way to clear the Windows disk read cache. I'm testing on Windows 7, but I'm also interested in Windows XP solutions.

  • ASP.NET: Create self re-caching objects?

    - by BlackTea
    How can I make a cached object re-cache itself with updated info when the cache has expired? I'm trying to prevent the next user who requests the cached object from having to fetch the data, set the cache, and then use it. Is there any background method/event I can tie the object to, so that when it expires it just calls the fetch method itself and re-caches?
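    One way to get that behavior with the System.Web.Caching API is to re-insert the item from a CacheItemRemovedCallback. A minimal sketch, assuming a GetData() method that does the expensive fetch (the names and the 5-minute expiration are illustrative, not from the question):

        using System;
        using System.Web;
        using System.Web.Caching;

        public static class SelfRefreshingCache
        {
            private const string Key = "MyData";

            public static void Load()
            {
                HttpRuntime.Cache.Insert(
                    Key,
                    GetData(),                      // the expensive fetch
                    null,                           // no dependencies
                    DateTime.UtcNow.AddMinutes(5),  // absolute expiration
                    Cache.NoSlidingExpiration,
                    CacheItemPriority.Normal,
                    OnRemoved);                     // fires when the item is removed
            }

            private static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
            {
                // re-fetch and re-cache so the next reader gets a warm cache
                if (reason == CacheItemRemovedReason.Expired)
                    Load();
            }

            private static object GetData()
            {
                // stand-in for the real data access
                return DateTime.UtcNow;
            }
        }

    One caveat worth knowing: the callback runs on a background thread when the cache scavenges the item, so the re-fetch must be safe to run outside a request context.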

  • problem with custom NSURLProtocol and caching on iPhone

    - by TomSwift
    My iPhone app embeds a UIWebView which loads HTML via a custom NSURLProtocol handler I have registered. My problem is that resources referenced in the returned HTML, which are also loaded via my custom protocol handler, are cached and never reloaded. In particular, my stylesheet is cached:

        <link rel="stylesheet" type="text/css" href="./styles.css" />

    The initial request to load the HTML in the UIWebView looks like this:

        NSString* strUrl = [NSMutableString stringWithFormat: @"myprotocol:///entry?id=%d", entryID];
        NSURL* url = [NSURL URLWithString: strUrl];
        [_pCurrentWebView loadRequest: [NSURLRequest requestWithURL: url
                                                        cachePolicy: NSURLRequestReloadIgnoringLocalCacheData
                                                    timeoutInterval: 60]];

    (Note the cache policy is set to ignore, and I've verified this cache policy carries through to subsequent requests for page resources on the initial load.)

    The protocol handler loads the HTML from a database and returns it to the client using code like this:

        // create the response record
        NSURLResponse *response = [[NSURLResponse alloc] initWithURL: [request URL]
                                                            MIMEType: mimeType
                                               expectedContentLength: -1
                                                    textEncodingName: textEncodingName];

        // get a reference to the client so we can hand off the data
        id client = [self client];

        // turn off caching for this response data
        [client URLProtocol: self didReceiveResponse: response cacheStoragePolicy: NSURLCacheStorageNotAllowed];

        // set the data in the response to our jfif data
        [client URLProtocol: self didLoadData: data];
        [data release];

    (Note the response cache policy is "not allowed".)

    Any ideas how I can make it NOT cache my styles.css resource? I need to be able to dynamically alter the content of this resource on subsequent loads of HTML that reference this file. I thought clearing the shared URL cache would work, but it doesn't:

        [[NSURLCache sharedURLCache] removeAllCachedResponses];

    One thing that does work, but is terribly inefficient, is to dynamically cache-bust the URL for the stylesheet by adding a timestamp parameter:

        <link rel="stylesheet" type="text/css" href="./styles.css?ts=1234567890" />

    To make this work I have to load my HTML from the db and search-and-replace the URL for the stylesheet with a cache-busting parameter that changes on each request. I'd rather not do this.

    My presumption is that there would be no problem if I loaded my content via the built-in HTTP protocol. In that case, I'm guessing that the UIWebView looks at any Cache-Control flags in the NSHTTPURLResponse object's HTTP headers and abides by them. Since my NSURLResponse object has no HTTP headers (it's not HTTP...), perhaps UIWebView just decides to cache the resource (ignoring the NSURLRequest caching directive?). Ideas?

  • How might maven's buildNumber metadata become inconsistent across multiple build agents?

    - by Brian Laframboise
    We recently added a second build machine to our build environment and began experiencing very odd occasional build failures.

    I have two separate Maven build machines, A and B, each running Maven 2.2.1 and communicating with a shared Nexus 1.5.0 repository manager. My problem is that builds on B will occasionally fail because B refuses to download a newer version of a common dependency 'acme-1.0.0-SNAPSHOT' previously built by A and uploaded to Nexus. Looking inside the local repositories on both machines, I noticed some oddities in the repository metadata.

    Machine A's acme\1.0.0-SNAPSHOT\maven-metadata-nexus.xml:

        <metadata>
          <groupId>acme</groupId>
          <artifactId>acme</artifactId>
          <version>1.0.0-SNAPSHOT</version>
          <versioning>
            <snapshot>
              <buildNumber>1</buildNumber>
            </snapshot>
            <lastUpdated>20100525173546</lastUpdated>
          </versioning>
        </metadata>

    Machine B's acme\1.0.0-SNAPSHOT\maven-metadata-nexus.xml:

        <metadata>
          <groupId>acme</groupId>
          <artifactId>acme</artifactId>
          <version>1.0.0-SNAPSHOT</version>
          <versioning>
            <snapshot>
              <buildNumber>2</buildNumber>
            </snapshot>
            <lastUpdated>20100519232317</lastUpdated>
          </versioning>
        </metadata>

    In Nexus's acme/1.0.0-SNAPSHOT/maven-metadata.xml:

        <metadata>
          <groupId>acme</groupId>
          <artifactId>acme</artifactId>
          <version>1.0.0-SNAPSHOT</version>
          <versioning />
        </metadata>

    If I'm interpreting the metadata files correctly (documentation online is scant), it appears machine B believes it has a newer version of the acme dependency (based on buildNumber), despite the fact that machine A last built it six days after machine B did (based on the timestamps). Nexus also appears to be unaware of a universally correct buildNumber.

    How could this situation possibly arise? What could I do to prevent my builds from failing due to inconsistent metadata? Have you experienced anything similar?

    Important notes:

    - Both build machines have settings.xml files where the updatePolicy is "always".
    - Nexus does indeed have the newer version of acme that was built by A. B simply refuses to download it.
    - A and B are the only machines uploading to Nexus.
    - Both servers share the same system time.
    - All processes involved have write privileges to the metadata files, so the files can be updated as necessary.
    - I was unable to find any open Maven or Nexus issues describing this behaviour.
    - Our CI server (Atlassian Bamboo) prevents builds of the same artifact from happening concurrently, so a race condition while uploading to Nexus is rather unlikely.

  • [Flex] Caching canvas into ByteArray

    - by Eugene
    Task (all in code, not visual): create a canvas, place some labels in it, draw some lines, then cache it as a ByteArray. The problem is that if I cache an object that is already drawn on the screen, it works great; but if I cache a canvas that was created a few lines earlier, the result is a white image. Is there any solution for caching a display object that was created in code but is not intended to be displayed at all?

  • Can I use a static cache helper method in a .NET MVC controller?

    - by Euston
    I realise there have been a few posts regarding where to add a cache check/update and the separation of concerns between the controller, the model and the caching code. There are two great examples that I have tried to work with, but being new to MVC I wonder which one is cleaner and suits the MVC methodology best. I know you need to take into account DI and unit testing.

    Example 1 (helper method with delegate), in the controller:

        var myObject = CacheDataHelper.Get(thisID, () => WebServiceServiceWrapper.GetMyObjectBythisID(thisID));

    Example 2 (check for cache in the model class), in the controller:

        var myObject = WebServiceServiceWrapper.GetMyObjectBythisID(thisID);

    then in the model class:

        if (!CacheDataHelper.Get(cachekey, out myObject))
        {
            // do some repository processing
            // add object to cache
            CacheDataHelper.Add(myObject, cachekey);
        }

    Both use a static cache helper class, but the first example uses a method signature that takes a delegate for the repository method being called. If the data is not in the cache, the delegate is called and the cache helper class handles adding or updating the current cache. In the second example the cache check is part of the repository method, with an extra line to call the cache helper Add method to update the current cache.

    Due to my lack of experience and knowledge, I am not sure which one is best suited to MVC. I like the idea of calling the cache helper with the delegate in order to remove any cache code from the repository, but I am not sure if using the static method in the controller is ideal. The second example deals with that, but now there is no separation between the caching check and the repository lookup. Perhaps that is not a problem, since you already know the data requires caching anyway?
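    For reference, a minimal sketch of what the delegate-based helper in Example 1 might look like (assuming HttpRuntime.Cache and an illustrative 10-minute absolute expiration; CacheDataHelper is the question's name, the body is an assumption):

        using System;
        using System.Web;
        using System.Web.Caching;

        public static class CacheDataHelper
        {
            public static T Get<T>(object id, Func<T> fetch) where T : class
            {
                string key = typeof(T).FullName + ":" + id;   // illustrative key scheme
                var cached = HttpRuntime.Cache[key] as T;
                if (cached == null)
                {
                    cached = fetch();   // cache miss: call the repository delegate
                    if (cached != null)
                        HttpRuntime.Cache.Insert(key, cached, null,
                            DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);
                }
                return cached;
            }
        }

    The controller call from Example 1 then works unchanged; the expiration policy is the main design decision left open.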

  • Installing a DLL to the Global Assembly Cache (GAC)

    - by DAXShekhar
    Install your DLL assembly by using 'gacutil.exe'. Before installing the DLL, ensure it has a strong name; to assign a strong name, refer to the link Assigning a DLL strong name.   1) Open the command prompt and navigate to the folder containing gacutil. 2) To install a DLL assembly: gacutil /i "C:\[PathToBinDirectoryInVSProject]\gac.dll" 3) To uninstall: gacutil /u "Name_of_The_DLL"
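    To confirm the install worked, gacutil can also list what is in the GAC with its /l switch (the assembly name 'gac' here just follows the example above):

        gacutil /l gac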

  • How to implement Cache in web apps?

    - by Jhonnytunes
    This is really two questions. I'm doing a university project for storing baseball players' statistics, and from the raw baseball data I have to calculate the yearly score for whichever player is being displayed. The background is: if, let's say, 10,000 users hit the page for "Alex Rodriguez", the application has to calculate A-Rod's stats by year 10,000 times instead of just reading them from somewhere they are temporarily saved. Here I go:

    What is the best method for caching this type of data? Do I have to use the same database, with some temporary values stored in it, or create a Web Service for that? What reading about web caching do you recommend?

  • Why is facebook cache buggy?

    - by IAdapter
    I just started using Facebook and I see that many times when I add something to my profile and visit it later, it's not there. I bet the reason is that the page is cached and not updated very often. Is this on purpose or is it a bug?

    P.S. For example, I added the music I like and later saw that it was as if I had not added it, but the next day when I visited again it was there. I saw this in two web browsers, so it's a Facebook bug. Does it have something to do with scalability?

  • Runescape - Ubuntu 12.04 - Jagex cache folder

    - by user214179
    Does anyone here know how to hide the jagexcache folder, the jagexappletviewer file and the jagex_cl_runescape_LIVE file in the home folder? The problem is this: every time you play RuneScape (on both Windows and Linux), it creates those files and folders. On Windows you can hide them and everything is fine. On Linux, if you prefix the folder or file name with a dot to hide it, those files and folders are created again the next time you play RuneScape, because the game cannot find them. As far as I know the game is Java-based (it needs the IcedTea Java plugin and Java 7), so is there any way of changing the directory where the game puts all those files, like ~/Documents instead of just the home folder? Thanks!

  • Recovering a lost website with no backup?

    - by Jeff Atwood
    Unfortunately, our hosting provider experienced 100% data loss, so I've lost all content for two hosted blog websites:

    - http://blog.stackoverflow.com
    - http://www.codinghorror.com

    (Yes, yes, I absolutely should have done complete offsite backups. Unfortunately, all my backups were on the server itself. So save the lecture; you're 100% absolutely right, but that doesn't help me at the moment. Let's stay focused on the question here!)

    I am beginning the slow, painful process of recovering the websites from web crawler caches. There are a few automated tools for recovering a website from internet web spider (Yahoo, Bing, Google, etc.) caches, like Warrick, but I had some bad results using them:

    - My IP address was quickly banned from Google for using it
    - I get lots of 500 and 503 errors and "waiting 5 minutes…"
    - Ultimately, I can recover the text content faster by hand

    I've had much better luck by using a list of all blog posts, clicking through to the Google cache and saving each individual file as HTML. While there are a lot of blog posts, there aren't that many, and I figure I deserve some self-flagellation for not having a better backup strategy. Anyway, the important thing is that I've had good luck getting the blog post text this way, and I am definitely able to get the text of the web pages out of the Internet caches.

    Based on what I've done so far, I am confident I can recover all the lost blog post text and comments. However, the images that go with each blog post are proving… more difficult.

    Any general tips for recovering website pages from Internet caches, and in particular, places to recover archived images from website pages?

    (And, again, please, no backup lectures. You're totally, completely, utterly right! But being right isn't solving my immediate problem… unless you have a time machine…)

  • Source code not matching uploaded HTML file

    - by benhowdle89
    I'm not sure if this is the right place to ask, but I'm having a hugely frustrating problem with Coda and my website (I'm not sure which one is causing the issue).

    I'm using Coda to make changes to my website. Coda uses built-in FTP to save changes to your web page, so when you hit Save, it uploads the new file. I've been using Coda for months and never had a problem until now. I am making changes in the HTML of my index.php and hitting Save; it successfully uploads the file, but no changes are reflected in the source code in ANY browser. I even logged into cPanel on my website (i.e. www.example.com:2082) and looked at the file - the changes have been made successfully. But the actual web page's source code in the browser shows no changes. I have tried adding which made no difference.

    Interestingly, when I make changes to style.css the changes are instant. I have emptied the cache on all of my browsers but I'm still having the issue. Does this sound like a Coda problem, or has anyone heard of such a thing?

  • AngularJS dealing with large data sets (Strategy)

    - by Brian
    I am working on a personal temperature-logging viewer based on my Raspberry Pi curl'ing data into my web server's API. Temperatures are taken every 2 seconds, and I can have several temperature sensors posting data. Needless to say, I will have a lot of data to handle, even within the scope of an hour. I have implemented a very simple paging API on the server so the server doesn't time out; it currently returns data in units of 1000 records per call, paging through the data.

    I had the idea to initially show, say, the last 20 minutes of data from a sensor (or all sensors, depending on user choices), then allow the user to select other timeframes to show data for. The issue comes in when you want to view all sensors or an extended time period (say 24 hours).

    Is there a best practice for handling this large amount of data? Would it be useful to load those first 20 minutes into the live view and then cache something like the last 24 hours into local storage? I haven't been able to find a decent example of this in use, even though there are a lot of ways to approach the problem. I am just looking for suggestions as to what might provide a good balance between good performance and not caching the entire data set on the client side (since beyond a week of data this might not be feasible).
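    As one possible shape for the local-storage idea (a minimal sketch; the key scheme, reading format, and 24-hour window are illustrative assumptions, not from the question):

        // Merge newly fetched readings into localStorage, keeping only the last 24h.
        function cacheReadings(sensorId, readings) {
          var key = 'readings:' + sensorId;
          var existing = JSON.parse(localStorage.getItem(key) || '[]');
          var cutoff = Date.now() - 24 * 60 * 60 * 1000;
          var merged = existing.concat(readings).filter(function (r) {
            return r.time >= cutoff;   // each reading assumed to be { time: ms, value: number }
          });
          localStorage.setItem(key, JSON.stringify(merged));
        }

        // Read back whatever portion of the window is cached for a sensor.
        function readCached(sensorId, sinceMs) {
          var cached = JSON.parse(localStorage.getItem('readings:' + sensorId) || '[]');
          return cached.filter(function (r) { return r.time >= sinceMs; });
        }

    Note that at one reading every 2 seconds, 24 hours is roughly 43,200 readings per sensor, so several sensors can approach the typical ~5MB localStorage quota; trimming or downsampling older data is worth considering.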

  • java annotations: library to override annotations with xml files

    - by flybywire
    Java has annotations, and that is good. However, some developers feel that it is best to annotate code with metadata using XML files; others prefer annotations but would use XML metadata to override the annotations in source code. I am writing a Java framework that uses annotations. The question is: is there a standard way to define and parse metadata from XML files? I think this is something every framework that uses annotations could benefit from, but I can't seem to find anything like this on the Internet. Must I roll my own XML parsing/validation, or has someone already done something like this?

  • How is external memory, internal memory, and cache organized?

    - by goldenmean
    Consider a system as follows: a hardware board with, say, an ARM Cortex-A8 and a NEON vector coprocessor, and embedded Linux running on the Cortex-A8. In this environment, if some application is executing - say, a video decoder - then:

    - How is it decided which buffers will be in external memory and which will be allocated in internal SRAM, etc.?
    - When one calls calloc/malloc on such a system/code, is the returned pointer from internal or external memory?
    - Can a user have buffers allocated in the memory of his choice (internal/external)?
    - In ARM architectures there is another memory called tightly coupled memory (TCM). What is it, and how can a user enable and use it? Can I declare buffers in this memory?
    - Do I need to see the memory map (if any) of the hardware board to understand all the different physical memories present on a typical board?
    - How much of a role does the OS play in distinguishing these different memories?

    Sorry for the multiple questions, but I think they are all interlinked.

  • Alternative to jQuery .data()?

    - by thebossman
    I'm a big fan of jQuery's .data() method, but I can't always use it. Often I am rendering HTML templates that I pass back via AJAX, and I need to attach metadata to each of the elements in the template. For example:

        <ul>
        {% for item in itemlist %}
            <li metadata="{{ item.metadata }}">{{ item.name }}</li>
        {% endfor %}
        </ul>

    I know attaching made-up attributes to store data is bad practice (and it might not even work in older versions of IE). What is the best practice? Is there a good alternative to this method?
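    One common alternative (an illustrative sketch, not necessarily the best practice the question is after) is HTML5 data-* attributes, which jQuery's .data() reads automatically as of jQuery 1.4.3:

        <ul>
        {% for item in itemlist %}
            <li data-meta="{{ item.metadata }}">{{ item.name }}</li>
        {% endfor %}
        </ul>

        <script>
        // jQuery maps data-meta to .data('meta'); later .data('meta', x) calls
        // update jQuery's internal store without touching the attribute.
        $('li').each(function () {
            console.log($(this).data('meta'));
        });
        </script>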
