Search Results

Search found 15535 results on 622 pages for 'index buffer'.


  • Serving index.html from a subdirectory

    - by xbonez
    In my document root, I have two directories: home and foobar, each with its own index.html file. How can I set things up so that when someone visits my site at example.com, they see the contents of home/index.html? I tried using an index.php with a redirect in the document root, as well as a .htaccess redirect, but both of them change the URL in the browser to example.com/home/, which I would ideally like to avoid.
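
    An external redirect necessarily changes the visible URL; an internal rewrite does not. A minimal .htaccess sketch along those lines, assuming mod_rewrite is available (the home/ path comes from the question):

        RewriteEngine On
        # Serve home/index.html for the bare root without redirecting,
        # so the browser's address bar keeps showing example.com
        RewriteRule ^$ home/index.html [L]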

    Read the article

  • How to Modify Windows 7 Search to Index Removable Drives

    - by AMissico
    I have over 8GB in my "Code Library" that I maintain on a 64GB "SanDisk Ultra Backup" USB device. Windows Search 4.0 (installed on Windows XP) can index removable drives, but Windows 7 (which also uses Windows Search 4.0) cannot, because the USB device identifies itself as a "Removable" drive and Windows 7 refuses to index removable drives. How can I modify Windows 7 Search to index removable drives? All suggestions welcome and greatly appreciated.

    Read the article

  • ext4 loopback device, Buffer I/O Error on reboot

    - by cvb
    I am trying to mount a loopback device on my ext4-formatted SSD drive. I get these errors when I reboot on Linux kernel 2.6.38.8:

        Buffer I/O error on device loop1, logical block 0

    Here is what I do:

        dd if=/dev/zero of=/mnt/s/lodev bs=4096 count=250000
        mkfs.ext4 /mnt/s/lodev
        mount -n -o loop,rw /mnt/s/lodev /mnt/test

    The loopback mount is successful, but on reboot I get the errors mentioned above. Even mounting with 'sync' or 'data=writeback' does not help. I tried losetup on a device, but I see the same behavior. I also reformatted the base device, created the loopback device, and mounted it as above; I still see these errors. I do not see them when I format as vfat. I'd appreciate any suggestions on this problem.

    Read the article

  • MySQL: create index on 1.4 billion records

    - by SiLent SoNG
    I have a table with 1.4 billion records. The table structure is as follows:

        CREATE TABLE text_page (
            text VARCHAR(255),
            page_id INT UNSIGNED
        ) ENGINE=MYISAM DEFAULT CHARSET=ascii

    The requirement is to create an index over the column text. The table size is about 34G. I have tried to create the index with the following statement:

        ALTER TABLE text_page ADD KEY ix_text (text)

    After 10 hours of waiting I finally gave up on this approach. Is there any workable solution to this problem? UPDATE: the table is unlikely to be updated, inserted into, or deleted from. The reason to create an index on the column text is that this kind of SQL query would be executed frequently:

        SELECT page_id FROM text_page WHERE text = ?

    UPDATE: I have solved the problem by partitioning the table. The table is partitioned into 40 pieces on column text. Creating the index on the table then takes about 1 hour to complete. It seems that MySQL index creation becomes very slow when the table gets very big, and partitioning reduces it into smaller chunks.
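
    For reference, the partitioning described in the update might look roughly like this (a sketch; the 40-way split is from the question, while the choice of KEY partitioning is an assumption):

        CREATE TABLE text_page (
            text VARCHAR(255),
            page_id INT UNSIGNED
        ) ENGINE=MYISAM DEFAULT CHARSET=ascii
          PARTITION BY KEY (text) PARTITIONS 40;

        -- The index is then built partition by partition:
        ALTER TABLE text_page ADD KEY ix_text (text);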

    Read the article

  • Fresh install of nginx causes browser to download index.html instead of opening it

    - by 010110110101
    When I view http://localhost:90 in Chrome, the file is downloaded instead of displayed. This question has been asked many times on SO, but about index.php files; my problem is with a plain-Jane HTML file, not a PHP file, which hasn't been asked yet. I was hoping the solution would be similar, but I haven't been able to figure it out. Here's my example.com.conf:

        server {
            server_name localhost;
            listen 90;
            root /var/www/example.com/html
            index index.html
            location / {
                try_file $uri $uri/ =404;
            }
        }

    My index.html file contains only two words, no markup:

        Hello World

    I think it's the mime.types, but the mime.types file has the entry for html in it. This is a fresh nginx install, and nginx -t reports "test is successful".
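
    As quoted, the config is missing semicolons after the root and index directives, and the directive name is try_files, not try_file; whether that is the excerpt's formatting or the actual file, a corrected server block would look like this sketch:

        server {
            server_name localhost;
            listen 90;
            root /var/www/example.com/html;
            index index.html;
            location / {
                try_files $uri $uri/ =404;
            }
        }

    If the file still downloads after that, the next thing to check is that the http block actually loads the type map (include mime.types;) or sets a default_type of text/html, since nginx's built-in default of application/octet-stream is exactly what makes browsers download files.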

    Read the article

  • MySQL reclaim index space after large delete?

    - by cdunn
    After performing a large delete in MySQL, I understand you need to run a null ALTER to reclaim disk space. Is this also true for reclaiming index space? We have tables using 10G of index space and have deleted/archived large chunks of this data, and we're unsure whether we need to rebuild the table in order to decrease the size of the index. Can anyone offer any advice? We are trying to avoid rebuilding the table since it would take quite a while and lock the table. Thanks!
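
    For reference, the null ALTER rebuilds the table together with all of its indexes, so index space is reclaimed by the same operation; both statements below trigger a full rebuild (table name is hypothetical):

        -- Null ALTER: rewrites the table and recreates every index
        ALTER TABLE mytable ENGINE=InnoDB;

        -- Same effect; for InnoDB this is mapped to a rebuild as well
        OPTIMIZE TABLE mytable;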

    Read the article

  • multi-dimensional array problem in RGSS (RPG Maker XP)

    - by AzDesign
    This is my first day writing script code in RMXP. I read tutorials, Ruby references, etc., and I found myself stuck on a weird problem. Here is the scenario: I made a custom script to display layered images. I created the class, an instance variable to hold the array, and a simple method to add an element into it. The draw method (skipping the rest of the code to this part; the loop range was garbled in this excerpt and is reconstructed here):

        def draw
          image = []
          index = 0
          for i in 0...@components.size
            if image.size > 0
              index = image.size
            end
            image[index] = Sprite.new
            image[index].bitmap = RPG::Cache.picture(@components[i][0] + '.png')
            image[index].x = @x + @components[i][1]
            image[index].y = @y + @components[i][2]
            image[index].z = @z + @components[i][3]
            @test =+ 1
          end
        end

    I created an event that runs this script:

        $layerz = Layerz.new
        $layerz.configuration[0] = ['root',0,0,1]
        $layerz.configuration[1] = ['bark',0,10,2]
        $layerz.configuration[2] = ['branch',0,30,3]
        $layerz.configuration[3] = ['leaves',0,60,4]
        $layerz.draw

    Run, trigger the event, and the result: ERROR! Undefined method `[]' for nil:NilClass, pointing at this line in the draw method:

        image[index].bitmap = RPG::Cache.picture(@components[i][0] + '.png')

    THEN, just for testing, I changed the method so that every @components[i][...] reads @components[0][...] instead, and IT WORKS, but only for the root, since it no longer iterates to the next array index. I'm stuck here. See:

    - in a single-level array, @components[0] and @components[i] cause no problem
    - in a multi-dimensional array, @components[0][0] causes no problem, BUT
    - in a multi-dimensional array, @components[i][0] produces the error mentioned above

    Any suggestion to fix the error? Or did I write something wrong?
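
    For what it's worth, a nil gap in @components (for example, a skipped configuration index) would raise exactly the undefined method `[]' for nil:NilClass error seen here, so that is worth checking first. The manual index bookkeeping can also be sidestepped entirely; a refactor sketch (hypothetical, assuming @components holds [name, x, y, z] rows as in the event script above):

        def draw
          @images = []
          @components.each do |name, dx, dy, dz|
            next if name.nil?              # skip any gaps in the array
            sprite = Sprite.new
            sprite.bitmap = RPG::Cache.picture(name + '.png')
            sprite.x = @x + dx
            sprite.y = @y + dy
            sprite.z = @z + dz
            @images << sprite
          end
        end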

    Read the article

  • How to get search engines to properly index an ajax driven search page

    - by Redtopia
    I have an ajax-driven search page that will allow users to search through a large collection of records. Each search result points to index.php?id=xyz (where xyz is the id of the record). The initial view does not have any records listed, and there is no interface that allows you to browse through all records; you can only conduct a search. How do I build the page so that spiders can crawl each record? Or is there another way (outside of this specific search page) to point spiders at a list of all records? FYI, the collection is rather large, so dumping links to every record in a single request is not a workable solution; outputting the records must be done in multiple requests. Each record can be viewed via a single page (e.g. "record.php?id=xyz"). I would like all the records indexed without anything being indexed from the sitemap page that shows where the records exist, for example:

        <a href="/result.php?id=record1">Record 1</a>
        <a href="/result.php?id=record2">Record 2</a>
        <a href="/result.php?id=record3">Record 3</a>
        <a href="/seo.php?page=2">next</a>

    Assuming this is the correct approach, I have these questions: How would the search engines find the crawl page? Is it possible to prevent the search engines from indexing the words "Record 1", etc. and "next"? Can I output only the links? Or maybe something like: …
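
    For what it's worth, the usual suggestion for this situation, separate from the crawl-page idea sketched in the question, is an XML sitemap that lists every record URL directly; crawlers find it via robots.txt or Webmaster Tools, and no anchor text is involved at all. A sketch (ids hypothetical):

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <!-- one <url> entry per record -->
          <url><loc>http://example.com/record.php?id=1</loc></url>
          <url><loc>http://example.com/record.php?id=2</loc></url>
        </urlset>

    Sitemap files are capped at 50,000 URLs each, so a large collection is split across several files tied together by a sitemap index file.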

    Read the article

  • SEO - Index images (lazyload)

    - by Guilherme Nascimento
    Note: my question is not about JavaScript. I'm developing a plugin for jQuery/MooTools/Prototype that works with the DOM. The plugin is meant to improve page performance (a better user experience) and will be distributed to other developers so they can use it in their projects. How lazy loading works: images are only loaded when you scroll down the page, as in this demo: http://www.appelsiini.net/projects/lazyload/enabled_timeout.html (LazyLoad). But it should not require HTML5; I am referring to this attribute: data-src="image.jpg". Two good examples of websites using lazy loading are youtube.com (suggested videos) and facebook.com (photo galleries). I believe the best alternative would be to use:

        <a href="image.jpg">Content for ALT=""</a>

    and convert it using JavaScript into:

        <img alt="Content for ALT=\"\"" src="image.jpg">

    Then you ask me: why do you want to do that anyway? Because HTML5 is not yet supported by every browser (especially mobile browsers), and the data-src="image.jpg" attribute does not work with indexers at all. I need the HTML to be fully accessible to search engines; otherwise the plugin will not be something good for other developers. I thought about doing this to help with indexing:

        <noscript><img src="teste.jpg"></noscript>

    But noscript has a negative effect on indexing (I refer to the contents of noscript). I want a plugin that will not obstruct image indexing in search engines. This plugin will be used by other developers (and me too). This is my question: how can I make image HTML accessible to search engines while minimizing requests?
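
    A minimal sketch of the anchor-to-image idea in plain DOM JavaScript (the class name lazy-img is an assumption, and the scroll-position check that makes it truly lazy is omitted for brevity):

        <a class="lazy-img" href="photo.jpg">A red bicycle</a>

        <script>
        // Crawlers index the plain <a href> above; this runs only for
        // JavaScript-enabled visitors and swaps each anchor for an <img>.
        var anchors = document.querySelectorAll('a.lazy-img');
        for (var i = anchors.length - 1; i >= 0; i--) {
            var a = anchors[i];
            var img = document.createElement('img');
            img.src = a.href;            // the real image URL
            img.alt = a.textContent;     // anchor text becomes the alt text
            a.parentNode.replaceChild(img, a);
        }
        </script>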

    Read the article

  • Request Removal of naked domain from Google Index

    - by Pedr
    I have a site which was temporarily available at both example.com and www.example.com. All traffic to example.com is now redirected to www.example.com; however, during the brief period that the site was available at the naked domain, Google indexed it. So Google now has two versions of every page indexed:

        www.example.com
        www.example.com/about_us
        www.example.com/products/something
        ...

    and

        example.com
        example.com/about_us
        example.com/products/something
        ...

    For obvious reasons, this is a bad situation, so how can I best resolve it? Should I request removal of these pages from the index? There is still content at these URLs, but they now redirect to the www subdomain equivalent. The site has many hundreds of pages, but the only way I can see to request removal is via the Remove outdated content screen in Webmaster Tools, one URL at a time. How can I request removal of an entire domain (i.e. the naked domain) without it affecting the true site located at the www subdomain? Is this the correct strategy, given that all the naked-domain URLs now redirect to their www equivalents?
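
    Since every naked-domain URL already 301-redirects, Google will fold the duplicates out of the index on its own over time; a per-page canonical hint speeds that along and avoids hundreds of manual removal requests. A sketch (path taken from the question):

        <!-- served at both example.com/about_us and www.example.com/about_us -->
        <link rel="canonical" href="http://www.example.com/about_us" />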

    Read the article

  • Best approach to depth streaming via existing codec

    - by Kevin
    I'm working on a development system (and game) intended for games set mostly in static third-person views. We produce our scenery by CG and photographic techniques. Our background art is rendered off-line by a production-grade renderer. To allow the runtime imagery to properly interact with the background art, I wrote a program to convert depth output by Mental Ray into a texture, and a pixel shader to draw a quad such that the Z data comes from the texture. This technique is working out very well, but now we've decided that some of the camera angle changes between scenes should be animated. The animation itself is straightforward to produce from our CG models. We intend to encode it to some HD video codec such as H.264.

    The problem is that in order to maintain our runtime imagery on the screen, the depth buffer will need to be loaded for each video frame. Due to the bandwidth, the video's depth data will need to be compressed efficiently. I've looked into methods for performing temporal compression of depth info and found an interesting research paper here: http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/depth-streaming.pdf The method establishes a mapping between 16-bit depth values and YCbCr values. The mapping is tuned to the properties of existing video codecs in order to maximize precision of the decoded depths after the YCbCr has undergone video compression. It allows an existing, unmodified video codec to be used on the backend.

    I'm looking at how to pull this off with the least possible work. (This design change was unplanned.) Our game engine itself is native C++, presently for Win32 and DirectX, although we've worked hard to keep platform dependence segregated because we intend other ports. We don't have motion video facilities in the engine yet but will ultimately need them anyway for cinematics. I was planning on using some off-the-shelf motion video solution we can plug into our engine, and haven't chosen one yet. This new requirement makes selecting one harder since, among other things, we'll now need to bypass colourspace conversion on one of the streams, and will also need to play two streams simultaneously in lockstep, with, in some cases, audio on one of them (for the cinematics).

    I'm also wondering whether it's possible (or even useful) to do the conversion from YCbCr to depth in a pixel shader, or whether it's better to do it on the CPU and separately load the resulting depth values into a locked texture. The conversion unfortunately does involve branching logic per pixel. (There are more naive mappings that don't need branching, but they produce inferior results.) It could be reduced to a table lookup, but the table would be 32MB. Programming is second nature to me, but I'm not that experienced with pixel shaders and have zero knowledge of off-the-shelf video solutions. I'd therefore be interested in advice from others who may have dealt more with depth streaming, pixel shaders, and/or off-the-shelf codecs, regarding how feasible the proposed application is and which off-the-shelf video systems would best get along with this use case.
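
    For a sense of what the decode step can look like on the GPU, here is a deliberately naive two-channel 16-bit unpack in HLSL; this is the branch-free kind of mapping the question alludes to, not the more robust mapping from the paper (the entry point and register binding are assumptions):

        // Naive 16-bit depth unpack: high byte in R, low byte in G.
        // The paper's mapping survives chroma subsampling and compression
        // noise better; this version needs no per-pixel branching.
        sampler2D DepthVideoTex : register(s0);

        float4 main(float2 uv : TEXCOORD0) : COLOR0
        {
            float2 hl = tex2D(DepthVideoTex, uv).rg;   // each in [0,1]
            float depth = (hl.x * 255.0 * 256.0 + hl.y * 255.0) / 65535.0;
            return float4(depth, depth, depth, 1.0);   // or route to a DEPTH output
        }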

    Read the article

  • performing a simple buffer overflow on Mac OS X 10.6

    - by REALFREE
    I'm trying to learn about stack-based overflows and wrote some simple code to exploit the stack, but somehow it doesn't work at all, showing only Abort trap on my machine (Mac OS Leopard). I guess Mac OS treats overflows differently; it won't allow me to overwrite memory through C code. For example:

        strcpy(buffer, input);  // say char buffer[6], but input is 7 bytes

    On a Linux machine this code successfully overwrites the adjacent stack memory, but it is prevented on Mac OS (Abort trap). Does anyone know how to perform a simple stack-based overflow on a Mac machine?
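
    The Abort trap is characteristically the stack protector: the compiler inserts a canary next to the buffer and calls abort() when it is clobbered. A classic test program, as a minimal lab-only sketch, with the canary switched off at compile time:

        /* overflow.c -- deliberately vulnerable; for study only */
        #include <string.h>

        int main(int argc, char *argv[])
        {
            char buffer[6];
            if (argc > 1)
                strcpy(buffer, argv[1]);  /* no bounds check: overflows past 5 chars + NUL */
            return 0;
        }

    Compiling with gcc -fno-stack-protector -o overflow overflow.c disables the canary, so the overflow reaches adjacent stack memory instead of tripping the abort.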

    Read the article

  • Redirecting exec output to a buffer or file

    - by devin
    I'm writing a C program where I fork(), exec(), and wait(). I'd like to take the output of the program I exec'ed and write it to a file or buffer. For example, if I exec ls, I want to write file1, file2, etc. to a buffer/file. I don't think there is a way to read the child's stdout directly, so does that mean I have to use a pipe? Is there a general procedure here that I haven't been able to find?
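
    Yes, a pipe is the standard mechanism: create it before fork(), point the child's stdout at the write end with dup2(), and read the other end from the parent. A minimal sketch (most error handling omitted):

        #include <stdio.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            int fds[2];
            if (pipe(fds) == -1) { perror("pipe"); return 1; }

            pid_t pid = fork();
            if (pid == 0) {                        /* child */
                dup2(fds[1], STDOUT_FILENO);       /* stdout now feeds the pipe */
                close(fds[0]);
                close(fds[1]);
                execlp("ls", "ls", (char *)NULL);
                _exit(127);                        /* reached only if exec fails */
            }

            close(fds[1]);                         /* parent keeps the read end */
            char buf[4096];
            ssize_t n;
            while ((n = read(fds[0], buf, sizeof buf)) > 0)
                fwrite(buf, 1, (size_t)n, stdout); /* or append to a buffer/file */
            close(fds[0]);
            waitpid(pid, NULL, 0);
            return 0;
        }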

    Read the article

  • understanding z-buffer formats in DirectX

    - by numerical25
    A z-buffer is just a 2D array that records which object should be drawn in front of another; each element in the array represents a pixel and holds a value from 0.0 to 1.0. My question is: if that is all a z-buffer does, why are some buffers 16-bit, some 24-bit, and some 32-bit?

    Read the article

  • MFC: Reading entire file to buffer...

    - by deostroll
    I've meddled with some code but I am unable to read the entire file properly; a lot of junk gets appended to the output. How do I fix this?

        // wmfParser.cpp : Defines the entry point for the console application.

        #include "stdafx.h"
        #include "wmfParser.h"
        #include <cstring>

        #ifdef _DEBUG
        #define new DEBUG_NEW
        #endif

        // The one and only application object
        CWinApp theApp;

        using namespace std;

        int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
        {
            int nRetCode = 0;

            // initialize MFC and print an error on failure
            if (!AfxWinInit(::GetModuleHandle(NULL), NULL, ::GetCommandLine(), 0))
            {
                // TODO: change error code to suit your needs
                _tprintf(_T("Fatal Error: MFC initialization failed\n"));
                nRetCode = 1;
            }
            else
            {
                // TODO: code your application's behavior here.
                CFile file;
                CFileException exp;
                if (!file.Open(_T("c:\\sample.txt"), CFile::modeRead, &exp))
                {
                    exp.ReportError();
                    cout << '\n';
                    cout << "Aborting...";
                    system("pause");
                    return 0;
                }

                ULONGLONG dwLength = file.GetLength();
                cout << "Length of file to read = " << dwLength << '\n';

                /*
                BYTE* buffer;
                buffer = (BYTE*)calloc(dwLength, sizeof(BYTE));
                file.Read(buffer, 25);
                char* str = (char*)buffer;
                cout << "length of string : " << strlen(str) << '\n';
                cout << "string from file: " << str << '\n';
                */

                char str[100];
                file.Read(str, sizeof(str));
                cout << "Data : " << str << '\n';

                file.Close();
                cout << "File was closed\n";
                //AfxMessageBox(_T("This is a test message box"));
                system("pause");
            }

            return nRetCode;
        }
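
    The junk comes from printing a buffer that is neither sized to the file nor null-terminated; CFile::Read returns the number of bytes actually read, which the code ignores. A sketch of the fix:

        // Read up to the buffer size, then terminate the string explicitly.
        char str[100];
        UINT bytesRead = file.Read(str, sizeof(str) - 1);
        str[bytesRead] = '\0';   // without this, operator<< runs past the data
        cout << "Data : " << str << '\n';

        // To read the whole file, size the buffer from GetLength() instead:
        // ULONGLONG len = file.GetLength();
        // char* whole = new char[(size_t)len + 1];
        // UINT got = file.Read(whole, (UINT)len);
        // whole[got] = '\0';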

    Read the article

  • Win32 C/C++ Load Image from memory buffer

    - by Bruno
    I want to load an image (.bmp) file in a Win32 application, but I do not want to use the standard LoadBitmap/LoadImage from the Windows API: I want to load it from a buffer that is already in memory. I can easily load a bitmap directly from a file and draw it on the screen, but this issue has me stuck :( What I'm looking for is a function that works like this:

        HBITMAP LoadBitmapFromBuffer(char* buffer, int width, int height);

    Thanks.
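
    If the buffer holds raw pixel data rather than a complete .bmp file image, CreateBitmap can wrap it directly; a sketch with the exact signature the question proposes, assuming 32-bpp pixels (for a full .bmp file in memory, the BITMAPFILEHEADER/BITMAPINFOHEADER would need to be parsed first and something like CreateDIBitmap used with the pixel-data offset):

        #include <windows.h>

        // Sketch: 'buffer' is assumed to point at width*height*4 bytes
        // of raw 32-bpp pixel data.
        HBITMAP LoadBitmapFromBuffer(char* buffer, int width, int height)
        {
            return CreateBitmap(width, height,
                                1,     // color planes
                                32,    // bits per pixel
                                buffer);
        }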

    Read the article

  • SQL indexes for "not equal" searches

    - by bortzmeyer
    An SQL index lets me quickly find strings that match my query. Now, I have to search a big table for the strings which do not match. Of course, the normal index does not help, and I have to do a slow sequential scan:

        essais=> \d phone_idx
        Index "public.phone_idx"
         Column | Type
        --------+------
         phone  | text
        btree, for table "public.phonespersons"

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone = '+33 1234567';
                                          QUERY PLAN
        -------------------------------------------------------------------------------
         Index Scan using phone_idx on phonespersons  (cost=0.00..8.41 rows=1 width=4)
           Index Cond: (phone = '+33 1234567'::text)
        (2 rows)

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone != '+33 1234567';
                                      QUERY PLAN
        ----------------------------------------------------------------------
         Seq Scan on phonespersons  (cost=0.00..18621.00 rows=999999 width=4)
           Filter: (phone <> '+33 1234567'::text)
        (2 rows)

    I understand (see Mark Byers' very good explanations) that PostgreSQL can decide not to use an index when it sees that a sequential scan would be faster (for instance if almost all the tuples match). But, here, "not equal" searches really are slower. Is there any way to make these "is not equal to" searches faster? Here is another example, to address Mark Byers' excellent remarks. The index is used for the '=' query (which returns the vast majority of tuples) but not for the '!=' query:

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) = 'fr';
                                                                  QUERY PLAN
        ------------------------------------------------------------------------------------------------------------------------------------
         Index Scan using tld_idx on emailspersons  (cost=0.25..4010.79 rows=97033 width=4) (actual time=0.137..261.123 rows=97110 loops=1)
           Index Cond: (tld(email) = 'fr'::text)
         Total runtime: 444.800 ms
        (3 rows)

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) != 'fr';
                                                              QUERY PLAN
        --------------------------------------------------------------------------------------------------------------------
         Seq Scan on emailspersons  (cost=0.00..27129.00 rows=2967 width=4) (actual time=1.004..1031.224 rows=2890 loops=1)
           Filter: (tld(email) <> 'fr'::text)
         Total runtime: 1037.278 ms
        (3 rows)

    DBMS is PostgreSQL 8.3 (but I can upgrade to 8.4).
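
    A b-tree index cannot serve <> directly, so for cases like the tld(email) query, where very few rows are excluded, one workaround is to rewrite the exclusion as two index-friendly ranges (a sketch; whether the planner actually picks the index still depends on the statistics):

        -- Equivalent to phone <> '+33 1234567' (NULLs excluded either way):
        SELECT person FROM PhonesPersons WHERE phone < '+33 1234567'
        UNION ALL
        SELECT person FROM PhonesPersons WHERE phone > '+33 1234567';

    The same rewrite works against the tld(email) expression index.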

    Read the article

  • How to internally rewrite a page when requested from specific HTTP_HOST

    - by Andy
    Hi all, I have a Drupal site, site.com, and our client has a campaign that they're promoting for which they've bought a new domain name, campaign.com. I'd like it so that a request for campaign.com internally rewrites to a particular page of the Drupal site. Note Drupal uses an .htaccess file in the document root. The normal Drupal rewrite is:

        # Rewrite URLs of the form 'x' to the form 'index.php?q=x'.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !=/favicon.ico
        RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]

    I added the following before the normal rewrite:

        # Custom URLS (eg. microsites) go here
        RewriteCond %{HTTP_HOST} =campaign.com
        RewriteCond %{REQUEST_URI} =/
        RewriteRule ^ index.php?q=node/22 [L]

    Unfortunately it doesn't work; it just shows the homepage. Turning on the rewrite log I get this:

        1. [rid#2da8ea8/initial] (3) [perdir D:/wamp/www/] strip per-dir prefix: D:/wamp/www/ -
        2. [rid#2da8ea8/initial] (3) [perdir D:/wamp/www/] applying pattern '^' to uri ''
        3. [rid#2da8ea8/initial] (2) [perdir D:/wamp/www/] rewrite '' - 'index.php?q=node/22'
        4. [rid#2da8ea8/initial] (3) split uri=index.php?q=node/22 - uri=index.php, args=q=node/22
        5. [rid#2da8ea8/initial] (3) [perdir D:/wamp/www/] add per-dir prefix: index.php - D:/wamp/www/index.php
        6. [rid#2da8ea8/initial] (2) [perdir D:/wamp/www/] strip document_root prefix: D:/wamp/www/index.php - /index.php
        7. [rid#2da8ea8/initial] (1) [perdir D:/wamp/www/] internal redirect with /index.php [INTERNAL REDIRECT]
        8. [rid#2da7770/initial/redir#1] (3) [perdir D:/wamp/www/] strip per-dir prefix: D:/wamp/www/index.php - index.php
        9. [rid#2da7770/initial/redir#1] (3) [perdir D:/wamp/www/] applying pattern '^' to uri 'index.php'
        10.[rid#2da7770/initial/redir#1] (3) [perdir D:/wamp/www/] strip per-dir prefix: D:/wamp/www/index.php - index.php
        11.[rid#2da7770/initial/redir#1] (3) [perdir D:/wamp/www/] applying pattern '^(.*)$' to uri 'index.php'
        12.[rid#2da7770/initial/redir#1] (1) [perdir D:/wamp/www/] pass through D:/wamp/www/index.php

    I'm not used to mod_rewrite, so I might be missing something, but comparing the logs from a call to http://site.com/node/3 and from http://campaign.com/ I can't see any meaningful difference. Specifically, uri and args on line 4 seem correct, the internal redirect on line 7 seems right, and the pass-through on line 12 seems right (because the file index.php exists). But for some reason the query string seems to be discarded/ignored around the time of the internal redirect. I'm completely stumped. Also, if anyone could provide a reference on understanding the rewrite log, that would help. It'd be great if there's a way to track the query string through the internal redirect. FWIW I'm using WampServer 2.1 with Apache 2.2.17.

    Read the article

  • Visual Studio 2010 plus Help Index: have your cake and eat it too

    - by Adrian Hara
    Although the team's intentions might have been good, the new help system in Visual Studio 2010 is a huge step backwards (more like a cannonball-shot-kind-of-leap, really) from the one we all know (and love?) in Visual Studio 2008 and 2005 (and heck, even VS6). Its biggest problem, from my point of view, is the total and complete lack of the Help Index feature: you know, the thing where you just type in what you're looking for and it filters down the list of results automatically. For me this was the number one productivity feature in the "old" help system, allowing me to find stuff very quickly. Number two is that it's entirely web based and runs, by default, in the browser. So imagine: when you press F1, a new tab opens in your default browser pointing to the help entry. While this is wrong in many ways, it's also extremely annoying; cleaning up tabs in the browser becomes a chore, which represents a serious productivity hit.

    These and many other problems were discussed extensively (and rather vocally) on Connect, but it seems MS chose to ignore the feedback and release the new help system anyway, with the promise that more features will be added in a later release. Again, it kind of amazes me that they chose to ship a product with FEWER features than the previous one and, what's worse, missing KEY features, just so it's "standards based" and "extensible". To be honest, I couldn't care less about the help system's implementation; I just want it to be usable, and I would've thought that by now the software community, and especially MS, would've learned this lesson. In the end, what kind of saddens me is that MS regards these basic features as ones for the "power help user". I mean, come on: a) it's not like my aunt is using Visual Studio 2010 and represents the regular user, b) all software developers are, by definition, power users, and c) it's a freakin' help system, not rocket science! As you can tell, I'm pretty pissed. Even more so because I really feel that the VS2010 & co. release is otherwise a great one, with a lot of effort going into the various platforms and frameworks, most (if not all) of them being really, REALLY good products. And then they go and screw up the help! How lame is that?!

    Anyway, it's not all gloom and doom. Luckily there is a desktop app by Robert Chandler (to whom I hereby declare eternal gratitude) that presents a UI over the new help system very close to what was there in VS2008. It still has some minor issues, but I'll take it over the browser version of the help any day. It's free, pretty quick (on my machine ;)) and nicely usable. So, if you hate the new help system (passionately) like I do, download H3Viewer now.

    Read the article

  • How to copy depth buffer to CPU memory in DirectX?

    - by Ashwin
    I have code in OpenGL that uses glReadPixels to copy the depth buffer to a CPU memory buffer:

        glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, dbuf);

    How do I achieve the same in DirectX? I have looked at a similar question which gives the solution for copying the RGB buffer. I've tried to write similar code to copy the depth buffer:

        IDirect3DSurface9* d3dSurface;
        d3dDevice->GetDepthStencilSurface(&d3dSurface);

        D3DSURFACE_DESC d3dSurfaceDesc;
        d3dSurface->GetDesc(&d3dSurfaceDesc);

        IDirect3DSurface9* d3dOffSurface;
        d3dDevice->CreateOffscreenPlainSurface(
            d3dSurfaceDesc.Width, d3dSurfaceDesc.Height,
            D3DFMT_D32F_LOCKABLE, D3DPOOL_SCRATCH,
            &d3dOffSurface, NULL);

        // FAILS: D3DERR_INVALIDCALL
        D3DXLoadSurfaceFromSurface(
            d3dOffSurface, NULL, NULL,
            d3dSurface, NULL, NULL,
            D3DX_FILTER_NONE, 0);

        // Copy from offscreen surface to CPU memory ...

    The code fails on the call to D3DXLoadSurfaceFromSurface. It returns the error value D3DERR_INVALIDCALL. What is wrong with my code?
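
    In D3D9 an ordinary depth-stencil surface cannot be read from or copied by other calls, which is why the load fails whatever the destination. A common workaround, sketched here without any guarantee of support on every adapter, is to create the depth buffer in a lockable format up front and lock it directly:

        // At startup: create the device's depth buffer in a lockable format.
        // Lockable depth formats are a slow path; hardware support varies.
        IDirect3DSurface9* depthSurf = NULL;
        d3dDevice->CreateDepthStencilSurface(
            width, height,
            D3DFMT_D32F_LOCKABLE,       // or D3DFMT_D16_LOCKABLE
            D3DMULTISAMPLE_NONE, 0,
            TRUE,                       // discard
            &depthSurf, NULL);
        d3dDevice->SetDepthStencilSurface(depthSurf);

        // Later: read the depth values back on the CPU.
        D3DLOCKED_RECT lr;
        if (SUCCEEDED(depthSurf->LockRect(&lr, NULL, D3DLOCK_READONLY))) {
            // lr.pBits holds one float per pixel; rows are lr.Pitch bytes apart.
            depthSurf->UnlockRect();
        }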

    Read the article

  • Clarification On Write-Caching Policy, Its Underlying Options And How It Applies To Hard Drives And Solid-State Drives

    - by Boris_yo
    In the last week, after doing more research on the subject, I have been wondering why I neglected for all those years to understand write-caching policy, always leaving it on the default setting. Write-caching policy improves write performance and consists of write-back caching and write-cache buffer flushing. This is how I understand it all, but correct me if I err somewhere:

    Write-through caching itself is not part of the write-caching policy per se; it is when data is written to both the cache and the storage device, so that if Windows needs the data again later, it is retrieved from the cache and not from the storage device. This improves only read performance, since there is no need to wait for the storage device to read the required data again. Because data is still written to the storage device, write performance isn't improved, but there is no risk of data loss or corruption in case of power failure or system crash; only the data in the cache gets lost. This option seems to be enabled by default and is recommended for removable devices, with no need to use the "Safely Remove Hardware" function on the user's part.

    Write-back caching is similar to the above, but without immediately writing data to the storage device; data is periodically released from the cache and written to the storage device when it is idle. In my opinion this option improves both read and write performance, but carries risk if a power failure or system crash occurs: not only is the data not yet written to the storage device lost, but file inconsistencies or a corrupted file system may result. Write-back caching cannot be enabled together with write-through caching, and it is not recommended if no backup power supply is available.

    Write-cache buffer flushing, I reckon, is similar to write-back caching but enables immediate release and writing of data from the cache to the storage device right before a power outage occurs, though I don't know whether it also covers the occasional system crash. This option seems complementary to the write-back cache, reducing or potentially eliminating the risk of data loss or file-system corruption.

    I have questions about the relevance of the last two options to today's modern SSDs, in order to get the best performance with the least wear: I know that traditional hard drives come with onboard cache (I wonder what type of cache that is), but do SSDs also come with cache? Assuming they do, is this cache faster than their NAND flash and system RAM, and worth the risk of utilizing it by enabling write-back caching? I read somewhere that generally a storage device's cache is faster than RAM, but I want to be sure. Additionally, I read that write-caching should be enabled, since data that will eventually be written to NAND flash is kept for a while in the cache, and if that data gets modified a lot before finally being written, holding it and releasing it periodically reduces the number of writes to the SSD, thereby reducing wear.

    Now, regarding write-cache buffer flushing, I heard that SSD controllers are so fast by themselves that enabling this option is not required, because they manage flushing. However, once again, I don't know whether SSDs have their own onboard cache and whether it is faster than their NAND flash and system RAM, because if it is, keeping this option enabled would make sense.
    Recently I posted a question about an issue with my Intel 330 SSD 120GB, which was the main reason to do this deeper research, suspecting the write-caching policy as the culprit of the SSD's freezing issue, on the assumption that the release of cached data is what causes the freezes. Currently I have write-caching enabled and write-cache buffer flushing disabled, because I believe the SSD controller's management of write-cache flushing and Windows' write-cache buffer flushing conflict with each other. Since I want to troubleshoot in small steps to finally determine the source of the issue, I have decided to start with the write-caching policy, then move on to drivers, switch to AHCI later on, and finally disable DIPM (device-initiated power management) through a registry modification, thanks to @TomWijsman.

    Read the article

  • Redirect index.php in CodeIgniter

    - by Gabriel Bianconi
    Hello. I created a CodeIgniter application and now I'm trying to redirect the URLs with index.php to URLs without it. My current .htaccess is:

        RewriteEngine On
        RewriteBase /

        # Removes trailing slashes (prevents SEO duplicate content issues)
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.+)/$ $1 [L,R=301]

        # Enforce www
        # If you have subdomains, you can add them to
        # the list using the "|" (OR) regex operator
        RewriteCond %{HTTP_HOST} !^(www|subdomain) [NC]
        RewriteRule ^(.*)$ http://www.plugb.com/$1 [L,R=301]

        # Checks to see if the user is attempting to access a valid file,
        # such as an image or css document, if this isn't true it sends the
        # request to index.php
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php/$1 [L]

    The only problem is that the same page is accessible both with and without the index.php. For example:

        http://www.plugb.com/index.php/games/adventure-rpg
        http://www.plugb.com/games/adventure-rpg

    Is there a way to redirect the index.php URLs? Thank you, Gabriel.
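
    One way to do it (a sketch, untested against this exact setup) is to 301-redirect only when the client itself asked for index.php, keyed off %{THE_REQUEST} so the rule cannot loop with the internal rewrite at the bottom of the file; it would go right after RewriteBase /:

        # If the original request line contains /index.php/, strip it with a 301
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index\.php/(\S*)\ HTTP [NC]
        RewriteRule ^ /%1 [L,R=301]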

    Read the article
