Search Results

Search found 93838 results on 3754 pages for 'aspire one'.


  • array and array_view from amp.h

    - by Daniel Moth
    This is a very long post, but it also covers what are probably the classes (well, array_view at least) that you will use the most with C++ AMP, so I hope you enjoy it!

    Overview

    The concurrency::array and concurrency::array_view template classes represent multi-dimensional data of type T and N dimensions, specified at compile time (you can later access the number of dimensions via the rank property). If N is not specified, it defaults to 1 (the single-dimensional case). They are rectangular (not jagged). The difference between them is that array is a container of data, whereas array_view is a wrapper over a container of data. In that respect, array behaves like an STL container, whereas the closest thing array_view behaves like is an STL iterator (albeit with random access, and allowing you to view more than one element at a time!).

    The data in the array (whether provided at creation time or added later) resides on an accelerator (specified at creation time either explicitly by the developer, or set to the default accelerator by the runtime) and is laid out contiguously in memory. The data provided to the array_view is not stored by/in the array_view, because the array_view is simply a view over the real source (which can reside on the CPU or on another accelerator). The underlying data is copied on demand to wherever the array_view is accessed. Elements that differ by one in the least significant dimension of the array_view are adjacent in memory.

    array objects must be captured by reference into the lambda you pass to the parallel_for_each call, whereas array_view objects must be captured by value into that lambda.

    Creating array and array_view objects and relevant properties

    You can create array_view objects from other array_view objects of the same rank and element type (a shallow copy, also possible via the assignment operator) so they point to the same underlying data, and you can also create array_view objects over array objects of the same rank and element type, e.g.

        array_view<int,3> a(b); // b can be another array or array_view of ints with rank=3

    Note: unlike the constructors above, which can be called anywhere, the ones in the rest of this section can only be called from CPU code.

    You can create array objects from other array objects of the same rank and element type (copy and move constructors) and from array_view objects, e.g.

        array<float,2> a(b); // b can be another array or array_view of floats with rank=2

    To create an array from scratch, you need to at least specify an extent object, e.g. array<int,3> a(myExtent);. Instead of an explicit extent object, there are convenience overloads for N<=3, so you can specify one, two, or three integers (depending on the array's rank) and have the extent created for you under the covers. At any point you can access the array's extent through the extent property. The exact same thing applies to array_view (extent as constructor parameter, including the convenience overloads, and the property).

    While passing only an extent object is enough to create an array (it means the array will be written to later), it is not enough for the array_view case, which must always wrap some other container (on which it relies for storage space and actual content). So in addition to the extent object (which describes the shape you'd like to view/access the data through), to create an array_view from another container (e.g. std::vector) you must pass in the container itself (which must expose .data() and .size() methods, as std::array does), e.g.

        array_view<int,2> aaa(myExtent, myContainerOfInts);

    Similarly, you can create an array_view from a raw pointer to data plus an extent object. Back to the array case: to optionally initialize the array with data, you can pass an iterator pointing to the start of the source container (and optionally one pointing to its end), e.g.

        array<double,1> a(5, myVector.begin(), myVector.end());

    We saw that arrays are bound to an accelerator at creation time, so in case you don't want the C++ AMP runtime to assign the array to the default accelerator, all array constructors have overloads that let you pass an accelerator_view object, which you can later access via the accelerator_view property. Note that at the point of initializing an array with data, a synchronous copy of the data to the accelerator takes place, and to copy any data back an explicit copy call is required, as we'll see. This does not happen with the array_view, where copying is on demand...

    refresh and synchronize on array_view

    Note that in the previous section on constructors there was, unlike the array case, no overload that accepted an accelerator_view for array_view. That is because the array_view is simply a wrapper, so the allocation of the data has already taken place before you created the array_view. When you capture an array_view variable in your call to parallel_for_each, the copy of data between the non-CPU accelerator and the CPU takes place on demand (i.e. it is implicit, versus the explicit copy that has to happen with the array). There are some subtleties to the on-demand copying, which we cover next.

    The assumption when using an array_view is that you will continue to access the data through the array_view, and not through the original underlying source, e.g. the pointer to the data that you passed to the array_view's constructor. So if you modify the data through the array_view on the GPU, the original pointer on the CPU will not "know" that, unless one of two things happens:

    - you access the data through the array_view on the CPU side, i.e. using the indexing we cover below
    - you explicitly call the array_view's synchronize method on the CPU (this also gets called for you in the array_view's destructor)

    Conversely, if you make a change to the underlying data through the original source (e.g. the pointer), the array_view will not "know" about those changes unless you call its refresh method. Finally, note that if you create an array_view of const T, the data is copied to the accelerator on demand, but it does not get copied back, e.g.

        array_view<const double, 5> myArrView(…); // myArrView will not get copied back from GPU

    There is also a similar mechanism to achieve the reverse, i.e. not to copy the data of an array_view to the GPU.

    copy_to, data, and global copy/copy_async functions

    Both array and array_view expose two copy_to overloads that allow copying them to another array or to another array_view, and these operations can also be achieved with assignment (via the = operator overloads). Both array and array_view also expose a data method that returns a raw pointer to the underlying data, e.g. float* f = myArr.data();. Note that for array_view this only works when the rank is equal to 1, because the data is only guaranteed contiguous in one dimension, as covered in the overview section. Finally, there are a bunch of global concurrency::copy functions returning void (and corresponding concurrency::copy_async functions returning a future) that allow copying between arrays, array_views, iterators, etc. Just browse IntelliSense or amp.h directly for the full set. Note that for array, all copying described throughout this post is deep copying, as per other STL container expectations. You can never have two arrays point to the same data.

    indexing into array and array_view plus projection

    Reading or writing data elements of an array is only legal when the code executes on the same accelerator the array was bound to. In the array_view case, you can read/write on any accelerator, not just the one where the original data resides, and the data gets copied for you on demand. In both cases, the way you read and write individual elements is via indexing, as described next.

    To access (or set the value of) an element, you can index into it by passing an index object to the subscript operator. Furthermore, if the rank is 3 or less, you can use the function call ( ) operator to pass integer values instead of having to use an index object, e.g.

        array<float,2> arr(someExtent, someIterator);
        //or array_view<float,2> arr(someExtent, someContainer);
        index<2> idx(5,4);
        float f1 = arr[idx];
        float f2 = arr(5,4); // f2 == f1
        //and the reverse for assigning, e.g.
        arr(idx[0], 7) = 6.9;

    Note that for both array and array_view, regardless of rank, you can also pass a single integer to the subscript operator, which results in a projection of the data: for both array and array_view you get back an array_view of rank N-1 (or, if the rank was 1, you get back just the element at that location).

    Not Covered

    In this already very long post, I am not going to cover three very cool methods (and related overloads) that both array and array_view expose: view_as, section, reinterpret_as. We'll revisit those at some point in the future, probably on the team blog. Comments about this post by Daniel Moth are welcome at the original blog.
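    To make the core usage above concrete, here is a minimal end-to-end sketch (illustrative only; the function and variable names are mine, not from the original post) that wraps a std::vector in an array_view, squares each element on the default accelerator, and synchronizes the results back:

        #include <amp.h>
        #include <vector>
        using namespace concurrency;

        void square_all(std::vector<int>& v)
        {
            // wrap the CPU-side vector; no copy happens yet
            array_view<int, 1> av((int)v.size(), v);
            // av is captured by value, as required for array_view
            parallel_for_each(av.extent, [=](index<1> idx) restrict(amp)
            {
                av[idx] *= av[idx]; // runs on the (default) accelerator
            });
            // copy the results back to v (also done by av's destructor)
            av.synchronize();
        }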


  • SQL SERVER – Weekly Series – Memory Lane – #007

    - by pinaldave
    Here is a list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorites and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane.

    2006

    Find Stored Procedure Related to Table in Database – Search in All Stored Procedures
    In 2006 I wrote a small script which helps the user find all the Stored Procedures (SPs) that are related to one or more specific tables. This was quite a popular script; however, in SQL Server 2012 the same can be achieved using the new DMV sys.sql_expression_dependencies. I recently blogged about it in Find Referenced or Referencing Object in SQL Server using sys.sql_expression_dependencies.

    2007

    SQL SERVER – Versions, CodeNames, Year of Release
    1993 – SQL Server 4.21 for Windows NT
    1995 – SQL Server 6.0, codenamed SQL95
    1996 – SQL Server 6.5, codenamed Hydra
    1999 – SQL Server 7.0, codenamed Sphinx
    1999 – SQL Server 7.0 OLAP, codenamed Plato
    2000 – SQL Server 2000 32-bit, codenamed Shiloh (version 8.0)
    2003 – SQL Server 2000 64-bit, codenamed Liberty
    2005 – SQL Server 2005, codenamed Yukon (version 9.0)
    2008 – SQL Server 2008, codenamed Katmai (version 10.0)
    2011 – SQL Server 2012, codenamed Denali (version 11.0)

    Search String in Stored Procedure
    Searching for a string in stored procedures is one of the most frequent tasks developers do. They might be searching for a table, a view, or any other detail. I have written a script to do this in SQL Server 2000 and SQL Server 2005. This is a blog post worth bookmarking. There is an alternative way to do the same as well; here is the example.

    2008

    SQL SERVER – Refresh Database Using T-SQL
    NO! Some questions have a single answer: NO! You may want to read the question in the original blog post. I had a great time saying No!

    SQL SERVER – Delete Backup History – Cleanup Backup History
    SQL Server stores the history of every backup ever taken in the msdb database. Often the older history is no longer required. The following stored procedure can be executed with a parameter which specifies how many days of history to keep. In the example, 30 is passed to keep a month of history.

    2009

    Stored Procedures are Compiled on First Run – SP Taking Longer to Run First Time
    Are stored procedures pre-compiled? Why does a stored procedure take a long time to run the first time? This is a very common question, often discussed by developers and DBAs. There is an absolutely definite answer, but the question has been discussed forever. There is a misconception that stored procedures are pre-compiled. They are not: they are compiled during the first run, and for every subsequent run the compiled plan is reused. Read the entire article for an example and demonstration.

    Removing Key Lookup – Seek Predicate – Predicate – An Interesting Observation Related to Datatypes
    This is one of the most important performance tuning lessons on my blog. I suggest you spend time this weekend reading the four-part series and let me know what you think about the concepts I have demonstrated. Part 1 | Part 2 | Part 3 | Part 4
    A Seek Predicate is the operation that describes the b-tree portion of a Seek. A Predicate is the operation that describes the additional filter using non-key columns.
    Based on that description, it is clear that a Seek Predicate is better than a Predicate, as it searches indexes, whereas with a Predicate the search is on non-key columns – which implies that the search is on the data pages themselves.

    Policy Based Management – Create, Evaluate and Fix Policies
    This article covers one of the most spectacular features of SQL Server – policy-based management – and how configuring SQL Server with a policy-based management architecture can make a powerful difference. Policy-based management is loaded with advantages. It can help you implement various policies for reliable configuration of the system. It also provides additional administration assistance to DBAs and helps them effortlessly manage various tasks of SQL Server across the enterprise.

    2010

    Recycle Error Log – Create New Log File without Server Restart
    I once observed a DBA restarting SQL Server whenever he needed a new error log file. This was funny and sad at the same time. There is no need to restart the server to create a new log file or recycle the log file. You can run sp_cycle_errorlog and achieve the same result.

    Get Database Backup History for a Single Database
    A simple but effective script!

    Reducing CXPACKET Wait Stats for High Transactional Database
    The subject is very complex and I have done my best to simplify the concept. In simple words: when a parallel operation is created for a SQL query, there are multiple threads for a single query, and each thread deals with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET wait stat: the threads which finish first have to wait for the slower thread to finish, and the wait recorded by a completed thread is called the CXPACKET wait stat.

    Information Related to DATETIME and DATETIME2
    There is quite a lot of confusion around DATETIME and DATETIME2. DATETIME2 is also one of the underutilized datatypes of SQL Server. In this blog post I wrote a follow-up to my earlier datetime series, where I clarify a few of the concepts related to datetime.
    Difference Between GETDATE and SYSDATETIME
    Difference Between DATETIME and DATETIME2 – WITH GETDATE
    Difference Between DATETIME and DATETIME2

    2011

    Introduction to CUME_DIST – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytic function CUME_DIST(), which provides the cumulative distribution value. It is difficult to explain in words, so I attempt a small example to explain the function. Instead of creating a new table, I use the AdventureWorks sample database, as most developers use that for experiments.

    Introduction to FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytic functions FIRST_VALUE() and LAST_VALUE(), which return the first and last value from the list. It is difficult to explain in words, so I attempt a brief example to explain the functions. Instead of creating a new table, I use the AdventureWorks sample database, as most developers use that for experiments.

    OVER clause with FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012 – ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
    "Don't you think there is a bug in your first example, where FIRST_VALUE remains the same but the LAST_VALUE changes on every line?
    I think the LAST_VALUE should be the highest value in the window or set of results."

    Puzzle – Functions FIRST_VALUE and LAST_VALUE with OVER Clause and ORDER BY
    You can see that rows 2, 3, 4, and 5 have the same SalesOrderID = 43667. The FIRST_VALUE is 78 and the LAST_VALUE is 77. Now, if these functions were working on maximum and minimum values, they should have given the answers 77 and 80 respectively, instead of 78 and 77. Also, the FIRST_VALUE (78) is greater than the LAST_VALUE (77). Why? Explain in detail.

    Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytic functions LEAD() and LAG(). These functions access data from a subsequent row (lead) or a previous row (lag) in the same result set, without the use of a self-join. It is difficult to explain in words, so I attempt a small example to explain the functions. Instead of creating a new table, I use the AdventureWorks sample database, as most developers use that for experiments.

    A Real Story of a Book Going 'Out of Stock', to a 25% Discount Story Available
    Our book was out of stock within 48 hours of arriving in stock! We got a call from the online store requesting more copies within 12 hours. But we had printed only as many as we had sent them; there were no extra copies. We finally talked to the printer about getting more copies. However, due to festivals and holidays, the copies could not be shipped to the online retailer for two days. We knew for sure that they were going to be out of the book for 48 hours. This is the story of how we overcame that situation!

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
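    As a concrete illustration of the LAST_VALUE puzzle above, here is a hedged sketch of a query against AdventureWorks (my own example, not taken from the original posts) showing why LAST_VALUE needs an explicit frame: the default window frame stops at the current row, so without ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING it does not return the true last value of the partition:

        SELECT SalesOrderID, OrderQty,
               FIRST_VALUE(OrderQty) OVER (PARTITION BY SalesOrderID ORDER BY OrderQty) AS FirstQty,
               -- without the explicit frame below, LAST_VALUE returns the current row's value
               LAST_VALUE(OrderQty) OVER (PARTITION BY SalesOrderID ORDER BY OrderQty
                   ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS LastQty
        FROM Sales.SalesOrderDetail;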


  • Carriers Holding Your OS Updates Hostage

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/10/10/carriers-holding-your-os-updates-hostage.aspx

    Just a small rant here. Today the Windows Phone 8 GDR2 update finally became available for Nokia handset users. Now, I'm not sure it is entirely AT&T's fault that Samsung and HTC users got their updates two months ago and we are only now seeing ours. It may have something to do with the Nokia Amber update. But every Windows Phone update on AT&T from 7.1 on seems to have been delayed. How is it that the premiere Windows Phone carrier is always the last one to release updates?

    Smartphone ecosystems are a partnership between the OS provider, the hardware manufacturer and the carriers. If any one of those partners does not hold up its responsibilities, then everyone gets a black eye. The goal for all involved should be to release updates as early as possible with reasonable assurance of stability. This ensures the satisfaction of consumers and increases the likelihood of future sales.

    From what I have seen so far, AT&T has been the one breaking the consumer's trust in the Windows Phone ecosystem. Aside from voicing our dissatisfaction, we may need to start voting with our feet until they realize that being a poor citizen has consequences.

    Technorati Tags: ATT,Windows Phone,Windows Phone 8,Microsoft,Nokia,GDR2


  • Libgdx actor bounds are wrong

    - by Undume
    The Actor's bounds are not centered on the TextButton even though I used the setBounds() method. The higher the Y position is, the less centered the bounds are. The weird thing is that I only created and added one button to the Stage, but the screen shows two. When I click the top button, the bottom one is the one highlighted. How can I fix that?

        import com.badlogic.gdx.Game;
        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.files.FileHandle;
        import com.badlogic.gdx.scenes.scene2d.Stage;
        import com.badlogic.gdx.scenes.scene2d.ui.Skin;
        import com.badlogic.gdx.scenes.scene2d.ui.TextButton;

        public class MyGame extends Game {
            Stage stage;

            @Override
            public void create() {
                stage = new Stage();
                FileHandle skinFile = new FileHandle("data/resources/uiskin/uiskin.json");
                Skin skin = new Skin(skinFile);
                TextButton sas = new TextButton("dd", skin);
                sas.setBounds(0, 500, 100, 100);
                stage.addActor(sas);
                Gdx.input.setInputProcessor(stage);
            }

            @Override
            public void dispose() {
                super.dispose();
            }

            @Override
            public void render() {
                super.render();
                stage.act(Gdx.graphics.getDeltaTime());
                stage.draw();
            }

            @Override
            public void resize(int width, int height) {
                super.resize(width, height);
            }

            @Override
            public void pause() {
                super.pause();
            }

            @Override
            public void resume() {
                super.resume();
            }
        }


  • Are there any actual case studies on rewrites of software success/failure rates?

    - by James Drinkard
    I've seen multiple posts about rewrites of applications being bad, people's experiences with them here on Programmers, and an article I've read by Joel Spolsky on the subject, but no hard evidence or case studies. Other than the two examples Joel gave and some other posts here, what do you do with a bad codebase, and how do you decide what to do with it based on real studies?

    As a case in point, there are two clients I know of that both have old legacy code. They keep limping along with it because, as one of them found out, a rewrite was a disaster: it was expensive and didn't really do much to improve the code. That customer has some very complicated business logic, as the rewriters quickly found out. In both cases, these are mission-critical applications that bring in a lot of revenue for the company. The one that attempted the rewrite felt that they would hit a brick wall at some point if the legacy software didn't get upgraded in the future. To me, that kind of risk warrants research and analysis to ensure a successful path.

    My question is: have there been actual case studies that have investigated this? I wouldn't want to attempt a major rewrite without knowing some best practices, pitfalls, and successes based on actual studies.

    Aftermath: okay, I was wrong, I did find one article: Rewrite or Reuse. They did a study on a COBOL app that was converted to Java.


  • CodeGolf : Find the Unique Paths

    - by st0le
    Here's a pretty simple idea. In this pastebin I've posted some pairs of numbers. These represent the nodes of a directed, connected graph. The input to stdin will be of the form (they'll be numbers; I'm using letters in this example):

        c d
        q r
        a b
        b c
        d e
        p q

    so "x y" means x is connected to y (not vice versa). There are 2 paths in that example: a->b->c->d->e and p->q->r. You need to print all the unique paths from that graph. The output should be of the format:

        a->b->c->d->e
        p->q->r

    Notes: You can assume the numbers are chosen such that one path doesn't intersect another (one node belongs to one path). The pairs are in random order. There can be more than one path, and the paths can be of different lengths. All numbers are less than 1000. If you need more details, please leave a comment; I'll amend as required.

    Shameless Plug: For those who enjoy Codegolf, please Commit at Area51 for its very own site :) (for those who don't enjoy it, please support it as well, so we'll stay out of your way...)
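    A reference (non-golfed) solution helps pin down the expected behavior. Here is a minimal Python sketch, my own illustration under the stated assumptions (paths never intersect, so each node has at most one successor), that reads the pairs from stdin and prints each unique path:

        import sys

        # read "x y" pairs; each x has at most one successor y
        nxt = dict(line.split() for line in sys.stdin if line.strip())
        # path heads are nodes that never appear as a destination
        heads = set(nxt) - set(nxt.values())
        for head in heads:
            path = [head]
            while path[-1] in nxt:      # follow the chain to its end
                path.append(nxt[path[-1]])
            print('->'.join(path))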


  • WCF/webservice architecture question

    - by M.R.
    I have a requirement to expose certain items from a CMS as a web service, and I need some suggestions. The structure of the items is as follows:

        item
         - field 1
         - field 2
         - field 3
         - field 4

    So one would think that the classes for this would be:

        public class MyItem
        {
            public string ItemName { get; set; }
            public List<MyField> Fields { get; set; }
        }

        public class MyField
        {
            public string FieldName { get; set; }
            public string FieldValue { get; set; } // they are always strings (except - see below)
        }

    This works when the structure is always one level deep, but sometimes one of the fields is actually a pointer to ANOTHER item (MyItem), or to multiple items (List<MyItem>). So I thought I would change the structure of MyField as follows, making FieldValue an object:

        public class MyField
        {
            public string FieldName { get; set; }
            public object FieldValue { get; set; } // changed to object
        }

    Now I can put whatever I want in there. This is great in theory, but how will clients consume it? I suspect that when users reference this web service, they won't know which object is being returned in that field. This seems like a not-so-good design. Is there a better approach to this?
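    One common way out of the untyped-object problem (a hedged sketch of my own, not from the question) is to make the contract explicitly recursive: keep FieldValue a string for simple fields and add a typed collection for item references, so WSDL-generated client proxies stay strongly typed:

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        [DataContract]
        public class MyItem
        {
            [DataMember] public string ItemName { get; set; }
            [DataMember] public List<MyField> Fields { get; set; }
        }

        [DataContract]
        public class MyField
        {
            [DataMember] public string FieldName { get; set; }
            [DataMember] public string FieldValue { get; set; }       // set for simple fields
            [DataMember] public List<MyItem> ChildItems { get; set; } // set when the field points to other items
        }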


  • What to choose for a multilingual site with support for Markdown and commenting

    - by Kent
    I want to publish articles on a multilingual site. I want to be able to write an article in two languages and have the versions available at separate URLs:

        thesite.foo/english-breakfast
        thesite.com/engelsk-frukost

    If the user's web browser is set to English, I'd like to show a small notice at the top of the Swedish version with a link to the English one. The link should have an appropriate rel attribute for a translation (search for hreflang at http://diveintohtml5.org/semantics.html), as sketched below.

    There should be a way to list all articles belonging to these sets: Swedish only, English only, Swedish versions + English only, English versions + Swedish only. I'd like to publish these as four RSS feeds. And I would like to have two versions of the main site, one in Swedish (showing Swedish versions + English only) and one in English (showing English versions).

    I shall be able to write the articles using Markdown, as that is the formatting language I find most convenient. There should be a way for users to comment, and some way for me to protect myself against comment spam.

    I am leaning towards learning Drupal. I suspect I'll have to code this behavior myself as a module. To be frank, I'd rather work with Java. Is Drupal the way to go? Or is there something more suitable for this project?
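    For reference, the alternate-language links alluded to above would look something like this in each page's head (a hedged sketch; the URLs are the hypothetical ones from the question):

        <link rel="alternate" hreflang="en" href="http://thesite.com/english-breakfast" />
        <link rel="alternate" hreflang="sv" href="http://thesite.com/engelsk-frukost" />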


  • Files backup utility with incremental backups that would keep backup device clean

    - by Wojtek
    I've tested a few backup utilities and still haven't found one that satisfies me. Almost every one of them has two options:

    - full backup - not an option to use frequently
    - incremental backup - seems right, but there's one thing about it: an incremental backup builds on a base of a full backup, backing up only those files that were created or changed. The problem is that after some time you've got a lot of unwanted files from old backups bloating your backup device. Also, if you accidentally deleted your full (first) backup file, the differential backups would be corrupted (you wouldn't be able to restore them).

    The thing I'm looking for is a program that would back up files simply by copying them. It would check the backup device for each file (unchanged):

    - if the device already contains the current version, it proceeds to the next file
    - if not, it copies the file to the backup device
    - if the device contains a file that is no longer on our disk, the program deletes it from the backup device

    Is there any utility that works this way (see the sketch below)? If not, do you have any hints on how to back up fairly big amounts of data (around 20 GB) quite frequently with incremental backups and not be exposed to those unwanted effects of backup size puffing up?
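    What is described here is essentially a mirror sync (rsync with its --delete option behaves this way). As a hedged illustration only, here is a minimal Python sketch of the three rules; the paths and the mtime-based change check are my own simplifying assumptions:

        import shutil
        from pathlib import Path

        def mirror(src: Path, dst: Path) -> None:
            # rules 1+2: copy files that are missing or changed on the backup device
            for f in src.rglob('*'):
                target = dst / f.relative_to(src)
                if f.is_dir():
                    target.mkdir(parents=True, exist_ok=True)
                elif not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
                    target.parent.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(f, target)
            # rule 3: delete anything on the backup device that no longer exists in src
            # (deepest paths first, so directories are emptied before removal)
            for f in sorted(dst.rglob('*'), key=lambda p: len(p.parts), reverse=True):
                if not (src / f.relative_to(dst)).exists():
                    f.rmdir() if f.is_dir() else f.unlink()

        mirror(Path('/home/me/data'), Path('/media/backup/data'))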


  • Using cascade in NHibernate

    - by Tyler
    I have two classes, call them Monkey and Banana, with a one-to-many bidirectional relationship.

        Monkey monkey = new Monkey();
        Banana banana = new Banana();
        monkey.Bananas.Add(banana);
        banana.Monkey = monkey;
        hibernateService.Save(banana);

    When I run that chunk of code, I want both monkey and banana to be persisted. However, it's only persisting both when I explicitly save the monkey, and not vice versa. Initially, this made sense, since only my Monkey.hbm.xml had a mapping with cascade="all".

        <set name="Bananas" inverse="true" cascade="all">
            <key column="Id"/>
            <one-to-many class="Banana"/>
        </set>

    I figured I just needed to add the following to my Banana.hbm.xml file:

        <many-to-one name="Monkey" column="Id" cascade="all" />

    Unfortunately, this resulted in a "Parameter index is out of range" error when I tried to run the snippet of code. I investigated this error and found this post, but I still don't see what I'm doing wrong. I have the relationship mapped once on each side as far as I can tell. For full disclosure, here are the two mapping files:

    Monkey.hbm.xml

        <class name="Monkey" table="monkies" lazy="true">
            <id name="Id">
                <generator class="increment" />
            </id>
            <property name="Name" />
            <set name="Bananas" inverse="true" cascade="all">
                <key column="Id"/>
                <one-to-many class="Banana"/>
            </set>
        </class>

    Banana.hbm.xml

        <class name="Banana" table="bananas" lazy="true">
            <id name="Id">
                <generator class="increment" />
            </id>
            <property name="Name" />
            <many-to-one name="Monkey" column="Id" cascade="all" />
        </class>


  • Where to find Hg/Git technical support?

    - by Rook
    Posting this as a kind of favour for a former colleague, so I don't know the exact circumstances, but I'll try to provide as much info as I can...

    A friend from my old place of employment (a maritime research institute; half government, half commercial funding) has asked me if I could find out who provides commercial technical support for the two major DVCSs of today - Git and Mercurial. They have been using a VCS for years now (Subversion while I was there; I don't know what they're using now - probably the same), and now they're renewing their software licences (they have to submit a plan some time in advance for everything... then it goes "through the system"). Although they will be keeping Subversion as well, they would like to justify introducing a DVCS as an alternative system (most people root for Mercurial, since it seems simpler; they are mostly engineers and physicists who are not that interested in checking Git repos for corruption and the finer workings of Git, but I believe either of the two could "pass") - but it has to have a price (it can be zero; no problem there) and some sort of official technical support. It is a pro forma matter, but it has to be specified. Most of the people there are using one of the two already, but it has to be made official.

    So, I'm asking you - do you know where one could go for Git or Mercurial technical support (commercial is fine)? Technical forums and the like are out of the question. It has to work on the principle:

    - I have a problem.
    - I post a question with the details.
    - I get an answer in a specified time.

    The answer can be "we cannot do that", but it has to be an official answer, given in an agreed time. I'm sure by now most of you understand what I'm asking, but if not - post a comment or similar. Also, if you can think of any reasons which could justify introducing Git/Hg from a technical and administrative viewpoint, feel free to write them down as well.


  • Frequent grub rescue error: unknown filesystem

    - by user3215
    It's really frustrating. The back story from three weeks ago: I faced the following error on Ubuntu 10.04 LTS Desktop:

        error: unknown filesystem
        grub rescue>

    I tried many solutions online, apart from 1 and 2, and eventually I could not boot my system at all. Trying those solutions could not completely fix it, and then I had to face an initramfs prompt (I cannot remember the exact error). As I could not fix it, a week back I did a fresh install of Ubuntu 12.04 Desktop, and after a week, i.e. now, I got the same grub rescue> error.

    Could anybody tell me the reason behind this error? Is there a problem with Ubuntu, or is it a problem on my side for some reason?

    My machine description: I have two hard disks; Windows is installed on one hard disk and Ubuntu on the other. I installed the operating systems (Vista, Ubuntu) in such a way that one is not known to the other. I mean to say, I'll unplug one hard disk and install Ubuntu, and similarly I'll unplug the Ubuntu hard disk and install Vista on the other hard disk (I actually don't want to run into grub/boot.ini issues by installing the OSes with both hard disks connected). Whenever I power on the computer, Windows boots by default, and I'll generally boot to Ubuntu by pressing F8 and selecting the Ubuntu hard disk.

    Any help is greatly appreciated! Thank you!


  • Porting Ruby/NCurses Rogue-Like to .NET and FlatRedBall

    - by ashes999
    I created an awesome rogue-like game in Ruby. For the GUI, I used NCurses. Since I'm using FlatRedBall as my engine of choice for Silverlight game development, I want to port this game over. What is the best way of doing this efficiently, and what are the pitfalls I should expect?

    For example, Ruby is object-oriented, like C#, and I should be able to just convert (rewrite) classes one by one. However, I will run into issues like:

    - The NCurses API. I possibly need to create my own notion of a "Window", or else rewrite the GUI code. It's one class, but it's BIG.
    - Mix-ins. These are essentially aspect-oriented development. There are a couple of solutions in .NET, like dynamic classes. What else?

    Also, I should mention that I want to create a C# application out of this. As much as possible, I'll dump reusable and helper code, algorithms, etc. into separate projects and generate reusable DLLs.


  • Is browser fingerprinting a viable technique for identifying anonymous users?

    - by SMrF
    Is browser fingerprinting a sufficient method for uniquely identifying anonymous users? What if you incorporate biometric data like mouse gestures or typing patterns?

    The other day I ran into the Panopticlick experiment the EFF is running on browser fingerprints. Of course, I immediately thought of the privacy repercussions and how it could be used for evil. But on the other hand, it could be used for great good and, at the very least, it's a tempting problem to work on.

    While researching the topic I found a few companies using browser fingerprinting to fight fraud. And after sending out a few emails, I can confirm at least one major dating site is using browser fingerprinting as just one mechanism to detect fake accounts. (Note: they have found it's not unique enough to act as an identity when scaling up to millions of users. But my programmer brain doesn't want to believe them.)

    Here is one company using browser fingerprints for fraud detection and prevention: http://www.bluecava.com/

    Here is a pretty comprehensive list of stuff you can use as unique identifiers in a browser: http://browserspy.dk/
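    To make the idea concrete, here is a hedged TypeScript sketch of naive fingerprinting (my own illustration; real products combine far more signals): it concatenates a few browser properties of the kind browserspy.dk enumerates and hashes them:

        // naive browser fingerprint: join a few identifying properties and hash them
        async function fingerprint(): Promise<string> {
          const parts = [
            navigator.userAgent,
            navigator.language,
            `${screen.width}x${screen.height}x${screen.colorDepth}`,
            String(new Date().getTimezoneOffset()),
            Array.from(navigator.plugins).map(p => p.name).join(','),
          ].join('||');
          const digest = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(parts));
          return Array.from(new Uint8Array(digest))
            .map(b => b.toString(16).padStart(2, '0'))
            .join('');
        }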


  • Expression Studio 4 Community Launch Event

    - by Timmy Kokke
    Event
    On June 7th, Expression Studio 4 will be launched at Internet Week in New York. One day later, on June 8th, the Dutch Silverlight and Expression user group SIXIN organizes the Dutch community launch, in collaboration with Microsoft and Centric, at Centric's office in IJsselstein. To celebrate the Expression Studio 4 release, we have two interesting speakers. In addition, we will give away three Expression Studio packages and more great gifts.

    Program
    The preliminary program for the evening is as follows:
    5:45 p.m. - Food, drinks and networking
    6:45 p.m. - Reception and introduction by Koen Zwikstra, co-founder of SIXIN and Silverlight MVP
    7:00 p.m. - Building a Windows Phone 7 application using the new features of Expression Blend, by Loek van den Ouweland, founder and web designer for Magic Studio
    8:00 p.m. - Break
    8:30 p.m. - Tour of Expression Encoder and Expression Web, by Antoni Dol, senior designer at Macaw
    9:30 p.m. - Networking while enjoying a drink

    Ask your question to one of the speakers
    If you have a question for one of the speakers, you can ask it by email ([email protected]) or through Twitter. Send an email with subject #expression4, or send a tweet to @sixinUG using the hashtag #expression4.

    Register
    To register for this event or to get more information, go to the SIXIN meetings page here.

    Special thanks to our sponsors:


  • Rendering large and high poly meshes

    - by Aurus
    Consider a huge terrain with a lot of polygons. To render this terrain I thought of the following techniques:

    - Using a height-map instead of raw meshes: yes, but I want to create a lot of caves and stuff that simply won't work with height-maps.
    - Using voxels: yes, but I think that this would be too much, since I don't even want to support changing terrain.
    - Splitting it into multiple chunks and doing some sort of LOD on the mesh: yes, but how would I do that? Tessellation usually creates more detail, not less.
    - Precomputing lower-poly versions of the same mesh (like Mudbox does) and, depending on the distance, rendering one of those meshes: yes, but graphics memory is limited, and uploading only the chunks won't solve that problem, since the traffic would be too high.

    IMO the last one sounds really good (a minimal distance-based LOD pick is sketched below), but imagine the following process:

    1. Upload and render the chunks depending on the current player position. [No problem]
    2. The player walks straight forward.
    3. Now we may have to swap one of the low-poly chunks for the high-poly one.
    4. So: remove the low-poly chunk and load the high-poly chunk. [Already too much traffic here, I think]

    I am not very experienced in graphics programming, and maybe the above process is totally okay, but somehow I think it is too much. And how about the disk space it would require? I think 3 levels of detail would be fine, but isn't that also too much? (I am using OpenGL, but I don't think that this is important.)
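    For the distance-based selection in the last option, a hedged C++ sketch (all types and thresholds are my own illustrative assumptions, not from the post): each chunk stores a few precomputed LOD meshes and the renderer picks one by camera distance each frame.

        #include <cmath>

        struct Vec3 { float x, y, z; };
        struct Mesh { /* vertex/index buffers elided */ };

        struct Chunk {
            Vec3 center;
            Mesh lods[3]; // 0 = high poly ... 2 = low poly
        };

        float distance(const Vec3& a, const Vec3& b) {
            float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            return std::sqrt(dx * dx + dy * dy + dz * dz);
        }

        // pick the LOD mesh for a chunk based on camera distance (thresholds arbitrary)
        const Mesh& selectLod(const Chunk& chunk, const Vec3& camera) {
            float d = distance(chunk.center, camera);
            if (d < 100.0f) return chunk.lods[0];
            if (d < 300.0f) return chunk.lods[1];
            return chunk.lods[2];
        }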


  • The Oracle OpenWorld 2012 Call for Papers Closes April 9

    - by Kerrie Foy
    It is on! The Oracle OpenWorld 2012 Call for Papers closes April 9. This year's OpenWorld event runs September 30 - October 4 at the Moscone Center, San Francisco.

    Oracle OpenWorld is among the world's largest industry events for good reason. It offers a vast array of learning and networking opportunities in one of the planet's great cities. And one of the key reasons for its popularity is the prominence of presentations by customers. If you would like to deliver a presentation based on your experience, now is the time to submit your abstract for review by the selection panel. The competition is strong: roughly 18% of entries are accepted each year from more than 3,000 submissions. Review panels are made up of experts both internal and external to Oracle. Successful submissions often (but not exclusively) focus on customer successes, how-tos, or best practices. http://www.oracle.com/openworld/call-for-papers/information/index.html

    What is in it for you? Recognition, for one thing. Accepted sessions are publicized in the content catalog, which goes live in mid-June, and sessions given by external speakers often prove the most popular. Plus, accepted speakers get a complimentary pass to Oracle OpenWorld with access to all sessions and networking events - that could save you up to $2,595! Be sure to designate your session for inclusion in the correct track by selecting "APPLICATIONS: Product Lifecycle Management" from the Primary Track drop-down menu.

    Looking forward to seeing you at this year's OpenWorld!


  • Easily Close All Tabs in Google Chrome

    - by Asian Angel
    Do you find yourself with a lot of tabs open but dread closing all but one manually? Now you can close all of your tabs with a single click, and have just one ready to go, with the Close all Tabs extension.

    Before
    We all find ourselves with a lot of tabs open sooner or later. That is not so bad until we realize that we need to close all of them and get back to work. A person could open a new tab and manually close the rest, or close the entire window and restart Chrome. But a single-click solution would be a lot more convenient.

    After
    There it is... the single-click solution. Just click the toolbar button and BOOM! One fresh window with a single new tab page showing. Now if you could only take the rest of the day off...

    Conclusion
    The Close all Tabs extension may not be something that everyone would use, but if you are tired of manually closing all of those tabs, then you will definitely like it.

    Links
    Download the Close all Tabs extension (Google Chrome Extensions)


  • Platformer Enemy AI

    - by hayer
    I'm currently developing a platformer shooter. The game is multiplayer, and while my net code could use some real work, I have put that off for now; currently I'm trying to implement the AI. The game is pretty simple: players run around on a map filled with X zombies that try to eat their brains - classic and overused, I know. Weapons spawn at random intervals around the map.

    The problem is that when the zombies find their prey, they have to follow it for a while, and running the AI navigation code seems to take forever. So here are the ideas I have come up with so far (a staggered-update sketch follows below):

    - Have the AI update at different intervals, with a maximum of Y ms between updates.
    - Assign the zombies to groups. One is appointed the leader of the group, who finds the way to the player - the rest just follow the leader. If the leader dies, another zombie in the group is appointed president of the zombie swarm. If there are fewer than five zombies in a group, they try to meet up with other zombies (i.e., they are assigned to a different group and therefore a new leader).
    - Multi-threading option one or two?

    For navigation I have some kind of navmesh (since the game is not tile-based) that tells the zombies where they can walk, etc. If anyone has ideas on how to do the navigation, I would love some input. For line of sight (zombie - player) I have split the map into grids. If the player's grid is connected to the zombie's grid (with option two I would only need to check whether the leader zombie's grid is connected to the player's, i.e. fewer checks) - if they are connected and more than 250 ms have passed since the last check, do a raytrace.

    This is my first time programming AI, so input on any of these points is appreciated.
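    The first idea (spreading expensive path updates over time) can be as simple as a per-zombie timer. Here is a hedged Java sketch of that; all names and the 150-250 ms interval are my own illustrative assumptions:

        // stagger expensive path updates so they don't all happen in one frame
        class Zombie {
            private long nextThinkAt = 0; // absolute time (ms) of the next allowed path update

            void update(long nowMillis, Object player) {
                if (nowMillis >= nextThinkAt) {
                    recomputePath(player); // the expensive navmesh query
                    // 150-250 ms of jitter keeps the zombies from thinking in lockstep
                    nextThinkAt = nowMillis + 150 + (long) (Math.random() * 100);
                }
                followCurrentPath();       // cheap per-frame movement along the cached path
            }

            private void recomputePath(Object player) { /* navmesh search elided */ }
            private void followCurrentPath() { /* movement elided */ }
        }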


  • PHP: A config file in .ini or .php format?

    - by Tim Visee
    I'm working on a huge CMS system, and I asked myself what configuration format I should use. There are two common formats for configuration files. The first is an INI file containing all the configuration properties; you can simply parse it using built-in PHP functions. The second option is a PHP file containing a regular PHP array with these configuration properties.

    Now, it's easier to edit an INI file, but a PHP file gives you more options; for example, it allows you to add a function which retrieves one of the configuration options while reading the configuration file. (Note: the PHP configuration file would only contain an array of configuration values, no initialization functions or anything. That is possible, of course, but it's not implemented by default.)

    So what is recommended for me to use as a configuration file? What is the most common format for a configuration file? Should I go for simplicity with the INI file, or for a more dynamic approach using PHP? One thing to note: this is not for personal use. I'm planning to release the CMS system soon, and a lot of websites are already scheduled to change to my CMS system.
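    For comparison, here is a minimal hedged sketch of both approaches (the file names and keys are my own examples): the INI file is parsed with PHP's built-in parse_ini_file(), while the PHP file simply returns an array:

        <?php
        // Option 1: config.ini, parsed with a built-in function
        // config.ini contains:
        //   [database]
        //   host = localhost
        $config = parse_ini_file('config.ini', true); // true = keep [sections]
        echo $config['database']['host'];

        // Option 2: config.php, a plain PHP array
        // config.php contains:  <?php return ['database' => ['host' => 'localhost']];
        $config = require 'config.php';
        echo $config['database']['host'];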


  • Manager Self Service at your Fingertips

    - by Elaine Clement
    Last week we released new and improved Manager Self Service capabilities in PeopleSoft HCM 9.1. We delivered a new Manager Dashboard, streamlined many Manager Self Service transactions, provided new pivot grid capabilities, and implemented one-click Related Actions accessible from multiple places - all with the goal of improving every manager's self service experience.

    These new capabilities have the potential to significantly impact an organization's bottom line, and here is why.

    Increased Efficiency
    The Manager Dashboard provides a 'one-stop shop' for your managers, with all of the key data they need consolidated into a single view. Alerts notifying managers of important tasks are immediately viewable and actionable. Administrators can configure the dashboard to include the most important pagelets needed for their organization, and managers can personalize it to fit their personal way of conducting their tasks. The Related Actions feature further improves the ease with which managers get their work done by providing one-click access to Manager Self Service transactions.

    Increased Job Satisfaction
    The streamlined manager transactions, Related Actions, and the new Manager Dashboard provide an enhanced user experience. Managers are able to quickly get in, get the information they need, complete their transactions, and get out. Managers can spend their time focusing on getting the business results they need instead of their day-to-day HR tasks.

    Enhanced Decision Support
    Administrators can ensure the information and analytics they want their managers to use are available from the Manager Dashboard, establishing best business practices. Additional pivot grids relevant to your own organization can be added to the Manager Dashboard. With this easy access to the relevant information in an easily understood format, managers can make the right business decisions needed to improve their team and their team's productivity.

    For more details on the Manager Dashboard and some of the other newly posted features, such as the new Talent Summary, check out this video and others: Oracle PeopleSoft Webcasts


  • RTFMobile

    - by ultan o'broin
    It may seem obvious, but it's worth stating again: the idea that mobile users are going to read lots of user assistance on their devices is just wrong. So Jakob Nielsen's post Mobile Content Is Twice as Difficult serves as a timely reminder for anyone thinking of putting manuals, as a form of user assistance, onto mobile phones. There is also an excellent post on UXMag.com explaining that one of the ways to screw up your iPhone app is to throw an old-style user manual into the user experience: 10 Surefire Ways to Screw Up Your iPhone App.

    (Image copyright and referenced from UX Magazine 2010)

    Instead, user assistance alternatives - if any at all - include one-time tours, graphics, in-context instructions, and so on. I'm not so sure that importing "humor" and "personality" works so well in the enterprise app space, myself. However, the message is clear: iPhone users don't read manuals. Great message. Users will figure it out, and if they can't, well, then your app's UX is a problem and the app will fail. It's a shame some teams are obsessed with figuring out ways to port existing manuals to mobile platforms without any thought for the UX. Razorfish's Scatter/Gather blog says it all: "One thing that is particularly discouraging: most material currently available on 'Creating Content for the iPad' or similar themes turns out to be about getting traditional content onto, or into, the iPad."

    Now, manuals for non-end users in PDF format on eReaders is a different matter. I have research on that, but it's for another post.

    Technorati Tags: mobile,user assistance,UX,user experience,manuals,documentation


  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the most exciting areas, and one that has seen tremendous growth in the last few years, is Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:

    - Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
    - Additional Storage Options for Snap Clone (includes support for the database feature CloneDB)
    - Improved Rapid Start Kits
    - Extensible Metering and Chargeback
    - Miscellaneous Enhancements

    1. Comprehensive Database Service Catalog

    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are:

    - Service Catalogs: Defining Standardized Database Service
    - High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]

    EM12c has come with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits:

    - Present a collection of standardized database service definitions
    - Define standardized pools of hardware and software for provisioning
    - Role-based access to cater to different classes of users
    - Automated procedures to provision the predefined database definitions
    - Setup of chargeback plans based on service tiers and database configuration sizes, etc.

    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - single instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:

    - Standby pools can now be defined across different datacenters, or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
    - The standby databases can be single instance, RAC, or RAC One Node databases
    - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    - The standby databases can be in either mount or read-only (requires the Active Data Guard option) mode
    - All database versions 10g to 12c are supported (as certified with EM 12c)
    - All 3 protection modes can be used - maximum availability, performance, security
    - Log apply can be set to sync or async, along with the required apply lag

    The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below (all supported in EM 12cR4; RON = RAC One Node, supported via custom post-scripts in the service template):

        Primary | Standby [1 or more]
        SI      | -
        SI      | SI
        RAC     | -
        RAC     | SI
        RAC     | RAC
        RON     | -
        RON     | RON

    A sample service catalog would look like the image below.
    Here we have defined 4 service levels, deployed across 2 datacenters, with 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

    2. Additional Storage Options for Snap Clone

    In my previous blog posts, I have described the snap clone feature in detail. Essentially, it provides a storage-agnostic, self service, rapid, and space-efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%), all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low-level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low-level actions. Thus we deliver the benefits of database thin cloning without requiring you to drastically change your infrastructure or IT's operating style.

    In release 4, we expand the scope of options supported by snap clone with the addition of the database CloneDB feature. While CloneDB is not new - it was first introduced in the 11.2.0.2 patchset - it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (or dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on CloneDB, I highly recommend reading the following sources:

    - Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2
    - Oracle OpenWorld presentation by CERN: Efficient Database Cloning using Direct NFS and CloneDB

    The advantages of the new CloneDB integration with EM12c Snap Clone are:

    - Space and time savings
    - Ease of setup - no additional software is required other than the Oracle database binary
    - Works on all platforms
    - Reduced dependence on storage administrators
    - Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
    - Uses dNFS to deliver better performance, availability, and scalability than kernel NFS
    - Complete lifecycle of the clones managed by EM12c - performance, configuration, etc.

    3. Improved Rapid Start Kits

    DBaaS deployments tend to be complex, and their setup requires a series of steps, typically performed by different users in different UIs. The Rapid Start Kit provides a single-command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS).
    One command creates all the cloud artifacts: roles, administrators, credentials, database profiles, the PaaS Infrastructure Zone, database pools, and service templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups. The Rapid Start Kit is in reality a simple emcli script which takes a bunch of XML files as input and executes the complete automation in a matter of seconds. On a full-rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. The kit works both on Oracle's engineered systems, like Exadata and SuperCluster, and on commodity hardware. One can draw a parallel to the Exadata One Command script, which likewise takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.

    Steps to use the kit:

    - The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
    - It can be run from this default location or from any server which has the emcli client installed
    - For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
    - For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py

    The database_cloud_setup.py script takes two inputs:

    - Cloud boundary XML: this file defines the cloud topology in terms of the zones and pools, along with the host names, Oracle home locations, or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
    - Input XML: this file captures inputs for users, roles, profiles, service templates, etc. - essentially, all inputs required to define the DB services and other settings of the self service portal.

    Once all the XML files have been prepared, invoke the script as follows for PDBaaS:

        emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter of the Cloud Administration Guide.

    4. Extensible Metering and Chargeback

    Last but not least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:

    - Extend chargeback to any target type managed in EM
    - Promote any metric in EM as a chargeback entity
    - Extend the list of charge items via metric or configuration extensions
    - Model abstract entities like the number of backup requests, job executions, support requests, etc.

    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter of the Cloud Administration Guide.

    5. Miscellaneous Enhancements

    There are other miscellaneous, yet important, enhancements that are worth a mention.
    These have mostly been asked for by customers like you. They are:

    - Custom naming of DB services: self service users can provide custom names for the DB SID, DB service, schemas, and tablespaces; every custom name is validated for uniqueness in EM
    - 'Create like' of service templates: creating variants of a service template is now only a click away. This is vital when you publish service templates to represent different database sizes or service levels.
    - Profile viewer: view the details of a profile, like datafiles, control files, snapshot IDs, export/import files, etc., prior to its selection in the service template
    - Cleanup automation, for failed and successful requests: a single emcli command cleans up all remnant artifacts of a failed request; cleanup can be performed on a per-request basis or for the entire pool; as an extension, you can also delete successful requests
    - Improved delete-user workflow: allows administrators to reassign cloud resources to another user or delete all of them
    - Support for multiple tablespaces for Schema as a Service: in addition to multiple schemas, the user can also specify multiple tablespaces per request

    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!

    References:
    - Cloud Management Page on OTN
    - Cloud Administration Guide [Documentation]

    -- Adeesh Fulay (@adeeshf)


  • Ubuntu Server 12.04 as a router. Problem with DNS

    - by Lorenzo
    I have a VirtualBox lab made up of 4 Windows 2008 R2 servers (DC/DNS, SQL, SHAREPOINT, EXCHANGE) that are configured with static IP addresses, with NICs attached to the internal network. Everything works. I had the requirement to run some tests that also access external services available on the internet. To keep things clean and similar to the production environment, I installed another VM with Ubuntu Server 12.04 64-bit and configured it (I hope) to work as a router, as described in this post. This VM has two network interfaces: the first is bridged with the host and is used as the WAN connection, and the other is attached to the internal network with its own static IP address on the internal network subnet. But the Windows servers cannot connect to the internet, while the Unix one can. I ran the route command; this is the result:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        default         10.69.121.1     0.0.0.0         UG    100    0        0 eth0
        10.69.121.0     *               255.255.255.0   U     0      0        0 eth0
        192.168.83.0    *               255.255.255.0   U     0      0        0 eth1

    Can somebody help me with this configuration? :) Thanks!

    Addendum: I forgot to mention that one of the Windows servers hosts a DNS service, for which I should maybe configure a forwarding server, but I do not know exactly which server to forward to... :(
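    For reference, the usual minimal pieces of such an Ubuntu router setup (a hedged sketch of commonly used commands, assuming eth0 is the WAN side as in the route table above; not taken from the linked post) are IP forwarding plus NAT:

        # enable packet forwarding between the two interfaces
        sudo sysctl -w net.ipv4.ip_forward=1
        # masquerade internal traffic out of the WAN interface (eth0)
        sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE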


  • Will these instructions work when turning off journaling on an ext4 SSD?

    - by snowlord
    I have an Acer Aspire One with an SSD for storage. I recently installed Ubuntu on it and chose ext4 for my filesystem. Then I read that journaling on an SSD isn't the best idea, so I will try to disable journaling, and I have found these instructions (from http://fenidik.blogspot.com/2010/03/ext4-disable-journal.html):

        # Create ext4 fs on /dev/sda10 disk
        mkfs.ext4 /dev/sda10
        # Enable writeback mode. This mode will typically provide the best ext4 performance.
        tune2fs -o journal_data_writeback /dev/sda10
        # Delete has_journal option
        tune2fs -O ^has_journal /dev/sda10
        # Required fsck
        e2fsck -f /dev/sda10
        # Check fs options
        dumpe2fs /dev/sda10 | more

    For more performance, add these fstab options: data=writeback,noatime,nodiratime, i.e.:

        /dev/sda10 /opt ext4 defaults,data=writeback,noatime,nodiratime 0 0

    I will use them on my boot partition. Are there any particularly bad parts here, or are there any missing steps? Will my boot partition be fit for being on an SSD after this? Or should I consider switching to ext2, or even reinstalling it all and choosing ext2 at partitioning time (I'd rather not, though, since I've configured quite a lot of stuff already)?

