Search Results

Search found 5254 results on 211 pages for 'concept analysis'.

Page 191/211 | < Previous Page | 187 188 189 190 191 192 193 194 195 196 197 198  | Next Page >

  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures to implement a simple open-source object temporal database, and currently I'm very fond of using Persistent Red-Black trees to do it. My main reasons for using persistent data structures is first of all to minimize the use of locks, so the database can be as parallel as possible. Also it will be easier to implement ACID transactions and even being able to abstract the database to work in parallel on a cluster of some kind. The great thing of this approach is that it makes possible implementing temporal databases almost for free. And this is something quite nice to have, specially for web and for data analysis (e.g. trends). All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously, so a response is always immediate, I don't want to build all application under a false premise, only to realize it isn't really a good way to do it. Here's my line of thought: - Since all writes are done asynchronously, and using a persistent data structure will enable not to invalidate the previous - and currently valid - structure, the write time isn't really a bottleneck. - There are some literature on structures like this that are exactly for disk usage. But it seems to me that these techniques will add more read overhead to achieve faster writes. But I think that exactly the opposite is preferable. Also many of these techniques really do end up with a multi-versioned trees, but they aren't strictly immutable, which is something very crucial to justify the persistent overhead. - I know there still will have to be some kind of locking when appending values to the database, and I also know there should be a good garbage collecting logic if not all versions are to be maintained (otherwise the file size will surely rise dramatically). Also a delta compression system could be thought about. - Of all search trees structures, I really think Red-Blacks are the most close to what I need, since they offer the least number of rotations. But there are some possible pitfalls along the way: - Asynchronous writes -could- affect applications that need the data in real time. But I don't think that is the case with web applications, most of the time. Also when real-time data is needed, another solutions could be devised, like a check-in/check-out system of specific data that will need to be worked on a more real-time manner. - Also they could lead to some commit conflicts, though I fail to think of a good example of when it could happen. Also commit conflicts can occur in normal RDBMS, if two threads are working with the same data, right? - The overhead of having an immutable interface like this will grow exponentially and everything is doomed to fail soon, so this all is a bad idea. Any thoughts? Thanks! edit: There seems to be a misunderstanding of what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure
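
    To make the core idea concrete, here is a minimal sketch (in C++, using a plain unbalanced BST rather than a red-black tree, with illustrative names only) of the path-copying technique that persistent trees rely on: an insert copies only the nodes on the search path and shares everything else, so every earlier root stays valid as an immutable snapshot, which is exactly what makes lock-free readers and "free" temporal versions possible.

        // Minimal path-copying persistent BST (no red-black balancing),
        // showing why old versions stay valid after an insert.
        #include <iostream>
        #include <memory>

        struct Node {
            int key;
            std::shared_ptr<const Node> left, right;
        };
        using Ptr = std::shared_ptr<const Node>;

        // Returns a new root; nodes off the search path are shared, not copied.
        Ptr insert(const Ptr& root, int key) {
            if (!root) return std::make_shared<Node>(Node{key, nullptr, nullptr});
            if (key < root->key)
                return std::make_shared<Node>(Node{root->key, insert(root->left, key), root->right});
            if (key > root->key)
                return std::make_shared<Node>(Node{root->key, root->left, insert(root->right, key)});
            return root;  // key already present, version unchanged
        }

        bool contains(const Ptr& root, int key) {
            if (!root) return false;
            if (key == root->key) return true;
            return key < root->key ? contains(root->left, key) : contains(root->right, key);
        }

        int main() {
            Ptr v1 = insert(insert(nullptr, 10), 5);
            Ptr v2 = insert(v1, 7);  // v1 is untouched: an older snapshot, still fully readable
            std::cout << contains(v1, 7) << ' ' << contains(v2, 7) << '\n';  // prints 0 1
        }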

    Read the article

  • Schema to support dynamic properties

    - by Johan Fredrik Varen
    Hi people. I'm working on an editor that enables its users to create "object" definitions in real-time. A definition can contain zero or more properties. A property has a name a type. Once a definition is created, a user can create an object of that definition and set the property values of that object. So by the click of a mouse-button, the user should ie. be able to create a new definition called "Bicycle", and add the property "Size" of type "Numeric". Then another property called "Name" of type "Text", and then another property called "Price" of type "Numeric". Once that is done, the user should be able to create a couple of "Bicycle" objects and fill in the "Name" and "Price" property values of each bike. Now, I've seen this feature in several software products, so it must be a well-known concept. My problem started when I sat down and tried to come up with a DB schema to support this data structure, because I want the property values to be stored using the appropriate column types. Ie. a numeric property value is stored as, say, an INT in the database, and a textual property value is stored as VARCHAR. First, I need a table that will hold all my object definitions: Table obj_defs id | name | ---------------- 1 | "Bicycle" | 2 | "Book" | Then I need a table for holding what sort of properties each object definition should have: Table prop_defs id | obj_def_id | name | type | ------------------------------------ 1 | 1 | "Size" | ? | 2 | 1 | "Name" | ? | 3 | 1 | "Price" | ? | 4 | 2 | "Title" | ? | 5 | 2 | "Author" | ? | 6 | 2 | "ISBN" | ? | I would also need a table that holds each object: Table objects id | created | updated | ------------------------------ 1 | 2011-05-14 | 2011-06-15 | 2 | 2011-05-14 | 2011-06-15 | 3 | 2011-05-14 | 2011-06-15 | Finally, I need a table that will hold the actual property values of each object, and one solution is for this table to have one column for each possible value type, such as this: Table prop_vals id | prop_def_id | object_id | numeric | textual | boolean | ------------------------------------------------------------ 1 | 1 | 1 | 27 | | | 2 | 2 | 1 | | "Trek" | | 3 | 3 | 1 | 1249 | | | 4 | 1 | 2 | 26 | | | 5 | 2 | 2 | | "GT" | | 6 | 3 | 2 | 159 | | | 7 | 4 | 3 | | "It" | | 8 | 5 | 3 | | "King" | | 9 | 6 | 4 | 9 | | | If I implemented this schema, what would the "type" column of the prop_defs table hold? Integers that each map to a column name, varchars that simply hold the column name? Any other possibilities? Would a stored procedure help me out here in some way? And what would the SQL for fetching the "name" property of object 2 look like?

    Read the article

  • Design patterns and interview question

    - by user160758
    When I was learning to code, I read up on the design patterns like a good boy. Long after this, I started to actually understand them. Design discussions such as those on this site constantly try to make the rules more and more general, which is good. But there is a line, over which it becomes over-analysis starts to feed off itself and as such I think begins to obfuscate the original point - for example the "What's Alternative to Singleton" post and the links contained therein. http://stackoverflow.com/questions/1300655/whats-alternative-to-singleton I say this having been asked in both interviews I’ve had over the last 2 weeks what a singleton is and what criticisms I have of it. I have used it a few times for items such as user data (simple key-value eg. last file opened by this user) and logging (very common i'm sure). I've never ever used it just to have what is essentially global application data, as this is clearly stupid. In the first interview, I reply that I have no criticisms of it. He seemed disappointed by this but as the job wasn’t really for me, I forgot about it. In the next one, I was asked again and, as I wanted this job, I thought about it on the spot and made some objections, similar to those contained in the post linked to above (I suggested use of a factory or dependency injection instead). He seemed happy with this. But my problem is that I have used the singleton without ever using it in this kind of stupid way, which I had to describe on the spot. Using it for global data and the like isn’t something I did then realised was stupid, or read was stupid so didn’t do, it was just something I knew was stupid from the start. Essentially I’m supposed to be able to think of ways of how to misuse a pattern in the interview? Which class of programmers can best answer this question? The best ones? The medium ones? I'm not sure.... And these were both bright guys. I read more than enough to get better at my job but had never actually bothered to seek out criticisms of the most simple of the design patterns like this one. Do people think such questions are valid and that I ought to know the objections off by heart? Or that it is reasonable to be able to work out what other people who are missing the point would do on the fly? Or do you think I’m at least partially right that the question is too unsubtle and that the questions ought to be better thought out in order to make sure only good candidates can answer. PS. Please don’t think I’m saying that I’m just so clever that I know everything automatically - I’ve learnt the hard way like everyone else. But avoiding global data is hardly revolutionary.

    Read the article

  • gcc optimization? bug? and its practical implications for a project

    - by kumar_m_kiran
    Hi All, my questions are divided into three parts. Question 1: Consider the code below: #include <iostream> using namespace std; int main( int argc, char *argv[]) { const int v = 50; int i = 0X7FFFFFFF; cout<<(i + v)<<endl; if ( i + v < i ) { cout<<"Number is negative"<<endl; } else { cout<<"Number is positive"<<endl; } return 0; } No specific compiler optimisation options (no -O flags) are used; the basic compilation command g++ -o test main.cpp is used to form the executable. This seemingly very simple code has odd behaviour on SUSE 64-bit, gcc version 4.1.2. The expected output is "Number is negative"; instead, only on SUSE 64-bit, the output is "Number is positive". After some analysis and a 'disass' of the code, I find that the compiler optimises it as follows: since i is the same on both sides of the comparison and cannot change within the expression, remove 'i' from the equation. The comparison then reduces to if ( v < 0 ), where v is a positive constant, so at compile time the address of the else-branch cout call is loaded directly; no cmp/jmp instructions can be found. I see this behaviour only with gcc 4.1.2 on SUSE 10. When tried on AIX 5.1/5.3 and HP IA64, the result is as expected. Is the above optimisation valid? Or is relying on int overflow not a valid use case? Question 2: When I change the conditional statement from if (i + v < i) to if ( (i + v) < i ), the behaviour is still the same. With this I would personally disagree: since additional parentheses are provided, I expect the compiler to create a temporary built-in variable and then compare, thus nullifying the optimisation. Question 3: Suppose I have a huge code base and I migrate to a new compiler version; such a bug/optimisation can cause havoc in my system's behaviour. Of course, from a business perspective it is very ineffective to test every line of code again just because of a compiler upgrade. I think for all practical purposes these kinds of errors are very difficult to catch (during an upgrade) and will invariably leak into production. Can anyone suggest a possible way to ensure that this kind of bug/optimisation does not have any impact on my existing system/code base? PS: When the const on v is removed from the code, the optimisation is not performed by the compiler. I believe it is perfectly fine to use the overflow mechanism to find whether the variable is within MAX - 50 (in my case).
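
    For reference, the observed optimisation is consistent with signed integer overflow being undefined behaviour in C++: the compiler may assume i + v never wraps, so i + v < i folds to v < 0. Below is a small sketch of an overflow check written without relying on that undefined behaviour; the -fwrapv remark is GCC-specific, and the INT_MAX comment assumes a 32-bit int.

        // Sketch of an overflow-aware version of the check. Signed overflow is
        // undefined behaviour, so a compiler may assume i + v never wraps and
        // fold (i + v < i) down to (v < 0); comparing against INT_MAX avoids
        // the undefined operation altogether.
        #include <climits>
        #include <iostream>

        int main() {
            const int v = 50;
            int i = 0x7FFFFFFF;  // assumed to be INT_MAX here (32-bit int)

            if (i > INT_MAX - v) {
                std::cout << "i + v would overflow" << std::endl;
            } else {
                std::cout << (i + v) << std::endl;
            }
            // Alternatively, compiling with -fwrapv makes signed overflow wrap,
            // at the cost of disabling this class of optimisation.
            return 0;
        }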

    Read the article

  • Patterns: Local Singleton vs. Global Singleton?

    - by Mike Rosenblum
    There is a pattern that I use from time to time, but I'm not quite sure what it is called. I was hoping that the SO community could help me out. The pattern is pretty simple, and consists of two parts: A singleton factory, which creates objects based on the arguments passed to the factory method. Objects created by the factory. So far this is just a standard "singleton" pattern or "factory pattern". The issue that I'm asking about, however, is that the singleton factory in this case maintains a set of references to every object that it ever creates, held within a dictionary. These references can sometimes be strong references and sometimes weak references, but it can always reference any object that it has ever created. When receiving a request for a "new" object, the factory first searches the dictionary to see if an object with the required arguments already exits. If it does, it returns that object, if not, it returns a new object and also stores a reference to the new object within the dictionary. This pattern prevents having duplicative objects representing the same underlying "thing". This is useful where the created objects are relatively expensive. It can also be useful where these objects perform event handling or messaging - having one object per item being represented can prevent multiple messages/events for a single underlying source. There are probably other reasons to use this pattern, but this is where I've found this useful. My question is: what to call this? In a sense, each object is a singleton, at least with respect to the data it contains. Each is unique. But there are multiple instances of this class, however, so it's not at all a true singleton. In my own personal terminology, I tend to call the factory method a "global singleton". I then call the created objects "local singletons". I sometimes also say that the created objects have "reference equality", meaning that if two variables reference the same data (the same underlying item) then the reference they each hold must be to the same exact object, hence "reference equality". But these are my own invented terms, and I am not sure that they are good ones. Is there standard terminology for this concept? And if not, could some naming suggestions be made? Thanks in advance...
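
    This arrangement is usually called a multiton or an instance-caching factory; when the shared objects exist mainly to save memory it shades into the flyweight pattern, and the lookup table itself is often called an identity map. A minimal C++ sketch is below; the class names and the use of weak_ptr for the "weak reference" case are illustrative, not taken from the original post.

        // Sketch of an instance-caching ("multiton") factory. The factory is the
        // single global object; each created Item is unique per key, and the
        // cache holds weak references so unused Items can still be destroyed.
        #include <iostream>
        #include <map>
        #include <memory>
        #include <string>

        class Item {
        public:
            explicit Item(std::string key) : key_(std::move(key)) {}
            const std::string& key() const { return key_; }
        private:
            std::string key_;
        };

        class ItemFactory {
        public:
            static ItemFactory& instance() {      // the "global singleton" part
                static ItemFactory f;
                return f;
            }

            // Returns the existing Item for `key` if one is still alive,
            // otherwise creates one (the "local singleton" per key).
            std::shared_ptr<Item> get(const std::string& key) {
                if (auto existing = cache_[key].lock())
                    return existing;
                auto fresh = std::make_shared<Item>(key);
                cache_[key] = fresh;              // weak reference, no ownership
                return fresh;
            }

        private:
            std::map<std::string, std::weak_ptr<Item>> cache_;
        };

        int main() {
            auto a = ItemFactory::instance().get("42");
            auto b = ItemFactory::instance().get("42");
            std::cout << (a == b) << '\n';        // prints 1: reference equality holds
        }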

    Read the article

  • Where are the function literals in c++?

    - by academicRobot
    First of all, maybe literals is not the right term for this concept, but its the closest I could think of (not literals in the sense of functions as first class citizens). The idea is that when you make a conventional function call, it compiles to something like this: callq <immediate address> But if you make a function call using a function pointer, it compiles to something like this: mov <memory location>,%rax callq *%rax Which is all well and good. However, what if I'm writing a template library that requires a callback of some sort with a specified argument list and the user of the library is expected to know what function they want to call at compile time? Then I would like to write my template to accept a function literal as a template parameter. So, similar to template <int int_literal> struct my_template {...};` I'd like to write template <func_literal_t func_literal> struct my_template {...}; and have calls to func_literal within my_template compile to callq <immediate address>. Is there a facility in C++ for this, or a work around to achieve the same effect? If not, why not (e.g. some cataclysmic side effects)? How about C++0x or another language? Solutions that are not portable are fine. Solutions that include the use of member function pointers would be ideal. I'm not particularly interested in being told "You are a <socially unacceptable term for a person of low IQ>, just use function pointers/functors." This is a curiosity based question, and it seems that it might be useful in some (albeit limited) applications. It seems like this should be possible since function names are just placeholders for a (relative) memory address, so why not allow more liberal use (e.g. aliasing) of this placeholder. p.s. I use function pointers and functions objects all the the time and they are great. But this post got me thinking about the don't pay for what you don't use principle in relation to function calls, and it seems like forcing the use of function pointers or similar facility when the function is known at compile time is a violation of this principle, though a small one.
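
    For what it is worth, C++ does allow a function pointer as a non-type template parameter, which gives more or less the effect asked for: the callee is fixed at compile time, so the compiler can emit a direct call or inline it. A small sketch follows; the function and template names are made up for illustration, and the same mechanism also works for member function pointers.

        // A function pointer used as a non-type template parameter: the callee
        // is fixed at compile time, so the generated call is direct (callq with
        // an immediate address) and can be inlined.
        #include <iostream>

        int twice(int x)  { return 2 * x; }
        int square(int x) { return x * x; }

        template <int (*Func)(int)>           // the "function literal" parameter
        struct my_template {
            int apply(int x) const { return Func(x); }   // no pointer load at runtime
        };

        int main() {
            my_template<twice>  a;
            my_template<square> b;
            std::cout << a.apply(21) << ' ' << b.apply(12) << '\n';   // 42 144
        }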

    Read the article

  • Rails learner's confusion

    - by Steve
    This is a beginner's rails learning confusion. When I learn rails, from time to time, I feel frustrated on rails' principle "Convention over Configuration". Rails uses heavily on conventions. A lot of them are just naming conventions. If I forget a convention, I will either use the wrong naming and get unexpected result or get things magically done but don't understand how. Sometimes, I think of configuration. At least configuration lists everything clearly and nothing is in fog. In rails, there seems a hidden, dark contract between you and the machine. If you follow the contract, you communicate well. But a beginner usually forgets items listed on the contract and this usually leads to confusion. That's why when I first pick up rails, I feel like it is somehow difficult to learn. Besides, there are many other things that could be new to a learner, such as using git, using plugins from community, using RESTful routing style, using RSpec. All these are new and come together in learning ruby and rails. This definitely adds up difficulties for a beginner. In contrast, if you learn php, it wouldn't be that bad. You can forget many things and focus on learning php itself. You don't need to learn database handling if you know SQL already(in rails, you need to learn a whole new concept migration), you don't have to learn a new decent unit test(in rails, usually they teach RSpec along the way because rails is agile and you should learn test-driven development in the early learning stage), you don't have to learn a new version control(in rails, you will be taught about git anyway), you don't have to use complicated plugins(in rails, they usually use third-party plugins in textbook examples! what the hell? why not teach how to do a simplified similar thing in rails?), you don't have to worry RESTful style. All in all, when I learn php, I learn it quick and soon I start to write things myself. Learning php is similar to learning C/java. It tastes like those traditional languages. When I learn rails, it is more difficult. And I need to learn ruby as well (I believe many of you learn ruby just because of rails). Does anyone have the similar feeling as I have? How do you overcome it and start to master rails? Hints will be welcomed. Thank you.

    Read the article

  • Stepping into Ruby Meta-Programming: Generating proxy methods for multiple internal methods

    - by mstksg
    Hi all; I've multiply heard Ruby touted for its super spectacular meta-programming capabilities, and I was wondering if anyone could help me get started with this problem. I have a class that works as an "archive" of sorts, with internal methods that process and output data based on an input. However, the items in the archive in the class itself are represented and processed with integers, for performance purposes. The actual items outside of the archive are known by their string representation, which is simply number_representation.to_s(36). Because of this, I have hooked up each internal method with a "proxy method" that converts the input into the integer form that the archive recognizes, runs the internal method, and converts the output (either a single other item, or a collection of them) back into strings. The naming convention is this: internal methods are represented by _method_name; their corresponding proxy method is represented by method_name, with no leading underscore. For example: class Archive ## PROXY METHODS ## ## input: string representation of id's ## output: string representation of id's def do_something_with id result = _do_something_with id.to_i(36) return nil if result == nil return result.to_s(36) end def do_something_with_pair id_1,id_2 result = _do_something_with_pair id_1.to_i(36), id_2.to_i(36) return nil if result == nil return result.to_s(36) end def do_something_with_these ids result = _do_something_with_these ids.map { |n| n.to_i(36) } return nil if result == nil return result.to_s(36) end def get_many_from id result = _get_many_from id return nil if result == nil # no sparse arrays returned return result.map { |n| n.to_s(36) } end ## INTERNAL METHODS ## ## input: integer representation of id's ## output: integer representation of id's def _do_something_with id # does something with one integer-represented id, # returning an id represented as an integer end def do_something_with_pair id_1,id_2 # does something with two integer-represented id's, # returning an id represented as an integer end def _do_something_with_these ids # does something with multiple integer ids, # returning an id represented as an integer end def _get_many_from id # does something with one integer-represented id, # returns a collection of id's represented as integers end end There are a couple of reasons why I can't just convert them if id.class == String at the beginning of the internal methods: These internal methods are somewhat computationally-intensive recursive functions, and I don't want the overhead of checking multiple times at every step There is no way, without adding an extra parameter, to tell whether or not to re-convert at the end I want to think of this as an exercise in understanding ruby meta-programming Does anyone have any ideas? edit The solution I'd like would preferably be able to take an array of method names @@PROXY_METHODS = [:do_something_with, :do_something_with_pair, :do_something_with_these, :get_many_from] iterate through them, and in each iteration, put out the proxy method. I'm not sure what would be done with the arguments, but is there a way to test for arguments of a method? If not, then simple duck typing/analogous concept would do as well.

    Read the article

  • What is an overloaded operator in c++?

    - by Jeff
    I realize this is a basic question but I have searched online, been to cplusplus.com, read through my book, and I can't seem to grasp the concept of overloaded operators. A specific example from cplusplus.com is: // vectors: overloading operators example #include <iostream> using namespace std; class CVector { public: int x,y; CVector () {}; CVector (int,int); CVector operator + (CVector); }; CVector::CVector (int a, int b) { x = a; y = b; } CVector CVector::operator+ (CVector param) { CVector temp; temp.x = x + param.x; temp.y = y + param.y; return (temp); } int main () { CVector a (3,1); CVector b (1,2); CVector c; c = a + b; cout << c.x << "," << c.y; return 0; } from http://www.cplusplus.com/doc/tutorial/classes2/ but reading through it I'm still not understanding them at all. I just need a basic example of the point of the overloaded operator (which I assume is the "CVector CVector::operator+ (CVector param)"). There's also this example from wikipedia: Time operator+(const Time& lhs, const Time& rhs) { Time temp = lhs; temp.seconds += rhs.seconds; if (temp.seconds >= 60) { temp.seconds -= 60; temp.minutes++; } temp.minutes += rhs.minutes; if (temp.minutes >= 60) { temp.minutes -= 60; temp.hours++; } temp.hours += rhs.hours; return temp; } from "http://en.wikipedia.org/wiki/Operator_overloading" The current assignment I'm working on I need to overload a ++ and a -- operator. Thanks in advance for the information and sorry about the somewhat vague question, unfortunately I'm just not sure on it at all.
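
    Since the assignment mentioned above is to overload ++ and --, here is a minimal, self-contained sketch of the usual prefix and postfix signatures (the class is invented for illustration; the dummy int parameter is what marks the postfix form):

        // Prefix and postfix increment/decrement overloads on a tiny class.
        #include <iostream>

        class Counter {
        public:
            explicit Counter(int v = 0) : value_(v) {}

            Counter& operator++()    { ++value_; return *this; }                    // prefix:  ++c
            Counter  operator++(int) { Counter old = *this; ++value_; return old; } // postfix: c++
            Counter& operator--()    { --value_; return *this; }                    // prefix:  --c
            Counter  operator--(int) { Counter old = *this; --value_; return old; } // postfix: c--

            int value() const { return value_; }
        private:
            int value_;
        };

        int main() {
            Counter c(5);
            ++c;
            c++;
            --c;
            std::cout << c.value() << '\n';   // prints 6
        }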

    Read the article

  • What are some things you'd like fresh college grads to know?

    - by bradhe
    So I proposed this to the Reddit community and I'd like to get SO's perspective on this. This is pretty much the copypasta of what I put there. I was thinking about this last night and thought it would be neat to compile a list. I'm still a pretty fresh college grad -- been in industry for 2 years -- but I think that I might have a few interesting things to lend. You don't know as much as you think you do. Somehow, college students think they know a lot more than they do (or maybe that was just me). Likewise, they think they can do more than they actually can. You should fairly assess your skills. QA people are not out to get you. Humans introduce bugs to code. It's not (nescessarily) a personal reflection on you and your skills if your code has a bug and it's caught by the QA/testing team. Listen to your senior (developers). They are not actually fuddy duddies who don't know about the new L337 hax in Ruby (okay, sometimes they are, but still...). They have a wealth of knowledge that you can learn from and it's in your best interest to do so. You will most likely not be doing what you want to for a while. This is mostly true in the corporate world -- startups are a different matter. Also, this is due to more than just the economy, man! Junior devs need to earn their keep, so to speak. Everyone wants to be lead dev on the next project and there are a lot of people in line ahead of you! For every elite developer there are 100 average developers. Joel Spolsky, I'm looking at you. Somehow this concept of ninja coders has really ingrained itself in our culture. While I encourage you to be the best you can be don't be disappointed if people aren't writing blog posts about you in the near future. Anyone else have anything they would see added to this list?

    Read the article

  • Are there compelling reasons not to use Groovy?

    - by Leonard H Martin
    I'm developing a LoB application in Java after a long absence from the platform (having spent the last 8 years or so entrenched in Fortran, C, a smidgin of C++ and latterly .Net). Java, the language, is not much changed from how I remember it. I like it's strengths and I can work around its weaknesses - the platform has grown and deciding upon the myriad of different frameworks which appear to do much the same thing as one another is a different story; but that can wait for another day - all-in-all I'm comfortable with Java. However, over the last couple of weeks I've become enamoured with Groovy, and purely from a selfish point of view: but not just because it makes development against the JVM a more succinct and entertaining (and, well, "groovy") proposition than Java (the language). What strikes me most about Groovy is its inherent maintainability. We all (I hope!) strive to write well documented, easy to understand code. However, sometimes the languages we use themselves defeat us. An example: in 2001 I wrote a library in C to translate EDIFACT EDI messages into ANSI X12 messages. This is not a particularly complicated process, if slightly involved, and I thought at the time I had documented the code properly - and I probably had - but some six years later when I revisited the project (and after becoming acclimatised to C#) I found myself lost in so much C boilerplate (mallocs, pointers, etc. etc.) that it took three days of thoughtful analysis before I finally understood what I'd been doing six years previously. This evening I've written about 2000 lines of Java (it is the day of rest, after all!). I've documented as best as I know how, but, but, of those 2000 lines of Java a significant proportion is Java boiler plate. This is where I see Groovy and other dynamic languages winning through - maintainability and later comprehension. Groovy lets you concentrate on your intent without getting bogged down on the platform specific implementation; it's almost, but not quite, self documenting. I see this as being a huge boon to me when I revisit my current project (which I'll port to Groovy asap) in several years time and to my successors who will inherit it and carry on the good work. So, are there any reasons not to use Groovy?

    Read the article

  • In R, how do you get the best fitting equation to a set of data?

    - by Matherion
    I'm not sure wether R can do this (I assume it can, but maybe that's just because I tend to assume that R can do anything :-)). What I need is to find the best fitting equation to describe a dataset. For example, if you have these points: df = data.frame(x = c(1, 5, 10, 25, 50, 100), y = c(100, 75, 50, 40, 30, 25)) How do you get the best fitting equation? I know that you can get the best fitting curve with: plot(loess(df$y ~ df$x)) But as I understood you can't extract the equation, see Loess Fit and Resulting Equation. When I try to build it myself (note, I'm not a mathematician, so this is probably not the ideal approach :-)), I end up with smth like: y.predicted = 12.71 + ( 95 / (( (1 + df$x) ^ .5 ) / 1.3)) Which kind of seems to approximate it - but I can't help to think that smth more elegant probably exists :-) I have the feeling that fitting a linear or polynomial model also wouldn't work, because the formula seems different from what those models generally use (i.e. this one seems to need divisions, powers, etc). For example, the approach in Fitting polynomial model to data in R gives pretty bad approximations. I remember from a long time ago that there exist languages (Matlab may be one of them?) that do this kind of stuff. Can R do this as well, or am I just at the wrong place? (Background info: basically, what we need to do is find an equation for determining numbers in the second column based on the numbers in the first column; but we decide the numbers ourselves. We have an idea of how we want the curve to look like, but we can adjust these numbers to an equation if we get a better fit. It's about the pricing for a product (a cheaper alternative to current expensive software for qualitative data analysis); the more 'project credits' you buy, the cheaper it should become. Rather than forcing people to buy a given number (i.e. 5 or 10 or 25), it would be nicer to have a formula so people can buy exactly what they need - but of course this requires a formula. We have an idea for some prices we think are ok, but now we need to translate this into an equation.

    Read the article

  • Why is boost::recursive_mutex not working as expected?

    - by Kjir
    I have a custom class that uses boost mutexes and locks like this (only relevant parts): template<class T> class FFTBuf { public: FFTBuf(); [...] void lock(); void unlock(); private: T *_dst; int _siglen; int _processed_sums; int _expected_sums; int _assigned_sources; bool _written; boost::recursive_mutex _mut; boost::unique_lock<boost::recursive_mutex> _lock; }; template<class T> FFTBuf<T>::FFTBuf() : _dst(NULL), _siglen(0), _expected_sums(1), _processed_sums(0), _assigned_sources(0), _written(false), _lock(_mut, boost::defer_lock_t()) { } template<class T> void FFTBuf<T>::lock() { std::cerr << "Locking" << std::endl; _lock.lock(); std::cerr << "Locked" << std::endl; } template<class T> void FFTBuf<T>::unlock() { std::cerr << "Unlocking" << std::endl; _lock.unlock(); } If I try to lock more than once the object from the same thread, I get an exception (lock_error): #include "fft_buf.hpp" int main( void ) { FFTBuf<int> b( 256 ); b.lock(); b.lock(); b.unlock(); b.unlock(); return 0; } This is the output: sb@dex $ ./src/test Locking Locked Locking terminate called after throwing an instance of 'boost::lock_error' what(): boost::lock_error zsh: abort ./src/test Why is this happening? Am I understanding some concept incorrectly?
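
    A likely reading of the error: a recursive mutex can be locked repeatedly by the same thread, but a single unique_lock object refuses to lock twice while it already owns the mutex, and Boost reports that as lock_error. One common fix is to forward lock() and unlock() to the mutex itself rather than keeping a long-lived lock member. The sketch below shows the idea with std::recursive_mutex, which behaves the same way here as the Boost type; the class is heavily simplified for illustration.

        // A recursive mutex may be locked repeatedly by one thread, but a single
        // unique_lock object throws if asked to lock while it already owns the
        // mutex. Locking the mutex itself avoids that.
        #include <iostream>
        #include <mutex>

        template <class T>
        class FFTBuf {
        public:
            void lock()   { _mut.lock();   std::cerr << "Locked" << std::endl; }
            void unlock() { std::cerr << "Unlocking" << std::endl; _mut.unlock(); }
        private:
            std::recursive_mutex _mut;   // no long-lived unique_lock member
        };

        int main() {
            FFTBuf<int> b;
            b.lock();
            b.lock();      // fine: the recursive mutex just counts a second acquisition
            b.unlock();
            b.unlock();
            return 0;
        }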

    Read the article

  • What are the rules governing how a bind variable can be used in Postgres and where is this defined?

    - by Craig Miles
    I can have a table and function defined as: CREATE TABLE mytable ( mycol integer ); INSERT INTO mytable VALUES (1); CREATE OR REPLACE FUNCTION myfunction (l_myvar integer) RETURNS mytable AS $$ DECLARE l_myrow mytable; BEGIN SELECT * INTO l_myrow FROM mytable WHERE mycol = l_myvar; RETURN l_myrow; END; $$ LANGUAGE plpgsql; In this case l_myvar acts as a bind variable for the value passed when I call: SELECT * FROM myfunction(1); and returns the row where mycol = 1 If I redefine the function as: CREATE OR REPLACE FUNCTION myfunction (l_myvar integer) RETURNS mytable AS $$ DECLARE l_myrow mytable; BEGIN SELECT * INTO l_myrow FROM mytable WHERE mycol IN (l_myvar); RETURN l_myrow; END; $$ LANGUAGE plpgsql; SELECT * FROM myfunction(1); still returns the row where mycol = 1 However, if I now change the function definition to allow me to pass an integer array and try to this array in the IN clause, I get an error: CREATE OR REPLACE FUNCTION myfunction (l_myvar integer[]) RETURNS mytable AS $$ DECLARE l_myrow mytable; BEGIN SELECT * INTO l_myrow FROM mytable WHERE mycol IN (array_to_string(l_myvar, ',')); RETURN l_myrow; END; $$ LANGUAGE plpgsql; Analysis reveals that although: SELECT array_to_string(ARRAY[1, 2], ','); returns 1,2 as expected SELECT * FROM myfunction(ARRAY[1, 2]); returns the error operator does not exist: integer = text at the line: WHERE mycol IN (array_to_string(l_myvar, ',')); If I execute: SELECT * FROM mytable WHERE mycol IN (1,2); I get the expected result. Given that array_to_string(l_myvar, ',') evaluates to 1,2 as shown, why arent these statements equivalent. From the error message it is something to do with datatypes, but doesnt the IN(variable) construct appear to be behaving differently from the = variable construct? What are the rules here? I know that I could build a statement to EXECUTE, treating everything as a string, to achieve what I want to do, so I am not looking for that as a solution. I do want to understand though what is going on in this example. Is there a modification to this approach to make it work, the particular example being to pass in an array of values to build a dynamic IN clause without resorting to EXECUTE? Thanks in advance Craig

    Read the article

  • Unable to get set intersection to work

    - by chavanak
    Sorry for the double post, I will update this question if I can't get things to work :) I am trying to compare two files. I will list the two file content: File 1 File 2 "d.complex.1" "d.complex.1" 1 4 5 5 48 47 65 21 d.complex.10 d.complex.10 46 6 21 46 109 121 192 192 TI am trying to compare the contents of the two file but not in a trivial way. I will explain what I want with an example. If you observe the file content I have typed above, the d.complex.1 of file_1 has "5" similar to d.complex.1 in file_2; the same d.complex.1 in file_1 has nothing similar to d.complex.10 in file_2. What I am trying to do is just to print out those d.complex. which has nothing in similar with the other d.complex. Consider the d.complex. as a heading if you want. But all I am trying is compare the numbers below each d.complex. and if nothing matches, I want that particular d.complex. from both files to be printed. If even one number is present in both d.complex. of both files, I want it to be rejected. My Code: The method I chose to achieve this was to use sets and then do a difference. Code I wrote was: first_complex=open( "file1.txt", "r" ) first_complex_lines=first_complex.readlines() first_complex_lines=map( string.strip, first_complex_lines ) first_complex.close() second_complex=open( "file2.txt", "r" ) second_complex_lines=second_complex.readlines() second_complex_lines=map( string.strip, second_complex_lines ) second_complex.close() list_1=[] list_2=[] res_1=[] for line in first_complex_lines: if line.startswith( "d.complex" ): res_1.append( [] ) res_1[-1].append( line ) res_2=[] for line in second_complex_lines: if line.startswith( "d.complex" ): res_2.append( [] ) res_2[-1].append( line ) h=len( res_1 ) k=len( res_2 ) for i in res_1: for j in res_2: print i[0] print j[0] target_set=set ( i ) target_set_1=set( j ) for s in target_set: if s not in target_set_1: if s[0] != "d": print s The above code is giving an output like this (just an example): d.complex.1.dssp d.complex.1.dssp 1 48 65 d.complex.1.dssp d.complex.10.dssp 46 21 109 What I would like to have is: d.complex.1 d.complex.1 (name from file2) d.complex.1 d.complex.10 (name from file2) I am sorry for confusing you guys, but this is all that is required. I am so new to python so my concept above might be flawed. Also I have never used sets before :(. Can someone give me a hand here?

    Read the article

  • Swing: How do I run a job from the AWT thread, but after a window was laid out?

    - by java.is.for.desktop
    My complete GUI runs inside the AWT thread, because I start the main window using SwingUtilities.invokeAndWait(...). Now I have a JDialog which has just to display a JLabel, which indicates that a certain job is in progress, and close that dialog after the job was finished. The problem is: the label is not displayed. That job seems to be started before JDialog was fully layed-out. When I just let the dialog open without waiting for a job and closing, the label is displayed. The last thing the dialog does in its ctor is setVisible(true). Things such as revalidate(), repaint(), ... don't help either. Even when I start a thread for the monitored job, and wait for it using someThread.join() it doesn't help, because the current thread (which is the AWT thread) is blocked by join, I guess. Replacing JDialog with JFrame doesn't help either. So, is the concept wrong in general? Or can I manage it to do certain job after it is ensured that a JDialog (or JFrame) is fully layed-out? Simplified algorithm of what I'm trying to achieve: Create a subclass of JDialog Ensure that it and its contents are fully layed-out Start a process and wait for it to finish (threaded or not, doesn't matter) Close the dialog I managed to write a reproducible test case: EDIT Problem from an answer is now addressed: This use case does display the label, but it fails to close after the "simulated process", because of dialog's modality. import java.awt.*; import javax.swing.*; public class _DialogTest2 { public static void main(String[] args) throws Exception { SwingUtilities.invokeAndWait(new Runnable() { final JLabel jLabel = new JLabel("Please wait..."); @Override public void run() { JFrame myFrame = new JFrame("Main frame"); myFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); myFrame.setSize(750, 500); myFrame.setLocationRelativeTo(null); myFrame.setVisible(true); JDialog d = new JDialog(myFrame, "I'm waiting"); d.setModalityType(Dialog.ModalityType.APPLICATION_MODAL); d.add(jLabel); d.setSize(300, 200); d.setLocationRelativeTo(null); d.setVisible(true); SwingUtilities.invokeLater(new Runnable() { @Override public void run() { try { Thread.sleep(3000); // simulate process jLabel.setText("Done"); } catch (InterruptedException ex) { } } }); d.setVisible(false); d.dispose(); myFrame.setVisible(false); myFrame.dispose(); } }); } }

    Read the article

  • Finding open contiguous blocks of time for every day of a month, fast

    - by Chris
    I am working on a booking availability system for a group of several venues, and am having a hard time generating the availability of time blocks for days in a given month. This is happening server-side in PHP, but the concept itself is language agnostic -- I could be doing this in JS or anything else. Given a venue_id, month, and year (6/2012 for example), I have a list of all events occurring in that range at that venue, represented as unix timestamps start and end. This data comes from the database. I need to establish what, if any, contiguous block of time of a minimum length (different per venue) exist on each day. For example, on 6/1 I have an event between 2:00pm and 7:00pm. The minimum time is 5 hours, so there's a block open there from 9am - 2pm and another between 7pm and 12pm. This would continue for the 2nd, 3rd, etc... every day of June. Some (most) of the days have nothing happening at all, some have 1 - 3 events. The solution I came up with works, but it also takes waaaay too long to generate the data. Basically, I loop every day of the month and create an array of timestamps for each 15 minutes of that day. Then, I loop the time spans of events from that day by 15 minutes, marking any "taken" timeslot as false. Remaining, I have an array that contains timestamp of free time vs. taken time: //one day's array after processing through loops (not real timestamps) array( 12345678=>12345678, // <--- avail 12345878=>12345878, 12346078=>12346078, 12346278=>false, // <--- not avail 12346478=>false, 12346678=>false, 12346878=>false, 12347078=>12347078, // <--- avail 12347278=>12347278 ) Now I would need to loop THIS array to find continuous time blocks, then check to see if they are long enough (each venue has a minimum), and if so then establish the descriptive text for their start and end (i.e. 9am - 2pm). WHEW! By the time all this looping is done, the user has grown bored and wandered off to Youtube to watch videos of puppies; it takes ages to so examine 30 or so days. Is there a faster way to solve this issue? To summarize the problem, given time ranges t1 and t2 on day d, how can I determine the remaining time left in d that is longer than the minimum time block m. This data is assembled on demand via AJAX as the user moves between calendar months. Results are cached per-page-load, so if the user goes to July a second time, the data that was generated the first time would be reused. Any other details that would help, let me know. Edit Per request, the database structure (or the part that is relevant here) *events* id (bigint) title (varchar) *event_times* id (bigint) event_id (bigint) venue_id (bigint) start (bigint) end (bigint) *venues* id (bigint) name (varchar) min_block (int) min_start (varchar) max_start (varchar)

    Read the article

  • Controlling the USB from Windows

    - by b-gen-jack-o-neill
    Hi, I know this probably is not the easiest thing to do, but I am trying to connect Microcontroller and PC using USB. I dont want to use internal USART of Microcontroller or USB to RS232 converted, its project indended to help me understand various principles. So, getting the communication done from the Microcontroller side is piece of cake - I mean, when I know he protocol, its relativelly easy to implement it on Micro, becouse I am in direct control of evrything, even precise timing. But this is not the case of PC. I am not very familiar with concept of Windows handling the devices connected. In one of my previous question I ask about how Windows works with devices thru drivers. I understood that for internal use of Windows, drivers must have some default set of functions available to OS. I mean, when OS wants to access HDD, it calls HDD driver (which is probably internal in OS), with specific "questions" so that means that HDD driver has to be written to cooperate with Windows, to have write function in the proper place to be called by the OS. Something similiar is for GPU, Even DirectX, I mean DirectX must call specific functions from drivers, so drivers must be written to work with DX. I know, many functions from WinAPI works on their own, but even "simple" window must be in the end written into framebuffer, using MMIO to adress specified by drivers. Am I right? So, I expected that Windows have internal functions, parts of WinAPI designed to work with certain comonly used things. To call manufacturer-designed drivers. But this seems to not be entirely true becouse Windows has no way to communicate thru Paralel port. I mean, there is no function in the WinAPI to work with serial port, but there are funcions to work with HDD,GPU and so. But now there comes the part I am getting very lost at. So, I think Windows must have some built-in functions to communicate thru USB, becouse for example it handles USB flash memory. So, is there any WinAPI function designed to let user to operate USB thru that function, or when I want to use USB myself, do I have to call desired USB-driver function myself? Becouse all you need to send to USB controller is device adress and the infromation right? I mean, I don´t have to write any new drivers, am I right? Just to call WinAPI function if there is such, or directly call original USB driver. Does any of this make some sense?
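
    One point worth illustrating: Win32 does expose serial devices to user code, so a microcontroller that enumerates as a virtual COM port (or sits behind a USB-to-serial bridge) can be reached with plain CreateFile/ReadFile/WriteFile, with no driver writing required; raw USB endpoints instead go through WinUSB or a vendor driver, which is not shown here. The sketch below is Windows-only, and "COM3" is a placeholder device name.

        // Windows-only sketch: open a (virtual) COM port and send a few bytes
        // using plain Win32 calls.
        #include <windows.h>
        #include <iostream>

        int main() {
            HANDLE port = CreateFileA("\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                                      0, NULL, OPEN_EXISTING, 0, NULL);
            if (port == INVALID_HANDLE_VALUE) {
                std::cerr << "could not open COM3, error " << GetLastError() << '\n';
                return 1;
            }
            const char msg[] = "hello";
            DWORD written = 0;
            WriteFile(port, msg, static_cast<DWORD>(sizeof msg - 1), &written, NULL);
            std::cout << "wrote " << written << " bytes\n";
            CloseHandle(port);
            return 0;
        }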

    Read the article

  • Is it possible to create a C++ factory system that can create an instance of any "registered" object

    - by chrensli
    Hello, I've spent my entire day researching this topic, so it is with some scattered knowledge on the topic that i come to you with this inquiry. Please allow me to describe what I am attempting to accomplish, and maybe you can either suggest a solution to the immediate question, or another way to tackle the problem entirely. I am trying to mimic something related to how XAML files work in WPF, where you are essentially instantiating an object tree from an XML definition. If this is incorrect, please inform. This issue is otherwise unrelated to WPF, C#, or anything managed - I solely mention it because it is a similar concept.. So, I've created an XML parser class already, and generated a node tree based on ObjectNode objects. ObjectNode objects hold a string value called type, and they have an std::vector of child ObjectNode objects. The next step is to instantiate a tree of objects based on the data in the ObjectNode tree. This intermediate ObjectNode tree is needed because the same ObjectNode tree might be instantiated multiple times or delayed as needed. The tree of objects that is being created is such that the nodes in the tree are descendants of a common base class, which for now we can refer to as MyBase. Leaf nodes can be of any type, not necessarily derived from MyBase. To make this more challenging, I will not know what types of MyBase derived objects might be involved, so I need to allow for new types to be registered with the factory. I am aware of boost's factory. Their docs have an interesting little design paragraph on this page: o We may want a factory that takes some arguments that are forwarded to the constructor, o we will probably want to use smart pointers, o we may want several member functions to create different kinds of objects, o we might not necessarily need a polymorphic base class for the objects, o as we will see, we do not need a factory base class at all, o we might want to just call the constructor - without #new# to create an object on the stack, and o finally we might want to use customized memory management. I might not be understanding this all correctly, but that seems to state that what I'm trying to do can be accomplished with boost's factory. But all the examples I've located, seem to describe factories where all objects are derived from a base type. Any guidance on this would be greatly appreciated. Thanks for your time!
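
    The usual answer to this is a registry of creator functions keyed by the tag string: each node type registers a small factory lambda once, and the XML walker asks the registry to build instances by name, so the factory itself never needs to know the concrete types. A hedged C++ sketch follows; it assumes a common MyBase for brevity (leaf values that are not MyBase-derived could be held in something like std::any instead), and all class names besides MyBase are invented.

        // Registry of creator functions keyed by tag name. Registering a new
        // node type is one line; the factory never needs editing.
        #include <functional>
        #include <iostream>
        #include <map>
        #include <memory>
        #include <string>

        struct MyBase {
            virtual ~MyBase() = default;
            virtual std::string name() const = 0;
        };

        class Registry {
        public:
            using Creator = std::function<std::unique_ptr<MyBase>()>;

            static Registry& instance() {
                static Registry r;
                return r;
            }
            void add(const std::string& tag, Creator c) { creators_[tag] = std::move(c); }

            std::unique_ptr<MyBase> create(const std::string& tag) const {
                auto it = creators_.find(tag);
                if (it == creators_.end()) return nullptr;   // unknown tag
                return it->second();
            }
        private:
            std::map<std::string, Creator> creators_;
        };

        struct Button : MyBase { std::string name() const override { return "Button"; } };
        struct Panel  : MyBase { std::string name() const override { return "Panel";  } };

        int main() {
            Registry::instance().add("Button", [] { return std::make_unique<Button>(); });
            Registry::instance().add("Panel",  [] { return std::make_unique<Panel>();  });

            auto node = Registry::instance().create("Button");  // tag would come from the ObjectNode tree
            std::cout << (node ? node->name() : "unknown tag") << '\n';   // prints Button
        }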

    Read the article

  • Is throwing an exception a healthy way to exit?

    - by ramaseshan
    I have a setup that looks like this. class Checker { // member data Results m_results; // see below public: bool Check(); private: bool Check1(); bool Check2(); // .. so on }; Checker is a class that performs lengthy check computations for engineering analysis. Each type of check has a resultant double that the checker stores. (see below) bool Checker::Check() { // initilisations etc. Check1(); Check2(); // ... so on } A typical Check function would look like this: bool Checker::Check1() { double result; // lots of code m_results.SetCheck1Result(result); } And the results class looks something like this: class Results { double m_check1Result; double m_check2Result; // ... public: void SetCheck1Result(double d); double GetOverallResult() { return max(m_check1Result, m_check2Result, ...); } }; Note: all code is oversimplified. The Checker and Result classes were initially written to perform all checks and return an overall double result. There is now a new requirement where I only need to know if any of the results exceeds 1. If it does, subsequent checks need not be carried out(it's an optimisation). To achieve this, I could either: Modify every CheckN function to keep check for result and return. The parent Check function would keep checking m_results. OR In the Results::SetCheckNResults(), throw an exception if the value exceeds 1 and catch it at the end of Checker::Check(). The first is tedious, error prone and sub-optimal because every CheckN function further branches out into sub-checks etc. The second is non-intrusive and quick. One disadvantage is I can think of is that the Checker code may not necessarily be exception-safe(although there is no other exception being thrown anywhere else). Is there anything else that's obvious that I'm overlooking? What about the cost of throwing exceptions and stack unwinding? Is there a better 3rd option?
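
    A third option the post seems to be reaching for: let Results remember whether any check has crossed 1, and have the single top-level Check() consult that flag between sub-checks. Only one function changes, no exception is used for control flow, and exception safety stops being a concern. The sketch below is heavily simplified and illustrative; the class and member names follow the post, the bodies do not.

        // Early exit driven by a flag in Results rather than by an exception.
        #include <algorithm>
        #include <iostream>

        class Results {
        public:
            void SetCheck1Result(double d) { m_check1 = d; }
            void SetCheck2Result(double d) { m_check2 = d; }
            bool LimitExceeded() const { return std::max(m_check1, m_check2) > 1.0; }
            double GetOverallResult() const { return std::max(m_check1, m_check2); }
        private:
            double m_check1 = 0.0;
            double m_check2 = 0.0;
        };

        class Checker {
        public:
            bool Check() {
                Check1();
                if (m_results.LimitExceeded()) return false;   // skip the remaining checks
                Check2();
                return !m_results.LimitExceeded();
            }
        private:
            void Check1() { m_results.SetCheck1Result(1.2); }  // pretend computation
            void Check2() { std::cout << "never reached\n"; m_results.SetCheck2Result(0.3); }
            Results m_results;
        };

        int main() {
            Checker c;
            std::cout << (c.Check() ? "all checks passed" : "limit exceeded, stopped early") << '\n';
        }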

    Read the article

  • How to ensure structures are completely initialized (by name) in GCC?

    - by Steven Spark
    How do I ensure each and every field of my structures are initialized in GCC when using designated initializers? (I'm especially interested in function pointers.) (I'm using C not C++.) Here is an example: typedef struct { int a; int b; } foo_t; typedef struct { void (*Start)(void); void (*Stop)(void); } bar_t; foo_t fooo = { 5 }; foo_t food = { .b=4 }; bar_t baro = { NULL }; bar_t bard = { .Start = NULL }; -Wmissing-field-initializers does not help at all. It works for fooo only in GCC (mingw 4.7.3, 4.8.1), and clang does only marginally better (no warnings for food and bard). I'm sure there is a reason for not producing warnings for designated initializer (even when I explicitly ask for them) but I want/need them. I do not want to initialize structures based on order/position because that is more error prone (for example swapping Start and Stop won't even give any warning). And neither gcc nor clang will give any warning that I failed to explicitly initialize a field (when initializing by name). I also don't want to litter my code with if(x.y==NULL) lines for multiple reasons, one of which is I want compile time warnings and not runtime errors. At least splint will give me warnings on all 4 cases, but unfortunately I cannot use splint all the time (it chokes on some of the code (fails to parse some C99, GCC extensions)). Note: If I'm using a real function instead of NULL GCC will also show a warning for baro (but not bard). I searched google and stack overflow but only found related questions and have not found answer for this specific problem. The best match I have found is 'Ensure that all elements in a structure are initialized' Ensure that all elements in a structure are initialized Which asks pretty much the same question, but has no satisfying answer. Is there a better way dealing with this that I have not mentioned? (Maybe other code analysis tool? Preferably something (free) that can be integrated into Eclipse or Visual Studio...)

    Read the article

  • Are there any CMS editors out there with which users can populate locked-down HTML templates with content?

    - by Deep
    Hi there, We work in email marketing, creating HTML/TEXT emails for clients. In essence we design HTML email templates for our clients. Clients then post us content (via a form) to populate these templates before we send them out. Right now we do this manually, basically cutting and pasting the content from their submitted form into the relevant parts of the template, which is time consuming and particularly mind-numbing. What we're looking for (and have so far been unable to find) is a simple system which will allow us to capture this client content in a sort of WYSIWYG HTML format. Basically they populate a locked down version of the template, entering text where necessary, before submitting to us. This is our most basic requirement, and a friend of mine kindly demo'd a proof of concept here: http://advantageone.co.uk/mbe/ Note: If you click on a text area in the body of the template, an editor pop ups. Now what we are looking for a CMS editor out there which can be easily adapted to do the above and the following for our end clients? User login View previously submitted campaigns that they have created and edit these Create new - selecting from template (assigned to their user/client id), perhaps being able to add new rows to the template. And have these HTML templates locked down so they can only edit what they're allowed too (like in the demo above), and perhaps make some areas required. Perhaps have a simple workflow or approval built in Allow us to lock submitted campaigns after a point so they can't be further edited, and as administrators view all campaigns from all users Be so incredibly simple, with any extraneous functionality switched off Essentially an extremley simple stripped down CMS, but we use the outputted HTML for sending out as an email, rather than publishing onto the web. Now to the actual dilemma: we're looking for something really simple, and the above sounds like a CMS. But we haven't been able to find anything that already does, or can be easily adapted to do this. Everything is either too complex, or simple and inflexible. We're sure there must be something off the shelf available, rather than us coding something ourselves. But we've kind of got stuck. Does anyone know of a system, or could recommend a system that can do the above out of the box, or with a few days tweaking? Forgive me if this is a little disjointed, if I'm being incredibly dopey and there is something out there please let me know! Kind regards, Dp.

    Read the article

  • How to check total cache size using a program

    - by user1888541
    so I'm having some trouble creating a program to measure cache size in C. I understand the basic concept of going about this but I'm still having trouble figuring out exactly what I am doing wrong. Basically, I create an array of varying length (going by power of 2s) and access each element in the array and put it in a dummy variable. I go through the array and do this around 1000 times to negate the "noise" that would otherwise occur if I only did it once to get an accurate measurement for time. Then, I look for the size that causes a big jump in access time. Unfortunately, this is where I am having my problem, I don't see this jump using my code and clearly I am doing something wrong. Another thing is that I used /proc/cpuinfo to check the cache and it said the size was 6114 but that was not a power of 2. I was told to go by powers of 2 to figure out the cache can anyone explain why this is? Here is the just of my code...I will post the rest if need be { struct timeval start; struct timeval end; // int n = 1; // change this to test different sizes int array_size = 1048576*n; // I'm trying to check the time "manually" first before creating a loop for the program to do it by itself this is why I have a separate "n" variable to increase the size char x = 0; int i =0, j=0; char *a; a =malloc(sizeof(char) * (array_size)); gettimeofday(&start,NULL); for(i=0; i<1000; i++) { for(j=0; j < array_size; j += 1) { x = a[j]; } } gettimeofday(&end,NULL); int timeTaken = (end.tv_sec * 1000000 + end.tv_usec) - (start.tv_sec *1000000 + start.tv_usec); printf("Time Taken: %d \n", timeTaken); printf("Average: %f \n", (double)timeTaken/((double)array_size); }
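
    Two details usually explain flat timings like this: sequential byte reads let the hardware prefetcher hide the misses, and a compiler may drop a loop whose result is never used. A sketch of the usual remedy is below, written in C++ for brevity even though the post is in C: walk the buffer with a cache-line stride and accumulate into a volatile sink. Also, cache sizes themselves need not be powers of two (6144 KB is a common L2 size, which is probably what /proc/cpuinfo reported); it is only the test sizes that are conventionally stepped in powers of two.

        // Time one access per cache line for growing working sets and look for
        // jumps in the per-access latency. Sizes and pass counts are arbitrary.
        #include <chrono>
        #include <iostream>
        #include <vector>

        int main() {
            const std::size_t stride = 64;                       // typical cache-line size
            for (std::size_t kb = 4; kb <= 32 * 1024; kb *= 2) { // 4 KB .. 32 MB
                std::vector<char> buf(kb * 1024, 1);
                volatile char sink = 0;

                auto t0 = std::chrono::steady_clock::now();
                for (int pass = 0; pass < 100; ++pass)
                    for (std::size_t i = 0; i < buf.size(); i += stride)
                        sink = sink + buf[i];                    // forced read per cache line
                auto t1 = std::chrono::steady_clock::now();

                double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
                double accesses = 100.0 * (buf.size() / stride);
                std::cout << kb << " KB: " << ns / accesses << " ns per access\n";
                // Expect visible jumps roughly where the working set outgrows L1, L2, L3.
            }
        }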

    Read the article

  • HP SmartArray P400: How to repair failed logical drive?

    - by TegtmeierDE
    I have a HP Server with SmartArray P400 controller (incl. 256 MB Cache/Battery Backup) with a logicaldrive with replaced failed physicaldrive that does not rebuild. This is how it looked when I detected the error: ~# /usr/sbin/hpacucli ctrl slot=0 show config Smart Array P400 in Slot 0 (Embedded) (sn: XXXX) array A (SATA, Unused Space: 0 MB) logicaldrive 1 (698.6 GB, RAID 1, OK) physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 750 GB, OK) physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 750 GB, OK) array B (SATA, Unused Space: 0 MB) logicaldrive 2 (2.7 TB, RAID 5, Failed) physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 750 GB, OK) physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 750 GB, OK) physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SATA, 750 GB, OK) physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SATA, 750 GB, Failed) physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SATA, 750 GB, OK) unassigned physicaldrive 2I:1:8 (port 2I:box 1:bay 8, SATA, 750 GB, OK) ~# I thought that I had drive 2I:1:8 configured as a spare for Array A and Array B, but it seems this was not the case :-(. I noticed the problem due to I/O errors on the host, even if only 1 physicaldrive of the RAID5 is failed. Does someone know why this could happen? The logicaldrive should go into "Degraded" mode but still be fully accessible from the host os!? I first tried to add the unassigned drive 2I:1:8 as a spare to logicaldrive 2, but this was not possible: ~# /usr/sbin/hpacucli ctrl slot=0 array B add spares=2I:1:8 Error: This operation is not supported with the current configuration. Use the "show" command on devices to show additional details about the configuration. ~# Interestingly it is possible to add the unassigned drive to the first array without problems. I thought maybe the controller put the array into "failed" state due to the missing spare and protects failed arrays from modification. So I tried was to reenable the logicaldrive (to add the spare afterwards): ~# /usr/sbin/hpacucli ctrl slot=0 ld 2 modify reenable Warning: Any previously existing data on the logical drive may not be valid or recoverable. Continue? (y/n) y Error: This operation is not supported with the current configuration. Use the "show" command on devices to show additional details about the configuration. ~# But as you can see, re-enabling the logicaldrive this was not possible. Now I replaced the failed drive by hotswapping it with the unassigned drive. The status now looks like this: ~# /usr/sbin/hpacucli ctrl slot=0 show config Smart Array P400 in Slot 0 (Embedded) (sn: XXXX) array A (SATA, Unused Space: 0 MB) logicaldrive 1 (698.6 GB, RAID 1, OK) physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 750 GB, OK) physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 750 GB, OK) array B (SATA, Unused Space: 0 MB) logicaldrive 2 (2.7 TB, RAID 5, Failed) physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 750 GB, OK) physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 750 GB, OK) physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SATA, 750 GB, OK) physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SATA, 750 GB, OK) physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SATA, 750 GB, OK) ~# The logical drive is still not accessible. Why is it not rebuilding? What can I do? 
FYI, this is the configuration of my controller: ~# /usr/sbin/hpacucli ctrl slot=0 show Smart Array P400 in Slot 0 (Embedded) Bus Interface: PCI Slot: 0 Serial Number: XXXX Cache Serial Number: XXXX RAID 6 (ADG) Status: Enabled Controller Status: OK Chassis Slot: Hardware Revision: Rev E Firmware Version: 5.22 Rebuild Priority: Medium Expand Priority: Medium Surface Scan Delay: 15 secs Surface Analysis Inconsistency Notification: Disabled Raid1 Write Buffering: Disabled Post Prompt Timeout: 0 secs Cache Board Present: True Cache Status: OK Accelerator Ratio: 25% Read / 75% Write Drive Write Cache: Disabled Total Cache Size: 256 MB No-Battery Write Cache: Disabled Cache Backup Power Source: Batteries Battery/Capacitor Count: 1 Battery/Capacitor Status: OK SATA NCQ Supported: True ~# Thanks for you help in advance.

    Read the article
