Search Results

Search found 9017 results on 361 pages for 'efficient storage'.

Page 40/361 | < Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >

  • Looking for efficient scaling patterns for Silverlight application with distributed text-file data storage

    - by Edward Tanguay
    I'm designing a Silverlight software solution for students and teachers to record flashcards, e.g. words and phrases that students find while reading and errors that teachers notice while teaching. The requirements are:

    - each person publishes his own flashcards in a file on a web server, e.g. http://www.mywebserver.com/flashcards.txt
    - other people subscribe to that person's flashcards by using a Silverlight flashcard reader that I have developed and entering the URLs of the flashcard files they want to subscribe to; URLs and imported flashcards are saved in IsolatedStorage
    - the flashcards.txt file has the following simple format: a title, then blocks of question/answer pairs:

        Jim Smith's flashcards from English class 53-222, winter semester 2009
        ==fla
        Das kann nicht sein.
        That can't be.
        ==fla
        Es sei denn, er kommt nicht.
        Unless he doesn't come.

    The user then makes the URL to his flashcard file public, and other readers begin reading in his flashcards. To lower the bar for non-technical users to contribute, it will even be possible for them to save this text in a Google Document, publish it, and distribute the URL. The flashcard readers will then recognize that it is a Google Document and perform the necessary screen scraping to get at the raw text.

    I have two technical questions about this approach:

    1. What is the best way to plan now for scalability issues? E.g. if your reader is subscribed to 10 flashcard files that are each 200K, it has to download 2MB of text just to find out whether any new flashcards are available. Or can I somehow accurately and consistently get at the last update date/time of text files on servers and of published Google Docs? (See the sketch below.)

    2. Each reader will have the ability to let the person test himself on imported flashcards and add meta information to them, e.g. categorize them, edit them, etc. This information will be stored in IsolatedStorage along with the imported flashcards themselves. What is a good pattern to allow these readers to share and synchronize this metadata, e.g. so that when you are looking at a flashcard you can see that 5 other people have made corrections to it? The best solution I can think of now is that the Silverlight readers will have to republish their data to a central database, but then there is the problem of uniquely identifying each flashcard; the best approaches seem to be URL + position-in-file, or even better URL + original text of both question and answer fields, but both of these have obvious drawbacks.

    The main requirement is that the bar for participation is kept as low as possible: type text in a Google Document, publish it, distribute the URL, and you're publishing within the flashcard community. So I want to come up with the most efficient technical solutions to compensate for the lack of a database, lack of unique ids, etc. For those who have designed or developed similar non-traditional, distributed database projects like this, what advice, experience or best-practice tips can you share on the above two points?
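
    A minimal sketch of the conditional-check idea from question 1, assuming the flashcard URLs are plain static files whose servers send a Last-Modified header (published Google Docs may not, in which case you fall back to a full download). The URL and class name here are hypothetical, and the asker's client is Silverlight/C#; the HTTP sequence is the same:

        import java.net.HttpURLConnection;
        import java.net.URL;

        public class FlashcardFeedChecker {
            // Returns true if the feed at feedUrl may have changed since lastSeenMillis,
            // using a HEAD request so the 200K body is never downloaded.
            public static boolean hasChanged(String feedUrl, long lastSeenMillis) throws Exception {
                HttpURLConnection conn = (HttpURLConnection) new URL(feedUrl).openConnection();
                conn.setRequestMethod("HEAD");               // headers only, no body
                long lastModified = conn.getLastModified();  // 0 if the server sent no Last-Modified
                conn.disconnect();
                // If the server gives us nothing to compare against, assume it changed.
                return lastModified == 0 || lastModified > lastSeenMillis;
            }

            public static void main(String[] args) throws Exception {
                // hypothetical subscription
                if (hasChanged("http://www.mywebserver.com/flashcards.txt", 0L)) {
                    System.out.println("new flashcards may be available; download the file");
                }
            }
        }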

    Read the article

  • The most efficient method of drawing multiple quads in OpenGL

    - by CPatton
    I'm not very keen with OpenGL and I was wondering if someone could give me some insight on this. I'm a 'seasoned' programmer and I've read the red book about VBOs and the like, but I was hoping a more experienced person could point out the best/most efficient way of achieving this.

    I've been producing a 2D tile-based game engine to be used in several projects. I have a class called ScreenObject which is mainly composed of a Dictionary<Point, Tile>. The Point key says where to render the Tile on the screen, and the Tile contains one or more textures to be drawn at that point. The ScreenObject is where the tiles are modified, deleted, added, etc.

    My original method of drawing the tiles in the testing I've done was to iterate through the ScreenObject and draw each quad at each location separately. From what I've read, this is a massive waste of resources. It wasn't horribly slow in testing, but once I've completed the animation classes and effect classes, I'm sure it would become extremely slow.

    And one last thing, if you don't mind. As I said before, the Tile class can contain multiple textures to be drawn at its Point location on the screen. I see two options here: either add a quad at that location for each texture to be drawn, or somehow use multiple textures on the same quad (if that's possible). Even if each tile contained only one texture, that would be 64 quads to draw on the screen. Most of the tiles will contain 2-5 textures, so the total number of quads increases dramatically with this method. Would it be feasible to add a quad for each new texture, or am I overlooking a better way to do this?

    Just need some help understanding this, if you don't mind. I've tried to be as concise as possible, and I'd greatly appreciate any responses, and even some criticism. Programming is a learning process, and a developer never stops learning. Thanks for your time.
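
    The usual answer to this class of question is to batch all visible quads into one vertex buffer and issue one draw call per texture atlas, instead of one draw call per tile. A rough sketch of the batching half, assuming LWJGL-style Java bindings (the asker's engine is C#, but the GL call sequence is the same; buffer layout and names are illustrative):

        import java.nio.FloatBuffer;
        import org.lwjgl.BufferUtils;
        import org.lwjgl.opengl.GL11;
        import org.lwjgl.opengl.GL15;

        public class TileBatch {
            // Layout: 2 position floats + 2 texcoord floats per vertex, 4 vertices per quad.
            public static int uploadQuads(float[] interleavedVertices) {
                FloatBuffer data = BufferUtils.createFloatBuffer(interleavedVertices.length);
                data.put(interleavedVertices).flip();

                int vbo = GL15.glGenBuffers();
                GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vbo);
                // One upload for every tile on screen, instead of per-tile state changes.
                GL15.glBufferData(GL15.GL_ARRAY_BUFFER, data, GL15.GL_DYNAMIC_DRAW);
                return vbo;
            }

            public static void drawQuads(int vbo, int quadCount) {
                GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vbo);
                GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
                GL11.glEnableClientState(GL11.GL_TEXTURE_COORD_ARRAY);
                GL11.glVertexPointer(2, GL11.GL_FLOAT, 4 * Float.BYTES, 0L);
                GL11.glTexCoordPointer(2, GL11.GL_FLOAT, 4 * Float.BYTES, 2L * Float.BYTES);
                // One draw call for the whole layer; a tile with several textures
                // simply contributes several quads to the batch.
                GL11.glDrawArrays(GL11.GL_QUADS, 0, quadCount * 4);
                GL11.glDisableClientState(GL11.GL_TEXTURE_COORD_ARRAY);
                GL11.glDisableClientState(GL11.GL_VERTEX_ARRAY);
            }
        }

    The companion technique is a texture atlas: packing the 2-5 textures per tile into one image lets tiles with different artwork share the single glDrawArrays call, since only the texture coordinates differ.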

    Read the article

  • How to make efficient code emerge through unit testing

    - by Jean
    Hi, I participate in a TDD coding dojo, where we try to practice pure TDD on simple problems. It has occurred to me, however, that the code which emerges from the unit tests isn't the most efficient. Now this is fine most of the time, but what if usage of the code grows so that efficiency becomes a problem? I love the way the code emerges from unit testing, but is it possible to make the efficiency property emerge through further tests?

    Here is a trivial example in Ruby: prime factorization. I followed a pure TDD approach, making the tests pass one after the other, validating my original acceptance test (commented at the bottom). What further steps could I take if I wanted to make one of the generic prime factorization algorithms emerge? To reduce the problem domain, let's say I want to get a quadratic sieve implementation. Now, in this precise case I know the "optimal" algorithm, but in most cases the client will simply add a requirement that the feature runs in less than "x" time for a given environment.

        require 'shoulda'
        require 'lib/prime'

        class MathTest < Test::Unit::TestCase
          context "The math module" do
            should "have a method to get primes" do
              assert Math.respond_to? 'primes'
            end
          end

          context "The primes method of Math" do
            should "return [] for 0" do
              assert_equal [], Math.primes(0)
            end
            should "return [1] for 1" do
              assert_equal [1], Math.primes(1)
            end
            should "return [1,2] for 2" do
              assert_equal [1,2], Math.primes(2)
            end
            should "return [1,3] for 3" do
              assert_equal [1,3], Math.primes(3)
            end
            should "return [1,2,2] for 4" do
              assert_equal [1,2,2], Math.primes(4)
            end
            should "return [1,5] for 5" do
              assert_equal [1,5], Math.primes(5)
            end
            should "return [1,2,3] for 6" do
              assert_equal [1,2,3], Math.primes(6)
            end
            should "return [1,3,3] for 9" do
              assert_equal [1,3,3], Math.primes(9)
            end
            should "return [1,2,5] for 10" do
              assert_equal [1,2,5], Math.primes(10)
            end
          end

          # context "Functional acceptance test 1" do
          #   context "the prime factors of 14101980 are 1,2,2,3,5,61,3853" do
          #     should "return [1,2,2,3,5,61,3853] for ${14101980*14101980}" do
          #       assert_equal [1,2,2,3,5,61,3853], Math.primes(14101980*14101980)
          #     end
          #   end
          # end
        end

    and the naive algorithm I created by this approach:

        module Math
          def self.primes(n)
            if n == 0
              return []
            else
              primes = [1]
              for i in 2..n do
                if n % i == 0
                  while n % i == 0
                    primes << i
                    n = n / i
                  end
                end
              end
              primes
            end
          end
        end
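
    One way to let efficiency "emerge" the same way behaviour does is to encode the client's timing requirement as an executable test. A hedged sketch in Java with JUnit 5 (the 100 ms budget is an arbitrary stand-in for the client's "x time", and the factorize method mirrors the naive Ruby algorithm above, including its fixed loop bound):

        import static org.junit.jupiter.api.Assertions.assertTimeoutPreemptively;
        import java.time.Duration;
        import java.util.ArrayList;
        import java.util.List;
        import org.junit.jupiter.api.Test;

        class PrimesPerformanceTest {
            // Java twin of the naive Ruby algorithm: the loop bound is the
            // original n, so huge composites take effectively forever.
            static List<Long> factorize(long n) {
                List<Long> factors = new ArrayList<>();
                factors.add(1L);
                long bound = n;
                for (long i = 2; i <= bound; i++) {
                    while (n % i == 0) { factors.add(i); n /= i; }
                }
                return factors;
            }

            @Test
            void factorsLargeCompositeWithinBudget() {
                // The "runs in less than x time for a given environment" requirement
                // as a red test; preemptive, so the runner kills the naive version
                // instead of waiting hours for it to finish.
                assertTimeoutPreemptively(Duration.ofMillis(100),
                        () -> factorize(14101980L * 14101980L));
            }
        }

    Like any other failing test, this one then drives the refactoring: the implementation is free to change shape (memoized square lists, a sieve, and so on) as long as the behavioural tests and the budget both stay green.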

    Read the article

  • Efficient list compacting

    - by Patrik
    Suppose you have a list of unsigned ints. Suppose some elements are equal to 0 and you want to push them to the back. Currently I use this code (list is a pointer to a list of unsigned ints of size n):

        for (i = 0; i < n; ++i) {
            if (list[i]) continue;
            int j;
            for (j = i + 1; j < n && !list[j]; ++j);
            int z;
            for (z = j + 1; z < n && list[z]; ++z);
            if (j == n) break;
            memmove(&(list[i]), &(list[j]), sizeof(unsigned int) * (z - j));
            int s = z - j + i;
            for (j = s; j < z; ++j) list[j] = 0;
            i = s - 1;
        }

    Can you think of a more efficient way to perform this task? The snippet is purely theoretical; in the production code, each element of list is a 64-byte struct.

    EDIT: I'll post my solution. Many thanks to Jonathan Leffler.

        void RemoveDeadParticles(int *list, int *n)
        {
            int i, j = *n - 1;
            for (; j >= 0 && list[j] == 0; --j);
            for (i = 0; i < j; ++i) {
                if (list[i]) continue;
                memcpy(&(list[i]), &(list[j]), sizeof(int));
                list[j] = 0;
                for (; j >= 0 && list[j] == 0; --j);
                if (i == j) break;
            }
            *n = i + 1;
        }
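
    For reference, the textbook single-pass compaction keeps one read index and one write index and touches each element at most once; unlike the posted solution, it also preserves the relative order of the live elements. A sketch in Java of the same idea:

        // Stable single-pass compaction: moves all non-zero values to the front,
        // zero-fills the tail, and returns the new logical length.
        static int compact(int[] list, int n) {
            int write = 0;
            for (int read = 0; read < n; ++read) {
                if (list[read] != 0) {
                    list[write++] = list[read];   // each live element is moved at most once
                }
            }
            int newLength = write;
            while (write < n) {
                list[write++] = 0;                // push the zeroes to the back
            }
            return newLength;
        }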

    Read the article

  • Is this Postgres function cost efficient, or does it still need cleaning up?

    - by kiranking
    There are two tables in a Postgres db: english_all and english_glob. The first table contains words like international, confidential, booting, cooler, etc. I have written a function to get the words from english_all and then, for each word, perform a loop to get the list of prefixes which have not yet been inserted into the english_glob table. The word list is like:

        I In Int Inte Inter ..
        b bo boo boot ..
        c co coo cool etc..

    For some reason a zwnj (zero-width non-joiner) is added during insertion to the english_all table, but in the function I remove that character with regexp_replace. The Postgres function for_loop_test takes two parameters, min and max, and based on them I select words from english_all. The function code is like:

        DECLARE
          inMinLength ALIAS FOR $1;
          inMaxLength ALIAS FOR $2;
          mviews RECORD;
          outenglishListRow english_word_list; -- custom data type: eng_id, english_text
        BEGIN
          FOR mviews IN
            SELECT id, english_all_text
            FROM english_all
            WHERE wlength BETWEEN inMinLength AND inMaxLength
            ORDER BY english_all_text
            LIMIT 30
          LOOP
            FOR i IN 1..char_length(regexp_replace(mviews.english_all_text, '(?)$', '')) LOOP
              FOR outenglishListRow IN
                SELECT DISTINCT ON (regexp_replace((substring(mviews.english_all_text from 1 for i)), '(?)$', ''))
                  mviews.id,
                  regexp_replace((substring(mviews.english_all_text from 1 for i)), '(?)$', '')
                WHERE regexp_replace((substring(mviews.english_all_text from 1 for i)), '(?)$', '')
                  NOT IN (SELECT english_glob.english_text FROM english_glob WHERE i = english_glob.wlength)
                ORDER BY regexp_replace((substring(mviews.english_all_text from 1 for i)), '(?)$', '')
              LOOP
                RETURN NEXT outenglishListRow;
              END LOOP;
            END LOOP;
          END LOOP;
        END;

    Once I get the word list I insert it into the other table, english_glob. My question is: is there anything I can add to or remove from the function to make it more efficient?

    edit: Assume the english_all table has the words footer, settle, question, overflow, database, kingdom. If inMinLength = 5 and inMaxLength = 7, then in the outer loop footer, settle, kingdom will be selected. For those 3 words the inner two loops apply, producing words like f, fo, foo, foot, foote, footer, s, se, set, sett, settl, etc. In the final step, the words in bold will be entered into english_glob with another parameter, 1, to denote a proper word, stored in another field of the english_glob table. The remaining words will be stored with parameter 0, because on the next call words which are already saved in the database should not be fetched again.

    edit2: This is the complete code:

        CREATE TABLE english_all
        (
          id serial NOT NULL,
          english_all_text text NOT NULL,
          wlength integer NOT NULL,
          CONSTRAINT english_all PRIMARY KEY (id),
          CONSTRAINT english_all_kan_text_uq_id UNIQUE (english_all_text)
        );

        CREATE TABLE english_glob
        (
          id serial NOT NULL,
          english_text text NOT NULL,
          is_prop integer DEFAULT 1,
          CONSTRAINT english_glob PRIMARY KEY (id),
          CONSTRAINT english_glob_kan_text_uq_id UNIQUE (english_text)
        );

        insert into english_all(english_text) values ('ant'),('forget'),('forgive');

    On a function call with parameters 3 and 6 the following rows should be fetched:

        a an ant f fo for forg forge forget

    Next is an insert into the other table based on the rows above:

        insert into english_glob(english_text, is_prop) values
          ('a',1),('an',1),('ant',1),('f',0),('fo',0),('for',1),
          ('forg',0),('forge',1),('forget',1);

    On the next function call, with parameters 3 and 7, the following rows should be fetched (because f, fo, for, forg are all already entered in the english_glob table):

        forgi forgiv forgive

    Read the article

  • Efficient algorithm to distribute work?

    - by Zwei Steinen
    It's a bit complicated to explain, but here we go. We have problems like this (the code is pseudo-code and is only for illustrating the problem; sorry it's in Java. If you don't understand, I'd be glad to explain.):

        class Problem {
            final Set<Integer> allSectionIds = { 1,2,4,6,7,8,10 };
            final Data data = // Some data
        }

    And a subproblem is:

        class SubProblem {
            final Set<Integer> targetedSectionIds;
            final Data data;

            SubProblem(Set<Integer> targetedSectionIds, Data data) {
                this.targetedSectionIds = targetedSectionIds;
                this.data = data;
            }
        }

    Work will look like this, then:

        class Work implements Runnable {
            final Set<Section> subSections;
            final Data data;
            final Result result;

            Work(Set<Section> subSections, Data data) {
                this.subSections = subSections;
                this.data = data;
            }

            @Override
            public void run() {
                for (Section section : subSections) {
                    result.addUp(compute(data, section));
                }
            }
        }

    Now we have instances of Worker, each of which has its own state: the sections it already holds.

        class Worker implements ExecutorService {
            final Map<Integer, Section> sectionsIHave;
            { sectionsIHave = { 1: section1, 5: section5, 8: section8 }; }
            final ExecutorService executor = // some executor

            @Override
            public void execute(SubProblem problem) {
                Set<Section> sectionsNeeded = fetchSections(problem.targetedSectionIds);
                super.execute(new Work(sectionsNeeded, problem.data));
            }
        }

    Phew. So, we have a lot of Problems, and Workers are constantly asking for more SubProblems. My task is to break Problems up into SubProblems and give them out. The difficulty, however, is that I later have to collect all the results for the SubProblems and merge (reduce) them into a Result for the whole Problem. That is costly, so I want to give the workers "chunks" that are as big as possible (that have as many targeted sections as possible). It doesn't have to be perfect (mathematically as efficient as possible or anything); I guess a perfect solution is impossible anyway, because you can't predict how long each computation will take, etc. But is there a good heuristic solution for this? Or maybe some resources I can read before I go into designing? Any advice is highly appreciated!
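
    A common heuristic for this shape of problem is greedy affinity grouping: first hand each worker the sections it already holds (so chunks stay large and fetches stay rare), then spread the leftovers over the smallest chunks. A rough Java sketch under those assumptions; every name here is illustrative, not part of the asker's API:

        import java.util.*;

        class GreedyChunker {
            // Splits a problem's section ids into one chunk per worker, preferring
            // sections a worker already holds, so each SubProblem is as large as
            // possible and the reduce step has fewer partial results to merge.
            static Map<String, Set<Integer>> chunk(Set<Integer> sectionIds,
                                                   Map<String, Set<Integer>> sectionsHeldByWorker) {
                Map<String, Set<Integer>> assignment = new HashMap<>();
                Set<Integer> remaining = new HashSet<>(sectionIds);

                // Give every worker the sections it can serve without fetching.
                for (Map.Entry<String, Set<Integer>> e : sectionsHeldByWorker.entrySet()) {
                    Set<Integer> local = new HashSet<>(e.getValue());
                    local.retainAll(remaining);
                    remaining.removeAll(local);
                    assignment.put(e.getKey(), local);
                }

                // Spread sections nobody holds over the currently smallest chunks
                // (assumes at least one worker exists).
                for (Integer orphan : remaining) {
                    assignment.values().stream()
                              .min(Comparator.comparingInt(Set::size))
                              .ifPresent(smallest -> smallest.add(orphan));
                }
                return assignment;
            }
        }

    The trade-off is deliberate: affinity-first maximizes chunk size and minimizes merges, at the cost of load balance, which matches the stated priority that the reduce step is the expensive part.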

    Read the article

  • Making an efficient algorithm

    - by James P.
    Here's my recent submission for the FB programming contest (the qualifying round only requires you to upload program output, so source code doesn't matter). The objective is to find two squares that add up to a given value. I've left it as it is, as an example. It does the job but is too slow for my liking. Here are the points that are obviously eating up time:

    - The list of squares is being recalculated for each call of getNumOfDoubleSquares(). It could be precalculated once, or extended when needed.
    - Both squares are being checked for, when it is only necessary to check for one (complements).
    - There might be a more efficient way than a double-nested loop to find pairs.

    Other suggestions? Besides this particular problem, what do you look for when optimizing an algorithm?

        public static int getNumOfDoubleSquares(Integer target) {
            int num = 0;
            ArrayList<Integer> squares = new ArrayList<Integer>();
            ArrayList<Integer> found = new ArrayList<Integer>();
            int squareValue = 0;
            for (int j = 0; squareValue <= target; j++) {
                squares.add(j, squareValue);
                squareValue = (int) Math.pow(j + 1, 2);
            }
            int squareSum = 0;
            System.out.println("Target=" + target);
            for (int i = 0; i < squares.size(); i++) {
                int square1 = squares.get(i);
                for (int j = 0; j < squares.size(); j++) {
                    int square2 = squares.get(j);
                    squareSum = square1 + square2;
                    if (squareSum == target && !found.contains(square1) && !found.contains(square2)) {
                        found.add(square1);
                        found.add(square2);
                        System.out.println("Found !" + square1 + "+" + square2 + "=" + squareSum);
                        num++;
                    }
                }
            }
            return num;
        }
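
    The complement idea from the second bullet collapses the double loop into a single O(sqrt(n)) pass: for each square a*a up to target/2, check whether target - a*a is itself a perfect square. A sketch of that version (counting unordered pairs with a <= b; its bookkeeping is not byte-for-byte identical to the found-list logic above):

        public static int countDoubleSquares(long target) {
            int num = 0;
            // Walk a^2 only up to target/2; the complement b^2 = target - a^2
            // covers the other half, and a <= b avoids double counting.
            for (long a = 0; 2 * a * a <= target; a++) {
                long complement = target - a * a;
                long b = (long) Math.sqrt((double) complement);
                // Math.sqrt on large values can be off by one, so nudge b exactly.
                while (b * b > complement) b--;
                while ((b + 1) * (b + 1) <= complement) b++;
                if (b * b == complement) num++;
            }
            return num;
        }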

    Read the article

  • Explained: EF 6 and “Could not determine storage version; a valid storage connection or a version hint is required.”

    - by Ken Cox [MVP]
    I have a legacy ASP.NET 3.5 web site that I’ve upgraded to a .NET 4 web application. At the same time, I upgraded to Entity Framework 6. Suddenly one of the pages returned the following error:

        [ArgumentException: Could not determine storage version; a valid storage connection or a version hint is required.]
           System.Data.SqlClient.SqlVersionUtils.GetSqlVersion(String versionHint) +11372412
           System.Data.SqlClient.SqlProviderServices.GetDbProviderManifest(String versionHint) +91
           System.Data.Common.DbProviderServices.GetProviderManifest(String manifestToken) +92

        [ProviderIncompatibleException: The provider did not return a ProviderManifest instance.]
           System.Data.Common.DbProviderServices.GetProviderManifest(String manifestToken) +11431433
           System.Data.Metadata.Edm.Loader.InitializeProviderManifest(Action`3 addError) +11370982
           System.Data.EntityModel.SchemaObjectModel.Schema.HandleAttribute(XmlReader reader) +216

    A search for the error message didn’t turn up anything helpful, except that someone mentioned the error message was bogus in his case. The page in question uses the ASP.NET EntityDataSource control, consumed by a Telerik RadGrid. This is a fabulous combination for putting a huge amount of functionality on a page in a very short time. Unfortunately, the 6.0.1 release of EF6 doesn’t support EntityDataSource. According to the people in charge, support is planned, but there’s no timeline for an EntityDataSource build that works with EF6. I’m not sure what to do in the meantime. Should I back out EF6 or manually wire up the RadGrid? The upshot is that you might want to rethink plans to upgrade to Entity Framework 6 for Web Forms projects if they rely on that handy control. It might also help to spend a UserVoice vote here: http://data.uservoice.com/forums/72025-entity-framework-feature-suggestions/suggestions/3702890-support-for-asp-net-entitydatasource-and-dynamicda

    Read the article

  • Architecture for a business objects / database access layer

    - by gregmac
    For various reasons, we are writing a new business objects / data storage library. One of the requirements of this layer is to separate the logic of the business rules from the actual data storage layer. It is possible to have multiple data storage layers that implement access to the same object: for example, a main "database" data storage source that implements most objects, and another "ldap" source that implements a User object. In this scenario, User can optionally come from an LDAP source, perhaps with slightly different functionality (e.g., it is not possible to save/update the User object), but otherwise it is used by the application the same way. Another data storage type might be a web service, or an external database.

    There are two main ways we are looking at implementing this, and my co-worker and I disagree on a fundamental level about which is correct. I'd like some advice on which one is best to use. I'll try to keep my descriptions of each as neutral as possible, as I'm looking for some objective viewpoints here.

    1. Business objects are base classes, and data storage objects inherit from business objects. Client code deals with data storage objects. In this case, common business rules are inherited by each data storage object, and it is the data storage objects that are directly used by the client code. This implies that client code determines which data storage method to use for a given object, because it has to explicitly declare an instance of that type of object. Client code needs to explicitly know connection information for each data storage type it is using. If a data storage layer implements different functionality for a given object, client code knows about it at compile time because the object looks different. If the data storage method is changed, client code has to be updated.

    2. Business objects encapsulate data storage objects. In this case, business objects are directly used by the client application, and the client application passes along base connection information to the business layer. The decision about which data storage method a given object uses is made by business object code. Connection information would be a chunk of data taken from a config file (the client app does not really know/care about its details), which may be a single connection string for a database, or several connection strings for various data storage types. Additional data storage connection types could also be read from another spot, e.g. a configuration table in a database that specifies URLs to various web services. The benefit here is that if a new data storage method is added to an existing object, a configuration setting can be changed at runtime to determine which method to use, completely transparently to the client applications. Client apps do not need to be modified if the data storage method for a given object changes. (A sketch of this option follows below.)

    3. Business objects are base classes, and data source objects inherit from business objects, but client code deals primarily with the base classes. This is similar to the first method, except that client code declares variables of the base business object types, and Load()/Create()/etc. static methods on the business objects return the appropriate data source-typed objects. The architecture of this solution is similar to the first method, but the main difference is that the decision about which data storage object to use for a given business object is made by the business layer, not the client code.

    I know there are already existing ORM libraries that provide some of this functionality, but please discount those for now (there is the possibility that a data storage layer is implemented with one of these ORM libraries). Also note that I'm deliberately not telling you what language is being used, other than that it is strongly typed. I'm looking for some general advice on which method is better to use (or feel free to suggest something else), and why.
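
    Since the question leaves the language open, here is a minimal Java sketch of option 2; every name (UserStore, Config, UserRecord) is hypothetical, invented only to make the shape of the pattern concrete:

        // The business layer owns the decision about which storage backend
        // loads an object; client code only ever sees the business class.
        class UserRecord { String id; String name; }

        interface Config { String storageTypeFor(String objectName); }

        interface UserStore {
            UserRecord load(String id);
        }

        class DatabaseUserStore implements UserStore {
            public UserRecord load(String id) {
                UserRecord r = new UserRecord();
                r.id = id;                  // stand-in for a real SQL lookup
                return r;
            }
        }

        class LdapUserStore implements UserStore {
            public UserRecord load(String id) {
                UserRecord r = new UserRecord();
                r.id = id;                  // stand-in for a real LDAP lookup
                return r;
            }
        }

        class User {
            private final UserRecord record;

            private User(UserRecord record) { this.record = record; }

            // The business layer reads config ("database" vs "ldap") at runtime,
            // so swapping backends never touches client code.
            public static User load(String id, Config config) {
                UserStore store = "ldap".equals(config.storageTypeFor("User"))
                        ? new LdapUserStore()
                        : new DatabaseUserStore();
                return new User(store.load(id));
            }
        }

    The point of the shape is that replacing DatabaseUserStore with LdapUserStore is a configuration change, which is exactly the runtime transparency option 2 claims over options 1 and 3.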

    Read the article

  • Writing an optimised and efficient search engine with MySQL and ColdFusion

    - by Mel
    I have a search page with the scenarios listed below. I was told there was a better way to do it (but not how), that I am using too many if statements, and that it's prone to errors through URL manipulation:

    - search.cfm processes a search made from a search bar present on all pages, with one search input (titleName).
    - If search.cfm is accessed manually (through the URL, not through the simple search bar on all pages), it displays an advanced search form with three inputs (titleName, genreID, platformID), or it evaluates the searchResponse variable and decides what to do.
    - If a simple search query is blank, has no results, or is less than 3 characters, it displays an error.
    - If an advanced search query is blank, has no results, or is less than 3 characters, it displays an error.
    - If any successful search returns results, they come back normally.

    The top-of-page logic is as follows:

        <!---SET DEFAULT VARIABLE--->
        <cfparam name="variables.searchResponse" default="">

        <!---CHECK TO SEE IF SIMPLE SEARCH FORM WAS SUBMITTED AND EXECUTE SEARCH IF IT WAS--->
        <cfif IsDefined("Form.simpleSearch") AND Len(Trim(Form.titleName)) LTE 2>
            <cfset variables.searchResponse = "invalidString">
        <cfelseif IsDefined("Form.simpleSearch") AND Len(Trim(Form.titleName)) GTE 3>
            <!---EXECUTE METHOD AND GET DATA--->
            <cfinvoke component="myComponent" method="simpleSearch" searchString="#Form.titleName#" returnvariable="simpleSearchResult">
            <cfset variables.searchResponse = "simpleSearchResult">
        </cfif>

        <!---CHECK IF ANY RECORDS WERE FOUND--->
        <cfif IsDefined("variables.simpleSearchResult") AND simpleSearchResult.RecordCount IS 0>
            <cfset variables.searchResponse = "noResult">
        </cfif>

        <!---CHECK IF ADVANCED SEARCH FORM WAS SUBMITTED--->
        <cfif IsDefined("Form.AdvancedSearch") AND Len(Trim(Form.titleName)) LTE 2>
            <cfset variables.searchResponse = "invalidString">
        <cfelseif IsDefined("Form.advancedSearch") AND Len(Trim(Form.titleName)) GTE 2>
            <!---EXECUTE METHOD AND GET DATA--->
            <cfinvoke component="myComponent" method="advancedSearch" returnvariable="advancedSearchResult" titleName="#Form.titleName#" genreID="#Form.genreID#" platformID="#Form.platformID#">
            <cfset variables.searchResponse = "advancedSearchResult">
        </cfif>

        <!---CHECK IF ANY RECORDS WERE FOUND--->
        <cfif IsDefined("variables.advancedSearchResult") AND advancedSearchResult.RecordCount IS 0>
            <cfset variables.searchResponse = "noResult">
        </cfif>

    I'm using the searchResponse variable to decide what the page displays, based on the following scenarios:

        <!---ALWAYS DISPLAY SIMPLE SEARCH BAR AS IT'S PART OF THE HEADER--->
        <form name="simpleSearch" action="search.cfm" method="post">
            <input type="hidden" name="simpleSearch" />
            <input type="text" name="titleName" />
            <input type="button" value="Search" onclick="form.submit()" />
        </form>

        <!---IF NO SEARCH WAS SUBMITTED DISPLAY DEFAULT FORM--->
        <cfif searchResponse IS "">
            <h1>Advanced Search</h1>
            <!---DISPLAY FORM--->
            <form name="advancedSearch" action="search.cfm" method="post">
                <input type="hidden" name="advancedSearch" />
                <input type="text" name="titleName" />
                <input type="text" name="genreID" />
                <input type="text" name="platformID" />
                <input type="button" value="Search" onclick="form.submit()" />
            </form>
        </cfif>

        <!---IF SEARCH IS BLANK OR LESS THAN 3 CHARACTERS DISPLAY ERROR MESSAGE--->
        <cfif searchResponse IS "invalidString">
            <cfoutput>
                <h1>INVALID SEARCH</h1>
            </cfoutput>
        </cfif>

        <!---IF SEARCH WAS MADE BUT NO RESULTS WERE FOUND--->
        <cfif searchResponse IS "noResult">
            <cfoutput>
                <h1>NO RESULT FOUND</h1>
            </cfoutput>
        </cfif>

        <!---IF SIMPLE SEARCH WAS MADE AND A RESULT WAS FOUND--->
        <cfif searchResponse IS "simpleSearchResult">
            <cfoutput>
                <h1>Search Results</h1>
            </cfoutput>
            <cfoutput query="simpleSearchResult">
                <!---DISPLAY QUERY DATA--->
            </cfoutput>
        </cfif>

        <!---IF ADVANCED SEARCH WAS MADE AND A RESULT WAS FOUND--->
        <cfif searchResponse IS "advancedSearchResult">
            <cfoutput>
                <h1>Search Results</h1>
                <p>Your search for "#Form.titleName#" returned #advancedSearchResult.RecordCount# result(s).</p>
            </cfoutput>
            <cfoutput query="advancedSearchResult">
                <!---DISPLAY QUERY DATA--->
            </cfoutput>
        </cfif>

    Is my logic a) inefficient because of my if statements, and is there a better way to do this? And b) can you see any scenarios where my code can break? I've tested it, but I have not been able to find any issues with it, and I have no way of measuring performance. Any thoughts and ideas would be greatly appreciated. Many thanks.

    Read the article

  • System.Runtime.InteropServices.COMException (0x80070008): Not enough storage is available to process this command

    - by Darryl Braaten
    I am trying to diagnose this exception:

        System.Runtime.InteropServices.COMException (0x80070008): Not enough storage is available to process this command. (Exception from HRESULT: 0x80070008)
           at System.Runtime.Remoting.RemotingServices.AllocateUninitializedObject(RuntimeType objectType)
           at System.Runtime.Remoting.RemotingServices.AllocateUninitializedObject(Type objectType)
           at System.Runtime.Remoting.Activation.ActivationServices.CreateInstance(Type serverType)
           at System.Runtime.Remoting.Activation.ActivationServices.IsCurrentContextOK(Type serverType, Object[] props, Boolean bNewObj)
           at Oracle.DataAccess.Client.CThreadPool..ctor()
           at Oracle.DataAccess.Client.OracleCommand.set_CommandTimeout(Int32 value)
           ...

    It does not look like any of the usual kinds of "storage" have hit any limits. The application is using about 400MB of memory, 70 threads, and 2000 handles, and the hard drive has many GB free. The machine is running Windows 2003 Enterprise Server with 16GB of RAM, so memory shouldn't be an issue. The application is running as a Windows service, so there are no GDI objects being used (running out of GDI handles is a common cause of this exception). Database connections, commands, and readers are all wrapped with using blocks, so they should be getting cleaned up correctly.

    Read the article

  • Secure Password Storage and Transfer

    - by Andras Zoltan
    I'm developing a new user store for my organisation and am now tackling password storage. The concepts of salting, HMAC, etc. are all fine with me, and I want to store the users' passwords either salted and hashed, HMAC hashed, or HMAC salted and hashed. I'm not sure which way will be best, but in theory it won't matter, as it will be able to change over time if required.

    I want to have an XML & JSON service that can act as a Security Token Service for client-side apps. I've already developed one for another system, which requires that the client double-hashes a clear-text password: first with SHA1, and then with HMACSHA1 using a 128-bit unique key (or nonce) supplied by the server for that session only. I'd like to repeat this technique for the new system, upgrading the algorithm to SHA256 (chosen since implementations are readily available for all the aforementioned platforms, and it's much stronger than SHA1), but there is a problem. If I'm storing the password as a salted hash in the user store, the client will need to be sent that salt in order to construct the correct hash before it is HMAC'd with the unique session key. This would completely go against the point of using a salt in the first place. Equally, if I don't use a salt for password storage but instead use HMAC, it's still the same problem.

    At the moment, the only solution I can see is to use naked SHA256 hashing for the password in the user store, so that I can then use it as a starting point on both the server and the client for a more secure salted/HMAC'd password transfer for the web service. This still leaves the user store vulnerable to a dictionary attack were it ever to be accessed; and however unlikely that might be, assuming it will never happen simply doesn't sit well with me. I'd greatly appreciate any input.
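
    To make the dilemma concrete, here is a sketch in Java of the challenge scheme as described, using the standard javax.crypto APIs (this is the scheme from the question, not the asker's actual service; key sizes, encodings, and class names are illustrative):

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.SecureRandom;
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        public class PasswordChallenge {
            // Server side: a fresh per-session nonce the client must HMAC with.
            public static byte[] newSessionNonce() {
                byte[] nonce = new byte[16];               // 128-bit
                new SecureRandom().nextBytes(nonce);
                return nonce;
            }

            // Client side: SHA-256 the clear-text password, then HMAC the digest
            // with the session nonce, so the clear text never crosses the wire.
            public static byte[] proveKnowledge(char[] password, byte[] nonce) throws Exception {
                byte[] digest = MessageDigest.getInstance("SHA-256")
                        .digest(new String(password).getBytes(StandardCharsets.UTF_8));
                Mac hmac = Mac.getInstance("HmacSHA256");
                hmac.init(new SecretKeySpec(nonce, "HmacSHA256"));
                return hmac.doFinal(digest);
            }

            // Server side: recompute from the stored naked SHA-256 and compare.
            public static boolean verify(byte[] storedSha256, byte[] nonce, byte[] clientProof) throws Exception {
                Mac hmac = Mac.getInstance("HmacSHA256");
                hmac.init(new SecretKeySpec(nonce, "HmacSHA256"));
                return MessageDigest.isEqual(hmac.doFinal(storedSha256), clientProof);
            }
        }

    The verify step only works because the server stores the unsalted SHA-256, which is precisely the dictionary-attack exposure the question worries about: adding a salt would force the server to send it to the client first.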

    Read the article

  • Validating Application Settings Key Values in Isolated Storage for Windows Phone Applications

    - by Martin Anderson
    Hello everyone. I am very new at all this C# Windows Phone programming, so this is most probably a dumb question, but I need to know anyhow...

        IsolatedStorageSettings appSettings = IsolatedStorageSettings.ApplicationSettings;

        if (!appSettings.Contains("isFirstRun"))
        {
            firstrunCheckBox.Opacity = 0.5;
            MessageBox.Show("isFirstRun not found - creating as true");
            appSettings.Add("isFirstRun", "true");
            appSettings.Save();
            firstrunCheckBox.Opacity = 1;
            firstrunCheckBox.IsChecked = true;
        }
        else
        {
            if (appSettings["isFirstRun"] == "true")
            {
                firstrunCheckBox.Opacity = 1;
                firstrunCheckBox.IsChecked = true;
            }
            else if (appSettings["isFirstRun"] == "false")
            {
                firstrunCheckBox.Opacity = 1;
                firstrunCheckBox.IsChecked = false;
            }
            else
            {
                firstrunCheckBox.Opacity = 0.5;
            }
        }

    I am trying to first check whether there is a specific key in my Application Settings isolated storage, and then make a CheckBox appear checked or unchecked depending on whether the value for that key is "true" or "false". I also default the opacity of the checkbox to 0.5 when no action is taken upon it. With the code I have, I get the warning:

        Possible unintended reference comparison; to get a value comparison, cast the left hand side to type 'string'

    Can someone tell me what I am doing wrong? I have explored storing data in an isolated storage txt file, and that worked. I am now trying Application Settings, and will finally try to download and store an XML file, as well as create and store user settings in an XML file. I want to understand all the options open to me, and use whichever runs better and quicker.

    Read the article

  • Caché class compilation error using parent-child relationships and Caché SQL storage

    - by Fred Altman
    I have the global listed below, for which I'm trying to create a couple of Caché classes using SQL storage:

        ^WHEAIPP(1,26,1)=2
        ^WHEAIPP(1,26,1,1)="58074^^SMSNARE^58311"
                        2)="58074^59128^MPHILLIPS^59135"
        ^WHEAIPP(1,29,1)=2
        ^WHEAIPP(1,29,1,1)="58074^^SMSNARE^58311"
                        2)="58074^59128^MPHILLIPS^59135"
        ^WHEAIPP(1,93,1)=2
        ^WHEAIPP(1,93,1,1)="58884^^SSNARE^58948"
                        2)="58884^59128^MPHILLIPS^59135"
        ^WHEAIPP(1,166,1)=2
        ^WHEAIPP(1,166,1,1)="58407^^SMSNARE^58420"
                         2)="58407^59128^MPHILLIPS^59135"
        ^WHEAIPP(1,324,1)=2
        ^WHEAIPP(1,324,1,1)="58884^^SSNARE^58948"
                         2)="58884^59128^MPHILLIPS^59135"
        ^WHEAIPP(1,419,1)=3
        ^WHEAIPP(1,419,1,1)="59707^^SSNARE^59708"
                         2)="59707^^MPHILLIPS^59910,58000^^^^"
                         3)="59707^59981^SSNARE^60117,53241^^^^"

    The first two subscripts of the global (Hmo and Keen) make a unique entry. The third subscript (Seq) has a property (IppLineCount) which is the number of IppLines at the fourth subscript level (Seq2). I created the class WIppProv below, which is the parent class:

        /// <PRE>
        /// ============================
        /// Generated Class Definition
        /// Table: WMCA_B_IPP_PROV
        /// Generated by: FXALTMAN
        /// Generated on: 05/21/2012 13:46:41
        /// Generator: XWESTblClsGenV2
        /// ----------------------------
        /// </PRE>
        Class XFXA.MCA.WIppProv Extends (%Persistent, %XML.Adaptor) [ ClassType = persistent, Inheritance = right, ProcedureBlock, StorageStrategy = SQLMapping ]
        {
            /// .HMO
            Property Hmo As %Integer;
            /// .KEEN
            Property Keen As %Integer;
            /// .SEQ
            Property Seq As %String;
            Property IppLineCount As %Integer;
            Index iMaster On (Hmo, Keen, Seq) [ IdKey, Unique ];
            Relationship IppLines As XFXA.MCA.WIppProvLine [ Cardinality = many, Inverse = relWIppProv ];

            <Storage name="SQLMapping">
              <DataLocation>^WHEAIPP</DataLocation>
              <ExtentSize>1000000</ExtentSize>
              <SQLMap name="DBMS">
                <Data name="IppLineCount"><Delimiter>"^"</Delimiter><Node>+0</Node><Piece>1</Piece></Data>
                <Global>^WHEAIPP</Global>
                <PopulationType>full</PopulationType>
                <Subscript name="1"><AccessType>Sub</AccessType><Expression>{Hmo}</Expression><LoopInitValue>1</LoopInitValue></Subscript>
                <Subscript name="2"><AccessType>Sub</AccessType><Expression>{Keen}</Expression></Subscript>
                <Subscript name="3"><AccessType>Sub</AccessType><LoopInitValue>1</LoopInitValue><Expression>{Seq}</Expression></Subscript>
                <Type>data</Type>
              </SQLMap>
              <StreamLocation>^XFXA.MCA.WIppProvS</StreamLocation>
              <Type>%Library.CacheSQLStorage</Type>
            </Storage>
        }

    This class compiles fine. Next I created the WIppProvLine class listed below and made a parent-child relationship between the two:

        /// Used to represent a single line of IPP data
        Class XFXA.MCA.WIppProvLine Extends (%Persistent, %XML.Adaptor) [ ClassType = persistent, Inheritance = right, ProcedureBlock, StorageStrategy = SQLMapping ]
        {
            /// .CLM_AMT_ALLOWED node: 0 piece: 6<BR>
            /// This field should be used in conjunction with the Claim Operator field to
            /// define a whole claim dollar amount at which a particular claim should be
            /// flagged with a Pend status.
            Property ClmAmtAllowed As %String;
            /// .CLM_LINE_AMT_ALLOWED node: 0 piece: 8<BR>
            /// This field should be used in conjunction with the Clm Line Operator field to
            /// define a claim line dollar amount at which a particular claim should be flagged
            /// with a Pend status.
            Property ClmLineAmtAllowed As %String;
            /// .CLM_LINE_OP node: 0 piece: 7<BR>
            /// A new Table/Column Reference that gives the SIU (Special Investigative Unit)
            /// the ability to look for claim line dollars above, below, or equal to a set amount.
            Property ClmLineOp As %String;
            /// .CLM_OP node: 0 piece: 5<BR>
            /// A new Table/Column Reference that gives the SIU (Special Investigative Unit)
            /// the ability to look for claim dollars above, below, or equal to a set amount.
            Property ClmOp As %String;
            Property EffDt As %Date;
            Property Hmo As %Integer;
            /// .IPP_REASON node: 0 piece: 10<BR>
            /// IPP Reason Code
            Property IppCode As %Integer;
            Property Keen As %Integer;
            /// .LAST_CHG_DT node: 0 piece: 4<BR>
            /// Last Changed Date
            Property LastChgDt As %Date;
            /// .PX_DX_CDE_FLAG node: 0 piece: 9<BR>
            /// A Flag to indicate whether or not Procedure Codes or Diagnosis Codes are to be
            /// associated with this SIU Flag Type Entry. If the Flag = Y, then control would
            /// jump to a new screen where the user can enter the necessary codes.
            Property PxDxCdeFlag As %String;
            Property Seq As %String;
            Property Seq2 As %String;
            Index iMaster On (Hmo, Keen, Seq, Seq2) [ IdKey, PrimaryKey, Unique ];
            /// .TERM_DT node: 0 piece: 2<BR>
            /// Term Date
            Property TermDt As %Date;
            /// .USER_INI node: 0 piece: 3
            Property UserIni As %String;
            Relationship relWIppProv As XFXA.MCA.WIppProv [ Cardinality = one, Inverse = IppLines ];
            Index relWIppProvIndex On relWIppProv;
            //Index NewIndex1 On (RelWIppProv, Seq2) [ IdKey, PrimaryKey, Unique ];

            <Storage name="SQLMapping">
              <ExtentSize>1000000</ExtentSize>
              <SQLMap name="DBMS">
                <ConditionalWithHostVars></ConditionalWithHostVars>
                <Data name="ClmAmtAllowed"><Delimiter>"^"</Delimiter><Node>+0</Node><Piece>6</Piece></Data>
                <Data name="ClmLineAmtAllowed"><Delimiter>"^"</Delimiter><Node>+0</Node><Piece>8</Piece></Data>
                <Data name="ClmLineOp"><Delimiter>"^"</Delimiter><Node>+0</Node><Piece>7</Piece></Data>
                <Data name="ClmOp"><Delimiter>"^"</Delimiter><Node>+0</Node><Piece>5</Piece></Data>
                <Data name="EffDt"><Delimiter>"^"</Delimiter><Node>+0</Node><Piece>1</Piece></Data>
                <Data name="Hmo"><Delimiter>"^"</Delimiter><Node>+0</Node><Piece>11</Piece></Data>
                <Data name="IppCode"><Delimiter>"^"</Delimiter><Node>+0</Node><Piece>10</Piece></Data>
                <Data name="LastChgDt"><Delimiter>"^"</Delimiter><Node>+0</Node><Piece>4</Piece></Data>
                <Data name="PxDxCdeFlag"><Delimiter>"^"</Delimiter><Node>+0</Node><Piece>9</Piece></Data>
                <Data name="TermDt"><Delimiter>"^"</Delimiter><Node>+0</Node><Piece>2</Piece></Data>
                <Data name="UserIni"><Delimiter>"^"</Delimiter><Node>+0</Node><Piece>3</Piece></Data>
                <Global>^WHEAIPP</Global>
                <Subscript name="1"><AccessType>Sub</AccessType><Expression>{Hmo}</Expression><LoopInitValue>1</LoopInitValue></Subscript>
                <Subscript name="2"><AccessType>Sub</AccessType><Expression>{Keen}</Expression><LoopInitValue>1</LoopInitValue></Subscript>
                <Subscript name="3"><AccessType>Sub</AccessType><Expression>{Seq}</Expression><LoopInitValue>1</LoopInitValue></Subscript>
                <Subscript name="4"><AccessType>Sub</AccessType><Expression>{Seq2}</Expression><LoopInitValue>1</LoopInitValue></Subscript>
                <Type>data</Type>
              </SQLMap>
              <StreamLocation>^XFXA.MCA.WIppProvLineS</StreamLocation>
              <Type>%Library.CacheSQLStorage</Type>
            </Storage>
        }

    When I try to compile this one, I get the following error:

        ERROR #5502: Error compiling SQL Table 'XFXA_MCA.WIppProvLine'
        %msg: Table XFXA_MCA.WIppProvLine has the following unmapped (not defined on the data map) fields: relWIppProv
        ERROR #5030: An error occurred while compiling class XFXA.MCA.WIppProvLine
        Detected 1 errors during compilation in 2.745s.

    What am I doing wrong? Thanks in advance, Fred

    Read the article

  • Different EF Property DataType than Storage Layer Possible?

    - by dj_kyron
    Hi, I am putting together a WCF Data Service for PatientEntities using Entity Framework. My solution needs to address these requirements:

    - The property DateOfBirth of entity Patient is stored in SQL Server as a string. Ideally the entity class would not also use the string type but rather DateTime (I would expect this to be possible, since we're abstracting away from the storage layer). Where could a conversion mechanism be put in place that converts between DateTime and string, so that the entity and SQL Server stay in sync? I cannot change the storage layer's structure, so I have to work around it.
    - WCF Data Services (read-only, so no need for saving changes) need to be used, since clients will be able to use LINQ expressions to consume the service. They can generate results for any query scenario they need and are not constrained by a single method such as GetPatient(int ID).

    I've tried to use DTOs, but ran into the problem of mapping the ObjectContext to a DTO; I don't think that is theoretically possible, or it is too complicated if it is. I've tried Self-Tracking Entities, but they require the metadata from the .edmx file, if I'm correct, and that doesn't allow a different property data type.

    I also want to add customizations to my entity getter methods, so that a property MRN of type string has .Replace("MR~", string.Empty) performed before it is returned. I can add this to the getter methods, but the problem with that is Entity Framework will overwrite it the next time it refreshes the entity classes. Is there a permanent place I can put these? Should I use POCO instead? How would that work with WCF Data Services? Where would the service grab the metadata?

    Read the article

  • ASP.Net Session Storage provider in 3-layer architecture

    - by Tedd Hansen
    I'm implementing a custom session storage provider in ASP.NET. We have a strict 3-layer architecture, and therefore the session storage needs to go through the business layer: Presentation - Business - Database. The business layer is accessed through WCF. The database is MSSQL. What I need is (in order of preference):

    1. A commercial/free/open source product that solves this.
    2. The source code of a SqlSessionStateStore (custom session store), not the ODBC sample on MSDN, that I can modify to use a middle layer. I've tried looking at the .NET source through Reflector, but the code is not usable.

    Note: I understand how to do this. I am looking for working samples, preferably ones that have been proven to work fine under heavy load. The ODBC sample on MSDN doesn't use the (new?) stored procs that the built-in SqlSessionStateStore uses; I'd like to use those if possible (it decreases traffic).

    Edit 1: To answer Simon's request for more info: the ASP.NET Session() object can be stored InProc, in the ASP.NET State Service, or in SQL Server. In a secure 3-layer model, the presentation layer (web server) does not have direct/physical access to the database layer (SQL Server). And even without the physical limitations, from an architectural standpoint you may not want this. InProc and the ASP.NET State Service do not support load balancing and don't have fault tolerance. Therefore the only option is to access SQL through a web service middle layer (the business layer).

    Read the article

  • STL vectors with uninitialized storage?

    - by Jim Hunziker
    I'm writing an inner loop that needs to place structs in contiguous storage. I don't know how many of these structs there will be ahead of time. My problem is that STL's vector initializes its values to 0, so no matter what I do, I incur the cost of the initialization plus the cost of setting the struct's members to their values. Is there any way to prevent the initialization, or is there an STL-like container out there with resizeable contiguous storage and uninitialized elements? (I'm certain that this part of the code needs to be optimized, and I'm certain that the initialization is a significant cost.) Also, see my comments below for a clarification about when the initialization occurs. SOME CODE: void GetsCalledALot(int* data1, int* data2, int count) { int mvSize = memberVector.size() memberVector.resize(mvSize + count); // causes 0-initialization for (int i = 0; i < count; ++i) { memberVector[mvSize + i].d1 = data1[i]; memberVector[mvSize + i].d2 = data2[i]; } }

    Read the article

  • Can I upload my APK to the SD card instead of internal storage?

    - by Ahmed Al Khashab
    My APK is big, 70 MB. I don't have any problems when I install it to external storage, but when I upload it to the phone before installing, the APK goes to internal storage first, and my testing phone doesn't have enough space in internal storage; a lot of Android phones don't have 70 MB of free internal space. Can I upload my APK straight to the SD card instead of internal storage first? By "upload" I mean run it from Eclipse, or download it from the market: the APK goes to internal storage, but I want it to go to external storage and install from there.
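
    For the install-destination part of this, Android 2.2+ (API level 8) lets the manifest request external storage. It is only a hint the system may ignore, and it doesn't change where the market stages the download, so treat this snippet (the package name is hypothetical) as a partial answer:

        <!-- AndroidManifest.xml -->
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
                  package="com.example.bigapp"
                  android:installLocation="preferExternal">
            <!-- preferExternal asks the system to install to SD when possible;
                 "auto" lets the system decide, "internalOnly" is the default -->
            ...
        </manifest>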

    Read the article

  • Dell VRTX - slow cluster shared storage

    - by NorbyTheGeek
    I have a brand new Dell VRTX box set up as a Failover Cluster running HA Hyper-V virtual machines. This is my first time setting up clustering, and my first time with one of these boxes, so I'm sure I've missed something. The virtual machines are experiencing high disk latency and bad performance when accessing their VHD(x) files located on a Cluster Shared Volume. The VRTX has 10 x 900 GB 10K SAS drives in RAID 6 configuration, and the VRTX has the redundant Shared PERC 8 controllers. Both blades have full access to the virtual disks. There are two M520 blades installed, each with 128 GB RAM. MPIO is configured for the PERC 8 controllers. Operating system on the blades is Server 2012 (NOT R2). The RAID 6 array is split into a small (8 GB) volume for cluster quorum witness and a large (6.5 TB) volume for a Cluster Shared Volume (mounted on the nodes as C:\ClusterStorage\Volume1) An example of slow disk access: logging into a Server 2012 VM and having Server Manager come up automatically. Disk access goes to 100%, with write speeds at 20 MB or so, read speeds of 500 KB or so, and Average Response Time of over 1000 ms, sometimes spiking at 4000-5000 ms or so. It's the latency that really worries me. Is there something specific I should look at in my configuration? It doesn't seem to matter whether I use VHD or VHDX, dynamic or static.

    Read the article

  • Looking for USB Network Attached Storage (NAS) Adapter that supports multiple drives, NTFS/FAT32 filesystems

    - by braveterry
    I'm looking for a NAS adapter that supports attaching multiple USB devices to the network. Here's what I'd like to see in a NAS adapter:

    - Under $100.00.
    - Support for multiple devices. This can be through a USB hub or through multiple USB connectors on the device itself.
    - BitTorrent support would be nice, but this isn't a deal-breaker.
    - Filesystem support for at least NTFS or FAT32. I'd prefer not to have to reformat to use the device, but this is also not a deal-breaker.

    Here is what I am NOT looking for:

    - I'm NOT looking for a NAS enclosure. I already have a couple of spare external USB drives that I'd like to use.
    - I'm NOT looking for a networked USB hub like the one mentioned here. Network USB hubs only allow access to a drive from one PC at a time.
    - I'm NOT looking for a wireless router with a NAS built in. I already have a wireless router, and I'd rather not go through the hassle of replacing it if possible.

    What I've looked at so far:

    - PogoPlug: supports multiple devices via a USB hub, but there's no BitTorrent support. It's $99.00, so I may end up going with this and hoping that they patch in BitTorrent support later.
    - Addonics NAS Adapter: supports only one device per adapter, so it's a non-starter.
    - SimpleNET NAS Head USB 2.0 Portable Dongle: I'm not 100% sure this supports multiple devices, and there doesn't seem to be any BitTorrent support.

    I'll try to update this post as I explore other devices.

    Read the article

  • Remove Kernel Lock from Unmounted Mass Storage USB Device from the Command Line in Linux

    - by Casey
    I've searched high and low, and can't figure this one out. I have an older Olympus camera (2001 or so). When I plug in the USB connection, I get the following log output:

        $ dmesg | grep sd
        [20047.625076] sd 21:0:0:0: Attached scsi generic sg7 type 0
        [20047.627922] sd 21:0:0:0: [sdg] Attached SCSI removable disk

    Secondly, the drive is not mounted in the FS, but when I run gphoto2 I get the following error:

        $ gphoto2 --list-config
        *** Error ***
        An error occurred in the io-library ('Could not lock the device'): Camera is already in use.
        *** Error (-60: 'Could not lock the device') ***

    What command will release (unmount) the drive? For example, in Nautilus I can right click and select "Safely Remove Device". After doing that, the /dev/sg7 and /dev/sdg devices are removed, and the output of gphoto2 is then:

        # gphoto2 --list-config
        /Camera Configuration/Picture Settings/resolution
        /Camera Configuration/Picture Settings/shutter
        /Camera Configuration/Picture Settings/aperture
        /Camera Configuration/Picture Settings/color
        /Camera Configuration/Picture Settings/flash
        /Camera Configuration/Picture Settings/whitebalance
        /Camera Configuration/Picture Settings/focus-mode
        /Camera Configuration/Picture Settings/focus-pos
        /Camera Configuration/Picture Settings/exp
        /Camera Configuration/Picture Settings/exp-meter
        /Camera Configuration/Picture Settings/zoom
        /Camera Configuration/Picture Settings/dzoom
        /Camera Configuration/Picture Settings/iso
        /Camera Configuration/Camera Settings/date-time
        /Camera Configuration/Camera Settings/lcd-mode
        /Camera Configuration/Camera Settings/lcd-brightness
        /Camera Configuration/Camera Settings/lcd-auto-shutoff
        /Camera Configuration/Camera Settings/camera-power-save
        /Camera Configuration/Camera Settings/host-power-save
        /Camera Configuration/Camera Settings/timefmt

    Some things I've tried already are sdparm and sg3_utils; however, I am unfamiliar with them, so it's possible I just didn't find the right command.

    Update 1:

        # mount | grep sdg
        # mount | grep sg7
        # umount /dev/sg7
        umount: /dev/sg7: not mounted
        # umount /dev/sdg
        umount: /dev/sdg: not mounted
        # gphoto2 --list-config
        *** Error ***
        An error occurred in the io-library ('Could not lock the device'): Camera is already in use.
        *** Error (-60: 'Could not lock the device') ***

    Read the article

  • iSCSI connection timing out on ESX 3.5 against a NetApp storage appliance

    - by Jesse1973
    I get a timeout on iSCSI on different ESX hosts (3.5) at different times. It is puzzling, as both the ESX hosts and the Windows and other guests are experiencing timeouts. The iSCSI network is segregated on a private network. Here is an export of vmkiscsid.log from last night:

        2010-02-10-12:30:38: iscsid: an InitiatorAlias= is required, but was not found in /etc/vmware/vmkiscsid/initiatorname.iscsi
        2010-02-10-12:30:38: iscsid: LogLevel = 0
        2010-02-10-12:30:38: iscsid: LogSync = 0
        2010-02-10-12:30:42: iscsid: Login Success: iqn.1992-08.com.netapp:sn.101197719,default,192.168.73.2,3260,2001, 0x1
        2010-02-10-12:30:42: iscsid: connection1:0 is operational now
        2010-02-16-02:03:35: iscsid: Kernel reported iSCSI connection 1:0 error (1008) state (3)
        2010-02-16-02:03:39: iscsid: connection1:0 is operational after recovery (2 attempts)
        2010-02-16-04:02:27: iscsid: Kernel reported iSCSI connection 1:0 error (1008) state (3)
        2010-02-16-04:02:32: iscsid: connection1:0 is operational after recovery (2 attempts)

    Should I edit the timeout value on the ESX host for iSCSI? That may work around the problem, but it will not solve it.

    Read the article

  • Damaged XenServer Storage LVM partition table

    - by Fiolek
    I have a home server running under XenServer control with 3x1TB discs inside: one for XenServer, and two mirrored (using Intel's fake RAID and dmraid) for VMs and user data (though now I think the RAID didn't actually work). I tried to pass a PCI card through to a VM using PCI passthrough, and I read somewhere that I needed to recompile the kernel with the pciback module, but something went wrong (I made a mistake in /boot/extlinux.conf and the server couldn't boot), so I had to use a GParted LiveCD (I already had it on a USB key) to correct this. But when I re-ran the server, all the VDIs were gone. I have completely no idea what could have gone wrong. I tried to repair the RAID using dmraid -R in the hope that everything would return to normal, but now I think this did more harm than good (and corrupted the rest of the LVM table...). Is there any possibility of recovering this SR, or at least the data from one (~100GB) of the VDIs?

    I also want to apologise for my English. I'm not from an English-speaking country and I'm only 16 years old, so I haven't had much chance to learn it properly.

    Read the article

< Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >