Search Results

Search found 67448 results on 2698 pages for 'data management'.


  • Core Data, Bindings, value transformers: crash when saving

    - by Gael
    Hi, I am trying to store a PNG image in a Core Data store backed by an SQLite database. Since I intend to use this database on an iPhone, I can't store NSImage objects directly. I wanted to use bindings and an NSValueTransformer subclass to handle the transcoding from the NSImage (obtained from an image well in my GUI) to an NSData containing the PNG binary representation of the image. I wrote the following code for the value transformer:

        + (Class)transformedValueClass { return [NSImage class]; }
        + (BOOL)allowsReverseTransformation { return YES; }

        - (id)transformedValue:(id)value {
            if (value == nil) return nil;
            return [[[NSImage alloc] initWithData:value] autorelease];
        }

        - (id)reverseTransformedValue:(id)value {
            if (value == nil) return nil;
            if (![value isKindOfClass:[NSImage class]]) {
                NSLog(@"Type mismatch. Expecting NSImage");
            }
            NSBitmapImageRep *bits = [[value representations] objectAtIndex:0];
            NSData *data = [bits representationUsingType:NSPNGFileType properties:nil];
            return data;
        }

    The model has a transformable property configured with this NSValueTransformer. In Interface Builder, a table column and an image well are both bound to this property, and both have the proper value transformer name (an image dropped in the image well shows up in the table column). The transformer is registered and called every time an image is added or a row is reloaded (checked with NSLog() calls). The problem arises when I try to save the managed objects. The console output shows the error message:

        [NSImage length]: unrecognized selector sent to instance 0x1004933a0

    It seems like Core Data is using the value transformer to obtain the NSImage back from the NSData, and then tries to save the NSImage instead of the NSData. There are probably workarounds, such as the one presented in this post, but I would really like to understand why my approach is flawed. Thanks in advance for your ideas and explanations.

    Read the article

  • Generic dataset handling library

    - by Pep.
    Hello, I want to build a generic Perl module for handling and analysing biomedical character-separated datasets, one which can most certainly be used on any kind of dataset that contains a mixture of categorical (A, B, C, ...), continuous (1.2, 3, 881, ...) and identifier (XXX1, XXX2, ...) variables. The plan is to have people initialize the module and then use some arguments to point to the data file(s), the place where the analysis reports should be placed, and the structure of the data. By structure of the data I mean which variable is in which place, and its name/type. And this is where I need some enlightenment. I am baffled as to how to do this in a clean way. Obviously, having people create a simple schema file, be it XML or some other format, would be the cleanest, but maybe not all people enjoy doing something like that. The solutions I can think of are:

    1. Create a configuration file in XML or similar, with a prespecified format.
    2. Pass the information during initialization of the module.
    3. Use the first row of the data as headers and try to guess the types (ouch).

    Surely there must be a "canonical" way of doing this that is also usable and efficient. Thanks, p.
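
    As a rough illustration of option 1: the sketch below (Python rather than Perl, and with invented column names) shows how little a column-schema file needs to contain, which may make a schema file less burdensome for users than it sounds.

        # A minimal sketch of option 1: a small schema file describing each
        # column. Python rather than Perl; the field names are invented.
        import json

        SCHEMA_TEXT = """
        {
          "separator": ",",
          "columns": [
            {"name": "sample_id", "type": "identifier"},
            {"name": "treatment", "type": "categorical"},
            {"name": "dosage",    "type": "continuous"}
          ]
        }
        """

        schema = json.loads(SCHEMA_TEXT)
        for column in schema["columns"]:
            print(column["name"], "->", column["type"])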

    Read the article

  • Ember Data Sync - LocalStorage+REST+RealTime+Online/Offline

    - by Miguel Madero
    We have a combination of requirements in terms of data access:

    - Pre-load some reference data. We need reference data to survive browser restarts instead of just living in memory, to avoid loading it all the time. I'm currently using the LocalStorageAdapter for that.
    - Once we have it, we would like to sync changes (polling, or using Socket.IO in the background and updating LocalStorage, could do the trick).
    - There are other models that are more transactional, where we would need to go directly to the server and get/save them. It would be nice to use something like the RESTAdapter for that.
    - Lastly, there are some operations that should work offline, and changes should be synced later.

    To make it more concrete:

    - We pre-load vendor and "favorite products" data into Local Storage. We work offline with those.
    - We need to sync server changes to vendor and product information.
    - If users search the full catalog, that requires them to be online.
    - When offline, we need to allow users to add something to their cart or even submit an order. We would like to queue this action and submit it when they have an Internet connection.

    So a few questions are derived from this:

    - Is there a way to use the RESTAdapter in combination with LocalStorage?
    - Is there some Socket.IO support? (Happy to do this part manually.)
    - Is there queueing support? Ideally at the Ember Data level.

    I know we will have to do a lot of this manually and pull together the different Lego pieces, but I wanted to ask for some perspective from experienced Ember devs.

    Read the article

  • Sending data in a GTK Callback

    - by snostorm
    How can I send data through a GTK callback? I've Googled, and with the information I found created this:

        #include <gtk/gtk.h>
        #include <stdio.h>

        void button_clicked(GtkWidget *widget, GdkEvent *event, gchar *data);

        int main(int argc, char *argv[]) {
            GtkWidget *window;
            GtkWidget *button;

            gtk_init(&argc, &argv);
            window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
            button = gtk_button_new_with_label("Go!");
            gtk_container_add(GTK_CONTAINER(window), button);
            g_signal_connect_swapped(G_OBJECT(window), "destroy", G_CALLBACK(gtk_main_quit), NULL);
            g_signal_connect(G_OBJECT(button), "clicked", G_CALLBACK(button_clicked), "test");
            gtk_widget_show(window);
            gtk_widget_show(button);
            gtk_main();
            return 0;
        }

        void button_clicked(GtkWidget *widget, GdkEvent *event, gchar *data) {
            printf("%s \n", (gchar *) data);
            return;
        }

    But it just segfaults when I press the button. What is the right way to do this?
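
    For comparison, here is a minimal sketch of the same wiring in Python/PyGObject rather than C (a hypothetical port, not the asker's code). GTK's "clicked" signal passes only the widget and the user data to the handler - there is no GdkEvent argument - which is worth checking against the C callback's parameter list.

        import gi
        gi.require_version("Gtk", "3.0")
        from gi.repository import Gtk

        # "clicked" handlers receive the widget and the user data, nothing else.
        def button_clicked(widget, data):
            print(data)

        window = Gtk.Window()
        button = Gtk.Button(label="Go!")
        window.add(button)
        window.connect("destroy", Gtk.main_quit)
        button.connect("clicked", button_clicked, "test")  # "test" arrives as data
        window.show_all()
        Gtk.main()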

    Read the article

  • Counting in R data.table

    - by Simon Z.
    I have the following data.table:

        set.seed(1)
        DT <- data.table(VAL = sample(c(1, 2, 3), 10, replace = TRUE))

            VAL
         1:   1
         2:   2
         3:   2
         4:   3
         5:   1
         6:   3
         7:   3
         8:   2
         9:   2
        10:   1

    Now I want to perform two tasks:

    1. Count the occurrences of numbers in VAL.
    2. Count within all rows with the same value VAL (first, second, third occurrence).

    At the end I want the result

            VAL COUNT IDX
         1:   1     3   1
         2:   2     4   1
         3:   2     4   2
         4:   3     3   1
         5:   1     3   2
         6:   3     3   2
         7:   3     3   3
         8:   2     4   3
         9:   2     4   4
        10:   1     3   3

    where COUNT defines task 1 and IDX task 2. I tried to work with which and length using .I:

        DT[, list(COUNT = length(VAL == VAL[.I]), IDX = which(which(VAL == VAL[.I]) == .I))]

    but this does not work, as .I refers to a vector with the index, so I guess one must use .I[]. Though inside .I[] I again face the problem that I do not have the row index, and I do know (from reading the data.table FAQ and following the posts here) that looping through rows should be avoided if possible. So, what's the data.table way?
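
    The two tasks translate directly into a grouped size and a grouped running index. As a cross-check of the intended output, here is a sketch in Python/pandas (not data.table, so only the logic carries over):

        import pandas as pd

        df = pd.DataFrame({"VAL": [1, 2, 2, 3, 1, 3, 3, 2, 2, 1]})

        # Task 1: how often each value occurs, broadcast back onto every row.
        df["COUNT"] = df.groupby("VAL")["VAL"].transform("size")

        # Task 2: 1-based running index of each occurrence within its group.
        df["IDX"] = df.groupby("VAL").cumcount() + 1

        print(df)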

    Read the article

  • Core Data object into an NSDictionary with possible nil objects

    - by Chuck
    I have a Core Data object that has a bunch of optional values. I'm pushing a table view controller and passing it a reference to the object so I can display its contents in a table view. Because I want the table view displayed a specific way, I am storing the values from the Core Data object in an array of dictionaries, then using the array to populate the table view. This works great, and I got editing and saving working properly. (I'm not using a fetched results controller because I don't have anything to sort on.) The issue with my current code is that if one of the items in the object is missing, I end up trying to put nil into the dictionary, which won't work. I'm looking for a clean way to handle this. I could do the following, but I can't help feeling like there's a better way. (passedEntry is the Core Data object handed to the view controller when it is pushed; let's say it contains firstName, lastName, and age, all optional.)

        if ([passedEntry firstName] != nil) {
            [dictionary setObject:[passedEntry firstName] forKey:@"firstName"];
        } else {
            [dictionary setObject:@"" forKey:@"firstName"];
        }

    And so on. This works, but it feels kludgy, especially if I end up adding more items to the Core Data object down the road.
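
    One generic shape for "loop over the keys instead of repeating the if/else" - sketched here in Python with invented field values, since the idea rather than the API is the point - is to centralize the fallback in one place:

        # Hypothetical field values pulled from the object; None marks a
        # missing optional. The empty-string fallback lives in one place.
        fields = {"firstName": None, "lastName": "Smith", "age": 42}
        dictionary = {key: (value if value is not None else "")
                      for key, value in fields.items()}
        print(dictionary)  # {'firstName': '', 'lastName': 'Smith', 'age': 42}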

    Read the article

  • using jquery $.ajax to retrieve data and push data into javascript array through the success function

    - by teamdane
    Hello All, I have a function that is trying to retrieve values using $.ajax and push the returned data into an array called output. The problem is that the success function will not let me push the results into the array. See below for the code:

        var getValues = function(el) {
            var value = ($(el).val());
            if (value == '' || $(el).attr('disabled')) {
                return [];
            }
            var string = 'value=' + value;
            var output = [];
            $.ajax({
                type: "GET",
                url: 'shocklookup_callback.php',
                dataType: "string",
                data: string,
                success: function(data) {
                }
            });
            //output.push({ text : value + 'test', value : value + 'test1'} );
            return output;
        };

    Any ideas of what I can put in the success function to be able to push the data into the output array? I also need to be able to return the output array to the original function, getValues. If this doesn't make a whole lot of sense, please let me know and I'll try to explain better. Thanks, Dane
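
    The shape of the problem is that the request is asynchronous: getValues returns before success ever runs, so the array is still empty at the return statement. A small Python sketch of the usual fix - pass a callback in and deliver the result to it - with a thread standing in for the network call (the names here are invented for illustration):

        import threading

        def fetch_async(url, callback):
            # Stand-in for $.ajax: do the "request" on another thread and
            # invoke the callback once the response is ready.
            def worker():
                callback(["fake", "response"])  # pretend server data
            threading.Thread(target=worker).start()

        def get_values(on_ready):
            # Returning here would happen before the data exists, so the
            # result is handed to a callback instead of returned.
            fetch_async("shocklookup_callback.php",
                        lambda data: on_ready(list(data)))

        get_values(print)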

    Read the article

  • How to reduce a data frame while keeping values from other columns

    - by betabandido
    I am trying to reduce a data frame using the max function on a given column. I would like to preserve the other columns, but keep the values from the same rows where each maximum value was selected. An example will make this explanation easier. Let us assume we have the following data frame:

        dframe <- data.frame(list(BENCH=sort(rep(letters[1:4], 4)),
                                  CFG=rep(1:4, 4),
                                  VALUE=runif(4 * 4)))

    This gives me:

           BENCH CFG      VALUE
        1      a   1 0.98828096
        2      a   2 0.19630597
        3      a   3 0.83539540
        4      a   4 0.90988296
        5      b   1 0.01191147
        6      b   2 0.35164194
        7      b   3 0.55094787
        8      b   4 0.20744004
        9      c   1 0.49864470
        10     c   2 0.77845408
        11     c   3 0.25278871
        12     c   4 0.23440847
        13     d   1 0.29795494
        14     d   2 0.91766057
        15     d   3 0.68044728
        16     d   4 0.18448748

    Now, I want to reduce the data in order to select the maximum VALUE for each different BENCH:

        aggregate(VALUE ~ BENCH, dframe, FUN=max)

    This gives me the expected result:

          BENCH     VALUE
        1     a 0.9882810
        2     b 0.5509479
        3     c 0.7784541
        4     d 0.9176606

    Next, I tried to preserve the other columns:

        aggregate(cbind(VALUE, CFG) ~ BENCH, dframe, FUN=max)

    This reduction returns:

          BENCH     VALUE CFG
        1     a 0.9882810   4
        2     b 0.5509479   4
        3     c 0.7784541   4
        4     d 0.9176606   4

    Both VALUE and CFG are reduced using the max function. But this is not what I want. For instance, in this example I would like to obtain:

          BENCH     VALUE CFG
        1     a 0.9882810   1
        2     b 0.5509479   3
        3     c 0.7784541   2
        4     d 0.9176606   2

    where CFG is not reduced, but just keeps the value associated with the maximum VALUE for each different BENCH. How could I change my reduction in order to obtain this last result?
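
    For what it's worth, the desired reduction is "keep the whole row where VALUE is maximal, per BENCH". A sketch of that idiom in Python/pandas (same logic, different tool):

        import pandas as pd
        import numpy as np

        rng = np.random.default_rng(1)
        df = pd.DataFrame({
            "BENCH": sorted(list("abcd") * 4),
            "CFG":   [1, 2, 3, 4] * 4,
            "VALUE": rng.random(16),
        })

        # idxmax returns the row label of the maximum VALUE in each BENCH
        # group; selecting those rows keeps CFG from the same row.
        result = df.loc[df.groupby("BENCH")["VALUE"].idxmax()]
        print(result)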

    Read the article

  • Core Data Errors vs Exceptions Part 3

    - by John Gallagher
    My question is similar to this one.

    Background

    I'm creating a large number of objects in a Core Data store using NSOperations to speed things up. I've followed all the Core Data multithreading rules - I've got a single persistent store coordinator and a managed object context per thread that, on save, is merging back to the main managed object context.

    The Problem

    When the number of threads running at once is more than 1, I get this exception logged on save of my Core Data store:

        NSExceptionHandler has recorded the following exception:
        NSInternalInconsistencyException -- optimistic locking failure

    What I've Tried

    My code that creates new entities is quite complex - it makes entities that have relationships with other entities that could be being created in a separate thread. If I replace my object creation routine with some very simple code just making non-related entries, everything works perfectly. Initially, as well as the exceptions, I was getting a save error saying Core Data couldn't save due to the merge failing. I read the docs and realised I needed a merge policy on the managed object context I was saving to. I set this up and, as this question states, the save error goes away, but the exception remains.

    My Question

    Do I need to worry about these exceptions? If I do need to get rid of the exceptions, any ideas on how I do it?

    Read the article

  • Quickly generate junk data of a certain size in JavaScript

    - by user1357607
    I am writing an upload speed test in JavaScript. I am using jQuery (and Ajax) to send chunks of data to a server in order to time how long it takes to get a response. This should, in theory, give an estimate of the upload speed. Of course, to cater for different user bandwidths, I sequentially upload larger and larger amounts of junk data until a threshold duration is reached. Currently I generate the junk data using the following function; however, it is very slow when generating megabytes of data.

        function generate_random_data(size){
            var chars = "abcdefghijklmnopqrstuvwxyz";
            var random_data = "";
            for (var i = 0; i < size; i++){
                var random_num = Math.floor(Math.random() * chars.length);
                random_data = random_data + chars.substring(random_num, random_num + 1);
            }
            return random_data;
        }

    Really all I am doing is generating a chunk of bytes to send to the server, but this is the only way I could find out how in JavaScript. Any help would be appreciated.
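
    The slow part is growing an immutable string one character at a time, which is quadratic; generating all the characters first and joining once is linear. A sketch of that shape in Python (the classic JS equivalent would push characters onto an array and call join('') once at the end):

        import random
        import string

        def generate_random_data(size):
            # Pick all characters in one pass and join once, instead of
            # repeatedly concatenating immutable strings (O(n^2)).
            return "".join(random.choices(string.ascii_lowercase, k=size))

        print(len(generate_random_data(1_000_000)))  # ~1 MB, near-instant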

    Read the article

  • Data transformation question

    - by tkm
    I have data composed of a list of employers and a list of workers. Each has a many-to-many relationship with the other (so an employer can have many workers, and a worker can have many employers). The way the data is retrieved (and given to me) is as follows: each employer has an array of workers. In other words:

        employer n has: worker x, worker y, etc.

    So I have a bunch of employer objects, each containing an array of workers. I need to transform this data (and basically invert the relationship). I need to have a bunch of worker objects, each containing an array of employers. In other words:

        worker x has: employer n1, employer n2, etc.

    The context is hypothetical, so please don't comment on why I need this or why I am doing it this way. I would really just like help on the algorithm to perform this transformation (there isn't that much data, so I would prefer readability over complex optimizations). (Oh, and I am using Java, but pseudocode would be fine.) Thanks!
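
    Since pseudocode is welcome: the inversion is a single pass that appends each employer to every one of its workers' lists. A sketch in Python (the attribute names are hypothetical, since the question doesn't give the API):

        from collections import defaultdict

        def invert(employers):
            # employers: iterable of objects with a .workers list.
            # Returns a mapping worker -> list of that worker's employers.
            workers = defaultdict(list)
            for employer in employers:
                for worker in employer.workers:
                    workers[worker].append(employer)
            return workers

    This is linear in the number of employer-worker relationships, and readability survives: each line mirrors the sentence "worker x has employer n".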

    Read the article

  • Authoritative sources about Database vs. Flatfile decision

    - by FastAl
    <tldr>Looking for a reference to a book or other undeniably authoritative source that gives reasons for when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr>

    I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary sort). Normally, a database would be an OK solution even though the problem does not require a database: none of the elements of ACID are needed, as it is read-only data, updated about every 3-5 years (also requiring other source-code changes); it fits in memory; and it can be keyed into via binary search (a tad faster than a db, but speed is not an issue).

    The problem is that this code runs on our enterprise server, but is shared with several PC platforms (some disconnected, some use a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department, etc. These hit their own platform's version of their databases (containing their own copy of the static data). What happens is that with every implementation, every little change, something different goes wrong. There are many other issues as well. I can't even use a flatfile, because one mode of running on our enterprise server does not have permission to read files (only databases, and of course, its own literal storage, e.g., an in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts.

    So why don't we just change it? I don't have administrative ability to force a change. But I'm affected, because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers, and d*mmit that makes me mad! The reason neither management, nor the designers of the system, can see the problem is that they propose a solution that won't work: increase communication; implement more safeguards and standards; etc. But every time, in a different part of the already-pared-down but still multi-step processes, a few different diligent, hard-working, top-performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! And in general these are not single-person failures, but understandable miscommunications. And communication at our company is actually better than most. People just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform, multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with human beings in the loop, and it can't be automated. In addition, the management and developers who could change this, though intelligent and capable, don't understand the rigidity of this 'how humans are' issue, and are not convincible on the matter.

    The reason putting the static data in source code will solve the problem is that, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of source code already works very well, you basically erase any database-related effort from this section of the project, along with all the drawbacks of it that are causing problems. OK, that's the background, for the curious. I won't be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd-party component that you can't change. So what I have to do is exploit the unsuitableness of the database solution, and do it not using logic, but rather authority. I am aware of many reasons, and posts on this site giving reasons, for one over the other; I'm not looking for lists of reasons like these (although you can add a comment if I've missed a doozy):

    WHY USE A DATABASE (instead of a flatfile/other)? If you need...
    - Random read / transparent search optimization
    - Advanced / varied / customizable searching and sorting capabilities
    - Transactions / rollback
    - Locks, semaphores
    - Concurrency control / shared users
    - Security
    - 1-many / many-many is easier
    - Easy modification
    - Scalability
    - Load balancing
    - Random updates / inserts / deletes
    - Advanced queries
    - Administrative control of design, etc.
    - SQL / learning curve
    - Debugging / logging
    - Centralized / live backup capabilities
    - Cached queries / develop & cache execution plans
    - Interleaved update/read
    - Referential integrity; avoiding redundant/missing/corrupt/out-of-sync data
    - Reporting (from an OLAP or OLTP db) / turnkey generation tools

    [Disadvantages:]
    - Important to get right the first time - professional design - but only because it's meant to last
    - Software & hardware cost
    - Usually over a network, so a speed issue (even locally, a separate process requires marshalling / network layers / inter-process communication)
    - Indices and query processing can stand in the way of simple processing (vs. a flatfile)

    WHY USE A FLATFILE? If you only need...
    - Sequential row processing only
    - Limited usage: append only (no reading, no master key/update)
    - Only updating the record you're reading (fixed-length records only)
    - Too big to fit into memory
    - Local disk / read-ahead network connection
    - Portability / small system
    - Email / cut & paste / store as document by a novice - simple format
    - Low design learning curve, but high cost later

    WHY USE AN IN-MEMORY TABLE (tables, arrays, etc.)? If you need...
    - Processing a single db/flatfile record that was imported
    - Known size of data
    - Static data, if hardcoding the table
    - Narrow, unchanging use (e.g., one program or proc) - includes a class that will be shared, but encapsulates its data manipulation
    - Extreme speed needed / high transaction frequency
    - Random access - but search is dependent on implementation

    Following are some other posts about the topic:

    http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over
    http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good
    http://stackoverflow.com/questions/2356851/database-vs-flat-files
    http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530

    What I'd like to know is if anybody could recommend a hard, authoritative source containing these reasons. I'm looking for a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites. This will have a greater chance to elicit the change I'm looking for: less wasted programmer time, and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!

    Read the article

  • Which is the best open-source IT infrastructure management software?

    - by karthick
    I am looking for some open-source IT infrastructure management software which should be able to monitor and manage servers and PCs, network devices, printers, etc., and it should have patch management, software inventory, user activity data, etc. I am planning to run it on a Linux server, and it should be able to manage both Linux and Windows machines. I have found many while googling, but I don't know which is the best one. So can anyone please suggest which one best fits what I am looking for? Thanks... your help is greatly appreciated.

    Read the article

  • What configuration management solutions exist in a non-networked environment?

    - by Rob Spieldenner
    My servers exist in an environment without outside network connectivity (this is a requirement), so when I deploy updates, all packages, binaries, config files, etc. must be included on the delivered media. And of course I want some sort of configuration management so I can tell what has and hasn't been installed. So I was wondering if people have experience with Chef, Puppet, or another configuration-management-type tool for dealing with this type of environment. Worst case, I deploy my updates as an RPM. EDIT: My setup has both Linux servers and Windows servers.

    Read the article

  • Is adding users to the group www-data safe on Debian?

    - by John
    Many PHP applications do self-configuration and self-updating. This requires Apache to have write access to the PHP files. While chgrp'ing them all to www-data appears to be good practice to avoid making them world-writable, I also wish to allow users to create new files and edit existing ones. Is adding users to the group www-data safe on Debian? For example:

        775 root www-data /var/www
        644 john www-data /var/www/johns_php_application.php
        660 john www-data /var/www/johns_php_applications_configuration_file

    Read the article

  • Quick and dirty user management service for Linux VMs?

    - by quack quixote
    Background

    I have a home server running Debian, and a workstation that runs various VirtualBox VMs (mostly Linuxen but some Windows). At the moment, I'm creating my main user account anew for every new Linux VM. I'd like to make use of a centralized user-management scheme instead, so I can just configure the new VMs for the directory technology and let them handle user lookups automatically. The last time I worked with anything like this, NIS+ was still in fashion. I have a vague notion of what LDAP and Active Directory are, but no knowledge of how to configure them for what I want.

    Question

    What user-management/network-directory technology should I use for providing user accounts to my network?

    - The server must run on Debian Lenny.
    - Client configuration should be simple: point at the server and go.
    - I need an example configuration for one sample user account.
    - (nice-to-have) I may want to mount the user's home directory from the server.
    - (nice-to-have) The same configuration works with Windows clients.

    Read the article

  • Memory management (segmentation and paging) in 80286 and 80386: How does it work?

    - by Andrew J. Brehm
    I found lots of Web sites and books explaining how memory management worked on the 8086 and later x86 CPUs in Real Mode. I understand, I think, how two 16-bit values, segment address and offset, are combined to get a linear 20-bit physical address (shift the segment four bits to the left, add the offset; segments are 64K and start every 16 bytes). But I couldn't find any good Web sites or books that explain how memory management works in Protected Mode, specifically the differences between the 80286 and the 80386. Can anyone point me to a good Web site or book (or explain it right here)? (For extra credit, i.e. an upvote: how does it work in Long Mode?)
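
    The real-mode arithmetic in the parenthesis is compact enough to state as code; a quick sketch:

        def real_mode_address(segment, offset):
            # 8086 real mode: the 16-bit segment shifted left 4 bits, plus
            # the 16-bit offset, yields a 20-bit physical address.
            return (segment << 4) + offset

        # Example: segment 0xB800, offset 0x0010 -> physical 0xB8010.
        assert real_mode_address(0xB800, 0x0010) == 0xB8010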

    Read the article

  • Mostly offsite asset management (laptops/smartphones) - what is a good SaaS based solution?

    - by Jack T
    Most of our company assets are offsite. Everyone either works at home or onsite at a customer. Most asset-management/audit/remote-control software concentrates on company-LAN-based assets. We don't need an NMS, as we use OpenNMS on the internal network. I was thinking of something like Altiris Client Management Suite, but since everything is connected to the internet, a SaaS-based solution sounds like the way to go. LogMeIn Central looks OK but not that comprehensive. What do you guys use?

    Read the article

  • What are the strengths and weaknesses of existing configuration management systems?

    - by Daniel C. Sobral
    I was looking here for some comparisons between CFEngine, Puppet, Chef, bcfg2, AutomateIt, and whatever other configuration management systems might be out there, and was very surprised I could find very little on Server Fault. For instance, I only knew of the first three links above -- the other two I found in a related Google search. So, I'm not interested in what people think is the best one, or which they like. I'd like to know the following:

    - The configuration management system's name.
    - Why it was created (as opposed to using an existing solution).
    - Relative strengths.
    - Relative weaknesses.
    - License.
    - Link to project and examples.

    Read the article

  • Problems with data driven testing in MSTest

    - by severj3
    Hello, I am trying to get data-driven testing to work in C# with MSTest/Selenium. Here is a sample of my code trying to set it up:

        [TestClass]
        public class NewTest
        {
            private ISelenium selenium;
            private StringBuilder verificationErrors;

            [DeploymentItem("GoogleTestData.xls")]
            [DataSource("System.Data.OleDb",
                "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=GoogleTestData.xls;Persist Security Info=False;Extended Properties='Excel 8.0'",
                "TestSearches$", DataAccessMethod.Sequential)]
            [TestMethod]
            public void GoogleTest()
            {
                selenium = new DefaultSelenium("localhost", 4444, "*iehta", "http://www.google.com");
                selenium.Start();
                verificationErrors = new StringBuilder();
                var searchingTerm = TestContext.DataRow["SearchingString"].ToString();
                var expectedResult = TestContext.DataRow["ExpectedTextResults"].ToString();

    Here's my error:

        Error 3  An object reference is required for the non-static field, method, or property 'Microsoft.VisualStudio.TestTools.UnitTesting.TestContext.DataRow.get'  E:\Projects\SeleniumProject\SeleniumProject\MaverickTest.cs  32  33  SeleniumProject

    The error is underlining the "TestContext.DataRow" part of both statements. I've really been struggling with this one. Thanks!

    Read the article

  • RIA Services vs ADO.NET Data Services

    - by Cody C
    I'm currently in the process of creating a Silverlight 3 data-driven application. To access the database, two common approaches are used: RIA Services and ADO.NET Data Services. Does anyone have any guidance on when/why to choose each approach? Here is what I've gathered from my research/experience; any thoughts?

    - ADO.NET Data Services seems to be useful only for strictly database calls. If you need to expose the data services to other applications (ignoring Silverlight 3's domain restriction), this is a good approach. Also, if the URL/query syntax can be useful in your application, that is another advantage.
    - RIA Services seems to be a more flexible, accepted framework. It seems to give you more than strictly database access. It does have the limitation of only being usable by the Silverlight / web application, as it is not exposed via a service.

    Thoughts? Ideas? Comments?

    Read the article

  • Stack data storage order

    - by Jamie Dixon
    When talking about a stack in either computing or "real" life, we usually assume "first in, last out" functionality. Because the idea of a stack is based on something in the physical world, does it matter how the data in the stack is stored? I notice in a lot of examples that the stack data is stored using an array, and the newest item added to the stack is placed at the bottom of the array (like adding a new plate to an existing stack of plates, except putting it underneath the other plates rather than on top). As a paradigm, does it matter in what order the data is stored within the stack, as long as the operation of the stack acts as expected?
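
    A short sketch makes the point concrete: as long as push and pop honor last-in-first-out, whether the newest item lives at the start or the end of the backing array is invisible to the caller - it only changes the implementation's cost. Appending at the end keeps both operations O(1):

        class Stack:
            def __init__(self):
                self._items = []          # newest element kept at the END

            def push(self, item):
                self._items.append(item)  # O(1); inserting at index 0 would be O(n)

            def pop(self):
                return self._items.pop()  # removes and returns the newest item

        s = Stack()
        s.push("plate 1")
        s.push("plate 2")
        print(s.pop())  # "plate 2" -- LIFO, regardless of internal layout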

    Read the article

  • OS X Server DNS management

    - by Sorin Buturugeanu
    I have an OS X 10.6 Server running PHP, Apache, MySQL, and DNS. I want to take the DNS management out of the Server Admin app. I know that the DNS configuration files (the ones BIND uses) are plain text files (which have to obey some rules, obviously). The main reason for this is that I wanted to set up DKIM for one of my domains, and I had to add a TXT record to the subdomain pm._domainkey.example.com. Server Admin did not let me add that subdomain, because of the "invalid" underscore character. I searched for web-based DNS management tools (ones that I could install on my server to manage my DNS records), but I couldn't find any good ones. (There were a couple that I managed to install, but they didn't see the configuration I had already set up in Server Admin.) Now I'm looking into editing the config files directly, but I don't know where they're located. This is a test/development server, so messing it up wouldn't be such a disaster. I know "I shouldn't do this", but I want to :). Thanks for your help.

    Read the article
