Search Results

Search found 5416 results on 217 pages for 'storage'.

Page 103/217 | < Previous Page | 99 100 101 102 103 104 105 106 107 108 109 110  | Next Page >

  • CouchDB, HDFS, HBase or which is right for my situation?

    - by Lucas
    Hello all. This question is regarding data storage systems such as CouchDB, HDFS and HBase, specifically, which one is the right fit for my situation. I am looking at making a simple and customized Document Management System for my organization. Basically, we need the ability to store some Word documents, PDFs and other similar files. I also want to store metadata about these files (e.g., author, dates, etc.). Usage permissions would also be handy, but that can probably be built using metadata. I would also need the ability to full-text index. The ability to version, while not required, would be extremely useful. I would like the ability to simply add hardware to expand the resources of the system, and the system must support Network Attached Storage over the CIFS or NFS protocol(s).

    I have read about CouchDB, HDFS and HBase. My preferred programming language is C#, as all of my end-users will be running Windows machines and I will want to make both web and WinForms client implementations. My question is: which solution best fits my needs? Based on my research, it appears that CouchDB (utilizing CouchDB-Lounge and CouchDB-Lucene) perfectly fits my needs. However, since I have worked with CouchDB, I am worried that I might be overlooking something useful for my needs in HDFS or HBase (or something similar) due to bias. Any and all opinions are welcome, as I am looking for community input and I really do not want to make the wrong choice at the start of my project. Please ask if you need more information. I thank you all for your time, input and assistance.

    Read the article

  • Cassandra hot keyspace structure change

    - by Pierre
    Hello. I'm currently running a 12-node Cassandra cluster storing 4 TB of data, with a replication factor set to 3. For the needs of an application update, we need to change the configuration of our keyspace, and we'd like to avoid any downtime if possible. I read on a mailing list that the best way to do it is to:

      1. Kill the Cassandra process on one server of the cluster
      2. Start it again, wait for the commit log to be written to disk, and kill it again
      3. Make the modifications in the storage.xml file
      4. Rename or delete the files in the data directories according to the changes we made
      5. Start Cassandra
      6. Go to 1 with the next server on the list

    My questions would be:

      - Did I understand the process well? Is there any risk of data corruption?
      - During the process, there will be servers with different versions of the storage.xml file in the same cluster, same keyspace. Is that a problem?
      - Same question as above if we not only add, rename and remove ColumnFamilies, but also change the CompareWith parameter / transform an existing column family into a super one. Or do we need to change the name?

    Thank you for your answers. It's the first time I'll do this, and I'm a little bit scared.

    Read the article

  • What is optimal hardware configuration for heavy load LAMP application

    - by Piotr Kochanski
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users; I am aiming at 5000 users. By concurrent I mean that 5000 people should be able to work with the application at the same time. "Work" means not only database reads but writes as well. The application is not very typical, since it does a lot of inserts/updates on the database, so caching techniques do not help too much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM.

    I would be grateful for information on what hardware is needed to build a scalable configuration that can handle this kind of load. We are currently using two HP DLG 380 servers with two 4-core processors each, which are able to handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster using them, or is it better to go with some more high-end hardware? I am particularly curious about:

      - how many and how powerful servers are needed (number of processors/cores, size of RAM)
      - what network equipment should be used (what kind of switches, network cards)
      - any other hardware that is needed, like particular disk storage solutions, etc.

    Another thing is how to put everything together, that is, what the most optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stackoverflow).

    Read the article

  • Compiled Haskell libraries with FFI imports are invalid when imported into GHCI

    - by John Millikin
    I am using GHC 6.12.1 on Ubuntu 10.04. When I try to use the FFI syntax for static storage, only modules running in interpreted mode (i.e. GHCi) work properly. Compiled modules have invalid pointers and do not work. I'd like to know whether anybody can reproduce the problem, whether this is an error in my code or in GHC, and (if the latter) whether it's a known issue.

    I'm using sys_siglist because it's present in a standard library on my system, but I don't believe the actual storage used matters (I discovered this while writing a binding to libidn). If it helps, sys_siglist is defined in <signal.h> as:

        extern __const char *__const sys_siglist[_NSIG];

    I thought this type might be the problem, so I also tried wrapping it in a plain C procedure:

        #include <stdio.h>

        const char **test_ffi_import()
        {
            printf("C think sys_siglist = %X\n", sys_siglist);
            return sys_siglist;
        }

    However, importing that doesn't change the result, and the printf() call prints the same pointer value as show siglist_a. My suspicion is that it's something to do with static and dynamic library loading.

    Update: somebody in #haskell suggested this might be 64-bit specific; if anybody tries to reproduce it, can you mention your architecture and whether it worked in a comment?

    Code as follows:

        -- A.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module A where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_a :: Ptr CString

        -- B.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module B where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_b :: Ptr CString

        -- Main.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module Main where

        import Foreign
        import Foreign.C

        import A
        import B

        foreign import ccall "&sys_siglist" siglist_main :: Ptr CString

        main = do
            putStrLn $ "siglist_a = " ++ show siglist_a
            putStrLn $ "siglist_b = " ++ show siglist_b
            putStrLn $ "siglist_main = " ++ show siglist_main
            peekSiglist "a " siglist_a
            peekSiglist "b " siglist_b
            peekSiglist "main" siglist_main

        peekSiglist name siglist = do
            ptr <- peekElemOff siglist 2
            str <- maybePeek peekCString ptr
            putStrLn $ "siglist_" ++ name ++ "[2] = " ++ show str

    I would expect something like this output, where all pointer values are identical and valid:

        $ runhaskell Main.hs
        siglist_a = 0x00007f53a948fe00
        siglist_b = 0x00007f53a948fe00
        siglist_main = 0x00007f53a948fe00
        siglist_a [2] = Just "Interrupt"
        siglist_b [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"

    However, if I compile A.hs (with ghc -c A.hs), then the output changes to:

        $ runhaskell Main.hs
        siglist_a = 0x0000000040378918
        siglist_b = 0x00007fe7c029ce00
        siglist_main = 0x00007fe7c029ce00
        siglist_a [2] = Nothing
        siglist_b [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"

    Read the article

  • paperclip overwrites / resets S3 permissions for non-bucket-owners

    - by adriandz
    I have opened this as an issue on Github (http://github.com/thoughtbot/paperclip/issues/issue/225), but on the chance that I'm just doing this wrong, I thought I'd also ask about it here. If someone can tell me where I'm going wrong, I can close the issue and save the Paperclip guys some trouble.

    Issue: when using S3 for storage, and you wish your bucket to allow access to other users to whom you have granted access, Paperclip appears to overwrite the permissions on the bucket, removing access to these users.

    Process for duplication:

      1. Create a bucket in S3 and set up a Rails app with Paperclip to use this bucket for storage.
      2. Add a user (for example, [email protected], the user for the video encoding service Zencoder) to the bucket, and grant this user List and Read/Write permissions.
      3. Upload a file.
      4. Refresh the permissions. The user you added will be gone. As well, a user "Everyone" with read permissions will have been added.

    The end result is that you cannot, so far as I can tell, retain desired permissions on your bucket when using Paperclip and S3. Can anyone help?

    Read the article

  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has a method API something like:

        public double[] runMyAlgorithm(double[] inputData);

    or alternatively a reference could be passed to the array to store the output data:

        public void runMyAlgorithm(double[] inputData, double[] outputData);

    Given this requirement, I'm trying to determine the optimal strategy for allocating / managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are:

      1. Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but it will produce a lot of garbage.
      2. Pre-allocate temporary arrays and store them as final fields in the algorithm object. The big downside is that this would mean only one thread could run the algorithm at any one time.
      3. Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will be running the same algorithm simultaneously (a small sketch of this option follows below).
      4. Pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use). Not good, since it will make the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space....
      5. Allocate extremely large arrays (e.g. double[10000000]) but also provide the algorithm with offsets into the array so that different threads will use a different area of the array independently. Will obviously require some code to manage the offsets and allocation of the array ranges.

    Any thoughts on which approach would be best (and why)?
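    A minimal sketch of the ThreadLocal option (3), assuming a fixed 100,000-element working buffer; the class name and size here are illustrative, not from the original question:

        // Per-thread scratch buffer: each thread reuses its own array, so concurrent
        // runs of the same algorithm never contend and never allocate fresh garbage.
        public final class ScratchSpace {
            private static final int SIZE = 100_000; // assumed working-set size

            private static final ThreadLocal<double[]> TEMP =
                    ThreadLocal.withInitial(() -> new double[SIZE]);

            private ScratchSpace() {}

            // Contents are whatever the previous call on this thread left behind;
            // callers must treat the array as uninitialised scratch space.
            public static double[] temp() {
                return TEMP.get();
            }
        }

    An algorithm would then call ScratchSpace.temp() at the start of each run instead of allocating a new temporary array.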

    Read the article

  • JavaScript Exception/Error Handling Not Working

    - by Seán Hayes
    This might be a little hard to follow. I've got a function inside an object:

        f_openFRHandler: function(input) {
            console.debug('f_openFRHandler');
            try{
                //throw 'foo';
                DragDrop.FileChanged(input);
                //foxyface.window.close();
            }
            catch(e){
                console.error(e);
                jQuery('#foxyface_open_errors').append('<div>Max local storage limit reached, unable to store new images in your browser. Please remove some images and try again.</div>');
            }
        },

    Inside the try block it calls:

        this.FileChanged = function(input) {
            // FileUploadManager.addFileInput(input);
            console.debug(input);
            var files = input.files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                if (!file.type.match(/image.*/)) continue;
                var reader = new FileReader();
                reader.onload = (function(f, isLast) {
                    return function(e) {
                        if (files.length == 1) {
                            LocalStorageManager.addImage(f.name, e.target.result, false, true);
                            LocalStorageManager.loadCurrentImage();
                            //foxyface.window.close();
                        }
                        else {
                            FileUploadManager.addFileData(f, e.target.result); // add multiple files to list
                            if (isLast) setTimeout(function() { LocalStorageManager.loadCurrentImage() }, 100);
                        }
                    };
                })(file, i == files.length - 1);
                reader.readAsDataURL(file);
            }
            return true;
        };

    LocalStorageManager.addImage calls:

        this.setItem = function(data){
            localStorage.setItem('ImageStore', $.json_encode(data));
        }

    localStorage.setItem throws an error if too much local storage has been used. I want to catch that error in f_openFRHandler (first code sample), but it's being sent to the error console instead of the catch block. I tried the following code in my Firebug console to make sure I'm not crazy, and it works as expected despite many levels of function nesting:

        try{
            (function(){
                (function(){ throw 'foo' })()
            })()
        } catch(e){ console.debug(e) }

    Any ideas?
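    One detail worth noting (an observation about the code above, not a guaranteed fix): localStorage.setItem is reached from inside reader.onload, which fires asynchronously after f_openFRHandler's try/catch has already returned, so the exception can only be caught inside the callback itself. A minimal sketch:

        reader.onload = function (e) {
            try {
                LocalStorageManager.addImage(file.name, e.target.result, false, true);
                LocalStorageManager.loadCurrentImage();
            } catch (err) {
                // the quota error surfaces here, in the async callback,
                // never in the outer try/catch of f_openFRHandler
                console.error(err);
                jQuery('#foxyface_open_errors').append(
                    '<div>Max local storage limit reached. Please remove some images and try again.</div>');
            }
        };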

    Read the article

  • Practical rules for Django MiddleWare ordering?

    - by o_O Tync
    The official documentation is a bit messy: 'before' & 'after' are used for ordering middleware in a tuple, but in some places 'before' & 'after' refer to the request-response phases. Also, 'should be first/last' notes are mixed in, and it's not clear which one to use as 'first'. I do understand the difference, however it seems too complicated for a newbie in Django.

    Can you suggest some correct ordering for the built-in middleware classes (assuming we enable all of them) and, most importantly, explain WHY one goes before/after the other ones?

    Here's the list, with the info from the docs I managed to find:

      UpdateCacheMiddleware
        - Before those that modify 'Vary:': SessionMiddleware, GZipMiddleware, LocaleMiddleware
      GZipMiddleware
        - Before any MW that may change or use the response body
        - After UpdateCacheMiddleware: modifies 'Vary:'
      ConditionalGetMiddleware
        - Before CommonMiddleware: uses its 'Etag:' header when USE_ETAGS=True
      SessionMiddleware
        - After UpdateCacheMiddleware: modifies 'Vary:'
        - Before TransactionMiddleware: we don't need transactions here
      LocaleMiddleware
        - One of the topmost, after SessionMiddleware
      CacheMiddleware
        - After UpdateCacheMiddleware: modifies 'Vary:'
        - After SessionMiddleware: uses session data
      CommonMiddleware
        - Before any MW that may change the response (it calculates ETags)
        - After GZipMiddleware so it won't calculate an ETag on gzipped contents
        - Close to the top: it redirects when APPEND_SLASH or PREPEND_WWW
      CsrfViewMiddleware
      AuthenticationMiddleware
        - After SessionMiddleware: uses session storage
      MessageMiddleware
        - After SessionMiddleware: can use session-based storage
      XViewMiddleware
      TransactionMiddleware
        - After MWs that use the DB: SessionMiddleware (configurable to use the DB)
        - All *CacheMiddleware is not affected (as an exception: uses its own DB cursor)
      FetchFromCacheMiddleware
        - After those that modify 'Vary:': it uses them to pick a value for the cache hash-key
        - After AuthenticationMiddleware so it's possible to use CACHE_MIDDLEWARE_ANONYMOUS_ONLY
      FlatpageFallbackMiddleware
        - Bottom: last resort
        - Uses the DB; however, is not a problem for TransactionMiddleware (yes?)
      RedirectFallbackMiddleware
        - Bottom: last resort
        - Uses the DB; however, is not a problem for TransactionMiddleware (yes?)

    (I will add suggestions to this list to collect all of them in one place.)
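    For reference, a minimal settings.py sketch that mirrors the ordering notes in the list above; it assumes a Django 1.x project (these module paths match that era) with every middleware above enabled, and it is one plausible arrangement rather than the canonical one:

        MIDDLEWARE_CLASSES = (
            'django.middleware.cache.UpdateCacheMiddleware',      # listed first: its response phase runs last
            'django.middleware.gzip.GZipMiddleware',
            'django.middleware.http.ConditionalGetMiddleware',
            'django.contrib.sessions.middleware.SessionMiddleware',
            'django.middleware.locale.LocaleMiddleware',
            'django.middleware.common.CommonMiddleware',
            'django.middleware.csrf.CsrfViewMiddleware',
            'django.contrib.auth.middleware.AuthenticationMiddleware',
            'django.contrib.messages.middleware.MessageMiddleware',
            'django.middleware.doc.XViewMiddleware',
            'django.middleware.transaction.TransactionMiddleware',
            'django.middleware.cache.FetchFromCacheMiddleware',   # late: sees 'Vary:' and auth on the request phase
            'django.contrib.flatpages.middleware.FlatpageFallbackMiddleware',
            'django.contrib.redirects.middleware.RedirectFallbackMiddleware',
        )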

    Read the article

  • Recommend me an architecture for this Facebook application

    - by andybaird
    Firstly, this question is subjective. There is no right answer for this question and it really depends on what works for you. I'm hoping to use this thread as a breeding ground for ideas. I hope this is acceptable in this medium.

    I'm working on building a Facebook app that will be replacing an already popular app that gets ~50k hits a day. The original app is using a very typical LAMP setup, with help from some Zend libraries for database layer abstraction. For the most part the app worked well, except that to solve a lot of issues I ended up fragmenting tables to speed things up. As a result, I couldn't do a lot of things with the app that I wanted to (namely any processing using aggregate data that needed to be returned quickly).

    So I'm starting to design plans for the next version of this application, and I have a whole bunch of new and cool features that I know would choke my current setup. I'm looking for technological recommendations of data storage methods that scale well. The database does not necessarily need to be relational; simple key/value storage would suffice (although at present I know little to nothing about KV stores).

    What's your recommendation? How would you tackle this? I'd like to take a completely free approach to this: although I am most familiar and comfortable using PHP, I want to leave all technical options open.

    Read the article

  • Decoding a jpg in the background in WP7

    - by Shahar Prish
    I have a bunch of apps in the marketplace, and so far I have been able, by changing my functionality or going the extra mile, to work around the issue of being unable to decode a JPG in the background into a WriteableBitmap. I am now finding a situation where I can't think of good ways to work around the issue.

    I need to decode the image I get from the MediaLibrary, reduce its resolution to something manageable (800x800), rotate it potentially, and save it to local storage. By far, the thing that takes the most time (80%) is decoding the bitmap to 800x800; it takes between 700 ms and 1000 ms. A user may add 7-10 images when starting, which translates to ~10 seconds of waiting for the images being added. I tried doing this lazily, but at some point you need to pay the piper, and the app essentially stutters for ~1000 ms at that point, so the experience is not great.

    Is there an alternative I am missing for loading the image in the background somehow? (Note on why CreateOptions.BackgroundCreation is no good for me: it loads the image into a BitmapImage, which is great if you want to just use it, but not so great for what I need to do, which is create a copy in Isolated Storage.)

    Read the article

  • C++ Problems with #import of .NET out-of-proc server.

    - by jm
    In a C++ program, I am trying to #import the TLB of a .NET out-of-proc server. I get errors like:

        z:\server.tlh(111) : error C2146: syntax error : missing ';' before identifier 'GetType'
        z:\server.tlh(111) : error C2501: 'TypePtr' : missing storage-class or type specifiers
        z:\server.tli(74) : error C2143: syntax error : missing ';' before 'tag::id'
        z:\server.tli(74) : error C2433: 'TypePtr' : 'inline' not permitted on data declarations
        z:\server.tli(74) : error C2501: '_TypePtr' : missing storage-class or type specifiers
        z:\server.tli(74) : fatal error C1004: unexpected end of file found

    The TLH looks like:

        ...
        _bstr_t GetToString ( );
        VARIANT_BOOL Equals ( const _variant_t & obj );
        long GetHashCode ( );
        _TypePtr GetType ( );
        long Open ( );
        ...

    I am not really interested in having the base .NET object methods like GetType(), Equals(), etc., but GetType() seems to be causing problems. Some Google research indicates I could #import MSCORLIB.TLB (or put it in the path), but I can't get that to compile either. Any tips?
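    For what it's worth, one commonly cited workaround (a sketch, assuming the collision really comes from the inherited System.Object members; the exact rename targets depend on which identifiers clash in your build) is to import mscorlib first and rename the offending names:

        // Illustrative only: rename whatever identifiers actually collide on your system.
        #import "mscorlib.tlb" raw_interfaces_only \
            rename("ReportEvent", "CorReportEvent")
        #import "server.tlb" rename("GetType", "GetTypeRaw") \
            rename("Equals", "EqualsRaw")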

    Read the article

  • Windows Azure: Creating a subdirectories inside the blob

    - by veda
    I wanted to create some subdirectories inside my blob, but it is not working out well. Here is my code:

        protected void ButUpload_click(object sender, EventArgs e)
        {
            // store uploaded file as a blob storage
            if (uplFileUpload.HasFile)
            {
                name = uplFileUpload.FileName;

                // get reference to the cloud blob container
                CloudBlobContainer blobContainer = cloudBlobClient.GetContainerReference("documents");

                if (textbox.Text != "")
                {
                    name = textbox.Text + "/" + name;
                }

                // set the name for the uploading files
                string UploadDocName = name;

                // get the blob reference and set the metadata properties
                CloudBlockBlob blob = blobContainer.GetBlockBlobReference(UploadDocName);
                blob.Metadata["FILETYPE"] = "text";
                blob.Properties.ContentType = uplFileUpload.PostedFile.ContentType;

                // upload the blob to the storage
                blob.UploadFromStream(uplFileUpload.FileContent);
            }
        }

    What I did is that, if I have to create a subdirectory, I enter the name of the subdirectory in the textbox. For example, if I need to create a file named "test.txt" inside the subdirectory "files", then my textbox.Text = files and uplFileUpload.FileName = test.txt. Now I concatenate them and upload to the blob, but it is not working well. I am getting just https://test.core.windows.net/documents/files/ and not the entire thing. I was expecting https://test.core.windows.net/documents/files/test.txt. What am I doing wrong? How do I create subdirectories inside the blob?
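    For comparison, a stripped-down sketch (reusing the cloudBlobClient from the code above; the blob name is illustrative) showing that the "/" in the blob name is the only thing that creates the virtual "files" directory, since blob storage has no real folders:

        CloudBlobContainer container = cloudBlobClient.GetContainerReference("documents");
        // "files/test.txt" is simply the blob's name; the "/" renders as a virtual directory
        CloudBlockBlob blob = container.GetBlockBlobReference("files/test.txt");
        blob.UploadText("hello");
        // blob.Uri should now end in /documents/files/test.txt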

    Read the article

  • array of structures, or structure of arrays?

    - by Jason S
    Hmmm. I have a table which is an array of structures that I need to store in Java. The naive don't-worry-about-memory approach says do this:

        public class Record {
            final private int field1;
            final private int field2;
            final private long field3;
            /* constructor & accessors here */
        }

        List<Record> records = new ArrayList<Record>();

    If I end up using a large number (10^6) of records, where individual records are accessed occasionally, one at a time, how would I figure out how the preceding approach (an ArrayList) would compare with an optimized approach for storage costs:

        public class OptimizedRecordStore {
            final private int[] field1;
            final private int[] field2;
            final private long[] field3;

            Record getRecord(int i) { return new Record(field1[i], field2[i], field3[i]); }
            /* constructor and other accessors & methods */
        }

    Edit: assume the number of records is something that changes infrequently or never. I'm probably not going to use the OptimizedRecordStore approach, but I want to understand the storage cost issue so I can make that decision with confidence. Obviously, if I add/change the number of records in the OptimizedRecordStore approach above, I either have to replace the whole object with a new one or remove the "final" keyword.

    kd304 brings up a good point that was in the back of my mind: in other situations similar to this, I need column access on the records, e.g. if field1 and field2 are "time" and "position", and it's important for me to get those values as an array for use with MATLAB, so I can graph/analyze them efficiently.
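    A rough sizing sketch, assuming a 64-bit HotSpot JVM with compressed oops (12-byte object headers, 4-byte references, fields padded to an 8-byte multiple); the exact numbers vary by JVM and these assumptions are mine, not from the question:

        public class RecordSizing {
            public static void main(String[] args) {
                long n = 1_000_000L;
                long header = 12, pad = 8;
                long perRecord = ((header + 4 + 4 + 8 + pad - 1) / pad) * pad; // 32 bytes per Record object
                long aosBytes = n * (perRecord + 4);   // objects + ArrayList references
                long soaBytes = n * (4 + 4 + 8);       // three flat primitive arrays
                System.out.printf("AoS ~ %d MB, SoA ~ %d MB%n",
                        aosBytes >> 20, soaBytes >> 20); // roughly 34 MB vs 15 MB
            }
        }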

    Read the article

  • Kohana session data does not persist across pages in Chrome and IE browsers

    - by user1062637
    Kohana session data does not persist across pages opened in Chrome and IE browsers; the same works fine in Firefox. The Kohana version used is 2.3. The session config file holds:

        $config['driver'] = 'native';

        /**
         * Session storage parameter, used by drivers.
         */
        $config['storage'] = '';

        /**
         * Session name.
         * It must contain only alphanumeric characters and underscores. At least one letter must be present.
         */
        $config['name'] = 'NITWSESSID';

        /**
         * Session parameters to validate: user_agent, ip_address, expiration.
         */
        $config['validate'] = array();

        /**
         * Enable or disable session encryption.
         * Note: this has no effect on the native session driver.
         * Note: the cookie driver always encrypts session data. Set to TRUE for stronger encryption.
         */
        $config['encryption'] = FALSE;

        /**
         * Session lifetime. Number of seconds that each session will last.
         * A value of 0 will keep the session active until the browser is closed (with a limit of 24h).
         */
        $config['expiration'] = 2700;

        /**
         * Number of page loads before the session id is regenerated.
         * A value of 0 will disable automatic session id regeneration.
         */
        $config['regenerate'] = 0;

        /**
         * Percentage probability that the gc (garbage collection) routine is started.
         */
        $config['gc_probability'] = 2;

    Help needed urgently.

    Read the article

  • SqlServer slow on production environment

    - by Lieven Cardoen
    I have a weird problem in a production environment at a customer. I can't give any details on the infrastructure, except that SQL Server runs on a virtual server and the data, log and filestream files are on another storage server (data and filestream together, and the log on a separate server).

    Now, there's this query that when we run it gives these durations (first we clear the cache): 300 ms, 20 ms, 15 ms, 17 ms, ... The first time it takes longer, but from then on it is cached. At the customer, on a SQL Server that is more powerful, these are the durations (I didn't have the rights to clear the cache; will try this tomorrow): 2500 ms, 2600 ms, 2400 ms, ...

    The query can be improved, that's right, but that's not the question here. How would you tackle this? I don't know where to go from here. The servers at this customer are really more powerful, but they do have virtual servers (we don't). What could be the cause... not enough memory? Fragmentation? Physical storage? I know it can be a lot of things, but maybe some of you have some info for me on how to go on with this issue...

    Read the article

  • Multidimensional array (parent and childs)

    - by Juan
    I have a category system in a MySQL database with parents and children. The database only stores the id of its immediate parent (or 0 if at the root). Since the system allows multiple subcategories, there are cases of multiple children. For example:

        [98] Storage
            [1] External
                [3] Pendrives
                [4] Portable hdds
            [2] Internal
                [5] Sata hdd
                [6] IDE hdd
        [...]
        [99] Clothing

    The database would be:

        id   parent_id   name
        1    98          External
        2    98          Internal
        3    1           Pendrives
        4    1           Portable
        5    2           Sata
        6    2           IDE
        98   0           Storage
        99   0           Clothing

    I also have a products table with a category id, and I need to get a list of all the products grouped by the first level of categories. For example:

        Product   Category
        A         3
        B         4
        C         5
        D         6
        E         74

    should return:

        98: A, B, C, D
        99: X, Y, Z...

    I'm stuck and I can't think of the logic to retrieve it in that way. I started by getting the IDs of all the categories that aren't in the first level:

        while ($row = mysql_fetch_assoc($result)) {
            if ($row['parent_id'] != 0) {
                $level1[$i]['name'] = utf8_encode($row['categories_name']);
                $level1[$i]['id'] = $row['categories_id'];
            }
            $i++;
        }

    but I'm having a burnout and can't think of a way that would nest them. I thought of some kind of while loop, but it's infinite :P Any ideas, please?
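    One way to think about the logic (a sketch with illustrative variable names, assuming the categories table has been loaded into an id-indexed array) is to walk each product's category up to its root and bucket by that root:

        // $categories: id => array('parent_id' => ..., 'name' => ...)
        function rootOf($id, $categories) {
            while ($categories[$id]['parent_id'] != 0) {
                $id = $categories[$id]['parent_id'];
            }
            return $id;
        }

        // $products: array of array('name' => ..., 'category_id' => ...)
        $byRoot = array();
        foreach ($products as $product) {
            $root = rootOf($product['category_id'], $categories);
            $byRoot[$root][] = $product['name'];   // e.g. 98 => A, B, C, D
        }

    The loop terminates because every chain of parent_id values eventually reaches a category whose parent is 0.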

    Read the article

  • How to check whether a user is logged in to a web application?

    - by Morgan Cheng
    I want to learn the whole details of web application authentication, so I decided to write a CodeIgniter authentication library from scratch. Now I have to make a design decision about how to determine whether a user is logged in.

    Basically, after the user inputs a username & password pair, a cookie is set for this session; subsequent navigation in the web application will not require the username & password. The server side will check whether the session cookie is valid to determine whether the current user is logged in. The question is: how to determine whether the cookie is a valid cookie issued from the server side?

    I can imagine the simplest way is to have the cookie value stored in the session state as well. For each HTTP request, compare the value from the cookie and the value from the server session. (Since CodeIgniter's session library stores session variables in cookies, it is not applicable without some tweaking.) This method requires storage on the server side. For a huge web application that is deployed in multiple datacenters, it is possible that a user inputs the username & password when browsing in one datacenter, while he/she accesses the web application in another datacenter later. The expected behavior is that the user should only input the username & password once. As a result, all datacenters should be able to access the session state. That may not be achievable even if the session state is stored in external storage such as a database.

    I tried Google: I logged in to Google with an Asian proxy, which is supposed to direct me to datacenters in Asia, then switched to a North American proxy, which should direct me to datacenters in North America. It recognized my login without asking for the username and password again.

    So, is there any way to determine whether a user is logged in without server-side session state?
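    One stateless approach worth considering (a sketch in plain PHP, not CodeIgniter-specific; the cookie layout and secret handling are illustrative) is to sign the login cookie with an HMAC, so any datacenter that holds the shared secret can verify it without a shared session store:

        $secret = 'secret-shared-by-all-datacenters';

        function make_login_cookie($username, $expires, $secret) {
            $payload = $username . '|' . $expires;
            return $payload . '|' . hash_hmac('sha256', $payload, $secret);
        }

        function is_logged_in($cookie, $secret) {
            $parts = explode('|', $cookie);
            if (count($parts) != 3) return false;
            list($username, $expires, $sig) = $parts;
            $expected = hash_hmac('sha256', $username . '|' . $expires, $secret);
            // constant-time comparison plus an expiry check
            return hash_equals($expected, $sig) && time() < (int)$expires;
        }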

    Read the article

  • mysql: storing arbitrary data

    - by Hailwood
    Background: I was asking a question on Stack Overflow about creating tables on the fly, where this conversation ensued:

        "This smells like a terrible idea! In fact, it smells just like this one. What in the world do you want to use this for?" – deceze

        "@deceze: very true. However, how else would you store the contents of these CSV files? They must be stored in MySQL for indexing. The only solid fact about them is that they all have a mobile column with a standard format. The CSV can have an arbitrary number of columns with an arbitrary number of rows. They can (with no exaggeration) range from a single-row, 35-column CSV to an 80k-row, single-column CSV. I am open to other ideas." – Hailwood

        "There are many solutions for this, from attribute-value schemas to JSON storage and NoSQL storage. Open a new question about it. Whatever you do though, don't dynamically create tables!" – deceze

    Question: So my question is, what would you say is the best way to store this data? Are you in agreement with deceze about not creating dynamic tables?
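    As a concrete illustration of the attribute-value idea deceze mentions (table and column names here are my own, not from the discussion), the shared mobile column can be promoted to a real, indexed field while every other CSV cell goes into a key/value table:

        CREATE TABLE csv_rows (
            id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            file_id INT UNSIGNED    NOT NULL,
            mobile  VARCHAR(20)     NOT NULL,
            KEY idx_mobile (mobile)
        );

        CREATE TABLE csv_values (
            row_id   BIGINT UNSIGNED NOT NULL,   -- references csv_rows.id
            col_name VARCHAR(64)     NOT NULL,   -- the CSV column header
            value    TEXT,
            PRIMARY KEY (row_id, col_name)
        );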

    Read the article

  • DuplicateKeyException in LINQ, but I've set auto increment and auto sync

    - by Fritos
    I'm getting a DuplicateKeyException error in my C# code. I've set Auto Generated = true and Auto-Sync = OnInsert in my dbml. I'm not even touching the PK field in any manually written code, as seen below (my primary key field is actually called PK).

        using (DeviceExerciseDataDataContext context = new DeviceExerciseDataDataContext())
        {
            foreach(Data tgudData in data.Data)
            {
                tgd = new tableData();
                tgd.FK = key;
                tgd.Time = tgudData.TimeStamp;
                tgd.Calories = Convert.ToInt32(tgudData.Calories);
                tgd.HeartRate = tgudData.AvgHr;
                tgd.BenchAngle = tgudData.Angle;
                tgd.WorkoutTarget = 0;
                tgd.Reps = tgudData.Reps;
                context.tableDatas.InsertOnSubmit(tgd);
            }
            context.SubmitChanges();
        }

    This is the code for the columns in the designer (the columns are named PK and FK):

        [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_PK", AutoSync=AutoSync.OnInsert, DbType="Int NOT NULL", IsPrimaryKey=true, IsDbGenerated=true)]
        public int PK
        {
            get { return this._PK; }
            set
            {
                if ((this._PK != value))
                {
                    this.OnPKChanging(value);
                    this.SendPropertyChanging();
                    this._PK = value;
                    this.SendPropertyChanged("PK");
                    this.OnPKChanged();
                }
            }
        }

        [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_FK", DbType="Int")]
        public System.Nullable<int> FK
        {
            get { return this._FK; }
            set
            {
                if ((this._FK != value))
                {
                    this.OnFKChanging(value);
                    this.SendPropertyChanging();
                    this._FK = value;
                    this.SendPropertyChanged("FK");
                    this.OnFKChanged();
                }
            }
        }

    Read the article

  • Need some clarification on the ANSI/SPARC 3-tier database architecture.

    - by Moonshield
    Hi there, I'm currently revising for a databases exam and looking over some past papers, but there's one question that I'm slightly unsure about and was wondering if someone could offer some assistance.

    "Describe EACH of the THREE levels of the ANSI SPARC 3 level architecture. Your answer should include the purpose of EACH of the schemas, the level of abstraction they provide and the software tools that would be used to access and support them."

    As I understand it (although please correct me if I'm wrong): the internal schema specifies the physical storage of the data; the conceptual schema specifies the structure of the database and the domains; and the external schemas are how the database is viewed by "users" (applications, etc.). As for the abstraction, I understand that the conceptual layer means the physical data storage can be altered without the end user being affected, and likewise the external schemas mean the underlying conceptual structure can change without affecting each user's view.

    The bit that I'm not sure about is what tools are used to access and support each layer. Would the internal schema be handled by the DBMS, the conceptual schema handled by some sort of DDL interpreter, and the external schema handled by a DML interpreter (or have I misunderstood what each level does)?

    Any assistance would be greatly appreciated. Thanks, Moonshield
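    A rough illustration (my own mapping, not from the past paper) of how the three levels surface in everyday SQL tooling:

        -- conceptual schema: the logical structure, defined through the DDL
        CREATE TABLE employee (id INT PRIMARY KEY, name VARCHAR(50), salary DECIMAL(9,2));

        -- internal schema: physical/storage decisions handled by the DBMS (indexes, tablespaces)
        CREATE INDEX idx_employee_name ON employee(name);

        -- external schema: one particular user's view of the data, queried through the DML
        CREATE VIEW payroll_view AS SELECT id, name FROM employee;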

    Read the article

  • Java ArrayList remove dupes without sets

    - by Kieran
    I'm having problems removing duplicates from an ArrayList. It's for a college assignment. Here's the code I have already:

        public int numberOfDiffWords() {
            ArrayList<String> list = new ArrayList<>();
            for(int i=0; i<words.size()-1; i++) {
                for(int j=i+1; j<words.size(); j++) {
                    if(words.get(i).equals(words.get(j))) {
                        // do nothing
                    } else {
                        list.add(words.get(i));
                    }
                }
            }
            return list.size();
        }

    The problem is in the numberOfDiffWords() method. The populate-list method is working correctly, as my instructor has given me a sample string (containing 4465 words) to analyse; printing words.size() gives the correct result. I want to return the size of the new ArrayList with all duplicates removed. words is an ArrayList class attribute.

    UPDATE: I should have mentioned I'm only allowed to use dynamic index-based storage for this part of the assignment, which means no hash-based storage.
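    For comparison, a sketch of one counting approach that stays within the index-based-storage constraint (an illustration, not necessarily the intended assignment solution): build a second list and only add words that are not already in it.

        public int numberOfDiffWords() {
            ArrayList<String> distinct = new ArrayList<>();
            for (int i = 0; i < words.size(); i++) {
                // linear contains() scan instead of a hash lookup, so no hash-based storage is used
                if (!distinct.contains(words.get(i))) {
                    distinct.add(words.get(i));
                }
            }
            return distinct.size();
        }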

    Read the article

  • Greasemonkey failing to GM_setValue()

    - by HonoredMule
    I have a Greasemonkey script that uses a JavaScript object to maintain some stored objects. It covers quite a large volume of information, but substantially less than it successfully stored and retrieved prior to encountering my problem. One value refuses to save, and I cannot for the life of me determine why. The following problem code:

      - Works for other, larger objects being maintained.
      - Is presently handling a smaller total amount of data than previously worked.
      - Is not colliding with any function or other object definitions.
      - Can (optionally) successfully save the problem storage key as "{}" during code startup.

        this.save = function(table) {
            var tables = this.tables;
            if(table)
                tables = [table];
            for(i in tables) {
                logger.log(this[tables[i]]);
                logger.log(JSON.stringify(this[tables[i]]));
                GM_setValue(tables[i] + "_" + this.user, JSON.stringify(this[tables[i]]));
                logger.log(tables[i] + "_" + this.user + " updated");
                logger.log(GM_getValue(tables[i] + "_" + this.user));
            }
        }

    The problem is consistently reproducible, and the logging statements produce the following output in Firebug:

        Object { 54,10 = Object }    // Expansion shows complete contents as expected, but there is one oddity--Firebug highlights the array keys in purple instead of the usual black for anonymous objects.
        {"54,10":{"x":54,"y":10,"name":"Lucky Pheasant"}}    // The correctly parsed string.
        bookmarks_HonoredMule saved
        undefined

    I have tried altering the format of the object keys, to no effect. Further narrowing down the issue is that this particular value is successfully saved as an empty object ("{}") during code initialization, but skipping that also does not help. Reloading the page confirms that saving of the non-empty object truly failed. Any idea what could cause this behavior? I've thoroughly explored the possibility of hitting size constraints, but it doesn't appear that can be the problem: as previously mentioned, I've already reduced storage usage. Other larger objects still save, and the total number of objects, which was not high already, has been further reduced by an amount greater than the quantity of data I'm attempting to store here.

    Read the article

  • How can I prevent double file uploading with Amazon S3?

    - by Tony
    I decided to use Amazon S3 for document storage for an app I am creating. One issue I run into is that while I need to upload the files to S3, I also need to create a document object in my app so my users can perform CRUD actions.

    One solution is to allow for a double upload: a user uploads a document to the server my Rails app lives on; I validate and create the object, then pass it on to S3. One issue with this is that progress indicators become more complicated. Using most out-of-the-box plugins would show the client that the file has finished uploading because it is on my server, but then there would be a decent delay while the file goes from my server to S3. This also introduces unnecessary bandwidth (at least it does not seem necessary).

    The other solution I am thinking about is to upload the file directly to S3 with one AJAX request and, when that is successful, make a second AJAX request to store the object in my database. One issue here is that I would have to validate the file after it is uploaded, which means I would have to run some clean-up code in S3 if the validation fails.

    Both seem equally messy. Does anyone have something more elegant working that they would not mind sharing? I would imagine this is a common situation with "cloud storage" being quite popular today. Maybe I am looking at this wrong.

    Read the article

  • Does committed memory go to physical RAM or reserve space in the paging file?

    - by Sil
    When I do VirtualAlloc with MEM_COMMIT, this "Allocates physical storage in memory or in the paging file on disk for the specified reserved memory pages" (quote from the MSDN article http://msdn.microsoft.com/en-us/library/aa366887%28VS.85%29.aspx). All is fine up until now, BUT:

      1. The description of the Committed Bytes counter says that "Committed memory is the physical memory which has space reserved on the disk paging file(s)."
      2. I also read "Windows via C/C++", 5th edition, and this book says that committing memory means reserving space in the page file...

    The last two cases don't make sense to me... If you commit memory, doesn't that mean that you commit to physical storage (RAM), with the page file being there for swapping out currently unused pages of memory in case memory gets low? The book says that when you commit memory you actually reserve space in the paging file. If this were true, that would mean that for a committed page there is space reserved in the paging file and a page frame in physical memory... so twice as much space is needed?! Isn't the page file's purpose to make the total physical memory larger than it actually is? If I have 1 GB of RAM with a 1 GB page file, that gives 2 GB of usable "physical memory" (the book also states this, but right after that it says what I described at point 2). What am I missing? Thanks.
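    A small sketch (Windows-only, with an illustrative size) of the reserve/commit distinction: committing charges the system commit limit, which is backed by RAM plus the page file, but a physical page frame is only assigned when the memory is first touched, and nothing is written to the page file unless memory pressure pushes the page out.

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            SIZE_T size = 1 << 20;  /* 1 MB */

            /* reserve address space only: no commit charge yet */
            char *p = (char *)VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);

            /* commit: counted against the commit limit (RAM + page file), still no page-file I/O */
            VirtualAlloc(p, size, MEM_COMMIT, PAGE_READWRITE);

            p[0] = 42;  /* first touch: a physical page frame is assigned now */

            printf("committed at %p\n", (void *)p);
            VirtualFree(p, 0, MEM_RELEASE);
            return 0;
        }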

    Read the article

  • To display the images in mobile devices is it necessary that the images should resides on device in

    - by Shailesh Jaiswal
    I am developing a smart device application in C#. In this application I have some images which I display on the emulator from my application. To display the images on the emulator, I need to create a folder of images that resides on the emulator; only after that am I able to display the images. I am able to create the folder on the emulator by using File - Configure - General - Shared Folder and giving the path of the folder which contains the images. Once I share the folder, the folder of images from my application gets copied to the emulator under the name "Storage Card". Now I need to use the path as:

        Bitmap bmp = new Bitmap(@"/Storage Card/ImageName.jpg");

    and I am able to display the images on the emulator. Can we display the images on the emulator without any image folder residing on the emulator (so that we don't need to place the image folder on the emulator, as in the above case of sharing the folder)? If the answer is no, then to run the application on different mobile devices we would need to place the folder containing the images on each device, wouldn't we? If the answer is yes, then how can we display the images on different mobile devices from our application without placing any folder of images on the devices?
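    One alternative worth checking (a sketch; the resource name depends on your project's default namespace and folder, so treat it as illustrative) is to embed the images in the assembly as Embedded Resources and load them from the manifest stream, so nothing has to be copied to the device or emulator separately:

        using System.Drawing;
        using System.Reflection;

        // Image file added to the project with Build Action = Embedded Resource
        Bitmap bmp = new Bitmap(
            Assembly.GetExecutingAssembly()
                    .GetManifestResourceStream("MyApp.Images.ImageName.jpg"));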

    Read the article
