Search Results

Search found 65999 results on 2640 pages for 'large data volumes'.


  • How to entirely recover data from a deleted NTFS partition after installing Ubuntu 13.04

    - by Anson Varghese
    I've installed Ubuntu 13.04 onto my HP 2231tx computer. During installation all of my data was erased; I didn't know that all three of my partitions would be deleted, and I was shocked to find that all of my personal data was gone. I didn't know what to do to resolve this problem, so I searched Google for an answer. I found a program called TestDisk and used it to recover about half of my data, but the recovered data did not include my personal photos and videos. Is there a way to recover the other half?

    Read the article

  • Bring 2 GB Large Pages to Solaris 10

    - by Giri Mandalika
    Few facts:
    - 8 KB is the default page size on Oracle Solaris 10 and 11 as of this writing
    - Both hardware and software must have support for 2 GB large pages
    - SPARC T4 processors are capable of supporting 2 GB pages
    - Oracle Solaris 11 kernel has in-built support for 2 GB pages
    - Oracle Solaris 10 has no default support for 2 GB pages
    - Memory intensive 64-bit applications may benefit the most from using 2 GB pages

    Prerequisites:
    - OS: Oracle Solaris 10 8/11 (Update 10) or later
    - Hardware: Oracle servers with SPARC T4 processors, e.g., SPARC T4-1, T4-2 or T4-4, SPARC SuperCluster T4-4

    Steps to enable 2 GB large pages on Oracle Solaris 10:
    1. Install the latest kernel patch, or ensure that 147440-04 or later was installed (check the patch download instructions).
    2. Add the following line to /etc/system and reboot:

           set max_uheap_lpsize=0x80000000

    3. Finally, check the output of the following command when the system is back online:

           pagesize -a

    e.g.,

        % pagesize -a
        8192          <-- 8K
        65536         <-- 64K
        4194304       <-- 4M
        268435456     <-- 256M
        2147483648    <-- 2G
        % uname -a
        SunOS jar-jar 5.10 Generic_147440-21 sun4v sparc sun4v

    Also see:
    - Solaris 9 or later: More performance with Large Pages (MPSS)
    - Large page support for instructions (text) in Solaris 10 1/06
    - Solaris: How To Disable Out Of The Box (OOB) Large Page Support?
    - Memory fragmentation / Large Pages on Solaris x86

    Read the article

  • E-Book on big data (featuring Analysts, Customers and more)

    - by Jean-Pierre Dijcks
    As we are gearing up for Openworld, here is a nice E-book on big data to start paging through. It contains Gartner's take on big data, customer and partner interviews and a lot more good info. Enjoy the read so you come prepared for Openworld!! Read the E-Book here. For those coming to Oracle Openworld (or the Americas Cup races around the same time), you can find big data sessions via this URL. Enjoy!!

    Read the article

  • Choices in Architecture, Design, Algorithms, Data Structures for effective RDF Reasoning and Querying in a Big Data Environment [on hold]

    - by user2891213
    As part of my academic project, I would like to know what choices in architecture, design, algorithms, and data structures are needed to provide effective and efficient RDF reasoning and querying in a big data environment. Basically, I want information on the points below:
    - What systems and software make up an appropriate architecture?
    - What kind of API layer(s) would we need on top of the big data stores to make this possible?
    - The indexing structures we will need.
    - The appropriate algorithms, including algorithms for query planning across big data stores.
    - The performance analysis and cost models we will need to justify the design decisions made along the way.
    Can anyone please provide pointers? Thanks, David

    Read the article

  • Large Image in .net

    - by Modir
    I want to create a large image in C#. I have some photos that are large (4800 x 4800) and I want to merge them. I tried using Bitmap, but it doesn't work (Error: Invalid Parameter).
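
    A minimal sketch of one way to do the merge, assuming System.Drawing (GDI+), enough memory for the combined canvas, and hypothetical file names; this is an illustration, not the asker's code:

        using System.Drawing;
        using System.Drawing.Imaging;

        // Place two 4800 x 4800 photos side by side on a single 9600 x 4800 canvas.
        // An explicit 24-bpp pixel format keeps the memory footprint predictable.
        using (Bitmap left = new Bitmap("photo1.jpg"))
        using (Bitmap right = new Bitmap("photo2.jpg"))
        using (Bitmap merged = new Bitmap(left.Width + right.Width, left.Height, PixelFormat.Format24bppRgb))
        {
            using (Graphics g = Graphics.FromImage(merged))
            {
                g.DrawImage(left, 0, 0, left.Width, left.Height);
                g.DrawImage(right, left.Width, 0, right.Width, right.Height);
            }
            merged.Save("merged.jpg", ImageFormat.Jpeg);
        }

    Note that GDI+ reports "Invalid Parameter" for several unrelated failures, so treat the pixel-format and dimension choices above as assumptions to verify rather than a guaranteed fix.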

    Read the article

  • Data Structure Behind Amazon S3's Keys (Filtering Data Structure)

    - by dimo414
    I'd like to implement a data structure similar to the lookup functionality of Amazon S3. For those of you who don't know what I'm talking about, Amazon S3 stores all files at the root, but allows you to look up groups of files by common prefixes in their names, thereby replicating the power of a directory tree without the complexity of it. The catch is that both lookup and filter operations are O(1) (or close enough that even on very large buckets, S3's disk equivalents, both operations might as well be O(1)). So in short, I'm looking for a data structure that functions like a hash map, with the added benefit of efficient (at the very least not O(n)) filtering. The best I can come up with is extending HashMap so that it also contains a (sorted) list of contents, and doing a binary search for the range that matches the prefix, and returning that set. This seems slow to me, but I can't think of any other way to do it. Does anyone know either how Amazon does it, or a better way to implement this data structure?
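
    For what it's worth, here is a C# sketch of the asker's own "hash map plus sorted keys" idea (not how Amazon actually implements it); the keys and values are made up for illustration:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // The dictionary gives O(1) exact lookups; the ordinally sorted set answers
        // prefix queries in roughly O(log n + m), where m is the number of matches.
        var values = new Dictionary<string, string>(StringComparer.Ordinal);
        var keys = new SortedSet<string>(StringComparer.Ordinal);

        void Put(string key, string value)
        {
            values[key] = value;
            keys.Add(key);
        }

        IEnumerable<string> KeysWithPrefix(string prefix)
        {
            // Under ordinal ordering, keys that start with the prefix sort between
            // the prefix itself and prefix + U+FFFF (a simplification that ignores
            // exotic key names containing U+FFFF).
            return keys.GetViewBetween(prefix, prefix + '\uffff')
                       .Where(k => k.StartsWith(prefix, StringComparison.Ordinal));
        }

        Put("photos/2010/beach.jpg", "blob1");
        Put("photos/2011/party.jpg", "blob2");
        Put("docs/readme.txt", "blob3");
        foreach (string key in KeysWithPrefix("photos/")) Console.WriteLine(key);
        Console.WriteLine(values["docs/readme.txt"]);   // O(1) exact lookup

    A trie (prefix tree) keyed on path segments gets the filter closer to O(k) in the length of the prefix, at the cost of more bookkeeping than the sorted set.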

    Read the article

  • Best Data Structure For Time Series Data

    - by TriParkinson
    Hi all, I wonder if someone could take a minute out of their day to give their two cents on my problem. I would like some suggestions on what would be the best data structure for representing, on disk, a large data set of time series data. The main priority is speed of insertion, with the other priorities, in decreasing order: speed of retrieval, size on disk, size in memory, and speed of removal. I have seen that B+ trees are often used in databases because of their fast search times, but how about their insertion times? Is a linked list really the way to go? Thanks in advance for your time, Tri
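
    For comparison, a minimal C# sketch of the "optimize for insertion" extreme, assuming fixed-size records whose timestamps arrive roughly in order (the record layout and names are invented for illustration): appends to a flat binary file are O(1), and retrieval becomes a binary search over the fixed-size records.

        using System;
        using System.IO;

        const int RecordSize = sizeof(long) + sizeof(double);   // timestamp ticks + value

        // Insertion: a single sequential append, the cheapest thing a disk can do.
        void Append(string path, long timestampTicks, double value)
        {
            using var stream = new FileStream(path, FileMode.Append, FileAccess.Write);
            using var writer = new BinaryWriter(stream);
            writer.Write(timestampTicks);
            writer.Write(value);
        }

        // Retrieval: binary search on the timestamp field; the file stays sorted by
        // time as long as inserts arrive in order.
        long FindFirstAtOrAfter(string path, long timestampTicks)
        {
            using var stream = new FileStream(path, FileMode.Open, FileAccess.Read);
            using var reader = new BinaryReader(stream);
            long lo = 0, hi = stream.Length / RecordSize;
            while (lo < hi)
            {
                long mid = (lo + hi) / 2;
                stream.Seek(mid * RecordSize, SeekOrigin.Begin);
                if (reader.ReadInt64() < timestampTicks) lo = mid + 1; else hi = mid;
            }
            return lo;   // index of the record; seek to lo * RecordSize to read it
        }

        Append("series.bin", DateTime.UtcNow.Ticks, 42.0);
        Console.WriteLine(FindFirstAtOrAfter("series.bin", 0));

    Removal is the weak point of this layout (it needs tombstones or compaction), which is consistent with it being the lowest priority in the question.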

    Read the article

  • MySQL: Blank row in table after LOAD DATA INFILE

    - by Tom
    Hi, I'm uploading a large amount of data from a CSV (I'm doing it via MySQL Workbench):

        LOAD DATA INFILE 'C:/development/mydoc.csv'
        INTO TABLE mydatabase.mytable
        CHARACTER SET utf8
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\r';

    However, I'm noticing that it keeps adding an empty row full of nulls/zeros after the last record. I'm guessing it's because of the LINES TERMINATED clause, but I need that to load the data in correctly. Is there some way around this, or some better SQL to avoid the blank row in the table? Thanks

    Read the article

  • Update tableview instantly as data is pushed into Core Data (iPhone)

    - by user336685
    I need to update the table view as soon as content is pushed into the Core Data database. For this, AppDelegate.m contains the following code:

        NSManagedObjectContext *moc = [self managedObjectContext];
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"FeedItem" inManagedObjectContext:moc]];

        // for loop
        // push data into Core Data & then save context
        [moc save:&error];
        ZAssert(error == nil, @"Error saving context: %@", [error localizedDescription]);
        // for loop ends

    This code triggers the following code from RootviewController.m:

        - (void)controllerWillChangeContent:(NSFetchedResultsController *)controller {
            [[self tableView] beginUpdates];
        }

    But this updates the table view only at the end of the for loop; the table does not get updated immediately after each push into the database. I tried the following code, but that didn't work:

        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
            // In the simplest, most efficient, case, reload the table view.
            [self.tableView reloadData];
        }

    I have been stuck with this problem for several days. Please help. Thanks in advance for a solution.

    Read the article

  • Core Data data type for just the date - not including time

    - by Jason
    I am new to Core Data, and it seems like a great way to manage the data store. However, I am also very memory-conscious, given that the iPhone doesn't have much memory. I was a little surprised to see that the data types are so limited: for example, there is a Date type which also includes the time, but no Date type for just the date. All the time information takes up precious bytes of memory. If I just wanted an attribute with the date (e.g., 2/15/2010 rather than 2/15/2010 02:34:48), how could I do this? Is it possible?

    Read the article

  • Parsing large delimited files with dynamic number of columns

    - by annelie
    Hi, what would be the best approach to parse a delimited file when the columns are unknown before parsing the file? The file format is Rightmove v3 (.blm), and the structure looks like this:

        #HEADER#
        Version : 3
        EOF : '^'
        EOR : '~'
        #DEFINITION#
        AGENT_REF^ADDRESS_1^POSTCODE1^MEDIA_IMAGE_00~   // can be any number of columns
        #DATA#
        agent1^the address^the postcode^an image~
        agent2^the address^the postcode^^~   // the records have to have the same number of columns as specified in the definition, however they can be empty etc
        #END#

    The files can potentially be very large; the example file I have is 40 MB, but they could be several hundred megabytes. Below is the code I had started on before I realised the columns were dynamic. I'm opening a FileStream, as I read that was the best way to handle large files. I'm not sure my idea of putting every record in a list and then processing it is any good, though; I don't know if that will work with such large files.

        List<string> recordList = new List<string>();
        try
        {
            using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read))
            {
                StreamReader file = new StreamReader(fs);
                string line;
                while ((line = file.ReadLine()) != null)
                {
                    string[] records = line.Split('~');
                    foreach (string item in records)
                    {
                        if (item != String.Empty)
                        {
                            recordList.Add(item);
                        }
                    }
                }
            }
        }
        catch (FileNotFoundException ex)
        {
            Console.WriteLine(ex.Message);
        }

        foreach (string r in recordList)
        {
            Property property = new Property();
            string[] fields = r.Split('^');
            // can't do this as I don't know which field is the post code
            property.PostCode = fields[2];
            // etc
            propertyList.Add(property);
        }

    Any ideas of how to do this better? It's C# 3.0 and .NET 3.5 if that helps. Thanks, Annelie
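
    One way to cope with the dynamic columns (a sketch; everything beyond the question's own file format is assumed) is to read the line after #DEFINITION# first, build a column-name-to-index map, and then look fields up by name for each record in the #DATA# section:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Sample inputs; in practice these come from the .blm file being streamed.
        string definitionLine = "AGENT_REF^ADDRESS_1^POSTCODE1^MEDIA_IMAGE_00~";
        string record = "agent1^the address^the postcode^an image~";

        // Map each column name declared in #DEFINITION# to its position.
        Dictionary<string, int> columnIndex = definitionLine
            .TrimEnd('~')
            .Split('^')
            .Select((name, i) => new { name, i })
            .ToDictionary(x => x.name, x => x.i);

        // Pull fields out of a record by name instead of hard-coded positions.
        string[] fields = record.TrimEnd('~').Split('^');
        Console.WriteLine(fields[columnIndex["POSTCODE1"]]);   // "the postcode"

    The LINQ used here (Select with an index, anonymous types, ToDictionary) is available in .NET 3.5, so it fits the C# 3.0 constraint; only the top-level-statement layout of the snippet itself is newer than that.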

    Read the article

  • Clever ways of implementing different data structures in C & data structures that should be used more

    - by Yktula
    What are some clever (not ordinary) ways of implementing data structures in C, and what are some data structures that should be used more often? For example, what is the most effective way (generating minimal overhead) to implement a directed and cyclic graph with weighted edges in C? I know that we can store the distances in an array as is done here, but what other ways are there to implement this kind of a graph?

    Read the article

  • Using GameKit to transfer CoreData data between iPhones, via NSDictionary

    - by OscarTheGrouch
    I have an application where I would like to exchange information, managed via Core Data, between two iPhones. My plan is to first turn the Core Data object into an NSDictionary (something very simple that gets turned into NSData to be transferred). My Core Data entity has 3 string attributes and 2 image attributes that are transformable. I have looked through the NSDictionary API but have not had any luck creating it or adding the Core Data information to it. Any help or sample code regarding this would be greatly appreciated.

    Read the article

  • core-data relationships and data structure.

    - by Boaz
    What is the right way to build the iPhone Core Data model for this SMS-like app (with location)? I want to represent a Conversation entity with "profile1" and "profile2", which inherit from a Profile entity, and a Message entity with "to", "from", and "body", where "to" and "from" are equal to "profile1" and/or "profile2" in the Conversation entity. How can I make such relationships? Is there a better way to represent the data (another structure)? Thanks

    Read the article

  • [Visual C++] Forcing memory alignment of variables/data-structures

    - by John
    I'm looking at using SSE, and I gather that aligning data on 16-byte boundaries is recommended. There are two cases to consider:

        float data[4];

        struct myystruct
        {
            float x, y, z, w;
        };

    I'm not sure the first case can be done explicitly, though perhaps there's a compiler option I could use? In the second case, I remember being able to control packing in old versions of GCC several years back; is this still possible?

    Read the article

  • SQL SERVER – Data Pages in Buffer Pool – Data Stored in Memory Cache

    - by pinaldave
    Have you ever wondered what types of data are in your cache? During SQL Server trainings, I am usually asked if there is any way one can know how much of a table's data is stored in the memory cache. The more detailed question I usually get is: if there are multiple indexes on a table (and they are used in a query), is the data of that single table stored multiple times in the memory cache, or only once? Here is a query you can run to figure out what kind of data is stored in the cache.

        USE AdventureWorks
        GO
        SELECT COUNT(*) AS cached_pages_count,
               name AS BaseTableName, IndexName, IndexTypeDesc
        FROM sys.dm_os_buffer_descriptors AS bd
        INNER JOIN (
            SELECT s_obj.name, s_obj.index_id, s_obj.allocation_unit_id,
                   s_obj.OBJECT_ID, i.name IndexName, i.type_desc IndexTypeDesc
            FROM (
                SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
                FROM sys.allocation_units AS au
                INNER JOIN sys.partitions AS p
                    ON au.container_id = p.hobt_id AND (au.type = 1 OR au.type = 3)
                UNION ALL
                SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
                FROM sys.allocation_units AS au
                INNER JOIN sys.partitions AS p
                    ON au.container_id = p.partition_id AND au.type = 2
            ) AS s_obj
            LEFT JOIN sys.indexes i
                ON i.index_id = s_obj.index_id AND i.OBJECT_ID = s_obj.OBJECT_ID
        ) AS obj
            ON bd.allocation_unit_id = obj.allocation_unit_id
        WHERE database_id = DB_ID()
        GROUP BY name, index_id, IndexName, IndexTypeDesc
        ORDER BY cached_pages_count DESC;
        GO

    Now let us run the query above and observe its output. There are four columns: cached_pages_count lists the pages cached in memory; BaseTableName lists the base table from which data pages are cached; IndexName lists the name of the index from which pages are cached; and IndexTypeDesc lists the type of that index.

    Now, let us do one more experiment. Please note that you should not run this test on a production server, as it can severely reduce the performance of the database.

        DBCC DROPCLEANBUFFERS

    This will drop all the clean buffers so we will be able to start again from there. Now run the following script and check its execution plan.

        USE AdventureWorks
        GO
        SELECT UnitPrice, ModifiedDate
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderDetailID BETWEEN 1 AND 100
        GO

    The execution plan shows two different indexes being used. Now run the script that checks the pages cached in SQL Server again. It is clear from the result set that when more than one index is used, data pages related to each of those indexes are stored in the memory cache separately. Let me know what you think of this article; I had great pleasure writing it because I was able to write on the subject I like most. In the next article, we will see exactly which data are cached and which are not, using a few undocumented commands. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Bitmask data insertions in SSDT Post-Deployment scripts

    - by jamiet
    On my current project we are using SQL Server Data Tools (SSDT) to manage our database schema, and one of the tasks we need to do often is insert data into that schema once deployed; the typical method employed to do this is to leverage Post-Deployment scripts, and that is exactly what we are doing. Our requirement is a little different though: our data is split up into various buckets that we need to selectively deploy on a case-by-case basis. I was going to use a SQLCMD variable for each bucket (defaulted to some value other than "Yes") to define whether it should be deployed or not, so we could use something like this in our Post-Deployment script:

        IF ($(DeployBucket1Flag) = 'Yes')
        BEGIN
            :r .\Bucket1.data.sql
        END
        IF ($(DeployBucket2Flag) = 'Yes')
        BEGIN
            :r .\Bucket2.data.sql
        END
        IF ($(DeployBucket3Flag) = 'Yes')
        BEGIN
            :r .\Bucket3.data.sql
        END

    That works fine and is, I'm sure, a very common technique for doing this. It is however slightly ugly because we have to litter our deployment with various SQLCMD variables. My colleague James Rowland-Jones (whom I'm sure many of you know) suggested another technique – bitmasks. I won't go into detail about how this works (James has already done that at Using a Bitmask - a practical example) but I'll summarise by saying that you can deploy different combinations of the buckets simply by supplying a different numerical value for a single SQLCMD variable. Each bit of that value's binary representation signifies whether a particular bucket should be deployed or not. This is better demonstrated using the following simple script (which can be easily leveraged inside your Post-Deployment scripts):

        /*
        $(DeployData) is a SQLCMD variable that would, if you were using this in SSDT, be declared
        in the SQLCMD variables section of your project file. It should contain a numerical value,
        defaulted to 0. In this example I have declared it using a :setvar statement. Test the
        effect of different values by changing the :setvar statement accordingly. Examples:
          :setvar DeployData 1   will deploy bucket 1
          :setvar DeployData 2   will deploy bucket 2
          :setvar DeployData 3   will deploy buckets 1 & 2
          :setvar DeployData 6   will deploy buckets 2 & 3
          :setvar DeployData 31  will deploy buckets 1, 2, 3, 4 & 5
        */
        :setvar DeployData 0

        DECLARE @bitmask VARBINARY(MAX) = CONVERT(VARBINARY, $(DeployData));

        IF (@bitmask & 1 = 1)
        BEGIN
            PRINT 'Bucket 1 insertions';
        END
        IF (@bitmask & 2 = 2)
        BEGIN
            PRINT 'Bucket 2 insertions';
        END
        IF (@bitmask & 4 = 4)
        BEGIN
            PRINT 'Bucket 3 insertions';
        END
        IF (@bitmask & 8 = 8)
        BEGIN
            PRINT 'Bucket 4 insertions';
        END
        IF (@bitmask & 16 = 16)
        BEGIN
            PRINT 'Bucket 5 insertions';
        END

    An example of running this using DeployData=6: the binary representation of 6 is 110, so the second and third significant bits of that binary number are set to 1 and hence buckets 2 and 3 are "activated". Hope that makes sense and is useful to some of you! @Jamiet P.S. I used the awesome HTML Copy feature of Visual Studio's Productivity Power Tools in order to format the T-SQL code above for this blog post.

    Read the article

  • Looking for Cutting-Edge Data Integration: 2014 Excellence Awards

    - by Sandrine Riley
    It is nomination time!!! This year's Oracle Fusion Middleware Excellence Awards will honor customers and partners who are creatively using various products across Oracle Fusion Middleware. Think you have something unique and innovative with one or a few of our Oracle Data Integration products? We would love to hear from you! Please submit today. The deadline for the nomination is June 20, 2014.

    What you win:
    - An Oracle Fusion Middleware Innovation trophy
    - One free pass to Oracle OpenWorld 2014
    - Priority consideration for placement in Profit magazine, Oracle Magazine, or other Oracle publications & press release
    - Oracle Fusion Middleware Innovation logo for inclusion on your own Website and/or press release

    Let us reminisce a little… For details on the 2013 Data Integration winners, Royal Bank of Scotland's Market and International Banking and The Yalumba Wine Company, check out this blog post: 2013 Oracle Excellence Awards for Fusion Middleware Innovation… and the Winners for Data Integration are… And for details on the 2012 Data Integration winners, Raymond James and Morrisons, check out this blog post: And the Winners of Fusion Middleware Innovation Awards in Data Integration are… Now to view the 2013 Winners (for all categories). We hope to honor you! Here's what you need to do: Click here to submit your nomination today. And just a reminder: the deadline to submit a nomination is 5pm Pacific Time on June 20, 2014.

    Read the article

  • Help with Perl persistent data storage using Data::Dumper

    - by stephenmm
    I have been trying to figure this out for way too long tonight. I have googled it to death and none of the examples or my hacks of the examples are getting it done. It seems like this should be pretty easy but I just cannot get it. Here is the code:

        #!/usr/bin/perl -w
        use strict;
        use Data::Dumper;

        my $complex_variable = {};
        my $MEMORY = "$ENV{HOME}/data/memory-file";

        $complex_variable->{ 'key' } = 'value';
        $complex_variable->{ 'key1' } = 'value1';
        $complex_variable->{ 'key2' } = 'value2';
        $complex_variable->{ 'key3' } = 'value3';

        print Dumper($complex_variable)."TEST001\n";

        open M, ">$MEMORY" or die;
        print M Data::Dumper->Dump([$complex_variable], ['$complex_variable']);
        close M;

        $complex_variable = {};
        print Dumper($complex_variable)."TEST002\n";

        # Then later to restore the value, it's simply:
        do $MEMORY;
        #eval $MEMORY;

        print Dumper($complex_variable)."TEST003\n";

    And here is my output:

        $VAR1 = {
          'key2' => 'value2',
          'key1' => 'value1',
          'key3' => 'value3',
          'key' => 'value'
        };
        TEST001
        $VAR1 = {};
        TEST002
        $VAR1 = {};
        TEST003

    Everything that I read says that the TEST003 output should look identical to the TEST001 output, which is exactly what I am trying to achieve. What am I missing here? Should I be "do"ing differently, or should I be "eval"ing instead, and if so, how? Thanks for any help...

    Read the article

  • Relational database data explorer / visualization?

    - by Ian Boyd
    Is there a tool that can let one browse relational data as a graph of connected nodes? For example, I'm faced with trying to cleanse some anomalous data, and I can start with two offending rows. In this particular example, the TransactionID should, by business rules, be unique to the table, but I find a transaction that violates that rule:

        SELECT * FROM LCTTrans WHERE TransactionID = 1075048

        LCTID     TransactionID
        ========= =============
        4358      1075048
        4359      1075048
        (2 row(s) affected)

    But really what I want is to begin to hunt down all the related data, to try to see which is right. So this hypothetical software would start by showing me these two rows. Next, I want to see the transaction that is linked into this table. Now that transaction points to an MAL, so show me that. Now let's add those two LCTs that the transaction is "on"; a transaction can be on only one LCT, yet this one is pointing to two. Okay computer, both of those LCTs point to an MAL and the transaction that created them, show me those. Those last two transactions also point at an MAL, and they themselves point to an LCT, show me those. Okay, now are there any entries in LCTTrans that point to LCTs 4358 or 4359?... And so on, and so on. Now, I did all this manually, running single selects, copying and pasting uniqueidentifier keys and converting them into friendly id numbers so I could easily see the relationships. Is there software that can do this?

    Read the article
