Search Results

Search found 70827 results on 2834 pages for 'data quality services'.


  • The package is of bad quality - google chrome

    - by hafichuk
    I'm doing a fresh install of 12.10 and am trying to install Google Chrome. I've downloaded the deb from http://chrome.google.com and am installing it through the Ubuntu Software Centre. I'm getting a message: The package is of bad quality (same as What is a "bad quality" package?) In the expanded section, the "error" states: E: google-chrome-stable: file-in-etc-not-marked-as-conffile etc/cron.daily/google-chrome Is it safe to click on the "Ignore and Install" button?

    Read the article

  • Clever ways of implementing different data structures in C & data structures that should be used more often

    - by Yktula
    What are some clever (not ordinary) ways of implementing data structures in C, and what are some data structures that should be used more often? For example, what is the most effective way (generating minimal overhead) to implement a directed and cyclic graph with weighted edges in C? I know that we can store the distances in an array as is done here, but what other ways are there to implement this kind of a graph?
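
    A sketch of one alternative: instead of a distance matrix, a directed weighted graph can be stored as an adjacency list, where each vertex keeps a linked list of its outgoing edges, so memory grows with the edge count rather than V². The C below is a minimal illustration written for this listing (all names are invented, not from the question):

        #include <stdio.h>
        #include <stdlib.h>

        /* One outgoing edge: destination vertex and its weight. */
        typedef struct Edge {
            int to;
            double weight;
            struct Edge *next;   /* next edge leaving the same vertex */
        } Edge;

        typedef struct {
            int nvertices;
            Edge **adj;          /* adj[v] = head of v's edge list */
        } Graph;

        Graph *graph_new(int nvertices) {
            Graph *g = malloc(sizeof *g);
            g->nvertices = nvertices;
            g->adj = calloc(nvertices, sizeof *g->adj);
            return g;
        }

        /* Directed edge u -> v; cycles need no special handling. */
        void graph_add_edge(Graph *g, int u, int v, double weight) {
            Edge *e = malloc(sizeof *e);
            e->to = v;
            e->weight = weight;
            e->next = g->adj[u];
            g->adj[u] = e;
        }

        int main(void) {
            Graph *g = graph_new(3);
            graph_add_edge(g, 0, 1, 2.5);
            graph_add_edge(g, 1, 2, 1.0);
            graph_add_edge(g, 2, 0, 4.0);   /* closes a cycle */
            for (Edge *e = g->adj[0]; e; e = e->next)
                printf("0 -> %d (w=%.1f)\n", e->to, e->weight);
            return 0;
        }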

    Read the article

  • Using GameKit to transfer CoreData data between iPhones, via NSDictionary

    - by OscarTheGrouch
    I have an application where I would like to exchange information, managed via Core Data, between two iPhones. The idea is to first turn the Core Data object into an NSDictionary (something very simple that gets turned into NSData to be transferred). My Core Data entity has 3 string attributes and 2 image attributes that are transformables. I have looked through the NSDictionary API but have not had any luck creating it or adding the Core Data information to it. Any help or sample code regarding this would be greatly appreciated.

    Read the article

  • Core Data relationships and data structure

    - by Boaz
    What is the right way to build the iPhone Core Data model for this SMS-like app (with location)? I want to represent a conversation entity with "profile1" and "profile2" that inherit from a profile entity, and a message entity with "to", "from", and "body", where "to" and "from" are equal to "profile1" and/or "profile2" in the conversation entity. How can I model such relationships? Is there a better way to represent the data (another structure)? Thanks

    Read the article

  • [Visual C++] Forcing memory alignment of variables/data-structures

    - by John
    I'm looking at using SSE, and I gather that aligning data on 16-byte boundaries is recommended. There are two cases to consider: float data[4]; and struct mystruct { float x, y, z, w; };. I'm not sure the first case can be done explicitly, though perhaps there's a compiler option I could use? In the second case, I remember being able to control packing in old versions of GCC several years back; is this still possible?
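
    For what it's worth, both cases can usually be handled with explicit alignment qualifiers rather than a compiler option. A minimal sketch (GCC/Clang spelling compiled here, with the MSVC and C11 equivalents noted in comments; treat the exact spellings as assumptions to verify against your compiler's docs):

        #include <stdio.h>
        #include <stdint.h>
        #include <stdalign.h>              /* C11 alignas */

        /* Case 1: a plain array, aligned explicitly (C11 spelling). */
        static alignas(16) float data[4];

        /* Case 2: a struct type whose every instance is 16-byte aligned.
         * GCC/Clang spelling shown; the MSVC equivalent is:
         *   __declspec(align(16)) struct mystruct { float x, y, z, w; };
         */
        struct mystruct {
            float x, y, z, w;
        } __attribute__((aligned(16)));

        int main(void) {
            struct mystruct m;
            printf("array aligned: %d, struct aligned: %d\n",
                   (uintptr_t)data % 16 == 0,
                   (uintptr_t)&m % 16 == 0);
            return 0;
        }

    Heap allocations need their own treatment: _aligned_malloc on MSVC, posix_memalign or C11 aligned_alloc elsewhere.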

    Read the article

  • SQL SERVER – Data Pages in Buffer Pool – Data Stored in Memory Cache

    - by pinaldave
    Have you ever wondered what types of data are in your cache? During SQL Server trainings, I am usually asked if there is any way one can know how much of a table's data is stored in the memory cache. The more detailed question I usually get is: if there are multiple indexes on a table (and they are used in a query), is the data of that single table stored in the memory cache multiple times, or only once? Here is a query you can run to figure out what kind of data is stored in the cache.

        USE AdventureWorks
        GO
        SELECT COUNT(*) AS cached_pages_count,
               name AS BaseTableName, IndexName, IndexTypeDesc
        FROM sys.dm_os_buffer_descriptors AS bd
        INNER JOIN (
            SELECT s_obj.name, s_obj.index_id, s_obj.allocation_unit_id,
                   s_obj.OBJECT_ID, i.name IndexName, i.type_desc IndexTypeDesc
            FROM (
                SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id,
                       allocation_unit_id, OBJECT_ID
                FROM sys.allocation_units AS au
                INNER JOIN sys.partitions AS p
                    ON au.container_id = p.hobt_id
                    AND (au.type = 1 OR au.type = 3)
                UNION ALL
                SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id,
                       allocation_unit_id, OBJECT_ID
                FROM sys.allocation_units AS au
                INNER JOIN sys.partitions AS p
                    ON au.container_id = p.partition_id
                    AND au.type = 2
            ) AS s_obj
            LEFT JOIN sys.indexes i
                ON i.index_id = s_obj.index_id
                AND i.OBJECT_ID = s_obj.OBJECT_ID
        ) AS obj
            ON bd.allocation_unit_id = obj.allocation_unit_id
        WHERE database_id = DB_ID()
        GROUP BY name, index_id, IndexName, IndexTypeDesc
        ORDER BY cached_pages_count DESC;
        GO

    Run the query above and observe its output, which has four columns. Cached_Pages_Count lists the pages cached in memory. BaseTableName lists the base table from which the data pages are cached. IndexName lists the name of the index from which pages are cached. IndexTypeDesc lists the type of index. Now, let us do one more experiment. Please note that you should not run this test on a production server, as it can severely reduce the performance of the database.

        DBCC DROPCLEANBUFFERS

    This will drop all the clean buffers so we will be able to start again from there. Now run the following script and check the execution plan of the query.

        USE AdventureWorks
        GO
        SELECT UnitPrice, ModifiedDate
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderDetailID BETWEEN 1 AND 100
        GO

    The execution plan shows the usage of two different indexes. Now run the script that checks the pages cached in SQL Server again. It is clear from the resultset that when more than one index is used, the data pages for each of those indexes are stored in the memory cache separately. Let me know what you think of this article; I had great pleasure writing it because I was able to write on the subject I like the most. In the next article, we will see exactly which data are cached and which are not, using a few undocumented commands. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL DMV

    Read the article

  • Bitmask data insertions in SSDT Post-Deployment scripts

    - by jamiet
    On my current project we are using SQL Server Data Tools (SSDT) to manage our database schema, and one of the tasks we often need to do is insert data into that schema once deployed; the typical method employed for this is to leverage Post-Deployment scripts, and that is exactly what we are doing. Our requirement is a little different though: our data is split up into various buckets that we need to selectively deploy on a case-by-case basis. I was going to use a SQLCMD variable for each bucket (defaulted to some value other than "Yes") to define whether it should be deployed or not, so we could use something like this in our Post-Deployment script:

        IF ($(DeployBucket1Flag) = 'Yes')
        BEGIN
            :r .\Bucket1.data.sql
        END
        IF ($(DeployBucket2Flag) = 'Yes')
        BEGIN
            :r .\Bucket2.data.sql
        END
        IF ($(DeployBucket3Flag) = 'Yes')
        BEGIN
            :r .\Bucket3.data.sql
        END

    That works fine and is, I'm sure, a very common technique for doing this. It is however slightly ugly because we have to litter our deployment with various SQLCMD variables. My colleague James Rowland-Jones (whom I'm sure many of you know) suggested another technique – bitmasks. I won't go into detail about how this works (James has already done that at Using a Bitmask - a practical example) but I'll summarise by saying that you can deploy different combinations of the buckets simply by supplying a different numerical value for a single SQLCMD variable. Each bit of that value's binary representation signifies whether a particular bucket should be deployed or not. This is better demonstrated using the following simple script (which can be easily leveraged inside your Post-Deployment scripts):

        /*
        $(DeployData) is a SQLCMD variable that would, if you were using this
        in SSDT, be declared in the SQLCMD variables section of your project
        file. It should contain a numerical value, defaulted to 0. In this
        example I have declared it using a :setvar statement. Test the effect
        of different values by changing the :setvar statement accordingly.
        Examples:
          :setvar DeployData 1   will deploy bucket 1
          :setvar DeployData 2   will deploy bucket 2
          :setvar DeployData 3   will deploy buckets 1 & 2
          :setvar DeployData 6   will deploy buckets 2 & 3
          :setvar DeployData 31  will deploy buckets 1, 2, 3, 4 & 5
        */
        :setvar DeployData 0
        DECLARE @bitmask VARBINARY(MAX) = CONVERT(VARBINARY, $(DeployData));
        IF (@bitmask & 1 = 1)
        BEGIN
            PRINT 'Bucket 1 insertions';
        END
        IF (@bitmask & 2 = 2)
        BEGIN
            PRINT 'Bucket 2 insertions';
        END
        IF (@bitmask & 4 = 4)
        BEGIN
            PRINT 'Bucket 3 insertions';
        END
        IF (@bitmask & 8 = 8)
        BEGIN
            PRINT 'Bucket 4 insertions';
        END
        IF (@bitmask & 16 = 16)
        BEGIN
            PRINT 'Bucket 5 insertions';
        END

    An example of running this with DeployData=6: the binary representation of 6 is 110, so the second and third least significant bits are set to 1 and hence buckets 2 and 3 are "activated". Hope that makes sense and is useful to some of you! @Jamiet P.S. I used the awesome HTML Copy feature of Visual Studio's Productivity Power Tools in order to format the T-SQL code above for this blog post.

    Read the article

  • Looking for Cutting-Edge Data Integration: 2014 Excellence Awards

    - by Sandrine Riley
    It is nomination time!!! This year's Oracle Fusion Middleware Excellence Awards will honor customers and partners who are creatively using various products across Oracle Fusion Middleware. Think you have something unique and innovative with one or a few of our Oracle Data Integration products? We would love to hear from you! Please submit today. The deadline for the nomination is June 20, 2014. What you win:
    - An Oracle Fusion Middleware Innovation trophy
    - One free pass to Oracle OpenWorld 2014
    - Priority consideration for placement in Profit magazine, Oracle Magazine, or other Oracle publications & press release
    - Oracle Fusion Middleware Innovation logo for inclusion on your own Website and/or press release
    Let us reminisce a little… For details on the 2013 Data Integration winners, Royal Bank of Scotland’s Market and International Banking and The Yalumba Wine Company, check out this blog post: 2013 Oracle Excellence Awards for Fusion Middleware Innovation… and the Winners for Data Integration are… And for details on the 2012 Data Integration winners, Raymond James and Morrisons, check out this blog post: And the Winners of Fusion Middleware Innovation Awards in Data Integration are… Now to view the 2013 Winners (for all categories). We hope to honor you! Here's what you need to do: Click here to submit your nomination today. And just a reminder: the deadline to submit a nomination is 5pm Pacific Time on June 20, 2014.

    Read the article

  • Blank list of windows services

    - by Joe
    Recently, when I open Windows Services (always as administrator), I get a blank list of services. When I try to click on one of the empty lines I get a "Script Error" message. This happens over and over again; after several occurrences I restarted my computer. I can't pinpoint exactly when this started happening or whether I made any specific changes to my computer at that time. Someone told me to try running sfc /scannow as administrator, but when I try to do that the scan stops at 34% and I get the message: "Windows Resource Protection could not perform the requested operation." I am running Windows 7 Enterprise 64 bit, and I would really like to avoid reinstalling Windows. Does anyone know how to fix this? Edit - Here is another attempt I made and some more information that might help: Following WhoIsRich's suggestion, I tried the command sfc /scannow /offbootdir=c:\ /offwindir=c:\windows. This gave the error message "The arguments passed to sfc are invalid. The offline windows directory specified points to the online system", and then I realized this command is meant to be run after booting from another system. Since I don't have my Windows installation disk right now, I used my own system to create a recovery disk, then restarted my computer and used the recovery disk to boot. I then ran the above command and got the following message: "Windows Resource Protection found corrupt files but was unable to fix some of them. Details are included in the CBS.log". I then restarted my computer and let it boot up normally. The problem with Windows Services persists, and the CBS.log file is a long log with many entries; I don't know if there is useful information in it, and if there is, how to find it.

    Read the article

  • What makes good software good?

    - by Jonta
    People probably have a lot of different answers here, like good scalability, speed, usability, stability, consistency, completeness, absence of bugs, accessibility, documentation, code quality, and so on. There are many philosophies on the development of software, like the UNIX philosophy, but they are often vague and not easy to understand. I am looking for statements such as the one cited below, which you can ask about the software when it is in the design stage, when it is ready to be coded, and when it has been coded and is ready for launch. The software I am talking about is, of course, software made for the end user. Ken Rockwell wrote: "I expect that it will let me get more accomplished in less time." (Here one could ask, "Will this let me get more accomplished in less time?")

    Read the article

  • Help with Perl persistent data storage using Data::Dumper

    - by stephenmm
    I have been trying to figure this out for way too long tonight. I have googled it to death and none of the examples, or my hacks of the examples, are getting it done. It seems like this should be pretty easy, but I just cannot get it. Here is the code:

        #!/usr/bin/perl -w
        use strict;
        use Data::Dumper;

        my $complex_variable = {};
        my $MEMORY = "$ENV{HOME}/data/memory-file";

        $complex_variable->{ 'key' }  = 'value';
        $complex_variable->{ 'key1' } = 'value1';
        $complex_variable->{ 'key2' } = 'value2';
        $complex_variable->{ 'key3' } = 'value3';

        print Dumper($complex_variable)."TEST001\n";

        open M, ">$MEMORY" or die;
        print M Data::Dumper->Dump([$complex_variable], ['$complex_variable']);
        close M;

        $complex_variable = {};
        print Dumper($complex_variable)."TEST002\n";

        # Then later to restore the value, it's simply:
        do $MEMORY;
        #eval $MEMORY;

        print Dumper($complex_variable)."TEST003\n";

    And here is my output:

        $VAR1 = {
                  'key2' => 'value2',
                  'key1' => 'value1',
                  'key3' => 'value3',
                  'key' => 'value'
                };
        TEST001
        $VAR1 = {};
        TEST002
        $VAR1 = {};
        TEST003

    Everything that I read says that the TEST003 output should look identical to the TEST001 output, which is exactly what I am trying to achieve. What am I missing here? Should I be "do"ing differently, or should I be "eval"ing instead, and if so, how? Thanks for any help...

    Read the article

  • Relational database data explorer / visualization?

    - by Ian Boyd
    Is there a tool that can let one browse relational data as a graph of connected nodes? For example, I'm faced with trying to cleanse some anomalous data, and I can start with two offending rows. In this particular example, the TransactionID should, by business rules, be unique to the table, but I find a transaction that violates that rule:

        SELECT * FROM LCTTrans WHERE TransactionID = 1075048

        LCTID     TransactionID
        ========= =============
        4358      1075048
        4359      1075048

        (2 row(s) affected)

    But really what I want is to begin to hunt down all the related data, to try to see which is right. So this hypothetical software would start by showing me these two rows. Next, I want to see the transaction that is linked into this table. Now that transaction points to an MAL, so show me that. Now let's add those two LCTs that the transaction is "on"; a transaction can be on only one LCT, yet this one is pointing to two. Okay computer, both of those LCTs point to an MAL and to the transaction that created them; show me those. Those last two transactions also point at an MAL, and they themselves point to an LCT; show me those. Okay, now are there any entries in LCTTrans that point to LCTs 4358 or 4359?... And so on. Now, I did all this manually, running single SELECTs, copying and pasting uniqueidentifier keys and converting them into friendly ID numbers so I could easily see the relationships. Is there software that can do this?

    Read the article

  • How to properly set relationships in Core Data when using setValue and data already exists

    - by ern
    Let's say I have two objects: Articles and Categories. For the sake of this example all relevant categories have already been added to the data store. When looping through data that holds edits for articles, there is category relationship information that needs to be saved. I was planning on using the -setValue method in the Article class in order to set the relationships like so:

        - (void)setValue:(id)value forUndefinedKey:(NSString *)key {
            if ([key isEqualToString:@"categories"]) {
                NSLog(@"trying to set categories...");
            }
        }

    The problem is that value isn't a Category; it is just a string (or array of strings) holding the title of a category. I could certainly do a lookup within this method for each category and assign it, but that seems inefficient when processing a whole bunch of articles at once. Another option is to populate an array of all possible categories and just filter, but my question is where to store that array? Should it be a class method on Article? Is there a way to pass in additional data to the -setValue method? Is there another, better option for setting the relationship I'm not thinking of? Thanks for your help.

    Read the article

  • All downloads being interrupted

    - by Jake
    System: Windows 7 Professional 64bit. 8GB RAM, Intel i5-2400 CPU, +300GB free on the hard drive. AVG Internet Security 2012 (enabled & disabled, with firewall enabled and disabled - no effect for either). This computer is less than a year old. Network: This problem is occurring on a single computer on a network with multiple computers. The router is a Motorola Netopia 3347-02 (DSL Modem/Wireless Router combined). The computer is plugged in directly to the modem, other computers are using the wireless successfully. The router has been reset. The only thing odd about the connection between the router and computer is that it is configured to allow RDP through, so it is assigned a static IP by the router and port forwarding is enabled for port 3389. Also, though I doubt it matters, a second wireless router is active behind this router providing a second network that some computers in the area use without issues. Details: All downloads initiated on this specific computer eventually fail, this includes streaming from youtube, specialized downloads (itunes), downloads from websites, FTP downloads, etc. Failure occurs with all browsers, but in chrome this is the process it takes: 1) Download begins normally, 2) At some point between (observed) 7MBs and 229MBs the download stops progressing (at this point, if watching chrome's task manager, you can see the network activity for the downloading tab drop to 0kps), 3) for some time the download sits there still attempting to complete, but will eventually display "123,049,871/0 B, Interrupted" (where the number is whatever it actually got to). The file I am using to test this is a very large .zip file located on a server I control, but the problem seems to occur on any site. The amount downloaded is completely random, and seems to be more time-based than anything (if I start a download immediately after the last one fails, it tends to get further than the last one). Small files can get through for this reason, though they can fail as well. In a test where I simultaneously downloaded the same file via HTTP (chrome) and FTP (windows explorer), both downloads failed at the same instant, though explorer displayed "Connection timed out" several minutes before chrome finally showed the download as interrupted. 
    Other things I have tried, based on advice given to people with similar/identical problems:
    - Setting my MTU to 1492 (as described here: http://blog.thecompwiz.com/2011/08/networking-issues.html)
    - Disabling write caching to the hard drive
    - Storing the download on an external device
    - Successfully transmitting a +1GB file from one computer on the same network to this computer
    - Disabling indexing in the folder the download was being stored in
    - Disabling all security software
    - Checking to make sure all drivers were up to date
    - Reading about 50 accounts with nearly exact descriptions of what I'm experiencing, none of which had a solution given
    Running processes: [the full tasklist output, roughly 100 rows, is trimmed here; it shows the usual Windows system services plus the AVG 2012 suite (avgfws.exe, AVGIDSAgent.exe, avgwdsvc.exe and related processes), many chrome.exe instances, Apple/iTunes services, QuickBooks, PaperPort, Excel, and Acrobat]
    Any help with this would be greatly appreciated; I've been beating my head against a wall over this all day. This computer serves a dual purpose as the main company document server and the owner's work computer, so it's fairly important that it be fully functional, and I cannot figure this out.

    Read the article

  • Suggested Web Application Framework and Database for Enterprise, “Big-Data” App?

    - by willOEM
    I have a web application that I have been developing for a small group within my company over the past few years, using Pipeline Pilot (plus jQuery and Python scripting) for web development and back-end computation, and Oracle 10g for my RDBMS. Users upload experimental genomic data, which is parsed into a database and made available for querying, transformation, and reporting. Experimental data sets are large and have many layers of metadata. A given experimental data record might have a foreign key relationship with a table that describes this data point's assay. Assays can cover multiple genes, which can have multiple transcripts, which can have multiple mutations, which can affect multiple signaling pathways, etc. Users need to approach this data from any point in those layers of the metadata. Since all data sets for a given data type can run over a billion rows, this results in some large, dynamic queries that are hard to predict. New data sets are added on a weekly basis (~1GB per set). Experimental data is never updated, but the associated metadata can be updated weekly for a few records and yearly for most others. For every data set insert the system sees, there will be between 10 and 100 selects run against it and associated data. It is okay for updates and inserts to run slow, so long as queries run quick and are as up-to-date as possible. The application continues to grow in size and scope and is already starting to run slower than I like. I am worried that we have just about outgrown Pipeline Pilot, and perhaps Oracle (as the sole database). Would a NoSQL database or an OLAP system be appropriate here? What web application frameworks work well with systems like this? I'd like the solution to be something scalable, portable, and supportable X years down the road. Here is the current state of the application:
    - Web server/data processing: Pipeline Pilot on Windows Server + IIS
    - Database: Oracle 10g, ~1TB of data, ~180 tables, with several billion-plus-row tables
    - Network storage: Isilon, ~50TB of low-priority raw data

    Read the article

  • Simple ADF page using BAM Data Control

    - by [email protected]
    Purpose: In this blog I will walk you through very simple steps to create an ADF page using a BAM data control connection.
    Details:
    Create the project:
    - Open JDeveloper (make sure you have installed the SOA extension for JDev).
    - Create a new Application using the "Generic Application" template. Click "Next".
    - Shuttle "ADF Faces" to the right pane for the project technology. Click "Finish".
    Create a BAM connection:
    - In the Resource Palette click "Folder -> New Connection -> BAM".
    - Enter the connection name and click "Next".
    - Enter the connection details.
    - Click "Test Connection" and "Finish".
    Create the BAM data control:
    - Open the IDE connection created in the step above.
    - Drag and drop "Employees" to the "Data Controls" palette.
    - Select "Flat Query" and click "Finish".
    Create the view:
    - Create a new JSF page.
    - From the Data Controls panel, drag and drop "Employees -> Query -> ADF Read Only Table".
    - Right click and run the page.

    Read the article

  • Data Aggregation of CSV files java

    - by royB
    I have k CSV files (5 CSV files, for example); each file has m fields which produce a key and n values. I need to produce a single CSV file with the aggregated data. I'm looking for the most efficient solution for this problem, speed mainly. I don't think, by the way, that we will have memory issues. Also, I would like to know if hashing is really a good solution, because we would have to use a 64-bit hash to reduce the chance of a collision to less than 1% (we have around 30,000,000 rows per aggregation). For example:

        file 1:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,50,60,70,80
        a3,b2,c4,60,60,80,90

        file 2:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,30,50,90,40
        a3,b2,c4,30,70,50,90

        result:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,80,110,160,120
        a3,b2,c4,90,130,130,180

    Algorithms we have considered so far:
    - hashing (using a concurrent hash table)
    - merge-sorting the files
    - DB: using MySQL, Hadoop, or Redis
    The solution needs to be able to handle a huge amount of data (each file more than two million rows). A better example:

        file 1:
        country,city,peopleNum
        england,london,1000000
        england,coventry,500000

        file 2:
        country,city,peopleNum
        england,london,500000
        england,coventry,500000
        england,manchester,500000

        merged file:
        country,city,peopleNum
        england,london,1500000
        england,coventry,1000000
        england,manchester,500000

    The key is: country,city. This is just an example; my real key is of size 6 and the data columns are of size 8, for a total of 14 columns. We would like the solution to be the fastest in regard to data processing.
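
    Of the approaches listed, sort-then-merge is the easiest to reason about and needs no hash at all: sort all rows by key, then sum each run of equal keys in one pass. A toy C sketch of that idea (hard-coded rows and a single value column for brevity; everything here is illustrative, not from the question):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define NVALS 1   /* peopleNum */

        typedef struct {
            char key[64];       /* e.g. "england,london" */
            long v[NVALS];
        } Row;

        static int cmp_rows(const void *a, const void *b) {
            return strcmp(((const Row *)a)->key, ((const Row *)b)->key);
        }

        int main(void) {
            Row rows[] = {                 /* all input files concatenated */
                {"england,london",     {1000000}},
                {"england,coventry",   { 500000}},
                {"england,london",     { 500000}},
                {"england,coventry",   { 500000}},
                {"england,manchester", { 500000}},
            };
            size_t n = sizeof rows / sizeof rows[0];

            qsort(rows, n, sizeof rows[0], cmp_rows);   /* group equal keys */

            for (size_t i = 0; i < n; ) {
                Row acc = rows[i++];
                while (i < n && strcmp(rows[i].key, acc.key) == 0) {
                    for (int j = 0; j < NVALS; j++)
                        acc.v[j] += rows[i].v[j];       /* aggregate the run */
                    i++;
                }
                printf("%s,%ld\n", acc.key, acc.v[0]);
            }
            return 0;
        }

    When the data no longer fits in RAM, the same idea becomes an external merge sort; a hash table avoids the sort entirely whenever the distinct keys fit in memory.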

    Read the article

  • SQL – Download FREE Book – Data Access for Highly-Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence

    - by Pinal Dave
    Recently I was preparing for Big Data and I ended up on a very interesting read for everybody. This book is created by Microsoft, and it is indeed a fantastic read in my opinion. It took me some time to read the entire book, but it was worth it, as it tries to answer two very interesting questions related to NoSQL. Here is the abstract from the book: Organizations seeking to use a NoSQL database are therefore faced with a twofold challenge:
    - Which NoSQL database(s) best meet(s) the needs of the organization?
    - How does an organization integrate a NoSQL database into its solutions?
    As I kept on reading the book, I found it very interesting and informative. I suggest that if you have time this weekend, you download the book and read it. This guide focuses on the most common types of NoSQL database currently available, describes the situations for which they are most suited, and shows examples of how you might incorporate them into a business application. The guide summarizes the experiences of a fictitious organization named Adventure Works, which implemented a solution that comprised an assortment of different databases. Download Data Access for Highly-Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence. While we are talking about Big Data and NoSQL, do not forget to check out tomorrow's blog, as I am going to talk about the same subject and it will be very interesting. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, NoSQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Data structure for bubble shooter game

    - by SundayMonday
    I'm starting to make a bubble shooter game for a mobile OS. Assume this is just the basic "three or more same-color bubbles that touch pop" and all bubbles that are separated from their group fall/pop. What data structures are common for storing the bubbles? I've considered using an undirected, connected graph where each node is a bubble. This seems like it could help answer the question "which bubbles (if any) should fall now?" after some arbitrary bubbles are popped and corresponding nodes are removed from the graph. I think the answer is all bubbles that were just disconnected from the graph should fall. However the graph approach might be overkill so I'm not sure. Another consideration for the data structure is collision detection. Perhaps being able to grab a list of neighboring bubbles in constant time for a particular "bubble slot" is useful. So the collision detection would be something like "moving bubble is closest to slot ij, neighbors of slot ij are bubbles a,b,c, moving bubble is sufficiently close to bubble b hence moving bubble should come to rest in slot ij". A game like this could be probably be made with a relatively crude grid structure as the primary data structure. However it seems like answering "which bubbles (if any) should fall now?" would be trickier with this data structure.
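
    To make the falling-bubble check concrete: on an offset hex grid the neighbor relation is a small table of offsets, and "which bubbles fall" is one flood fill from the ceiling row; any bubble the fill never reaches is disconnected and should drop. A small C sketch, assuming an odd-r layout where odd rows are shifted half a slot right (purely illustrative, not from the question):

        #include <stdio.h>

        #define ROWS 6
        #define COLS 8

        /* 1 = bubble present, 0 = empty slot */
        static int grid[ROWS][COLS];
        static int reached[ROWS][COLS];

        /* Neighbor offsets {dcol, drow} for an odd-r offset hex grid:
         * odd rows are shifted half a cell right, so diagonals differ. */
        static const int offs[2][6][2] = {
            /* even rows */
            {{+1,0},{-1,0},{0,-1},{-1,-1},{0,+1},{-1,+1}},
            /* odd rows */
            {{+1,0},{-1,0},{+1,-1},{0,-1},{+1,+1},{0,+1}},
        };

        /* Depth-first flood fill marking every bubble connected to (r,c). */
        static void mark(int r, int c) {
            if (r < 0 || r >= ROWS || c < 0 || c >= COLS) return;
            if (!grid[r][c] || reached[r][c]) return;
            reached[r][c] = 1;
            for (int i = 0; i < 6; i++)
                mark(r + offs[r & 1][i][1], c + offs[r & 1][i][0]);
        }

        int main(void) {
            grid[0][0] = grid[1][0] = 1;   /* anchored cluster  */
            grid[4][5] = grid[4][6] = 1;   /* floating cluster  */

            /* Anything connected to the ceiling (row 0) stays. */
            for (int c = 0; c < COLS; c++) mark(0, c);

            for (int r = 0; r < ROWS; r++)
                for (int c = 0; c < COLS; c++)
                    if (grid[r][c] && !reached[r][c])
                        printf("bubble at (%d,%d) falls\n", r, c);
            return 0;
        }

    The same neighbor table serves collision detection: snap the moving bubble to the nearest empty slot, then check only that slot's six neighbors.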

    Read the article
