Search Results

Search found 88733 results on 3550 pages for 'algebraic data type'.

Page 21 of 3550

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn and so little time to learn them in, so we need to be selective about what we learn. I believe I know quite a lot about SQL, but I still do not know what is around SQL. I have recently started to learn about NewSQL. If you wonder what NewSQL is, I encourage you to read my blog post about it: Big Data – Buzz Words: What is NewSQL – Day 10 of 21. NewSQL databases are quickly becoming popular, providing the scale of NoSQL along with SQL features and transactions. As part of learning about NewSQL databases, I have recently started to learn about ClustrixDB.

    ClustrixDB has been the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), previously available only in warehousing databases, to the transactional database. The reason I am intrigued to learn more about ClustrixDB is their recent announcement on Oct 31. ClustrixDB was previously available only as an appliance, but with the software release on Oct 31 everyone can use it. It is now available as forever free for up to 12 cores with community support, and there is a 45-day trial for unlimited cluster sizes. With the forever-free offering, I am indeed interested in ClustrixDB now. I know that a few of the leading eCommerce sites in the world use it for their transactional database. Here are a few of the details I have quickly noted. ClustrixDB allows users to:

    - Scale by simply adding nodes to the cluster with a single command
    - Run billions of transactions a day
    - Run fast real-time analytics
    - Achieve high availability with recovery from node failure
    - Let the database manage itself
    - Easily migrate from MySQL, as it is nearly plug-and-play compatible; use MySQL drivers, tools and replication

    While going through the documentation I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution. Since ClustrixDB sounds like a very promising solution, I decided to dig a bit deeper to understand who the current customers of Clustrix are, as the company has existed in the industry for quite a few years. Their client list is indeed very interesting, and here is my quick research about them:

    - Twoo.com – Europe's largest social discovery (dating) site runs 4.4 billion transactions a day, with table sizes over a terabyte, on a 168-core cluster.
    - EngageBDR – Top 3 in the online advertising category, uses ClustrixDB to serve 6.9 billion ads a day through a real-time bidding platform. Their reports went from 4 hours to 15 seconds.
    - NoMoreRack – Top 2 fastest growing e-commerce company in the US, uses ClustrixDB for high availability and fast growth through the Amazon cloud.
    - MakeMyTrip – India's leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore.

    Many enterprises such as AOL, CSC, Rakuten and Symantec use ClustrixDB when their applications need scale. I must admit that I am impressed with what I have learned so far, and now is the time to get some hands-on experience with the product. I want to learn this technology so that in the future, when the topic is NewSQL, I know what I am talking about.
    Read more about why ClustrixDB might be the right database for you. Download ClustrixDB today and install it on your machine, so that when we discuss its technical aspects in the future, we are all on the same page. The software can be downloaded here. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Criteria strings, how many different criteria can be entered to retrieve specific data?

    - by Janet
    For our membership database we are currently using an old DOS program, "Arclist". The program is old, but the one feature we desperately need in a database program is the ability to enter multiple criteria at one time for a "one time" extraction of the data meeting all the various criteria entered, in what I call a "criteria string". One example is extracting only those records with zip codes matching (67893, 54235, 54323, 54201, 54302, 54303, 54301, 67894, 67895). Another set of criteria might omit records not equal to one value in one field while also extracting records matching a value in another field. So we would want records "not equal to" a value in one field, but whose information equals the requested information in another field.
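    A minimal sketch of how such a "criteria string" can map onto a single SQL query; the table and column names (members, zip, status, category) and the bound values are hypothetical placeholders, not from the question:

        // Build one WHERE clause combining an IN-list, a "not equal to"
        // test on one field, and an equality test on another.
        const zips = ["67893", "54235", "54323", "54201", "54302",
                      "54303", "54301", "67894", "67895"];

        const sql = `
          SELECT *
          FROM members
          WHERE zip IN (${zips.map(() => "?").join(", ")})
            AND status <> ?
            AND category = ?`;

        // With a driver such as better-sqlite3 this could run as:
        // db.prepare(sql).all(...zips, "inactive", "regular");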

    Read the article

  • Oracle Communications Data Model

    - by jean-pierre.dijcks
    I've mentioned OCDM in previous posts, but I found the following podcast on the topic (see the end of the post) and figured it is worthwhile to spread the news some more. ORetailDM and OCommunicationsDM are the two data models currently available from Oracle. Both are intended to capture:

    - Business best practices and industry knowledge
    - Pre-built advanced analytics intended to predict future events before they happen (like the churn model shown below)
    - Oracle technology best practices to ensure optimal performance of the model

    All of this typically comes with a reduced time to implementation, or, as the marketing slogan goes, reduced time to value. Here are the links: Podcast on OCDM; OTN pages for OCDM and ORDM.

    Read the article

  • Javascript: Safely upload a client data file

    - by Jeffrey Sweeney
    I'm (still) working on a template-based XML editing program. It's a GUI-based XML editor that only allows users to add certain tags and attributes based on the requirements. You can see the current version here for an idea. Now I'd like to allow users to upload their own data templates, but I'm concerned about potential XSS hacks. Currently, the template file is in JavaScript object literal notation, which unsurprisingly is a security nightmare if the user can upload their own. I was thinking of using XML instead, but is there an even better alternative?
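    One direction sometimes suggested here, sketched under an assumed Template shape (the interface and validation rules below are illustrative, not from the question), is to accept the template as plain JSON and parse it with JSON.parse, which, unlike evaluating an object literal, never executes uploaded content as code:

        // Minimal sketch: treat an uploaded template as inert JSON data.
        interface Template {
          name: string;
          allowedTags: string[];
        }

        function parseTemplate(uploaded: string): Template {
          // JSON.parse builds plain data; it cannot run scripts.
          const data: unknown = JSON.parse(uploaded);
          if (typeof data !== "object" || data === null) {
            throw new Error("Template must be a JSON object");
          }
          const t = data as Partial<Template>;
          if (typeof t.name !== "string" || !Array.isArray(t.allowedTags)) {
            throw new Error("Uploaded template failed validation");
          }
          // Coerce every tag to a string so nothing unexpected leaks through.
          return { name: t.name, allowedTags: t.allowedTags.map(String) };
        }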

    Read the article

  • Get aggregated view of data for entire website with Google Analytics

    - by crmpicco
    I have a website (www.ayrshireminis.com) which has three main sections under different directories: /forum, /galleries and /contact. I would like to have an aggregated view of the data for the whole website, but also for each section. What is the recommended approach for doing this? I believe I can create a web property that includes a profile for the entire website and duplicated, filtered profiles, each section having an include filter. This is my gut instinct, but I'd like to know if there is another (better) way to do it. Maybe by having one account that includes a profile for the whole site and another profile with an include filter for the individual sections?

    Read the article

  • Reuse the data CRUD methods in data access layer, but they are updated too quickly

    - by ValidfroM
    I agree that we should put CRUD methods in a data access layer. However, in my current project I have some issues. It is a legacy system, and there are quite a lot of CRUD methods in some concrete manager classes. People, including me, seem to just add new methods to them rather than reuse the existing methods, because:

    - We don't know whether an existing method is what we need.
    - Even if we have the source code, do we really want to read other people's code before making a decision?
    - The code is updated too quickly; we do not have time to get familiar with the DAO API.

    Back to the question: how do you solve this in your project? If we say "reuse", it really needs to be reusable rather than just an excuse.

    Read the article

  • Thoughts on type aliases/synonyms?

    - by Rei Miyasaka
    I'm going to try my best to frame this question in a way that doesn't result in a language war or a list, because I think there could be a good, technical answer to it. Different languages support type aliases to varying degrees. C# allows type aliases to be declared at the beginning of each code file, and they're valid only throughout that file. Languages like ML/Haskell use type aliases probably as much as they use type definitions. C/C++ are sort of a Wild West, with typedef and #define often being used seemingly interchangeably to alias types. The upsides of type aliasing don't invoke too much dispute: It makes it convenient to define composite types that are described naturally by the language, e.g. type Coordinate = float * float or type String = [Char]. Long names can be shortened: using DSBA = System.Diagnostics.DebuggerStepBoundaryAttribute. In languages like ML or Haskell, where function parameters often don't have names, type aliases provide a semblance of self-documentation. The downside is a bit more iffy: aliases can proliferate, making it difficult to read and understand code or to learn a platform. The Win32 API is a good example, with its DWORD = int and its HINSTANCE = HANDLE = void* and its LPHANDLE = HANDLE FAR* and such. In all of these cases it hardly makes any sense to distinguish between a HANDLE and a void pointer, or a DWORD and an integer, etc. Setting aside the philosophical debate of whether a king should give complete freedom to their subjects and let them be responsible for themselves, or whether all of their questionable actions should be intervened upon, could there be a happy medium that would allow the benefits of type aliasing while mitigating the risk of its abuse? As an example, the issue of long names can be solved by good autocomplete features. Visual Studio 2010, for instance, will allow you to type DSBA in order to refer IntelliSense to System.Diagnostics.DebuggerStepBoundaryAttribute. Could there be other features that would provide the other benefits of type aliasing more safely?
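    As one concrete illustration of a possible middle ground (a sketch, not from the original post): some type systems can make an alias nominally distinct on request, so a HANDLE-style alias is no longer silently interchangeable with every other alias of the same underlying type. In TypeScript this is commonly simulated with a "branded" alias:

        // Plain alias: interchangeable with number; convenient but unchecked.
        type Dword = number;

        // "Branded" alias: nominally distinct, so unrelated aliases can't mix.
        type Handle = number & { readonly __brand: "Handle" };

        function closeHandle(h: Handle): void {
          console.log(`closing handle ${h}`);
        }

        const d: Dword = 42;
        // closeHandle(d);         // compile error: a Dword is not a Handle
        closeHandle(42 as Handle); // the explicit cast keeps intent visible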

    Read the article

  • Design: How to model / where to store relational data between classes

    - by Walker
    I'm trying to figure out the best design here, and I can see multiple approaches, but none that seems "right." There are three relevant classes here: Base, TradingPost, and Resource. Each Base has a TradingPost which can offer various Resources depending on the Base's tech level. Where is the right place to store the minimum tech level a base must possess to offer any given resource? A database seems like overkill. Putting it in each subclass of Resource seems wrong--that's not an intrinsic property of the Resource. Do I have a mediating class, and if so, how does it work? It's important that I not be duplicating code; that I have one place where I set the required tech level for a given item. Essentially, where does this data belong? P.S. Feel free to change the title; I struggled to come up with one that fits.
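    One possible shape for this, sketched with assumed names (ResourceKind, MIN_TECH_LEVEL and TradingPost are illustrative, not the asker's code): a single registry mapping each resource type to its minimum tech level, owned by neither Base nor Resource, which TradingPost consults. The rule then lives in exactly one place, with no database:

        // Hypothetical names throughout; the single source of truth is one map.
        enum ResourceKind { Water, Fuel, Alloy }

        const MIN_TECH_LEVEL: ReadonlyMap<ResourceKind, number> = new Map([
          [ResourceKind.Water, 1],
          [ResourceKind.Fuel, 3],
          [ResourceKind.Alloy, 5],
        ]);

        class TradingPost {
          constructor(private baseTechLevel: number) {}

          // Offer exactly the resources this base's tech level permits.
          offeredResources(): ResourceKind[] {
            return [...MIN_TECH_LEVEL.entries()]
              .filter(([, minLevel]) => this.baseTechLevel >= minLevel)
              .map(([kind]) => kind);
          }
        }

        // A tech-level-3 base offers Water and Fuel, but not Alloy.
        console.log(new TradingPost(3).offeredResources());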

    Read the article

  • Still no detected structured data in Google Webmaster Tools [on hold]

    - by user6211
    Can you give me some suggestions about what's wrong with my structured data? Google still cannot read it. It looks like this:

        <div class="identity">
          <div itemscope itemtype="http://schema.org/LocalBusiness">
            <a itemprop="url" href="http://MYDOMAIN.co.uk/"><div itemprop="name"><strong>MY_COMPANY</strong></div></a>
            <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
              <span itemprop="streetAddress">MY_ADDRESS</span>,
              <span itemprop="addressLocality">London</span>,
              <span itemprop="postalCode">SE5 MY_XYZ</span>,
              <span itemprop="addressCountry">UK</span>
            </div>
          </div>
        </div>

    Read the article

  • Ubuntu Tools for recovering data from damaged USB Flash Drive ~ 10 Gb

    - by PREDA LUCIAN
    I have technical issues with my USB flash drive, a JetFlash®V15 (TS16GJFV15). It's a very critical situation because I cannot see the data on it, and I need a way to recover it ASAP. In general, I kept the USB flash drive connected to my laptop non-stop. There were power surges, and when I came back I noticed the problem. Details regarding the JetFlash®V15 (at present):

    - When I connect it to a USB slot, the LED blinks intermittently and later stays constantly lit.
    - If I inspect the computer's devices, I find a "Generic USB Flash Disk" (when the stick is connected).
    - If I inspect "Properties", I see the following details: Type: unknown (application/octet-stream); Size: unknown; Volume: unknown; Accessed: unknown; Modified: unknown.

    I inspected the stick on 2 different computers (and in different USB ports) and the problem was the same: I cannot see the content. I checked with both Windows 7 and Ubuntu 10.04, without success; it worked with both OSes before this issue. I would appreciate an answer that solves the problem, not one that merely confirms it. What do I have to do to recover the information on it (nearly 10 GB)? I look forward to being guided by a technical expert.

    Read the article

  • Consolidating hotel data from various booking sites with different IDs or references

    - by Victor
    In one of my projects, I have data for hotels, and several booking sites are able to book these hotels. For example: Hotel A - Booking (ID = 4002), Expedia (ID = 123), Priceline (ID = 147). The three booking engines each use their own ID to reference Hotel A, so I would need to check manually and make the right reference to the hotel. If I have 100,000 hotels, do I have to check manually 300,000 times (considering 3 booking sites)? They might provide APIs; then I can cross-check the name, address or latitude/longitude, but if they differ a little bit I might link the wrong hotel. I'm sure there are better ways to do this. There are many travel sites out there which do hotel price checking across many booking sites, but how do they make sure they are checking the right hotel on each of them? Does anyone have any experience with this?
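    A rough sketch of the usual record-linkage approach (the field names and thresholds here are made up for illustration): normalize each provider's hotel name and compare geographic distance, auto-linking only high-confidence matches and queueing the rest for manual review:

        // Hypothetical record shape for one provider's hotel listing.
        interface HotelRecord {
          providerId: string;
          name: string;
          lat: number;
          lng: number;
        }

        // Strip common filler words and punctuation before comparing names.
        const normalize = (s: string) =>
          s.toLowerCase().replace(/hotel|inn|resort/g, "").replace(/\W+/g, " ").trim();

        // Rough distance in meters between two coordinates (equirectangular).
        function distanceMeters(a: HotelRecord, b: HotelRecord): number {
          const dLat = (a.lat - b.lat) * 111_320;
          const dLng = (a.lng - b.lng) * 111_320 * Math.cos((a.lat * Math.PI) / 180);
          return Math.hypot(dLat, dLng);
        }

        // Auto-link only when both the name and the location agree; everything
        // else goes to a manual-review queue instead of a possibly wrong link.
        function isLikelySameHotel(a: HotelRecord, b: HotelRecord): boolean {
          return normalize(a.name) === normalize(b.name) && distanceMeters(a, b) < 150;
        }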

    Read the article

  • Mount an external HD in Ubuntu 12.10

    - by Luigi Tiburzi
    Although this is an abundantly treated matter, I'm unable to find an answer that fits my needs. I had a 12.04 installation of Ubuntu and decided to install 12.10. Using GParted, I copied the partition holding my system to an external HD containing a Windows partition. Then I installed the newest Ubuntu version, and now I want to take back some files (for example my .emacs) from that partition. But when I try to mount it, it is not found as sdb, and if I mount it from /dev/usb/hddev0 I get no output, only a blinking cursor: no errors, no output. I even tried to mount it as an NTFS disk, but the result was the same. It's as if the HD cannot be detected. So how can I access the data on that disk? Could I get the files from a GParted terminal instead of the Ubuntu one? Thanks

    Read the article

  • Using Ubuntu to recover data from a crashed Windows install

    - by user289391
    I was using Windows on my laptop when suddenly the blue screen of death appeared; the laptop then restarted and displayed this:

        Intel UNDI, PXE-2.1 (built 083)
        Copyright (C) 1997-200 Intel Corporation
        This Product is covered by one or more of the following patents:
        US5,307459, US5,434,872, US5732,094, US6579,884, US6115,776 and US6,327,625
        Realtek PCIe FE Family Controller Series v120 (01/26/10)
        PXE-M0F: ExitingPXEROM.
        reboot failed

    I have Ubuntu on an external disk, so I have now booted into that. Two questions: Any theories on what happened? How can I use Ubuntu to recover my data from the Windows install?

    Read the article

  • Disk not accessible

    - by user107044
    I formatted my hard drive yesterday and it was working well even after the formatting. But when I restarted my system, it showed that the space is allotted to my files, yet they are inaccessible. I have even tried to unhide the files and folders, in case they were somehow hidden, but nothing works. The hard drive is shown as empty, but its properties say that it still contains the data: http://imgur.com/ObjTE In the image, the directory shows only 1 file of size 4.8 KB, but the space used on the drive is 11.6 GB. Please suggest a solution.

    Read the article

  • SQL SERVER – Data Pages in Buffer Pool – Data Stored in Memory Cache

    Have you ever wondered what types of data are there in your cache? During SQL Server trainings, I am usually asked if there is any [...] This will drop all the clean buffers, so we will be able to start again from there. Now, run the following script and check the execution plan of the query. [...]

    Read the article

  • How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel

    - by Javier Treviño
    Fetching data from a database into an Excel spreadsheet for analysis, reporting, transforming, sharing, etc. is a very common task among users. There are several ways to extract data from a MySQL database and import it into Excel; for example, you can use MySQL Connector/ODBC to configure an ODBC connection to a MySQL database, then in Excel use the Data Connection Wizard to select the database and table from which you want to extract data, and then specify what worksheet you want to put the data into. Another way is to dump a comma-delimited text file with the data from a MySQL table (using the MySQL Command Line Client, MySQL Workbench, etc.) and then open the file in Excel using the Text Import Wizard to attempt to correctly split the data into columns. These methods are fine, but they involve some degree of technical knowledge and require repeating several steps each time data needs to be imported from a MySQL table into an Excel spreadsheet. So, can this be done in an easier and faster way? With MySQL for Excel you can. MySQL for Excel features an Import MySQL Data action where you can import data from a MySQL Table, View or Stored Procedure literally with a few clicks within Excel. Following is a quick guide describing how to import data using MySQL for Excel. This guide assumes you already have a working MySQL Server instance, Microsoft Office Excel 2007 or 2010, and MySQL for Excel installed.

    1. Opening MySQL for Excel. Being an Excel Add-In, MySQL for Excel is opened from within Excel: open Excel, go to the Data tab located in the Ribbon and click MySQL for Excel at the far right of the Ribbon.

    2. Creating a MySQL connection (may be optional). If you have MySQL Workbench installed, you will automatically see the same connections that you can see in MySQL Workbench, so you can use any of those and there may be no need to create a new connection. If you want to create a new connection (which normally you will do only once), in the Welcome Panel click New Connection, which opens the Setup New Connection dialog. Here you only need to give your new connection a distinctive Connection Name, and specify the Hostname (or IP address) where the MySQL Server instance is running (if different than localhost), the Port to connect to, and the Username for the login. If you wish to test whether your setup is good to go, click Test Connection and an information dialog will pop up stating whether the connection is successful or errors were found.

    3. Opening a connection to a MySQL Server. To open a pre-configured connection to a MySQL Server you just need to double-click it, and the Connection Password dialog is displayed, where you enter the password for the login.

    4. Selecting a MySQL schema. After opening a connection to a MySQL Server, the Schema Selection Panel is shown, where you can select the schema that contains the Tables, Views and Stored Procedures you want to work with. To do so, either double-click the desired schema or select it and click Next >.

    5. Importing data. All previous steps were the basic minimum needed to drill down to the DB Object Selection Panel, where you can see the database objects (grouped by type: Tables, Views and Procedures, in that order) that you want to perform actions against; in the case of this guide, the action of importing data from them.

    a. From a MySQL Table. To import from a Table, select it from the list in the Database Objects' Tables group; after selecting it you will note the actions below the list become available. Then click Import MySQL Data. The Import Data dialog is displayed; you can see some basic information here, like the name of the Excel worksheet the data will be imported to (in the window title), the Table Name, the total Row Count, and a 10-row preview of the data, meant for the user to see which columns the table contains and to provide a way to select which columns to import. The Import Data dialog is designed with defaults in place so all data is imported (all rows and all columns) by just clicking Import; this is important to minimize the number of clicks needed to get the job done. After the import is performed, you will have the data in the Excel worksheet formatted automatically. If you need to override the defaults in the Import Data dialog to change the columns selected for import or to change the number of imported rows, you can easily do so before clicking Import. In the screenshot below, the defaults are overridden to import only the first 3 columns and rows 10 – 60 (Limit to 50 Rows and Start with Row 10). If the number of rows to be imported exceeds the maximum number of rows Excel can hold in its worksheet, a warning is displayed in the dialog, meaning the imported number of rows will be limited by that maximum (65,535 rows if the worksheet is in Compatibility Mode). In the screenshot below, you can see the Table contains 80,559 rows, but only 65,534 rows will be imported, since the first row is used for the column names if the Include Column Names as Headers checkbox is checked.

    b. From a MySQL View. Similar to importing from a Table, to import from a View just select it from the list in the Database Objects' Views group, then click Import MySQL Data. The Import Data dialog is displayed; identically to the way everything looks when importing from a Table, the dialog displays the View Name, the total Row Count and the data preview grid. Since Views are really a filtered way to display data from Tables, it is as if we were extracting data from a Table, so the Import Data dialog is identical for those 2 database objects. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot. Note that you can override the defaults in the Import Data dialog in the same way described above for importing data from Tables. The Compatibility Mode warning will also be displayed if the data exceeds the maximum number of rows explained before.

    c. From a MySQL Procedure. To import from a Procedure, select it from the list in the Database Objects' Procedures group (note you can see Procedures here but not Functions, since the latter return a single value and are by design filtered out). After the selection is made, click Import MySQL Data. The Import Data dialog is displayed, but this time it looks different from the one used for Tables and Views. Given the nature of Stored Procedures, they first require values for their Parameters, and Procedures can also return multiple Result Sets; so the Import Data dialog shows the Procedure Name and the Procedure Parameters in a grid where their values are input. After you supply the Parameter Values, click Call. After calling the Procedure, the Result Sets it returns are displayed at the bottom of the dialog; output parameters and the return value of the Procedure are appended as the last Result Set of the group. Each Result Set is displayed as a tab so you can preview the returned data. You can specify whether you want to import the Selected Result Set (default), All Result Sets – Arranged Horizontally, or All Result Sets – Arranged Vertically using the Import drop-down list; then click Import. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot. Note that in this example all Result Sets were imported and arranged vertically.

    As you can see, with MySQL for Excel importing data from a MySQL database becomes an easy task that requires very little technical knowledge, so it can be done by any type of user. Hope you enjoyed this guide! Remember that your feedback is very important to us, so drop us a message: MySQL on Windows (this) Blog - https://blogs.oracle.com/MySqlOnWindows/ Forum - http://forums.mysql.com/list.php?172 Facebook - http://www.facebook.com/mysql Cheers!

    Read the article

  • Excel Conditional Formatting Multiple Data Bars and Data Icons in one cell

    - by wbeard52
    I am using Excel 2007 on a Windows machine. I am attempting to place one data bar and one data icon into a cell under conditional formatting. The issue is that I don't want data icons or data bars for cells that have dates in the future, and I only want data icons for dates at least one month in the past. This is what I have: This is what I want: I am using the EOMONTH function to determine the last day of the month for the conditional formatting calculations. For the data bar the formulas are =EOMONTH(Now(), 4) and =EOMONTH(Now(), -1). The data icon formulas are =EOMONTH(Now(), -1) and =EOMONTH(Now(), -2). Is there a way in Excel 2007 to get rid of the data icons for all dates in the future and lose the data bars when the date has passed? Thanks

    Read the article

  • Folders in SQL Server Data Tools

    - by jamiet
    Recently I have begun a new project in which I am using SQL Server Data Tools (SSDT) and SQL Server Integration Services (SSIS) 2012. Although I used SSDT & SSIS fairly extensively while SQL Server 2012 was in the beta phase, I usually find that you don't learn about the capabilities and quirks of new products until you use them on a real project, hence I am hoping I'm going to have a lot of experiences to share on my blog over the coming few weeks. In this first such blog post I want to talk about file and folder organisation in SSDT.

    The predecessor to SSDT is Visual Studio Database Projects. When one created a new Visual Studio Database Project, a folder structure was provided with "Schema Objects" and "Scripts" in the root and a series of subfolders for each schema. Apparently a few customers were not too happy with the tool arbitrarily creating lots of folders in Solution Explorer, and hence SSDT has gone in completely the opposite direction: now no folders are created and new objects get created in the root; it is at your discretion where they get moved to.

    After using SSDT for a few weeks I can safely say that I preferred the older way, because I never used Solution Explorer to navigate my schema objects anyway, so it didn't bother me how many folders it created. Having said that, the thought of a single long list of files in Solution Explorer without any folders makes me shudder, so on this project I have been manually creating folders in which to organise files, and I have tried to mimic the old way as much as possible by creating two folders in the root: one for all schema objects and another for pre/post-deployment scripts.

    This works fine until different developers start to build their own different subfolder structures; if you are OCD-inclined like me this is going to grate on you eventually, and hence you are going to want to move stuff around so that you have consistent folder structures for each schema and (if you have multiple databases) each project. Moreover, new files get created with a filename of the object name + ".sql", and often people like to have an extra identifier in the filename to indicate the object type.

    The overall point is this: files and folders in your solution are going to change. Some version control systems (VCSs) don't take kindly to files being moved around or renamed, because they recognise the renamed/moved file simply as a new file, and when they do that you lose the revision history which, to my mind, is one of the key benefits of using a VCS in the first place. On this project we have been using Team Foundation Server (TFS), and while it pains me to say it (as I am no great fan of TFS's version control system) it has proved invaluable when dealing with the SSDT problems that I outlined above, because it is integrated right into the Visual Studio IDE.

    Thus the advice from this blog post is: if you are using SSDT, consider using a Visual-Studio-integrated VCS that can easily handle file renames and file moves. I suspect that fans of other VCSs will counter by saying that their VCS weapon of choice can handle renames/file moves quite satisfactorily, and if that's the case... great... let me know about them in the comments. This blog post is not an attempt to make people use one particular VCS, only to make people aware of this issue that might arise when using SSDT.

    More to come in the coming few weeks! @jamiet

    Read the article

  • Isolating test data in acceptance tests

    - by Matt Phillips
    I'm looking for guidance on how to keep my acceptance tests isolated. Right now the issue I'm having with running the tests in parallel is the database records that are manipulated in the tests. I've written helpers that take care of doing inserts and deletes before tests are executed, to make sure the state is correct. But now I can't run them in parallel against the same database without uniquely generating the test data fields for each test. For example, when testing creating a row, I'll delete everything where column A = foo and column B = bar, then navigate through the UI in the test and create a record with column A = foo and column B = bar. When testing that a duplicate row is not allowed to be created, I'll insert a row with column A = foo and column B = bar and then use the UI to try to do the exact same thing, which displays an error message in the UI as expected. These tests work perfectly when run separately and serially, but I can't run them at the same time for fear that one will create or delete a record the other is expecting. Any tips on how to structure them better so they can be run in parallel?
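    One approach sometimes used for this (a sketch, not from the question): have each test generate values no other test can share, so parallel runs cannot collide on the same rows. The helper names below are hypothetical:

        import { randomUUID } from "node:crypto";

        // Each test builds values that no other test (or parallel run) can
        // share, so setup/teardown of one test never touches another's rows.
        function uniqueTestData(testName: string) {
          const suffix = randomUUID().slice(0, 8);
          return {
            columnA: `foo-${testName}-${suffix}`,
            columnB: `bar-${testName}-${suffix}`,
          };
        }

        // Usage inside a test (framework-agnostic sketch):
        const { columnA, columnB } = uniqueTestData("create-row");
        // insertRow(columnA, columnB);        // hypothetical helpers from
        // assertRowExists(columnA, columnB);  // the question's test suite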

    Read the article

  • Space-efficient data structures for broad-phase collision detection

    - by Marian Ivanov
    As far as I know, these are three types of data structures that can be used for the collision detection broadphase:

    Unsorted arrays: check every object against every object, O(n^2) time and O(log n) space. It's so slow that it's useless unless n is really small.

        for (i = 1; i < objects; i++)
            for (j = 0; j < i; j++)
                narrowPhase(i, j);

    Sorted arrays: sort the objects, so that you get O(n^(2-1/k)) time for k dimensions (O(n^1.5) for 2D, O(n^1.67) for 3D) and O(n) space. Assuming the space is 2D and sortedArray is sorted so that if one object begins at sortedArray[i] and another object ends at sortedArray[i-1], they don't collide.

    Heaps of stacks: divide the objects between a heap of stacks, so that you only have to check the bucket, its children and its parents, O(n log n) time but O(n^2) space. This is probably the most frequently used approach.

    Is there a way of having O(n log n) time with less space? When is it more efficient to use sorted arrays over heaps and vice versa?
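    For reference, the "sorted arrays" idea above is usually implemented as sweep and prune. A minimal single-axis sketch (my illustration, with an assumed Box shape), using an O(n log n) sort plus an output-sensitive sweep and O(n) extra space:

        // Hypothetical axis-aligned bounding box on one axis.
        interface Box { id: number; minX: number; maxX: number; }

        // Sweep and prune along x: sort once, then only compare boxes whose
        // x-intervals overlap; the resulting pairs go to the narrow phase.
        function sweepAndPrune(boxes: Box[]): Array<[number, number]> {
          const sorted = [...boxes].sort((a, b) => a.minX - b.minX);
          const pairs: Array<[number, number]> = [];
          const active: Box[] = [];
          for (const box of sorted) {
            // Drop boxes that ended before this one starts.
            for (let i = active.length - 1; i >= 0; i--) {
              if (active[i].maxX < box.minX) active.splice(i, 1);
            }
            for (const other of active) pairs.push([other.id, box.id]);
            active.push(box);
          }
          return pairs; // candidate pairs for narrowPhase
        }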

    Read the article

  • Data Structure for Small Number of Agents in a Relatively Big 2D World

    - by Seçkin Savasçi
    I'm working on a project where we will implement a kind of world simulation: a square 2D world in which agents live and make decisions, like moving or replicating themselves, based on their neighbor cells (world = grid) and some extra parameters (which are not based on the state of the world). I'm looking for a data structure to implement such a project. My concerns are: I will implement this 3 times (sequential, using OpenMP, using MPI), so if I can use the same structure for all of them that will be quite good. The first thing that comes up is keeping a 2D array for the world and storing agent references in it, then simulating the world for each time slice by checking every cell in each iteration and doing further processing if an agent is found in the cell. The downside: what if I have a 1000x1000 world and only 5 agents in it? It would be overkill, for both the sequential and parallel versions, to check each cell looking for possible agents. I could use a quadtree and store agents in it, but then how can I get the information about neighbor cells? Please let me know if I should elaborate more.
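    One structure that fits a sparse world like this (a sketch with assumed types, not from the question) is a hash map keyed by cell coordinates: each step iterates only the agents, and neighbor lookups remain O(1) per cell:

        interface Agent { id: number; }

        // Sparse world: only occupied cells exist in the map.
        // The key is the cell coordinate encoded as "x,y".
        const world = new Map<string, Agent>();
        const key = (x: number, y: number) => `${x},${y}`;

        function neighbors(x: number, y: number): Agent[] {
          const found: Agent[] = [];
          for (let dx = -1; dx <= 1; dx++) {
            for (let dy = -1; dy <= 1; dy++) {
              if (dx === 0 && dy === 0) continue;
              const a = world.get(key(x + dx, y + dy));
              if (a) found.push(a);
            }
          }
          return found;
        }

        // Each simulation step iterates the 5 agents, not the 1,000,000 cells.
        world.set(key(500, 500), { id: 1 });
        console.log(neighbors(500, 501).length); // 1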

    Read the article

  • How much information can you mine out of a name?

    - by Finglas Fjorn
    While not directly related to programming, I figured that the programmers on here would be just as curious as I was about this question. Feel free to close it if it does not meet the guidelines. A name: first, possibly a middle, and surname. I'm curious about how much information you can mine out of a name, using publicly available datasets. I know that you can get the following, with anywhere from low to high probability depending on the input, using US census data: 1) Gender. 2) Race. Facebook, for instance, used exactly that to find out, with a decent level of accuracy, the racial distribution of the users of their site (https://www.facebook.com/note.php?note_id=205925658858). What else can be mined? I'm not looking for anything specific; this is a very open-ended question to assuage my curiosity. My examples are US-specific, so we'll assume that the name belongs to someone located in the US; but if someone knows of publicly available datasets for other countries, I'm more than open to them too. I hope this is an interesting question!

    Read the article

  • Data structure for grid with negative indices

    - by The Secret Imbecile
    Sorry if this is an insultingly obvious concept, but it's something I haven't done before and I've been unable to find any material discussing the best way to approach it. I'm wondering what the best data structure is for holding a 2D grid of unknown size. The grid has integer coordinates (x,y), and will have negative indices in both directions. So, what is the best way to hold this grid? I'm programming in C# currently, so I can't have negative array indices. My initial thought was to have a class with 4 separate arrays for (+x,+y), (+x,-y), (-x,+y), and (-x,-y). This seems to be a valid way to implement the grid, but it does feel like I'm over-engineering the solution, and array resizing will be a headache. Another idea was to keep track of the center point of the array and set that as the topological (0,0); however, I would have to shift every element of the grid when repeatedly adding to the top-left of the grid, which would be similar to grid resizing, though in all likelihood more frequent. Thoughts?
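    A map keyed by coordinates sidesteps both negative indices and resizing; in C# this would be a Dictionary keyed by an (x, y) pair. A TypeScript sketch of the same idea (the Grid class is illustrative, not from the question):

        // Sparse grid with unbounded integer coordinates, negative included.
        class Grid<T> {
          private cells = new Map<string, T>();

          set(x: number, y: number, value: T): void {
            this.cells.set(`${x},${y}`, value);
          }

          get(x: number, y: number): T | undefined {
            return this.cells.get(`${x},${y}`);
          }
        }

        const g = new Grid<string>();
        g.set(-3, 7, "a");          // negative indices need no special casing
        console.log(g.get(-3, 7));  // "a"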

    Read the article

  • Data model for unlockable card collection

    - by Karlos Zafra
    My game has a collection of cards (about 64) that are unlocked one by one, provided that 4 items are collected/gained in the game. Right now the deck of cards is modeled with a plist file, like this:

        <plist>
        <dict>
            <key>card1</key>
            <dict>
                <key>available</key>
                <true/>
                <key>series</key>
                <integer>1</integer>
                <key>model</key>
                <integer>1</integer>
                <key>color</key>
                <integer>1</integer>
            </dict>
            <key>card2</key>
            <dict>
                <key>available</key>
                <false/>
                <key>series</key>
                <integer>1</integer>
                <key>model</key>
                <integer>1</integer>
                <key>color</key>
                <integer>2</integer>
            </dict>

    The value for the key named "available" marks whether the item is locked or not. Right now I have to include the data for the items collected by the user, but including that inside the plist looks like too much hassle. What kind of structure would be more suitable for this? How do you manage a collection of unlockable items like this?
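    One commonly suggested split (a sketch with assumed names; the 4-items-per-card rule is my reading of the question): keep the static card definitions read-only and store per-player progress separately, deriving availability instead of persisting it:

        // Static definition, shipped with the game (read-only).
        interface CardDef { id: string; series: number; model: number; color: number; }

        // Mutable player progress, saved separately from the definitions.
        interface Progress { itemsCollected: number; }

        const ITEMS_PER_UNLOCK = 4; // 4 collected items unlock the next card

        // "available" is computed, never stored: cards unlock one by one
        // as the player accumulates items.
        function unlockedCards(deck: CardDef[], progress: Progress): CardDef[] {
          const unlocked = Math.floor(progress.itemsCollected / ITEMS_PER_UNLOCK);
          return deck.slice(0, unlocked);
        }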

    Read the article

  • HTTP Content-Type in ASP.Net SoapHttpClientProtocol

    - by Daniel Fone
    Hi there, I have a problem with a Web Service Consumer written in ASP.NET. The error message is: System.InvalidOperationException: Client found response content type of 'application/xml; charset=utf-8', but expected 'text/xml'. The client is based on System.Web.Services.Protocols.SoapHttpClientProtocol. We can't change the Content-Type given by the provider, this has to be 'application/xml; charset=utf-8'. Is there any way to change what Content-Type the SoapHttpClientProtocol expects? Unfortunately, we are probably limited to .NET 1.1. Thanks! Update: We found a way to change the Content-Type sent by the provider, and this solved the problem. I'd still be curious to know how to change the expectations of the consumer though.

    Read the article
