Search Results

Search found 58956 results on 2359 pages for 'data structures'.


  • Importing data from text file to specific columns using BULK INSERT

    - by Dinesh Asanka
    Bulk insert is much faster than using other techniques such as SSIS. However, when you are using bulk insert you can't insert into specific columns: if, for example, there are five columns in a table, you must supply five values for each record in the text file you are importing from. This is an issue when you are expecting default values to be inserted into tables. Let us say you have a table as below, in which you expect ID, Status and CreatedDate to be populated automatically, so your text file only has FirstName and LastName values, like so:
    Dinesh,Asanka
    Saman,Liyanage
    Ruwan,Silva
    Susantha,Bathige
    Jude,Peires
    Sanjeewa,Jayawickrama
    If you run a bulk insert against this table, you will get an error: Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 1 (ID). To avoid this, create a view containing only the columns you expect to fill and run the bulk insert against the view. If you check the table afterwards, you will see the values from the text file alongside the default values.

    Read the article

  • Get aggregated view of data for entire website with Google Analytics

    - by crmpicco
    I have a website (www.ayrshireminis.com) which has three main sections under different directories: /forum, /galleries and /contact. I would like to have an aggregated view of the data for the whole website, but also for each section. What is the recommended approach for doing this? I believe I can create a web property that includes a profile for the entire website plus duplicated filtered profiles, each section having an include filter. This is my gut instinct, but I'd like to know if there is another (better) way to do it. Maybe by having one account that includes a profile for the whole site and another profile with an include filter for the individual sections?

    Read the article

  • Reuse the data CRUD methods in the data access layer, but they are updated too quickly

    - by ValidfroM
    I agree that we should put CRUD methods in a data access layer; however, in my current project I have some issues. It is a legacy system, and there are quite a lot of CRUD methods in some concrete manager classes. People, including me, tend to just add new methods rather than reuse the existing ones, because we don't know whether an existing method is what we need (and even if we have the source code, do we really need to read someone else's code just to decide?), and because the code is updated too quickly to get familiar with the DAO API. Back to the question: how do you solve this in your projects? If we say "reuse", it really needs to be reusable, rather than just an excuse.

    Read the article

  • Design: How to model / where to store relational data between classes

    - by Walker
    I'm trying to figure out the best design here, and I can see multiple approaches, but none that seems "right." There are three relevant classes here: Base, TradingPost, and Resource. Each Base has a TradingPost which can offer various Resources depending on the Base's tech level. Where is the right place to store the minimum tech level a base must possess to offer any given resource? A database seems like overkill. Putting it in each subclass of Resource seems wrong--that's not an intrinsic property of the Resource. Do I have a mediating class, and if so, how does it work? It's important that I not be duplicating code; that I have one place where I set the required tech level for a given item. Essentially, where does this data belong? P.S. Feel free to change the title; I struggled to come up with one that fits.
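
    One common answer is to make the mapping pure data owned by a small mediating rules object, so neither Resource nor TradingPost hard-codes tech levels and the rule lives in exactly one place. A minimal sketch of that idea (class and method names here are illustrative, not from the question):

        #include <string>
        #include <unordered_map>

        // Single authoritative table: resource id -> minimum tech level.
        // Populated in one place (code or a config file), so the required
        // level for an item is never duplicated across Resource subclasses.
        class TradingRules {
        public:
            void setMinTechLevel(const std::string& resource, int level) {
                minTech_[resource] = level;
            }
            bool canOffer(const std::string& resource, int baseTechLevel) const {
                auto it = minTech_.find(resource);
                return it != minTech_.end() && baseTechLevel >= it->second;
            }
        private:
            std::unordered_map<std::string, int> minTech_;
        };

    A TradingPost would consult the shared TradingRules instance when listing what a Base can offer, which keeps the relationship out of both classes without requiring a database.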

    Read the article

  • Still no structured data detected in Google Webmaster Tools [on hold]

    - by user6211
    Can you give me some suggestions about what's wrong with my structured data? Google still cannot read it. It looks like this:
    <div class="identity">
      <div itemscope itemtype="http://schema.org/LocalBusiness">
        <a itemprop="url" href="http://MYDOMAIN.co.uk/"><div itemprop="name"><strong>MY_COMPANY</strong></div></a>
        <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
          <span itemprop="streetAddress">MY_ADDRESS</span>,
          <span itemprop="addressLocality">London</span>,
          <span itemprop="postalCode">SE5 MY_XYZ</span>,
          <span itemprop="addressCountry">UK</span>
        </div>
      </div>
    </div>

    Read the article

  • Consolidating hotels data from various booking sites with different IDs or reference

    - by Victor
    In one of my projects, I have data for hotels, and several booking sites are able to book these hotels. For example: Hotel A - Booking (ID = 4002), Expedia (ID = 123), Priceline (ID = 147). The three booking engines each use their own ID to reference Hotel A, so I would need to check manually to make the right reference to the hotel. If I have 100,000 hotels, do I have to check manually 300,000 times (considering 3 booking sites)? They might provide an API, and then I could cross-check the name, address or latitude/longitude, but if those differ even a little bit I might attach the reference to the wrong hotel. I'm sure there are better ways to do this. There are many travel sites out there which compare hotel prices across many booking sites; how do they make sure they are checking the right hotel on each of them? Does anyone have any experience with this?
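
    For what it's worth, sites that aggregate many booking engines usually treat this as a record-linkage problem: normalize the names, then auto-match a pair only when the name similarity and the geographic distance both pass a threshold, routing borderline cases to manual review. A rough sketch of the scoring idea (thresholds and helper names are assumptions, not from any particular API):

        #include <algorithm>
        #include <cctype>
        #include <cmath>
        #include <string>

        // Lowercase and strip punctuation so small spelling differences
        // ("Hotel A & Spa" vs "hotel a spa") no longer matter.
        std::string normalizeName(std::string s) {
            std::transform(s.begin(), s.end(), s.begin(),
                           [](unsigned char c) { return std::tolower(c); });
            s.erase(std::remove_if(s.begin(), s.end(),
                                   [](unsigned char c) { return !std::isalnum(c); }),
                    s.end());
            return s;
        }

        // Great-circle (haversine) distance in kilometers.
        double distanceKm(double lat1, double lon1, double lat2, double lon2) {
            const double kPi = 3.14159265358979323846, rad = kPi / 180.0, R = 6371.0;
            double dLat = (lat2 - lat1) * rad, dLon = (lon2 - lon1) * rad;
            double a = std::sin(dLat / 2) * std::sin(dLat / 2) +
                       std::cos(lat1 * rad) * std::cos(lat2 * rad) *
                       std::sin(dLon / 2) * std::sin(dLon / 2);
            return 2 * R * std::asin(std::sqrt(a));
        }

        // Auto-match only when both signals agree; anything borderline
        // goes to a manual review queue rather than being guessed.
        bool sameHotel(const std::string& nameA, double latA, double lonA,
                       const std::string& nameB, double latB, double lonB) {
            return normalizeName(nameA) == normalizeName(nameB) &&
                   distanceKm(latA, lonA, latB, lonB) < 0.15;  // ~150 meters
        }

    This turns 300,000 manual checks into a much smaller review queue of near-misses.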

    Read the article

  • Mount external HD on Ubuntu 12.10

    - by Luigi Tiburzi
    Although it's an abundantly treated matter, I'm unable to find an answer valid for my needs. I had a 12.04 installation of Ubuntu and decided to install 12.10. I copied (using GParted) the partition where my system was to an external HD that also holds a Windows partition. Then I installed the newest Ubuntu version, and now I want to take back some files (for example my .emacs) from that partition. But when I try to mount it, it is not found as sdb, and if I mount it from /dev/usb/hddev0 I don't get any output, only a blinking cursor: no errors, no output. I even tried to mount it as an NTFS disk, but the result was the same. It's like the HD cannot be detected. So how can I access the data on that disk? Could I get the files from a GParted terminal instead of the Ubuntu one? Thanks

    Read the article

  • Using Ubuntu to recover data from a crashed Windows install

    - by user289391
    I was using Windows on my laptop when suddenly the blue screen of death appeared; the laptop then restarted and displayed this:
    Intel UNDI, PXE-2.1 (built 083)
    Copyright (C) 1997-200 Intel Corporation
    This Product is covered by one or more of the following patents: US5,307459, US5,434,872, US5732,094, US6579,884, US6115,776 and US6,327,625
    Realtek PCIe FE Family Controller Series v120 (01/26/10)
    PXE-M0F: Exiting PXE ROM.
    reboot failed
    I have Ubuntu on an external disk, so I have now booted into that. Two questions: Any theories on what happened? How can I use Ubuntu to recover my data from the Windows install?

    Read the article

  • disk not accessible

    - by user107044
    I formatted my hard drive yesterday and it was working well, even after the formatting. But when I restarted my system, it showed that the space is allotted to my files, yet they are inaccessible. I have even tried to unhide the files and folders, in case they somehow got hidden, but nothing works. The hard drive is shown as empty, but the properties say that it still contains the data: http://imgur.com/ObjTE In the image, it shows that the directory has only 1 file of size 4.8 KB, but the space being used on the drive is 11.6 GB. Please suggest a solution.

    Read the article

  • SQL SERVER - Data Pages in Buffer Pool - Data Stored in Memory Cache

    Have you ever wondered what types of data are there in your cache? During SQL Server Trainings, I am usually asked if there is any [...] This will drop all the clean buffers so we will be able to start again from there. Now, run the following script and check the execution plan of the query.

    Read the article

  • Stack data storage order

    - by Jamie Dixon
    When talking about a stack in either computing or "real" life we usually assume a "first on, last off" type of functionality. Because the idea of a stack is based around something in the physical world, does it matter how the data in the stack is stored? I notice in a lot of examples that the storage of the stack data is quite often done using an array and the newest item added to the stack is placed at the bottom of the array. (like adding a new plate to an existing stack of plates except putting it underneath the other plates rather than on top). As a paradigm, does it matter in what order the data is stored within the stack as long as the operation of the stack acts as expected?
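
    No: the LIFO contract constrains the interface, not the layout. Any backing store works as long as push and pop agree on which end is the top. A minimal sketch of the usual array-backed form, with the newest element at the end:

        #include <stdexcept>
        #include <vector>

        class IntStack {
        public:
            void push(int v) { items_.push_back(v); }  // newest goes on the end
            int pop() {                                // remove the newest
                if (items_.empty()) throw std::out_of_range("empty stack");
                int v = items_.back();
                items_.pop_back();
                return v;
            }
        private:
            std::vector<int> items_;  // index 0 holds the oldest element
        };

    Storing the newest element at index 0 instead would look identical to callers; it would just cost O(n) shifts per operation, so the choice of internal order is about efficiency, not correctness.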

    Read the article

  • Data structure for Settlers of Catan map?

    - by templatetypedef
    Hello all- A while back someone asked me if I knew of a nice way to encode the information for the game Settlers of Catan. This would require storing a hexagonal grid in a way where each hex can have data associated with it. More importantly, though, I would need some way of efficiently looking up vertices and edges on the sides of these hexagons, since that's where all the action is. My question is this: is there a good, simple data structure for storing a hexagonal grid while allowing for fast lookup of hexagons, edges between hexagons, and vertices at the intersections of hexagons? I know that general structures like a winged-edge or quad-edge could do this, but that seems like massive overkill. Thanks!
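
    One lightweight option: axial hex coordinates, with an edge identified by the unordered pair of hexes it separates and a vertex by the unordered triple of hexes that meet there (border hexes can be kept as dummy entries so coastal edges and vertices still get canonical keys). A sketch of the key scheme; the coordinate convention is an assumption, not from the question:

        #include <algorithm>
        #include <array>
        #include <map>
        #include <string>
        #include <utility>

        using Hex = std::pair<int, int>;  // axial (q, r) coordinates

        // The six neighbors of a hex in axial coordinates.
        std::array<Hex, 6> neighbors(Hex h) {
            static const int dq[6] = {1, 1, 0, -1, -1, 0};
            static const int dr[6] = {0, -1, -1, 0, 1, 1};
            std::array<Hex, 6> out;
            for (int i = 0; i < 6; ++i)
                out[i] = {h.first + dq[i], h.second + dr[i]};
            return out;
        }

        // Sorting the hexes makes each key canonical, so the same edge or
        // vertex compares equal no matter which adjacent hex named it.
        using EdgeKey = std::array<Hex, 2>;
        using VertexKey = std::array<Hex, 3>;

        EdgeKey edgeKey(Hex a, Hex b) {
            EdgeKey k{a, b};
            std::sort(k.begin(), k.end());
            return k;
        }
        VertexKey vertexKey(Hex a, Hex b, Hex c) {
            VertexKey k{a, b, c};
            std::sort(k.begin(), k.end());
            return k;
        }

        std::map<Hex, std::string> tiles;        // per-hex data
        std::map<EdgeKey, std::string> roads;    // per-edge data
        std::map<VertexKey, std::string> towns;  // per-vertex data

    Lookups are O(log n) with std::map (or expected O(1) with unordered maps and a custom hash), with none of the winged-edge machinery.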

    Read the article

  • How to find out memory layout of your data structure implementation on Linux 64bit machine

    - by ajay
    In this article, http://cacm.acm.org/magazines/2010/7/95061-youre-doing-it-wrong/fulltext, the author talks about the memory layouts of two data structures, the Binary Heap and the B-Heap, and compares how one has a better memory layout than the other: http://deliveryimages.acm.org/10.1145/1790000/1785434/figs/f5.jpg http://deliveryimages.acm.org/10.1145/1790000/1785434/figs/f6.jpg I want to get hands-on experience with this. I have an implementation of an N-ary tree and I want to find out the memory layout of my data structure. What is the best way to come up with a memory layout like the one in the article? Secondly, I think it is easier to identify the memory layout if it is an array-based implementation. If the implementation of a tree uses pointers, then what tools do we have, or what kind of approach is required, to map its memory layout? Thanks!
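
    For a pointer-based tree, one hands-on starting point is to record each node's address during a traversal and plot address against visit order; how the addresses cluster or scatter shows how the allocator laid the nodes out, which is essentially what the article's diagrams visualize. A minimal sketch (the node shape is an assumption for an N-ary tree):

        #include <cstdio>

        struct Node {
            int key;
            Node* first_child;
            Node* next_sibling;
        };

        // Emit "visit-order depth address" triples; feeding the output to a
        // plotting tool gives a rough picture of the memory layout.
        void dumpLayout(const Node* n, int depth = 0) {
            static int order = 0;
            if (!n) return;
            std::printf("%d %d %p\n", order++, depth, (const void*)n);
            dumpLayout(n->first_child, depth + 1);
            dumpLayout(n->next_sibling, depth);
        }

    On Linux, tools such as valgrind's massif (heap profiling) and pahole (per-struct layout and padding) can complement this, but the raw addresses alone already reveal whether logically adjacent nodes share pages.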

    Read the article

  • Efficient data structure for fast random access, search, insertion and deletion

    - by Leonel
    I'm looking for a data structure (or structures) that would allow me to keep an ordered list of integers, no duplicates, with indexes and values in the same range. I need four main operations to be efficient, in rough order of importance:
    1. taking the value at a given index
    2. finding the index of a given value
    3. inserting a value at a given index
    4. deleting the value at a given index
    Using an array I have 1 at O(1), but 2 is O(N) and insertions and deletions are expensive (O(N) as well, I believe). A linked list has O(1) insertion and deletion (once you have the node), but 1 and 2 are O(N), thus negating the gains. I tried keeping two arrays, a[index]=value and b[value]=index, which turns 1 and 2 into O(1) but turns 3 and 4 into even more costly operations. Is there a data structure better suited for this?
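
    A structure worth knowing here is the order-statistic tree: a balanced search tree in which every node also stores its subtree size, which makes select-by-index O(log n) alongside O(log n) insert and delete. A sketch of the core trick on a bare node (balancing, and the value-to-node map needed for operation 2, are omitted; this is the idea, not a full container):

        struct Node {
            int value;
            int size = 1;           // nodes in this subtree; updated on the
            Node* left = nullptr;   // way back up after every insert/delete
            Node* right = nullptr;
        };

        int sz(const Node* n) { return n ? n->size : 0; }

        // Value at position `index` (0-based) in O(height): the left
        // subtree's size tells us which side of the root the index is on.
        const Node* select(const Node* n, int index) {
            while (n) {
                int leftCount = sz(n->left);
                if (index < leftCount) {
                    n = n->left;
                } else if (index == leftCount) {
                    return n;
                } else {
                    index -= leftCount + 1;
                    n = n->right;
                }
            }
            return nullptr;  // index out of range
        }

    Pairing such a tree (keyed by position) with a hash map from value to node covers operations 1, 3 and 4 in O(log n), and, with parent pointers to recompute a node's index, operation 2 in O(log n) as well.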

    Read the article

  • Queue-like data structure with fast search and insertion

    - by Max
    I need a data structure with the following properties: It contains integer numbers, no duplicates. After it reaches the maximal size the first element is removed. So if the capacity is 3, then this is how it would look when inserting sequential numbers: {}, {1}, {1, 2}, {1, 2, 3}, {2, 3, 4}, {3, 4, 5} etc. Only two operations are needed: inserting a number into this container (INSERT) and checking if a number is already in the container (EXISTS). The number of EXISTS operations is expected to be approximately 2 * the number of INSERT operations. I need these operations to be as fast as possible. What would be the fastest data structure, or combination of data structures, for this scenario?
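
    A common fit, for what it's worth: a FIFO queue to remember eviction order plus a hash set for constant-time EXISTS, both updated together on every insert. A minimal sketch:

        #include <cstddef>
        #include <deque>
        #include <unordered_set>

        class RecentSet {
        public:
            explicit RecentSet(std::size_t capacity) : capacity_(capacity) {}

            void insert(int x) {                   // INSERT
                if (seen_.count(x)) return;        // no duplicates
                if (order_.size() == capacity_) {  // full: drop the oldest
                    seen_.erase(order_.front());
                    order_.pop_front();
                }
                order_.push_back(x);
                seen_.insert(x);
            }
            bool exists(int x) const {             // EXISTS, O(1) average
                return seen_.count(x) != 0;
            }

        private:
            std::size_t capacity_;
            std::deque<int> order_;         // insertion order, oldest at front
            std::unordered_set<int> seen_;  // fast membership tests
        };

    With capacity 3, inserting 1, 2, 3, 4, 5 leaves {3, 4, 5}, matching the behavior described above; both operations are O(1) amortized.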

    Read the article

  • Managing changes in memory-based data format

    - by kamziro
    So I've been using a compact data type in C++, and saving it from memory or loading it from the file involves just copying the bits of memory in and out. However, the obvious drawback of this is that if you need to add or remove elements in the data, it becomes kind of messy. There are also problems with versioning: suppose you distribute a program which uses version A of the data, then the next day you make version B of it, and later on version C. I suppose this can be solved by using something like XML or JSON, but suppose you can't do that for technical reasons. What is the best way to handle this, apart from having different if cases, etc. (which would be pretty ugly, I'd imagine)?
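
    Short of switching to a self-describing format, the usual remedy is to prefix the blob with a version number and keep one small upgrade path per old version, converting stepwise (A to B, B to C) so each release adds only one converter instead of scattering if-cases through the code. A minimal sketch (struct fields invented for illustration):

        #include <cstdint>
        #include <fstream>

        struct RecordV1 { std::int32_t x; };                  // old layout
        struct Record   { std::int32_t x; std::int32_t y; };  // current layout

        constexpr std::uint32_t kCurrentVersion = 2;

        void save(std::ofstream& out, const Record& r) {
            out.write(reinterpret_cast<const char*>(&kCurrentVersion),
                      sizeof kCurrentVersion);
            out.write(reinterpret_cast<const char*>(&r), sizeof r);
        }

        bool load(std::ifstream& in, Record& r) {
            std::uint32_t version = 0;
            in.read(reinterpret_cast<char*>(&version), sizeof version);
            if (version == 1) {                 // upgrade the old layout
                RecordV1 old{};
                in.read(reinterpret_cast<char*>(&old), sizeof old);
                r = {old.x, 0};                 // default the new field
            } else if (version == kCurrentVersion) {
                in.read(reinterpret_cast<char*>(&r), sizeof r);
            } else {
                return false;                   // unknown (future) version
            }
            return static_cast<bool>(in);
        }

    The raw-copy speed is kept for current-version files, and old files pay the conversion cost exactly once, after which they can be re-saved in the new layout.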

    Read the article

  • Ordered Data Structure that allows to efficiently remove duplicate items

    - by devoured elysium
    I need a data structure that:
    - must be ordered (adding elements a, b and c to an empty structure will put them at positions 0, 1 and 2);
    - allows adding repeated items, that is, I can have a list with a, b, c, a, b;
    - allows removing all occurrences of a given item (if I do something like delete(1), it will delete all occurrences of 1 in the structure).
    I can't really pick what the best data structure could be here. I thought at first about something like a List (the problem is having an O(n) operation when removing items), but maybe I'm missing something? What about trees/heaps? Hashtables/maps? I'll have to assume I'll do as much adding as removing with this data structure. Thanks
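
    One combination that fits all three requirements, sketched below: a doubly linked list to preserve insertion order and allow duplicates, plus a hash map from each value to the list nodes holding it, so delete(v) removes every occurrence in time proportional to the number of occurrences rather than O(n). Names are illustrative:

        #include <iterator>
        #include <list>
        #include <unordered_map>
        #include <vector>

        class OrderedBag {
        public:
            void add(int v) {
                items_.push_back(v);
                // std::list iterators stay valid until erased, so we can
                // remember exactly where every occurrence of v lives.
                where_[v].push_back(std::prev(items_.end()));
            }
            void removeAll(int v) {  // O(number of occurrences of v)
                auto it = where_.find(v);
                if (it == where_.end()) return;
                for (auto pos : it->second) items_.erase(pos);
                where_.erase(it);
            }
            const std::list<int>& items() const { return items_; }

        private:
            std::list<int> items_;  // keeps a, b, c, a, b in order
            std::unordered_map<int, std::vector<std::list<int>::iterator>> where_;
        };

    Adding a, b, c, a, b and then removeAll(a) leaves b, c, b in their original relative order, with no full scan of the list.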

    Read the article

  • The data structure of libev watchers

    - by changchang
    Libev uses three data structures to store different kinds of watchers. Heap: for watchers sorted by time, such as ev_timer and ev_periodic. Linked list: such as ev_io, ev_signal, ev_child, etc. Array: such as ev_prepare, ev_check, ev_async, etc. There is no doubt that a heap is the right store for timer watchers, but what is the criterion for choosing between a linked list and an array? The structure that stores ev_io watchers seems a little complex: it is an array indexed by fd, where each element is a linked list of ev_io watchers. Is it that it is easier to allocate space for the array when each element is a linked list? Is that the reason? Or is it just that insert and remove operations on ev_io watchers are more frequent, while ev_prepare watchers are more stable? Or are there other reasons?

    Read the article

  • How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel

    - by Javier Treviño
    Fetching data from a database to then get it into an Excel spreadsheet to do analysis, reporting, transforming, sharing, etc. is a very common task among users. There are several ways to extract data from a MySQL database and then import it into Excel; for example, you can use MySQL Connector/ODBC to configure an ODBC connection to a MySQL database, then in Excel use the Data Connection Wizard to select the database and table from which you want to extract data, and then specify what worksheet you want to put the data into. Another way is to dump a comma-delimited text file with the data from a MySQL table (using the MySQL Command Line Client, MySQL Workbench, etc.) and then in Excel open the file using the Text Import Wizard to attempt to correctly split the data into columns.

    These methods are fine, but they involve some degree of technical knowledge to make the magic happen, and they involve repeating several steps each time data needs to be imported from a MySQL table to an Excel spreadsheet. So, can this be done in an easier and faster way? With MySQL for Excel you can. MySQL for Excel features an Import MySQL Data action where you can import data from a MySQL Table, View or Stored Procedure literally with a few clicks within Excel. Following is a quick guide describing how to import data using MySQL for Excel. This guide assumes you already have a working MySQL Server instance, Microsoft Office Excel 2007 or 2010 and MySQL for Excel installed.

    1. Opening MySQL for Excel
    Being an Excel Add-In, MySQL for Excel is opened from within Excel: open Excel, go to the Data tab located in the Ribbon and click MySQL for Excel at the far right of the Ribbon.

    2. Creating a MySQL Connection (may be optional)
    If you have MySQL Workbench installed you will automatically see the same connections that you can see in MySQL Workbench, so you can use any of those and there may be no need to create a new connection. If you want to create a new connection (which normally you will do only once), in the Welcome Panel click New Connection, which opens the Setup New Connection dialog. Here you only need to give your new connection a distinctive Connection Name, specify the Hostname (or IP address) where the MySQL Server instance is running (if different than localhost), the Port to connect to and the Username for the login. If you wish to test if your setup is good to go, click Test Connection and an information dialog will pop up stating whether the connection is successful or errors were found.

    3. Opening a connection to a MySQL Server
    To open a pre-configured connection to a MySQL Server you just need to double-click it, and the Connection Password dialog is displayed, where you enter the password for the login.

    4. Selecting a MySQL Schema
    After opening a connection to a MySQL Server, the Schema Selection Panel is shown, where you can select the Schema that contains the Tables, Views and Stored Procedures you want to work with. To do so, you just need to either double-click the desired Schema or select it and click Next >.

    5. Importing data
    All previous steps were really the basic minimum needed to drill down to the DB Object Selection Panel, where you can see the Database Objects (grouped by type: Tables, Views and Procedures, in that order) that you want to perform actions against; in the case of this guide, the action of importing data from them.

    a. From a MySQL Table
    To import from a Table you just need to select it from the list of Database Objects' Tables group; after selecting it you will note the actions below the list become available. Then click Import MySQL Data. The Import Data dialog is displayed; you can see some basic information here, like the name of the Excel worksheet the data will be imported to (in the window title), the Table Name, the total Row Count and a 10-row preview of the data, meant for the user to see the columns that the table contains and to provide a way to select which columns to import. The Import Data dialog is designed with defaults in place so all data is imported (all rows and all columns) by just clicking Import; this is important to minimize the number of clicks needed to get the job done. After the import is performed you will have the data in the Excel worksheet formatted automatically. If you need to override the defaults in the Import Data dialog to change the columns selected for import or to change the number of imported rows, you can easily do so before clicking Import. In the screenshot below, the defaults are overridden to import only the first 3 columns and rows 10-60 (Limit to 50 Rows and Start with Row 10). If the number of rows to be imported exceeds the maximum number of rows Excel can hold in its worksheet, a warning is displayed in the dialog, meaning the imported number of rows will be limited by that maximum (65,535 rows if the worksheet is in Compatibility Mode). In the screenshot below you can see the Table contains 80,559 rows, but only 65,534 rows will be imported, since the first row is used for the column names if the Include Column Names as Headers checkbox is checked.

    b. From a MySQL View
    Similar to importing from a Table, to import from a View you just need to select it from the list of Database Objects' Views group, then click Import MySQL Data. The Import Data dialog is displayed; identically to the way everything looks when importing from a table, the dialog displays the View Name, the total Row Count and the data preview grid. Since Views are really a filtered way to display data from Tables, it is as if we are extracting data from a Table, so the Import Data dialog is identical for those 2 Database Objects. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot. Note that you can override the defaults in the Import Data dialog in the same way described above for importing data from Tables. The Compatibility Mode warning will also be displayed if the data exceeds the maximum number of rows explained before.

    c. From a MySQL Procedure
    To import from a Procedure you just need to select it from the list of Database Objects' Procedures group (note you can see Procedures here but not Functions, since the latter return a single value, so by design they are filtered out). After the selection is made, click Import MySQL Data. The Import Data dialog is displayed, but this time it looks different from the one used for Tables and Views. Given the nature of Stored Procedures, they first require values for their Parameters, and Procedures can also return multiple Result Sets; so the Import Data dialog shows the Procedure Name and the Procedure Parameters in a grid where their values are input. After you supply the Parameter Values, click Call. After calling the Procedure, the Result Sets returned by it are displayed at the bottom of the dialog; output parameters and the return value of the Procedure are appended as the last Result Set of the group. Each Result Set is displayed as a tab so you can see a preview of the returned data. You can specify whether you want to import the Selected Result Set (default), All Result Sets - Arranged Horizontally or All Result Sets - Arranged Vertically using the Import drop-down list; then click Import. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot. Note that in this example all Result Sets were imported and arranged vertically.

    As you can see, using MySQL for Excel importing data from a MySQL database becomes an easy task that requires very little technical knowledge, so it can be done by any type of user. Hope you enjoyed this guide! Remember that your feedback is very important for us, so drop us a message: MySQL on Windows (this) Blog - https://blogs.oracle.com/MySqlOnWindows/ Forum - http://forums.mysql.com/list.php?172 Facebook - http://www.facebook.com/mysql Cheers!

    Read the article

  • Excel Conditional Formatting Multiple Data Bars and Data Icons in one cell

    - by wbeard52
    I am using Excel 2007 on a Windows machine. I am attempting to place one data bar and one data icon into a cell using conditional formatting. The issue is that I don't want data icons or data bars for cells that have dates in the future, and I only want data icons for dates at least one month in the past. This is what I have: This is what I want: I am using the EOMONTH function to determine the last day of the month for the conditional formatting calculations. For the data bar the formulas are =EOMONTH(Now(), 4) and =EOMONTH(Now(), -1). The data icon formulas are =EOMONTH(Now(), -1) and =EOMONTH(Now(), -2). Is there a way in Excel 2007 to get rid of the data icons for all dates in the future and to lose the data bars when the date has passed? Thanks

    Read the article

  • Criteria strings, how many different criteria can be entered to retrieve specific data?

    - by Janet
    For our membership database we are currently using an old DOS program, "Arclist". The program is old, but the one feature we desperately need in a database program is the ability to enter multiple criteria at one time for a "one time" extraction of the data meeting all the various criteria entered, in what I call a "criteria string". An example may be extracting only those records with zip codes matching (67893, 54235, 54323, 54201, 54302, 54303, 54301, 67894, 67895). Another set of criteria might be to omit records not equal to one criterion in one field while also extracting records matching criteria in another field. So we would want records "not equal to" a value in one field, but whose information equals the requested information in another field.

    Read the article

  • Isolating test data in acceptance tests

    - by Matt Phillips
    I'm looking for guidance on how to keep my acceptance tests isolated. Right now the issue I'm having with running the tests in parallel is the database records that are manipulated in the tests. I've written helpers that take care of doing inserts and deletes before tests are executed, to make sure the state is correct. But now I can't run the tests in parallel against the same database without uniquely generating the test data fields for each test. For example: Testing creating a row, I'll delete everything where column A = foo and column B = bar, then navigate through the UI in the test and create a record with column A = foo and column B = bar. Testing that a duplicate row is not allowed to be created, I'll insert a row with column A = foo and column B = bar and then use the UI to try to do the exact same thing, which displays an error message in the UI as expected. These tests work perfectly when run separately and serially, but I can't run them at the same time for fear that one will create or delete a record the other is expecting. Any tips on how to structure them better so they can be run in parallel?
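
    One common way out is to stop sharing literal values like foo and bar at all: have each test generate its own unique variants, so parallel tests can never collide on the same rows and most pre-test cleanup becomes unnecessary. A minimal sketch of such a helper (the function name is illustrative):

        #include <chrono>
        #include <random>
        #include <sstream>
        #include <string>

        // Append a random suffix to a base value, e.g.
        // "foo" -> "foo-1a2b3c4d5e6f7a8b", unique across parallel runs.
        std::string uniqueValue(const std::string& base) {
            static thread_local std::mt19937_64 rng(
                std::chrono::steady_clock::now().time_since_epoch().count());
            std::ostringstream out;
            out << base << '-' << std::hex << rng();
            return out.str();
        }

    Each test then creates its row with uniqueValue("foo") / uniqueValue("bar"), asserts against those exact values, and deletes only them afterwards; the duplicate-row test reuses its own generated pair twice, so it still exercises the constraint without touching any other test's data.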

    Read the article

  • Ubuntu Tools for recovering data from damaged USB Flash Drive ~ 10 GB

    - by PREDA LUCIAN
    I have technical issues with my USB flash drive, a JetFlash®V15 (TS16GJFV15). It's a very critical situation because I cannot see the data on it, and I need a way to recover it ASAP. That USB flash disk was connected to my laptop more or less non-stop; there were power surges, and when I came back I noticed the problem. Details regarding the JetFlash®V15 (at present):
    - when I connect it to a USB slot, the LED blinks intermittently and later remains constantly lit.
    - if I inspect the computer's drives, I find a "Generic USB Flash Disk" (when the stick is connected).
    - if I inspect "Properties", I see the following details:
    --- Type: unknown (application/octet-stream)
    --- Size: unknown
    --- Volume: unknown
    --- Accessed: unknown
    --- Modified: unknown
    I inspected the stick on 2 different computers (and in different USB ports) and the problem was the same: I cannot see the content. I checked with both Windows 7 and Ubuntu 10.04, without success; it worked with both OSes before this issue. I'll appreciate an answer which solves the problem, not one which merely confirms it. What do I have to do to recover the information on it (nearly 10 GB)? I'm looking forward to being guided by a technical expert.

    Read the article

  • How much information can you mine out of a name?

    - by Finglas Fjorn
    While not directly related to programming, I figured that the programmers on here would be just as curious as I was about this question. Feel free to close the question if it does not meet the guidelines. A name: first, possibly a middle, and a surname. I'm curious about how much information you can mine out of a name using publicly available datasets. I know that you can get the following, with anywhere between low and high probability (depending on the input), using US census data: 1) Gender. 2) Race. Facebook, for instance, used exactly that to find out, with a decent level of accuracy, the racial distribution of their site's users (https://www.facebook.com/note.php?note_id=205925658858). What else can be mined? I'm not looking for anything specific; this is a very open-ended question to assuage my curiosity. My examples are US-specific, so we'll assume the name belongs to someone located in the US; but if someone knows of publicly available datasets for other countries, I'm more than open to them too. I hope this is an interesting question!

    Read the article

  • What is the best way to store a table in C++

    - by Topo
    I'm programming a decision tree in C++ using a slightly modified version of the C4.5 algorithm. Each node represents an attribute (a column of your data set) and has a child per possible value of the attribute. My problem is how to store the training data set, keeping in mind that I have to use a subset for it at each node, so I need a quick way to select only a subset of rows and columns. The main goal is to do it in the most memory- and time-efficient way possible (in that order of priority). The best way I have thought of is to have an array of arrays (or std::vector), or something like that, and for each node have a list (array, vector, etc.) of the (column, line) pairs that are valid for that node. I know there should be a better way to do this; any suggestions? UPDATE: What I need is something like this. In the beginning I have this data:
    Paris 4 5.0 True
    New York 7 1.3 True
    Tokio 2 9.1 False
    Paris 9 6.8 True
    Tokio 0 8.4 False
    But for the second node I just need this data:
    Paris 4 5.0
    New York 7 1.3
    Paris 9 6.8
    And for the third node:
    Tokio 2 9.1
    Tokio 0 8.4
    But with a table of millions of records with up to hundreds of columns. What I have in mind is to keep all the data in a matrix, and then for each node keep the info of the current columns and rows. Something like this:
    Paris 4 5.0 True
    New York 7 1.3 True
    Tokio 2 9.1 False
    Paris 9 6.8 True
    Tokio 0 8.4 False
    Node 2: columns = [0,1,2], rows = [0,1,3]
    Node 3: columns = [0,1,2], rows = [2,4]
    This way, in the worst-case scenario I just have to spend size_of(int) * (number_of_columns + number_of_rows) per node, which is a lot less than having an independent data matrix for each node.
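
    A minimal sketch of the matrix-plus-index-lists idea described above (type and member names are illustrative, not from the question):

        #include <string>
        #include <variant>
        #include <vector>

        using Cell = std::variant<std::string, int, double, bool>;

        struct DataSet {
            std::vector<std::vector<Cell>> rows;  // the full table, stored once
        };

        struct TreeNode {
            const DataSet* data;      // shared by every node, never copied
            std::vector<int> rowIdx;  // rows still valid at this node
            std::vector<int> colIdx;  // columns still available at this node

            // Element (r, c) of this node's logical sub-table.
            const Cell& at(int r, int c) const {
                return data->rows[rowIdx[r]][colIdx[c]];
            }
        };

    A child node copies its parent's colIdx minus the split attribute and keeps only the matching subset of rowIdx, which is exactly the size_of(int) * (number_of_columns + number_of_rows) per-node overhead estimated in the question.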

    Read the article
