Search Results

Search found 74473 results on 2979 pages for 'data table'.


  • SQL – Contest to Get The Date – Win USD 50 Amazon Gift Cards and Cool Gift

    - by Pinal Dave
    If you are a regular reader of this blog, you will have no trouble at all resolving this puzzle. The contest is based on my experience with NuoDB. If you are not familiar with NuoDB, here are a few pointers for you:
    - Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB
    - Quick Start with Admin Sections of NuoDB – Manage NuoDB Database
    - Quick Start with Explorer Sections of NuoDB – Query NuoDB Database
    In today's contest you have to answer the following questions:
    Q1: Precision of NOW()
    What is the precision of NuoDB's NOW() function, which returns the current date and time?
    Hint: Run the following script in the NuoDB Console Explorer section:
        SELECT NOW() AS CurrentTime FROM dual;
    Here is the image. I have masked the area where the time precision is displayed.
    Q2: Executing a Date and Time Script
    When I execute the following script
        SELECT 'today' AS Today, 'tomorrow' AS Tomorrow, 'yesterday' AS Yesterday FROM dual;
    I will get the following result. NOW – what will be the answer when we execute the following script, and WHY?
        SELECT CAST('today' AS DATE) AS Today, CAST('tomorrow' AS DATE) AS Tomorrow, CAST('yesterday' AS DATE) AS Yesterday FROM dual;
    HINT: Install NuoDB (it takes 90 seconds).
    Prizes:
    - 2 Amazon Gift Cards
    - 2 Limited Edition Hoodies (US residents only)
    Rules:
    - Please leave an answer in the comments section below.
    - You must answer both questions together in a single comment.
    - US residents who want to qualify to win NuoDB apparel, please mention your country in the comment.
    - You can resubmit your answer multiple times; the latest entry will be considered valid.
    - The last day to participate in the puzzle is June 24, 2013. All valid answers will be kept hidden till June 24, 2013. The winner will be announced on June 25, 2013.
    - Two winners will each get a USD 25 Amazon Gift Card (total value = 25 x 2 = 50 USD). The winners will be selected using a random algorithm from all the valid answers.
    - Anybody with a valid email address can take part in the contest.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology. Tagged: NuoDB

    Read the article

  • More Efficient Data Structure for Large Layered Tile Map

    - by Stupac
    It seems like the popular method is to break the map up into regions and load them as needed. My problem is that in my game there are many AI entities besides the player performing actions in virtually all regions of the map. Let's just say I have a 5000x5000 map: when I use a 2D array of bytes to render it, my game uses around 17 MB of memory, but as soon as I change that data structure to my own MapCell class (which contains only a single field: byte terrain), the game's memory consumption rockets up to 400+ MB. I plan on adding layering, so an array of bytes won't cut it, and I figure I'd need to add a List of some sort to the MapCell class to hold the objects in each layer. I'm only rendering tiles that are on screen, but I need the rest of the map to be represented in memory since it is constantly used in Update. So my question is: how can I reduce the memory consumption of my map while still meeting the above requirements? Thank you for your time! Here are a few snippets of my C# code in XNA 4:

        public static void LoadMapData()
        {
            // Test map generation
            int xSize = 5000;
            int ySize = 5000;
            MapCell[,] map = new MapCell[xSize, ySize];
            //byte[,] map = new byte[xSize, ySize];

            Terrain[] terrains = new Terrain[4];
            terrains[0] = grass;
            terrains[1] = dirt;
            terrains[2] = rock;
            terrains[3] = water;

            Random random = new Random();
            for (int x = 0; x < xSize; x++)
            {
                for (int y = 0; y < ySize; y++)
                {
                    //map[x,y] = new MapCell(terrains[random.Next(4)]);
                    map[x, y] = new MapCell((byte)random.Next(4));
                    //map[x, y] = (byte)random.Next(4);
                }
            }

            testMap = new TileMap(map, xSize, ySize);
            // End test map setup
            currentMap = testMap;
        }

        public class MapCell
        {
            //public TerrainType terrain;
            public byte terrain;

            public MapCell(byte itsTerrain)
            {
                terrain = itsTerrain;
            }

            // the type of terrain this cell is treated as
            /*public Terrain terrain { get; set; }
            public MapCell(Terrain itsTerrain)
            {
                terrain = itsTerrain;
            }*/
        }

    Read the article

  • Mac Server bizarre routing table

    - by The Unix Janitor
    My mac routing table usually is very simple. I know it's based on bsd , but what's it doing or trying to do. My routing table is usually very simple however, the second one, default was point to link5 ? Is this normal, or is this IPV6 craziness at work? Can somehelp me understand what OSX/BSD is doing? nternet: Destination Gateway Flags Refs Use Netif Expire default 192.168.1.254 UGSc 22 0 en1 127 127.0.0.1 UCS 0 0 lo0 127.0.0.1 127.0.0.1 UH 4 44102 lo0 169.254 link#5 UCS 0 0 en1 192.168.1 link#5 UCS 6 0 en1 192.168.1.1 0:18:39:6d:89:c5 UHLWIi 0 0 en1 739 192.168.1.189 50:ea:d6:86:26:91 UHLWIi 0 0 en1 798 192.168.1.194 127.0.0.1 UHS 0 0 lo0 192.168.1.203 5c:95:ae:dd:34:8d UHLWIi 0 0 en1 316 192.168.1.253 a:76:ff:b5:51:79 UHLWIi 0 0 en1 911 192.168.1.254 8:76:ff:b5:51:79 UHLWIi 32 204 en1 1117 192.168.1.255 ff:ff:ff:ff:ff:ff UHLWbI 0 7 en1 Internet6: Destination Gateway Flags Netif Expire ::1 link#1 UHL lo0 fe80::%lo0/64 fe80::1%lo0 UcI lo0 fe80::1%lo0 link#1 UHLI lo0 fe80::%en1/64 link#5 UCI en1 fe80::21b:63ff:fec7:c486%en1 0:1b:63:c7:c4:86 UHLI lo0 fe80::223:12ff:fe01:d7fe%en1 0:23:12:1:d7:fe UHLWIi en1 ff01::%lo0/32 fe80::1%lo0 UmCI lo0 ff01::%en1/32 link#5 UmCI en1 ff02::%lo0/32 fe80::1%lo0 UmCI lo0 ff02::%en1/32 link#5 UmCI en1 ----------------------------------- Bizzare routing table here Internet: Destination Gateway Flags Refs Use Netif Expire default link#5 UCS 113 0 en1 17.72.255.12 0:50:7f:5e:92:e2 UHLWIi 2 7 en1 1156 64.4.23.141 0:50:7f:5e:92:e2 UHLWIi 0 3 en1 1181 64.4.23.143 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1189 64.4.23.147 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1183 64.4.23.149 link#5 UHLWIi 0 1 en1 64.4.23.150 0:50:7f:5e:92:e2 UHLWIi 0 24 en1 1175 64.4.23.151 link#5 UHLWIi 0 1 en1 64.4.23.153 link#5 UHLWIi 0 1 en1 64.4.23.155 link#5 UHLWIi 0 1 en1 64.4.23.157 0:50:7f:5e:92:e2 UHLWIi 0 3 en1 1181 64.4.23.165 link#5 UHLWIi 0 2 en1 64.4.23.166 link#5 UHLWIi 0 1 en1 65.55.223.15 0:50:7f:5e:92:e2 UHLWIi 3 21 en1 1189 65.55.223.16 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1181 65.55.223.17 0:50:7f:5e:92:e2 UHLWIi 0 2 en1 1199 65.55.223.20 link#5 UHLWIi 0 1 en1 65.55.223.23 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1199 65.55.223.31 link#5 UHLWIi 0 1 en1 65.55.223.32 link#5 UHLWIi 0 1 en1 65.55.223.37 0:50:7f:5e:92:e2 UHLWIi 3 21 en1 1189 65.55.223.38 link#5 UHLWIi 0 1 en1 69.163.252.33 0:50:7f:5e:92:e2 UHLWIi 1 9 en1 1181 77.67.32.254 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1164 111.221.74.13 0:50:7f:5e:92:e2 UHLWIi 0 24 en1 1183 111.221.74.15 link#5 UHLWIi 0 1 en1 111.221.74.16 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1183 111.221.74.17 0:50:7f:5e:92:e2 UHLWIi 3 23 en1 1172 111.221.74.21 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1181 111.221.74.23 0:50:7f:5e:92:e2 UHLWIi 0 2 en1 1172 111.221.74.24 0:50:7f:5e:92:e2 UHLWIi 0 2 en1 1181 111.221.74.26 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1199 111.221.74.29 0:50:7f:5e:92:e2 UHLWIi 0 2 en1 1181 111.221.74.31 link#5 UHLWIi 0 1 en1 111.221.74.37 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1190 111.221.74.38 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1199 111.221.77.141 0:50:7f:5e:92:e2 UHLWIi 0 3 en1 1199 111.221.77.144 link#5 UHLWIi 0 1 en1 111.221.77.145 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1190 111.221.77.149 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1183 111.221.77.154 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1181 111.221.77.156 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1190 111.221.77.157 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1183 111.221.77.162 link#5 UHLWIi 0 1 en1 111.221.77.165 link#5 UHLWIi 0 1 en1 127 127.0.0.1 UCS 0 0 lo0 127.0.0.1 127.0.0.1 UH 4 40073 lo0 157.55.56.140 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1199 157.55.56.141 0:50:7f:5e:92:e2 
UHLWIi 0 1 en1 1181 157.55.56.143 link#5 UHLWIi 0 1 en1 157.55.56.147 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1183 157.55.56.148 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1183 157.55.56.149 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1189 157.55.56.150 link#5 UHLWIi 0 1 en1 157.55.56.157 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1172 157.55.56.158 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1175 157.55.130.143 link#5 UHLWIi 0 1 en1 157.55.130.144 link#5 UHLWIi 0 1 en1 157.55.130.145 0:50:7f:5e:92:e2 UHLWIi 0 24 en1 1181 157.55.130.152 0:50:7f:5e:92:e2 UHLWIi 0 2 en1 1199 157.55.130.153 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1172 157.55.130.155 0:50:7f:5e:92:e2 UHLWIi 0 2 en1 1189 157.55.130.156 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1186 157.55.130.157 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1189 157.55.130.158 0:50:7f:5e:92:e2 UHLWIi 0 3 en1 1172 157.55.130.160 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1189 157.55.130.162 0:50:7f:5e:92:e2 UHLWIi 3 21 en1 1193 157.55.130.166 link#5 UHLWIi 0 1 en1 157.55.235.141 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1193 157.55.235.142 link#5 UHLWIi 1 1 en1 157.55.235.144 0:50:7f:5e:92:e2 UHLWIi 0 2 en1 1172 157.55.235.145 0:50:7f:5e:92:e2 UHLWIi 0 2 en1 1172 157.55.235.149 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1181 157.55.235.151 link#5 UHRLWIi 0 36 en1 157.55.235.152 0:50:7f:5e:92:e2 UHLWIi 3 21 en1 1189 157.55.235.153 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1178 157.55.235.156 link#5 UHLWIi 0 2 en1 157.55.235.157 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1181 157.55.235.158 link#5 UHLWIi 0 1 en1 157.55.235.159 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1181 157.55.235.162 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1183 157.55.235.166 0:50:7f:5e:92:e2 UHLWIi 0 25 en1 1181 157.56.52.14 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1181 157.56.52.15 0:50:7f:5e:92:e2 UHLWIi 0 2 en1 1183 157.56.52.16 link#5 UHLWIi 0 1 en1 157.56.52.17 0:50:7f:5e:92:e2 UHLWIi 3 14 en1 1199 157.56.52.19 link#5 UHLWIi 0 1 en1 157.56.52.20 0:50:7f:5e:92:e2 UHLWIi 3 17 en1 1199 157.56.52.22 0:50:7f:5e:92:e2 UHLWIi 0 24 en1 1181 157.56.52.25 link#5 UHLWIi 0 1 en1 157.56.52.28 link#5 UHLWIi 0 1 en1 157.56.52.29 link#5 UHLWIi 0 1 en1 157.56.52.31 link#5 UHLWIi 0 1 en1 157.56.52.33 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1189 169.254 link#5 UC 1 0 en1 169.254.174.250 127.0.0.1 UHS 1 0 lo0 169.254.255.255 ff:ff:ff:ff:ff:ff UHLWb 0 2 en1 193.88.6.19 link#5 UHLWIi 0 1 en1 194.165.188.82 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1166 195.46.253.211 link#5 UHLWIi 0 1 en1 204.9.163.143 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1178 213.199.179.141 0:50:7f:5e:92:e2 UHLWIi 0 2 en1 1172 213.199.179.142 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1165 213.199.179.143 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1166 213.199.179.146 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1172 213.199.179.147 0:50:7f:5e:92:e2 UHLWIi 0 2 en1 1164 213.199.179.148 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1165 213.199.179.149 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1172 213.199.179.150 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1165 213.199.179.151 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1164 213.199.179.153 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1166 213.199.179.157 0:50:7f:5e:92:e2 UHLWIi 0 2 en1 1167 213.199.179.160 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1165 213.199.179.161 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1172 213.199.179.162 0:50:7f:5e:92:e2 UHLWIi 0 2 en1 1163 213.199.179.165 0:50:7f:5e:92:e2 UHLWIi 0 1 en1 1164 213.199.179.166 0:50:7f:5e:92:e2 UHLWIi 0 3 en1 1164 224.0.0.251 1:0:5e:0:0:fc UHmLWI 0 0 en1 255.255.255.255 ff:ff:ff:ff:ff:ff UHLWbI 0 2 en1 Internet6: Destination Gateway Flags Netif Expire ::1 link#1 UHL lo0 fe80::%lo0/64 fe80::1%lo0 UcI lo0 fe80::1%lo0 link#1 UHLI lo0 fe80::%en1/64 link#5 UCI en1 fe80::21b:63ff:fec7:c486%en1 0:1b:63:c7:c4:87 UHLI lo0 
fe80::223:12ff:fe01:d7fe%en1 0:23:12:1:d7:ff UHLWIi en1 ff01::%lo0/32 fe80::1%lo0 UmCI lo0 ff01::%en1/32 link#5 UmCI en1 ff02::%lo0/32 fe80::1%lo0 UmCI lo0 ff02::%en1/32 link#5 UmCI en1

    Read the article

  • Error on table import

    - by Moazam Ali
    I am importing tables from my backup server to the main server; all the tables import successfully except one, which fails with the error below. What should I do about it?

        Error at destination for row number 2334233. Errors encountered so far in this task: 1.
        Could not allocate space for object 'operator_audit_trail' in database sens_ms because the 'sens_index' filegroup is full.
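    If this is SQL Server (the wording of the error suggests it), a minimal sketch of one way forward is to give the full filegroup more room before re-running the import. The logical file name, path and sizes below are assumptions, not values from the question:

        -- Hedged sketch, assuming SQL Server: add a file to the full 'sens_index' filegroup.
        -- NAME, FILENAME and the size values are placeholders; adjust to the server's layout.
        ALTER DATABASE sens_ms
        ADD FILE
        (
            NAME = sens_index_2,                       -- hypothetical logical file name
            FILENAME = 'D:\SQLData\sens_index_2.ndf',  -- hypothetical path
            SIZE = 1GB,
            FILEGROWTH = 256MB
        )
        TO FILEGROUP sens_index;

    Enabling autogrowth on the existing file in that filegroup (or freeing disk space) achieves the same thing.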

    Read the article

  • MySQL: table is marked as crashed

    - by DrStalker
    After a disk full issue one of the MySQL DBs on the server is coming up with the following error when I try to back it up:

        [root@mybox ~]# mysqldump -p --result-file=/tmp/dbbackup.sql --database myDBname
        Enter password:
        mysqldump: Got error: 145: Table './myDBname/myTable1' is marked as crashed and should be repaired when using LOCK TABLES

    A bit of investigation shows two tables have this issue. What needs to be done to fix up the damaged tables?
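    Error 145 is the MyISAM "marked as crashed" flag. A minimal sketch of the usual repair path, using the table and database names from the error (repeat for the second affected table):

        -- hedged sketch: check and repair the crashed MyISAM table named in the error
        USE myDBname;
        CHECK TABLE myTable1;
        REPAIR TABLE myTable1;

    The command-line equivalent, mysqlcheck --repair myDBname, walks every table in the database, which is handy when more than one table is flagged.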

    Read the article

  • Roughly how long should an "ALTER TABLE" take on a 1.3GB InnoDB table?

    - by justkevin
    I'm trying to optimize a table that hasn't been optimized in a long time. It has around 2.5 million rows, with 1.3 GB of data and a 152 MB index. I started optimizing it about 15 minutes ago and I have no idea how long it will take. The server is reasonably robust (quad Xeon with 4 GB RAM) and has a 500 MB InnoDB buffer pool. Should I expect this to take minutes, hours or days?
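    MySQL of that era gives no progress indicator for an InnoDB rebuild, but as a rough sketch you can gauge how far along it is from a second session (the state name below is what MySQL 5.x typically reports; treat this as an illustration):

        -- rough sketch: watch the rebuild indirectly from another connection
        SHOW PROCESSLIST;              -- the ALTER/OPTIMIZE thread usually sits in state "copy to tmp table"
        SHOW ENGINE INNODB STATUS\G    -- row-operation and I/O counters give a feel for throughput

    If innodb_file_per_table is on, watching the temporary #sql-*.ibd file grow in the database directory is another rough yardstick, since the rebuild copies the whole table.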

    Read the article

  • Pressing tab to indent a list moves to the next table cell

    - by Ryan Ternier
    I have a list (e.g., a bulleted list) in Microsoft Word. This list is inside a table cell. When I press Tab to increase the indent, rather than increasing the indent, it moves the cursor to the next cell. When I try Ctrl+Tab, it just inserts a tab character without changing the indentation of the bullet (paragraph). How can I turn this off, so it does not go to the next cell but rather just indents the list?

    Read the article

  • Excel 2007: "Format as Table" Increments Column Names

    - by Mark
    I love using the formatting styles for tables in Excel 2007, but in my data I'm using the same column name for multiple columns. When I format my table using the pre-defined styles, it automatically adds an incremental number to each subsequent column name which I don't want. Is there any way to stop this from happening? If I attempt to manually rename the column back to the original name, it automatically appends the incremented number.

    Read the article

  • phpMyAdmin: import a database table from a Google spreadsheet

    - by phunehehe
    I'm managing a small website which allows people to register as members. Previously I used a Google form to manage member registration, but now that the number of users has become quite big I am switching to MySQL. Currently I have around 500 members, with their records saved in a Google spreadsheet. How can I do a bulk import from a Google spreadsheet into a MySQL table? BTW, I'm using phpMyAdmin, so a solution for phpMyAdmin is preferable :) Thanks.
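    One hedged sketch of the bulk load, assuming the sheet is first downloaded from Google as a CSV file; the file name, table name and column list below are placeholders (phpMyAdmin's Import tab with the CSV format does the same job through the UI):

        -- hedged sketch: load a CSV exported from the Google spreadsheet
        -- (file, table and column names are assumptions)
        LOAD DATA LOCAL INFILE 'members.csv'
        INTO TABLE members
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        IGNORE 1 LINES                     -- skip the header row
        (name, email, registered_on);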

    Read the article

  • Postfix error: fatal: table lookup problem

    - by samer na
    Here is the mail.log:

        server localhost: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
        Mar 23 23:07:19 ubuntu postfix/trivial-rewrite[6417]: fatal: mysql:/etc/postfix/mysql_virtual_alias_maps.cf(0,lock|fold_fix): table lookup problem
        Mar 23 23:07:20 ubuntu postfix/smtpd[6401]: warning: problem talking to service rewrite: Success
        Mar 23 23:07:20 ubuntu postfix/cleanup[6296]: warning: problem talking to service rewrite: Connection reset by peer
        Mar 23 23:07:20 ubuntu postfix/master[6291]: warning: process /usr/lib/postfix/trivial-rewrite pid 6417 exit status 1
        Mar 23 23:07:20 ubuntu postfix/master[6291]: warning: /usr/lib/postfix/trivial-rewrite: bad command startup -- throttling

    Read the article

  • Postrouting rule in NAT table

    - by codingfreak
    Hi, I have a strange question regarding NAT using iptables. When I do SNAT in the POSTROUTING chain of the nat table, should I give -j ACCEPT at the end of the rule? I see the counters on the POSTROUTING rule getting incremented, but no packet leaves the machine. So does that mean the packet is DROPPED automatically?

    Read the article

  • Saving table yields "Record is too large" in Access

    - by C. Ross
    I have an Access database that I gave to a user (shame on my head). They were having trouble with some data being too long, so I suggested changing several text fields to memo fields. I tried this in my copy and it worked perfectly, but when the user tries it they get a "Record is too large" message box on saving the modified table design. Obviously the same record is not too large in my database, so why would it be in theirs?

    Read the article

  • Switches with large MAC address table?

    - by user1290200
    Does anyone know which switches have a large MAC address table? I see most switches having only 8K entries, but we need to store way, way more than that (hundreds of thousands). I know this may seem odd, but trust me, there's no other way we can make our setup work. The only thing we seem to be able to do is install Juniper routers that store up to 1M addresses, but that will get quite costly and we'd rather avoid doing that.

    Read the article

  • Increase Availability for Data Center Virtual Environments

    - by Antoinette O'Sullivan
    With Oracle VM, you can increase availability and add flexibility for data center virtual environments. To get started, take training on Oracle VM Server for x86 and Oracle VM Server for SPARC, as appropriate for your systems. You can take these live instructor-led courses from your own desk as a live-virtual event or travel to an education center for an in-class event.

    The Oracle VM Administration: Oracle VM Server for x86 course, in 3 days, teaches you about creating NFS and iSCSI repositories, migration, cloning and exercising high availability. In-class events already on the schedule include (location: date, delivery language):

        Zagreb, Croatia: 11 November 2013 (Croatian)
        Prague, Czech Republic: 21 October 2013 (Czech)
        Ballerup, Denmark: 26 August 2013 (English)
        Bordeaux, France: 18 September 2013 (French)
        Paris, France: 9 October 2013 (French)
        Strasbourg, France: 11 September 2013 (French)
        Hamburg, Germany: 30 September 2013 (German)
        Munich, Germany: 28 October 2013 (German)
        Budapest, Hungary: 9 September 2013 (Hungarian)
        Riga, Latvia: 30 September 2013 (Latvian)
        Oslo, Norway: 16 September 2013 (English)
        Warsaw, Poland: 28 October 2013 (Polish)
        Bucharest, Romania: 14 October 2013 (English)
        Istanbul, Turkey: 23 December 2013 (Turkish)
        Jakarta, Indonesia: 19 August 2013 (English)
        Canberra, Australia: 4 November 2013 (English)
        Melbourne, Australia: 6 November 2013 (English)
        Sydney, Australia: 25 November 2013 (English)
        San Francisco, CA, United States: 16 September 2013 (English)
        Roseville, MN, United States: 21 October 2013 (English)
        St Louis, MO, United States: 11 November 2013 (English)
        Reston, VA, United States: 31 July 2013 (English)
        Buenos Aires, Argentina: 21 August 2013 (Spanish)

    The Oracle VM Server for SPARC: Installation and Configuration course, in 2 days, teaches you about configuring control and service domains, creating guest domains, using virtual disks and networks, and migration. In-class events already on the schedule include:

        Budapest, Hungary: 12 September 2013 (Hungarian)
        Prague, Czech Republic: 9 September 2013 (Czech)
        Colombes, France: 7 October 2013 (French)
        Stuttgart, Germany: 28 October 2013 (German)
        Madrid, Spain: 5 September 2013 (Spanish)
        Istanbul, Turkey: 30 September 2013 (Turkish)
        Petaling Jaya, Malaysia: 15 August 2013 (English)
        Singapore: 5 August 2013 (English)
        Canberra, Australia: 12 August 2013 (English)
        Melbourne, Australia: 30 October 2013 (English)
        Sydney, Australia: 26 August 2013 (English)

    To register for a course or to learn more about Oracle's virtualization curriculum, go to http://education.oracle.com/virtualization.

    Read the article

  • How to extract the innermost table from an HTML file with the Html Agility Pack?

    - by Harikrishna
    I am parsing tabular information from an HTML file with the help of the Html Agility Pack, and it works. But it does not when the table I want to extract is the innermost one, or when I don't know at which position it sits among the nested tables. There can be any number of nested tables, and from those I want to extract the information of the table whose column names are NAME and ADDRESS. For example:

        <table>
          <tr><td>PHONE NO.</td><td>OTHER INFO.</td></tr>
          <tr><td>
            <table>
              <tr><td>AMOUNT</td></tr>
              <tr><td>50000</td></tr>
              <tr><td>80000</td></tr>
            </table>
          </td></tr>
          <tr><td>
            <table>
              <tr><td>
                <table>
                  <tr><td>
                    <table>
                      <tr><td> NAME </td><td>ADDRESS</td>
                      <tr><td> ABC </td><td> kfks </td>
                      <tr><td> BCD </td><td> fdsa </td>
                    </table>
                  </tr></td>
                </table>
              </td></tr>
            </table>
          </td></tr>
        </table>

    There are many tables, but I want to extract the one that has the column names NAME and ADDRESS. What should I do?

    Read the article

  • Data denormalization and C# objects DB serialization

    - by Robert Koritnik
    I'm using a DB table that stores various different entities. This means I can't have an arbitrary number of fields in it to cover every kind of entity. Instead I want to save just the most important fields (dates, reference IDs that act as a kind of foreign key to various other tables, the most important text fields, etc.) plus an additional text field where I'd like to store the more complete object data.

    The most obvious solution would be to store XML strings. The second most obvious choice would be JSON, which is usually shorter and probably also faster to serialize/deserialize. But is it really? My objects also wouldn't need to be strictly serializable, because JsonSerializer is usually able to serialize anything, even anonymous objects, which may well be used here. What would be the most optimal way to solve this problem?

    Additional info: my DB is highly normalised and I'm using Entity Framework, but for the sake of external, super-fast full-text search functionality I'm sacrificing a bit of normalisation. Just for the info, I'm using SphinxSE on top of MySQL. Sphinx would return row IDs that I would use to query my index-optimised conglomerate table, which is much faster than querying multiple tables all over my DB. My table would have columns like:

    - RowID (auto increment)
    - EntityID (of the actual entity; not a real foreign key because it has to point to different tables)
    - EntityType (so I would be able to get the actual entity if needed)
    - DateAdded (timestamp of when the record was added to this table)
    - Title
    - Metadata (serialized data related to the particular entity type)

    This table would be indexed with the Sphinx indexer. When I search for data using this indexer I would provide a series of EntityIDs and a limit date. The indexer would return a very limited, paged set of RowIDs ordered by DateAdded (descending). I would then just join these RowIDs to my table and get the relevant results. So this won't actually be full-text search but a filtering search. Getting RowIDs would be very fast this way, and getting results back from the table would be much faster than filtering on EntityID and DateAdded directly, even though they would be properly indexed.
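    For illustration only, a minimal sketch of what the conglomerate table described above could look like in MySQL; the table name, column types and secondary index are assumptions, not the poster's actual schema:

        -- hedged sketch of the index-optimised conglomerate table (names and types are assumptions)
        CREATE TABLE SearchEntry (
            RowID      INT UNSIGNED NOT NULL AUTO_INCREMENT,
            EntityID   INT UNSIGNED NOT NULL,      -- id of the entity in its own table
            EntityType TINYINT UNSIGNED NOT NULL,  -- discriminator telling which table that is
            DateAdded  DATETIME NOT NULL,
            Title      VARCHAR(255) NOT NULL,
            Metadata   TEXT NULL,                  -- serialized (XML or JSON) entity data
            PRIMARY KEY (RowID),
            KEY IX_Entity_Date (EntityID, DateAdded)
        ) ENGINE = InnoDB;

    Keeping the serialized blob in a plain TEXT column leaves the XML-versus-JSON choice entirely to the application; the database only ever reads it back by RowID.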

    Read the article

  • Oracle sample data problems

    - by Jay
    So, I have this Java-based data transformation/masking tool which I wanted to test out on Oracle 10g. The good part with Oracle 10g is that you get a load of sample schemas, with half a million records in some. The schemas are: SH, OE, HR, IX and so on. So, I installed 10g and found out that the installation scripts are under ORACLE_HOME/demo/scripts. I customized these scripts a bit to run in batch mode. That solves one half of my requirement: creating source data for testing my data transformation software.

    The second half of the requirement is that I create the same schemas under different names (TR_HR, TR_OE and so on...) without any data. These schemas would represent my target schemas. So, in short, my software would pick up data from a table in a schema and load it up into the same table in a different schema.

    Now, I have two issues in creating my target schemas and emptying them. I would like this in a batch job, but with the Oracle scripts you get, the sample schema names are not configurable. So I tried creating a script, replacing OE with TR_OE, HR with TR_HR and so on. However, this approach is kind of irritating because the sample schemas are kind of complicated in the way they are created; Oracle creates synonyms, views, materialized views, data types and a lot of weird stuff. I would like the target schemas (TR_HR, TR_OE, ...) to be empty, but some of the schemas have circular references, which would not allow me to delete data. The only workaround seems to be removing certain foreign keys, deleting the data and then adding the constraints back.

    Is there any easy way to do all this, without all this fuss? I would need a complicated data set for my testing (complicated as in tables with triggers, multiple hierarchies... for instance, a child table that has children up to 5 levels, a parent table that refers to an IOT table, and an IOT table that refers to a non-IOT table, etc.). The sample schemas are just about perfect from a data-set perspective. The only challenge I see is in automating this whole process of loading up the source schemas, and then creating the target schemas and emptying them. Appreciate your help and suggestions.
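    As a hedged sketch of the usual workaround for the circular foreign keys (an illustration, not a full answer): generate the DISABLE statements from the data dictionary for each TR_* schema, truncate, then re-enable.

        -- hedged sketch, run as the owner of a TR_* schema:
        -- emit one ALTER TABLE per foreign key so circular references stop blocking deletes
        SELECT 'ALTER TABLE ' || table_name ||
               ' DISABLE CONSTRAINT ' || constraint_name || ';'
          FROM user_constraints
         WHERE constraint_type = 'R';

        -- run the generated statements, TRUNCATE TABLE each table,
        -- then repeat the query with ENABLE in place of DISABLE to restore the keys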

    Read the article

  • PHP code is not fetching data from MySQL database using WAMP server

    - by john
    I want to display a table from the database (managed in phpMyAdmin) under the following condition: for each different pair of options selected in the drop-down menus, pressing the Search button should display a different table from the database. But it is not doing so.

        <p class="h2">Quick Search</p>
        <div class="sb2_opts">
        <p></p>
        <form method="post" action="" >
            <p>Enter your source and destination.</p>
            <p>From:</p>
            <select name="from">
                <option value="Islamabad">Islamabad</option>
                <option value="Lahore">Lahore</option>
                <option value="murree">Murree</option>
                <option value="Muzaffarabad">Muzaffarabad</option>
            </select>
            <p>To:</p>
            <select name="To">
                <option value="Islamabad">Islamabad</option>
                <option value="Lahore">Lahore</option>
                <option value="murree">Murree</option>
                <option value="Muzaffarabad">Muzaffarabad</option>
            </select>
            <input type="submit" value="search" />
        </form>
        </form>
        </table>
        <?php
        $con = mysqli_connect("localhost", "root", "", "test");
        if (mysqli_connect_errno())
        {
            echo "Failed to connect to MySQL: " . mysqli_connect_error();
        }
        if (isset($_POST['from']) and isset($_POST['To']))
        {
            $from = $_POST['from'];
            $to = $_POST['To'];
            $table = array($from, $to);
            switch ($table)
            {
                case array("Islamabad", "Lahore"):
                    $result = mysqli_query($con, "SELECT * FROM flights");
                    echo "</flights>"; // table name is flights
                    break;
                case array("Islamabad", "Murree"):
                    $result = mysqli_query($con, "SELECT * FROM isb to murree");
                    echo "</isb to murree>"; // table name is isb to murree
                    break;
                case array("Islamabad", "Muzaffarabad"):
                    $result = mysqli_query($con, "SELECT * FROM isb to muzz");
                    echo "</isb to muzz>";
                    break;
                //.....
                //......
                default:
                    echo "Your choice is nor valid !!";
            }
        }
        mysqli_close($con);
        ?>

    Read the article

  • Zabbix Proxy not collecting data

    - by Jordan Eunson
    I have a working Zabbix 1.8.2 server collecting data for our office and our colo facility. However, the link between the colo and the office is flaky. What I'm trying to do is set up a proxy on the colo side with a 1-hour cache that relays the data to our primary server at the office. Our Zabbix server is compiled from source and uses a MySQL database.

    I've followed the instructions found in the Zabbix documentation to compile the proxy using a SQLite3 database. I add the proxy to Zabbix under Administration -> DM -> Proxies. The Zabbix server "sees" the proxy because the "last seen" field is always under 60s. However, when I assign a colo host to the proxy I stop receiving data from it.

    The colo host's zabbix_agentd.log file says this:

        29343:20100622:124847 Timeout while answering request
        29343:20100622:124847 Getting list of active checks failed. Will retry after 60 seconds

    The zabbix_proxy.log says this:

        2041:20100622:123131.760 Deleted 0 records from history [0.000994 seconds]
        2028:20100622:124131.671 Error while receiving answer from server [ZBX_TCP_READ() failed

    I am also unable to receive any SNMP data, which is more important to me than the Zabbix agent data. Has anyone had this problem before?

    Zabbix Server OS: CentOS 5.4
    Zabbix Server Build: 1.8.2 from source
    Zabbix Proxy OS: CentOS 5.4
    Zabbix Proxy Build: 1.8.2 from source

    P.S. The SQLite database on the Zabbix proxy never gets any data written to it; it is identical to when I created it from the blank schema in zabbix-1.8.2/create/schema. (Yes, I've checked the permissions.)

    Read the article

  • Cacti not working for SNMP data sources

    - by lorenzo-s
    I installed the cacti and snmpd packages on a Debian server. I'm able to display the common graphs in Cacti (such as memory usage, load average, logged-in users, etc.) using the data templates listed as Unix. Now I want to replace these graphs with new ones using SNMP data sources, because I see there is also CPU usage, and because it's not excluded that I'll have to manage multiple hosts in the future.

    So, I installed snmpd on the machine and left snmpd.conf as it is. In Cacti, I created three new data sources from SNMP templates for the 127.0.0.1 host:

        ucd/net - CPU Usage - Nice
        ucd/net - CPU Usage - System
        ucd/net - CPU Usage - User

    Then I created a new graph from the ucd/net - CPU Usage template and selected the three data sources in the Graph Item Fields section. The graph is now enabled and running, but empty; no data has been collected.

    Under Console - Devices my SNMP host is listed as up and running:

        System: Linux ip-xx-xx-xxx-xxx 3.2.0-23-virtual #36-Ubuntu SMP Tue Apr 10 22:29:03 UTC 2012 x86_64
        Uptime: 929267 (0 days, 2 hours, 34 minutes)
        Hostname: ip-xx-xx-xxx-xxx
        Location: Sitting on the Dock of the Bay
        Contact: Me [email protected]

    In SNMP Options I left everything as it is:

        SNMP Version: Version 1
        SNMP Community: public
        SNMP Timeout: 500 ms
        Maximum OID's Per Get Request: 10

    In Console - Utilities - Cacti Log I have multiple warnings (two for each data source) every 5 minutes:

        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[2] DS[18] WARNING: Result from SNMP not valid. Partial Result: U
        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.15.0'
        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[1] DS[9] WARNING: Result from SNMP not valid. Partial Result: U
        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.11.52.0'
        10/29/2012 01:40:01 PM - CMDPHP: Poller[0] Host[2] DS[19] WARNING: Result from SNMP not valid. Partial Result: U
        10/29/2012 01:40:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.6.0'
        [...]

    I have the feeling I'm missing something, but I cannot get it...

    Read the article

  • Force database read to master if slave data is stale

    - by Jeff Storey
    I previously asked a specific question about this database replication for new user signup to which I got an answer, but I want to ask this in the more general sense. I have a database setup in which I am using a master/slave combination. I am using the slaves for load balancing (the data itself is partitioned/sharded across multiple databases, but each database has X slaves for load balancing). Let's say I write some data to the master. Now I do a subsequent read which hits a slave, but the slave has not yet caught up to the master. Is there a way (which can be done quickly since it will happen frequently) to determine if the data is stale in the slave so I can then route to the master? In my previous question, it was suggested to do simultaneous writes to the cache and the database. This solution seems practical, but there is still a chance that the data may have been removed from the cache but not yet updated in the slave. A possible solution is to ensure the cache is big enough (based on the typical application load) so the data will not be evicted within the time frame it takes to replicate the data. This seems like it may be feasible. Can anyone provide additional insight into this question? Thanks!
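    If the databases use MySQL-style master/slave replication, one hedged sketch of a staleness check is to ask the slave how far behind it is, or wait briefly for it to reach a known master position, before trusting the read; the binlog file name, position and timeout below are placeholders:

        -- hedged sketch, assuming MySQL replication: run on the candidate slave before reading
        SHOW SLAVE STATUS\G
        -- Seconds_Behind_Master = 0 (with Slave_IO_Running and Slave_SQL_Running = Yes) means it has caught up

        -- or block for up to 1 second until the slave reaches the master position recorded
        -- right after the write (file name and position here are placeholders)
        SELECT MASTER_POS_WAIT('mysql-bin.000123', 4567, 1);

    The master's current file and position come from SHOW MASTER STATUS immediately after the write; a non-negative return from MASTER_POS_WAIT means the slave is safe to read, otherwise route the query to the master.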

    Read the article

  • How to deduplicate 40TB of data?

    - by Michael Stauffer
    I've inherited a research cluster with ~40TB of data across three filesystems. The data stretches back almost 15 years, and there is most likely a good amount of duplication, as researchers copy each other's data for different reasons and then just hang on to the copies.

    I know about de-duping tools like fdupes and rmlint. I'm trying to find one that will work on such a large dataset. I don't care if it takes weeks (or maybe even months) to crawl all the data; I'll probably throttle it anyway to go easy on the filesystems. But I need to find a tool that's either somehow super-efficient with RAM, or can store all the intermediary data it needs in files rather than RAM. I'm assuming that my RAM (64GB) will be exhausted if I crawl through all this data as one set.

    I'm experimenting with fdupes now on a 900GB tree. It's 25% of the way through, and RAM usage has been slowly creeping up the whole time; now it's at 700MB.

    Or, is there a way to direct a process to use disk-mapped RAM so there's much more available and it doesn't use system RAM? I'm running CentOS 6.

    Read the article
