Search Results

  • Authoritative sources about Database vs. Flatfile decision

    - by FastAl
    <tldr>Looking for a reference to a book or other undeniably authoritative source that gives reasons for when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr>
    I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary search). Normally, a database would be an OK solution even though the problem does not require one: none of the elements of ACID are needed, as it is read-only data, updated about every 3-5 years (also requiring other source-code changes), and it fits in memory and can be keyed into via binary search (a tad faster than a db, but speed is not an issue). The problem is that this code runs on our enterprise server but is shared with several PC platforms (some disconnected, some using a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department. These hit their own platform's version of their databases (containing their own copy of the static data). What happens is that with every implementation and every little change, something different goes wrong. There are many other issues as well. I can't even use a flatfile, because one mode of running on our enterprise server does not have permission to read files (only databases and, of course, its own literal storage, e.g., an in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts.
    So why don't we just change it? I don't have the administrative ability to force a change. But I'm affected, because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers, and d*mmit that makes me mad! The reason neither management nor the designers of the system can see the problem is that they propose a solution that won't work: increase communication, implement more safeguards and standards, etc. But every time, in a different part of the already-pared-down but still multi-step process, a few different diligent, hard-working, top-performing IT personnel make a unique, subtle error that causes it to fail, sometimes after the last round of testing! And in general these are not single-person failures, but understandable miscommunications. And communication at our company is actually better than most; people just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform, multi-source, multi-language project is bureaucratically unmaintainable. Impossible. No chance. At least with human beings in the loop, and it can't be automated. In addition, the management and developers who could change this, though intelligent and capable, don't understand the rigidity of this 'how humans are' issue, and are not convincible on the matter.
    The reason putting the static data in source code will solve the problem is that, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of source code already works very well, you basically erase all database-related effort from this section of the project, along with all the drawbacks of it that are causing problems. OK, that's the background, for the curious. I won't be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd-party component that you can't change. So what I have to do is expose the unsuitableness of the database solution, and do it not using logic, but rather authority. I am aware of many reasons, and of posts on this site giving reasons for one over the other; I'm not looking for lists of reasons like these (although you can add a comment if I've missed a doozy):
    WHY USE A DATABASE (instead of a flatfile or other storage)? If you need...
    - Random reads / transparent search optimization
    - Advanced / varied / customizable searching and sorting capabilities
    - Transactions / rollback
    - Locks, semaphores
    - Concurrency control / shared users
    - Security
    - 1-many / many-many relationships made easier
    - Easy modification
    - Scalability
    - Load balancing
    - Random updates / inserts / deletes
    - Advanced queries
    - Administrative control of design, etc.
    - SQL / learning curve
    - Debugging / logging
    - Centralized / live backup capabilities
    - Cached queries / developed & cached execution plans
    - Interleaved updates and reads
    - Referential integrity; avoiding redundant / missing / corrupt / out-of-sync data
    - Reporting (from an OLAP or OLTP db) / turnkey generation tools
    Disadvantages:
    - Important to get right the first time (professional design), but only because it's meant to last
    - Software & hardware cost
    - Usually over a network, a speed issue (even the best design run locally still requires a separate process: marshalling, network layers, inter-process communication)
    - Indices and query processing can stand in the way of simple processing (vs. a flatfile)
    WHY USE A FLATFILE? If you only need...
    - Sequential row processing only
    - Limited usage: append only (no reading, no master key/update)
    - Only updating the record you're reading (fixed-length records only)
    - Data too big to fit into memory
    - Local disk / read-ahead network connection
    - Portability / small systems
    - Email / cut & paste / storage as a document by a novice (simple format)
    - Low design learning curve, but high cost later
    WHY USE AN IN-MEMORY TABLE (tables, arrays, etc.)? If you need...
    - Processing of a single db/flatfile record that was imported
    - A known size of data
    - Static data, if hardcoding the table
    - Narrow, unchanging use (e.g., one program or proc) - includes a class that will be shared but encapsulates its data manipulation
    - Extreme speed / high transaction frequency
    - Random access - but search is dependent on implementation
    Following are some other posts about the topic:
    http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over
    http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good
    http://stackoverflow.com/questions/2356851/database-vs-flat-files
    http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530
    What I'd like to know is whether anybody can recommend a hard, authoritative source containing these reasons: a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites.
    This will have a greater chance of eliciting the change I'm looking for: less wasted programmer time and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!
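
    The generated-code approach the poster describes is easy to picture with a small sketch. A minimal Java version (all names and values hypothetical), using a sorted static array and Arrays.binarySearch for the read-only lookup:

        import java.util.Arrays;

        // Hypothetical auto-generated lookup table: read-only data that is
        // regenerated every few years along with other source changes.
        public final class RateTable {
            // Keys must stay sorted: Arrays.binarySearch assumes sorted input.
            private static final String[] KEYS   = { "A100", "B200", "C300" };
            private static final double[] VALUES = { 1.25,   2.50,   3.75  };

            private RateTable() {}

            // Returns the value for the key, or null if the key is absent.
            public static Double lookup(String key) {
                int i = Arrays.binarySearch(KEYS, key);
                if (i >= 0) return VALUES[i];
                return null;
            }
        }

    Because the table ships inside the source, every platform that already shares the code gets the same copy of the static data, which is exactly the property the poster is after.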

    Read the article

  • PDF Corruption When Sending with Microsoft Products

    - by Winner
    I have the same PDF corruption problem in two different offices that I provide tech support for.
    Office 1: This started in the middle of December. A PDF received from outside the office is viewable with no problems; I have no control over how it is created. But if it is forwarded to anyone else, the PDF arrives corrupted. I have forwarded it to multiple people in the office and tried viewing it with Reader 8, Reader 9, Sumatra and Foxit. I have also tried forwarding it to Gmail, and Gmail's viewer says it is corrupted. If I save the PDF and create a new email, it will be corrupted when sent using Outlook 2003, Outlook 2007, Windows Live Mail or Outlook Express. If I create the email using Thunderbird 3, Gmail or IClient (the web client for Ipswitch IMail), it will not be corrupted. I have confirmed the same results when using our IMail SMTP server and also when using Gmail as the SMTP server. To be clear: if the message is created in Thunderbird, Gmail or IClient and received in any of the MS products, it is viewable. This office receives PDFs daily from multiple sources, and only a small subset have this problem. So far the problem PDFs come from two different companies they deal with, but not all of the PDFs from those companies are bad.
    Office 2: PDFs are created by a management system; I'm not sure what engine is used to create them. Exactly the same issues.
    At both offices I noticed that the file size is wrong. For one small PDF, the proper size is 12 KB when it's viewable; when it shows up corrupted, it is only 8 KB. We handle the email for both offices. Both are POP servers, not Exchange. IMail was updated after these issues started. I have tried different SMTP servers, and it still seems to happen only when Microsoft products are used to send. Is anyone else having problems with PDFs getting corrupted? Any ideas on how to find a resolution?
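
    Given that the corrupted copies are consistently smaller (8 KB vs. 12 KB), the attachments look truncated rather than merely re-encoded. A rough Java sketch of a sanity check you could run on a saved attachment - purely illustrative, not a fix - is to verify the file still carries the "%PDF-" header and an "%%EOF" marker near the end:

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class PdfSanityCheck {
            // Returns true if the file has the "%PDF-" header and an
            // "%%EOF" marker somewhere in its last kilobyte.
            public static boolean looksIntact(String path) throws IOException {
                byte[] bytes = Files.readAllBytes(Paths.get(path));
                if (bytes.length < 5) return false;
                String head = new String(bytes, 0, 5, StandardCharsets.US_ASCII);
                int from = Math.max(0, bytes.length - 1024);
                String tail = new String(bytes, from, bytes.length - from,
                                         StandardCharsets.US_ASCII);
                return head.startsWith("%PDF-") && tail.contains("%%EOF");
            }

            public static void main(String[] args) throws IOException {
                System.out.println(looksIntact(args[0]));
            }
        }

    Comparing the result for a good copy against a forwarded copy would at least confirm whether the Microsoft clients are cutting the attachment short.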

    Read the article

  • How to install missing Sound Drivers in Ubuntu?

    - by Sakamoto Kazuma
    I seem to be missing the sound drivers for my Gateway MA7 laptop. I have looked in System > Administration > Hardware Drivers, but nothing shows up in there. There are also no devices listed in Sound > Hardware. I'm guessing at this point that I don't have the driver installed. However, I get the following output:
    admin@machine001:~$ cat /proc/asound/cards
    0 [Intel ]: HDA-Intel - HDA Intel
                HDA Intel at 0xd8240000 irq 22
    admin@machine001:~$
    And my lspci shows:
    00:00.0 Host bridge: Intel Corporation Mobile 945GM/PM/GMS, 943/940GML and 945GT Express Memory Controller Hub (rev 03)
    00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03)
    00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)
    00:1b.0 Audio device: Intel Corporation 82801G (ICH7 Family) High Definition Audio Controller (rev 02)
    00:1c.0 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 1 (rev 02)
    00:1c.1 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 2 (rev 02)
    00:1d.0 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 (rev 02)
    00:1d.1 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 (rev 02)
    00:1d.2 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 (rev 02)
    00:1d.3 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 (rev 02)
    00:1d.7 USB Controller: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller (rev 02)
    00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev e2)
    00:1f.0 ISA bridge: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge (rev 02)
    00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 02)
    00:1f.2 SATA controller: Intel Corporation 82801GBM/GHM (ICH7 Family) SATA AHCI Controller (rev 02)
    00:1f.3 SMBus: Intel Corporation 82801G (ICH7 Family) SMBus Controller (rev 02)
    02:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8038 PCI-E Fast Ethernet Controller (rev 14)
    03:00.0 Network controller: Intel Corporation PRO/Wireless 3945ABG [Golan] Network Connection (rev 02)
    04:09.0 CardBus bridge: Texas Instruments PCIxx12 Cardbus Controller
    04:09.1 FireWire (IEEE 1394): Texas Instruments PCIxx12 OHCI Compliant IEEE 1394 Host Controller
    04:09.2 Mass storage controller: Texas Instruments 5-in-1 Multimedia Card Reader (SD/MMC/MS/MS PRO/xD)
    I have also checked alsamixer, and nothing is muted. No headphones are plugged into the headphone jack either. So the question now is: how do I get sound to work on my laptop? It doesn't work in any application.

    Read the article

  • debian packages version convention

    - by JackWu
    I'm using Debian/Ubuntu and am confused about package versions. When I run the dpkg -l command, I get:
    ii vim 2:7.3.429-2ubuntu2.1 Vi IMproved - enhanced vi editor
    ii vim-common 2:7.3.429-2ubuntu2.1 Vi IMproved - Common files
    ii vim-runtime 2:7.3.429-2ubuntu2.1 Vi IMproved - Runtime files
    ii vim-tiny 2:7.3.429-2ubuntu2.1 Vi IMproved - enhanced vi editor - compact version
    ii virt-what 1.11-1 detect if we are running in a virtual machine
    ii w3m 0.5.3-5ubuntu1 WWW browsable pager with excellent tables/frames support
    ii watershed 6 reduce superfluous executions of idempotent command
    ii wget 1.13.4-2ubuntu1 retrieves files from the web
    ii whiptail 0.52.11-2ubuntu10 Displays user-friendly dialog boxes from shell scripts
    ii whoopsie 0.1.33 Ubuntu crash database submission daemon
    ii wimlib9 1.5.0-1~webupd8~precise Library to extract, create, modify, and mount WIM files
    ii wimtools 1.5.0-1~webupd8~precise Tools to extract, create, modify, and mount WIM files
    ii wireless-tools 30~pre9-5ubuntu2 Tools for manipulating Linux Wireless Extensions
    ii wpasupplicant 0.7.3-6ubuntu2.1 client support for WPA and WPA2 (IEEE 802.11i)
    ii x11-common 1:7.6+12ubuntu2 X Window System (X.Org) infrastructure
    ii x11-utils 7.6+4ubuntu0.1 X11 utilities
    ii xauth 1:1.0.6-1 X authentication utility
    ii xbitmaps 1.1.1-1 Base X bitmaps
    ii xclip 0.12-1 command line interface to X selections
    ii xfonts-encodings 1:1.0.4-1ubuntu1 Encodings for X.Org fonts
    ii xfonts-utils 1:7.6+1 X Window System font utility programs
    ii xkb-data 2.5-1ubuntu1.3 X Keyboard Extension (XKB) configuration data
    ii xml-core 0.13 XML infrastructure and XML catalog file support
    rc xpdf 3.02-21build1 Portable Document Format (PDF) reader
    ii xterm 271-1ubuntu2.1 X terminal emulator
    ii xz-lzma 5.1.1alpha+20110809-3 XZ-format compression utilities - compatibility commands
    ii xz-utils 5.1.1alpha+20110809-3 XZ-format compression utilities
    ii zabbix-agent 1:1.8.11-1 network monitoring solution - agent
    ii zlib1g 1:1.2.3.4.dfsg-3ubuntu4 compression library - runtime
    ii zlib1g-dev 1:1.2.3.4.dfsg-3ubuntu4 compression library - development
    ii zsh 4.3.17-1ubuntu1 shell with lots of features
    The third column is the version, but it's all mixed up in a way I can't understand; different packages use totally different naming conventions. Here are my main questions:
    - Why do some versions have ubuntu in them and others not?
    - What do the special characters -, ~ and + mean? What are alpha, build and dfsg - can I just use them casually?
    - vim and some other packages have a 2: prefix; what does that mean?
    - How does version comparison work, given that the formats can be so different?
    Can anyone explain this to me, or point me at an official document? Thanks in advance.
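
    For orientation (a sketch, not a substitute for the official rules): a Debian version has the shape epoch:upstream_version-debian_revision. The epoch (the 2: on vim) exists only to override ordinary ordering when a versioning scheme changes; the part after the last - is the packager's revision, and an ubuntu marker there means an Ubuntu-modified revision of the Debian package. A tilde sorts before everything, even the empty string, which is why 1.5.0-1~webupd8~precise sorts before 1.5.0-1. A minimal Java sketch of splitting a version string into its three parts:

        public class DebVersion {
            // Splits "epoch:upstream-revision" per the Debian convention:
            // epoch = text before the first ':' (default "0"),
            // revision = text after the *last* '-' (may be absent).
            public static String[] split(String v) {
                int colon = v.indexOf(':');
                String epoch = colon >= 0 ? v.substring(0, colon) : "0";
                String rest  = colon >= 0 ? v.substring(colon + 1) : v;
                int dash = rest.lastIndexOf('-');
                String upstream = dash >= 0 ? rest.substring(0, dash) : rest;
                String revision = dash >= 0 ? rest.substring(dash + 1) : "";
                return new String[] { epoch, upstream, revision };
            }

            public static void main(String[] args) {
                // "2:7.3.429-2ubuntu2.1" -> epoch 2, upstream 7.3.429,
                // revision 2ubuntu2.1
                for (String part : split("2:7.3.429-2ubuntu2.1"))
                    System.out.println(part);
            }
        }

    Markers like +dfsg (repackaged to satisfy the Debian Free Software Guidelines), alpha and build are just conventional strings inside those parts. The authoritative comparison rules are in Debian Policy section 5.6.12, and dpkg --compare-versions lets you test how any two versions order from the shell.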

    Read the article

  • C# delegates problem

    - by Mick Taylor
    Hello, I am getting the following error from my C# Windows application:
    Error 1: No overload for 'CreateLabelInPanel' matches delegate 'WorksOrderStore.ProcessDbConnDetailsDelegate' H:\c\WorksOrderFactory\WorksOrderFactory\WorksOrderClient.cs 43 39 WorksOrderFactory
    I have 3 .cs files that essentially:
    - Open a window
    - Give the user an option to connect to a db
    - When that is selected, go off and connect to the db and load some data in (just test data for now)
    - Then, using a delegate, do something, which for testing will be to create a label.
    However, I haven't coded that last part yet, and I can't build until I get this error sorted. The 3 files are called:
    - WorksOrderClient.cs (which is the MAIN)
    - WorksOrderStore.cs
    - LoginBox.cs
    Here's the code for each file:
    WorksOrderClient.cs
    using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using WorksOrderStore; namespace WorksOrderFactory { using WorksOrderStore; public partial class WorksOrderClient : Form { LoginBox lb = new LoginBox(); private static WorksOrderDB wodb = new WorksOrderDB(); private static int num_conns = 0; public WorksOrderClient() { InitializeComponent(); } private void connectToADBToolStripMenuItem_Click(object sender, EventArgs e) { lb.ShowDialog(); lb.Visible = true; } public static bool createDBConnDetObj(string username, string password, string database) { // increase the number of connections num_conns = num_conns + 1; // create the connection object wodb.AddDbConnDetails(username, password, database, num_conns); // create a new delegate object associated with the static // method WorksOrderClient.createLabelInPanel wodb.ProcessDbConnDetails(new ProcessDbConnDetailsDelegate(CreateLabelInPanel)); return true; } static void CreateLabelInPanel(DbConnDetails dbcd) { Console.Write("hellO"); string tmp = (string)dbcd.username; //Console.Write(tmp); } private void WorksOrderClient_Load(object sender, EventArgs e) { } } }
    WorksOrderStore.cs
    using System; using System.Collections.Generic; using System.Linq; using System.Text; using WorksOrderFactory; namespace WorksOrderStore { using System.Collections; // Describes a book in the book list: public struct WorksOrder { public string contractor_code { get; set; } // contractor ID public string email_address { get; set; } // contractors email address public string date_issued { get; set; } // date the works order was issued public string wo_ref { get; set; } // works order ref public string status { get; set; } // status ... not used public job_status js { get; set; } // status of this worksorder within this system public WorksOrder(string contractor_code, string email_address, string date_issued, string wo_ref) : this() { this.contractor_code = contractor_code; this.email_address = email_address; this.date_issued = date_issued; this.wo_ref = wo_ref; this.js = job_status.Pending; } } // Declare a delegate type for processing a WorksOrder: //public delegate void ProcessWorksOrderDelegate(WorksOrder worksorder); // Maintains a worksorder database.
public class WorksOrderDB { // List of all worksorders in the database: ArrayList list = new ArrayList(); // Add a worksorder to the database: public void AddWorksOrder(string contractor_code, string email_address, string date_issued, string wo_ref) { list.Add(new WorksOrder(contractor_code, email_address, date_issued, wo_ref)); } // Call a passed-in delegate on each pending works order to process it: /*public void ProcessPendingWorksOrders(ProcessWorksOrderDelegate processWorksOrder) { foreach (WorksOrder wo in list) { if (wo.js.Equals(job_status.Pending)) // Calling the delegate: processWorksOrder(wo); } }*/ // Add a DbConnDetails to the database: public void AddDbConnDetails(string username, string password, string database, int conn_num) { list.Add(new DbConnDetails(username, password, database, conn_num)); } // Call a passed-in delegate on each dbconndet to process it: public void ProcessDbConnDetails(ProcessDbConnDetailsDelegate processDBConnDetails) { foreach (DbConnDetails wo in list) { processDBConnDetails(wo); } } } // statuses for worksorders in this system public enum job_status { Pending, InProgress, Completed } public struct DbConnDetails { public string username { get; set; } // username public string password { get; set; } // password public string database { get; set; } // database public int conn_num { get; set; } // this objects connection number. public ArrayList woList { get; set; } // list of works orders for this connection // this constructor just sets the db connection details // the woList array will get created later .. not a lot later but a bit. public DbConnDetails(string username, string password, string database, int conn_num) : this() { this.username = username; this.password = password; this.database = database; this.conn_num = conn_num; woList = new ArrayList(); } } // Declare a delegate type for processing a DbConnDetails: public delegate void ProcessDbConnDetailsDelegate(DbConnDetails dbConnDetails); } and LoginBox.cs using System; using System.Collections.Generic; using System.ComponentModel; using System.Drawing; using System.Data; using System.Linq; using System.Text; using System.Windows.Forms; namespace WorksOrderFactory { public partial class LoginBox : Form { public LoginBox() { InitializeComponent(); } private void LoginBox_Load(object sender, EventArgs e) { this.Visible = true; this.Show(); //usernameText.Text = "Username"; //new Font(usernameText.Font, FontStyle.Italic); } private void cancelBtn_Click(object sender, EventArgs e) { this.Close(); } private void loginBtn_Click(object sender, EventArgs e) { // set up a connection details object. bool success = WorksOrderClient.createDBConnDetObj(usernameText.Text, passwordText.Text, databaseText.Text); } private void LoginBox_Load_1(object sender, EventArgs e) { } } } Any ideas?? Cheers, m
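
    The compiler is refusing the method-group conversion because, as far as it can tell, CreateLabelInPanel's signature does not match the delegate's. A common cause of this particular error is the two sides resolving DbConnDetails to different types (for example, an older duplicate of the struct somewhere else in the solution); the snippet as posted looks consistent, so that is worth checking rather than a certain diagnosis. The constraint itself is the same one Java enforces for functional interfaces, where a method reference compiles only if it matches the interface's single method exactly. A rough Java analogy (hypothetical names):

        // ~ the C# delegate type: one method taking one DbConnDetails.
        interface ProcessDbConnDetailsDelegate {
            void process(DbConnDetails d);
        }

        class DbConnDetails {
            String username;
        }

        public class Client {
            // Matches the interface: one DbConnDetails in, void out.
            static void createLabelInPanel(DbConnDetails d) {
                System.out.println(d.username);
            }

            public static void main(String[] args) {
                ProcessDbConnDetailsDelegate p = Client::createLabelInPanel; // compiles
                // A method with any other parameter type would fail on this
                // line, which is the Java counterpart of the C# error above.
                DbConnDetails d = new DbConnDetails();
                d.username = "hello";
                p.process(d);
            }
        }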

    Read the article

  • USB hard drive not recognized

    - by user318772
    Until recently I was using this portable USB hard drive with my Windows 7 laptop and my Ubuntu laptop. Suddenly neither laptop recognizes it. This is what I get from lsusb:
    Bus 001 Device 004: ID 1058:1010 Western Digital Technologies, Inc. Elements External HDD
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 003: ID 0b97:7762 O2 Micro, Inc. Oz776 SmartCard Reader
    Bus 003 Device 002: ID 0b97:7761 O2 Micro, Inc. Oz776 1.1 Hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 002 Device 002: ID 413c:a005 Dell Computer Corp. Internal 2.0 Hub
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    fdisk doesn't show the external hard drive:
    Disk /dev/sda: 80.0 GB, 80026361856 bytes
    255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0004a743
    Device Boot Start End Blocks Id System
    /dev/sda1 * 2048 152111103 76054528 83 Linux
    /dev/sda2 152113150 156301311 2094081 5 Extended
    /dev/sda5 152113152 156301311 2094080 82 Linux swap / Solaris
    When I run testdisk:
    TestDisk 6.14, Data Recovery Utility, July 2013
    Christophe GRENIER <[email protected]>
    http://www.cgsecurity.org
    TestDisk is free software, and comes with ABSOLUTELY NO WARRANTY.
    Select a media (use Arrow keys, then press Enter):
    >Disk /dev/sda - 80 GB / 74 GiB - ST980825AS
    Disk /dev/sdb - 2199 GB / 2048 GiB
    Choosing testdisk > Intel > Analyse, I get a partition error:
    Disk /dev/sdb - 2199 GB / 2048 GiB - CHS 2097152 64 32
    Current partition structure:
    Partition Start End Size in sectors
    Partition: Read error
    Here is the output of dmesg:
    [11948.549171] Add. Sense: Invalid command operation code
    [11948.549177] sd 2:0:0:0: [sdb] CDB:
    [11948.549181] Read(16): 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00
    [11948.550489] sd 2:0:0:0: [sdb] Invalid command failure
    [11948.550495] sd 2:0:0:0: [sdb]
    [11948.550499] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [11948.550505] sd 2:0:0:0: [sdb]
    [11948.550508] Sense Key : Illegal Request [current]
    [11948.550514] Info fld=0x0
    [11948.550519] sd 2:0:0:0: [sdb]
    [11948.550525] Add. Sense: Invalid command operation code
    [11948.550531] sd 2:0:0:0: [sdb] CDB:
    [11948.550534] Read(16): 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00
    [11948.551870] sd 2:0:0:0: [sdb] Invalid command failure
    [11948.551876] sd 2:0:0:0: [sdb]
    [11948.551880] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [11948.551885] sd 2:0:0:0: [sdb]
    [11948.551888] Sense Key : Illegal Request [current]
    [11948.551895] Info fld=0x0
    [11948.551900] sd 2:0:0:0: [sdb]
    [11948.551905] Add. Sense: Invalid command operation code
    [11948.551911] sd 2:0:0:0: [sdb] CDB:
    [11948.551914] Read(16): 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00
    If possible I want to retrieve at least some data from this hard drive. If that's not possible, I would like to format it and use it. Any help will be greatly appreciated. Thanks

    Read the article

  • Passing data between android ListActivities in Java

    - by Will Janes
    I am new to Android! I am having a problem getting this code to work... Basically I Go from one list activity to another and pass the text from a list item through the intent of the activity to the new list view, then retrieve that text in the new list activity and then preform a http request based on value of that list item. Log Cat 04-05 17:47:32.370: E/AndroidRuntime(30135): FATAL EXCEPTION: main 04-05 17:47:32.370: E/AndroidRuntime(30135): java.lang.ClassCastException:android.widget.LinearLayout 04-05 17:47:32.370: E/AndroidRuntime(30135): at com.thickcrustdesigns.ufood.CatogPage$1.onItemClick(CatogPage.java:66) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.widget.AdapterView.performItemClick(AdapterView.java:284) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.widget.ListView.performItemClick(ListView.java:3731) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.widget.AbsListView$PerformClick.run(AbsListView.java:1959) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.os.Handler.handleCallback(Handler.java:587) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.os.Handler.dispatchMessage(Handler.java:92) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.os.Looper.loop(Looper.java:130) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.app.ActivityThread.main(ActivityThread.java:3691) 04-05 17:47:32.370: E/AndroidRuntime(30135): at java.lang.reflect.Method.invokeNative(Native Method) 04-05 17:47:32.370: E/AndroidRuntime(30135): at java.lang.reflect.Method.invoke(Method.java:507) 04-05 17:47:32.370: E/AndroidRuntime(30135): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:907) 04-05 17:47:32.370: E/AndroidRuntime(30135): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:665) 04-05 17:47:32.370: E/AndroidRuntime(30135): at dalvik.system.NativeStart.main(Native Method) ListActivity 1 package com.thickcrustdesigns.ufood; import java.util.ArrayList; import org.apache.http.NameValuePair; import org.apache.http.message.BasicNameValuePair; import org.json.JSONException; import org.json.JSONObject; import android.app.ListActivity; import android.content.Intent; import android.os.Bundle; import android.view.View; import android.widget.AdapterView; import android.widget.AdapterView.OnItemClickListener; import android.widget.Button; import android.widget.ListView; import android.widget.TextView; public class CatogPage extends ListActivity { ListView listView1; Button btn_bk; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.definition_main); btn_bk = (Button) findViewById(R.id.btn_bk); listView1 = (ListView) findViewById(android.R.id.list); ArrayList<NameValuePair> nvp = new ArrayList<NameValuePair>(); nvp.add(new BasicNameValuePair("request", "categories")); ArrayList<JSONObject> jsondefs = Request.fetchData(this, nvp); String[] defs = new String[jsondefs.size()]; for (int i = 0; i < jsondefs.size(); i++) { try { defs[i] = jsondefs.get(i).getString("Name"); } catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); } } uFoodAdapter adapter = new uFoodAdapter(this, R.layout.definition_list, defs); listView1.setAdapter(adapter); ListView lv = getListView(); lv.setOnItemClickListener(new OnItemClickListener() { @Override public void onItemClick(AdapterView<?> parent, View view, int position, long id) { TextView tv = (TextView) view; String p = tv.getText().toString(); Intent i = new Intent(getApplicationContext(), 
Results.class); i.putExtra("category", p); startActivity(i); } }); btn_bk.setOnClickListener(new View.OnClickListener() { public void onClick(View arg0) { Intent i = new Intent(getApplicationContext(), UFoodAppActivity.class); startActivity(i); } }); } } **ListActivity 2** package com.thickcrustdesigns.ufood; import java.util.ArrayList; import org.apache.http.NameValuePair; import org.apache.http.message.BasicNameValuePair; import org.json.JSONException; import org.json.JSONObject; import android.app.ListActivity; import android.os.Bundle; import android.widget.ListView; public class Results extends ListActivity { ListView listView1; enum Category { Chicken, Beef, Chinese, Cocktails, Curry, Deserts, Fish, ForOne { public String toString() { return "For One"; } }, Lamb, LightBites { public String toString() { return "Light Bites"; } }, Pasta, Pork, Vegetarian } @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); this.setContentView(R.layout.definition_main); listView1 = (ListView) findViewById(android.R.id.list); Bundle data = getIntent().getExtras(); String category = data.getString("category"); Category cat = Category.valueOf(category); String value = null; switch (cat) { case Chicken: value = "Chicken"; break; case Beef: value = "Beef"; break; case Chinese: value = "Chinese"; break; case Cocktails: value = "Cocktails"; break; case Curry: value = "Curry"; break; case Deserts: value = "Deserts"; break; case Fish: value = "Fish"; break; case ForOne: value = "ForOne"; break; case Lamb: value = "Lamb"; break; case LightBites: value = "LightBites"; break; case Pasta: value = "Pasta"; break; case Pork: value = "Pork"; break; case Vegetarian: value = "Vegetarian"; } ArrayList<NameValuePair> nvp = new ArrayList<NameValuePair>(); nvp.add(new BasicNameValuePair("request", "category")); nvp.add(new BasicNameValuePair("cat", value)); ArrayList<JSONObject> jsondefs = Request.fetchData(this, nvp); String[] defs = new String[jsondefs.size()]; for (int i = 0; i < jsondefs.size(); i++) { try { defs[i] = jsondefs.get(i).getString("Name"); } catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); } } uFoodAdapter adapter = new uFoodAdapter(this, R.layout.definition_list, defs); listView1.setAdapter(adapter); } } Request package com.thickcrustdesigns.ufood; import java.io.BufferedReader; import java.io.InputStream; import java.io.InputStreamReader; import java.util.ArrayList; import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.NameValuePair; import org.apache.http.client.HttpClient; import org.apache.http.client.entity.UrlEncodedFormEntity; import org.apache.http.client.methods.HttpPost; import org.apache.http.impl.client.DefaultHttpClient; import org.json.JSONArray; import org.json.JSONObject; import android.content.Context; import android.util.Log; import android.widget.Toast; public class Request { @SuppressWarnings("null") public static ArrayList<JSONObject> fetchData(Context context, ArrayList<NameValuePair> nvp) { ArrayList<JSONObject> listItems = new ArrayList<JSONObject>(); InputStream is = null; try { HttpClient httpclient = new DefaultHttpClient(); HttpPost httppost = new HttpPost( "http://co350-11d.projects02.glos.ac.uk/php/database.php"); httppost.setEntity(new UrlEncodedFormEntity(nvp)); HttpResponse response = httpclient.execute(httppost); HttpEntity entity = response.getEntity(); is = entity.getContent(); } catch (Exception e) { Log.e("log_tag", "Error in http connection" + 
e.toString()); } // convert response to string String result = ""; try { BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); InputStream stream = null; StringBuilder sb = null; while ((result = reader.readLine()) != null) { sb.append(result + "\n"); } stream.close(); result = sb.toString(); } catch (Exception e) { Log.e("log_tag", "Error converting result " + e.toString()); } try { JSONArray jArray = new JSONArray(result); for (int i = 0; i < jArray.length(); i++) { JSONObject jo = jArray.getJSONObject(i); listItems.add(jo); } } catch (Exception e) { Toast.makeText(context.getApplicationContext(), "None Found!", Toast.LENGTH_LONG).show(); } return listItems; } } Any help would be gratefully received! Many thanks.
    EDIT: Sorry, very tired, so I missed out my 2nd ListActivity at first - it's the Results class included above. Sorry again! Cheers guys!
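
    The LogCat points at the cast in CatogPage's onItemClick: the view handed to the listener is the whole inflated row layout (here a LinearLayout from definition_list), so casting it straight to TextView throws exactly this ClassCastException. A sketch of the usual fix for that listener - R.id.text_item is a hypothetical id for the TextView inside the row layout:

        lv.setOnItemClickListener(new OnItemClickListener() {
            @Override
            public void onItemClick(AdapterView<?> parent, View view,
                                    int position, long id) {
                // Option 1: skip the row view entirely and ask the adapter
                // for the item backing this position (a String here, since
                // the adapter was built from the defs String array).
                String p = parent.getItemAtPosition(position).toString();

                // Option 2: dig the TextView out of the row layout first.
                // TextView tv = (TextView) view.findViewById(R.id.text_item);
                // String p = tv.getText().toString();

                Intent i = new Intent(getApplicationContext(), Results.class);
                i.putExtra("category", p);
                startActivity(i);
            }
        });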

    Read the article

  • Get data from MySQL to Android application

    - by Mona
    I want to get data from a MySQL database using PHP and display it in an Android activity. I have coded it to pass a JSON array, but there is a problem: I don't know how to connect to the server, and my whole database is on a local server. Kindly tell me where I have gone wrong so I can get the correct results. I'll be very thankful to you. My PHP code is: <?php $response = array(); require_once __DIR__ . '/db_connect.php'; $db = new DB_CONNECT(); if (isset($_GET["cid"])) { $cid = $_GET['cid']; // get a product from products table $result = mysql_query("SELECT *FROM my_task WHERE cid = $cid"); if (!empty($result)) { // check for empty result if (mysql_num_rows($result) > 0) { $result = mysql_fetch_array($result); $task = array(); $task["cid"] = $result["cid"]; $task["cus_name"] = $result["cus_name"]; $task["contact_number"] = $result["contact_number"]; $task["ticket_no"] = $result["ticket_no"]; $task["task_detail"] = $result["task_detail"]; // success $response["success"] = 1; // user node $response["task"] = array(); array_push($response["my_task"], $task); // echoing JSON response echo json_encode($response); } else { // no task found $response["success"] = 0; $response["message"] = "No product found"; // echo no users JSON echo json_encode($response); } } else { // no task found $response["success"] = 0; $response["message"] = "No product found"; echo json_encode($response); } } else { $response["success"] = 0; $response["message"] = "Required field(s) is missing"; // echoing JSON response echo json_encode($response);} ?> My Android code is: public class My_Task extends Activity { TextView cus_name_txt, contact_no_txt, ticket_no_txt, task_detail_txt; EditText attend_by_txtbx, cus_name_txtbx, contact_no_txtbx, ticket_no_txtbx, task_detail_txtbx; Button btnSave; Button btnDelete; String cid; // Progress Dialog private ProgressDialog tDialog; // Creating JSON Parser object JSONParser jParser = new JSONParser(); ArrayList<HashMap<String, String>> my_taskList; // single task url private static final String url_read_mytask = "http://198.168.0.29/mobile/read_My_Task.php"; // url to update product private static final String url_update_mytask = "http://198.168.0.29/mobile/update_mytask.php"; // url to delete product private static final String url_delete_mytask = "http://198.168.0.29/mobile/delete_mytask.php"; // JSON Node names private static String TAG_SUCCESS = "success"; private static String TAG_MYTASK = "my_task"; private static String TAG_CID = "cid"; private static String TAG_NAME = "cus_name"; private static String TAG_CONTACT = "contact_number"; private static String TAG_TICKET = "ticket_no"; private static String TAG_TASKDETAIL = "task_detail"; private static String attend_by_txt; // task JSONArray JSONArray my_task = null; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.my_task); cus_name_txt = (TextView) findViewById(R.id.cus_name_txt); contact_no_txt = (TextView)findViewById(R.id.contact_no_txt); ticket_no_txt = (TextView)findViewById(R.id.ticket_no_txt); task_detail_txt = (TextView)findViewById(R.id.task_detail_txt); attend_by_txtbx = (EditText)findViewById(R.id.attend_by_txt); attend_by_txtbx.setText(My_Task.attend_by_txt); Spinner severity = (Spinner) findViewById(R.id.severity_spinner); // Create an ArrayAdapter using the string array and a default spinner layout ArrayAdapter<CharSequence> adapter3 = ArrayAdapter.createFromResource(this, R.array.Severity_array, android.R.layout.simple_dropdown_item_1line); // Specify the layout
to use when the list of choices appears adapter3.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); // Apply the adapter to the spinner severity.setAdapter(adapter3); // save button btnSave = (Button) findViewById(R.id.btnSave); btnDelete = (Button) findViewById(R.id.btnDelete); // getting product details from intent Intent i = getIntent(); // getting product id (pid) from intent cid = i.getStringExtra(TAG_CID); // Getting complete product details in background thread new GetProductDetails().execute(); // save button click event btnSave.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View arg0) { // starting background task to update product new SaveProductDetails().execute(); } }); // Delete button click event btnDelete.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View arg0) { // deleting product in background thread new DeleteProduct().execute(); } }); } /** * Background Async Task to Get complete product details * */ class GetProductDetails extends AsyncTask<String, String, String> { /** * Before starting background thread Show Progress Dialog * */ @Override protected void onPreExecute() { super.onPreExecute(); tDialog = new ProgressDialog(My_Task.this); tDialog.setMessage("Loading task details. Please wait..."); tDialog.setIndeterminate(false); tDialog.setCancelable(true); tDialog.show(); } /** * Getting product details in background thread * */ protected String doInBackground(String... params) { // updating UI from Background Thread runOnUiThread(new Runnable() { public void run() { // Check for success tag int success; try { // Building Parameters List<NameValuePair> params = new ArrayList<NameValuePair>(); params.add(new BasicNameValuePair("cid", cid)); // getting product details by making HTTP request // Note that product details url will use GET request JSONObject json = JSONParser.makeHttpRequest( url_read_mytask, "GET", params); // check your log for json response Log.d("Single Task Details", json.toString()); // json success tag success = json.getInt(TAG_SUCCESS); if (success == 1) { // successfully received product details JSONArray my_taskObj = json .getJSONArray(TAG_MYTASK); // JSON Array // get first product object from JSON Array JSONObject my_task = my_taskObj.getJSONObject(0); // task with this cid found // Edit Text // display task data in EditText cus_name_txtbx = (EditText) findViewById(R.id.cus_name_txt); cus_name_txtbx.setText(my_task.getString(TAG_NAME)); contact_no_txtbx = (EditText) findViewById(R.id.contact_no_txt); contact_no_txtbx.setText(my_task.getString(TAG_CONTACT)); ticket_no_txtbx = (EditText) findViewById(R.id.ticket_no_txt); ticket_no_txtbx.setText(my_task.getString(TAG_TICKET)); task_detail_txtbx = (EditText) findViewById(R.id.task_detail_txt); task_detail_txtbx.setText(my_task.getString(TAG_TASKDETAIL)); } else { // task with cid not found } } catch (JSONException e) { e.printStackTrace(); } } }); return null; } /** * After completing background task Dismiss the progress dialog * **/ protected void onPostExecute(String file_url) { // dismiss the dialog once got all details tDialog.dismiss(); } } /** * Background Async Task to Save product Details * */ class SaveProductDetails extends AsyncTask<String, String, String> { /** * Before starting background thread Show Progress Dialog * */ @Override protected void onPreExecute() { super.onPreExecute(); tDialog = new ProgressDialog(My_Task.this); tDialog.setMessage("Saving task ..."); tDialog.setIndeterminate(false); 
tDialog.setCancelable(true); tDialog.show(); } /** * Saving product * */ protected String doInBackground(String... args) { // getting updated data from EditTexts String cus_name = cus_name_txt.getText().toString(); String contact_no = contact_no_txt.getText().toString(); String ticket_no = ticket_no_txt.getText().toString(); String task_detail = task_detail_txt.getText().toString(); // Building Parameters List<NameValuePair> params = new ArrayList<NameValuePair>(); params.add(new BasicNameValuePair(TAG_CID, cid)); params.add(new BasicNameValuePair(TAG_NAME, cus_name)); params.add(new BasicNameValuePair(TAG_CONTACT, contact_no)); params.add(new BasicNameValuePair(TAG_TICKET, ticket_no)); params.add(new BasicNameValuePair(TAG_TASKDETAIL, task_detail)); // sending modified data through http request // Notice that update product url accepts POST method JSONObject json = JSONParser.makeHttpRequest(url_update_mytask, "POST", params); // check json success tag try { int success = json.getInt(TAG_SUCCESS); if (success == 1) { // successfully updated Intent i = getIntent(); // send result code 100 to notify about product update setResult(100, i); finish(); } else { // failed to update product } } catch (JSONException e) { e.printStackTrace(); } return null; } /** * After completing background task Dismiss the progress dialog * **/ protected void onPostExecute(String file_url) { // dismiss the dialog once product uupdated tDialog.dismiss(); } } /***************************************************************** * Background Async Task to Delete Product * */ class DeleteProduct extends AsyncTask<String, String, String> { /** * Before starting background thread Show Progress Dialog * */ @Override protected void onPreExecute() { super.onPreExecute(); tDialog = new ProgressDialog(My_Task.this); tDialog.setMessage("Deleting Product..."); tDialog.setIndeterminate(false); tDialog.setCancelable(true); tDialog.show(); } /** * Deleting product * */ protected String doInBackground(String... args) { // Check for success tag int success; try { // Building Parameters List<NameValuePair> params = new ArrayList<NameValuePair>(); params.add(new BasicNameValuePair("cid", cid)); // getting product details by making HTTP request JSONObject json = JSONParser.makeHttpRequest( url_delete_mytask, "POST", params); // check your log for json response Log.d("Delete Task", json.toString()); // json success tag success = json.getInt(TAG_SUCCESS); if (success == 1) { // product successfully deleted // notify previous activity by sending code 100 Intent i = getIntent(); // send result code 100 to notify about product deletion setResult(100, i); finish(); } } catch (JSONException e) { e.printStackTrace(); } return null; } /** * After completing background task Dismiss the progress dialog * **/ protected void onPostExecute(String file_url) { // dismiss the dialog once product deleted tDialog.dismiss(); } } public void onItemSelected(AdapterView<?> parent, View view, int pos, long id) { // An item was selected. 
You can retrieve the selected item using // parent.getItemAtPosition(pos) } public void onNothingSelected(AdapterView<?> parent) { // Another interface callback } } My JSONParser code is: public class JSONParser { static InputStream is = null; static JSONObject jObj = null; static String json = ""; // constructor public JSONParser() { } // function get json from url // by making HTTP POST or GET method public static JSONObject makeHttpRequest(String url, String method, List<NameValuePair> params) { // Making HTTP request try { // check for request method if(method == "POST"){ // request method is POST // defaultHttpClient DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); httpPost.setEntity(new UrlEncodedFormEntity(params)); HttpResponse httpResponse = httpClient.execute(httpPost); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); }else if(method == "GET"){ // request method is GET DefaultHttpClient httpClient = new DefaultHttpClient(); String paramString = URLEncodedUtils.format(params, "utf-8"); url += "?" + paramString; HttpGet httpGet = new HttpGet(url); HttpResponse httpResponse = httpClient.execute(httpGet); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); } } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); StringBuilder sb = new StringBuilder(); String line = null; while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } is.close(); json = sb.toString(); } catch (Exception e) { Log.e("Buffer Error", "Error converting result " + e.toString()); } // try parse the string to a JSON object try { jObj = new JSONObject(json); } catch (JSONException e) { Log.e("JSON Parser", "Error parsing data " + e.toString()); } // return JSON String return jObj; } }
    My whole database is on localhost, and the app does not open the activity - it displays a "Stopped unexpectedly" error :( How can I get the correct results? Kindly guide me.
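
    Two things stand out in the posted code, offered as observations rather than a definitive diagnosis. First, the PHP builds $response["task"] but pushes onto $response["my_task"], so the JSON the client parses may never contain the my_task array it looks for. Second, GetProductDetails wraps its HTTP request in runOnUiThread inside doInBackground, which puts the network call back on the main thread and defeats the AsyncTask. A sketch of the usual split, written as a drop-in inner class of My_Task and reusing the fields already defined there:

        class GetProductDetails extends AsyncTask<String, Void, JSONObject> {
            @Override
            protected void onPreExecute() {
                tDialog = new ProgressDialog(My_Task.this);
                tDialog.setMessage("Loading task details. Please wait...");
                tDialog.show();
            }

            @Override
            protected JSONObject doInBackground(String... args) {
                // Network and parsing stay on the worker thread.
                List<NameValuePair> params = new ArrayList<NameValuePair>();
                params.add(new BasicNameValuePair("cid", cid));
                return JSONParser.makeHttpRequest(url_read_mytask, "GET", params);
            }

            @Override
            protected void onPostExecute(JSONObject json) {
                // Views are only touched back on the UI thread.
                tDialog.dismiss();
                try {
                    if (json != null && json.getInt(TAG_SUCCESS) == 1) {
                        JSONObject task = json.getJSONArray(TAG_MYTASK).getJSONObject(0);
                        cus_name_txtbx = (EditText) findViewById(R.id.cus_name_txt);
                        cus_name_txtbx.setText(task.getString(TAG_NAME));
                    }
                } catch (JSONException e) {
                    e.printStackTrace();
                }
            }
        }

    It is also worth double-checking the 198.168.0.29 addresses in the URLs: if the server is on the local network, the private range is 192.168.x.x, so that may simply be a typo.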

    Read the article

  • Convert Java program to C

    - by imicrothinking
    I need a bit of guidance with writing a C program. A bit of quick background as to my level: I've programmed in Java previously, but this is my first time programming in C. We've been tasked to translate a word count program from Java to C that does the following:
    - Read a file into memory
    - Count the words in the file
    - For each occurrence of a unique word, keep a word counter variable
    - Print out the top ten most frequent words and their corresponding occurrences
    Here's the source program in Java:
    package lab0; import java.io.File; import java.io.FileReader; import java.util.ArrayList; import java.util.Calendar; import java.util.Collections; public class WordCount { private ArrayList<WordCountNode> outputlist = null; public WordCount(){ this.outputlist = new ArrayList<WordCountNode>(); } /** * Read the file into memory. * * @param filename name of the file. * @return content of the file. * @throws Exception if the file is too large or other file related exception. */ public char[] readFile(String filename) throws Exception{ char [] result = null; File file = new File(filename); long size = file.length(); if (size > Integer.MAX_VALUE){ throw new Exception("File is too large"); } result = new char[(int)size]; FileReader reader = new FileReader(file); int len, offset = 0, size2read = (int)size; while(size2read > 0){ len = reader.read(result, offset, size2read); if(len == -1) break; size2read -= len; offset += len; } return result; } /** * Make article word by word. * * @param article the content of file to be counted. * @return string contains only letters and "'". */ private enum SPLIT_STATE {IN_WORD, NOT_IN_WORD}; /** * Go through article, find all the words and add to output list * with their count. * * @param article the content of the file to be counted. * @return words in the file and their counts. */ public ArrayList<WordCountNode> countWords(char[] article){ SPLIT_STATE state = SPLIT_STATE.NOT_IN_WORD; if(null == article) return null; char curr_ltr; int curr_start = 0; for(int i = 0; i < article.length; i++){ curr_ltr = Character.toUpperCase( article[i]); if(state == SPLIT_STATE.IN_WORD){ article[i] = curr_ltr; if ((curr_ltr < 'A' || curr_ltr > 'Z') && curr_ltr != '\'') { article[i] = ' '; //printf("\nthe word is %s\n\n",curr_start); if(i - curr_start < 0){ System.out.println("i = " + i + " curr_start = " + curr_start); } addWord(new String(article, curr_start, i-curr_start)); state = SPLIT_STATE.NOT_IN_WORD; } }else{ if (curr_ltr >= 'A' && curr_ltr <= 'Z') { curr_start = i; article[i] = curr_ltr; state = SPLIT_STATE.IN_WORD; } } } return outputlist; } /** * Add the word to output list. */ public void addWord(String word){ int pos = dobsearch(word); if(pos >= outputlist.size()){ outputlist.add(new WordCountNode(1L, word)); }else{ WordCountNode tmp = outputlist.get(pos); if(tmp.getWord().compareTo(word) == 0){ tmp.setCount(tmp.getCount() + 1); }else{ outputlist.add(pos, new WordCountNode(1L, word)); } } } /** * Search the output list and return the position to put word. * @param word is the word to be put into output list. * @return position in the output list to insert the word.
*/ public int dobsearch(String word){ int cmp, high = outputlist.size(), low = -1, next; // Binary search the array to find the key while (high - low > 1) { next = (high + low) / 2; // all in upper case cmp = word.compareTo((outputlist.get(next)).getWord()); if (cmp == 0) return next; else if (cmp < 0) high = next; else low = next; } return high; } public static void main(String args[]){ // handle input if (args.length == 0){ System.out.println("USAGE: WordCount <filename> [Top # of results to display]\n"); System.exit(1); } String filename = args[0]; int dispnum; try{ dispnum = Integer.parseInt(args[1]); }catch(Exception e){ dispnum = 10; } long start_time = Calendar.getInstance().getTimeInMillis(); WordCount wordcount = new WordCount(); System.out.println("Wordcount: Running..."); // read file char[] input = null; try { input = wordcount.readFile(filename); } catch (Exception e) { // TODO Auto-generated catch block e.printStackTrace(); System.exit(1); } // count all word ArrayList<WordCountNode> result = wordcount.countWords(input); long end_time = Calendar.getInstance().getTimeInMillis(); System.out.println("wordcount: completed " + (end_time - start_time)/1000000 + "." + (end_time - start_time)%1000000 + "(s)"); System.out.println("wordsort: running ..."); start_time = Calendar.getInstance().getTimeInMillis(); Collections.sort(result); end_time = Calendar.getInstance().getTimeInMillis(); System.out.println("wordsort: completed " + (end_time - start_time)/1000000 + "." + (end_time - start_time)%1000000 + "(s)"); Collections.reverse(result); System.out.println("\nresults (TOP "+ dispnum +" from "+ result.size() +"):\n" ); // print out result String str ; for (int i = 0; i < result.size() && i < dispnum; i++){ if(result.get(i).getWord().length() > 15) str = result.get(i).getWord().substring(0, 14); else str = result.get(i).getWord(); System.out.println(str + " - " + result.get(i).getCount()); } } public class WordCountNode implements Comparable{ private String word; private long count; public WordCountNode(long count, String word){ this.count = count; this.word = word; } public String getWord() { return word; } public void setWord(String word) { this.word = word; } public long getCount() { return count; } public void setCount(long count) { this.count = count; } public int compareTo(Object arg0) { // TODO Auto-generated method stub WordCountNode obj = (WordCountNode)arg0; if( count - obj.getCount() < 0) return -1; else if( count - obj.getCount() == 0) return 0; else return 1; } } } Here's my attempt (so far) in C: #include <stdio.h> #include <stdlib.h> #include <stdbool.h> #include <string.h> // Read in a file FILE *readFile (char filename[]) { FILE *inputFile; inputFile = fopen (filename, "r"); if (inputFile == NULL) { printf ("File could not be opened.\n"); exit (EXIT_FAILURE); } return inputFile; } // Return number of words in an array int wordCount (FILE *filePointer, char filename[]) {//, char *words[]) { // count words int count = 0; char temp; while ((temp = getc(filePointer)) != EOF) { //printf ("%c", temp); if ((temp == ' ' || temp == '\n') && (temp != '\'')) count++; } count += 1; // counting method uses space AFTER last character in word - the last space // of the last character isn't counted - off by one error // close file fclose (filePointer); return count; } // Print out the frequencies of the 10 most frequent words in the console int main (int argc, char *argv[]) { /* Step 1: Read in file and check for errors */ FILE *filePointer; filePointer = readFile (argv[1]); /* Step 
2: Do a word count to prep for array size */ int count = wordCount (filePointer, argv[1]); printf ("Number of words is: %i\n", count); /* Step 3: Create a 2D array to store words in the file */ // open file to reset marker to beginning of file filePointer = fopen (argv[1], "r"); // store words in character array (each element in array = consecutive word) char allWords[count][100]; // 100 is an arbitrary size - max length of word int i,j; char temp; for (i = 0; i < count; i++) { for (j = 0; j < 100; j++) { // labels are used with goto statements, not loops in C temp = getc(filePointer); if ((temp == ' ' || temp == '\n' || temp == EOF) && (temp != '\'') ) { allWords[i][j] = '\0'; break; } else { allWords[i][j] = temp; } printf ("%c", allWords[i][j]); } printf ("\n"); } // close file fclose (filePointer); /* Step 4: Use a simple selection sort algorithm to sort 2D char array */ // PStep 1: Compare two char arrays, and if // (a) c1 > c2, return 2 // (b) c1 == c2, return 1 // (c) c1 < c2, return 0 qsort(allWords, count, sizeof(char[][]), pstrcmp); /* int k = 0, l = 0, m = 0; char currentMax, comparedElement; int max; // the largest element in the current 2D array int elementToSort = 0; // elementToSort determines the element to swap with starting from the left // Outer a iterates through number of swaps needed for (k = 0; k < count - 1; k++) { // times of swaps max = k; // max element set to k // Inner b iterates through successive elements to fish out the largest element for (m = k + 1; m < count - k; m++) { currentMax = allWords[k][l]; comparedElement = allWords[m][l]; // Inner c iterates through successive chars to set the max vars to the largest for (l = 0; (currentMax != '\0' || comparedElement != '\0'); l++) { if (currentMax > comparedElement) break; else if (currentMax < comparedElement) { max = m; currentMax = allWords[m][l]; break; } else if (currentMax == comparedElement) continue; } } // After max (count and string) is determined, perform swap with temp variable char swapTemp[1][20]; int y = 0; do { swapTemp[0][y] = allWords[elementToSort][y]; allWords[elementToSort][y] = allWords[max][y]; allWords[max][y] = swapTemp[0][y]; } while (swapTemp[0][y++] != '\0'); elementToSort++; } */ int a, b; for (a = 0; a < count; a++) { for (b = 0; (temp = allWords[a][b]) != '\0'; b++) { printf ("%c", temp); } printf ("\n"); } // Copy rows to different array and print results /* char arrayCopy [count][20]; int ac, ad; char tempa; for (ac = 0; ac < count; ac++) { for (ad = 0; (tempa = allWords[ac][ad]) != '\0'; ad++) { arrayCopy[ac][ad] = tempa; printf("%c", arrayCopy[ac][ad]); } printf("\n"); } */ /* Step 5: Create two additional arrays: (a) One in which each element contains unique words from char array (b) One which holds the count for the corresponding word in the other array */ /* Step 6: Sort the count array in decreasing order, and print the corresponding array element as well as word count in the console */ return 0; } // Perform housekeeping tasks like freeing up memory and closing file I'm really stuck on the selection sort algorithm. I'm currently using 2D arrays to represent strings, and that worked out fine, but when it came to sorting, using three-level nested loops didn't seem to work. I tried to use qsort instead, but I don't fully understand that function either. Constructive feedback and criticism are greatly welcome (...and needed)!
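One way to get qsort working here is to pass it the size of one full row and a comparator with the signature it expects. A minimal sketch, assuming the fixed 100-byte rows from the attempt above:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Comparator for qsort: a and b point at whole rows of the 2D array,
       i.e. at char[100] blocks, so they can be handed to strcmp directly. */
    static int pstrcmp(const void *a, const void *b)
    {
        return strcmp((const char *)a, (const char *)b);
    }

    int main(void)
    {
        char allWords[4][100] = { "DELTA", "ALPHA", "CHARLIE", "BRAVO" };
        int count = 4;

        /* The element size is one full row: sizeof allWords[0] (100 bytes).
           sizeof(char[][]) is not valid C. */
        qsort(allWords, count, sizeof allWords[0], pstrcmp);

        for (int i = 0; i < count; i++)
            printf("%s\n", allWords[i]);
        return 0;
    }

The same two fixes apply to the code in the question: pass sizeof allWords[0] as the element size, and define pstrcmp with the const void * parameters that qsort expects. Separately, note that getc returns int, so temp should be declared int for the comparison against EOF to be reliable.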

    Read the article

  • Feb 2nd Links: Visual Studio, ASP.NET, ASP.NET MVC, JQuery, Windows Phone

    - by ScottGu
Here is the latest in my link-listing series.  Also check out my Best of 2010 Summary for links to 100+ other posts I’ve done in the last year. [I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] Community News MVCConf Conference Next Wednesday: Attend the free, online ASP.NET MVC Conference being organized by the community next Wednesday.  Here is a list of some of the talks you can watch live. Visual Studio HTML5 and CSS3 in VS 2010 SP1: Good post from the Visual Studio web tools team that talks about the new support coming in VS 2010 SP1 for HTML5 and CSS3. Database Deployment with the VS 2010 Package/Publish Database Tool: Rachel Appel has a nice post that covers how to enable database deployment using the built-in VS 2010 web deployment support.  Also check out her ASP.NET web deployment post from last month. VsVim Update Released: Jared posts about the latest update of his VsVim extension for Visual Studio 2010.  This free extension enables VIM based key-bindings within VS. ASP.NET How to Add Mobile Pages to your ASP.NET Web Forms / MVC Apps: Great whitepaper by Steve Sanderson that covers how to mobile-enable your ASP.NET and ASP.NET MVC based applications. New Entity Framework Tutorials for ASP.NET Developers: The ASP.NET and EF teams have put together a bunch of nice tutorials on using the Entity Framework data library with ASP.NET Web Forms. Using ASP.NET Dynamic Data with EF Code First (via NuGet): Nice post from David Ebbo that talks about how to use the new EF Code First Library with ASP.NET Dynamic Data. Common Performance Issues with ASP.NET Web Sites: Good post with lots of performance tuning suggestions (mostly deployment settings) for ASP.NET apps. ASP.NET MVC Razor View Converter: Free, automated tool from Telerik that can convert existing .aspx view templates to Razor view templates. ASP.NET MVC 3 Internationalization: Nadeem has a great post that talks about a variety of techniques you can use to enable Globalization and Localization within your ASP.NET MVC 3 applications. ASP.NET MVC 3 Tutorials by David Hayden: Great set of tutorials and posts by David Hayden on some of the new ASP.NET MVC 3 features. EF Fixed Concurrency Mode and MVC: Chris Sells has a nice post that talks about how to handle concurrency with updates done with EF using ASP.NET MVC. ASP.NET and jQuery jQuery Performance Tips and Tricks: A free 30 minute video that covers some great tips and tricks to keep in mind when using jQuery. jQuery 1.5’s AJAX rewrite and ASP.NET services - All is well: Nice post by Dave Ward that talks about using the new jQuery 1.5 to call ASP.NET ASMX Services. Good news according to Dave is that all is well :-) jQuery UI Modal Dialogs for ASP.NET MVC: Nice post by Rob Regan that talks about a few approaches you can use to implement dialogs with jQuery UI and ASP.NET MVC.  Windows Phone 7 Free PDF eBook on Building Windows Phone 7 Applications with Silverlight: Free book that walks through how to use Silverlight and Visual Studio to build Windows Phone 7 applications. Hope this helps, Scott

    Read the article

  • SQL SERVER – Introduction to Wait Stats and Wait Types – Wait Type – Day 1 of 28

    - by pinaldave
I have been working a lot on Wait Stats and Wait Types recently. Last year, I requested blog readers to send me their respective server’s wait stats. I appreciate their kind response, as I have received wait stats from my readers. I took each of the results and carefully analyzed them. I provided necessary feedback to the person who sent me his wait stats and wait types. Based on the feedback I got, many of the readers have tuned their server. After a while I got further feedback on my recommendations and again, I collected wait stats. I recorded the wait stats and my recommendations and did further research. At some point in time, there were more than 10 different round trips of the recommendations and suggestions. Finally, after six months of hands-on performance tuning work, I have collected some real-world wisdom because of this. Now I plan to share my findings with all of you over here. Before anything else, please note that all of these are based on my personal observations and opinions. They may or may not match the theory available at other places. Some of the suggestions may not match your situation. Remember, every server is different and consequently, there is more than one solution to a particular problem. However, this series is written with wait stats kept in mind. While I was working on various performance tuning consultations, I did many more things than just tuning wait stats. Today we will discuss how to capture the wait stats. I use the diagnostic script created by my friend and SQL Server Expert Glenn Berry to collect wait stats. Here is the script to collect the wait stats: -- Isolate top waits for server instance since last restart or statistics clear WITH Waits AS (SELECT wait_type, wait_time_ms / 1000. AS wait_time_s, 100. * wait_time_ms / SUM(wait_time_ms) OVER() AS pct, ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS rn FROM sys.dm_os_wait_stats WHERE wait_type NOT IN ('CLR_SEMAPHORE','LAZYWRITER_SLEEP','RESOURCE_QUEUE','SLEEP_TASK' ,'SLEEP_SYSTEMTASK','SQLTRACE_BUFFER_FLUSH','WAITFOR', 'LOGMGR_QUEUE','CHECKPOINT_QUEUE' ,'REQUEST_FOR_DEADLOCK_SEARCH','XE_TIMER_EVENT','BROKER_TO_FLUSH','BROKER_TASK_STOP','CLR_MANUAL_EVENT' ,'CLR_AUTO_EVENT','DISPATCHER_QUEUE_SEMAPHORE', 'FT_IFTS_SCHEDULER_IDLE_WAIT' ,'XE_DISPATCHER_WAIT', 'XE_DISPATCHER_JOIN', 'SQLTRACE_INCREMENTAL_FLUSH_SLEEP')) SELECT W1.wait_type, CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s, CAST(W1.pct AS DECIMAL(12, 2)) AS pct, CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct FROM Waits AS W1 INNER JOIN Waits AS W2 ON W2.rn <= W1.rn GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct HAVING SUM(W2.pct) - W1.pct < 99 OPTION (RECOMPILE); -- percentage threshold GO This script uses the Dynamic Management View sys.dm_os_wait_stats to collect the wait stats. It omits the system-related wait stats which are not useful to diagnose performance-related bottlenecks. Additionally, note that OPTION (RECOMPILE) at the end of the query ensures that every time the query runs, it retrieves new data and not the cached data. This dynamic management view collects all the information since the time when the SQL Server services have been restarted. You can also manually clear the wait stats using the following command: DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR); Once the wait stats are collected, we can start analyzing them and try to see what is causing any particular wait stat to achieve higher percentages than the others. Many wait stats are related to one another.
When the CPU pressure is high, all the CPU-related wait stats show up on top. But when that is fixed, all the wait stats related to the CPU start showing reasonable percentages. It is difficult to have a sure solution, but there are good indications and good suggestions on how to solve this. I will keep this blog post updated as I post more details about wait stats and how I reduce them. The reference to Books Online is over here. Of course, I have selected February to run this Wait Stats series. I am already cheating by having the smallest month to run this series. :) Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Nepotism In The SQL Family

    - by Rob Farley
There’s a bunch of sayings about nepotism. It’s unpopular, unless you’re the family member who is getting the opportunity. But of course, so much in life (and career) is about who you know. From the perspective of the person who doesn’t get promoted (when the family member is), nepotism is simply unfair; even more so when the promoted one seems less than qualified, or incompetent in some way. We definitely get a bit miffed about that. But let’s also look at it from the other side of the fence – the person who did the promoting. To them, their son/daughter/nephew/whoever is just another candidate, but one in whom they have more faith. They’ve spent longer getting to know that person. They know their weaknesses and their strengths, and have seen them in all kinds of situations. They expect them to stay around in the company longer. And yes, they may have plans for that person to inherit one day. Sure, they have a vested interest, because they’d like their family members to have strong careers, but it’s not just about that – it’s often best for the company as well. I’m not announcing that the next LobsterPot employee is one of my sons (although I wouldn’t be opposed to the idea of getting them involved), but actually, admitting that almost all the LobsterPot employees are SQLFamily members… …which makes this post good for T-SQL Tuesday, this month hosted by Jeffrey Verheul (@DevJef). You see, SQLFamily is the concept that the people in the SQL Server community are close. We have something in common that goes beyond ordinary friendship. We might only see each other a few times a year, at events like the PASS Summit and SQLSaturdays, but the bonds that are formed are strong, going far beyond typical professional relationships. And these are the people that I am prepared to hire. People that I have got to know. I get to know their skill level, how well they explain things, how confident people are in their expertise, and what their values are. Of course there are people that I wouldn’t hire, but I’m a lot more comfortable hiring someone that I’ve already developed a feel for. I need to trust the LobsterPot brand to people, and that means they need to have a similar value system to me. They need to have a passion for helping people and doing what they can to make a difference. Above all, they need to have integrity. Therefore, I believe in nepotism. All the people I’ve hired so far are people from the SQL community. I don’t know whether I’ll always be able to hire that way, but I have no qualms admitting that the things I look for in an employee are things that I can recognise best in those that are referred to as SQLFamily. …like Ted Krueger (@onpnt), LobsterPot’s newest employee and the guy who is representing our brand in America. I’m completely proud of this guy. He’s everything I want in an employee. He’s an experienced consultant (even wrote a book on it!), loving husband and father, genuine expert, and incredibly respected by his peers. It’s not favouritism, it’s just choosing someone I’ve been interviewing for years. @rob_farley

    Read the article

  • SQL SERVER – Enumerations in Relational Database – Best Practice

    - by pinaldave
This article has been submitted by Marko Parkkola, Data systems designer at Saarionen Oy, Finland. Marko is an excellent developer and is always thinking at the next level. You can read his earlier comment, which created a very interesting discussion, here: SQL SERVER- IF EXISTS(Select null from table) vs IF EXISTS(Select 1 from table). I must express my special thanks to Marko for sending this best practice for Enumerations in Relational Database. He has really written an excellent piece here, and comments are welcome. Enumerations in Relational Database This is a subject which is a very basic thing in relational databases, but it is often not very well understood and sometimes badly implemented. There are of course many ways to do this, but I concentrate on only two cases: one which is “the right way” and one which is definitely the wrong way. The concept Let’s say we have table Person in our database. Person has properties/fields like Firstname, Lastname, Birthday and so on. Then there’s a field that tells the person’s marital status, and let’s name it the same way: MaritalStatus. Now MaritalStatus is an enumeration. In C# I would definitely make it an enumeration with values like Single, InRelationship, Married, Divorced. Now here comes the problem: SQL doesn’t have enumerations. The wrong way This is, in my opinion, absolutely the wrong way to do this. It has one upside though; you’ll see the enumeration’s description instantly when you do a simple SELECT query and you don’t have to deal with mysterious values. There are plenty of downsides too, and one would be database fragmentation. Consider this (I’ve left all indexes and constraints out of the query on purpose). CREATE TABLE [dbo].[Person] ( [Firstname] NVARCHAR(100), [Lastname] NVARCHAR(100), [Birthday] datetime, [MaritalStatus] NVARCHAR(10) ) You have an nvarchar(10) field in the table that tells the marital status. The obvious problem with this is: what if you create a new value which doesn’t fit into 10 characters? You’ll have to come and alter the table. There are other problems also, but I’ll leave those for the reader to think about. The correct way Here’s how I’ve done this in many projects. This model still has one problem but it can be alleviated in the application layer or with CHECK constraints if you like. First I will create a namespace table which tells the name of the enumeration. I will add one row to it too. I’ll write all the indexes and constraints here too. CREATE TABLE [CodeNamespace] ( [Id] INT IDENTITY(1, 1), [Name] NVARCHAR(100) NOT NULL, CONSTRAINT [PK_CodeNamespace] PRIMARY KEY ([Id]), CONSTRAINT [IXQ_CodeNamespace_Name] UNIQUE NONCLUSTERED ([Name]) ) GO INSERT INTO [CodeNamespace] SELECT 'MaritalStatus' GO Then I create a table that holds the actual values and which references the namespace table in order to group the values under different namespaces. I’ll add a couple of rows here too. CREATE TABLE [CodeValue] ( [CodeNamespaceId] INT NOT NULL, [Value] INT NOT NULL, [Description] NVARCHAR(100) NOT NULL, [OrderBy] INT, CONSTRAINT [PK_CodeValue] PRIMARY KEY CLUSTERED ([CodeNamespaceId], [Value]), CONSTRAINT [FK_CodeValue_CodeNamespace] FOREIGN KEY ([CodeNamespaceId]) REFERENCES [CodeNamespace] ([Id]) ) GO -- 1 is the 'MaritalStatus' namespace INSERT INTO [CodeValue] SELECT 1, 1, 'Single', 1 INSERT INTO [CodeValue] SELECT 1, 2, 'In relationship', 2 INSERT INTO [CodeValue] SELECT 1, 3, 'Married', 3 INSERT INTO [CodeValue] SELECT 1, 4, 'Divorced', 4 GO Now there are four columns in the CodeValue table.
CodeNamespaceId tells which namespace the value belongs to. Value tells the enumeration value which is used in the Person table (I’ll show how this is done below). Description tells what the value means. You can use this column, for example, in a UI combo box. OrderBy tells whether the values need to be ordered in some way when displayed in the UI. And here’s the Person table again, now with the correct columns. I’ll add one row here to show how enumerations are to be used. CREATE TABLE [dbo].[Person] ( [Firstname] NVARCHAR(100), [Lastname] NVARCHAR(100), [Birthday] datetime, [MaritalStatus] INT ) GO INSERT INTO [Person] SELECT 'Marko', 'Parkkola', '1977-03-04', 3 GO Now I said earlier that there is one problem with this. The MaritalStatus column doesn’t have any database-enforced relationship to the CodeValue table, so you can enter any value you like into this field. I’ve solved this problem in the application layer by selecting all the values from the CodeValue table and putting them into a combobox / dropdownlist (with the Value field as value and Description as text) so the end user can’t enter any illegal values; and of course I’ll check the entered value in the data access layer also. I said in the “The wrong way” section that there is one benefit to it. In fact, you can have the same benefit here by using a simple view, which I schema bound so you can even index it if you like. CREATE VIEW [dbo].[Person_v] WITH SCHEMABINDING AS SELECT p.[Firstname], p.[Lastname], p.[BirthDay], c.[Description] MaritalStatus FROM [dbo].[Person] p JOIN [dbo].[CodeValue] c ON p.[MaritalStatus] = c.[Value] JOIN [dbo].[CodeNamespace] n ON n.[Id] = c.[CodeNamespaceId] AND n.[Name] = 'MaritalStatus' GO -- Select from View SELECT * FROM [dbo].[Person_v] GO This is an excellent write-up by Marko Parkkola. Do you have this kind of design setup at your organization? Let us know your opinion. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Database, DBA, Readers Contribution, Software Development, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
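For readers who would rather have the database enforce that relationship too, here is a minimal sketch of a foreign-key variant of the database-side enforcement the article mentions, reusing its table and column names. Because the primary key of CodeValue is composite, the Person table has to carry the namespace id as well before a foreign key can reference it:

    -- Sketch only: adds the namespace id (1 = 'MaritalStatus') so a
    -- composite foreign key can point at CodeValue's composite key.
    ALTER TABLE [dbo].[Person]
        ADD [MaritalStatusNamespaceId] INT NOT NULL
        CONSTRAINT [DF_Person_MaritalStatusNamespaceId] DEFAULT (1)
    GO
    ALTER TABLE [dbo].[Person]
        ADD CONSTRAINT [FK_Person_CodeValue_MaritalStatus]
        FOREIGN KEY ([MaritalStatusNamespaceId], [MaritalStatus])
        REFERENCES [CodeValue] ([CodeNamespaceId], [Value])
    GO

With this in place, an INSERT with an unknown MaritalStatus value fails at the database level, at the cost of an extra column; whether that trade-off beats the combobox-plus-DAL checks described above is a design choice.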

    Read the article

  • SQL SERVER – What is Spatial Database? – Developing with SQL Server Spatial and Deep Dive into Spatial Indexing

    - by pinaldave
What is Spatial Database? A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. (Source: Wikipedia) Today I will be talking about the same subject at Microsoft TechEd India. If you want to learn about the spatial aspect of data and how to integrate it with SQL Server, this is the perfect session for you. Spatial is a very special concept in SQL Server and I really like how it is implemented. In general, Performance Tuning and Query Optimization are something I have always enjoyed in my professional life. Indexes are my best friends; many times by implementing them, and many times by removing them, I have improved the performance of the system. In this session, I will be talking about Index along with Spatial Data. As Spatial Database is a very interesting concept, I will cover this subject in ten quick slides – super short but very interesting. I will make sure that in the very first 20 minutes, you will understand the following topics: Introduction to Spatial Database; One-line definition; Understanding Spatial Indexing; Index Internals; Query/Performance Tuning; Query Hinting/Cost Analysis; Spatial Index Catalog Views; Performance Troubleshooting; Finding the Optimal Index using the Spatial Index SP; Common Errors; Index Maintenance. This slide deck will be followed by around 30 minutes of demos telling the story of geometry, geography, index internals and performance tuning. If you are interested in learning how GIS works and how SQL Server supports these wonderful tools out of the box, you will really like how the story is told. I am sure all the people who attend the event will know how Bangalore is positioned on the map of India. I will take the example of Bangalore and Hyderabad and demonstrate how an index can improve performance. Well, there are lots of stories to tell in the session, and I will be opening this session with the beautiful script of Botticelli’s Birth of Venus created by Michael J. Swart. I will also demonstrate a few real-life scenarios where I will be talking about Spatial Database and its usage. Do not miss this session. At the end of the session there will be a book awarded to the best participant. My session details: Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing Date: April 14, 2010 Time: 5:00pm-6:00pm Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines the new geography data type to store geodetic spatial data and perform operations on it, use the new geometry data type to store planar spatial data and perform operations on it, take advantage of new spatial indexes for high performance queries, use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications and much more.
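As a taster for those demos, here is a hypothetical sketch of the kind of query the session is about. The geography type and STDistance ship with SQL Server 2008; the Cities table and its Location column are invented for illustration:

    -- Bangalore at latitude 12.9716, longitude 77.5946 (SRID 4326 = WGS 84)
    DECLARE @bangalore geography = geography::Point(12.9716, 77.5946, 4326);
    SELECT c.[Name], c.[Location].STDistance(@bangalore) / 1000.0 AS distance_km
    FROM dbo.Cities c
    WHERE c.[Location].STDistance(@bangalore) <= 600000; -- within 600 km (STDistance returns metres)

A predicate like that STDistance filter is exactly where the spatial indexing internals covered in the session decide whether the query crawls or flies.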
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: Spatial Database

    Read the article

  • Convert Video and Remove Commercials in Windows 7 Media Center with MCEBuddy 1.1

    - by DigitalGeekery
Today we look at MCEBuddy for Windows 7 Media Center. This handy app automatically takes your recorded TV files and converts them to MP4, AVI, WMV, or MPEG format. It even has the option to cut out those annoying commercials during the conversion process. Installation and Configuration Download and extract MCEBuddy (download link below). Run the setup.exe file and take all the default settings. Open MCEBuddy Configuration by going to Start > All Programs > MCEBuddy > MCEBuddy Configuration. Video Paths The MCEBuddy application consists of a single window. The first step you’ll want to take is to define your Source and Destination paths. The “Source” will most likely be your Recorded TV directory. The Destination should NOT be the same as the Source folder. Note: The Recorded TV directory in Windows 7 Media Center will only display and play WTV & DVR-MS files. To watch the converted MP4, AVI, WMV, or MPEG files in Windows Media Center you’ll need to add them to your Video Library or Movie Library. Video Conversion Next, choose your preferred format for conversion from the “Convert to” drop down list. The default is MP4 with the H.264 codec. You’ll find a wide variety of formats. The first set of conversion options in the drop down list will resize the video to 720 pixels wide. The next two sections maintain the original size, and the final section is for a variety of portable devices. Next, you’ll see a group of check boxes below the “Convert to” drop down list. The Commercial Skipping option will cut the commercials while converting the file. Sort By Series will create a sub-folder in your Destination folder for each TV show. Delete Original will delete the WTV file after conversion is complete. (This option is not recommended unless you are sure your files are converting properly and you no longer need the WTV file.) Start Minimized is ideal if you want to run MCEBuddy on Windows startup. Note: MCEBuddy installs and uses Comskip for commercial cutting by default. However, if you have ShowAnalyzer installed, it will use that application instead. Advanced Options To choose a specific time of day to perform the conversions, click the checkbox under the “Advanced Options,” and select the starting and ending times for conversion. For example, converting between 2 hours and 5 hours would be between 2 am and 5 am. If you want MCEBuddy to constantly look for and immediately convert new recordings, leave the box unchecked. The “Video age” option lets you choose a specific number of days to wait before performing the conversion. This can be useful if you want to watch the recordings first and delete those you don’t wish to convert. You can also choose “Sub Directories” if you’d like MCEBuddy to convert files that are in a sub-folder in your “Source” directory. Second Conversion As you might expect, this option allows MCEBuddy to perform a second conversion of your file. This can be useful if you want to use your first conversion to create a higher quality MP4 or AVI file for playback on a larger screen, and a second one for a portable device such as Zune or iPhone. The same options from the first conversion are also available for the second. You’ll want to choose a separate Destination folder for the second conversion. Start and Monitor Progress To start converting your video files, simply press the “Start” button at the bottom. You’ll be able to follow the progress in the “Current Activity” section.
When all the video files have finished converting, or there are no current files to convert, MCEBuddy will display a “Started – Idle” status. Click “Stop” if you don’t want MCEBuddy to continue scanning for new files. Conclusion MCEBuddy 1.1 will convert all WTV files in its source folder. If you want to pick and choose which recordings to convert, you may want to define a source folder different than the Recorded TV folder and then just copy or move the files you wish to convert into the new source folder. The conversion process does take a good bit of time. If you choose the commercial skipping and second conversion options it can take several hours to fully convert one TV recording. Overall, MCEBuddy makes a nice Media Center addition for those that want to save some space with smaller size files, convert Recorded TV files for their portable device, or automatically remove commercials. If you’re looking for a different method to skip commercials check out our post on how to skip commercials in Windows 7 Media Center. Download MCEBuddy 1.1

    Read the article

  • Using C# 4.0’s DynamicObject as a Stored Procedure Wrapper

    - by EltonStoneman
[Source: http://geekswithblogs.net/EltonStoneman] Overview Ignoring the fashion, I still make a lot of use of DALs – typically when inheriting a codebase with an established database schema which is full of tried and trusted stored procedures. In the DAL a collection of base classes has all the scaffolding, so the usual pattern is to create a wrapper class for each stored procedure, giving typesafe access to parameter values and output. A DAL call then looks like instantiate wrapper – populate parameters – execute call:       using (var sp = new uspGetManagerEmployees())     {         sp.ManagerID = 16;         using (var reader = sp.Execute())         {             //map entities from the output         }     }   Or you can roll it all into a fluent DAL call – which is nicer to read and implicitly disposes the resources. This is fine; the wrapper classes are very simple to handwrite or generate. But as the codebase grows, you end up with a proliferation of very small wrapper classes. The wrappers don't add much other than encapsulating the stored procedure call and giving you typesafety for the parameters. With the dynamic extension in .NET 4.0 you have the option to build a single wrapper class, and get rid of the one-to-one stored procedure to wrapper class mapping. In the dynamic version, the call looks like this:       dynamic getUser = new DynamicSqlStoredProcedure("uspGetManagerEmployees", Database.AdventureWorks);     getUser.ManagerID = 16;       var employees = Fluently.Load<List<Employee>>()                             .With<EmployeeMap>()                             .From(getUser);   The important difference is that the ManagerId property doesn't exist in the DynamicSqlStoredProcedure class. Declaring the getUser object with the dynamic keyword allows you to dynamically add properties, and the DynamicSqlStoredProcedure class intercepts when properties are added and builds them as stored procedure parameters. When getUser.ManagerId = 16 is executed, the base class adds a parameter call (using the convention that the parameter name is the property name prefixed by "@"), specifying the correct SQL Server data type (mapping it from the type of the value the property is set to), and setting the parameter value. Code Sample This is worked through in a sample project on github – Dynamic Stored Procedure Sample – which also includes a static version of the wrapper for comparison. (I'll upload this to the MSDN Code Gallery once my account has been resurrected). Points worth noting are: DynamicSP.Data – database-independent DAL that has all the data plumbing code. DynamicSP.Data.SqlServer – SQL Server DAL, thin layer on top of the generic DAL which adds SQL Server specific classes. Includes the DynamicSqlStoredProcedure base class. DynamicSqlStoredProcedure.TrySetMember. Invoked when a dynamic member is added. Assumes the property is a parameter named after the SP parameter name and infers the SqlDbType from the framework type. Adds a parameter to the internal stored procedure wrapper and sets its value. uspGetManagerEmployees – the static version of the wrapper. uspGetManagerEmployeesTest – test fixture which shows usage of the static and dynamic stored procedure wrappers. The sample uses stored procedures from the AdventureWorks database in the SQL Server 2008 Sample Databases. Discussion For this scenario, the dynamic option is very favourable. Assuming your DAL is itself wrapped by a higher layer, the stored procedure wrapper classes have very little reuse.
Even if you're codegening the classes and test fixtures, it's still additional effort for very little value. The main consideration with dynamic classes is that the compiler ignores all the members you use, and evaluation only happens at runtime. In this case where scope is strictly limited that's not an issue – you're relying on automated tests rather than the compiler to find errors, but that should just encourage better test coverage. Also you can codegen the dynamic calls at a higher level. Performance may be a consideration, as there is a first-time-use overhead when the dynamic members of an object are bound. For a single run, the dynamic wrapper took 0.2 seconds longer than the static wrapper. The framework does a good job of caching the effort though, so for 1,000 calls the dynamic version still only takes 0.2 seconds longer than the static. You don't get IntelliSense on dynamic objects, even for the declared members of the base class, and if you've been using class names as keys for configuration settings, you'll lose that option if you move to dynamics. The approach may make code more difficult to read, as you can't navigate through dynamic members, but you do still get full debugging support.     var employees = Fluently.Load<List<Employee>>()                             .With<EmployeeMap>()                             .From<uspGetManagerEmployees>                             (                                 i => i.ManagerID = 16,                                 x => x.Execute()                             );
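As a rough illustration of the TrySetMember mechanics described above – a condensed sketch, not the sample project's actual code, and wired to a plain SqlConnection rather than the sample's Database enum:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Dynamic;

    public class DynamicSqlStoredProcedure : DynamicObject
    {
        private readonly SqlCommand _command;

        public DynamicSqlStoredProcedure(string procedureName, SqlConnection connection)
        {
            _command = new SqlCommand(procedureName, connection)
            {
                CommandType = CommandType.StoredProcedure
            };
        }

        // Called whenever a property is set on the dynamic object:
        // turn it into a parameter named "@" + property name.
        public override bool TrySetMember(SetMemberBinder binder, object value)
        {
            var parameter = _command.Parameters.Add("@" + binder.Name, InferDbType(value));
            parameter.Value = value ?? DBNull.Value;
            return true;
        }

        // Map the framework type of the assigned value to a SqlDbType.
        private static SqlDbType InferDbType(object value)
        {
            if (value is int) return SqlDbType.Int;
            if (value is long) return SqlDbType.BigInt;
            if (value is DateTime) return SqlDbType.DateTime;
            return SqlDbType.NVarChar; // simplistic fallback for this sketch
        }

        public SqlDataReader Execute()
        {
            return _command.ExecuteReader();
        }
    }

With that in place, getUser.ManagerID = 16 routes through TrySetMember and builds the @ManagerID parameter, which is the interception the post describes.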

    Read the article

  • GPGPU

What: GPU obviously stands for Graphics Processing Unit (the silicon powering the display you are using to read this blog post). The extra GP in front of that stands for General Purpose computing. So, altogether GPGPU refers to computing we can perform on GPU for purposes beyond just drawing on the screen. In effect, we can use a GPGPU a bit like we already use a CPU: to perform some calculation (that doesn’t have to have any visual element to it). The attraction is that a GPGPU can be orders of magnitude faster than a CPU. Why: When I was at the SuperComputing conference in Portland last November, GPGPUs were all the rage. A quick online search reveals many articles introducing the GPGPU topic. I'll just share 3 here: pcper (ignoring all pages except the first, it is a good consumer perspective), gizmodo (nice take using mostly layman terms) and vizworld (answering the question on "what's the big deal"). The GPGPU programming paradigm (from a high level) is simple: in your CPU program you define functions (aka kernels) that take some input, can perform the costly operation and return the output. The kernels are the things that execute on the GPGPU leveraging its power (and hence execute faster than what they could on the CPU) while the host CPU program waits for the results or asynchronously performs other tasks. However, GPGPUs have different characteristics to CPUs which means they are suitable only for certain classes of problem (i.e. data parallel algorithms) and not for others (e.g. algorithms with branching or recursion or other complex flow control). You also pay a high cost for transferring the input data from the CPU to the GPU (and vice versa the results back to the CPU), so the computation itself has to be long enough to justify the overhead transfer costs. If your problem space fits the criteria then you probably want to check out this technology. How: So where can you get a graphics card to start playing with all this? At the time of writing, the two main vendors ATI (owned by AMD) and NVIDIA are the obvious players in this industry. You can read about GPGPU on this AMD page and also on this NVIDIA page. NVIDIA's website also has a free chapter on the topic from the "GPU Gems" book: A Toolkit for Computation on GPUs. If you followed the links above, then you've already come across some of the choices of programming models that are available today. Essentially, AMD is offering their ATI Stream technology accessible via a language they call Brook+; NVIDIA offers their CUDA platform which is accessible from CUDA C. Choosing either of those locks you into the GPU vendor and hence your code cannot run on systems with cards from the other vendor (e.g. imagine if your CPU code would run on Intel chips but not AMD chips). Having said that, both vendors plan to support a new emerging standard called OpenCL, which theoretically means your kernels can execute on any GPU that supports it. To learn more about all of these there is a website: gpgpu.org. The caveat about that site is that (currently) it completely ignores the Microsoft approach, which I touch on next. On Windows, there is already a cross-GPU-vendor way of programming GPUs and that is the DirectX API. Specifically, on Windows Vista and Windows 7, the DirectX 11 API offers a dedicated subset of the API for GPGPU programming: DirectCompute. You use this API on the CPU side, to set up and execute the kernels that run on the GPU. The kernels are written in a language called HLSL (High Level Shader Language).
You can use DirectCompute with HLSL to write a "compute shader", which is the term DirectX uses for what I've been referring to in this post as a "kernel". For a comprehensive collection of links about this (including tutorials, videos and samples) please see my blog post: DirectCompute. Note that there are many efforts to build even higher level languages on top of DirectX that aim to expose GPGPU programming to a wider audience by making it as easy as today's mainstream programming models. I'll mention here just two of those efforts: Accelerator from MSR and Brahma by Ananth. Comments about this post welcome at the original blog.
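To make the kernel paradigm concrete, here is a minimal CUDA C sketch (illustrative only, with error checking omitted): the host copies input to the GPU, launches a kernel across many threads, and copies the results back – the define-kernel / execute / wait-for-output flow described above.

    #include <stdio.h>

    // The kernel: runs on the GPU, one thread per array element.
    __global__ void square(const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[i] * in[i];
    }

    int main(void)
    {
        const int n = 1024;
        float host_in[1024], host_out[1024];
        for (int i = 0; i < n; i++) host_in[i] = (float)i;

        // Transfer input from CPU (host) memory to GPU (device) memory.
        float *dev_in, *dev_out;
        cudaMalloc((void **)&dev_in, n * sizeof(float));
        cudaMalloc((void **)&dev_out, n * sizeof(float));
        cudaMemcpy(dev_in, host_in, n * sizeof(float), cudaMemcpyHostToDevice);

        // Launch the kernel: n threads in blocks of 256.
        square<<<(n + 255) / 256, 256>>>(dev_in, dev_out, n);

        // Copy the results back to the host.
        cudaMemcpy(host_out, dev_out, n * sizeof(float), cudaMemcpyDeviceToHost);
        printf("square(10) = %f\n", host_out[10]);

        cudaFree(dev_in);
        cudaFree(dev_out);
        return 0;
    }

The transfer costs mentioned above are visible here: both cudaMemcpy calls are pure overhead, so the kernel's work has to be heavy enough to pay for them.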

    Read the article

  • XNA Notes 008

    - by George Clingerman
This week has been a rough one. I’ve been sick and then in some kind of slump for my afternoon coding sessions. It could be from the cold, could be I’m still tired from writing that Windows Phone 7 game development book (which is out now!) or it could just be I’m tired of winter and want some sunshine. All I know is that even while I’m sick, the XNA world keeps going along at its whirlwind pace. Below are the things I caught in between my coughing fits… Time Critical XNA News: The 2011 MVP summit is almost here so pass along your feelings and thoughts so the MVPs can take them and share them with the team in person http://forums.create.msdn.com/forums/p/76317/464136.aspx#464136 Dream Build Play - there’s no new announcement yet, but you can’t get much closer to the end of February than this! http://www.dreambuildplay.com/Main/Home.aspx XNA Team: Dean Johnson from the XNA team shares an excellent way of handling Guide.IsTrialMode on WP7 http://blogs.msdn.com/b/dejohn/archive/2011/02/21/calling-guide-istrialmode-on-windows-phone-7.aspx Nick Gravelyn tries a new tactic in deciding if there’s enough interest to develop a sequel or not. Don’t YOU want Pixel Man 2 to come out? http://nickgravelyn.com/pixelman2/ XNA MVPs: Andy “The ZMan” Dunn finally shares what he’s been secretly working on these past 4 months http://twitter.com/#!/The_Zman/status/40590269392887808 http://www.youtube.com/watch?v=Rg8Z0ZdYbvg&feature=youtu.be Joel Martinez lets developers around NYC know they should be signing up for Game Hack Day http://twitter.com/joelmartinez/statuses/41118590862102528 http://gamehackday.org/71fdk XNA Developers: Michael McLaughlin shares an XNA RenderTarget2D Sample http://geekswithblogs.net/mikebmcl/archive/2011/02/18/xna-rendertarget2d-sample.aspx Martin Caine starts a new series on Deferred Rendering in XNA 4.0 http://twitter.com/#!/MartinCaine/status/39735221339291648 http://martincaine.com/xna/deferred_rendering_in_xna_4_introduction ElemenyCy posts about his fun time with the IntermediateSerializer http://www.ubergamermonkey.com/xna/holy-bloated-xml-batman/ Ben Kane releases a narrated dev diary video for Project Splice. Let him know if you’d like to see more! (I know I do!) http://twitter.com/#!/benkane/status/39846959498002432 http://www.youtube.com/watch?v=1EmziXZUo08&feature=youtu.be Jason Swearingen (of Novaleaf) posts his part 1 of Spatial Partitioning solutions http://altdevblogaday.org/2011/02/21/spatial-partitioning-part-1-survey-of-spatial-partitioning-solutions/ Brian Lawson of Dark Flow Studios shares what he’s been up to lately with lots of pretty screenshots and hints of announcements from Microsoft... http://www.darkflowstudios.com/entry/short-and-sweet-part-1 Luke Avery starts a new blog where he plans on making XNA tutorials for beginners (and he’s got a few started already!) http://programmingwithovery.wordpress.com/ Xbox LIVE Indie Games (XBLIG): GameMarx Episode 10 http://www.gamemarx.com/video/the-show/24/ep-10-february-18-2010.aspx Minecraft clone FortressCraft coming to XBLIG http://www.eurogamer.net/articles/2011-02-23-minecraft-clone-fortresscraft-hits-xblig ezMuze+ starts an IndieGoGo fundraiser campaign to help fund their second game and get it onto even more devices!
http://www.indiegogo.com/ezmuze Gamergeddon XBLIG round up http://www.gamergeddon.com/2011/02/20/xbox-indie-game-round-up-february-20th/?utm_campaign=twitter&utm_medium=twitter&utm_source=twitter JForce Games loses their Ego http://jforcegames.com/blog/index.php?itemid=121&catid=4 XNA Game Development: @BallerIndustry reminds all XNA developers that the Maths are important ;) http://twitter.com/#!/BallerIndustry/status/39317618280243200 http://www.youtube.com/watch?v=MjV3XDFsjP4&feature=player_embedded#at=106 @suhinini stumbles on an older but extremely useful post on XNA Content Pipeline debugging http://twitter.com/#!/suhinini/status/39270189476352000 http://badcorporatelogo.wordpress.com/2010/10/31/xna-content-pipeline-debugging-4-0/ XNA Game Development Workshops at Singapore Universities http://innovativesingapore.com/2011/02/xna-game-development-workshops-at-singapore-universities/ Indiefreaks announces that IGF v0.3 is out with Xbox 360 support, SunBurn 2.0.12 and it’s now Open Source! http://twitter.com/#!/indiefreaks/status/39391953971982336 @liotral announces a new series on properly designing a game http://twitter.com/#!/liortal53/status/39466905081217024 http://liortalblog.wordpress.com/2011/02/20/hello-cosmos/ Indies and XNA at CodeStock 2011 http://www.gamemarx.com/news/2011/02/20/indies-and-xna-at-codestock-2011.aspx Train Frontier Express posts about XNA Content Hotloading http://trainfrontierexpress.blogspot.com/2011/02/xna-content-hotloading-overview.html Slyprid announces a new character editor in Transmute http://twitter.com/#!/slyprid/status/40146992818696192 http://www.youtube.com/watch?v=OKhFAc78LDs&feature=youtu.be The XNA 2D from the ground up tutorial series http://xna-uk.net/blogs/darkgenesis/archive/2011/02/23/recap-the-xna-2d-from-the-ground-up-tutorial-series.aspx Sgt.Conker posts a “Clingerman” (hey that’s me!) to stay relevant http://www.sgtconker.com/2011/02/posting-a-clingerman-to-stay-relevant/

    Read the article

  • Travelling MVP #1: Visit to SharePoint User Group Finland

    - by DigiMortal
My first self-organized trip this autumn was a visit to the SharePoint User Group Finland community evening. Active community leaders who make things like these possible are worth mentioning, and on the spug.fi side it was Jussi Roine who invited me. Here is my short review of my trip to Helsinki. User group meeting As Helsinki is near Tallinn, I went there by ship. It was easy to get from the sea port to the venue, and I also had some minutes of time to visit an academic book store. The community evening was held on the ground floor of one city center hotel, and the room was conveniently located near the hotel bar and restaurant. Here is the meeting schedule: Welcome (Jussi Roine) OpenText application archiving and governance for SharePoint (Bernd Hennicke, OpenText) Using advanced C# features in SharePoint development (Alexey Sadomov, NED Consulting) Optimizing public-facing internet sites for SharePoint (Gunnar Peipman) After the meeting, of course, the local dudes didn’t walk away but continued with some beers and discussion. Sessions After welcome words by Jussi there was a session by Bernd Hennicke, who spoke about OpenText. His session covered OpenText’s history and current position. After this introduction he spoke about OpenText products for SharePoint and gave the audience a good overview of where their SharePoint extensions fit in the big picture. I usually don’t like those vendor sessions but this one was good. I mean the vendor dudes were not aggressively selling something. They were way different – kind people who introduced their stuff and later answered questions. They acted like good guests. The second speaker was Alexey Sadomov, who is working on SharePoint development projects. He introduced some ways to get over some limitations of SharePoint. I won’t go deeply into his session here, but it’s worth mentioning that it was a strong one. It is not a rare case that developers have to make nasty hacks to SharePoint. I mean really nasty hacks. Often these hacks are long blocks of code that use terrible techniques to achieve the result. Alexey introduced some very civilized ways of applying hacks. Alex Sadomov, SharePoint MVP, speaking about SharePoint coding tips and tricks on C# I spoke about how I optimized caching of the Estonian Microsoft community portal that runs on SharePoint Server and uses the publishing infrastructure. I made no actual demos on SharePoint because I wanted to focus on the optimizing process and share some experiences about how to get caches optimized and how to measure caches. Networking After the official part there was time to talk and discuss with people. Finns are cool – they have beers and they are glad. It was not a big community event, but people were like one good family. Developers there often work for big companies, and it was very interesting for me to hear about their experiences with SharePoint. One thing was a little bit surprising for me – the SharePoint guys in Finland are also actively talking about Office 365 and online SharePoint. It doesn’t happen often here in Estonia. I had to leave a little bit before 21:00 to get to my ship back to Tallinn. I am sure the spug.fi dudes continued the nice evening and had at least as good a time as I did. Do I want to go back to Finland and meet these guys again? Yes, sure, let’s do it again! :)

    Read the article

  • Top 10 Tips & Tricks for Oracle SQL Developer

    - by thatjeffsmith
    Being a short week due to the holiday, and with everyone enjoying their Summer vacations (apologies Southern Hemispherians), I reckoned it was a great time to do one of those lazy recap-Top 10-Reader’s Digest type posts. I’ve been sharing 1-3 tips or ‘tricks’ a week since I started blogging about SQL Developer, and I have more than enough content to write a book. But since I’m lazy, I’m just going to compile a list of my favorite ‘must know’ tips instead. I always have to leave out a few tips when I do my presentations, so now I can refer back to this list to make sure I’m not forgetting anything. So without further ado… 1. Configure Your Preferences Yes, there are a LOT of options. But you don’t need to worry about all of them just yet. I do recommend you take a quick look at these ones in particular. Whether you’re new to the tool or have been using it for 5 years, don’t overlook these settings! 2. Disable Extensions You Aren’t Using If you’re not using Data Miner, or if you’re not working on a Migration – disable those extensions! SQL Developer will run leaner & meaner, plus the user interface will be a bit more simplified making the tool easier to navigate as well. 3. SQL Recall via Keyboard Access your history via the keyboard! Cycle through your recent SQL statements just using these magic key strokes! Ctrl+Up or Ctrl+Down. 4. Format Your Query Output Directly to CSV, XML, HTML, etc Have the query results pre-formatted in the format of your choice! Too lazy to run the Export wizard for your query result sets? Just add the SQL Developer output hints to your statement and have the output auto-magically formatted to the style of your choice! 5. Drag & Drop Multiple Tables to the Worksheet SQL Developer will auto-join the related objects. You can then toggle over to the Query Builder to toggle off the columns you don’t want to query. I guarantee this tip will save you time if you’re joining 3 or more tables! 6. Drag & Drop Multiple Tables to a Relational Model A pretty picture is worth a few dozen DDL scripts? SQL Developer does data modeling! If you ctrl-drag a table to a model, it will take that table and any related tables and reverse engineer them to a relational model! You can then print it out or export it to HTML, PDF, etc. 7. View Your PL/SQL Execution Output Automatically Function returns a refcursor? Procedure had 3 out parameters? When you run these programs via the Procedure Editor, we automatically capture the output and place them into one or more data grids for you to browse. 8. Disable Automatic Code Insight and Use It On-Demand Code Editor – Completion Insight – Enable Completion Auto-Popup (Keyword being Auto) Some folks really don’t like it when their IDEs or word-processors try to do ‘too much’ for them. Thankfully SQL Developer allows you to either increase the delay before it attempts to auto-complete your text OR to disable the automatic bit. Instead, you can invoke it on-demand. 9. Interactive Debugging – Change Your Variable Values as You Step Through Your PLSQL Watches aren’t just for watching. You can actually interact with your programs and ‘see what happens’ when X = 256 instead of 1. 10. Ditch the Tree View for the Schema Browser There’s nothing wrong with the Connection tree for browsing your database objects. But some folks just can’t seem to get comfortable with it. So, we built them a Schema Browser that uses a drop down control instead for changing up your schema and object types. Already Know This Stuff, Want More? 
Just check out my SQL Developer resource page; it’s one of the main links at the top of this page. Or if you can’t find something, just drop me a note in the form of a comment on this page and I’ll do my best to find it or write it for you.
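And one parting illustration of tip 4 above: the output hints are just comments inside the statement itself, so (assuming the classic SCOTT.EMP demo table) running these with F5 / Run Script returns the results already formatted:

    -- The results come back as CSV, XML or HTML depending on the hint
    SELECT /*csv*/  * FROM scott.emp;
    SELECT /*xml*/  * FROM scott.emp;
    SELECT /*html*/ * FROM scott.emp;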

    Read the article

  • Data Quality and Master Data Management Resources

    - by Dejan Sarka
Many companies or organizations do regular data cleansing. When you cleanse the data, the data quality goes up to some higher level. The data quality level is determined by the amount of work invested in the cleansing. As time passes, the data quality deteriorates, and you need to repeat the cleansing process. If you spend an equal amount of effort as you did with the previous cleansing, you can expect the same level of data quality as you had after the previous cleansing. And then the data quality deteriorates over time again, and the cleansing process starts over and over again. The idea of Data Quality Services is to mitigate the cleansing process. While the amount of time you need to spend on cleansing decreases, you will achieve higher and higher levels of data quality. While cleansing, you learn what types of errors to expect, discover error patterns, find domains of correct values, etc. You don’t throw away this knowledge. You store it and use it to find and correct the same issues automatically during your next cleansing process. The following figure shows this graphically. The idea of master data management, which you can perform with Master Data Services (MDS), is to prevent data quality from deteriorating. Once you reach a particular quality level, the MDS application—together with the defined policies, people, and master data management processes—allows you to maintain this level permanently. This idea is shown in the following picture. OK, now you know what DQS and MDS are about. You can imagine the importance of maintaining the data quality. Here are some resources that help you prepare and execute the data quality (DQ) and master data management (MDM) activities. Books Dejan Sarka and Davide Mauri: Data Quality and Master Data Management with Microsoft SQL Server 2008 R2 – a general introduction to MDM, MDS, and data profiling. Matching explained in depth. Dejan Sarka, Matija Lah and Grega Jerkic: MCTS Self-Paced Training Kit (Exam 70-463): Building Data Warehouses with Microsoft SQL Server 2012 – I wrote quite a few chapters about DQ and MDM, and also introduced SQL Server 2012 DQS. Thomas Redman: Data Quality: The Field Guide – you should start with this book. Thomas Redman is the father of DQ and MDM. Tyler Graham: Microsoft SQL Server 2012 Master Data Services – MDS in depth from a product teammate. Arkady Maydanchik: Data Quality Assessment – data profiling in depth. Tamraparni Dasu, Theodore Johnson: Exploratory Data Mining and Data Cleaning – advanced data profiling with data mining. Forthcoming presentations I am presenting a DQS and MDM seminar at PASS SQL Rally Amsterdam 2013: Wednesday, November 6th, 2013: Enterprise Information Management with SQL Server 2012 – a good kick start to your first DQ and / or MDM project. Courses Data Quality and Master Data Management with SQL Server 2012 – I wrote a 2-day course for SolidQ. If you are interested in this course, which I could also deliver in a shorter seminar way, you can contact your closest SolidQ subsidiary, or, of course, me directly on addresses [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, who are welcome to contact me as well. Start improving the quality of your data now!

    Read the article

  • BI and EPM Landscape

    - by frank.buytendijk
    Most of my blog entries are not about Oracle products, and most of the latest entries are about topics such as IT strategy and enterprise architecture. However, given my background at Gartner, and at Hyperion, I still keep a close eye on what's happening in BI and EPM. One important reason is that I believe there is significant competitive value for organizations in getting BI and EPM right. Davenport and Harris wrote a great book called "Competing on Analytics", in which they explain this in a very engaging and convincing way. At Oracle we have defined the concept of "management excellence", which outlines what organizations have to do to keep or create a competitive edge. It's not only in the business processes, but also in the management processes.

    Recently, Gartner published its 2009 market share report for BI, Analytics, and Performance Management. Gartner identifies the same three segments that Oracle does: (1) CPM Suites (Oracle uses the term Enterprise Performance Management rather than Corporate Performance Management), (2) BI Platform, and (3) Analytic Applications & Performance Management.

    According to Gartner, Oracle's share is increasing, with revenue growing by more than 5%. Oracle currently holds the #2 market share position in the overall BI software space based on total BI software revenue. Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010

    Gartner has ranked Oracle #1 in the CPM Suites worldwide sub-segment based on total BI software revenue, and Oracle is gaining share there as well, with revenue growing by more than 6% in 2009. Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010

    The Analytic Applications & Performance Management sub-segment is more fragmented. It has, for instance, a very large "Other Vendors" category. The largest player traditionally is SAS. Analytic applications are often meant for very specific analytic needs in very specific industry sectors. According to Gartner, among the large vendors Oracle is again the one gaining the most share, with total BI software revenue growth close to 15% in 2009. Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010

    I believe this shows Oracle's integration strategy is working. In fact, integration actually is the innovation. BI and EPM have been siloed technology platforms and application suites for way too long. Managing and measuring performance should be very closely linked to strategy execution, which is the domain of other business application areas such as CRM, ERP, and Supply Chain. BI and EPM are not about "making better decisions" anymore, but are part of a tangible action framework. Furthermore, organizations are getting more serious about ecosystem thinking. They no longer evaluate single tools for different application areas, but buy into a complete ecosystem of hardware, software, and services. The best ecosystem is the one that offers the most options, in environments where uncertainty is high and investments are hard to reverse. The key to successfully managing such an environment is middleware, and BI and EPM are becoming increasingly middleware intensive. In fact, given the horizontal nature of BI and EPM, sitting on top of all business functions and applications, you could call them "upperware". Many vendors are active in the BI and EPM space. Big players can offer a lot, but there are always areas that are covered by specialty vendors. Oracle openly embraces those technologies within the ecosystem as well. Complete, open, and integrated still accurately describes the Oracle product strategy.

    frank

    Read the article

  • Building the Elusive Windows Phone Panorama Control

    When the Windows Phone 7 Developer SDK was released a couple of weeks ago at MIX10, many people noticed the SDK doesn't include a template for a Panorama control. Here at Clarity we decided to build our own Panorama control for use in some of our prototypes, and I figured I would share what we came up with. There have been a couple of implementations of the Panorama control making their way through the interwebs, but I didn't think any of them really nailed the experience shown in the simulation videos.

    One of the key design principles in the UX Guide for Windows Phone 7 is the use of motion. The WP7 OS is fairly stripped of extraneous design elements and makes heavy use of typography and motion to give users the necessary visual cues. Subtle animations and wide layouts help give the user a sense of fluidity and consistency across the phone experience. When building the panorama control I was fairly meticulous in recreating the motion as shown in the videos. The effect shown in the application hubs of the phone is known as a parallax scrolling effect. This pseudo-3D technique has been around in the computer graphics world for quite some time: in essence, the background images move slower than foreground images, creating an illusion of depth in 2D. Here is an example of the traditional use: http://www.mauriciostudio.com/.

    One of the animation gems I've learned while building interactive software is the follow animation. The premise is straightforward: instead of translating content 1:1 with the interaction point, let the content catch up to the mouse or finger. The difference is subtle, but the impact on the smoothness of the interaction is huge. That said, it became the foundation of how I achieved the effect shown below.

    Source Code Available HERE

    Before I briefly describe the approach I took in creating this control, I'll add some **asterisks** to the code below, as my coding skills aren't up to snuff with the rest of my colleagues. This code is meant to be an interpretation of the WP7 panorama control and is not intended to be used in a production application.

    1. Lay out the XAML. The UI consists of three main components: the background image, the title, and the content. You can imagine each of these UI elements existing on its own plane with a corresponding TranslateTransform to create the parallax effect.

    2. Storyboards + procedural animations = sexy. As I mentioned above, creating a fluid experience was at the top of my priorities while building this control. To recreate the smooth scroll effect shown in the video, we need to add some placeholder storyboards that we can manipulate in code to simulate the inertia and snapping. Using the easing functions built into Silverlight helps create a very pleasant interaction.

    3. Handle the manipulation events. With Silverlight 3 we have some new touch event handlers, and the new manipulation events make handling the interactivity pretty straightforward. There are two event handlers that need to be hooked up to enable the dragging and motion effects. First, the ManipulationDelta event (in the original post, the most relevant code is highlighted in pink): here we do some simple math with the manipulation deltas and set the To values of the animations appropriately. Modifying the storyboards dynamically in code helps create a natural feel, something that can't easily be done with storyboards alone.

    And secondly, the ManipulationCompleted event: here we take the final velocities from the event and apply them to the storyboards to create the snapping and scrolling effects. Most of this code determines what the next position of the viewport will be. The interesting part is determining the duration of the animation based on the calculated velocity of the flick gesture. By using velocity as a variable in the duration calculation, we can produce a slow animation for a soft flick and a fast animation for a strong flick. A rough sketch of both handlers follows.
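    Since the original code screenshots are not reproduced in this excerpt, here is a minimal code-behind sketch of the ideas described above: three TranslateTransforms panning at different ratios for the parallax, a short eased "follow" storyboard retargeted from ManipulationDelta, and a velocity-based duration in ManipulationCompleted. This is an interpretation under assumed names (ContentFollowAnimation, FollowStoryboard, SnapAnimation, and so on, all presumed to be defined in the XAML), not the actual source from the post.

        using System;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Input;
        using System.Windows.Media.Animation;

        public partial class PanoramaControl : UserControl
        {
            private const double PanelWidth = 400.0; // fixed panel width, as in the post
            private double offsetX;                  // target horizontal offset of the content

            public PanoramaControl()
            {
                InitializeComponent();
                ManipulationDelta += OnManipulationDelta;
                ManipulationCompleted += OnManipulationCompleted;
            }

            private void OnManipulationDelta(object sender, ManipulationDeltaEventArgs e)
            {
                offsetX += e.DeltaManipulation.Translation.X;

                // Follow animation: instead of translating 1:1, retarget short eased
                // animations so the content catches up to the finger. The title and
                // background pan at a fraction of the content offset (parallax).
                FollowStoryboard.Stop();
                ContentFollowAnimation.To = offsetX;
                TitleFollowAnimation.To = offsetX * 0.5;
                BackgroundFollowAnimation.To = offsetX * 0.25;
                FollowStoryboard.Begin();
            }

            private void OnManipulationCompleted(object sender, ManipulationCompletedEventArgs e)
            {
                double velocity = e.FinalVelocities.LinearVelocity.X;

                // Snap to the nearest panel, biased by the direction of the flick.
                offsetX = Math.Round((offsetX + Math.Sign(velocity) * PanelWidth / 2.0)
                                     / PanelWidth) * PanelWidth;

                // A strong flick gives a short, fast animation; a soft flick a slower
                // one. Clamp so the snap never feels instantaneous or sluggish.
                double seconds = Math.Max(0.2, Math.Min(0.8,
                    600.0 / Math.Max(Math.Abs(velocity), 1.0)));

                SnapStoryboard.Stop();
                SnapAnimation.To = offsetX;
                SnapAnimation.Duration = new Duration(TimeSpan.FromSeconds(seconds));
                SnapStoryboard.Begin();
            }
        }

    Retargeting the storyboards on every delta is what produces the eased, catching-up feel; assigning the TranslateTransform directly would track the finger rigidly and lose the fluidity the post is after.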
    Challenges to the Reader

    There are a couple of things I didn't have time to implement in this control, and I would love to see other WPF/Silverlight approaches.

    1. A good mechanism for deciphering when the user is manipulating the content within the panorama control versus the panorama itself. In other words, being able to accurately determine what is a flick and what is a click.

    2. Dynamically sizing the panorama control based on the width of its content. Right now each control panel is 400px; ideally the panel items would be measured and the panorama control would update its size accordingly.

    3. Background and content wrapping. The WP7 UX guidelines specify that the content and background should wrap at the end of the list. In my code I restrict the drag at the ends of the list (like the iPhone). It would be interesting to see how wrapping would affect the scroll experience.

    Well, it's been fun building this control, and if you use it I'd love to know what you think. You can download the source HERE or from the Expression Gallery.

    Erik Klimczak | [email protected] | twitter.com/eklimcz

    Read the article

  • Silverlight Cream for March 17, 2010 -- #814

    - by Dave Campbell
    In this Issue: Tim Heuer(-2-), René Schulte(-2-), Bart Czernicki, Mark Monster, Pencho Popadiyn, Alex Golesh, Phil Middlemiss, and Yochay Kiriaty.

    Shoutouts: Check out the new themes, and Tim Heuer's poetry skills: SNEAK PEEK: New Silverlight application themes. I learned to program Windows 3.1 from reading Charles Petzold's book, and here we are again: Free ebook: Programming Windows Phone 7 Series (DRAFT Preview). Here's a blog you're going to want to watch, and first up on the blog tonight is links to the complete set of MIX10 phone sessions: The Windows Phone Developer Blog.

    First let me get a couple of things out of my system... "Holy Crap, it's March 17th already" and "Holy Crap, we're all Windows Phone developers!" I'm sure both of those were old news to anyone that's not been in a coma since Monday, but I've been a tad busy here at #MIX10. I'm not complainin'... I'm just sayin'.

    From SilverlightCream.com:

    Getting Started with Silverlight and Windows Phone 7 Development
    With any new Silverlight technology we have to begin with Tim Heuer... this is Tim's announcement of Silverlight on the Windows Phone 7 Series ('cmon, can I call it a "Silverlight Phone"? ... please?) ... hope I didn't type that out loud :) ... so... in case you fell asleep Sunday and just woke up, Tim let the dogs out on this and we could all talk about it. In all seriousness, bookmark this page... lots of good links.

    A guide to what has changed in the Silverlight 4 RC
    Continuing the 'bookmark this page' thought... Tim Heuer also has one up on what the heck is in the Silverlight 4 RC they released on Monday... check this out... really good stuff in there... and a great post detailing it all.

    The Silverlight 4 Release Candidate
    René Schulte has a good post up detailing the new stuff in the Silverlight 4 RC, with special attention paid to the webcam/mic and AsyncCaptureImage.

    Let it ring - WriteableBitmapEx for Windows Phone
    René Schulte has a Windows Phone post up as well, introducing the WriteableBitmapEx library for Windows Phone... how cool is that??

    Silverlight for Windows Phone 7 is NOT the same full Silverlight 3 RTM
    Bart Czernicki dug into the docs to expose some of the differences between Silverlight for the Windows Phone and Silverlight 3. If you've been developing in SL3 and want to also do Phone, check out this post and his resource listings.

    Trying to sketch a Windows Phone 7 application
    Mark Monster tried to SketchFlow a Windows Phone app and hit some problems... if anyone has thoughts, contribute on his blog page.

    Using Reactive Extensions in Silverlight – part 2 – Web Services
    Pencho Popadiyn has part 2 of his tutorial on Rx, and this one concentrates on asynchronous service calls.

    Silverlight 4 Quick Tip: Out-Of-Browser Improvements
    This post from Alex Golesh is a little weird since he was sitting next to me in a session at MIX10 when he submitted it :) ... a good update on what's new in OOB in the RC.

    Turning a round button into a rounded panel
    I like Phil Middlemiss' other title for this post: "A Scalable Orb Panel-Button-Thingy"... this is a very cool resizing button that works amazingly similar to the resizable skinned dialogs I did in Win32!... very cool, Phil!

    Go Get It – The Windows Phone Developer Training Kit
    Did you know there was a Windows Phone Training Kit with Hands-on Labs? Yochay Kiriaty at the Windows Phone Developer Blog wrote about it... I pulled it down, and it looks really good!

    Stay in the 'Light!

    Read the article

< Previous Page | 260 261 262 263 264 265 266 267 268 269 270 271  | Next Page >