Search Results

Search found 24705 results on 989 pages for 'tally table'.


  • object references an unsaved transient instance

    - by developer
    Hi, I have 2 tables, user and userprofile, both with almost identical fields. user table references userprofile table by primary key ID. My requirement is that on click of a button I need to dump user table record to userprofile table. Now for a particular user table, if there is a corresponding userprofile entry, I am successfully able to dump the data, but if there is no record in userprofile table then I need to create a new record by dumping all the data. My problem is that I am able to update the data when the record is present in userprofile table, but in the case wherein I have to create a new record I get the below error "object references an unsaved transient instance - save the transient instance before flushing". `<class name="User"> <id name="ID" type="Int32"> <generator class="native" /> </id> <many-to-one name="Pid" class="UserProfile" /> </class>` UserProfile is another table and Pid above references the Primary key ID of UserProfile table.
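    A common fix, sketched below on the assumption that the NHibernate ISession is used directly (variable names are illustrative): either save the new transient UserProfile before flushing the User, or let the mapping cascade the save.

    // Option 1: persist the transient UserProfile explicitly before the User.
    var profile = new UserProfile();
    // ...copy the fields from the user record into profile...
    session.Save(profile);        // profile is no longer transient
    user.Pid = profile;
    session.SaveOrUpdate(user);
    session.Flush();

    <!-- Option 2: let NHibernate cascade the save through the association. -->
    <many-to-one name="Pid" class="UserProfile" cascade="save-update" />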

    Read the article

  • (SQL Server 2005, C#.NET) - I want just the insert query for a temp table.

    - by John Stephen
    Hi..I am using C#.Net and Sql Server ( Windows Application ). I had created a temporary table. When a button is clicked, temporary table (#tmp_emp_details) is created. I am having another button called "insert Values" and also 5 textboxes. The values that are entered in the textbox are used and whenever com.ExecuteNonQuery(); line comes, it throws an error message called "Invalid object name '#tbl_emp_answer'.". Below is the set of code..Please give me a solution. Code for insert (in insert value button): private void btninsertvalues_Click(object sender, EventArgs e) { username = txtusername.Text; examloginid = txtexamloginid.Text; question = txtquestion.Text; answer = txtanswer.Text; useranswer = txtanswer.Text; SqlConnection con = new SqlConnection("Data Source=.;Initial Catalog=tempdb;Integrated Security=True;"); SqlCommand com = new SqlCommand("Insert into #tbl_emp_answer values('"+username+"','"+examloginid+"','"+question+"','"+answer+"','"+useranswer+"')", con); con.Open(); com.ExecuteNonQuery(); con.Close(); }
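    A likely cause: a local temp table such as #tbl_emp_answer is only visible to the connection that created it, so creating it through one SqlConnection and inserting through another raises "Invalid object name". A minimal sketch, with the column names and types assumed from the textboxes above, that creates and fills the temp table on the same open connection and uses parameters instead of string concatenation:

    using (SqlConnection con = new SqlConnection("Data Source=.;Initial Catalog=tempdb;Integrated Security=True;"))
    {
        con.Open();
        // Create the temp table and insert on the SAME connection so it stays in scope.
        using (SqlCommand create = new SqlCommand(
            "CREATE TABLE #tbl_emp_answer (username varchar(50), examloginid varchar(50), " +
            "question varchar(500), answer varchar(500), useranswer varchar(500))", con))
        {
            create.ExecuteNonQuery();
        }
        using (SqlCommand insert = new SqlCommand(
            "INSERT INTO #tbl_emp_answer VALUES (@u, @e, @q, @a, @ua)", con))
        {
            insert.Parameters.AddWithValue("@u", txtusername.Text);
            insert.Parameters.AddWithValue("@e", txtexamloginid.Text);
            insert.Parameters.AddWithValue("@q", txtquestion.Text);
            insert.Parameters.AddWithValue("@a", txtanswer.Text);
            insert.Parameters.AddWithValue("@ua", txtanswer.Text);
            insert.ExecuteNonQuery();
        }
    } // the local temp table is dropped when this connection closes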

    Read the article

  • Using placeholders/variables in a sed command

    - by jesse_galley
    I want to store a specific part of a matched result as a variable to be used for replacement later. I would like to keep this in a one-liner instead of finding the variable I need beforehand. When configuring Apache and using mod_rewrite, you can specify parts of a pattern to be used as variables, like this: RewriteRule ^www.example.com/page/(.*)$ http://www.example.com/page.php?page=$1 [R=301,L] The part of the pattern match that's contained inside the parentheses is stored as $1 for use later. So if the URL was www.example.com/page/home, it would be replaced with www.example.com/page.php?page=home. So the "home" part of the match was saved in $1 because it was the part of the pattern inside the parentheses. I want something like this functionality with a sed command. I need to automatically replace many strings in a SQL dump file, to add DROP TABLE IF EXISTS commands before each CREATE TABLE, but I need to know the table name to do this, so if the dump file contains something like: ... CREATE TABLE `orders` ... I need to run something like: cat dump.sql | sed "s/CREATE TABLE `(.*)`/DROP TABLE IF EXISTS $1\N CREATE TABLE `$1`/g" to get the result of: ... DROP TABLE IF EXISTS `orders` CREATE TABLE `orders` ... I'm using the mod_rewrite syntax in the sed command as a logical example of what I'm trying to do. Any suggestions?
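    For reference, sed uses \1 rather than $1 for the first capture group. A sketch assuming GNU sed (-E for extended regexes, \n in the replacement producing a newline); single quotes keep the shell from treating the backticks as command substitution:

    sed -E 's/CREATE TABLE `([^`]+)`/DROP TABLE IF EXISTS `\1`;\nCREATE TABLE `\1`/' dump.sql > dump_with_drops.sql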

    Read the article

  • How would I output a message that says "values inserted to table"?

    - by ranlo
    <?php error_reporting(0); $con=mysql_connect("localhost","root",""); if (!$con) { die('could not connect:'.mysql_error()); } mysql_select_db("final?orgdocs",$con); $org_name = $_POST["org_name"]; $org_type = $_POST["org_type"]; $org_code = $_POST["org_code"]; $description = $_POST["description"]; $stmt = "INSERT INTO organization VALUES('".$org_name."','".$org_type."','".$org_code."','".$description."')"; $result = mysql_query("SELECT * FROM organization WHERE org_name = '$org_name' "); echo '<TABLE BORDER = "1">'; $result1 = $result; while ($row = mysql_fetch_array($result1)){ echo '<TR>'.'<TD>'.'Organization Name'.'</TD>'.'<TD>'.'Organization Type'.'</TD>'.'<TD>'.'Organization Code'.'</TD>'.'<TD>'.'Description'.'</TD>'.'<TD>'.'Constitution'.'</TD>'; echo '</TR>'; echo '<TR>'.'<TD>'.$row['org_name'].'</TD>'.'<TD>'.$row['org_type'].'</TD>'; echo '<TD>'.$row['org_code'].'</TD>'.'<TD>'.$row['description'].'</TD>'.'<TD>'; echo '</TR>'; } echo '</TABLE>'; ?>
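    One issue in the code above: $stmt is built but never executed. A minimal sketch using the same (deprecated) mysql_* API that runs the INSERT and prints the requested message:

    // Run the INSERT built above and report the outcome.
    if (mysql_query($stmt, $con)) {
        echo 'Values inserted to table';
    } else {
        echo 'Insert failed: ' . mysql_error($con);
    }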

    Read the article

  • How to check dates don't overlap in a table using TSQL.

    - by Jon
    I have a table with start and finish datetimes that I need to determine if any overlap, and I'm not quite sure the best way to go. Initially I was thinking of using a nested cursor as shown below, which does work, however I'm checking the same records against each other twice and I'm sure it is not very efficient. E.g. this table would result in an overlap:

    id   start                    end
    -------------------------------------------------------
    1    2009-10-22 10:19:00.000  2009-10-22 11:40:00.000
    2    2009-10-22 10:31:00.000  2009-10-22 13:34:00.000
    3    2009-10-22 16:31:00.000  2009-10-22 17:34:00.000

    Declare @Start datetime, @End datetime, @OtherStart datetime, @OtherEnd datetime, @id int, @endCheck bit
    Set @endCheck = 0
    DECLARE Cur1 CURSOR FOR select id, [start], [end] from table1
    OPEN Cur1
    FETCH NEXT FROM Cur1 INTO @id, @Start, @End
    WHILE @@FETCH_STATUS = 0 AND @endCheck = 0
    BEGIN
      -- Get a cursor on all the other records
      DECLARE Cur2 CURSOR FOR select [start], [end] from table1 where id != @id
      OPEN Cur2
      FETCH NEXT FROM Cur2 INTO @OtherStart, @OtherEnd
      WHILE @@FETCH_STATUS = 0 AND @endCheck = 0
      BEGIN
        if ( @Start > @OtherStart AND @Start < @OtherEnd OR @End > @OtherStart AND @End < @OtherEnd )
           or ( @OtherStart > @Start AND @OtherStart < @End OR @OtherEnd > @Start AND @OtherEnd < @End )
        BEGIN
          SET @endCheck = 1
        END
        FETCH NEXT FROM Cur2 INTO @OtherStart, @OtherEnd
      END
      CLOSE Cur2
      DEALLOCATE Cur2
      FETCH NEXT FROM Cur1 INTO @id, @Start, @End
    END
    CLOSE Cur1
    DEALLOCATE Cur1
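    A set-based alternative to the nested cursors, sketched against the same table1 columns: two rows overlap when each one starts before the other ends, and joining on t1.id < t2.id avoids checking every pair twice. Tweak the comparison operators if touching or identical intervals should not count as an overlap.

    SELECT t1.id AS id1, t2.id AS id2
    FROM table1 t1
    JOIN table1 t2
      ON t1.id < t2.id                 -- each pair is checked only once
     AND t1.[start] < t2.[end]
     AND t2.[start] < t1.[end];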

    Read the article

  • How can I pull multiple rows from a MySQL table and use all of them automatically for the same thing

    - by Rob
    Basically, I have multiple URL's stored in a MySQL table. I want to pull those URLs from the table and have cURL connect to all of them. Currently I've been storing the URL's in the local script, but I've added a new page that I can add and remove them from the database, and I'd like the page to reflect it appropriately. Here is what I currently have: $sites[0]['url'] = "http://example0.com "; $sites[1]['url'] = "http://example1.com"; $sites[2]['url'] = "http://example2.com"; $sites[3]['url'] = "http://example3.com"; foreach($sites as $s) { // Now for some cURL to run it. $ch = curl_init($s['url']); //load the urls and send GET data curl_setopt($ch, CURLOPT_TIMEOUT, 2); //No need to wait for it to load. Execute it and go. curl_exec($ch); //Execute curl_close($ch); //Close it off } Now I assume it can't be too amazingly difficult to do, I just don't know how. So if you could point me in the right direction, I'd be grateful. But if you supply me with some code, please comment it appropriately so that I can understand what each line is doing.
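    A sketch of pulling the URLs from the database instead of hard-coding them; it assumes a `sites` table with a `url` column and the old mysql_* API, so adjust the names and connection handling to your setup:

    // Fetch every URL from the table and hit each one with cURL.
    $result = mysql_query("SELECT url FROM sites");
    while ($row = mysql_fetch_assoc($result)) {
        $ch = curl_init($row['url']);          // load the url and send GET data
        curl_setopt($ch, CURLOPT_TIMEOUT, 2);  // no need to wait for it to load
        curl_exec($ch);                        // execute
        curl_close($ch);                       // close it off
    }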

    Read the article

  • Database design with BLOBs

    - by mmuthu
    Hi, I have a situation where I need to store binary data in the database as a BLOB column. There are three different tables in my database in which I need to store BLOB data for each record. Not every record will have BLOB data all the time; it is time and user based. Table one will have to store *.doc files for almost all records. Table two will have to store *.xml optionally. Table three will have to store images (not sure of the frequency, etc.). Now my question is whether it is a good idea to maintain a separate table to store the BLOB data, pointing it to the respective tables' PKs (yes, there will be no FKs, and I'm assuming the program will maintain it). It would be something like below: BLOB|PK_ID|TABLE_NAME Alternatively, is it a good idea to keep the BLOB column in the respective tables? As far as my application runtime is concerned, table two will be read very frequently, though the BLOB column will not be required, and its records will get deleted frequently. Similarly, the BLOB data in the other tables will not be accessed frequently; all of the BLOB content will be read on an on-demand basis. I'm thinking the first approach will work better for me. What do you guys think? Btw, I'm using Oracle.
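    For illustration, a rough Oracle DDL sketch of the separate-table option described above (names and sizes are assumptions):

    CREATE TABLE blob_store (
      pk_id      NUMBER        NOT NULL,  -- PK value of the owning row
      table_name VARCHAR2(30)  NOT NULL,  -- which of the three tables owns it
      content    BLOB,
      CONSTRAINT blob_store_pk PRIMARY KEY (pk_id, table_name)
    );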

    Read the article

  • Create an index only on certain rows in mysql

    - by dhruvbird
    So, I have this funny requirement of creating an index on a table only on a certain set of rows. This is what my table looks like: USER: userid, friendid, created, blah0, blah1, ..., blahN Now, I'd like to create an index on: (userid, friendid, created) but only on those rows where userid = friendid. The reason being that this index is only going to be used to satisfy queries where the WHERE clause contains "userid = friendid". There will be many rows where this is NOT the case, and I really don't want to waste all that extra space on the index. Another option would be to create a table (query table) which is populated on insert/update of this table and create a trigger to do so, but again I am guessing an index on that table would mean that the data would be stored twice. How does mysql store Primary Keys? I mean is the table ordered on the Primary Key or is it ordered by insert order and the PK is like a normal unique index? I checked up on clustered indexes (http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html), but it seems only InnoDB supports them. I am using MyISAM (I mention this because then I could have created a clustered index on these 3 fields in the query table). I am basically looking for something like this: ALTER TABLE USERS ADD INDEX (userid, friendid, created) WHERE userid=friendid
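    MySQL (and MyISAM in particular) has no filtered/partial index syntax like that last statement, so the trigger-maintained "query table" mentioned above is the usual workaround. A rough sketch, with illustrative names:

    CREATE TABLE user_self (
      userid   INT      NOT NULL,
      friendid INT      NOT NULL,
      created  DATETIME NOT NULL,
      INDEX idx_self (userid, friendid, created)
    ) ENGINE=MyISAM;

    DELIMITER //
    CREATE TRIGGER users_ai AFTER INSERT ON USERS
    FOR EACH ROW
    BEGIN
      -- Only copy the rows the partial index would have covered.
      IF NEW.userid = NEW.friendid THEN
        INSERT INTO user_self (userid, friendid, created)
        VALUES (NEW.userid, NEW.friendid, NEW.created);
      END IF;
    END//
    DELIMITER ;

    A matching AFTER UPDATE (and DELETE) trigger would be needed if those columns can change.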

    Read the article

  • Implementing search functionality with multiple optional parameters against a database table.

    - by quarkX
    Hello, I would like to check if there is a preferred design pattern for implementing search functionality with multiple optional parameters against a database table where access to the database should be only via stored procedures. The targeted platform is .NET with a SQL 2005/2008 backend, but I think this is a pretty generic problem. For example, we have a customer table and we want to provide search functionality to the UI for different parameters, like customer Type, customer State, customer Zip, etc., and all of them are optional and can be selected in any combination. In other words, the user can search by customerType only, or by customerType and customerZip, or any other possible combination. There are several available design approaches, but all of them have some disadvantages, and I would like to ask if there is a preferred design among them or if there is another approach.
    1. Generate the SQL WHERE clause dynamically in the business tier, based on the search request from the UI, and pass it to a stored procedure as a parameter, something like @Where = 'where CustomerZip = 111111'. Inside the stored procedure build a dynamic SQL statement and execute it with sp_executesql. Disadvantage: dynamic SQL, SQL injection.
    2. Implement a stored procedure with multiple input parameters representing the search fields from the UI, and use the following construction to select records only for the requested fields in the WHERE clause (a sketch of this approach is shown after this list): WHERE (CustomerType = @CustomerType OR @CustomerType IS NULL) AND (CustomerZip = @CustomerZip OR @CustomerZip IS NULL) AND ... Disadvantage: possible performance issues for the SQL.
    3. Implement a separate stored procedure for each search parameter combination. Disadvantage: the number of stored procedures will increase rapidly with the number of search parameters, and repeated code.
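    A sketch of option 2 above (table and column names are assumptions); adding OPTION (RECOMPILE) lets the optimizer compile the statement for the actual parameter values, which mitigates the performance concern of the OR-IS-NULL pattern:

    CREATE PROCEDURE dbo.SearchCustomers
      @CustomerType  INT         = NULL,
      @CustomerState CHAR(2)     = NULL,
      @CustomerZip   VARCHAR(10) = NULL
    AS
    BEGIN
      SELECT *
      FROM   dbo.Customer
      WHERE  (@CustomerType  IS NULL OR CustomerType  = @CustomerType)
        AND  (@CustomerState IS NULL OR CustomerState = @CustomerState)
        AND  (@CustomerZip   IS NULL OR CustomerZip   = @CustomerZip)
      OPTION (RECOMPILE);
    END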

    Read the article

  • C# - how to get trailing spaces from the end of a varchar(513) field while exporting a SQL table to a flat file

    - by svon
    How do I get empty spaces from the end of a varchar(513) field while exporting data from SQL table to a flat file. I have a console application. Here is what I am using to export a SQL table having only one column of varchar(513) to a flat file. But I need to get all the 513 characters including spaces at the end. How do I change this code to incorporate that. Thanks { var destination = args[0]; var command = string.Format("Select * from {0}", Validator.Check(args[1])); var connectionstring = string.Format("Data Source={0}; Initial Catalog=dbname;Integrated Security=SSPI;", args[2]); var helper = new SqlHelper(command, CommandType.Text, connectionstring); using (StreamWriter writer = new StreamWriter(destination)) using (IDataReader reader = helper.ExecuteReader()) { while (reader.Read()) { Object[] values = new Object[reader.FieldCount]; int fieldCount = reader.GetValues(values); for (int i = 0; i < fieldCount; i++) writer.Write(values[i]); writer.WriteLine(); } writer.Close(); }
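    One way to keep the full width is to pad each value back out to the declared column size before writing. A sketch of the inner loop, assuming the single varchar(513) column described above:

    for (int i = 0; i < fieldCount; i++)
    {
        // Pad to the declared width so trailing spaces are preserved in the flat file.
        string s = values[i] == DBNull.Value ? string.Empty : values[i].ToString();
        writer.Write(s.PadRight(513));
    }
    writer.WriteLine();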

    Read the article

  • Toorcon14

    - by danx
    Toorcon 2012 Information Security Conference San Diego, CA, http://www.toorcon.org/ Dan Anderson, October 2012 It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcon's I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks, below, which I attended and interested me. MalAndroid—the Crux of Android Infections, Aditya K. Sood Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro Privacy at the Handset: New FCC Rules?, Valkyrie Hacking Measured Boot and UEFI, Dan Griffin You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold Accessibility and Security, Anna Shubina Stop Patching, for Stronger PCI Compliance, Adam Brand McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall MalAndroid—the Crux of Android Infections Aditya K. Sood, IOActive, Michigan State PhD candidate Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most have unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, security scanner, app permissions, and screened Android app market. The Android permission checker has fine-grain resource control, policy enforcement. Android static analysis also includes a static analysis app checker (bouncer), and a vulnerablity checker. What security problems does Android have? User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (the're not aware, not paranoid). All they want is functionality, extensibility, mobility Android had no "proper" encryption before Android 3.0 No built-in protection against social engineering and web tricks Alternative Android app markets are unsafe. Simply visiting some markets can infect Android Aditya classified Android Malware types as: Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app. Or Android Gold Dream (game), which uploads user files stealthy manner to a remote location. Type K—Kernel. Exploits underlying Linux libraries or kernel Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors What are the threats from Android malware? These incude leak info (contacts), banking fraud, corporate network attacks, malware advertising, malware "Hackivism" (the promotion of social causes. For example, promiting specific leaders of the Tunisian or Iranian revolutions. Android malware is frequently "masquerated". That is, repackaged inside a legit app with malware. To avoid detection, the hidden malware is not unwrapped until runtime. 
The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there's not many around. What they do is hijack the Android init framework—alteering system programs and daemons, then deletes itself. For example, the DKF Bootkit (China). Android App Problems: no code signing! all self-signed native code execution permission sandbox — all or none alternate market places no robust Android malware detection at network level delayed patch process Programming Weird Machines with ELF Metadata Rebecca "bx" Shapiro, Dartmouth College, NH https://github.com/bx/elf-bf-tools @bxsays on twitter Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). "Weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that: return to weird location, does SQL injection, corrupts the heap. Bx then talked about using ELF metadata as (an uintended) "weird machine". Some ELF background: A compiler takes source code and generates a ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Tables—works with GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory) For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses, by tricking runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements: instructions: Add, move, jump if not 0 (jnz) Think of symbol table entries as "registers" symbol table value is "contents" immediate values are constants direct values are addresses (e.g., 0xdeadbeef) move instruction: is a relocation table entry add instruction: relocation table "addend" entry jnz instruction: takes multiple relocation table entries The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on stack at "end" and uses IFUNC table entries (containing function pointer address). The ELF weird machine, called "Brainfu*k" (BF) has: 8 instructions: pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print. Three registers - 3 registers Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call dropping privilege, so it remained as root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example: $ whoami bx $ ping localhost -t backdoor.sh # executes backdoor $ whoami root $ The modified code increased from 285948 bytes to 290209 bytes. 
A BF tool compiles "executable" by modifying the symbol table in an existing ELF executable. The tool modifies .dynsym and .rela.dyn table, but not code or data. Privacy at the Handset: New FCC Rules? "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate) Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround to "recollate" data to other databases to identify customers indirectly. The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is a FTC responsibility (not FCC). FTC is trying to improve mobile app privacy, but FTC has no authority over carrier / customer relationships. As a side note, Apple iPhones are unique as carriers have extra control over iPhones they don't have with other smartphones. As a result iPhones may be more regulated. Who are the consumer advocates? Everyone knows EFF, but EPIC (Electrnic Privacy Info Center), although more obsecure, is more relevant. What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need incentive to grant users control for those who want it, by holding them liable and responsible for breeches on their clock. Location information should be added current CPNI privacy protection, and require "Pen/trap" judicial order to obtain (and would still be a lower standard than 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the Whitehouse. There will probably be new regulation soon, and enforcement will be a problem, but consumers will still have some benefit. Hacking Measured Boot and UEFI Dan Griffin, JWSecure, Inc., Seattle, @JWSdan Dan talked about hacking measured UEFI boot. First some terms: UEFI is a boot technology that is replacing BIOS (has whitelisting and blacklisting). UEFI protects devices against rootkits. TPM - hardware security device to store hashs and hardware-protected keys "secure boot" can control at firmware level what boot images can boot "measured boot" OS feature that tracks hashes (from BIOS, boot loader, krnel, early drivers). "remote attestation" allows remote validation and control based on policy on a remote attestation server. Microsoft pushing TPM (Windows 8 required), but Google is not. Intel TianoCore is the only open source for UEFI. Dan has Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support already on enterprise-class machines. UEFI Weaknesses. 
UEFI toolkits are evolving rapidly, but UEFI has weaknesses: assume user is an ally trust TPM implicitly, and attached to computer hibernate file is unprotected (disk encryption protects against this) protection migrating from hardware to firmware delays in patching and whitelist updates will UEFI really be adopted by the mainstream (smartphone hardware support, bank support, apathetic consumer support) You Can't Buy Security: Building the Open Source InfoSec Program Boris Sverdlik, ISDPodcast.com co-host Boris talked about problems typical with current security audits. "IT Security" is an oxymoron—IT exists to enable buiness, uptime, utilization, reporting, but don't care about security—IT has conflict of interest. There's no Magic Bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not solution (security not a checklist)—what's needed is experience and adaptability—need soft skills. Step 1: First thing is to Google and learn the company end-to-end before you start. Get to know the management team (not IT team), meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitive risk assessment is a myth (e.g. AV*EF-SLE). Learn different Business Units, legal/regulatory obligations, learn the business and where the money is made, verify company is protected from script kiddies (easy), learn sensitive information (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering). Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, physical security. Step 3: Implementation: keep it simple stupid. Open source, although useful, is not free (implementation cost). Access controls with authentication & authorization for local and remote access. MS Windows has it, otherwise use OpenLDAP, OpenIAM, etc. Application security Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run different risk level apps on same system. Assume host/client compromised and use app-level security control. Network security VLAN != segregated because there's too many workarounds. Use explicit firwall rules, active and passive network monitoring (snort is free), disallow end user access to production environment, have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth and SSL does not mean "safe." Operational Controls Have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down log (as it has everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVas and Seccubus are free. Security awareness The reality is users will always click everything. Build real awareness, not compliance driven checkbox, and have it integrated into the culture. 
Pen test by crowd sourcing—test with logging COSSP http://www.cossp.org/ - Comprehensive Open Source Security Project What Journalists Want: The Investigative Reporters' Perspective on Hacking Dave Maas, San Diego CityBeat Jason Leopold, Truthout.org The difference between hackers and investigative journalists: For hackers, the motivation varies, but method is same, technological specialties. For investigative journalists, it's about one thing—The Story, and they need broad info-gathering skills. J-School in 60 Seconds: Generic formula: Person or issue of pubic interest, new info, or angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence. Media awareness of hackers and trends: journalists becoming extremely aware of hackers with congressional debates (privacy, data breaches), demand for data-mining Journalists, use of coding and web development for Journalists, and Journalists busted for hacking (Murdock). Info gathering by investigative journalists include Public records laws. Federal Freedom of Information Act (FOIA) is good, but slow. California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. Often need to sue (especially FBI). CPRA is faster, and requests can be vague. Dumps and leaks (a la Wikileaks) Journalists want: leads, protecting ourselves, our sources, and adapting tools for news gathering (Google hacking). Anonomity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web log). They don't trust encryption, want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it. Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem Anna Shubina, Dartmouth College Anna talked about how accessibility and security are related. Accessibility of digital content (not real world accessibility). mostly refers to blind users and screenreaders, for our purpose. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example MS Word has executable format—it's not a document exchange format—more dangerous than PDF or HTML. Accessibility is often the first and maybe only sanity check with parsing. They have no choice because someone may want to read what you write. Google, for example, is very particular about web browser you use and are bad at supporting other browsers. Uses JavaScript instead of links, often requiring mouseover to display content. PDF is a security nightmare. Executible format, embedded flash, JavaScript, etc. 15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging, to help with accessibility. But no PDF checker checks for incorrect tags, untagged content, or validates lists or tables. None check executable content at all. The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic Security says complicated data formats are hard to parse and cannot be solved due to the Halting Problem. 
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust" Not much help though, except for "Robust", but here's some gems: * all information should be parsable (paraphrasing) * if not parsable, cannot be converted to alternate formats * maximize compatibility in new document formats Executible webpages are bad for security and accessibility. They say it's for a better web experience. But is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input. Solutions: Accessibility and security problems come from same source Expose "better user experience" myth Keep your corner of Internet parsable Remember "Halting Problem"—recognize false solutions (checking and verifying tools) Stop Patching, for Stronger PCI Compliance Adam Brand, protiviti @adamrbrand, http://www.picfun.com/ Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (a IT guy), 960 hours/year was spent patching POSs in 850 restaurants. Often applying some patches make no sense (like fixing a browser vulnerability on a server). "Scanner worship" is overuse of vulnerability scanners—it gives a warm and fuzzy and it's simple (red or green results—fix reds). Scanners give a false sense of security. In reality, breeches from missing patches are uncommon—more common problems are: default passwords, cleartext authentication, misconfiguration (firewall ports open). Patching Myths: Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead). Myth 2: vendor decides what's critical (also PCI §6.1). But §6.2 requires user ranking of vulnerabilities instead. Myth 3: scan and rescan until it passes. But PCI §11.2.1b says this applies only to high-risk vulnerabilities. Adam says good recommendations come from NIST 800-40. Instead use sane patching and focus on what's really important. From NIST 800-40: Proactive: Use a proactive vulnerability management process: use change control, configuration management, monitor file integrity. Monitor: start with NVD and other vulnerability alerts, not scanner results. Evaluate: public-facing system? workstation? internal server? (risk rank) Decide:on action and timeline Test: pre-test patches (stability, functionality, rollback) for change control Install: notify, change control, tickets McAfee Secure & Trustmarks — a Hacker's Best Friend Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada "McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if they pass their remote scanning. The problem is a removal of trustmarks act as flags that you're vulnerable. Easy to view status change by viewing McAfee list on website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If their certification image changes to a 1x1 pixel image, then they are longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is change in TrustGuard status is a flag for hackers to attack your site. Entire idea of seals is silly—you're raising a flag saying if you're vulnerable.

    Read the article

  • setCurrentRowWithKey and setCurrentRowWithKeyValue

    - by raghu.yadav
    Good demo by Shay on how to use setCurrentRowWithKeyValue to synchronize the master and detail of the same table. Example: an Employee master table list and an EmployeeDetail form to edit records. However, there are many ways to achieve the same use case. You can use the clickToEdit property on the table row to edit records in place within the table, and there is also a feature to show more for a few columns in table tools. But the objective here is to see how and where we can make use of setCurrentRowWithKey and setCurrentRowWithKeyValue. Here is the link that explains how we can make use of this; more links to be added.
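    For completeness, a sketch of invoking the built-in operation programmatically from a managed bean (the binding and parameter names are assumptions, check your page definition):

    // imports: oracle.adf.model.BindingContext, oracle.binding.BindingContainer, oracle.binding.OperationBinding
    // Assumes a setCurrentRowWithKeyValue operation binding exists in the page definition.
    BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
    OperationBinding op = bindings.getOperationBinding("setCurrentRowWithKeyValue");
    op.getParamsMap().put("rowKey", selectedEmployeeId);  // key value of the row to select
    op.execute();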

    Read the article

  • SQL SERVER – Fundamentals of Columnstore Index

    - by pinaldave
    There are two kind of storage in database. Row Store and Column Store. Row store does exactly as the name suggests – stores rows of data on a page – and column store stores all the data in a column on the same page. These columns are much easier to search – instead of a query searching all the data in an entire row whether the data is relevant or not, column store queries need only to search much lesser number of the columns. This means major increases in search speed and hard drive use. Additionally, the column store indexes are heavily compressed, which translates to even greater memory and faster searches. I am sure this looks very exciting and it does not mean that you convert every single index from row store to column store index. One has to understand the proper places where to use row store or column store indexes. Let us understand in this article what is the difference in Columnstore type of index. Column store indexes are run by Microsoft’s VertiPaq technology. However, all you really need to know is that this method of storing data is columns on a single page is much faster and more efficient. Creating a column store index is very easy, and you don’t have to learn new syntax to create them. You just need to specify the keyword “COLUMNSTORE” and enter the data as you normally would. Keep in mind that once you add a column store to a table, though, you cannot delete, insert or update the data – it is READ ONLY. However, since column store will be mainly used for data warehousing, this should not be a big problem. You can always use partitioning to avoid rebuilding the index. A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data traditionally has been stored. The difference between column store and row store approaches is illustrated below: In case of the row store indexes multiple pages will contain multiple rows of the columns spanning across multiple pages. In case of column store indexes multiple pages will contain multiple single columns. This will lead only the columns needed to solve a query will be fetched from disk. Additionally there is good chance that there will be redundant data in a single column which will further help to compress the data, this will have positive effect on buffer hit rate as most of the data will be in memory and due to same it will not need to be retrieved. Let us see small example of how columnstore index improves the performance of the query on a large table. As a first step let us create databaseset which is large enough to show performance impact of columnstore index. The time taken to create sample database may vary on different computer based on the resources. 
    USE AdventureWorks
    GO
    -- Create New Table
    CREATE TABLE [dbo].[MySalesOrderDetail](
    [SalesOrderID] [int] NOT NULL,
    [SalesOrderDetailID] [int] NOT NULL,
    [CarrierTrackingNumber] [nvarchar](25) NULL,
    [OrderQty] [smallint] NOT NULL,
    [ProductID] [int] NOT NULL,
    [SpecialOfferID] [int] NOT NULL,
    [UnitPrice] [money] NOT NULL,
    [UnitPriceDiscount] [money] NOT NULL,
    [LineTotal] [numeric](38, 6) NOT NULL,
    [rowguid] [uniqueidentifier] NOT NULL,
    [ModifiedDate] [datetime] NOT NULL
    ) ON [PRIMARY]
    GO
    -- Create clustered index
    CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
    ( [SalesOrderDetailID])
    GO
    -- Create Sample Data Table
    -- WARNING: This Query may run up to 2-10 minutes based on your system's resources
    INSERT INTO [dbo].[MySalesOrderDetail]
    SELECT S1.*
    FROM Sales.SalesOrderDetail S1
    GO 100
    Now let us do a quick performance test. I have kept STATISTICS IO ON for measuring how much IO the following queries take. In my test, first I will run the query which uses the regular index. We will note the IO usage of the query. After that we will create the columnstore index and measure the IO of the same.
    -- Performance Test
    -- Comparing Regular Index with ColumnStore Index
    USE AdventureWorks
    GO
    SET STATISTICS IO ON
    GO
    -- Select Table with regular Index
    SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
    SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
    FROM [dbo].[MySalesOrderDetail]
    GROUP BY ProductID
    ORDER BY ProductID
    GO
    -- Table 'MySalesOrderDetail'. Scan count 1, logical reads 342261, physical reads 0, read-ahead reads 0.
    -- Create ColumnStore Index
    CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
    ON [MySalesOrderDetail]
    (UnitPrice, OrderQty, ProductID)
    GO
    -- Select Table with Columnstore Index
    SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
    SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
    FROM [dbo].[MySalesOrderDetail]
    GROUP BY ProductID
    ORDER BY ProductID
    GO
    It is very clear from the results that the query performs extremely fast after creating the columnstore index. The number of pages it has to read to run the query is drastically reduced, as the columns needed in the query are stored in the same pages and the query does not have to go through every single page to read those columns. If we enable the execution plan and compare, we can see that the columnstore index performs way better than the regular index in this case. Let us clean up the database.
    -- Cleanup
    DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
    GO
    TRUNCATE TABLE dbo.MySalesOrderDetail
    GO
    DROP TABLE dbo.MySalesOrderDetail
    GO
    In future posts we will see cases where the columnstore index is not an appropriate solution, as well as a few other tricks and tips of the columnstore index. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Override an IOCTL Handler in PQOAL

    - by Kate Moss' Big Fan
    When porting or creating a BSP to a new platform, we often need to make change to OEMIoControl or HAL IOCTL handler for more specific. Since Microsoft introduced PQOAL in CE 5.0 and more and more BSP today leverages PQOAL to simplify the OAL, we no longer define the OEMIoControl directly. It is somehow analogous to migrate from pure Windows SDK to MFC; people starts to define those MFC handlers and forgot the WinMain and the big message loop. If you ever take a look at the interface between OAL and Kernel, PUBLIC\COMMON\OAK\INC\oemglobal.h, the pfnOEMIoctl is still there just as the entry point of Windows Program is WinMain since day one. (For those may argue about pfnOEMIoctl is not OEMIoControl, I will encourage you to dig into PRIVATE\WINCEOS\COREOS\NK\OEMMAIN\oemglobal.c which initialized pfnOEMIoctl to OEMIoControl. The interface is just to split OAL and Kernel which no longer linked to one executable file in CE 6, all of the function signature is still identical) So let's trace into PQOAL to realize how it implements OEMIoControl and how can we override an IOCTL handler we interest. First thing to know is the entry point (just as finding the WinMain in MFC), OEMIoControl is defined in PLATFORM\COMMON\SRC\COMMON\IOCTL\ioctl.c. Basically, it does nothing special but scan a pre-defined IOCTL table, g_oalIoCtlTable, and then execute the handler. (The highlight part) Other than that is just for error handling and the use of critical section to serialize the function. BOOL OEMIoControl(     DWORD code, VOID *pInBuffer, DWORD inSize, VOID *pOutBuffer, DWORD outSize,     DWORD *pOutSize ) {     BOOL rc = FALSE;     UINT32 i; ...     // Search the IOCTL table for the requested code.     for (i = 0; g_oalIoCtlTable[i].pfnHandler != NULL; i++) {         if (g_oalIoCtlTable[i].code == code) break;     }     // Indicate unsupported code     if (g_oalIoCtlTable[i].pfnHandler == NULL) {         NKSetLastError(ERROR_NOT_SUPPORTED);         OALMSG(OAL_IOCTL, (             L"OEMIoControl: Unsupported Code 0x%x - device 0x%04x func %d\r\n",             code, code >> 16, (code >> 2)&0x0FFF         ));         goto cleanUp;     }            // Take critical section if required (after postinit & no flag)     if (         g_ioctlState.postInit &&         (g_oalIoCtlTable[i].flags & OAL_IOCTL_FLAG_NOCS) == 0     ) {         // Take critical section                    EnterCriticalSection(&g_ioctlState.cs);     }     // Execute the handler     rc = g_oalIoCtlTable[i].pfnHandler(         code, pInBuffer, inSize, pOutBuffer, outSize, pOutSize     );     // Release critical section if it was taken above     if (         g_ioctlState.postInit &&         (g_oalIoCtlTable[i].flags & OAL_IOCTL_FLAG_NOCS) == 0     ) {         // Release critical section                    LeaveCriticalSection(&g_ioctlState.cs);     } cleanUp:     OALMSG(OAL_IOCTL&&OAL_FUNC, (L"-OEMIoControl(rc = %d)\r\n", rc ));     return rc; }   Where is the g_oalIoCtlTable? It is defined in your BSP. Let's use DeviceEmulator BSP as an example. The PLATFORM\DEVICEEMULATOR\SRC\OAL\OALLIB\ioctl.c defines the table as const OAL_IOCTL_HANDLER g_oalIoCtlTable[] = { #include "ioctl_tab.h" }; And that leads to PLATFORM\DEVICEEMULATOR\SRC\INC\ioctl_tab.h which defined some of IOCTL handler but others are defined in oal_ioctl_tab.h which is under PLATFORM\COMMON\SRC\INC\. Finally, we got the full table body! (Just like tracing MFC, always jumping back and forth). 
The format of table is very straight forward, IOCTL code, Flags and Handler Function // IOCTL CODE,                          Flags   Handler Function //------------------------------------------------------------------------------ { IOCTL_HAL_INITREGISTRY,                   0,  OALIoCtlHalInitRegistry     }, { IOCTL_HAL_INIT_RTC,                       0,  OALIoCtlHalInitRTC          }, { IOCTL_HAL_REBOOT,                         0,  OALIoCtlHalReboot           }, The PQOAL scans through the table until it find a matched IOCTL code, then invokes the handler function. Since it scans the table from the top which means if we define TWO handler with same IOCTL code, the first one is always invoked with no exception. Now back to the PLATFORM\DEVICEEMULATOR\SRC\INC\ioctl_tab.h, with the following table { IOCTL_HAL_INITREGISTRY,                   0,  OALIoCtlDeviceEmulatorHalInitRegistry     }, ... #include <oal_ioctl_tab.h> Note the IOCTL_HAL_INITREGISTRY handler are defined in both BSP's local ioctl_tab.h and the common oal_ioctl_tab.h, but due to BSP's local handler comes before "#include <oal_ioctl_tab.h>" so we know the OALIoCtlDeviceEmulatorHalInitRegistry always get called. In this example, the DeviceEmulator BSP overrides the IOCTL_HAL_INITREGISTRY handler from OALIoCtlHalInitRegistry to OALIoCtlDeviceEmulatorHalInitRegistry by manipulating the g_oalIoCtlTable table. (In some point of view, it is similar to message map in MFC) Please be aware, when you override an IOCTL handler in PQOAL, you may want to clone the original implementation to your BSP and change to meet your need. It is recommended and save you the redundant works but remember to rename the handler function (Just like the DeviceEmulator it changes the name of OALIoCtlHalInitRegistry to OALIoCtlDeviceEmulatorHalInitRegistry). If you don't change the name, linker may not be happy (due to name conflict) and the more important is by using different handler name, you could always redirect the handler back to original one. (It is like the concept of OOP that calling a function in base class; still not so clear? I am goinf to show you soon!) The OALIoCtlDeviceEmulatorHalInitRegistry setups DeviceEmulator specific registry settings and in the end, if everything goes well, it calls the OALIoCtlHalInitRegistry (PLATFORM\COMMON\SRC\COMMON\IOCTL\reginit.c) to do the rest.     if(fOk) {         fOk = OALIoCtlHalInitRegistry(code, pInpBuffer, inpSize, pOutBuffer,             outSize, pOutSize);     } Now you got the picture, whenever you want to override an IOCTL hadnler that is implemented in PQOAL just Clone the handler function to your BSP as a template. Simple name change for the handler function, and a name change in the IOCTL table header file that maps the IOCTL with the function Implement your IOCTL handler and whenever you need to redirect it back just calling the original handler function. It is the standard way of implementing a custom IOCTL and most Microsoft developers prefer. The mapping of IOCTL routine to IOCTL code is platform specific - you control the header file that does that mapping.

    Read the article

  • HLSL Pixel Shader that does palette swap

    - by derrace
    I have implemented a simple pixel shader which can replace a particular colour in a sprite with another colour. It looks something like this: sampler input : register(s0); float4 PixelShaderFunction(float2 coords: TEXCOORD0) : COLOR0 { float4 colour = tex2D(input, coords); if(colour.r == sourceColours[0].r && colour.g == sourceColours[0].g && colour.b == sourceColours[0].b) return targetColours[0]; return colour; } What I would like to do is have the function take in 2 textures, a default table, and a lookup table (both same dimensions). Grab the current pixel, and find the location XY (coords) of the matching RGB in the default table, and then substitute it with the colour found in the lookup table at XY. I have figured how to pass the Textures from C# into the function, but I am not sure how to find the coords in the default table by matching the colour. Could someone kindly assist? Thanks in advance.
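    Searching the default table per pixel inside the shader is expensive; the usual alternative (a different technique from the two-texture match described above) is to bake a palette index into the sprite, for example in the red channel, and sample the swap palette directly. A sketch, with the sampler registers assumed:

    sampler spriteSampler  : register(s0);   // sprite whose red channel holds a palette index
    sampler paletteSampler : register(s1);   // 1 x N strip of replacement colours

    float4 PixelShaderFunction(float2 coords : TEXCOORD0) : COLOR0
    {
        float4 src = tex2D(spriteSampler, coords);
        // Use the normalized index to look up the final colour in the palette strip.
        return tex2D(paletteSampler, float2(src.r, 0.5f));
    }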

    Read the article

  • Sorting and Filtering By Model-Based LOV Display Value

    - by Steven Davelaar
    If you use a model-based LOV and you use display type "choice", then ADF nicely displays the display value, even if the table is read-only. In the screen shot below, you see the RegionName attribute displayed instead of the RegionId. This is accomplished by the model-based LOV, I did not modify the Countries view object to include a join with Regions.  Also note the sort icon, the table is sorted by RegionId. This sorting typically results in a bug reported by your test team. Europe really shouldn't come before America when sorting ascending, right? To fix this, we could of course change the Countries view object query and add a join with the Regions table to include the RegionName attribute. If the table is updateable, we still need the choice list, so we need to move the model-based LOV from the RegionId attribute to the RegionName attribute and hide the RegionId attribute in the table. But that is a lot of work for such a simple requirement, in particular if we have lots of model-based choice lists in our view object. Fortunately, there is an easier way to do this, with some generic code in your view object base class that fixes this at once for all model-based choice lists that we have defined in our application. The trick is to override the method getSortCriteria() in the base view object class. By default, this method returns null because the sorting is done in the database through a SQL Order By clause. However, if the getSortCriteria method does return a sort criteria the framework will perform in memory sorting which is what we need to achieve sorting by region name. So, inside this method we need to evaluate the Order By clause, and if the order by column matches an attribute that has a model-based LOV choicelist defined with a display attribute that is different from the value attribute, we need to return a sort criterria. 
Here is the complete code of this method: public SortCriteria[] getSortCriteria() {   String orderBy = getOrderByClause();          if (orderBy!=null )   {     boolean descending = false;     if (orderBy.endsWith(" DESC"))      {       descending = true;       orderBy = orderBy.substring(0,orderBy.length()-5);     }     // extract column name, is part after the dot     int dotpos = orderBy.lastIndexOf(".");     String columnName = orderBy.substring(dotpos+1);     // loop over attributes and find matching attribute     AttributeDef orderByAttrDef = null;     for (AttributeDef attrDef : getAttributeDefs())     {       if (columnName.equals(attrDef.getColumnName()))       {         orderByAttrDef = attrDef;         break;       }     }     if (orderByAttrDef!=null && "choice".equals(orderByAttrDef.getProperty("CONTROLTYPE"))          && orderByAttrDef.getListBindingDef()!=null)     {       String orderbyAttr = orderByAttrDef.getName();       String[] displayAttrs = orderByAttrDef.getListBindingDef().getListDisplayAttrNames();       String[] listAttrs = orderByAttrDef.getListBindingDef().getListAttrNames();       // if first list display attributes is not the same as first list attribute, than the value       // displayed is different from the value copied back to the order by attribute, in which case we need to       // use our custom comparator       if (displayAttrs!=null && listAttrs!=null && displayAttrs.length>0 && !displayAttrs[0].equals(listAttrs[0]))       {                  SortCriteriaImpl sc1 = new SortCriteriaImpl(orderbyAttr, descending);         SortCriteria[] sc = new SortCriteriaImpl[]{sc1};         return sc;                           }     }     }   return super.getSortCriteria(); } If this method returns a sort criteria, then the framework will call the sort method on the view object. The sort method uses a Comparator object to determine the sequence in which the rows should be returned. This comparator is retrieved by calling the getRowComparator method on the view object. So, to ensure sorting by our display value, we need to override this method to return our custom comparator: public Comparator getRowComparator() {   return new LovDisplayAttributeRowComparator(getSortCriteria()); } The custom comparator class extends the default RowComparator class and overrides the method compareRows and looks up the choice display value to compare the two rows. The complete code of this class is included in the sample application.  With this code in place, clicking on the Region sort icon nicely sorts the countries by RegionName, as you can see below. When using the Query-By-Example table filter at the top of the table, you typically want to use the same choice list to filter the rows. One way to do that is documented in ADF code corner sample 16 - How To Customize the ADF Faces Table Filter.The solution in this sample is perfectly fine to use. This sample requires you to define a separate iterator binding and associated tree binding to populate the choice list in the table filter area using the af:iterator tag. You might be able to reuse the same LOV view object instance in this iterator binding that is used as view accessor for the model-bassed LOV. However, I have seen quite a few customers who have a generic LOV view object (mapped to one "refcodes" table) with the bind variable values set in the LOV view accessor. In such a scenario, some duplicate work is needed to get a dedicated view object instance with the correct bind variables that can be used in the iterator binding. 
Looking for ways to maximize reuse, wouldn't it be nice if we could just reuse our model-based LOV to populate this filter choice list? Well we can. Here are the basic steps: 1. Create an attribute list binding in the page definition that we can use to retrieve the list of SelectItems needed to populate the choice list <list StaticList="false" Uses="LOV_RegionId"               IterBinding="CountriesView1Iterator" id="RegionId"/>  We need this "current row" list binding because the implicit list binding used by the item in the table is not accessible outside a table row, we cannot use the expression #{row.bindings.RegionId} in the table filter facet. 2. Create a Map-style managed bean with the get method retrieving the list binding as key, and returning the list of SelectItems. To return this list, we take the list of selectItems contained by the list binding and replace the index number that is normally used as key value with the actual attribute value that is set by the choice list. Here is the code of the get method:  public Object get(Object key) {   if (key instanceof FacesCtrlListBinding)   {     // we need to cast to internal class FacesCtrlListBinding rather than JUCtrlListBinding to     // be able to call getItems method. To prevent this import, we could evaluate an EL expression     // to get the list of items     FacesCtrlListBinding lb = (FacesCtrlListBinding) key;     if (cachedFilterLists.containsKey(lb.getName()))     {       return cachedFilterLists.get(lb.getName());     }     List<SelectItem> items = (List<SelectItem>)lb.getItems();     if (items==null || items.size()==0)     {       return items;     }     List<SelectItem> newItems = new ArrayList<SelectItem>();     JUCtrlValueDef def = ((JUCtrlValueDef)lb.getDef());     String valueAttr = def.getFirstAttrName();     // the items list has an index number as value, we need to replace this with the actual     // value of the attribute that is copied back by the choice list     for (int i = 0; i < items.size(); i++)     {       SelectItem si = (SelectItem) items.get(i);       Object value = lb.getValueFromList(i);       if (value instanceof Row)       {         Row row = (Row) value;         si.setValue(row.getAttribute(valueAttr));                 }       else       {         // this is the "empty" row, set value to empty string so all rows will be returned         // as user no longer wants to filter on this attribute         si.setValue("");       }       newItems.add(si);     }     cachedFilterLists.put(lb.getName(), newItems);     return newItems;   }   return null; } Note that we added caching to speed up performance, and to handle the situation where table filters or search criteria are set such that no rows are retrieved in the table. When there are no rows, there is no current row and the getItems method on the list binding will return no items.  An alternative approach to create the list of SelectItems would be to retrieve the iterator binding from the list binding and loop over the rows in the iterator binding rowset. Then we wouldn't need the import of the ADF internal oracle.adfinternal.view.faces.model.binding.FacesCtrlListBinding class, but then we need to figure out the display attributes from the list binding definition, and possible separate them with a dash if multiple display attributes are defined in the LOV. Doable but less reuse and more work. 3. 
3. Inside the filter facet for the column, create an af:selectOneChoice with the value property of the f:selectItems tag referencing the get method of the managed bean:

<f:facet name="filter">
  <af:selectOneChoice id="soc0" autoSubmit="true"
                      value="#{vs.filterCriteria.RegionId}">
    <!-- attention: the RegionId list binding must be created manually in the page definition! -->
    <f:selectItems id="si0"
                   value="#{viewScope.TableFilterChoiceList[bindings.RegionId]}"/>
  </af:selectOneChoice>
</f:facet>

Note that the managed bean is defined in viewScope for the caching to take effect. Here is a screen shot of the table filter in action. You can download the sample application here.

    Read the article

  • Using Sql Server Change Data Capture with a frequently changing schema

    - by Pete
    We are looking into enabling SQL Server Change Data Capture for a new subsystem we are building. It's not really because we need it, but we are being pushed to provide complete history traceability, and CDC would nicely solve this requirement with minimum effort on our part. We are following an agile development process, which in this case means that we frequently make changes to the database schema, e.g. adding new columns, moving data to other columns, etc. We did a small test where we created a table, enabled CDC for that table, and then added a new column to the table. Changes to the new column are not registered in the CDC table. Is there a mechanism to update the CDC table to the new schema, and are there any best practices for how to deal with captured data when migrating the database schema?
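    Not part of the original question, but for context, here is a minimal T-SQL sketch of the kind of test described above; the dbo.Orders table, the Discount column and the capture instance names are hypothetical, and the database is assumed to already be CDC-enabled. CDC does not pick up a newly added column automatically; the usual pattern is to create a second capture instance that includes the new column and retire the old one once consumers have switched over:

    -- enable capture on the original table
    EXEC sys.sp_cdc_enable_table
         @source_schema    = N'dbo',
         @source_name      = N'Orders',
         @role_name        = NULL,
         @capture_instance = N'dbo_Orders_v1';

    -- schema change made later during the sprint
    ALTER TABLE dbo.Orders ADD Discount DECIMAL(9,2) NULL;

    -- changes to Discount are not captured by dbo_Orders_v1, so add a second
    -- capture instance that reflects the new schema (a table can have at most two)
    EXEC sys.sp_cdc_enable_table
         @source_schema    = N'dbo',
         @source_name      = N'Orders',
         @role_name        = NULL,
         @capture_instance = N'dbo_Orders_v2';

    -- once consumers have migrated to the new instance, drop the old one
    EXEC sys.sp_cdc_disable_table
         @source_schema    = N'dbo',
         @source_name      = N'Orders',
         @capture_instance = N'dbo_Orders_v1';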

    Read the article

  • Breaking 1NF to model subset constraints. Does this sound sane?

    - by Chris Travers
    My first question here. Apologies if it is in the wrong forum, but this seems pretty conceptual. I am looking at doing something that goes against conventional wisdom and want to get some feedback as to whether this is totally insane or will result in problems, so critique away! I am on PostgreSQL 9.1 but may be moving to 9.2 for this part of this project. To reiterate: does it seem sane to break 1NF in this way? I am not looking for debugging code so much as for where people see problems that this might lead to.

    The Problem

    In double entry accounting, financial transactions are journal entries with an arbitrary number of lines. Each line has either a left value (debit) or a right value (credit), which can be modelled as a single value with negatives as debits and positives as credits, or vice versa. The sum of all debits and credits must equal zero (so if we go with a single amount field, sum(amount) must equal zero for each financial journal entry). SQL-based databases, pretty much required for this sort of work, have no way to express this sort of constraint natively, and so any approach to enforcing it in the database seems rather complex.

    The Write Model

    The journal entries are append-only. There is a possibility we will add a delete model, but it will be subject to a different set of restrictions and so is not applicable here. If and when we allow deletes, we will probably do them using a simple ON DELETE CASCADE designation on the foreign key, and require that deletes go through a dedicated stored procedure which can enforce the other constraints. So inserts and selects have to be accommodated, but updates and deletes do not for this task.

    My Proposed Solution

    My proposed solution is to break first normal form and model constraints on arrays of tuples, with a trigger that breaks the rows out into another table.
    CREATE TABLE journal_line (
        entry_id bigserial primary key,
        account_id int not null references account(id),
        journal_entry_id bigint not null, -- adding references later
        amount numeric not null
    );

    I would then add "table methods" to extract debits and credits for reporting purposes:

    CREATE OR REPLACE FUNCTION debits(journal_line) RETURNS numeric
    LANGUAGE sql IMMUTABLE AS
    $$ SELECT CASE WHEN $1.amount < 0 THEN $1.amount * -1 ELSE NULL END; $$;

    CREATE OR REPLACE FUNCTION credits(journal_line) RETURNS numeric
    LANGUAGE sql IMMUTABLE AS
    $$ SELECT CASE WHEN $1.amount > 0 THEN $1.amount ELSE NULL END; $$;

    Then the journal entry table (simplified for this example):

    CREATE TABLE journal_entry (
        entry_id bigserial primary key, -- no natural keys :-(
        journal_id int not null references journal(id),
        date_posted date not null,
        reference text not null,
        description text not null,
        journal_lines journal_line[] not null
    );

    Then a table method and check constraints:

    CREATE OR REPLACE FUNCTION running_total(journal_entry) RETURNS numeric
    LANGUAGE sql IMMUTABLE AS
    $$ SELECT sum(amount) FROM unnest($1.journal_lines); $$;

    ALTER TABLE journal_entry
    ADD CONSTRAINT journal_entry_balanced CHECK ((journal_entry.running_total) = 0);

    ALTER TABLE journal_line
    ADD FOREIGN KEY (journal_entry_id) REFERENCES journal_entry(entry_id);

    And finally we'd have a breakout trigger:

    CREATE OR REPLACE FUNCTION je_breakout() RETURNS TRIGGER LANGUAGE plpgsql AS
    $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            INSERT INTO journal_line (journal_entry_id, account_id, amount)
            SELECT NEW.entry_id, account_id, amount FROM unnest(NEW.journal_lines);
            RETURN NEW;
        ELSE
            RAISE EXCEPTION 'Operation Not Allowed';
        END IF;
    END;
    $$;

    And finally:

    CREATE TRIGGER je_breakout_trg
    AFTER INSERT OR UPDATE OR DELETE ON journal_entry
    FOR EACH ROW EXECUTE PROCEDURE je_breakout();

    Of course the example above is simplified. There will be a status table that will track approval status, allowing for separation of duties, etc. However, the goal here is to prevent unbalanced transactions. Any feedback? Does this sound entirely insane?

    Standard Solutions?

    In getting to this point I have to say I have looked at four different current ERP solutions to this problem:

    1. Represent every line item as a debit and a credit against different accounts.
    2. Use foreign keys against the line item table to enforce an eventual running total of 0.
    3. Use constraint triggers in PostgreSQL.
    4. Force all validation solely through the app logic.

    My concerns are that #1 is pretty limiting and very hard to audit internally. It's not programmer-transparent, and so it strikes me as being difficult to work with in the future. The second strikes me as very complex, requiring a series of constraints and foreign keys against the table itself to make it work, and therefore hard to sort out, at least in my mind, and thus hard to work with. The fourth could be done, as we force all access through stored procedures anyway, and this is the most common solution (have the app total things up and throw an error otherwise). However, I think proof that a constraint is followed is superior to test cases, and so the question becomes whether this in fact generates insert anomalies rather than solving them. If this is a solved problem, it isn't the case that everyone agrees on the solution....
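    For concreteness (not part of the original question), here is a minimal sketch of what an insert against the proposed structure might look like; journal id 1, account ids 10 and 20 and the reference text are hypothetical, and the breakout trigger above would then copy the two lines into journal_line:

    INSERT INTO journal_entry
           (journal_id, date_posted, reference, description, journal_lines)
    VALUES (1, CURRENT_DATE, 'INV-1001', 'Office supplies',
            ARRAY[ ROW(NULL, 10, NULL, -150.00)::journal_line,    -- debit
                   ROW(NULL, 20, NULL,  150.00)::journal_line ]); -- credit

    The two amounts sum to zero, so the balance check passes; an unbalanced array would be rejected before the trigger fires.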

    Read the article

  • Developing Schema Compare for Oracle (Part 4): Script Configuration

    - by Simon Cooper
    If you've had a chance to play around with the Schema Compare for Oracle beta, you may have come across this screen in the synchronization wizard. This screen is one of the few screens that, along with the project configuration form, doesn't come from SQL Compare. It was designed to solve a couple of issues that, although not specific to Oracle, are much more of a problem there than on SQL Server: datatype conversions and NOT NULL columns.

    1. Datatype conversions

    SQL Server is generally quite forgiving when it comes to datatype conversions using ALTER TABLE. For example, you can convert from a VARCHAR to an INT using ALTER TABLE as long as all the character values are parsable as integers. Oracle, on the other hand, only allows ALTER TABLE conversions that don't change the internal data format. Essentially, every change that requires an actual datatype conversion has to be done using a rebuild with a conversion function. That's OK, as we can simply hard-code the various conversion functions for the valid datatype conversions and insert those into the rebuild SELECT list. However, as there always is with Oracle, there's a catch. Have a look at the NUMTODSINTERVAL function. As well as specifying the value (or column) to convert, you have to specify an interval_unit, which tells Oracle how to interpret the input number. We can't hardcode a default for this parameter, as it is entirely dependent on the user's data context! So, in order to convert NUMBER to INTERVAL DAY TO SECOND or INTERVAL YEAR TO MONTH, we need feedback from the user as to what to put in this parameter while we're generating the sync script. This requires a new step in the engine action/script generation to insert these values into the script, as well as new UI to allow the user to specify these values in a sensible fashion. In implementing the engine and UI infrastructure to allow this, it made much more sense to implement it for any rebuild datatype conversion, not just NUMBER to INTERVALs. For conversions which we can do, we pre-fill the 'value' box with the appropriate function from the documentation. The user can also type in arbitrary SQL expressions, which allows the user to specify optional format parameters for the relevant conversion functions, or indeed call their own functions to convert between values that don't have a built-in conversion defined. As the value gets inserted as-is into the rebuild SELECT list, any expression that is valid in that context can be specified as the conversion value.

    2. NOT NULL columns

    Another problem that is solved by the new step in the sync wizard is adding a NOT NULL column to a table. If the table contains data (as most database tables do), you can't just add a NOT NULL column, as Oracle doesn't know what value to put in the new column for existing rows - the DDL statement will fail. There are actually 3 separate scenarios for this problem that have separate solutions within the engine:

    Adding a NOT NULL column to a table without a rebuild
    Here, the workaround is to add a column default with an appropriate value to the column you're adding:

    ALTER TABLE tbl1 ADD newcol NUMBER DEFAULT <value> NOT NULL;

    Note, however, there is something to bear in mind about this solution; once specified on a column, a default cannot be removed. To 'remove' a default from a column you change it to have a default of NULL, hence there's code in the engine to treat a NULL default the same as no default at all.
    Adding a NOT NULL column to a table, where a separate change forced a table rebuild
    Fortunately, in this case, a column default is not required - we can simply insert the default value into the rebuild SELECT clause.

    Changing an existing NULL to a NOT NULL column
    To implement this, we run an UPDATE command before the ALTER TABLE to change all the NULLs in the column to the required default value.

    For all three, we need some way of allowing the user to specify a default value to use instead of NULL; as this is essentially the same problem as datatype conversion (inserting values into the sync script), we can re-use the UI and engine implementation of datatype conversion values. We also provide the option to alter the new column to allow NULLs, or to ignore the problem completely. Note that there is the same (long-running) problem in SQL Compare, but it is much more of an issue in Oracle as you cannot easily roll back executed DDL statements if the script fails at some point during execution. Furthermore, the engine of SQL Compare is far less conducive to inserting user-supplied values into the generated script. As we're writing the Schema Compare engine from scratch, we used what we learnt from the SQL Compare engine and designed it to be far more modular, which makes inserting procedures like this much easier.
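    To make the three scenarios concrete, here is a hedged Oracle SQL sketch; tbl1, newcol, the existing nullable column oldcol, the rebuild table tbl1_rebuild and the default value 0 are all hypothetical, not taken from the product:

    -- 1. adding a NOT NULL column without a rebuild: supply a default
    ALTER TABLE tbl1 ADD newcol NUMBER DEFAULT 0 NOT NULL;

    -- 2. adding it as part of a rebuild: no default needed, the value goes
    --    straight into the rebuild SELECT list
    INSERT INTO tbl1_rebuild (oldcol, newcol)
    SELECT oldcol, 0 FROM tbl1;

    -- 3. changing an existing nullable column to NOT NULL: backfill, then alter
    UPDATE tbl1 SET oldcol = 0 WHERE oldcol IS NULL;
    ALTER TABLE tbl1 MODIFY oldcol NOT NULL;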

    Read the article

  • SQL SERVER – Merge Two Columns into a Single Column

    - by Pinal Dave
    Here is a question which I received from a user yesterday. Hi Pinal, I want to build a query in SQL Server that merges two columns of a table. If I have two columns like this,

    Column1 | Column2
    1       | 5
    2       | 6
    3       | 7
    4       | 8

    I want the output to look like this,

    Column1
    1
    2
    3
    4
    5
    6
    7
    8

    It is a good question. Here is how we can achieve the task. I am making the assumption that both columns contain different data and there are no duplicates.

    USE TempDB
    GO
    CREATE TABLE TestTable (Col1 INT, Col2 INT)
    GO
    INSERT INTO TestTable (Col1, Col2)
    SELECT 1, 5
    UNION ALL
    SELECT 2, 6
    UNION ALL
    SELECT 3, 7
    UNION ALL
    SELECT 4, 8
    GO
    SELECT Col1 FROM TestTable
    UNION
    SELECT Col2 FROM TestTable
    GO
    DROP TABLE TestTable
    GO

    Here is the original table. Here is the result table. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
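    One caveat worth adding, not from the original post: UNION removes duplicate values, so if the two columns could share values that should all be kept, the duplicate-preserving variant would be:

    SELECT Col1 FROM TestTable
    UNION ALL
    SELECT Col2 FROM TestTable
    GO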

    Read the article

  • How to configure DD-WRT routing table when creating an isolated network segment for PCI C VT compliance

    - by tetranz
    I'm the volunteer support and system admin person at a small private school. We need to set up a PCI-compliant Windows PC as a virtual terminal for credit card processing. I've read questionnaire SAQ C-VT and, to quote, this computer needs to be accessed: "via a computer that is isolated in a single location, and is not connected to other locations or systems within your environment (this can be achieved via a firewall or network segmentation to isolate the computer from other systems)" Our setup is as follows: the DSL modem from the ISP is set up to be a "transparent pipe" with no extra services. That goes into the WAN port of a Linksys WRT54-GL running DD-WRT. The LAN is 192.168.1.x. There are a couple of other WRT54-GL / DD-WRT devices. One is used as a wireless AP and another is a client bridge. To isolate the VT (virtual terminal) machine, I have another DD-WRT device. Its WAN is connected to a port on the 192.168.1.x LAN. The virtual terminal machine is connected to its LAN, which is at 192.168.10.x. The SPI firewall etc. is turned on. It's basically the default DD-WRT gateway setup where the "ISP" is our own LAN. That's working. All incoming traffic to the VT machine is blocked, including from our own LAN. The VT can access the internet BUT, and here's the problem, it can also ping any of the computers on the 192.168.1.x LAN. I think I need to stop that. I'm guessing that I could do something with the Static Routing table in the VT machine's DD-WRT device. I need to route anything going to 192.168.1.x, other than the gateway at 192.168.1.1, to 0.0.0.0 or something like that. That's where I'm stuck; that's the end of my knowledge. Or... do I need to get yet another DD-WRT so the network is "balanced"? Maybe I need to have the internet from the DSL going into a DD-WRT which has only two devices on its LAN, i.e., two other DD-WRTs, one for the main LAN and one for the VT. I think that would do, but I'd like to avoid the extra cost and complexity if I don't need it. Thanks

    Read the article

  • What is the best database design for managing historical information? [closed]

    - by Emmad Kareem
    Say you have a Person table with columns such as ID, FirstName, LastName, BirthCountry, etc., and you want to keep track of changes to such a table. For example, the user may want to see previous names of a person, previous addresses, etc. The normalized way is to keep names in a separate table, addresses in a separate table, etc., and the main Person table will contain only the information whose changes you are not interested in monitoring (such information will be updated in place). The problem I see here, aside from the coding hassle due to the extensive number of joins required in a real-life situation, is that I have never seen this type of design in any real application (maybe because most did not provide this feature!). So, is there a better way to design this? Thanks.
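    For illustration only (not part of the question), a minimal sketch of the normalized history approach described above, using a validity period so previous names remain queryable; all table and column names are hypothetical:

    CREATE TABLE Person (
        ID INT PRIMARY KEY,
        BirthCountry VARCHAR(50)
    );

    CREATE TABLE PersonName (
        PersonID  INT NOT NULL REFERENCES Person(ID),
        FirstName VARCHAR(50) NOT NULL,
        LastName  VARCHAR(50) NOT NULL,
        ValidFrom DATE NOT NULL,
        ValidTo   DATE NULL,            -- NULL means this is the current name
        PRIMARY KEY (PersonID, ValidFrom)
    );

    -- current name of person 1
    SELECT FirstName, LastName
    FROM PersonName
    WHERE PersonID = 1 AND ValidTo IS NULL;

    The same pattern would repeat for addresses and any other attribute whose history matters.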

    Read the article

  • How to find classes that use certain DB tables

    - by Songo
    Problem: I'm asked to prepare a document where all our DB tables are listed, and I'm supposed to list all Controllers that use these DB tables for read operations and, in another list, all Controllers that perform write operations. Ex:

    +-------------------------------------------+------------+
    | DB table                                  | tbl_Orders |
    +-------------------------------------------+------------+
    | Controllers that perform read operations  | ??         |
    +-------------------------------------------+------------+
    | Controllers that perform write operations | ??         |
    +-------------------------------------------+------------+

    We are trying to write some documentation for a legacy system built using the Zend Framework. The code is scattered everywhere. There is code in the Controllers, in the models and even in the views. The application uses PROPEL as an ORM. What makes this really difficult is that a Controller may not be directly calling the table; it may be instantiating a model class that calls that table. Is there an educated way to approach this crazy task? Note: Searching for the table name won't provide a solution, because if a model uses that table I wouldn't know which Controller is using that model.

    Read the article

  • ?????? ??????????! ?Gold???? vol.5 <??>

    - by M.Morozumi
    This is a quiz question for ORACLE MASTER Gold Oracle Database 11g. Try to answer the question below.
    -------------------------------
    The FLASHBACK ( ) statement can return a table to a Point-in-Time state and can also recover a table that was removed with DROP TABLE. Which one of the following belongs in ( )? Choose one.
    a. DATABASE
    b. TABLE
    c. COLUMN
    d. None of the above
    -------------------------------
    Answer: b. TABLE
    Explanation: The FLASHBACK TABLE statement can return a table to a Point-in-Time state and can recover a table that was removed with DROP TABLE. There is no FLASHBACK COLUMN statement.
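    For illustration (not part of the original quiz), the two FLASHBACK TABLE forms referred to in the answer; the employees table and the 15-minute window are hypothetical:

    -- recover a table dropped with DROP TABLE (from the recycle bin)
    FLASHBACK TABLE employees TO BEFORE DROP;

    -- return a table to an earlier Point-in-Time state
    ALTER TABLE employees ENABLE ROW MOVEMENT;
    FLASHBACK TABLE employees TO TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE;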

    Read the article

  • how to extract data from excel (apache poi) to put it in mysql table using jsp? [ SOLVED]

    - by Nihad KH
    I want to extract data from an Excel sheet and insert it into a MySQL table using JSP. So far I've done this, and it's printing the data to the output (using Apache POI). What should I add to this code?

    Output:

    Name      Age  Adress
    Mark      35   New york,AA
    Elise     22   India,bb
    Charlotte 45   France,cc

    Readexcel.jsp:

    <%@page import="java.sql.Statement"%>
    <%@page import="java.util.ArrayList"%>
    <%@page import="java.sql.PreparedStatement"%>
    <%@page import="java.sql.Connection"%>
    <%@page import="java.util.Date"%>
    <%@page import="org.apache.poi.ss.usermodel.Cell"%>
    <%@page import="org.apache.poi.ss.usermodel.Row"%>
    <%@page import="org.apache.poi.xssf.usermodel.XSSFSheet"%>
    <%@page import="org.apache.poi.xssf.usermodel.XSSFWorkbook"%>
    <%@page import="java.io.File"%>
    <%@page import="org.apache.commons.io.FilenameUtils"%>
    <%@page import="org.apache.commons.fileupload.FileItem"%>
    <%@page import="java.util.Iterator"%>
    <%@page import="java.util.List"%>
    <%@page import="org.apache.commons.fileupload.servlet.ServletFileUpload"%>
    <%@page import="org.apache.commons.fileupload.disk.DiskFileItemFactory"%>
    <%@page import="org.apache.commons.fileupload.FileItemFactory"%>
    <%@page contentType="text/html" pageEncoding="UTF-8"%>
    <!DOCTYPE html>
    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <title>PRINT DATA FROM EXCEL FILE</title>
    </head>
    <body>
    <%
    try {
      boolean ismultipart = ServletFileUpload.isMultipartContent(request);
      if (!ismultipart) {
      } else {
        FileItemFactory factory = new DiskFileItemFactory();
        ServletFileUpload upload = new ServletFileUpload(factory);
        List items = null;
        try {
          items = upload.parseRequest(request);
        } catch (Exception e) {
        }
        Iterator itr = items.iterator();
        while (itr.hasNext()) {
          FileItem item = (FileItem) itr.next();
          if (item.isFormField()) {
          } else {
            String itemname = item.getName();
            if ((itemname == null || itemname.equals(""))) {
              continue;
            }
            String filename = FilenameUtils.getName(itemname);
            File f = checkExist(filename);
            item.write(f);
            try {
              XSSFWorkbook workbook = new XSSFWorkbook(item.getInputStream());
              XSSFSheet sheet = workbook.getSheetAt(0);
              Iterator<Row> rowIterator = sheet.iterator();
              while (rowIterator.hasNext()) {
                Row row = rowIterator.next();
                Iterator<Cell> cellIterator = row.cellIterator();
                while (cellIterator.hasNext()) {
                  Cell cell = cellIterator.next();
                  switch (cell.getCellType()) {
                    case Cell.CELL_TYPE_NUMERIC:
                      out.print(cell.getNumericCellValue() + "\t");
                      break;
                    case Cell.CELL_TYPE_STRING:
                      out.print(cell.getStringCellValue() + "\t");
                      break;
                  }
                }
                out.println("");
              }
            } catch (Exception e) {
              e.printStackTrace();
            }
          }
        }
      }
    } catch (Exception e) {
    } finally {
      out.close();
    }
    %>
    <%!
    private File checkExist(String fileName) {
      String saveFile = "D:/upload/";
      File f = new File(saveFile + "/" + fileName);
      if (f.exists()) {
        StringBuffer sb = new StringBuffer(fileName);
        sb.insert(sb.lastIndexOf("."), "-" + new Date().getTime());
        f = new File(saveFile + "/" + sb.toString());
      }
      return f;
    }
    %>
    </body>
    </html>

    I've created a table in my database named EXCELDATA with the header of the Excel sheet: ExcelData (Name varchar(50), age int, adress varchar(50)); What should I add to this code to get the data from the Excel sheet into the MySQL table?
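    Not a full answer, but on the database side the missing piece is just a parameterized insert into the table described above; a sketch of the DDL and of the statement the JSP would need to execute once per data row, with the Name, Age and Adress cell values bound to the placeholders:

    CREATE TABLE ExcelData (
        Name   VARCHAR(50),
        age    INT,
        adress VARCHAR(50)
    );

    -- executed for each data row read from the sheet (skipping the header row)
    INSERT INTO ExcelData (Name, age, adress) VALUES (?, ?, ?);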

    Read the article
