Search Results

Search found 41147 results on 1646 pages for 'database security'.


  • Exadata X3 In-Memory Database Machine: To be or not to be

    - by Luis Moreno Campos
    Since Larry Ellison announced Oracle Exadata X3 as the new generation of the Database Machine, he established the product in the In-Memory Database arena. And that annoyed some people. We all know that In-Memory Databases are the ones that *only* execute in memory and use the other layers of storage for persistency (mainly disk). Oracle database has always been a technology that uses memory as a caching mechanism and that hasn't change nor it will change with Oracle Database 12c. So this is the central point of fuss when it comes to announcing an Engineered Systems as In-Memory Database, when in fact it still runs Oracle Database, not vanilla but still the same product. Let me tell you purist people out there: when you find no new ground breaking point to get all excited about you decide to bash it, and go against its claims. It's not like a car manufacturer that launches a mini-van in the market and calls it a Sports Car, we are talking about a fundamental change in the ILM stack: level 2 of caching is now self sufficient. It's not DRAM? Who cares, still let's you put in flash amounts of data not done up until now, so I guess Oracle can name it whatever Larry wants because in the end it's something never done before. Now let's imagine that you hop on the pure In-Memory Database bandwagon. You would be stuck with a database technology that lags behind the Oracle Database hundreds of light years in man/hours innovations and features. Do you really want to travel back in time? Remember, the first rule about time travelling is that "Security is not Guaranteed". Your choice. LMC

    Read the article

  • Database commands

    - by user12609425
    Ops Center has two database options: you can have Ops Center automatically install a database on the Enterprise Controller system, or you can use your own database on any system you choose. If you use your own database, it's obviously important to make sure that this database is running smoothly. You have a few tools that can help you do this. The first is the ecadm command. This command has a variety of subcommands that let you view and control the status of the Enterprise Controller. Two subcommands in particular are relevant to the database:

    - ecadm verify-db: This subcommand verifies that the database is reachable and that the schemas are configured with the proper permissions. Use the -v option if you want more details; the command is normally terse if the DB is configured correctly.
    - ecadm sqlplus -r: This subcommand opens a SQL*Plus console connection to the database. The -r option makes this console read-only, which isn't necessary, but is generally a good idea.

    You can also view the database contents using Oracle SQL Developer or other tools. The Accessing Core Product Data how-to describes this process.

    Read the article

  • Oracle Database 11gR2: final patch set 11.2.0.4 now available

    - by Yusuke Yamamoto
    In August 2013, patch set 11.2.0.4 for Oracle Database 11g Release 2 became available for Linux x86-64, followed by Microsoft Windows x64 in October 2013. A Patch Set Release (PSR) is a full installation of Oracle Database, so 11.2.0.4 can be installed directly without first installing an earlier release. Point 1: the final PSR. The 11.2.0.4 release is the final Patch Set Release for Oracle Database 11g Release 2. Point 2: features backported from 12c. Some capabilities introduced with Oracle Database 12c, notably Oracle Data Redaction, are also available in this PSR; for details, see "Oracle Database 11g Release 2 PSR 11.2.0.4" and "Oracle Database 11g Release 2 (11.2.0.4) New Features". Point 3: Windows 8 / Windows Server 2012 support. As with Oracle Database 12c, this release supports Windows 8 and Windows Server 2012; see "Statement of Direction: Oracle Database on Microsoft Windows 8 and Windows Server 2012". For the release schedule of current database releases, see My Oracle Support, "Release Schedule of Current Database Releases" [Doc ID 742060.1]. Related: the Oracle Database 11g Release 2 (11.2.0.4) Real Application Clusters installation guide for Linux x86-64.

    Read the article

  • Unable to back up SQL Server databases using a maintenance plan

    - by Stephen Jennings
    I am trying to create a maintenance plan that will run automatically and back up my SQL Server 2005 databases. I create a new maintenance plan and add a "Back Up Database Task", select all databases, and choose a path to back up to. When I save and try to execute this plan, I get the following error message:

    ===================================
    Execution failed. See the maintenance plan and SQL Server Agent job history logs for details.
    ===================================
    Job 'Backup.Subplan_1' failed. (SqlManagerUI)
    ------------------------------
    Program Location:
    at Microsoft.SqlServer.Management.SqlManagerUI.MaintenancePlanMenu_Run.PerformActions()

    I've checked the maintenance plan log, the agent log, and just about every log file I can find, and there are no entries at all to help me figure out why this is failing. If I right-click on a specific database and select "Back Up", the task succeeds. I tried changing the plan to back up just that one database and it still failed. I've tried running the plan with both Windows authentication and SQL Server authentication with the sa account. I also tried specifically granting the SQL Server Agent user account full privileges on the backup folder, but it still failed. While searching the web for clues, the only solution I've run across so far suggests running sp_configure 'allow_update', 0. I tried this but allow_update was already set to 0 and it did not fix the problem. The Windows server and SQL Server have all updates applied to them. Thanks for any suggestions!

    Read the article

  • Creating a secure SQL Server 2008 database environment

    - by user279521
    I am in the process of setting up a corporate SQL Server 2008 database. The data on this machine will be related to financial services. Traffic will be low (not like your average investment broker's website), but a secure data environment is crucial. What would I need to know or do in order to ensure that I have a secure database?
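    Not from the question itself, but as a concrete starting point, here is a minimal T-SQL sketch of two basics, surface-area reduction and least privilege (the login, user, and database names are hypothetical):

        -- Reduce surface area: keep dangerous features off (off by default in 2008).
        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'xp_cmdshell', 0; RECONFIGURE;

        -- Least privilege: give the application login a narrow role, not db_owner.
        CREATE LOGIN FinanceAppLogin WITH PASSWORD = 'UseAStrongPasswordHere!1';
        USE FinanceDB;
        CREATE USER FinanceAppUser FOR LOGIN FinanceAppLogin;
        EXEC sp_addrolemember 'db_datareader', 'FinanceAppUser';

    Beyond this sketch: prefer Windows authentication where possible, consider Transparent Data Encryption for data at rest, and look at SQL Server 2008's built-in audit feature for change tracking.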

    Read the article

  • SQL Server authentication - limit access to database to only connect through application

    - by Mauro
    I have a database whose data users should not be able to alter unless they use a specific app. I know best practice is to use Windows authentication, but that would mean users could connect to the database using any other data-enabled app and change values, which would then not be audited. Unfortunately SQL Server 2008, with its built-in auditing, is not available to us. Any ideas how to ensure that users cannot change anything unless it's through the controlling app?
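    One common approach, offered here as a suggestion rather than the asker's own method, is a SQL Server application role (SQL Server 2005 and later): users get no direct permissions on the tables, and only the application, which knows the role password, activates the role after connecting. A minimal sketch with hypothetical names:

        -- Users get no direct rights on the tables; the application role
        -- carries the real permissions (role, table, and password are hypothetical).
        CREATE APPLICATION ROLE OrderAppRole WITH PASSWORD = 'Str0ng!AppR0lePwd';
        GRANT SELECT, INSERT, UPDATE ON dbo.Orders TO OrderAppRole;

        -- The application activates the role right after opening its connection:
        EXEC sp_setapprole 'OrderAppRole', 'Str0ng!AppR0lePwd';
        -- From here on, the session acts with the role's permissions,
        -- regardless of which Windows user is connected.

    Users connecting with Excel, Access, or any other ad-hoc tool never activate the role, so they can read nothing and change nothing directly.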

    Read the article

  • Is the master database backup crucial for restoring MS SQL server in the event where you have to res

    - by Imagineer
    I have been advised by Commvault partner support to turn off the backup of the master database, as the backup failed due to the log file being locked. The following is the advice given:

    "The message is caused by Commvault's inability to backup the master database's transaction log. If this is happening intermittently its possible that something is locking the transaction log, preventing SQL iData agent from accessing the log. Typically the master database is just a template and is not used by any applications (applications that do require the use of an SQL database create their own) so there should be no harm in preventing it from being backed up. You can do this by nominating NOT to back it up in the primary copy for the SQL data agent"

    The following is the error that I get:

    sqlxx SQL Server/ SQLxx N/A/ System DBs 19856* (CWE) Transaction Log N/A 01/08/2010 19:00:16 (01/08/2010 19:00:18) 01/08/2010 19:03:15 (01/08/2010 19:03:14) 1.44 MB 0:01:11 0.071 2 0 1 ITD014L2

    Failure Reason:
    - ERROR CODE [30:325]: Error encountered during backup. Error: [ERROR: [Microsoft][ODBC SQL Server Driver][SQL Server]Cannot back up the log of the master database. Use BACKUP DATABASE instead. [Microsoft][ODBC SQL Server Driver][SQL Server]BACKUP LOG is terminating abnormally.]

    Job Options: Create new index, Start new media, Backup all subclients, Truncation Log, Follow mount points, Backup files protected by system file protection, Stop DHCP service when backing up system state data, Stop WINS service when backing up system state data

    Associated Events:
    - 79714 [backupxx/JobManager] [01/08/2010 19:03:15]: Backup job [19856] completed. Client [sqlxx], Agent Type [SQL Server], Subclient [System DBs], Backup Level [Transaction Log], Objects [2], Failed [1], Duration [00:02:59], Total Size [1.44 MB], Media or Mount Path Used [ITD014L2].
    - 79712 [sqlxx/SQLiDA] [01/08/2010 19:01:53]: Error encountered during backup. Error: [ERROR: [Microsoft][ODBC SQL Server Driver][SQL Server]Cannot back up the log of the master database. Use BACKUP DATABASE instead. [Microsoft][ODBC SQL Server Driver][SQL Server]BACKUP LOG is terminating abnormally.]
    - 79711 [sqlxx/SQLiDA] [01/08/2010 19:01:51]: Query Result [[Microsoft][ODBC SQL Server Driver][SQL Server]Cannot back up the log of the master database. Use BACKUP DATABASE instead. [Microsoft][ODBC SQL Server Driver][SQL Server]BACKUP LOG is terminating abnormally.].
    - 79707 [backupxx/JobManager] [01/08/2010 19:00:15]: New backup request received for Client [sqlxx], iDataAgent [SQL Server], Instance [SQLxx], Subclient [System DBs], Backup Level [Transaction Log].

    Files failed to back up:
    - Backup Database[master] Failed

    Please advise, thank you.
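    For what it's worth, the error message itself points at the fix: master uses the simple recovery model and supports only full database backups, so a transaction-log backup of master will always fail. A minimal sketch of a full backup (the path is hypothetical):

        -- master supports only full backups; log backups will always fail.
        BACKUP DATABASE master
        TO DISK = N'D:\Backups\master_full.bak'
        WITH INIT;

    Rather than skipping master entirely, configuring the backup product to take full (not log) backups of the system databases keeps a restorable copy of server-wide metadata such as logins and configuration.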

    Read the article

  • SharePoint: Can't Connect to a Configuration Database When Using Configuration Wizard

    - by Denis
    Hello everyone. I wonder if you could help me with the following problem. We have a SharePoint farm consisting of two servers behind an NLB (a load balancer), a database server, and an index server (4 servers in total). The issues initially appeared when we were trying to change Search settings via the Shared Services Provider and an error appeared with a message I can't even remember now. To fix that problem, we decided to restart the Search services on the Index server via Central Administration... and the process got stuck with the status "stopping" for several days. We found a few solutions on the forums, but nothing helped. Then we tried to exclude the Index server from the farm, but yet again, the Search services would not stop. However, after a week we noticed that the services had finally stopped. That was a prologue. So, after the Search services finally stopped, we decided to return the Index server to the farm via the SharePoint Configuration Wizard. When we click the button "Search database instances" (I don't know the exact name in English, we have a Russian version), the correct database name and user automatically appear in the relevant fields. Then we supply a password for the user and click "next" two times, and the configuration begins. Unfortunately, at the second stage we receive the error "Could not connect to a configuration database", with this exception in the log file: "Exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation." Now we can't figure out why the configuration wizard can't connect to the configuration database. Any suggestions will be really appreciated.

    Read the article

  • Advice on database design / SQL for retrieving data with chronological order

    - by Remnant
    I am creating a database that will help keep track of which employees have been on a certain training course. I would like to get some guidance on the best way to design the database. Specifically, each employee must attend the training course each year, and my database needs to keep a history of all the dates on which they have attended the course in the past. The end user will use the software as a planning tool to help them book future course dates for employees. When they select a given employee they will see: (a) last attendance date; (b) projected future attendance date (i.e. last attendance date + 1 calendar year). In terms of my database, any given employee may have multiple past course attendance dates:

    EmpName     AttendanceDate
    Joe Bloggs  1st Jan 2007
    Joe Bloggs  4th Jan 2008
    Joe Bloggs  3rd Jan 2009
    Joe Bloggs  8th Jan 2010

    My question is: what is the best way to set up the database to make it easy to retrieve the most recent course attendance date? In the example above, the most recent would be 8th Jan 2010. Is there a good way to use SQL to sort by date and pick the MAX date? My other idea was to add a column called 'MostRecent' and just set this to TRUE:

    EmpName     AttendanceDate  MostRecent
    Joe Bloggs  1st Jan 2007    False
    Joe Bloggs  4th Jan 2008    False
    Joe Bloggs  3rd Jan 2009    False
    Joe Bloggs  8th Jan 2010    True

    I wondered if this would simplify the SQL, i.e. SELECT ... WHERE MostRecent = 'TRUE'. Also, when the user updates a given employee's attendance record (i.e. with the latest attendance date) I could use SQL to: search for the employee and set the MostRecent value to FALSE; then add a new record with MostRecent set to TRUE. Would anybody recommend either method over the other? Or do you have a completely different way of solving this problem?
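    A minimal sketch of the MAX-based approach the question asks about, assuming an Attendance table with EmpName and AttendanceDate columns and the date stored as a real DATE type rather than text (DATEADD is SQL Server syntax; other engines spell the date arithmetic differently):

        -- Most recent attendance per employee; no MostRecent flag needed.
        SELECT EmpName,
               MAX(AttendanceDate) AS LastAttendance
        FROM Attendance
        GROUP BY EmpName;

        -- Projected next attendance = last attendance + 1 calendar year.
        SELECT EmpName,
               MAX(AttendanceDate) AS LastAttendance,
               DATEADD(year, 1, MAX(AttendanceDate)) AS ProjectedNext
        FROM Attendance
        GROUP BY EmpName;

    Because the query derives "most recent" on the fly, there is no flag column to keep in sync on every insert, which removes the failure mode where the flag and the data disagree.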

    Read the article

  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo

    Talks covered below:
    - Introduction to Toorcon 15 (2013)
    - A Tale of One Software Bypass of MS Windows 8 Secure Boot
    - Breaching SSL, One Byte at a Time
    - Running at 99%: Surviving an Application DoS
    - Security Response in the Age of Mass Customized Attacks
    - x86 Rewriting: Defeating RoP and other Shinanighans
    - Clowntown Express: interesting bugs and running a bug bounty program
    - Active Fingerprinting of Encrypted VPNs
    - Making Attacks Go Backwards
    - Mask Your Checksums—The Gorry Details
    - Adventures with weird machines thirty years after "Reflections on Trusting Trust"

    Introduction to Toorcon 15 (2013)
    Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here, starting in 2003. As always, I've only summarized the talks I attended and that interested me enough to write about. Be aware that I may have misrepresented the speakers' remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information.

    A Tale of One Software Bypass of MS Windows 8 Secure Boot
    Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak

    Yuri and Alex talked about UEFI, bootkits, and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference.

    MS Windows 8 Secure Boot overview: UEFI (Unified Extensible Firmware Interface) is the interface between hardware and OS; it is processor- and architecture-independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi), and once it is replaced the malware can modify the kernel; replacing the bootloader is trivial. Today there are many legacy bootkits, and UEFI replaces most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed updates). You would think Secure Boot would rely on ROM (such as is used for phones), but you can't do that for PCs; PCs use writable memory with signatures. The DXE core verifies the UEFI boot loader(s), and the OS loader (winload.efi, winresume.efi) verifies the OS kernel. A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx): X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) variables allow adding/deleting keys (they can't be accessed once the OS starts, which uses Run-Time (RT) services). The root cert uses RSA-2048 public keys and PKCS#7 format signatures. Relevant variables: SecureBoot (enable/disable image signature checks), SetupMode (update keys, self-signed keys, and secure boot variables), CustomMode (allows updating keys). Secure Boot policy settings are: always execute; never execute; allow execute on security violation; defer execute on security violation; deny execute on security violation; query user on security violation.

    Attacking MS Windows 8 Secure Boot: Secure Boot does NOT protect from physical access and can be disabled from the console. Each BIOS vendor implements Secure Boot differently, and there are several platform and BIOS vendors, so it becomes a "zoo" of implementations, which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly.
    What vendors must get right:
    - allow only signed UEFI firmware updates
    - protect UEFI firmware from direct modification in flash memory
    - protect firmware update components
    - program the SPI controller securely
    - protect secure boot policy settings in NVRAM
    - protect the runtime API
    - disable the Compatibility Support Module, which allows unsigned legacy boot

    An attacker can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If the PK is not found, the firmware enters setup mode with secure boot turned off. A TPM can be exploited in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, but they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates; protect UEFI firmware in ROM; protect the EFI variable store in ROM.

    Breaching SSL, One Byte at a Time
    Angelo Prado and Yoel Gluck, Salesforce.com

    CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME sends requests with every possible character and measures the ciphertext length, looking for the plaintext which compresses the most, and so finds the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy, and Huffman coding replaces common byte sequences with shorter codes. US-CERT thought the SSL compression problem was fixed, but it wasn't; they convinced CERT that it wasn't fixed and a CVE was issued.

    BREACH, breachattack.com
    BREACH exploits the SSL response body (Accept-Encoding, Content-Encoding). It takes advantage of the fact that the response body is still compressed at the HTTP level (gzip), even after TLS-level compression is disabled, and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say, from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret. Compression can be used to guess contents one byte at a time. For example, "Supersecret SupersecreX" (a wrong guess) compresses by 10 bytes, while "Supersecret Supersecret" (a correct guess) compresses by 11 bytes, so each character can be found by guessing every character. To start the guess, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include:
    - lookahead: guess 2 or 3 characters at a time instead of 1 (expensive)
    - rollback to the last known conflict
    - check the compression ratio
    - brute-force the first 3 "bootstrap" characters, if needed (expensive)
    - block ciphers hide the exact plaintext length; the solution is to align the response to the block size in advance

    Mitigations:
    - length: use variable padding
    - secrets: dynamic CSRF tokens per request
    - secrets: change over time
    - separate secrets into input-less servlets

    Future work: better understanding of DEFLATE/GZIP, and HTTPS extensions.

    Running at 99%: Surviving an Application DoS
    Ryan Huber, Risk I/O

    Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets, or to download large files.
    Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use Apache alternatives such as nginx.

    How to identify malicious hosts:
    - short, sudden web requests
    - the user-agent is obvious (curl, python)
    - the same URL requested repeatedly
    - no web page referer (not normal)
    - hidden links: hide a link and see if a bot follows it
    - restricted access if not your geo IP (unless the website is global)
    - missing common headers in the request
    - regular timing
    - IP first seen at the beginning of the attack
    - count of requests per host (usually a very large number)

    Use of a captcha can mitigate attacks, but you'll lose a lot of genuine users.

    Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer
    Bouncer is software, written by Ryan, that works from netflow data. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it is not nice about it, with no proper connection close). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers so Bouncer gets clean timing data. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, consumer library, multitasking, logging of destroyed connections. Takeaways: DoS mitigation is easier with a complete picture; Bouncer is designed to make it easier to detect and defend against DoS, not to be a complete cure.

    Security Response in the Age of Mass Customized Attacks
    Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/

    Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School: differentiating a product for an individual customer, but at a mass-production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world; or you design your own PC from commodity parts. Exploit kits are another example of mass customization: the kits support multiple browsers and plugins and allow new modules. Exploit kits are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits.

    Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now, so the drive with malware is toward mass-customized exploits, to avoid detection. There's plenty of evidence that exploit development has Project Manager bureaucracy. From the malware they infer edicts to: support all versions of Reader; support all versions of Windows; support all versions of Flash; support all browsers; write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example). Exploits have "loose coupling" across multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software, OSes, browsers, and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs.
    However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.

    x86 Rewriting: Defeating RoP and other Shinanighans
    Richard Wartell

    The attack vector being addressed here: first, some malware causes a buffer overflow. The malware has no program access, but it has input access, and the overflow puts code onto the stack. Later the stack became non-executable; the workaround malware used was to write a bogus return address to the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization) to randomize the memory layout and make addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways. "RoP" stands for Return-oriented Programming attacks. RoP attacks use your own code, writing return addresses on the stack that point to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is anti-RoP compilers that compile source code with NO return instructions. ASLR randomizes the address space layout, but not the "gadgets" within it. IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine.

    Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code; the STIR disassembler is conservative in what it disassembles. Each basic block is moved to a random location in memory: STIR writes new code sections with copies of the "basic blocks" in randomized locations, the old code is copied and rewritten with jumps to the new code, and the original code sections in the file are marked non-executable. STIR has better entropy than ASLR in the location of code, which makes brute-force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of the "gadgets" (i.e., moved their addresses). Overhead is usually 5-10% on MS Windows and about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is that it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies; for example, don't create a *.{exe,msi,bat} file, or don't connect to the network after reading from the disk.

    Clowntown Express: interesting bugs and running a bug bounty program
    Collin Greene, Facebook

    Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing security on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring in people with different perspectives, and are paid only for success. A bug bounty is a better use of a fixed amount of time and money than code review or static code analysis alone. The bounty program started in July 2011 and has paid out $1.5 million to date.
    14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small percentage of submitters (as with everything else); the top-paid submitters make six figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired. Bug bounties also let the team see bugs that were missed by tools or reviews, allowing improvement of the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

    Active Fingerprinting of Encrypted VPNs
    Anna Shubina, Dartmouth Institute for Security, Technology, and Society

    (I missed the start of her talk because another track went overtime, but I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as what router is used). Delayed packets are not informative about a network, especially from far away. More exploration is needed of how TCP behaves in real life with respect to timing.

    Making Attacks Go Backwards
    FuzzyNop, Mandiant

    This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group; they have diverse skill levels, and there are a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first:
    - "hiding" malware in the temp, recycle, or root directories
    - creation of unnamed scheduled tasks
    - obvious names of files and syscalls ("ClearEventLog")
    - uncleared event logs; clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system

    Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks; they are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" or "[RightShift]"), or timestamp printf strings; look for these in files. Presence is not proof they are used; absence is not proof they are not used. Java exploits: one can parse a jar file with idxparser.py and decompile the Java files. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware, and malware typically also needs command and control.

    Application of Artificial Intelligence in Ad-Hoc Static Code Analysis
    John Ashaman, Security Innovation

    Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. However, the problem is that making a call graph is really hard; for example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST), with the nodes created from method declarations and the edges created from method use. Then the tool generated a control flow graph, with the goal of finding a path through the AST (a maze) from source to sink.
    The algorithm looks at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts on the Security Innovation blog and NRefactory on GitHub.

    Mask Your Checksums—The Gorry Details
    Eric (XlogicX) Davisson

    Sometimes in emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data), and other parts of the packet may leak more data about the IP. The TCP and IP checksums both refer to the same data, so one can get more bits of information out of using both checksums than just one. Also, one can usually determine the OS from the TTL field and the ports in a packet header. If there are hundreds of possible results (16 per masked nibble that is unknown), one can do other things to narrow them down, such as looking at packet contents for domain or geo information. With hundreds of results, they can be imported as CSV into a spreadsheet and correlated with geo data to see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached; he was able to find the exact IP address, given the geo data and university of the sender. The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, so as not to create a false impression of security.

    Adventures with weird machines thirty years after "Reflections on Trusting Trust"
    Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present)

    "Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs" or bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no planted bugs and you trust the author: can you trust the code? Hell no! There are too many factors; it's Babylonian in nature. Why not?
    - Input is not well-defined/recognized: the code's assumptions about "checked" input will be violated (bug/vulnerability). For example, HTML is recursive, but regex checking is not recursive.
    - Input may be well-formed but so complex that there's no telling what it does. For example, ELF file parsing is complex and has multiple ways of parsing.
    - Input is seen differently by different pieces of the program or toolchain.

    Any input is a program: input executes on input handlers (it drives state changes and transitions), and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org), or it's a "virtual machine" for inputs to drive into pwn-age.

    ELF ABI (the UNIX/Linux executable file format) case study: problems can arise from these steps (without planting bugs): compiler, linker, loader, ld.so/rtld, relocator, DWARF (debugger info), exceptions. The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable).
    The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading; there must be tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Likewise, DWARF exception handling data (.eh_frame) + glibc == a Turing machine. The x86 MMU (IDT, GDT, TSS) can also be used: address translation creates a Turing machine, where the page handler reads and writes memory (on page faults) and the page table serves as the Turing machine's byte code. There is an example on GitHub using this TM that will fly a glider across the screen.

    Next Sergey talked about "parser differentials": having one input format but two parsers creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF: there are several parsers in the OS toolchain, which are all different. One can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs, and the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC.

    Conclusions: trusting computers is not only about bugs! Bugs are part of the problem, but by no means all of it. Complex data formats mean bugs. There is no "chain of trust" in Babylon (that is, with parser differentials). We need to squeeze complexity out of data until data stops being "code equivalent".

    Further information: see langsec.org, and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.

    Read the article

  • Good place to look for example Database Designs - Best practices

    - by Younes
    I have been given the task to design a database to store a lot of information for our company. Because the task is rather big and contains multiple modules where users should be able to do stuff, I'm worried about designing a good data model for this. I just don't want to end up with a badly designed database. I want to have some decent examples of database structures for contracts / billing / orders etc to combine those in one nice relational database. Are there any resources out there that can help me with some examples regarding this?

    Read the article

  • Unit testing a database connection and general questions on database-dependent code and unit testing

    - by dotnetdev
    Hi, if I have a method which establishes a database connection, how could this method be tested? Returning a bool in the event of a successful connection is one way, but is that the best way? From a testability standpoint, is it best to have the connection logic in one method and the method that gets data back as a separate method? Also, how would I test methods which get data back from a database? I may do an assert against expected data, but the actual data can change and still be the right result set. EDIT: For the last point, to check data: if it's supposed to be a list of cars, then I can check they are real car models. Or if they are a bunch of web servers, I can have a list of existing web servers on the system, return that from the code under test, and get the test result. If the results are different, the data is the issue but the query is not? Thanks

    Read the article

  • Delphi 2010 Professional and remote database access

    - by Kico Lobo
    Hi, when looking at which version of Delphi 2010 to buy, we found the following limitation on the Professional one: "Delphi 2010 Professional is designed for developers building high-performance desktop GUI and touch-screen applications with (or without) embedded and local database persistence." What does this really mean? Does it mean we'll only face this restriction if we choose to use the native VCL components for database access? And what if we choose to use ADO components instead? In that case, how could Delphi prevent us from accessing remote database servers? Has anyone here ever tried this? Going even further: if we choose a database like Firebird, which is just one file, and use a network-mapped drive, could we be facing the same limitation? Supposing we opt for ADO, what would the main consequences be?

    Read the article

  • Android Jelly bean database is locked (code 5)

    - by mtraxdroid
    I'm getting a "database is locked (code 5)" error in my ListActivity. The code works in the other versions of the emulator but fails in the 4.1 version:

    E/SQLiteLog( 2132): (5) database is locked
    E/SQLiteDatabase( 2132): Failed to open database '/data/data/id.online.mydroid/databases/geo.db'.
    E/SQLiteDatabase( 2132): android.database.sqlite.SQLiteDatabaseLockedException: database is locked (code 5): , while compiling: PRAGMA journal_mode
    E/SQLiteDatabase( 2132): at android.database.sqlite.SQLiteConnection.nativePrepareStatement(Native Method)
    E/SQLiteDatabase( 2132): at android.database.sqlite.SQLiteConnection.acquirePreparedStatement(SQLiteConnection.java:882)
    E/SQLiteDatabase( 2132): at android.database.sqlite.SQLiteConnection.executeForString(SQLiteConnection.java:627)
    E/SQLiteDatabase( 2132): at android.database.sqlite.SQLiteConnection.setJournalMode(SQLiteConnection.java:313)
    E/SQLiteDatabase( 2132): at android.database.sqlite.SQLiteConnection.setWalModeFromConfiguration(SQLiteConnection.java:287)
    E/SQLiteDatabase( 2132): at android.database.sqlite.SQLiteConnection.open(SQLiteConnection.java:215)
    E/SQLiteDatabase( 2132): at android.database.sqlite.SQLiteConnection.open(SQLiteConnection.java:193)
    E/SQLiteDatabase( 2132): at android.database.sqlite.SQLiteConnectionPool.openConnectionLocked(SQLiteConnectionPool.java:463)
    E/SQLiteDatabase( 2132): at android.database.sqlite.SQLiteConnectionPool.open(SQLiteConnectionPool.java:185)
    E/SQLiteDatabase( 2132): at android.database.sqlite.SQLiteConnectionPool.open(SQLiteConnectionPool.java:177)
    E/SQLiteDatabase( 2132): at android.database.sqlite.SQLiteDatabase.openInner(SQLiteDatabase.java:804)
    E/SQLiteDatabase( 2132): at android.database.sqlite.SQLiteDatabase.open(SQLiteDatabase.java:789)
    E/SQLiteDatabase( 2132): at android.database.sqlite.SQLiteDatabase.openDatabase(SQLiteDatabase.java:694)
    E/SQLiteDatabase( 2132): at android.app.ContextImpl.openOrCreateDatabase(ContextImpl.java:804)
    E/SQLiteDatabase( 2132): at android.content.ContextWrapper.openOrCreateDatabase(ContextWrapper.java:221)
    E/SQLiteDatabase( 2132): at android.database.sqlite.SQLiteOpenHelper.getDatabaseLocked(SQLiteOpenHelper.java:224)
    E/SQLiteDatabase( 2132): at android.database.sqlite.SQLiteOpenHelper.getReadableDatabase(SQLiteOpenHelper.java:188)
    E/SQLiteDatabase( 2132): at id.online.mydroid.myDB.openForRead(myDB.java:158)
    E/SQLiteDatabase( 2132): at id.online.mydroid.mydroid.refreshCount(mydroid.java:207)
    E/SQLiteDatabase( 2132): at id.online.mydroid.mydroid.onResume(mydroid.java:525)

    Read the article

  • Change find method in database search so that it isn't case sensitive in Rails app

    - by Ryan
    Hello, I am learning Rails and have created a work-in-progress app that does one-word searches on a database of shortcut keys for various programs (http://keyboardcuts.heroku.com/shortcuts/home). The search method in the Shortcut model is the following:

    def self.search(search)
      search_condition = "%" + search + "%"
      find(:all, :conditions => ['action LIKE ? OR application LIKE ?', search_condition, search_condition])
    end

    ...where 'action' and 'application' are columns in a SQLite table. (source: https://we.riseup.net/rails/simple-search-tutorial) For some reason, the search seems to be case sensitive (you can see this by searching 'Paste' vs. 'paste'). Can anyone help me figure out why and what I can do to make it not case sensitive? If not, can you at least point me in the right direction? Database creation: I first copied shortcuts from various websites into Excel and saved it as a CSV. Then I migrated the database and filled it with the data using db:seed and a small script I wrote (I viewed the database and it looked fine). To get the SQLite database to the server, I used Taps as outlined by the Heroku website (http://blog.heroku.com/archives/2009/3/18/push_and_pull_databases_to_and_from_heroku/). I am using Ubuntu. Please let me know if you need more information. Thanks in advance for your help, very much appreciated! Ryan
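    A minimal sketch of one portable fix, written as plain SQL against the conventional Rails table for this model (shortcuts): lower-case both sides of the comparison so the match no longer depends on the database's LIKE semantics. SQLite's LIKE is case-insensitive for ASCII by default, but PostgreSQL's (which Heroku uses) is case-sensitive, which would explain the difference between environments:

        SELECT *
        FROM shortcuts
        WHERE LOWER(action) LIKE LOWER('%Paste%')
           OR LOWER(application) LIKE LOWER('%Paste%');

    In the Rails finder, the same idea means wrapping the columns in LOWER() inside the :conditions string and downcasing the search term before building search_condition.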

    Read the article

  • SQL SERVER – Install Samples Database Adventure Works for SQL Server 2012

    - by pinaldave
    AdventureWorks is a sample database for SQL Server, and it can be downloaded from the CodePlex site. AdventureWorks replaced Northwind and Pubs as the sample database in SQL Server 2005. The Microsoft team keeps updating the sample database as they release new versions. For SQL Server 2012 RTM, the AdventureWorks sample database is released as: AdventureWorks2012 Data File, and AdventureWorks2012 Case Sensitive Data File. You can download either of the data files and create the database using the same. Here is the script which demonstrates how to create the sample database in SQL Server 2012:

    CREATE DATABASE AdventureWorks2012
    ON (FILENAME = 'D:\AdventureWorks2012_Data.mdf')
    FOR ATTACH_REBUILD_LOG;

    Please specify your file path in the FILENAME variable. Here is the link for additional downloads. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • OleDb database to DataSet and back in c#?

    - by Troy
    I'm writing a program that lets a user:
    - Connect to an (arbitrary) database
    - View all of the tables in that database in separate DataGridViews
    - Edit them in the program, generate random data, and see the results
    - Choose to commit those changes or revert

    So I discovered the DataSet class, which looks like it's capable of holding everything that a database would, and I decided that the best thing to do here would be to load everything into one DataSet, let the user edit it, and then save the DataSet back to the database. The problem is that the only way I can find to load the database tables is this:

    set = new DataSet();
    DataTable schema = connection.GetOleDbSchemaTable(
        OleDbSchemaGuid.Tables,
        new string[] { null, null, null, "TABLE" });
    foreach (DataRow row in schema.Rows)
    {
        string tableName = row.Field<string>("TABLE_NAME");
        DataTable dTable = new DataTable();
        new OleDbDataAdapter("SELECT * FROM " + tableName, connection).Fill(dTable);
        dTable.TableName = tableName;
        set.Tables.Add(dTable);
    }

    while it seems like there should be a simpler way, given that DataSets appear to be designed for exactly this purpose. The real problem, though, is when I try saving these things. In order to use the OleDbDataAdapter.Update() method, I'm told that I have to provide valid INSERT queries. Doesn't that kind of negate the whole point of having a class to handle this stuff for me? Anyway, I'm hoping somebody can either explain how to load and save a database into a DataSet or maybe give me a better idea of how to do what I'm trying to do. I could always parse the commands together myself, but that doesn't seem like the best solution.

    Read the article

  • Are document-oriented databases meant to replace relational databases?

    - by evolve
    Recently I've been working a little with MongoDB and I have to say I really like it. However, it is a completely different type of database than I am used to. I've noticed that it is most definitely better for certain types of data; however, for heavily normalized databases it might not be the best choice. It appears to me, though, that it can take the place of just about any relational database you may have, and in most cases perform better, which is mind-boggling. This leads me to ask a few questions:
    - Have document-oriented databases been developed to be the next generation of databases, basically replacing relational databases completely?
    - Might projects be better off using both a document-oriented database and a relational database side by side, for the data each is better suited to?
    - If document-oriented databases are not meant to replace relational databases, does anyone have an example of a database structure which would absolutely be better off in a relational database (or vice versa)?

    Read the article

  • How do I shrink my SQL Server Database?

    - by Rory Becker
    I have a database nearly 1.9 GB in size, and MSDE2000 does not allow DBs that exceed 2.0 GB, so I need to shrink this DB (and many like it at various client locations). I have found and deleted many hundreds of thousands of records which are considered unneeded. These records account for a large percentage of some of the main (largest) tables in the database, so it's reasonable to assume much space should now be retrievable. So now I need to shrink the DB to account for the missing records. I execute "DBCC SHRINKDATABASE('MyDB')"... no effect. I have tried the various shrink facilities provided in MSSMS... still no effect. I have backed up the database and restored it... still no effect. Still 1.9 GB. Why? Whatever procedure I eventually find needs to be replayable on a client machine with access to nothing other than OSQL or similar.
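    A minimal sketch of the file-level approach that is often needed when SHRINKDATABASE does nothing, runnable from OSQL (the logical file names are hypothetical; list yours first with sp_helpfile):

        -- Find the logical names of the data and log files.
        USE MyDB;
        EXEC sp_helpfile;

        -- On MSDE2000 / SQL Server 2000, the log can hold the space hostage;
        -- truncating it first often lets the shrink actually reclaim space.
        BACKUP LOG MyDB WITH TRUNCATE_ONLY;

        -- Shrink the individual files (target sizes in MB).
        DBCC SHRINKFILE('MyDB_Data', 100);
        DBCC SHRINKFILE('MyDB_Log', 10);

    It's also worth checking with sp_helpfile whether the 1.9 GB is actually data or log before shrinking, since the MSDE size limit applies to the data.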

    Read the article

  • Accommodating hierarchical data in SQL Server 2005 database design

    - by Remnant
    Context: I am fairly new to database design (= know the basics) and am grappling with how best to design my database for a project I am currently working on. In short, my database will keep a log of which employees have attended certain health and safety courses throughout the year. There are multiple types of course, e.g. moving objects, fire safety, hygiene, etc. In terms of my database design I need to accommodate the following:
    - Each location can have multiple divisions
    - Each division can have multiple departments
    - Each department can have multiple functions
    - Each function can have multiple job roles
    - Each job role can have different course requirements

    Also note that the structure at each location may not be the same, e.g. the departments within divisions are not the same across locations, and the functions within departments may also differ.

    Edit - updated to better articulate the problem. Let's assume I am just looking at Location, Division and Department, and I have my database as follows:

    LocationTable: LocationID (PK), LocationName
    DivisionTable: DivisionID (PK), DivisionName
    DepartmentTable: DepartmentID (PK), DepartmentName

    There is a many-to-many relationship between Locations and Divisions, and also between Departments and Divisions. Suppose I set up a 'junction table' as follows:

    Location_Division: LocationID (FK), DivisionID (FK)

    Using Location_Division I could easily pull back the Divisions for any Location. However, suppose I want to pull back all Departments for a given Division in a given Location. If I set up another junction table for Division and Department, then I can't see how I would differentiate Division by Location:

    Division_Department: DivisionID (FK), DepartmentID (FK)

    Location_Division          Division_Department
    LocationID  DivisionID     DivisionID  DepartmentID
    1           1              1           1
    1           2              1           2
    2           1              2           1
    2           2              2           2

    Do I need to expand the number of columns in my junction table, e.g.:

    Location_Division_Department: LocationID (FK), DivisionID (FK), DepartmentID (FK)

    Location_Division_Department
    LocationID  DivisionID  DepartmentID
    1           1           1
    1           1           2
    1           1           3
    2           1           1
    2           1           2
    2           1           3
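    A sketch of the three-way junction table the question ends on, with a composite primary key so each Location/Division/Department combination exists exactly once (T-SQL-style DDL; table and column names taken from the question):

        CREATE TABLE Location_Division_Department (
            LocationID   INT NOT NULL REFERENCES LocationTable(LocationID),
            DivisionID   INT NOT NULL REFERENCES DivisionTable(DivisionID),
            DepartmentID INT NOT NULL REFERENCES DepartmentTable(DepartmentID),
            PRIMARY KEY (LocationID, DivisionID, DepartmentID)
        );

        -- All departments for Division 1 at Location 2:
        SELECT d.DepartmentName
        FROM Location_Division_Department ldd
        JOIN DepartmentTable d ON d.DepartmentID = ldd.DepartmentID
        WHERE ldd.LocationID = 2 AND ldd.DivisionID = 1;

    Because a Division's departments differ per Location, Departments must hang off the pair (LocationID, DivisionID) rather than DivisionID alone, which is exactly what the extra column provides.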

    Read the article

  • SO what RDF database do i use?

    - by keisimone
    Hi, I have a similar issue to the one discussed in http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters and I am now convinced to use RDF. But I already have a database in MySQL, and the code is in PHP. 1) So what RDF database should I use? 2) Should I combine the approaches, i.e. keep class table inheritance in the MySQL database and put just the weird product attributes in RDF? I don't think I should move everything to an RDF database, since it is only the products, with their wide array of possible attributes and values, that are giving me the problem. 3) What PHP resources and articles should I look at that would help me build this? 4) Thank you.

    Read the article

  • python / sqlite - database locked despite large timeouts

    - by Chris Phillips
    Hi, I'm sure I'm missing something pretty obvious, but I can't for the life of me stop my pysqlite scripts crashing out with a "database is locked" error. I have two scripts, one to load data into the database, and one to read data out, but both will frequently, and instantly, crash depending on what the other is doing with the database at any given time. I've got the timeout on both scripts set to 30 seconds: cx = sqlite.connect("database.sql", timeout=30.0) and think I can see some evidence of the timeouts, in that I get what appears to be a timing stamp (e.g. 0.12343827e-06 0.1 - and how do I stop that being printed?) dumped occasionally in the middle of my curses-formatted output screen, but no delay that ever gets remotely near the 30-second timeout; yet one or the other still keeps crashing, again and again. I'm running RHEL 5.4 on a 64-bit, 4-CPU HS21 IBM blade, and have heard some mention of issues with multi-threading, and am not sure if this might be relevant. Packages in use are sqlite-3.3.6-5 and python-sqlite-1.1.7-1.2.1, and upgrading to newer versions outside of Red Hat's official provisions is not a great option for me. Possible, but not desirable due to the environment in general. I previously had autocommit=1 on in both scripts, but have since disabled it in both, and am now cx.commit()ing in the inserting script and not committing in the select script. Ultimately, as I only ever have one script actually making any modifications, I don't really see why this locking should ever happen. I have noticed that this gets significantly worse over time as the database grows larger. It was recently at 13 MB with 3 equal-sized tables, which was about 1 day's worth of data. Creating a new file has significantly improved this, which seems understandable, but the timeout ultimately just doesn't seem to be being obeyed. Any pointers very much appreciated. Thanks, Chris
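    One pattern that sometimes helps, sketched here as the SQL the writer script could issue (plain SQLite SQL, so it fits either script; the table is hypothetical): take the write lock up front with an immediate transaction, so the writer either waits the full timeout or fails fast instead of deadlocking partway through a transaction, and keep each transaction short so readers are blocked as briefly as possible.

        BEGIN IMMEDIATE;   -- acquires the reserved (write) lock right away
        INSERT INTO readings (ts, value) VALUES ('2010-01-01 00:00:00', 42);
        COMMIT;            -- releases the lock promptly

    The read-side script holds a shared lock for the duration of each SELECT, so long-running reads can also starve the writer; keeping both sides' transactions small is usually what makes the timeout behave as expected.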

    Read the article

  • Setting SQL database Permissions for Visual Studio Data Config Wizard

    - by Raven Dreamer
    Hello, Stackoverflow! I'm new to SQL. I have created a new database in SQL Server Management Studio, and am now trying to attach it to a Windows Forms project in Visual Studio via the built-in Data Configuration Wizard. Currently, whenever I try to attach the database file, I get a permissions error: "You don't have permission to open this file. Contact file owner or administrator to obtain permission". So, simple question: how do I modify the permissions of my database to allow this?

    Read the article

  • How should I design my database API commands? [closed]

    - by WebDev
    I am developing a database API for a project, with commands for getting data from the database. For example, I have one gib table, and the command for it is:

    getgib name alias limit fields

    If the user passes a name, e.g. getgib rahul, then it will return all gib data whose name is like rahul. If an alias is given, then it will return all the gibs owned by the user whose alias (user id) was given. The remaining arguments are meant to work as follows: limit restricts the number of records returned by the query, and fields names extra fields I want to add to the select query. So now the commands are set, but I also want to fetch gibs by gibid: how should I design that? Any suggestion to improve my commands is welcome. Also, if the user doesn't want to specify a name and only wants the gibs for a given alias, what separator should I use in place of the name?
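    A minimal sketch of the SQL such commands might translate to, assuming a MySQL gib table with name and alias columns (the column names come from the command syntax; everything else here is hypothetical):

        -- getgib rahul  ->  match by name
        SELECT * FROM gib WHERE name LIKE '%rahul%';

        -- getgib with an alias and a limit of 10 rows
        SELECT * FROM gib WHERE alias = 'rahul123' LIMIT 10;

    On the separator question: making each argument a named flag (e.g. getgib --alias rahul123 --limit 10) instead of a positional value would remove the need for any placeholder when the name is omitted, and would extend naturally to a --gibid flag later.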

    Read the article
