Search Results

Search found 20668 results on 827 pages for 'last modified'.


  • Some VS 2010 RC Updates (including patches for Intellisense and Web Designer fixes)

    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] We are continuing to make progress on shipping Visual Studio 2010. I'd like to say a big thank you to everyone who has downloaded and tried out the VS 2010 Release Candidate, and especially to those who have sent us feedback or reported issues with it. This data has been invaluable in helping us find and fix remaining bugs before we ship the final release. Last...

    Read the article

  • Framework 4 Features: Summary of Security enhancements

    - by Anthony Shorten
    In the last log entry I mentioned one of the new security features in Oracle Utilities Application Framework 4.0.1. Security is one of the major "tent poles" (to borrow a phrase from Steve Jobs) in this release of the framework. A number of security-related enhancements have been introduced, both requested by customers and as a result of internal reviews. Here is a summary of some of the security enhancements we have added in this release:
    - Security Cache Changes - Security authorization information is automatically cached on the server for performance reasons (security is checked for every single call the product makes, for all modes of access). Prior to this release the cache auto-refreshed every 30 minutes (or so). This has been made more nimble by supporting a cache refresh every minute (or so), which means authorization changes are reflected more quickly than before.
    - Business Level Security - Business Services are configurable services that are based upon Application Services. Typically, the business service inherited its security profile from its parent service. Whilst this is sufficient for most needs, it is now possible to further specify security on the Business Service definition itself. This allows granular security and allows the same application service to be exposed as different Business Services, each with its own security. This is particularly useful when you base a Business Service on a query zone.
    - User Propagation - As with other client/server applications, database connections are pooled and shared as needed. This means that a common database user is used to access the database from the pool to allow sharing. Unfortunately, this makes traceability at the database level much harder. In Oracle Utilities Application Framework V4 the end userid is now propagated to the database using CLIENT_IDENTIFIER as part of the Oracle JDBC connection API. The common database userid is still used, but the end user is identifiable for the duration of the database call. This can be used for monitoring or to hook into Oracle's database security products. This enhancement is only available to Oracle Database customers.
    - Enhanced Security Definitions - Security administrators use the product browser front end to control access rights of defined users. While this is sufficient for most sites, a new security portal has been introduced to speed up the maintenance of security information.
    - Oracle Identity Manager Integration - With the popularity of Oracle's Identity Management Suite, the Framework now provides an integration adapter and Identity Manager Generic Transport Connector (GTC) to allow users and group membership to be provisioned to any Oracle Utilities Application Framework based product from Oracle's Identity Manager. This is also available for Oracle Utilities Application Framework V2.2 customers. Refer to My Oracle Support KB Id 970785.1 - Oracle Identity Manager Integration Overview.
    - Audit On Inquiry - Typically the configurable audit facility in the Oracle Utilities Application Framework is used to audit changes to records. In Oracle Utilities Application Framework, Business Services and Service Scripts could already be configured to audit inquiries as well. Now it is possible to attach auditing capabilities to zones in the product (including base package ones).
    - Time Zone Support - In some of the Oracle Utilities Application Framework based products, the time zone of the end user is a factor in the processing. The user object has been extended to allow the recording of time zone information for use in product functionality.
    - JAAS Support - Internally the Oracle Utilities Application Framework uses a number of techniques to validate and transmit security information across the architecture. These various methods have been reconciled into using Java Authentication and Authorization Services for standardized security. This is strictly an internal change with no direct impact on how security operates externally.
    - JMX Based Cache Management - In the last bullet point, I mentioned extra security applied to cache management from the browser. Alternatively, a JMX based interface is now provided to allow IT operations to control the cache without the browser interface. This JMX capability can be initiated from a JSR120 compliant JMX console or JMX browser. I will be writing another, more detailed blog entry on the JMX enhancements as it is quite a change and an exciting direction for the product line.
    - Data Patch Permissions - The database installer provided with the product required lower levels of security for some operations. Some sites wanted the ability for non-DBAs to execute the utilities in a controlled fashion. The framework now provides feature configuration to allow delegation for patch execution.
    - User Enable Support - At some sites, the use of temporary staff such as contractors is commonplace. In this scenario, temporary security setups were required and used. A potential issue arises when the contractor leaves the company: typically the IT group would remove the contractor from the security repository to prevent login with that contractor's userid, but the userid could NOT be removed from the authorization model because of audit requirements (if any user in the product updates financials or key data, their userid is recorded for audit purposes). It is now possible to effectively disable the user in the security model to prevent any use of the userid whilst retaining audit information.
    These are a subset of the security changes in Oracle Utilities Application Framework. More details about the security capabilities of the product are contained in My Oracle Support KB Id 773473.1 - Oracle Utilities Application Framework Security Overview.
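    To make the CLIENT_IDENTIFIER idea concrete, here is a minimal C# sketch of tagging a pooled connection with the real end user. This is an illustration of the concept only, not the framework's internal implementation (which works through the Oracle JDBC API); the class and method names are hypothetical, while DBMS_SESSION.SET_IDENTIFIER is the standard PL/SQL call for populating CLIENT_IDENTIFIER on a session.

        using System.Data;

        public static class ClientIdentifier
        {
            // 'connection' is an open connection taken from the shared pool;
            // 'endUserId' is the application-level user to expose to the database.
            public static void Tag(IDbConnection connection, string endUserId)
            {
                using (IDbCommand command = connection.CreateCommand())
                {
                    // DBMS_SESSION.SET_IDENTIFIER sets CLIENT_IDENTIFIER for the session,
                    // so views such as V$SESSION show the end user behind the pooled login.
                    // Parameter placeholder syntax (":id" here) varies by ADO.NET provider.
                    command.CommandText = "BEGIN DBMS_SESSION.SET_IDENTIFIER(:id); END;";

                    IDbDataParameter p = command.CreateParameter();
                    p.ParameterName = "id";
                    p.Value = endUserId;
                    command.Parameters.Add(p);

                    command.ExecuteNonQuery();
                }
            }
        }

    Monitoring tools or Oracle's database security products can then key off that identifier for the duration of the call, which is exactly the traceability benefit described above.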

    Read the article

  • SQL SERVER – Get Latest SQL Query for Sessions – DMV

    - by pinaldave
    In a recent SQL training class I was asked how one can figure out the last SQL statement executed in each session. The query for this is very simple. It uses two DMVs, and I created the following quick script for it:
        SELECT session_id, text
        FROM sys.dm_exec_connections
        CROSS APPLY sys.dm_exec_sql_text(most_recent_sql_handle) AS ST
    While working with DMVs, if you ever find a DMV with a column named sql_handle, you can right away join that DMV with the DMV sys.dm_exec_sql_text and get the text of the SQL statement. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: DMV, SQL DMV
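    If you want to pull the same information from client code rather than Management Studio, a small C# sketch along these lines would work (the connection string is a placeholder; the query is the one from the post):

        using System;
        using System.Data.SqlClient;

        class LastQueryPerSession
        {
            static void Main()
            {
                // Placeholder connection string; point it at your own server.
                const string connectionString =
                    "Server=localhost;Database=master;Integrated Security=true";

                const string sql =
                    "SELECT session_id, text " +
                    "FROM sys.dm_exec_connections " +
                    "CROSS APPLY sys.dm_exec_sql_text(most_recent_sql_handle) AS st";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Some connections may have no cached statement text.
                            string text = reader.IsDBNull(1) ? "(no text)" : reader.GetString(1);
                            Console.WriteLine("{0}: {1}", reader.GetValue(0), text);
                        }
                    }
                }
            }
        }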

    Read the article

  • WPF ListView as a DataGrid – Part 3

    - by psheriff
    I have had a lot of great feedback on the blog post about turning the ListView into a DataGrid by creating GridViewColumn objects on the fly. In the last two parts I showed a couple of different methods for accomplishing this. Let's now look at one more, and that is to use Reflection to extract the properties from a Product, Customer, or Employee object and create the columns. Yes, Reflection is a slower approach, but you could create the columns one time and then cache the View object for re-use. Another potential drawback is that you may have columns in your object that you do not wish to display in your ListView. But, just because so many people asked, here is how to accomplish this using Reflection.
    Figure 1: Use Reflection to create GridViewColumns.
    Using Reflection to gather property names is actually quite simple. First you need to pass any type (Product, Customer, Employee, etc.) to a method like I did in my last two blog posts on this subject. Below is the method that I created in the WPFListViewCommon class that now uses Reflection.
    C#
        public static GridView CreateGridViewColumns(Type anyType)
        {
          // Create the GridView
          GridView gv = new GridView();
          GridViewColumn gvc;

          // Get the public properties.
          PropertyInfo[] propInfo =
            anyType.GetProperties(BindingFlags.Public |
                                  BindingFlags.Instance);

          foreach (PropertyInfo item in propInfo)
          {
            gvc = new GridViewColumn();
            gvc.DisplayMemberBinding = new Binding(item.Name);
            gvc.Header = item.Name;
            gvc.Width = Double.NaN;
            gv.Columns.Add(gvc);
          }

          return gv;
        }
    VB.NET
        Public Shared Function CreateGridViewColumns( _
          ByVal anyType As Type) As GridView
          ' Create the GridView
          Dim gv As New GridView()
          Dim gvc As GridViewColumn

          ' Get the public properties.
          Dim propInfo As PropertyInfo() = _
            anyType.GetProperties(BindingFlags.Public Or _
                                  BindingFlags.Instance)

          For Each item As PropertyInfo In propInfo
            gvc = New GridViewColumn()
            gvc.DisplayMemberBinding = New Binding(item.Name)
            gvc.Header = item.Name
            gvc.Width = [Double].NaN
            gv.Columns.Add(gvc)
          Next

          Return gv
        End Function
    The key to using Reflection is the GetProperties method on the type you pass in. When you pass in a Product object as Type, you can use GetProperties and specify, via flags, which properties you wish to return. In the code I wrote, I am retrieving only the public instance properties; I do not want any static/Shared properties or private properties. GetProperties returns an array of PropertyInfo objects. You can loop through this array and build your GridViewColumn objects by reading the Name property from each PropertyInfo object.
    Build the Product Screen
    To populate the ListView shown in Figure 1, you might write code like the following:
    C#
        private void CollectionSample()
        {
          Product prod = new Product();

          // Setup the GridView Columns
          lstData.View =
            WPFListViewCommon.CreateGridViewColumns(typeof(Product));
          lstData.DataContext = prod.GetProducts();
        }
    VB.NET
        Private Sub CollectionSample()
          Dim prod As New Product()

          ' Setup the GridView Columns
          lstData.View = WPFListViewCommon.CreateGridViewColumns( _
            GetType(Product))
          lstData.DataContext = prod.GetProducts()
        End Sub
    All you need to do now is to pass in a Type object from your Product class, which you can get by using the typeof() operator in C# or the GetType() function in VB. That's all there is to it!
    Summary
    There are so many different ways to approach the same problem in programming. That is what makes programming so much fun! In this blog post I showed you how to create ListView columns on the fly using Reflection. This gives you a lot of flexibility without having to write extra code as was done previously. NOTE: You can download the complete sample code (in both VB and C#) at my website, http://www.pdsa.com/downloads. Choose Tips & Tricks, then "WPF ListView as a DataGrid – Part 3" from the drop-down.
    Good luck with your coding,
    Paul Sheriff
    ** SPECIAL OFFER FOR MY BLOG READERS ** Visit http://www.pdsa.com/Event/Blog for a free eBook on "Fundamentals of N-Tier".
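    Since the article points out that the Reflection approach is slower but the resulting View can be cached, here is a small sketch of what such a cache might look like (the GridViewCache class name is mine, not part of the article's download):

        using System;
        using System.Collections.Generic;
        using System.Windows.Controls;

        public static class GridViewCache
        {
            // One GridView per CLR type, built lazily on first request.
            private static readonly Dictionary<Type, GridView> _views =
                new Dictionary<Type, GridView>();

            public static GridView GetOrCreate(Type anyType)
            {
                GridView view;
                if (!_views.TryGetValue(anyType, out view))
                {
                    // Falls back to the Reflection-based builder shown above.
                    view = WPFListViewCommon.CreateGridViewColumns(anyType);
                    _views[anyType] = view;
                }
                return view;
            }
        }

    With that in place, the Reflection cost is paid only once per type; subsequent calls for the same type reuse the cached GridView instead of reflecting again.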

    Read the article

  • Screencasts introducing C++ AMP

    - by Daniel Moth
    It has been almost 2.5 years since I last recorded a screencast, and I had forgotten how time consuming they are to plan/record/edit/produce/publish, but at the same time so much fun to see the end result! So below are links to 4 screencasts to teach you C++ AMP basics from scratch (even if you class yourself as a .NET developer you'll be able to follow).
    - Setup code - part 1
    - array_view, extent, index - part 2
    - parallel_for_each - part 3
    - accelerator - part 4
    If you have comments/questions about what is shown in each video, please leave them at each video recording. If you have generic questions about C++ AMP, please ask in the C++ AMP MSDN forum. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Evolution of Apple: A Fan Spliced Mega Tribute to the Apple Product Lineup

    - by Jason Fitzpatrick
    Whether you’re an Apple fan or not, this 3.5 minute tribute to the evolution of Apple products is a neat look back at decades of computing history and iconic design. Put together by Apple fan August Brandels, the video splices together Apple commercials and promotional footage from the last 30 years (remixed against the catchy background tune Silhouettes by Avicii) into a mega tribute to the computer giant. If nothing else, Apple should hire the guy to do motivational videos for annual employee meetings. [via Tech Crunch]

    Read the article

  • Learning to Grow

    - by jack.flynn
    A Conversation with Ted Simpson of HEUG A great place to revisit Oracle OpenWorld year round is OracleWebVideo on YouTube. Oracle Magazine Senior Editor Jeff Erickson sat down with Ted Simpson at last year's Oracle OpenWorld to find out how the Higher Education Users Group (HEUG) is helping hundreds of member institutions and thousands of individuals across the globe meet the technological challenges in colleges and universities. Simpson joined HEUG back when it was a PeopleSoft special interest group. Now that higher education institutions have expanded into IT infrastructures the size of global corporations or small municipalities, his user group has also been challenged by growth.

    Read the article

  • Visual Studio 2010 Productivity Power Tool Extensions

    Last month I blogged about the Extension Manager that is built into VS 2010, as well as about a cool VS 2010 PowerCommands extension that provides some extra features for Visual Studio. The Visual Studio 2010 Extension Manager provides an easy way for developers to quickly find and install extensions and plugins that enhance the built-in functionality of VS 2010. New VS 2010 Productivity Power Tools Release: Earlier this week Jason Zander announced the availability of a new VS 2010...

    Read the article

  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo
    Talks covered in this post:
    - Introduction to Toorcon 15 (2013)
    - A Tale of One Software Bypass of MS Windows 8 Secure Boot
    - Breaching SSL, One Byte at a Time
    - Running at 99%: Surviving an Application DoS
    - Security Response in the Age of Mass Customized Attacks
    - x86 Rewriting: Defeating RoP and other Shinanighans
    - Clowntown Express: interesting bugs and running a bug bounty program
    - Active Fingerprinting of Encrypted VPNs
    - Making Attacks Go Backwards
    - Mask Your Checksums—The Gorry Details
    - Adventures with weird machines thirty years after "Reflections on Trusting Trust"
    Introduction to Toorcon 15 (2013)
    Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended and that interested me enough to write about them. Be aware that I may have misrepresented the speaker's remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information.
    A Tale of One Software Bypass of MS Windows 8 Secure Boot
    Andrew Furtak and Oleksandr Bazhaniuk
    Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak
    Yuri and Alex talked about UEFI and bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference.
    MS Windows 8 Secure Boot Overview: UEFI (Unified Extensible Firmware Interface) is the interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi); once replaced, it can modify the kernel. It is trivial to replace the bootloader. Today there are many legacy bootkits—UEFI replaces most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed updates). You would think Secure Boot would rely on ROM (such as is used for phones), but you can't do that for PCs—PCs use writable memory with signatures. The DXE core verifies the UEFI boot loader(s). The OS loader (winload.efi, winresume.efi) verifies the OS kernel. A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx), X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (they can't be accessed once the OS starts—which uses Run-Time (RT)). The root cert uses RSA-2048 public keys and PKCS#7 format signatures.
    - SecureBoot — enable/disable image signature checks
    - SetupMode — update keys, self-signed keys, and secure boot variables
    - CustomMode — allows updating keys
    Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation.
    Attacking MS Windows 8 Secure Boot: Secure Boot does NOT protect from physical access; it can be disabled from the console. Each BIOS vendor implements Secure Boot differently, and there are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly:
    - allow only signed UEFI firmware updates
    - protect UEFI firmware from direct modification in flash memory
    - protect FW update components
    - program the SPI controller securely
    - protect secure boot policy settings in NVRAM
    - protect the runtime API
    - disable the compatibility support module which allows unsigned legacy boot
    An attacker can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If the PK is not found, the FW enters setup mode with secure boot turned off. One can also exploit the TPM in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations:
    - allow only signed updates
    - protect UEFI fw in ROM
    - protect the EFI variable store in ROM
    Breaching SSL, One Byte at a Time
    Yoel Gluck and Angelo Prado
    Angelo Prado and Yoel Gluck, Salesforce.com
    CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME requests with every possible character and measures the ciphertext length. It looks for the plaintext which compresses the most and finds the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy; Huffman coding replaces common byte sequences with shorter codes. US-CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed, and a CVE was issued.
    BREACH, breachattrack.com: BREACH exploits the SSL response body (Accept-Encoding response, Content-Encoding). It takes advantage of the fact that the response is not compressed. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say, from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. It can use compression to guess contents one byte at a time. For example, "Supersecret SupersecreX" (a wrong guess) compresses 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses 11 bytes, so it can find each character by guessing every character. To start the guess, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include:
    - lookahead (guess 2 or 3 characters at a time instead of 1 character) — expensive
    - rollback to the last known conflict
    - check the compression ratio
    - brute-force the first 3 "bootstrap" characters, if needed (expensive)
    - block ciphers hide the exact plaintext length; the solution is to align the response in advance to the block size
    Mitigations:
    - length: use variable padding
    - secrets: dynamic CSRF tokens per request
    - secret: change over time
    - separate secret to input-less servlets
    Future work: understanding DEFLATE/GZIP, HTTPS extensions.
    Running at 99%: Surviving an Application DoS
    Ryan Huber
    Ryan Huber, Risk I/O
    Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets, or download large files. Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use Apache alternatives such as nginx.
    How to identify malicious hosts:
    - short, sudden web requests
    - user-agent is obvious (curl, python)
    - same URL requested repeatedly
    - no web page referer (not normal)
    - hidden links: hide a link and see if a bot follows it
    - restricted access if not your geo IP (unless the website is global)
    - missing common headers in the request
    - regular timing
    - first-seen IP at the beginning of an attack
    - count requests per host (usually a very large number)
    Use of captcha can mitigate attacks, but you'll lose a lot of genuine users.
    Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer
    Bouncer is software written by Ryan that works with netflow. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it is not nice about it—no proper connection close). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers to get clean data for use by Bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, consumer library, multitask, logging destroyed connections. Takeaways: DoS mitigation is easier with a complete picture; Bouncer is designed to make it easier to detect and defend against DoS—it is not a complete cure.
    Security Response in the Age of Mass Customized Attacks
    Peleus Uhley and Karthik Raman
    Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/
    Peleus and Karthik talked about responding to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass-production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world, or you can design your own PC from commodity parts. Exploit kits are another example of mass customization. The kits support multiple browsers and plugins and allow new modules. Exploit kits are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits.
    Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OS, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now, so the drive with malware is towards mass-customized exploits, to avoid detection. There's plenty of evidence that exploit development has Project Manager bureaucracy. They infer from the malware edicts to:
    - support all versions of Reader
    - support all versions of Windows
    - support all versions of Flash
    - support all browsers
    - write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example)
    Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browsers and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs. However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.
    x86 Rewriting: Defeating RoP and other Shinanighans
    Richard Wartell
    Richard Wartell
    The attack vector addressed here is: first, some malware causes a buffer overflow. The malware has no program access, but it has input access and overflows code onto the stack. Later the stack became non-executable; the workaround malware used was to write a bogus return address to the stack, jumping to malware. Later came ASLR (Address Space Layout Randomization) to randomize memory layout and make addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways. "RoP" is Return-oriented Programming attacks. RoP attacks use your own code and write return addresses on the stack to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR does not randomize the address space, just "gadgets". IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine.
    Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code. The STIR disassembler is conservative in what it disassembles. Each basic block is moved to a random location in memory. Next, STIR writes new code sections with copies of the "basic blocks" of code in randomized locations. The old code is copied and rewritten with jumps to the new code, and the original code sections in the file are marked non-executable. STIR has better entropy than ASLR in the location of code, which makes brute force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of "gadgets" (i.e., moved the address). Overhead is usually 5-10% on MS Windows, about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is that it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies, for example: don't create a *.{exe,msi,bat} file, or don't connect to the network after reading from the disk.
    Clowntown Express: interesting bugs and running a bug bounty program
    Collin Greene
    Collin Greene, Facebook
    Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring people in with different perspectives, and are paid only for success. A bug bounty is a better use of a fixed amount of time and money versus just code review or static code analysis. The bounty program started in July 2011 and has paid out $1.5 million to date. 14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small % of submitters (as with everything else)—the top paid submitters are paid 6 figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired. Bug bounties also allow you to see bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.
    Active Fingerprinting of Encrypted VPNs
    Anna Shubina
    Anna Shubina, Dartmouth Institute for Security, Technology, and Society
    (I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as what router is used). Delayed packets are not informative about a network, especially if far away from the network. More exploration is needed of how TCP works in real life with respect to timing.
    Making Attacks Go Backwards
    Fuzzynop
    FuzzyNop, Mandiant
    This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group—they have diverse skill levels, and there are a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first:
    - "hiding" malware in the temp, recycle, or root directories
    - creation of unnamed scheduled tasks
    - obvious names of files and syscalls ("ClearEventLog")
    - uncleared event logs; clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system
    Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" "[RightShift]") or timestamp printf strings; look for these in files. Presence is not proof they are used; absence is not proof they are not used. Java exploits: you can parse a jar file with idxparser.py and decompile the Java file. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware, and malware typically also needs command and control.
    Application of Artificial Intelligence in Ad-Hoc Static Code Analysis
    John Ashaman
    John Ashaman, Security Innovation
    Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph and then analyze the graph. However, the problem is that making a call graph is really hard; for example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST) with the nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph with the goal of finding a path through the AST (a maze) from source to sink. The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts on the Security Innovation blog and NRefactory on GitHub.
    Mask Your Checksums—The Gorry Details
    Eric (XlogicX) Davisson
    Eric (XlogicX) Davisson
    Sometimes when emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data). Other parts of the packet may leak more data about the IP. The TCP and IP checksums both refer to the same data, so you can get more bits of information out of using both checksums than just one. Also, one can usually determine the OS from the TTL field and ports in a packet header. If we get hundreds of possible results (16x for each masked nibble that is unknown), one can do other things to narrow the results, such as look at packet contents for domain or geo information. With hundreds of results, you can import them as CSV into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached and was able to find the exact IP address, given the geo and university of the sender. The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, so as not to create a false impression of security.
    Adventures with weird machines thirty years after "Reflections on Trusting Trust"
    Sergey Bratus
    Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present)
    "Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs" or bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author—can you trust the code? Hell no! There are too many factors—it's Babylonian in nature. Why not?
    - Input is not well-defined/recognized (the code's assumptions about "checked" input will be violated—a bug/vulnerability). For example, HTML is recursive, but regex checking is not recursive.
    - Input is well-formed but so complex there's no telling what it does. For example, ELF file parsing is complex and has multiple ways of parsing.
    - Input is seen differently by different pieces of the program or toolchain.
    Any input is a program: input executes on input handlers (it drives state changes and transitions), and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org) or it's a "virtual machine" for inputs to drive into pwn-age.
    ELF ABI (UNIX/Linux executable file format) case study: problems can arise from these steps (without planting bugs): compiler, linker, loader, ld.so/rtld, relocator, DWARF (debugger info), exceptions. The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable). The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading—you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file; for more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Or DWARF exception handling data: .eh_frame + glibc == Turing machine. The x86 MMU (IDT, GDT, TSS) has been used via address translation to create a Turing machine: the page handler reads and writes memory (on page fault) using a page table, which can be used as Turing machine byte code. There is an example on GitHub using this TM that will fly a glider across the screen.
    Next Sergey talked about "parser differentials": having one input format but two parsers creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF—there are several parsers in the OS tool chain, which are all different. You can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs; the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC.
    Conclusions:
    - trusting computers is not only about bugs! Bugs are part of the problem, but by far not all of it
    - complex data formats mean bugs
    - no "chain of trust" in Babylon! (that is, with parser differentials)
    - we need to squeeze complexity out of data until data stops being "code equivalent"
    Further information: see langsec.org, and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.
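    To make the compression-oracle idea from the "Breaching SSL, One Byte at a Time" summary concrete, here is a small self-contained C# sketch. It only demonstrates the principle locally with GZipStream (a real CRIME/BREACH attack measures compressed, encrypted traffic on the wire); the page content and secret below are made up.

        using System;
        using System.IO;
        using System.IO.Compression;
        using System.Text;

        class CompressionOracleDemo
        {
            // Returns the gzip-compressed size of a string.
            static int CompressedLength(string s)
            {
                var output = new MemoryStream();
                using (var gzip = new GZipStream(output, CompressionMode.Compress))
                {
                    byte[] bytes = Encoding.ASCII.GetBytes(s);
                    gzip.Write(bytes, 0, bytes.Length);
                }
                return output.ToArray().Length;
            }

            static void Main()
            {
                // Hypothetical response body containing a secret the attacker wants.
                const string page = "secret=Supersecret; other=static page content";

                foreach (string guess in new[] { "secret=SupersecreX", "secret=Supersecret" })
                {
                    // The attacker-supplied guess is reflected next to the real secret.
                    int length = CompressedLength(page + guess);
                    Console.WriteLine("{0} -> {1} compressed bytes", guess, length);
                }

                // The correct guess typically compresses smaller (or at least no larger),
                // because the LZ77 stage finds a longer match against the real secret.
                // Repeating this one character at a time recovers the secret.
            }
        }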

    Read the article

  • Setting up a new Silverlight 4 Project with WCF RIA Services

    - by Kevin Grossnicklaus
    Many of my clients are actively using Silverlight 4 and RIA Services to build powerful line-of-business applications. Getting things set up correctly is critical to being able to take full advantage of the RIA Services plumbing, and when developers struggle with the setup they tend to shy away from the solution as a whole. I'm a big proponent of RIA Services and wanted to take the opportunity to share some of my experiences in setting up these types of projects. In late 2010 I presented a RIA Services Master Class here in St. Louis, MO through my firm (ArchitectNow), and the information shared in this post was promised during that presentation.
    One other thing I want to mention before diving in is the existence of a number of other great posts on this subject. I've learned a lot from many of them and wanted to call out a few. The purpose of my post is to point out some of the gotchas that people get caught up on in the process, but I would still encourage you to do as much additional research as you can to find the perfect setup for your needs. Here are a few additional blog posts and articles you should check out on the subject:
    http://msdn.microsoft.com/en-us/library/ee707351(VS.91).aspx
    http://adam-thompson.com/post/2010/07/03/Getting-Started-with-WCF-RIA-Services-for-Silverlight-4.aspx
    Technologies
    I don't intend for this post to turn into a full WCF RIA Services tutorial, but I did want to point out what technologies we will be using:
    - Visual Studio .NET 2010
    - Silverlight 4.0
    - WCF RIA Services for Visual Studio 2010
    - Entity Framework 4.0
    I also wanted to point out that the screenshots came from my personal development box, which has a number of additional plug-ins and frameworks loaded, so a few of the screenshots might not match 100% with what you see on your own machines. If you do not have Visual Studio 2010 you can download the express version from http://www.microsoft.com/express. The Silverlight 4.0 tools and the WCF RIA Services components are installed via the Web Platform Installer (http://www.microsoft.com/web/download). Also, the examples given in this post are done in C#…sorry to you VB folks, but the concepts are 100% identical.
    Setting up a new RIA Services Project
    This section provides a step-by-step walkthrough of setting up a new RIA Services project using a shared DLL for server-side code and a simple Entity Framework model for data access. All projects are created with the consistent ArchitectNow.RIAServices filename prefix and default namespace. This would be modified to match your company's standards.
    First, open Visual Studio and open the new project window via File->New->Project. In the New Project window, select the Silverlight folder in the Installed Templates section on the left and select "Silverlight Application" as your project type. Verify your solution name and location are set appropriately. Note that the project name we specified in the example below ends with .Client. This indicates the name which will be given to our Silverlight project. I consider Silverlight a client-side technology and thus use this name to reflect that. Click OK to continue.
    During the creation of a new Silverlight 4 project you will be prompted with the following dialog to create a new ASP.NET web project to host your Silverlight content. As we are demonstrating the setup of a WCF RIA Services infrastructure, make sure the "Enable WCF RIA Services" option is checked and click OK.
    Obviously, there are some other options here which have an effect on your solution and you are welcome to look around. For our example we are going to leave the ASP.NET Web Application Project selected. If you are interested in having your Silverlight project hosted in an MVC 2 application or a Web Site project, these options are available as well. Also, whichever web project type you select, the name can be modified here; note that it defaults to the same name as your Silverlight project with the addition of a .Web suffix.
    At this point, your full Silverlight 4 project and host ASP.NET Web Application should be created and will now display in your Visual Studio solution explorer as part of a single Visual Studio solution as follows:
    Now we want to add our WCF RIA Services projects to this same solution. To do so, right-click on the Solution node in the solution explorer and select Add->New Project. In the New Project dialog again select the Silverlight folder under the Visual C# node on the left and, in the main area of the screen, select the WCF RIA Services Class Library project template as shown below. Make sure your project name is set appropriately as well. For the sample below, we will name the project "ArchitectNow.RIAServices.Server.Entities". The .Server.Entities suffix we use is meant to simply indicate that this particular project will contain our WCF RIA Services entity classes (as you will see below). Click OK to continue.
    Once you have created the WCF RIA Services Class Library specified above, Visual Studio will automatically add TWO projects to your solution. The first will be a project called .Server.Entities (using our naming conventions) and the other will have the same name with a .Web extension. The full solution (with all 4 projects) is shown in the image below. The .Entities project will essentially remain empty; it is actually a Silverlight 4 class library that will contain generated RIA Services domain objects. It will be referenced by our front-end Silverlight project and thus allow for simplified sharing of code between the client and the server. The .Entities.Web project is a .NET 4.0 class library into which we will put our data access code (via Entity Framework). This is our server-side code and business logic, and the RIA Services plumbing will maintain a link between this project and the front end. Specific entities such as our domain objects and other code we set to be shared will be copied automatically into the .Entities project to be used in both the front end and the back end.
    At this point, we want to do a little cleanup of the projects in our solution, and we will do so by deleting the "Class1.cs" class from both the .Entities project and the .Entities.Web project. (Has anyone ever intentionally named a class "Class1"?)
    Next, we need to configure a few references to make RIA Services work. THIS IS A KEY STEP THAT CAUSES MANY HEADACHES FOR DEVELOPERS NEW TO THIS INFRASTRUCTURE! Using the Add References dialog in Visual Studio, add a project reference from the *.Client project (our Silverlight 4 client) to the *.Entities project (our RIA Services class library). Next, again using the Add References dialog in Visual Studio, add a project reference from the *.Client.Web project (our ASP.NET host project) to the *.Entities.Web project (our back-end data services DLL).
    To get to the Add References dialog, simply right-click on the project you wish to add a reference to in the Visual Studio solution explorer and select "Add Reference" from the resulting context menu. You will want to make sure these references are added as "Project" references to simplify your future debugging. To reiterate the reference direction using the project names we have utilized in this example thus far: .Client references .Entities, and .Client.Web references .Entities.Web. If you have opted for a different naming convention, then the Silverlight project must reference the RIA Services Silverlight class library and the ASP.NET host project must reference the server-side class library.
    Next, we are going to add a new Entity Framework data model to our data services project (.Entities.Web). We will do this by right-clicking on this project (ArchitectNow.Server.Entities.Web in the above diagram) and selecting Add->New Item. In the Add New Item dialog we will select ADO.NET Entity Data Model as in the following diagram. For now we will call this simply SampleDataModel.edmx and click OK.
    It is worth pointing out that WCF RIA Services is in no way tied to the Entity Framework as a means of accessing data, and any data access technology is supported (as long as the server-side implementation maps to the RIA Services pattern, which is a topic beyond the scope of this post). We are using EF to quickly demonstrate the RIA Services concepts and setup infrastructure; as such, I am not providing a database schema with this post but am instead connecting to a small sample database on my local machine. The following diagram shows a simple EF data model with two tables that I reverse engineered from a local data store. If you are putting together your own solution, feel free to reverse engineer a few tables from any local database to which you have access.
    At this point, once you have an EF data model generated as an EDMX in your .Entities.Web project, YOU MUST BUILD YOUR SOLUTION. I know it seems strange to call that out, but it is important that the solution be built at this point for the next step to be successful. Obviously, if you have any build errors, these must be addressed now.
    Next we will add a RIA Services Domain Service to our .Entities.Web project (our server-side code). Right-click on the .Entities.Web project and select Add->New Item. In the Add New Item dialog, select Domain Service Class and verify the name of your new Domain Service is correct (ours is called SampleService.cs in the image below). Next, click "Add".
    After clicking "Add" to include the Domain Service Class in the selected project, you will be presented with the following dialog. In it, you can choose which entities from the selected EDMX to include in your services and whether they should be allowed to be edited (i.e. inserted, updated, or deleted) via this service. If the "Available DataContext/ObjectContext classes" dropdown is empty, this indicates you have not yet successfully built your project after adding your EDMX. I would also recommend verifying that the "Generate associated classes for metadata" option is selected. Once you have selected the appropriate options, click "OK".
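    When you click "OK", the wizard generates the domain service class for you. As a rough sketch of what to expect (the class, context, and entity names below are placeholders based on this walkthrough's SampleDataModel, not code copied from the generated file), the service has this general shape:

        using System.Linq;
        using System.ServiceModel.DomainServices.EntityFramework;
        using System.ServiceModel.DomainServices.Hosting;

        // Exposes the Entity Framework context to the Silverlight client.
        [EnableClientAccess()]
        public class SampleService : LinqToEntitiesDomainService<SampleDataModelEntities>
        {
            // Query method: RIA Services generates a matching client-side proxy and
            // entity class for Product in the .Entities project.
            public IQueryable<Product> GetProducts()
            {
                return this.ObjectContext.Products;
            }

            // If editing was enabled for Product in the wizard, insert/update/delete
            // methods are generated as well, for example:
            public void InsertProduct(Product product)
            {
                this.ObjectContext.Products.AddObject(product);
            }
        }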
    Once you have added the domain service class to the .Entities.Web project, the resulting solution should look similar to the following:
    Note that in the solution you now have a SampleDataModel.edmx, which represents your EF data mapping to your database, and a SampleService.cs, which will contain a large amount of generated RIA Services code that RIA Services utilizes to access this data from the Silverlight front end. You will put all your server-side data access code and logic into the SampleService.cs class. The SampleService.metadata.cs class is for decorating the generated domain objects with attributes from the System.ComponentModel.DataAnnotations namespace for validation purposes.
    FINAL AND KEY CONFIGURATION STEP! One key step that causes significant headaches for developers configuring RIA Services for the first time is the fact that, when we added the EDMX to the .Entities.Web project for our EF data access, a connection string was generated and placed within a newly generated App.Config file within that project. While we didn't point it out at the time, you can see it in the image above. This connection string is required for the EF data model to successfully locate its data. Also, when we added the Domain Service class to the .Entities.Web project, a number of RIA Services configuration options were added to the same App.Config file.
    Unfortunately, when we ultimately begin to utilize the RIA Services infrastructure, our Silverlight UI will be making RIA Services calls through the ASP.NET host project (i.e. .Client.Web). This host project has a reference to the .Entities.Web project which actually contains the code, so all calls will pass through correctly EXCEPT for the fact that the host project will utilize its own Web.Config for any configuration settings. For this reason we must now merge all the sections of the App.Config file in the .Entities.Web project into the Web.Config file in the .Client.Web project. I know this is a bit tedious and I wish there were a simpler solution, but it is required for our RIA Services Domain Service to be made available to the front-end Silverlight project. Much of this manual merge can be achieved by simply cutting and pasting from App.Config into Web.Config. Unfortunately, the <system.webServer> section will exist in both, and the contents of this section will need to be manually merged. Fortunately, this is a step that needs to be taken only once per solution. As you add additional data structures and Domain Service methods to the server, no additional changes will be necessary to the Web.Config.
    Next Steps
    At this point, we have walked through the basic setup of a simple RIA Services solution. Unfortunately, there is still a lot to know about RIA Services and we have not even begun to take advantage of the plumbing which we just configured (meaning we haven't even made a single RIA Services call). I plan on posting a few more introductory posts over the next few weeks to take us to this step. If you have any questions on the content in this post, feel free to reach out to me via this blog and I'll gladly point you in (hopefully) the right direction.
    Resources
    Prior to closing out this post, I wanted to share a number of resources to help you get started with RIA Services. While I plan on posting more on the subject, I didn't invent any of this stuff and wanted to give credit to the following areas for helping me put a lot of these pieces into place.
    The books and online resources below will go a long way to making you extremely productive with RIA services in the shortest time possible. The only thing required of you is the dedication to take advantage of the resources available.
    Books
    - Pro Business Applications with Silverlight 4: http://www.amazon.com/Pro-Business-Applications-Silverlight-4/dp/1430272074/ref=sr_1_2?ie=UTF8&qid=1291048751&sr=8-2
    - Silverlight 4 in Action: http://www.amazon.com/Silverlight-4-Action-Pete-Brown/dp/1935182374/ref=sr_1_1?ie=UTF8&qid=1291048751&sr=8-1
    - Pro Silverlight for the Enterprise (Books for Professionals by Professionals): http://www.amazon.com/Pro-Silverlight-Enterprise-Books-Professionals/dp/1430218673/ref=sr_1_3?ie=UTF8&qid=1291048751&sr=8-3
    Web Content
    RIA Services
    - http://channel9.msdn.com/Blogs/RobBagby/NET-RIA-Services-in-5-Minutes
    - http://silverlight.net/riaservices/
    - http://www.silverlight.net/learn/videos/all/net-ria-services-intro/
    - http://www.silverlight.net/learn/videos/all/ria-services-support-visual-studio-2010/
    - http://channel9.msdn.com/learn/courses/Silverlight4/SL4BusinessModule2/SL4LOB_02_01_RIAServices
    - http://www.myvbprof.com/MainSite/index.aspx#/zSL4_RIA_01
    - http://channel9.msdn.com/blogs/egibson/silverlight-firestarter-ria-services
    - http://msdn.microsoft.com/en-us/library/ee707336%28v=VS.91%29.aspx
    Silverlight
    - www.silverlight.net
    - http://msdn.microsoft.com/en-us/silverlight4trainingcourse.aspx
    - http://channel9.msdn.com/shows/silverlighttv

    Read the article

  • What is the objective of unit testing?

    - by user728750
    I've been working with C# for the last 2 years, and I've never done any unit testing. I just need to know what the objective of unit testing is. What kind of results do we expect from unit testing? Is code quality checked by unit testing? In my view, unit testing is the job of testers; if that is true, then as a developer why would I need to write test code if the tester does the unit testing? Why should I write extra code for testing? Do I need to maintain a separate copy of a project for unit testing?
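    For readers who have never seen one, here is a minimal, hypothetical example of what a unit test is: a small piece of developer-written code that verifies one unit of production code in isolation (NUnit-style attributes are assumed here; MSTest and xUnit look very similar).

        using NUnit.Framework;

        // Production code under test.
        public class PriceCalculator
        {
            public decimal ApplyDiscount(decimal price, decimal percent)
            {
                return price - (price * percent / 100m);
            }
        }

        // Developer-written test that documents and verifies the expected behavior.
        [TestFixture]
        public class PriceCalculatorTests
        {
            [Test]
            public void ApplyDiscount_TenPercent_ReducesPriceByTenPercent()
            {
                var calculator = new PriceCalculator();

                decimal result = calculator.ApplyDiscount(200m, 10m);

                Assert.AreEqual(180m, result);
            }
        }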

    Read the article

  • CodePlex Daily Summary for Saturday, January 01, 2011

    CodePlex Daily Summary for Saturday, January 01, 2011
    Popular Releases
    - BloodSim: BloodSim - 1.3.0.0: Added tally for number of boss swings and swing avoids. Removed a large number of options that were carried over from Beta and are no longer relevant. Changed stat entry to use Rating format for Dodge, Parry, Haste and Mastery. Rearranged Settings interface. BloodSim will now check for updates on startup and notify the user if a new version is available. Added option to Show/Hide the Simulation Log to increase speed during large simulations.
    - EnhSim: EnhSim 2.2.8 ALPHA: 2.2.8 ALPHA. This release supports WoW patch 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Rebuilt Feral Spir...
    - CBM-Command: Version 2.0 Beta 2 - 2010-12-31: This version fixes three major bugs in Version 2.0 Beta 1. Changes: (64, 128, Plus/4) Changed the timing code back to version 1.7 because the clock() function in cc65 does not reflect accurate timing. (VIC, PET) The time(NULL) function is not available on these targets and thus had to be removed. Help Hot-Key Fixed - Now when you press either F1 or H you get the help file (if the CBM-Command disk is in the drive you started CBM-Command from). Updated Help File - the new help file by popmi...
    - Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.6 Released: Hi, today we are releasing the final version of Visifire, v3.6.6, with the following new features: TextDecorations property is implemented in Title for Chart; TitleTextDecorations property is implemented in Axis; MinPointHeight property is now applicable for Column and Bar Charts. This release also includes a few bug fixes: ToolTipText property of DataSeries was not getting applied from Style; Chart threw exception if IndicatorEnabled property was set to true and Too...
    - StyleCop Compliant Visual Studio Code Snippets: Visual Studio Code Snippets - January 2011: StyleCop Compliant Visual Studio Code Snippets. Visual Studio 2010 provides C# developers with 38 code snippets, enhancing developer productivity and increasing the consistency of the code. Within this project the original code snippets have been refactored to provide StyleCop compliant versions of the original code snippets while also adding many new code snippets. Within the January 2011 release you'll find 82 code snippets to make you more productive and the code you write more consistent!...
    - WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.2: Version: 2.0.0.2 (Milestone 2): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Remark: the sample applications are using Microsoft's IoC container MEF. However, the WPF Application Framework (WAF) doesn't force you to use the same IoC container in your application. You can use ...
    - eCompany: eCompany v0.2.0 Build 63: Version 0.2.0 Build 63: Added splash screen & about box. Added downloading of currencies when eCompany is launched for the first time (must close any bug caused by no currency rate existing). Added corp creation when eCompany is launched for the first time (for now, you don't need to edit the company.xml file manually). You just need to decompress the file "eCompany v0.2.0.63.zip" into your current eCompany install directory.
    - SQL Monitor - tracking sql server activities: SQL Monitor 3.0 alpha 8: 1. added truncate table/defrag index/check db functions 2. improved alert 3. fixed problem with alert causing config file corruption (hopefully)
    - Temporary Data Storage Folder: TDS Folder version 0.2 Beta: In this release the following bugs are fixed: 'Send to' entry bug fixed; Preferences bug fixed.
    - Silverlight File Upload and Download with Interlink: HSS Interlink v.2.1.300: Latest Release 2.1.300 - December 29th 2010. Change Log: DownloadFileDialog modified to support an absolute uri for the DownloadUri property, which is required for OOB support; UploadFileDialog modified to support an absolute uri for the UploadUri property, which is required for OOB support. For existing users, be sure to uninstall the older version prior to installing this version. Note: The demo application is NOT included with the installer but can be reviewed here. Stable release and ready...
    - RDPAddins .NET: RDPAddins Alpha 2 (0.2.0.0): Second alpha release... Breaking changes (now this project has 0.2 version): now an addin should implement RDPAddins.Common.IAddin and should be exported with RDPAddins.Common.AddinMetadataAttribute!!! Most of the old Addin base class methods can be found in the IChannel or IUI interfaces; see FileTransferAddin. Now RDPAddins.Common.dll just provides interfaces for addin, channel, UI, and export metadata. The whole implementation is in RDPAddins.exe. RDPAddins.Common.dll has some documentation :) why all this...
    - DocX: DocX v1.0.0.11: Building Examples project: to build the Examples project, download DocX.dll and add it as a reference to the project. Overview: this version of DocX contains many bug fixes; it is a serious step towards a stable release. Added: 1) Unit testing project, 2) Examples project, 3) Too many bug fixes to list here, see the source code change list history.
    - Cosmos (C# Open Source Managed Operating System): 71406: This is the second release supporting the full line of Visual Studio 2010 editions. Changes since release 71246 include: Debug info is now stored in a single .cpdb file (which is a Firebird database); keyboard input works now (using Console.ReadLine); console colors work (using Console.ForegroundColor and .BackgroundColor).
    - AutoLoL: AutoLoL v1.5.0: Added the all new Masteries Browser which replaces the Quick Open combobox. AutoLoL will now attempt to create file associations for mastery (*.lolm) files. Each Mastery Build can now contain keywords that the Masteries Browser will use for filtering. Changed the way AutoLoL detects if another instance is already running. Changed the format of the mastery files to allow more information stored in*. Dialogs will now focus the Ok or Cancel button which allows the user to press Return to clo...
    - Paint.NET PSD Plugin: 1.6.0: Handling of layer masks has been greatly improved. Improved reliability: many PSD files that previously loaded in as garbage will now load in correctly. Parallelized loading: PSD files containing layer masks will load in a bit quicker thanks to the removal of the sequential bottleneck. Hidden layers are no longer made visible on save. Many thanks to the users who helped expose the layer masks problem: Rob Horowitz, M_Lyons10. Please keep sending in those bug reports and PSD repro files!
    - Facebook C# SDK: 4.1.1: From 4.1.1 Release: Authentication bug fix caused by a facebook change (error with redirects in Safari); Authenticator fix, always returning true. From 4.1.0 Release: lots of bug fixes; removed Dynamic Runtime Language dependencies from non-dynamic platforms; samples included in the release for ASP.NET, MVC, Silverlight, Windows Phone 7, WPF, WinForms, and one Visual Basic sample; changed internal serialization to use Json.net. BREAKING CHANGE: Canvas Session is no longer supported. Use Signed...
    - Catel - WPF and Silverlight MVVM library: 1.0.0: And there it is, the final release of Catel, and it is no longer a beta version!
    - Euro for Windows XP: ChangeRegionalSettings 1..0: *
    - Rocket Framework (.Net 4.0): Rocket Framework for Windows V 1.0.0: Architecture is reviewed and adjusted in a way so that I can introduce the Web version and WPF version of this framework next. Rocket.Core is introduced; controller button functions revisited and updated; DB is renewed to suit the implemented features; Create New button functionality is changed; added Question Handling features.
    - Flickr Wallpaper Rotator (for Windows desktop): Wallpaper Flickr 1.1: Some minor bugfixes (mostly covering when the network connection is flaky, so I discovered them all while at my parents' house for Christmas).
    New Projects
    - 7-Up: PowerShell Scripts for Upgrading SharePoint 2007 to 2010: PowerShell scripts to automate the upgrade of SharePoint 2007 to SharePoint 2010 using a content database or hybrid upgrade approach.
    - Apple Wireless Keyboard: Helper that allows people to use the Apple Wireless (or possibly wired) Keyboard under Windows 7 without losing the Mac functionality.
    - ckTest: testtesttesttest…
    - Crude - .Net Dependency Management: Crude is light dependency management for .NET; there was no dependency management solution for .NET such as Maven or Ivy until now.
    - DeepTime: Event tracking, management and plotting site.
    - Dev/Test Cloud Platform: Dev/Test Cloud Platform is implemented by Beyondsoft and Microsoft. The platform is dedicated to software development and test scenarios. With Dev/Test Cloud you can save cost, improve work efficiency, and improve product quality.
    - Image Viewer (wpf version): Image viewer helps Windows users to review images on their computer. It is written in C# and WPF.
    - LNOne: All common code, framework code, and utility code in one.
    - LOGL::GLib: LOGL::GLib is an OpenGL game library written in C++ that allows new C++/OpenGL programmers to focus more on the game and less on the details.
    - MAPI.GUI: Graphical program that uses functions from the mcopyapi.codeplex.com and mdeleteapi.codeplex.com console programs. The program supports long paths (above 259 chars and less than 32,000 chars).
    - Movie Collection Manager: A manager that helps you organize your movie collection. It has a very attractive interface that is simple to use.
Ferramentas e tecnologias usadas: - Visual C# 2010 Express - SQL Server 2008 R2 Express - Windows Forms - Entity Framework OBS: Projeto concorrente do Desafio .NETMyStudioServer.com: Source code for the DotNetNuke http://MyStudioServer.com websiteNusya Tester: A software to create and use various tests with right answers explanations to help one in self-education. It's developed in C#Project Zylaphon: Project Zylaphon is a C# GPL Open Source Computer Aided Music Composition System. Its design goals include highly interactive composition, music computation, and analysis; system control to be provided by an A.I. goal based agenda and quasi Blackboard for maximum flexibility.Remember The Task: RememberTheTask makes it easier to remember what you have been doing the last hour. It's developed in Visual Basic.Rent Payments Scheduling: Register rent schedules and keep track of them.Simple MVVM Toolkit for Silverlight: Simple MVVM Toolkit makes it easier to develop Silverlight applications using the Model-View-ViewModel design pattern.Sparrow.NET HtmlTemplate: Its a template engine for generating web pages.(?????HTML?????,??????html Tag???????html????。)SQLDiagUI: SQLDiagUI provides a GUI for Microsoft SQlDiag Utility which allows users to create a configuration file and start/stop/schedule SQLDiag against a SQL Server.Test Control: testing frameworkTwicko: Simple twitter client.Unity3D Utilities: Unity3D Utilities provides additional functionality to Unity3D (3.0+) via extensions, utilities, mini-frameworks, etc.VBA Composite Controls Object Model: VBA Composite Controls encapsulate complex event-driven interactions between ActiveX controls and other objects in Microsoft Office. It's uses are limited only by the imagination, from binding controls together for updating, to creating unique user interfaces!World of Warcraft Backit up(wow backit up): a project that helps you backup and restore your settings in wow easilyYamma: Yet Another Money Management Application. Build in .NET 4, SQLCE, LINQ and the Entity Framework.

    Read the article

  • Umbraco Certified Developer - Level 2 - Chris Houston

    - by Vizioz Limited
    I just thought I'd create a quick blog post to say that I have now been on the Umbraco Level 2 course (which I would recommend!) and although it turned out that I pretty much knew 95% of what was taught, the extra 5% and the chance of a trip to Copenhagen made it worth it :) I am now officially Umbraco Level 2 certified :) Hopefully over the next month I will have some time to start adding a few more useful blog posts to my blog. I know I've been a little slack on posting in the last month; it's just been a busy time for me!

    Read the article

  • Bluetooth DUN Tethering fails

    - by tacone
    I have an HTC Desire HD, with Android Froyo (2.2) and PDANet installed. I am using Ubuntu 10.10. I cannot tether it over Bluetooth either with Network Manager or BlueMan. (note, I installed Blueman only after failing with NetWork manager, and I even tried the last version from the PPA). With both my device is discovered, paired, setup. But connecting always fail. Network manager says it cannot get the details of my device Blueman says Connection Refused (111) Here are some relevant entries from syslog. Mar 11 22:13:00 tacone-macbook bluetoothd[2242]: Bluetooth deamon 4.69 Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Starting SDP server Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Starting experimental netlink support Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Failed to find Bluetooth netlink family Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Failed to init netlink plugin Mar 11 22:13:00 tacone-macbook kernel: [ 158.284357] Bluetooth: L2CAP ver 2.14 Mar 11 22:13:00 tacone-macbook kernel: [ 158.284361] Bluetooth: L2CAP socket layer initialized Mar 11 22:13:00 tacone-macbook kernel: [ 158.446781] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 Mar 11 22:13:00 tacone-macbook kernel: [ 158.446784] Bluetooth: BNEP filters: protocol multicast Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: HCI dev 0 registered Mar 11 22:13:00 tacone-macbook kernel: [ 158.569481] Bluetooth: SCO (Voice Link) ver 0.6 Mar 11 22:13:00 tacone-macbook kernel: [ 158.569484] Bluetooth: SCO socket layer initialized Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: HCI dev 0 up Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Starting security manager 0 Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: ioctl(HCIUNBLOCKADDR): Invalid argument (22) Mar 11 22:13:00 tacone-macbook kernel: [ 158.818600] Bluetooth: RFCOMM TTY layer initialized Mar 11 22:13:00 tacone-macbook kernel: [ 158.818607] Bluetooth: RFCOMM socket layer initialized Mar 11 22:13:00 tacone-macbook kernel: [ 158.818610] Bluetooth: RFCOMM ver 1.11 Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: probe failed with driver input-headset for device /org/bluez/2242/hci0/dev_F8_DB_7F_AF_6B_EE Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Adapter /org/bluez/2242/hci0 has been enabled Mar 11 22:13:00 tacone-macbook pulseaudio[1757]: bluetooth-util.c: Error from ListDevices reply: org.freedesktop.DBus.Error.AccessDenied Mar 11 22:13:00 tacone-macbook NetworkManager[1247]: <warn> bluez error getting adapter properties: Rejected send message, 1 matched rules; type="method_call", sender=":1.4" (uid=0 pid=1247 comm="NetworkManager) interface="org.bluez.Adapter" member="GetProperties" error name="(unset)" requested_reply=0 destination="org.bluez" (uid=0 pid=2242 comm="/usr/sbin/bluetoothd)) Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: return_link_keys (sba=00:23:6C:B5:03:6F, dba=00:23:6C:C0:F1:B0) Mar 11 22:13:00 tacone-macbook pulseaudio[1757]: bluetooth-util.c: Error from GetProperties reply: org.freedesktop.DBus.Error.AccessDenied Mar 11 22:15:02 tacone-macbook bluetoothd[2243]: Discovery session 0x2262d7c0 with :1.45 activated Mar 11 22:15:15 tacone-macbook bluetoothd[2243]: Stopping discovery Mar 11 22:15:15 tacone-macbook pulseaudio[1757]: bluetooth-util.c: Error from GetProperties reply: org.freedesktop.DBus.Error.AccessDenied Mar 11 22:15:16 tacone-macbook bluetoothd[2243]: link_key_request (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE) Mar 11 22:15:16 tacone-macbook bluetoothd[2243]: io_capa_request (sba=00:23:6C:B5:03:6F, 
dba=F8:DB:7F:AF:6B:EE) Mar 11 22:15:17 tacone-macbook bluetoothd[2243]: io_capa_response (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE) Mar 11 22:15:18 tacone-macbook bluetoothd[2243]: Stopping discovery Mar 11 22:15:28 tacone-macbook bluetoothd[2243]: link_key_notify (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE, type=5) Mar 11 22:15:28 tacone-macbook kernel: [ 306.585725] l2cap_recv_acldata: Unexpected continuation frame (len 0) Mar 11 22:15:28 tacone-macbook kernel: [ 306.630757] l2cap_recv_acldata: Unexpected continuation frame (len 0) Mar 11 22:15:28 tacone-macbook bluetoothd[2243]: Authentication requested Mar 11 22:15:28 tacone-macbook bluetoothd[2243]: link_key_request (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE) Mar 11 22:15:28 tacone-macbook kernel: [ 306.784829] l2cap_recv_acldata: Unexpected continuation frame (len 0) Mar 11 22:15:28 tacone-macbook kernel: [ 306.857861] l2cap_recv_acldata: Unexpected continuation frame (len 0) Mar 11 22:15:29 tacone-macbook bluetoothd[2243]: probe failed with driver input-headset for device /org/bluez/2242/hci0/dev_F8_DB_7F_AF_6B_EE Mar 11 22:15:29 tacone-macbook pulseaudio[1757]: bluetooth-util.c: Error from GetProperties reply: org.freedesktop.DBus.Error.AccessDenied Mar 11 22:15:29 tacone-macbook pulseaudio[1757]: last message repeated 8 times Mar 11 22:15:29 tacone-macbook bluetoothd[2243]: Stopping discovery Mar 11 22:15:30 tacone-macbook modem-manager: (tty/rfcomm0): could not get port's parent device Mar 11 22:15:30 tacone-macbook modem-manager: (rfcomm0) opening serial device... Mar 11 22:15:30 tacone-macbook modem-manager: (rfcomm0): probe requested by plugin 'Generic' Mar 11 22:15:43 tacone-macbook modem-manager: (rfcomm0) closing serial device... Mar 11 22:15:43 tacone-macbook modem-manager: (rfcomm0) opening serial device... Mar 11 22:15:49 tacone-macbook modem-manager: (rfcomm0) closing serial device... Mar 11 22:16:15 tacone-macbook modem-manager: (tty/rfcomm0): could not get port's parent device Mar 11 22:16:19 tacone-macbook kernel: [ 357.375108] l2cap_recv_acldata: Unexpected continuation frame (len 0) Mar 11 22:16:24 tacone-macbook bluetoothd[2243]: link_key_request (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE) Mar 11 22:16:24 tacone-macbook kernel: [ 362.169506] l2cap_recv_acldata: Unexpected continuation frame (len 0) Mar 11 22:16:24 tacone-macbook kernel: [ 362.215529] l2cap_recv_acldata: Unexpected continuation frame (len 0) Mar 11 22:16:24 tacone-macbook bluetoothd[2243]: link_key_request (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE) Mar 11 22:16:24 tacone-macbook kernel: [ 362.281559] l2cap_recv_acldata: Unexpected continuation frame (len 0) Mar 11 22:16:24 tacone-macbook kernel: [ 362.330588] l2cap_recv_acldata: Unexpected continuation frame (len 0) Mar 11 22:16:24 tacone-macbook modem-manager: (tty/rfcomm0): could not get port's parent device Any help ? PS: tethering via USB or WiFi is not an option, I need to do it over Bluetooth.

    Read the article

  • Installation Won't Finish

    - by Joey G
    I installed Ubuntu 12.10 (32-bit) on my Acer Aspire One notebook and replaced the Windows 8 Consumer Preview. Everything went fine, but right before the installation finished, it got stuck. The loading bar at the bottom is full, and it says "Copying installation logs," but my mouse won't move and it's been at this point for almost an hour. Also, the cursor is showing the loading spinner, so I know my computer didn't freeze. Should I just restart now? I'm not sure if it's at the last stage, but it seems like it is, and this has taken longer than the rest of the installation combined. EDIT: I had my computer go into sleep mode for a minute and now I can move the mouse again. When I click the "Copying.." part, it says "Activation (eth1) Stage 4 of 5 complete" but "5 of 5" (I assume that comes next) isn't starting.

    Read the article

  • Enabling SSL on apache2 causes address already in use error

    - by durron597
    My server works just fine on a normal apache2 install. Now, I'm trying to install subversion on this server using this guide: http://alephzarro.com/blog/2007/01/07/installation-of-subversion-on-ubuntu-with-apache-ssl-and-basicauth/
    I get the following error:
    (98)Address already in use: make_sock: could not bind to address 0.0.0.0:443
    When I do grep -rH 443 /etc/apache2/, I get results in two files: ports.conf and sites-enabled/default-ssl. I tried it both with and without that last Listen 443 commented out. Here's ports.conf:
        NameVirtualHost *:80
        Listen 80
        <IfModule mod_ssl.c>
            NameVirtualHost *:443
            Listen 443
        </IfModule>
        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>
        #Listen 443
    And the first few lines of default-ssl:
        <IfModule mod_ssl.c>
        <VirtualHost *:443>
            ServerAdmin webmaster@localhost
            SSLEngine on
            SSLCertificateFile /this/isnt/relevant/probably.pem
            SSLProtocol all
            SSLCipherSuite HIGH:MEDIUM
    And netstat -an --inet | grep 443 returns nothing. Any ideas?

    Read the article

  • Windows CE Chat March 30, 2010

    - by Bruce Eitman
    Another great opportunity to ask Microsoft engineers your technical questions is coming up on Tuesday, March 30th.  These chats are your opportunity to get advice and answers from the engineers at Microsoft.   You may want to review the transcript from last month to get an idea about what kind of topics are discussed. Title:    Windows CE Live Chat! When:  Tuesday, March 30, 2010 9:00 - 10:00 A.M. Pacific Time   Add to Calendar Description: Do you have tough technical questions regarding Windows CE or Windows Mobile for which you're seeking answers? Do you want to tap into the deep knowledge of the talented Microsoft Embedded Devices Group members? If so, please join us for a live Windows CE chat and bring on the questions! Windows CE is the operating system that is powering the next generation of 32-bit, small-footprint and mobile devices. This chat will cover the tools and technologies used to develop devices using the Windows CE operating system. To join this chat, please log on via the main MSDN chat page at: EnterChatRoom   Copyright © 2010 – Bruce Eitman All Rights Reserved

    Read the article

  • SQL Server - MVP 2010

    - by JustinL
    I was very happy to receive an email last week confirming that I would receive the MVP Award for SQL Server for 2010 - very exciting news! I missed the first FedEx delivery, but this weekend they were able to successfully deliver the package from Microsoft, and it began to feel very real as I opened the box to find the MVP glassware! Since leaving Microsoft, the past couple of years have been incredibly challenging, exciting and satisfying. The MVP Award is really special; the SQL community has a fantastic, international base with many successful events, leaders and contributors providing an impressive network both online and in person. I'm really excited about the year ahead - starting this week with SQL Bits in London, followed by PASS EMEA in Germany next week and the London PASS user group meeting on Monday 26th April. Regards,   Justin Langford - Coeo Ltd

    Read the article

  • TDD and your emerging design

    - by andrewstopford
    I was at DevWeek last week; it was a great week and I got a chance to speak with some of my geek heroes (Jeff Richter is a walking, talking CLR). One of the folks I most enjoyed listening to was ThoughtWorker Neal Ford, who gave a session on emergent design in TDD. Something struck me about the red-green-refactor (RGR) cycle in TDD: design can be missed or misplaced if the refactor phase is never carried out and the design is considered done after the initial green phase. In TDD, the emergent design that evolves as part of the cycle is key to the approach.  Neal talked about using cyclomatic complexity as a measure of your emerging design, but other considerations would surely include SOLID and DRY during the cycles. As you refactor towards these kinds of design principles, your design evolves.
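
    As a rough illustration of the refactor phase doing design work, here is a minimal C# sketch with NUnit; the ShippingCalculator and its free-shipping rule are hypothetical and not from Neal's session:

        using NUnit.Framework;

        // After the green phase this rule lived in duplicated if/else branches;
        // the refactor step collapses it into one expression, lowering cyclomatic
        // complexity while the existing tests keep the behaviour pinned.
        public class ShippingCalculator
        {
            public decimal ShippingFor(decimal orderTotal)
            {
                return orderTotal >= 50m ? 0m : 4.99m;
            }
        }

        [TestFixture]
        public class ShippingCalculatorTests
        {
            [Test]
            public void Orders_of_fifty_or_more_ship_free()
            {
                var calculator = new ShippingCalculator();
                Assert.AreEqual(0m, calculator.ShippingFor(50m));
            }

            [Test]
            public void Smaller_orders_pay_a_flat_fee()
            {
                var calculator = new ShippingCalculator();
                Assert.AreEqual(4.99m, calculator.ShippingFor(49.99m));
            }
        }

    The tests written during red and green are what make the later design moves cheap: the refactor phase can keep reshaping the code towards lower complexity, SOLID and DRY while the bar stays green.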

    Read the article

  • Identifier for the “completed” stage of a process: 0, 99, something else?

    - by Arnold Sakhnov
    Say that you are handling a multi-step process (like a complex registration form, with a number of steps the user has to go through in order). You need to be able to save the current state of the process (e.g. so the user can come back to that registration form later and continue from the step where they left off). Obviously, you’ll probably want to give each “step” an identifier you can refer to: 1, 2, 3, 4, etc. Your logic will check this step_id (or whatever you call it) to render the appropriate data. The question: how would you identify the stage after the final step, i.e. the completed registration state (say that you have to give that last “step” its own id; that’s how your logic is structured)? Would it be 0, 999, a non-integer value, or something else entirely?
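
    One common way to sidestep the magic-number question is to name the states explicitly and keep the terminal state outside the ordered range. A minimal C# sketch, with hypothetical step names:

        // Hypothetical registration steps; only the ordering of the real steps matters.
        public enum RegistrationStep
        {
            AccountDetails = 1,
            Profile = 2,
            Preferences = 3,
            Confirmation = 4,
            Completed = 99   // terminal state, deliberately outside the ordered range
        }

        public static class RegistrationStepExtensions
        {
            // Advancing past the last real step always lands on Completed.
            public static RegistrationStep Next(this RegistrationStep current)
            {
                return current >= RegistrationStep.Confirmation
                    ? RegistrationStep.Completed
                    : (RegistrationStep)((int)current + 1);
            }
        }

    Whether the value stored behind Completed is 0, 99 or int.MaxValue then becomes an implementation detail; the rest of the logic only ever refers to the name.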

    Read the article

  • Your thoughts on Best Practices for Scientific Computing?

    - by John Smith
    A recent paper by Wilson et al. (2014) sets out 24 best practices for scientific programming. It's worth a look. I would like to hear opinions about these points from experienced programmers in scientific data analysis. Do you think this advice is helpful and practical, or is it good only in an ideal world?
    Wilson G, Aruliah DA, Brown CT, Chue Hong NP, Davis M, Guy RT, Haddock SHD, Huff KD, Mitchell IM, Plumbley MD, Waugh B, White EP, Wilson P (2014) Best Practices for Scientific Computing. PLoS Biol 12:e1001745. http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001745
    Box 1. Summary of Best Practices
    1. Write programs for people, not computers. (a) A program should not require its readers to hold more than a handful of facts in memory at once. (b) Make names consistent, distinctive, and meaningful. (c) Make code style and formatting consistent.
    2. Let the computer do the work. (a) Make the computer repeat tasks. (b) Save recent commands in a file for re-use. (c) Use a build tool to automate workflows.
    3. Make incremental changes. (a) Work in small steps with frequent feedback and course correction. (b) Use a version control system. (c) Put everything that has been created manually in version control.
    4. Don’t repeat yourself (or others). (a) Every piece of data must have a single authoritative representation in the system. (b) Modularize code rather than copying and pasting. (c) Re-use code instead of rewriting it.
    5. Plan for mistakes. (a) Add assertions to programs to check their operation. (b) Use an off-the-shelf unit testing library. (c) Turn bugs into test cases. (d) Use a symbolic debugger.
    6. Optimize software only after it works correctly. (a) Use a profiler to identify bottlenecks. (b) Write code in the highest-level language possible.
    7. Document design and purpose, not mechanics. (a) Document interfaces and reasons, not implementations. (b) Refactor code in preference to explaining how it works. (c) Embed the documentation for a piece of software in that software.
    8. Collaborate. (a) Use pre-merge code reviews. (b) Use pair programming when bringing someone new up to speed and when tackling particularly tricky problems. (c) Use an issue tracking tool.
    I'm relatively new to serious programming for scientific data analysis. When I tried to write code for pilot analyses of some of my data last year, I ran into a tremendous number of bugs in both my code and my data. Bugs and errors have always been around, but this time it was somewhat overwhelming. I managed to crunch the numbers at last, but I decided I couldn't put up with that mess any longer; some action had to be taken. Even without a sophisticated guide like the article above, I have been adopting a "defensive" style of programming since then. A book titled "The Art of Readable Code" helped me a lot. I added meticulous input validation or assertions to every function, renamed a lot of variables and functions for better readability, and extracted many subroutines as reusable functions. Recently, I introduced Git and SourceTree for version control. At the moment, because my co-workers are much more reluctant about these issues, the collaboration practices (8a,b,c) have not been introduced. As the authors admit, all of these practices take time and effort to introduce, so it can be hard to persuade reluctant collaborators to comply with them. I think I'm asking for your opinions because I still suffer from many bugs despite all my effort on many of these practices.
    Bug fixing may be, or should be, faster than before, but I can't really measure the improvement. Moreover, much of my time has been invested in defence, which means I haven't actually done much data analysis (offence) lately. At what point should I stop, in terms of productivity?
    I've already adopted: 1a,b,c, 2a, 3a,b,c, 4b,c, 5a,d, 6a,b, 7a,b
    I'm about to have a go at: 5b,c
    Not yet: 2b,c, 4a, 7c, 8a,b,c
    (I can't really see the advantage of using GNU make (2c) for my purposes. Could anyone tell me how it would help my work with MATLAB?)
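
    Most of the practices in the list are language-agnostic. As a rough illustration of 5(a) "add assertions" and 5(c) "turn bugs into test cases", here is a small C# sketch with NUnit; the Mean helper and the empty-input bug it guards against are hypothetical:

        using System;
        using System.Linq;
        using NUnit.Framework;

        public static class SampleStats
        {
            // 5(a): check the precondition instead of silently producing nonsense.
            public static double Mean(double[] samples)
            {
                if (samples == null || samples.Length == 0)
                    throw new ArgumentException("samples must contain at least one value");
                return samples.Average();
            }
        }

        [TestFixture]
        public class SampleStatsTests
        {
            // 5(c): a bug found during a pilot analysis (an empty series slipped through)
            // becomes a permanent regression test.
            [Test]
            public void Mean_of_empty_input_is_rejected()
            {
                Assert.Throws<ArgumentException>(() => SampleStats.Mean(new double[0]));
            }
        }

    The same two habits translate directly to MATLAB using assert/error and its unit testing framework, so adopting them does not depend on switching languages.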

    Read the article

  • Chrome Apps Office Hours: Chrome Storage APIs

    Ask and vote for questions: goo.gl You spoke, we listened. Join Paul Kinlan, Paul Lewis, Pete LePage, and Renato Dias to learn about the new storage APIs that are available to Chrome Packaged Apps in the next installment of Chrome Apps Office Hours. We'll take a look at the new sync-able and local storage APIs as well as other ways you can save data locally on your users' machines. We didn't get through quite as many questions as we hoped last week, and are going to dedicate some extra time this week, so be sure to post your questions on Moderator below! From: GoogleDevelopers

    Read the article

  • On automating a split-mirror ASM backup with EMC TimeFinder ...

    - by [email protected]
    Hi clerks,   Offloading the backup operation to another host using disk cloning could really improve the performance on highly busy databases ( 24x7, zero downtime and all this stuff ...) There are well-known white papers on this subject, ASM included, but today I'm showing you a nice way to automate the procedure using shell scripting with EMC TimeFinder technologies:   Assumptions: *********** ASM diskgroups name:   +data_${db_name} : asm data diskgroup +fra_${db_name} :  asm fra  diskgroup   EMC Time Finder sync groups name:   rac_${DB_NAME}_data_tf : data group rac_${DB_NAME}_fra_tf:   fra group     There are two scripts, one located on the production box ( bck_database.sh ) and the other one on the backup server node ( bck_database_mirror.sh ). The second one is remotely executed from the production host. There are a bunch of variables along the code with self-explanatory names, I guess; anyway, let me know if you want some help.     #!/bin/ksh ### ###  Copyright (c) 1988, 2010, Oracle Corporation.  All Rights Reserved. ### ###    NAME ###     bck_database.sh ### ###    DESCRIPTION ###     Database backup on third mirror ### ###    RETURNS ### ###    NOTES ### ###    MODIFIED                                 (DD/MM/YY) ###    Oracle            28/01/10             - Creacion ###   V_DATE=`/bin/date +%Y%m%d_%H%M%S` V_FICH_LOG=`dirname $0`/trace_dir_location/`basename $0`.${V_DATE}.log exec 4>&1 tee ${V_FICH_LOG} >&4 |& exec 1>&p 2>&1     ADMIN_DIR=`dirname $0` . ${ADMIN_DIR}/setenv_instance.sh -- This script should set the instance vars like Oracle Home, Sid, db_name ... if [ $? -ne 0 ] then   echo "Error when setting the environment."   exit 1 fi   echo "${V_DATE} ####################################################" echo "Executing database backup: ${DB_NAME}" echo "####################################################################"   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Sync asm data diskgroups ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_data_tf establish -noprompt if [ $? -ne 0 ] then   echo "Error when sync asm data diskgroups"   exit 2 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Verifying asm data disks ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_data_tf -i 30 verify if [ $? -ne 0 ] then   echo "Error when verifying asm data diskgroups"   exit 3 fi     V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Sync asm fra diskgroups ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_fra_tf establish -noprompt if [ $?
-ne 0 ] then   echo "Error when sync asm fra diskgroups"   exit 4 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Verifying asm fra disks ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_fra_tf -i 30 verify if [ $? -ne 0 ] then   echo "Error when verifying asm fra diskgroups"   exit 5 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "ASM sync sucessfully completed!" echo "####################################################################"     V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Updating status ${DB_NAME} to BEGIN BACKUP ..." echo "####################################################################" sqlplus -s /nolog <<-!   whenever sqlerror exit 1   connect / as sysdba   whenever sqlerror exit   alter system archive log current;   alter database ${DB_NAME} begin backup; ! if [ $? -ne 0 ] then   echo "Error when updating database status to BEGIN backup"   exit 6 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Splitting asm data disks....." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_data_tf split -noprompt if [ $? -ne 0 ] then   echo "Error when splitting asm data disks"   exit 7 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Updating status ${DB_NAME} to END BACKUP ..." echo "####################################################################" sqlplus -s /nolog <<-!   whenever sqlerror exit 1   connect / as sysdba   whenever sqlerror exit   alter database ${DB_NAME} end backup;   alter system archive log current; ! if [ $? -ne 0 ] then   echo "Error when updating database status to END backup"   exit 8 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Generating controlfile copies...." echo "####################################################################" rman<<-! connect target / run { allocate channel ch1 type DISK; copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_mount.ctl'; copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl'; } ! if [ $? -ne 0 ] then   echo "Error generating controlfile copies"   exit 9 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Resync RMAN catalog ....." echo "####################################################################" rman<<-! connect target / connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG} resync catalog; ! if [ $? -ne 0 ] then   echo "Error when resyncing RMAN catalog"   exit 10 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Splitting asm fra disks....." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_fra_tf split -noprompt if [ $? -ne 0 ] then   echo "Error when splitting asm fra disks"   exit 11 fi     echo "WARNING!: Calling bck_database_mirror.sh host ${NODE_BCK_SERVER}..." ssh ${NODO_BCK_SERVER} ${ADMIN_DIR_BCK}/bck_database_mirror.sh if [ $? 
-ne 0 ] then   echo "Error, when remote executing the backup "   exit 12 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Cleaning the archived redo logs already copied to tape ..." echo "####################################################################" rman<<-! connect target / connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG} run { resync catalog; delete noprompt archivelog all backed up 1 times to device type sbt; } ! if [ $? -ne 0 ] then   echo "Error when cleaning the archived redo logs"   exit 13 fi echo "${V_DATE} ####################################################" echo "Backup sucessfully executed!!" echo "####################################################################" exit 0   ------------------------------------------------------------------------------ ------------------------** BACKUP SERVER NODE ** ----------------------------- ------------------------------------------------------------------------------   #!/bin/ksh ### ###  Copyright (c) 1988, 2010, Oracle Corporation.  All Rights Reserved. ### ###    ###    NAME ###     bck_database_mirror.sh ### ###    DESCRIPTION ###      Backup @ backup server ### ###    RETURNS ### ###    NOTES ### ###    MODIFIED                                 (DD/MM/YY) ###      Oracle                    28/01/10     - Creacion         V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "Starting ASM instance ..."   echo "####################################################################"   ${V_ADMIN_DIR}/start_asm.sh -- This script is supposed to start the ASM instance in the backup server   if [ $? -ne 0 ]   then     echo "Error when tying to start ASM instance."     exit 1   fi       . ${V_ADMIN_DIR}/setenv_asm.sh -- This script is supposed to set the env. variables of the ASM instance   if [ $? -ne 0 ]   then     echo "Error when setting the ASM environment"     exit 1   fi       V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "The asm diskgroups/disks dettected are the following ..."   echo "####################################################################"     sqlplus /nolog <<-!     whenever sqlerror exit 1     connect / as sysdba     whenever sqlerror exit     SET LINES 200     COL PATH FORMAT A25     SELECT DISK.MOUNT_STATUS, DISK.PATH, DISK.NAME, DISK_GROUP.NAME, DISK_GROUP.TOTAL_MB FROM V\$ASM_DISK DISK, V\$ASM_DISKGROUP DISK_GROUP WHERE DISK.GROUP_NUMBER=DISK_GROUP.GROUP_NUMBER; !       V_ADMIN_DIR=`dirname $0`   . ${V_ADMIN_DIR}/setenv_instance.sh -- This script is supposed to set the env. variables of the database instance   if [ $? -ne 0 ]   then     echo "Error when setting the database instance environment"     exit 1   fi     V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "Starting ${DB_NAME} in MOUNT mode..."   echo "####################################################################"   ${V_ADMIN_DIR}/start_instance_mount.sh -- This script is supposed to do a startup mount   if [ $? -ne 0 ]   then     echo "Error starting  ${DB_NAME} in MOUNT mode"     exit 1   fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "Executing RMAN backup..."   echo "####################################################################"   rman<<-!   
connect target /   connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG}   run {   allocate channel ch1 type 'SBT_TAPE' parms'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opt)'; -- TDPO Media Library   crosscheck archivelog all;   backup tag BCK_CONTROLFILE_ST_${DB_NAME}   format 'ctl_%d_%s__%p_%t'   controlfilecopy '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl';   backup tag BCK_DATAFILE_ST_${DB_NAME} full   format 'db_%d_%s_%p_%t'database;   backup tag BCK_ARCHLOG_ST_${DB_NAME} format 'al_%d_%s_%p_%t' archivelog all;   release channel ch1;   } !   if [ $? -ne 0 ]   then     echo "Error executing the RMAN backup"     exit 1   fi     ${V_ADMIN_DIR}/stop_instance_immediate.sh -- This script is supposed to do a shutdown immediate of the database instance   ${ADMIN_DIR}/stop_asm_immediate.sh -- This script is supposed to do a shutdown immediate of the ASM instance   exit 0     fi   Hope it helps someone! --L

    Read the article

  • How to find entry level positions in a new city.

    - by sixtyfootersdude
    I am just graduating from a computer science degree (tomorrow is my last exam). I have been thinking about job hunting this semester, but I wanted to focus on my studies and part-time job, so I am a bit late to the job hunt. I want to find a job in a city where I have very little professional network (Ottawa, Ontario, Canada). How would you go about job hunting in a new city? I do not live there yet and I cannot easily go there, which makes finding places to apply a bit trickier. Normally I would ask people that I studied and worked with, but I have few contacts in Ottawa. Where would you look to find jobs? I have been using:
    Craigslist
    My university's job listings (but they are mostly focused on the east coast)
    This job listing page: http://www.careerbeacon.com/
    Anyone have any great job-finding resources?

    Read the article

  • ActAs and OnBehalfOf support in WIF

    I discussed a while ago how WIF supports a new WS-Trust 1.4 element, ActAs, and how that element can be used for authentication delegation.  The thing is that there is another feature in WS-Trust 1.4 that is also handy for this kind of scenario and that I did not mention in that last post: OnBehalfOf. Shiung Yong wrote an excellent summary of the difference between these two new features in this forum thread. He basically commented the following: an ActAs RST element indicates that the requestor...
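
    For anyone wanting to see where these elements surface in code, here is a minimal sketch of a token request that carries an ActAs element, written against the WS-Trust client classes that ship in .NET 4.5 (in the original WIF SDK the equivalent types live under Microsoft.IdentityModel); the STS and service addresses are hypothetical, and an OnBehalfOf request would simply populate the OnBehalfOf property instead:

        using System.IdentityModel.Protocols.WSTrust;
        using System.IdentityModel.Tokens;
        using System.ServiceModel;
        using System.ServiceModel.Security;

        public static class DelegationSample
        {
            // 'callerToken' is the token of the original caller that the middle tier
            // wants to act as when calling the back-end service.
            public static SecurityToken RequestActAsToken(SecurityToken callerToken)
            {
                var factory = new WSTrustChannelFactory(
                    new WS2007HttpBinding(),
                    new EndpointAddress("https://sts.example.com/trust/13/issuedtoken"))
                {
                    TrustVersion = TrustVersion.WSTrust13
                };

                var rst = new RequestSecurityToken(RequestTypes.Issue)
                {
                    AppliesTo = new EndpointReference("https://backend.example.com/service"),
                    // The delegated caller's token travels inside the ActAs element.
                    ActAs = new SecurityTokenElement(callerToken)
                };

                IWSTrustChannelContract channel = factory.CreateChannel();
                return channel.Issue(rst);
            }
        }

    Whether the STS honours the request, and which claims end up in the issued token, is entirely a matter of STS policy; the element only expresses the delegation intent.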

    Read the article

< Previous Page | 212 213 214 215 216 217 218 219 220 221 222 223  | Next Page >