Search Results

Search found 8664 results on 347 pages for 'lost with coding'.


  • Naming PowerPoint Components With A VSTO Add-In

    - by Tim Murphy
    Note: Cross posted from Coding The Document.

    Sometimes in order to work with Open XML we need a little help from other tools. In this post I am going to describe a fairly simple solution for marking up PowerPoint presentations so that they can be used as templates and processed using the Open XML SDK.

    Add-ins are tools for which it can be hard to find information. I am going to up the obscurity by adding a Ribbon button. For my example I am using Visual Studio 2008 and creating a PowerPoint 2007 Add-in project. To that add a Ribbon Visual Designer. The new ribbon by default will show up on the Add-ins tab. Add a button to the ribbon. Also add a WinForm to collect a new name for the selected object. Make sure to set the OK button’s DialogResult to OK.

    In the ribbon button click event add the following code.

        ObjectNameForm dialog = new ObjectNameForm();
        Selection selection = Globals.ThisAddIn.Application.ActiveWindow.Selection;

        dialog.objectName = selection.ShapeRange.Name;

        if (dialog.ShowDialog() == DialogResult.OK)
        {
            selection.ShapeRange.Name = dialog.objectName;
        }

    This code first reads the current Name attribute of the Shape object. If the user clicks OK on the dialog it saves the string value back to the same place.

    Once that is done you can identify the control through Open XML via the NonVisualDrawingProperties objects. The only problem is that this object is a child of several different classes, which means that there isn’t just one way to retrieve the value. Below are a couple of pieces of code to identify the container that you have named.

    The first example is for naming placeholders in a layout slide.

        foreach (var slideMasterPart in slideMasterParts)
        {
            var layoutParts = slideMasterPart.SlideLayoutParts;
            foreach (SlideLayoutPart slideLayoutPart in layoutParts)
            {
                foreach (assmPresentation.Shape shape in slideLayoutPart.SlideLayout.CommonSlideData.ShapeTree.Descendants<assmPresentation.Shape>())
                {
                    var slideMasterProperties =
                        from p in shape.Descendants<assmPresentation.NonVisualDrawingProperties>()
                        where p.Name == TokenText.Text
                        select p;

                    if (slideMasterProperties.Count() > 0)
                        tokenFound = true;
                }
            }
        }

    The second example allows you to find charts that you have named with the add-in.

        foreach (var slidePart in slideParts)
        {
            foreach (assmPresentation.Shape slideShape in slidePart.Slide.CommonSlideData.ShapeTree.Descendants<assmPresentation.Shape>())
            {
                var slideProperties =
                    from g in slidePart.Slide.Descendants<GraphicFrame>()
                    where g.NonVisualGraphicFrameProperties.NonVisualDrawingProperties.Name == TokenText.Text
                    select g;

                if (slideProperties.Count() > 0)
                {
                    tokenFound = true;
                }
            }
        }

    Together, Open XML and VSTO add-ins are a powerful combination for creating a process that maintains a template and generates documents from it.
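    To show how the named shapes come back out of ordinary slides, here is a minimal console sketch using the Open XML SDK (DocumentFormat.OpenXml). The file name Template.pptx and the token SalesChart are hypothetical, and the loop simply mirrors the snippets above for regular slide parts, so treat it as a sketch rather than the add-in's own code.

        using System;
        using System.Linq;
        using DocumentFormat.OpenXml.Packaging;
        using assmPresentation = DocumentFormat.OpenXml.Presentation;

        class FindNamedShapes
        {
            static void Main()
            {
                string path = "Template.pptx";   // hypothetical presentation
                string token = "SalesChart";     // hypothetical name set by the add-in

                using (PresentationDocument doc = PresentationDocument.Open(path, false))
                {
                    foreach (SlidePart slidePart in doc.PresentationPart.SlideParts)
                    {
                        // Shapes named by the add-in keep the name in their
                        // NonVisualDrawingProperties element.
                        var matches =
                            from p in slidePart.Slide.Descendants<assmPresentation.NonVisualDrawingProperties>()
                            where p.Name == token
                            select p;

                        if (matches.Any())
                            Console.WriteLine("Found '{0}' in {1}", token, slidePart.Uri);
                    }
                }
            }
        }

    Run against a presentation that has been marked up with the add-in, this simply lists which slide parts contain the named control.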

    Read the article

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
    Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams. I will not be going in depth with Scrum, but you can find out more about Scrum by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW’s Rules to Better Scrum using TFS, which have been developed during our own Scrum implementations.

    Acknowledgements
    Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010
    Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance including the Branching Guidance for TFS 2010
    Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance.
    Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs its own project, backlog, budget and Team.

    Scenario: A product is developed. RTM 1.0 is released and gets great sales. Extra features are demanded, but the new version will have double the price to recover costs; the work is approved by the guys with the budget and a few sprints later RTM 2.0 is released. Sales are very low due to the pricing strategy. There are lots of clients on RTM 1.0 calling out for patches.

    As I keep getting Reverse Integration and Forward Integration mixed up and Bill keeps slapping my wrists, I thought I should have a reminder:

    You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms: merge (forward integration) from parent to child (same direction as the branch), and merge (reverse integration) from child to parent (the reverse direction of the branch).
    - one of my many slaps on the wrist from Bill Heys.

    As I mentioned previously we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the “Main” or “Trunk” line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100%, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds don’t you?

    Sadly, this is a very common scenario and I have had people argue that branching merely adds complexity.

    Then again I have seen the other side of the universe ... branching structures from he... We should somehow convince everyone that there is a happy medium between no-branching and too-much-branching.
    - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch by doing a cost benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc.
- Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft The second biggest mistake developers make is branching anything other than the WHOLE “Main” line. If you branch parts of your code and not others it gets out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts deployment scripts and dependencies inside the “Main” folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well; although I would not recommend that unless you have a massive SQL cluster to house your source code. We tried the “add environment” back in South-Africa and while it was “phenomenal”, especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA). - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   You can read about SSW’s Rules to better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs: You know you are on the wrong track if you experience one or more of the following symptoms in your development environment: Merge Paranoia—avoiding merging at all cost, usually because of a fear of the consequences. Merge Mania—spending too much time merging software assets instead of developing them. Big Bang Merge—deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously. Never-Ending Merge—continuous merging activity because there is always more to merge. Wrong-Way Merge—merging a software asset version with an earlier version. Branch Mania—creating many branches for no apparent reason. Cascading Branches—branching but never merging back to the main line. Mysterious Branches—branching for no apparent reason. Temporary Branches—branching for changing reasons, so the branch becomes a permanent temporary workspace. Volatile Branches—branching with unstable software assets shared by other branches or merged into another branch. Note   Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state. Development Freeze—stopping all development activities while branching, merging, and building new base lines. Berlin Wall—using branches to divide the development team members, instead of dividing the work they are performing. -Branching and Merging Primer by Chris Birmele - Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia   In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium. - Willy-Peter Schaub on Merge Paranoia Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual. 
    Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly.
    - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    Figure: Main line, this is where your stable code lives and where any build has known entities, always passes and has a happy test that passes as well.

    Many development projects consist of a single “Main” line of source and artifacts. This is good; at least there is source control. There are however a couple of issues that need to be considered. What happens if:
    you and your team are working on a new set of features and the customer wants a change to his current version?
    you are working on two features and the customer decides to abandon one of them?
    you have two teams working on different feature sets and their changes start interfering with each other?
    I just use labels instead of branches?
    That's a lot of “what if’s”, but there is a simple way of preventing this. Branching…

    In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don’t see how labels work here.
    - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    Figure: Creating a single feature branch means you can isolate the development work on that branch.

    It’s standard practice for large projects with lots of developers to use Feature branching, and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers on other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The Main branch has its permissions set so that contributors to the project can only Branch and Merge with “Main”. This will prevent accidental check-ins or checkouts of the “Main” line that would contaminate the code. The developers continue to develop on sprint one until the completion of the sprint. Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2, as at the start of Sprint 1 you would have artifacts in version control and no need for isolation.

    Figure: Once the sprint is complete the Sprint 1 code can then be merged back into the Main line.

    There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developer’s work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product. The process of completing your sprint development:
    The Team completes their work according to their definition of done.
    Merge from “Main” into “Sprint1” (Forward Integration)
    Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches. (we will talk about release branches later)
    Merge from “Sprint1” into “Main” to commit your changes
    (Reverse Integration)
    Check in
    Delete the Sprint1 branch. Note: The Sprint 1 branch is no longer required as its useful life has been concluded.
    Check in
    Done

    But you are not yet done with the Sprint. The goal in Scrum is to have a “potentially shippable product” at the end of every Sprint, and we do not have that yet, we only have finished code.

    Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing.

    In 99% of all projects I have been involved in or watched, a “shippable product” only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80’s brain washing that we only ship when we reach the agreed quality and business feature bar.
    - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable, unchanging code. This is where you do what at SSW we call a “Test Please”. This is first an internal test of the product to make sure it meets the needs of the customer, and you generally use a resource external to your Team. Then a “Test Please” is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

    Figure: If you find a deviation from the expected result you fix it on the Release branch.

    If during your final testing or your “Test Please” you find there are issues or bugs, then you should fix them on the release branch. If you can’t fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product Owner. Make sure you leave plenty of time after your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches.

    Even once you have stabilized and released, you should not delete the release branch as you would with the Sprint branch. Its usefulness for servicing may extend well beyond the limited life you expect of it. Note: Don't get forced by the business into adding features to a Release branch; instead, that indicates the unspoken requirement is that they are asking for a product spin-off. In this case you can create a new Team Project, branch from the required Release branch to create a new Main branch for that product, and create a whole new backlog to work from.

    Figure: When the Team decides it is happy with the product you can create an RTM branch.

    Once you have fixed all the bugs you can, added any you can’t to the Product Backlog, and your Team is happy with the result, you can create a Release. This would consist of doing the final Build and Packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you “shipped”. This is really an Audit trail branch that is optional, but it is good practice. You could use a Label, but Labels are not Auditable, and if a dispute is raised by the customer you can produce a verifiable version of the source code for an independent party to check.
    Rare I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch, the RTM branch should never be deleted, or only deleted according to your company's legal policy, which in the UK is usually 7 years.

    Figure: If you have made any changes in the Release you will need to merge back up to Main in order to finalise the changes. Nothing is really ever done until it is in Main.

    The same rules apply when merging any fixes in the Release branch back into Main, and you should do a reverse merge before a forward merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer’s investment. Note: In order to really achieve protection for both you and your client you would add Automated Builds, Automated Tests, Automated Acceptance tests, Acceptance test tracking, Unit Tests, Load tests, Web tests and all the other good engineering practices that help produce reliable software.

    Figure: After the Sprint Planning meeting the process begins again.

    Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2, you can create your new Branch to develop in.

    How do we handle a bug (or bugs) in production that can’t wait? Although in Scrum the only work done should be on the backlog, there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can’t wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities:

    In reality Sprint 2 starts when sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of sprint 1? It would introduce FI’s from main to sprint 2, I guess. Your “Figure: Merging Sprint 2 back into Main.” covers what I tend to believe to be reality in most cases.
    - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    I agree, and if you have a single Scrum team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, package and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box, then resource needs to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost):
    Backlog: Add the bug to the backlog and fix it in the next Sprint
    Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly
    Make time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: The Team must agree that it can still meet the Sprint Goal.
    Cancel Sprint: Cancel the sprint and concentrate all resources on fixing the bug(s). Note: This can be very costly if the current sprint has already had a lot of work completed, as it will be lost.
    The choice will depend on the complexity and severity of the bug(s), and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3, as the bugs are severe but uncomplicated.

    Figure: Real world issue where a bug needs to be fixed in the current release.
    If the bug(s) is urgent enough then your only option is to fix it in place. You can edit the release branch to find and fix the bug, hopefully creating a test so it can’t happen again. Follow the prior process and conduct an internal and customer “Test Please” before releasing. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

    Figure: After you have fixed the bug you need to ship again. You then need to again create an RTM branch to hold the version of the code you released in escrow.

    Figure: Main is now out of sync with your Release.

    We now need to get these new changes back up into the Main branch. Do a reverse and then forward merge again to get the new code into Main. But what about the branch, are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main, and does Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?

    Figure: Getting the changes in Main into Sprint 2 is very important.

    The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2, maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue to get further out of sync with Main. Note: Avoid the “Big bang merge” at all costs.

    Figure: Merging Sprint 2 back into Main, the Reverse Integration, and R0 terminates.

    Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.

    Figure: The logical conclusion. This then allows the creation of the next release.

    By now you should be getting the big picture and hopefully you learned something useful from this post. I know I have enjoyed writing it, as I find these exploratory posts coupled with real world experience really help harden my understanding. Branching is a tool; it is not a silver bullet. Don’t overuse it, and avoid “Anti-Patterns” where possible. Although the diagram above looks complicated, I hope showing you how it is formed simplifies it as much as possible.

    Technorati Tags: Branching,Scrum,VS ALM,TFS 2010,VS2010

    Read the article

  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo

    Introduction to Toorcon 15 (2013)
    A Tale of One Software Bypass of MS Windows 8 Secure Boot
    Breaching SSL, One Byte at a Time
    Running at 99%: Surviving an Application DoS
    Security Response in the Age of Mass Customized Attacks
    x86 Rewriting: Defeating RoP and other Shinanighans
    Clowntown Express: interesting bugs and running a bug bounty program
    Active Fingerprinting of Encrypted VPNs
    Making Attacks Go Backwards
    Mask Your Checksums—The Gorry Details
    Adventures with weird machines thirty years after "Reflections on Trusting Trust"

    Introduction to Toorcon 15 (2013)
    Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended and that interested me enough to write about them. Be aware that I may have misrepresented the speaker's remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information.

    A Tale of One Software Bypass of MS Windows 8 Secure Boot
    Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak

    Yuri and Alex talked about UEFI and bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference.

    MS Windows 8 Secure Boot Overview
    UEFI (Unified Extensible Firmware Interface) is the interface between hardware and the OS. UEFI is processor and architecture independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi). Once replaced, it can modify the kernel. It is trivial to replace the bootloader. Today there are many legacy bootkits—UEFI replaces most of them.

    MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed updates). You would think Secure Boot would rely on ROM (such as is used for phones), but you can't do that for PCs—PCs use writable memory with signatures. The DXE core verifies the UEFI boot loader(s). The OS Loader (winload.efi, winresume.efi) verifies the OS kernel. A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx). X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (they can't be accessed once the OS starts—which uses Run-Time (RT)). The root cert uses RSA-2048 public keys and PKCS#7 format signatures.

    SecureBoot — enable/disable image signature checks
    SetupMode — update keys, self-signed keys, and secure boot variables
    CustomMode — allows updating keys

    Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation

    Attacking MS Windows 8 Secure Boot
    Secure Boot does NOT protect from physical access. It can be disabled from the console. Each BIOS vendor implements Secure Boot differently. There are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly.
    Allow only UEFI firmware signed updates
    protect UEFI firmware from direct modification in flash memory
    protect FW update components
    program the SPI controller securely
    protect secure boot policy settings in NVRAM
    protect the runtime API
    disable the compatibility support module, which allows unsigned legacy boot

    An attacker can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If the PK is not found, the FW enters setup mode with secure boot turned off. One can also exploit the TPM in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future.

    Recommendations:
    allow only signed updates
    protect UEFI fw in ROM
    protect the EFI variable store in ROM

    Breaching SSL, One Byte at a Time
    Angelo Prado and Yoel Gluck, Salesforce.com

    CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME requests with every possible character and measures the ciphertext length. It looks for the plaintext which compresses the most, finding the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy. Huffman coding replaces common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed and they issued a CVE.

    BREACH, breachattack.com
    BREACH exploits the SSL response body (Accept-Encoding, Content-Encoding). It takes advantage of the fact that HTTP-level response compression is not covered by the fix for SSL compression. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. It can use compression to guess contents one byte at a time. For example, "Supersecret SupersecreX" (a wrong guess) compresses by 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses by 11 bytes, so it can find each character by guessing every character. To start the guess, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include:
    lookahead (guess 2 or 3 characters at a time instead of 1 character), which is expensive
    rollback to the last known conflict
    check the compression ratio
    brute-force the first 3 "bootstrap" characters, if needed (expensive)
    Block ciphers hide the exact plain text length; the solution is to align the response in advance to the block size.

    Mitigations:
    length: use variable padding
    secrets: dynamic CSRF tokens per request
    secret: change over time
    separate secret to input-less servlets
    Future work: either understand DEFLATE/GZIP, or HTTPS extensions.

    Running at 99%: Surviving an Application DoS
    Ryan Huber, Risk I/O

    Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets. Or download large files.
    Apache is not well suited to handling a large number of connections, but one can put something in front of it. You can also use Apache alternatives, such as nginx.

    How to identify malicious hosts:
    short, sudden web requests
    user-agent is obvious (curl, python)
    same URL requested repeatedly
    no web page referer (not normal)
    hidden links. hide a link and see if a bot gets it
    restricted access if not your geo IP (unless the website is global)
    missing common headers in request
    regular timing
    first seen IP at beginning of attack
    count requests per host (usually a very large number)

    Use of a captcha can mitigate attacks, but you'll lose a lot of genuine users.

    Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer
    Bouncer is software written by Ryan in netflow. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it is not nice about it, no proper close connection). An aggregator collects requests and controls your web proxies. You need NTP on the front end web servers for clean data for use by Bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, consumer library, multitask, logging destroyed connections.

    Takeaways: DoS mitigation is easier with a complete picture. Bouncer is designed to make it easier to detect and defend against DoS—not a complete cure.

    Security Response in the Age of Mass Customized Attacks
    Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/

    Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts. Exploit kits are another example of mass customization. The kits support multiple browsers and plugins, and allow new modules. Exploit kits are cheap and customizable. Organized gangs use exploit kits. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits.

    Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OS, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now. So the drive with malware is towards mass customized exploits, to avoid detection.

    There's plenty of evidence that exploit development has Project Manager bureaucracy. They infer from the malware edicts to:
    support all versions of Reader
    support all versions of Windows
    support all versions of Flash
    support all browsers
    write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example)

    Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software. It also allows exploits of more obscure software/OS/browsers and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs.
    However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.

    x86 Rewriting: Defeating RoP and other Shinanighans
    Richard Wartell

    The attack vector we are addressing here is: first, some malware causes a buffer overflow. The malware has no program access, but it has input access and overflows the buffer to put code onto the stack. Later the stack became non-executable. The workaround malware used was to write a bogus return address to the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization) to randomize memory layout and make addresses non-deterministic. The workaround malware used was to jump to existing code segments in the program that can be used in bad ways.

    "RoP" is Return-oriented Programming attacks. RoP attacks use your own code and write return addresses on the stack to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR does not randomize address space, just "gadgets". IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine.

    Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code. The STIR disassembler is conservative in what to disassemble. Each basic block is moved to a random location in memory. Next, STIR writes new code sections with copies of "basic blocks" of code in randomized locations. The old code is copied and rewritten with jumps to the new code. The original code sections in the file are marked non-executable. STIR has better entropy than ASLR in location of code, which makes brute force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of "gadgets" (i.e., moved the address). Overhead is usually 5-10% on MS Windows, about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies. For example, don't create a *.{exe,msi,bat} file. Or don't connect to the network after reading from the disk.

    Clowntown Express: interesting bugs and running a bug bounty program
    Collin Greene, Facebook

    Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there's lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours and good submissions continue to come in. Bug bounties bring people in with different perspectives, and are paid only for success. A bug bounty is a better use of a fixed amount of time and money versus just code review or static code analysis. The bounty program started July 2011 and paid out $1.5 million to date.
    14% of the submissions have been high priority problems that needed to be fixed immediately. The best bugs come from a small % of submitters (as with everything else)—the top paid submitters are paid 6 figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired. Bug bounties also allow them to see bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

    Active Fingerprinting of Encrypted VPNs
    Anna Shubina, Dartmouth Institute for Security, Technology, and Society

    (I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as what router is used). Delayed packets are not informative about a network, especially if far away from the network. More exploration is needed into how TCP works in real life with respect to timing.

    Making Attacks Go Backwards
    FuzzyNop, Mandiant

    This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group—they have diverse skill levels. There's a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first:
    "hiding" malware in the temp, recycle, or root directories
    creation of unnamed scheduled tasks
    obvious names of files and syscalls ("ClearEventLog")
    uncleared event logs. Clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system

    Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" "[RightShift]") or time stamp printf strings. Look for these in files. Presence is not proof they are used. Absence is not proof they are not used. Java exploits: you can parse a jar file with idxparser.py and decompile the Java file. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware. Also, malware typically needs command and control.

    Application of Artificial Intelligence in Ad-Hoc Static Code Analysis
    John Ashaman, Security Innovation

    Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph and then analyze the graph. However, the problem is that making a call graph is really hard. For example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST) with the nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph with the goal of finding a path through the AST (a maze) from source to sink.
    The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts in the Security Innovation blog and NRefactory on GitHub.

    Mask Your Checksums—The Gorry Details
    Eric (XlogicX) Davisson

    Sometimes in emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data). Other parts of the packet may leak more data about the IP. TCP and IP checksums both refer to the same data, so you can get more bits of information out of using both checksums than just one. Also, one can usually determine the OS from the TTL field and ports in a packet header. If we get hundreds of possible results (16x for each masked nibble that is unknown), one can do other things to narrow the results, such as look at packet contents for domain or geo information. With hundreds of results, you can import them in CSV format into a spreadsheet, correlate with geo data and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached. He was able to find the exact IP address, given the geo and university of the sender. The point is, if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, to not create a false impression of security.

    Adventures with weird machines thirty years after "Reflections on Trusting Trust"
    Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present)

    "Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper. "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs" or in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author, can you trust the code? Hell no! There are too many factors—it's Babylonian in nature. Why not?
    Input is not well-defined/recognized: code's assumptions about "checked" input will be violated (bug/vulnerability). For example, HTML is recursive, but Regex checking is not recursive.
    Input is well-formed but so complex there's no telling what it does. For example, ELF file parsing is complex and has multiple ways of parsing.
    Input is seen differently by different pieces of the program or toolchain.

    Any input is a program. Input executes on input handlers (it drives state changes & transitions). Only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org) or it's a "virtual machine" for inputs to drive into pwn-age.

    ELF ABI (UNIX/Linux executable file format) case study. Problems can arise from these steps (without planting bugs): compiler, linker, loader, ld.so/rtld, relocator, DWARF (debugger info), exceptions. The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable).
    The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading—you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. Example: ELF relocation entries + dynamic symbols == a Turing Complete Machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Or DWARF exception handling data: .eh_frame + glibc == Turing Machine. X86 MMU (IDT, GDT, TSS): used address translation to create a Turing Machine. The page handler reads and writes memory (on page fault). It uses a page table, which can be used as Turing Machine byte code. There is an example on GitHub using this TM that flies a glider across the screen.

    Next Sergey talked about "Parser Differentials": that having one input format, but two parsers, will create confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF—several parsers in the OS tool chain, which are all different. You can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs. The second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC.

    Conclusions:
    trusting computers is not only about bugs! Bugs are part of the problem, but by no means all of it
    complex data formats mean bugs
    no "chain of trust" in Babylon! (that is, with parser differentials)
    we need to squeeze complexity out of data until data stops being "code equivalent"

    Further information: see langsec.org and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.
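    The BREACH/CRIME discussion above hinges on compressed length leaking a secret one byte at a time. The following small C# sketch is my own illustration, not the presenters' code; the secret and the guesses are made up. It shows the effect with DEFLATE: the guess that fully matches the secret typically compresses slightly better than a wrong guess, although on tiny inputs Huffman coding can mask the difference, which is exactly the noise the talk's lookahead and padding tricks deal with.

        using System;
        using System.IO;
        using System.IO.Compression;
        using System.Text;

        class CompressionOracleDemo
        {
            // Returns the DEFLATE-compressed size of the text.
            static int CompressedLength(string text)
            {
                byte[] data = Encoding.ASCII.GetBytes(text);
                using (var output = new MemoryStream())
                {
                    using (var deflate = new DeflateStream(output, CompressionMode.Compress, true))
                    {
                        deflate.Write(data, 0, data.Length);
                    }
                    return (int)output.Length;
                }
            }

            static void Main()
            {
                // Hypothetical response body: a secret plus attacker-reflected input.
                string secret = "secret=Supersecret;";

                foreach (string guess in new[] { "secret=SupersecreX", "secret=Supersecret" })
                {
                    int length = CompressedLength(secret + " " + guess);
                    Console.WriteLine("{0} -> {1} bytes", guess, length);
                }
            }
        }

    The attacker never sees the plaintext, only the lengths; the shorter output marks the better guess, which is why the mitigations above focus on padding lengths and rotating the secret.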

    Read the article

  • C# Java Objective-C need expert advices

    - by Kevino
    Which platform has the edge today, in 2012, with the rise of cloud computing, mobile development and the HTML5/JavaScript revolution: J2EE, the .NET Framework, or iOS Objective-C? I want to start learning one language among Java, C# and Objective-C and get back into programming after 14 years, and I don't know which to choose, so I need expert advice. I already know a little C++ and I remember the concepts, for example pointer arithmetic, classes, etc., so I tend to prefer learning C# and Objective-C. But I've been told by some experienced programmers that Windows 8 could flop and .NET could slowly be going away, since C++ and HTML5/JavaScript could be king in mobile. Is that true? And that C# is more advanced compared to Java, with LINQ/lambdas, but not truly as portable if we consider Android, etc. But Java has a lot going for it too: Scala, Clojure, Groovy, JRuby, Jython, etc. So I am lost. Please help me, and don't close this right away, I really need help and expert advice, thank you very much.

    ANSWER: ElYusubov: thanks for everything, please continue with the answers/explanations. I just did some native C++ in DOS mode in 1998, before the CLI and .NET. I don't know the STL, templates, Win32 or COM, but I remember a little of the concepts of memory management, OOP, etc. I already played around a little with C# 1.0 in 2002, but things changed a lot with LINQ and lambdas. I am here because I talked with some experienced programmers and authors of some of the best-selling programming books from Apress, Wrox and Deitel, and they told me a few things are likely to happen: .NET could be on its way out because the HTML5/JavaScript combo could kill XAML, and native C++ apps in mobile development will outperform them by a lot. Secondly, iOS and Android are getting so popular that mobile development is the future, so Objective-C is very hard to ignore; so why get tied down in Windows (.NET) long term, compared to Java (Android)? But again, Android is very fragmented. They also said Windows 8 RT will give you access to only a small part of the .NET framework. So that's what they think, and I don't know which direction to choose. I wanted to learn C# and .NET, but what if it dies off or Windows 8 flops? Windows Phone market share really can't compare to iOS, so I'll be stuck. That's why I worry: is Java safer long term, or more versatile if you like, because of the support for Android?

    Read the article

  • Emperors don’t come cheap

    - by RoyOsherove
    “Sorry” I replied in a polite email. “Maybe next year, when budgets allow for this”. It was addressed to the organizer of TechEd US, which was to be in New Orleans this year. Man, I would have loved to be in new Orleans this year, but, I guess these guys only understand one language – and I won’t be their puppy any more. You see, they wouldn’t pay for my business class flight to TechEd from Israel. Me– the great emperor of unit testing?! travelling coach for 12 hours? No thanks. I have better things to do! And this is after last year, they only invited me to have one talk throughout the conference. one talk. After the year before I was on the top ten speakers list of that conference?! No sir! They did give it a good try, though. They said they can pay up to 4,000$ per flight cost for me, and that they only found a flight at about 5460$. “Unacceptable” I told them when they asked if I would pay the difference. And that was that. Goodbye teched. As I closed up gmail, wondering if I should have told them that I found a similar flight at 4,300$, and came back to the living room, I told my wife, all full of myself “I just canceled teched”. “Oh good” she said. Not even looking at me as she tried to feed our one year old. “did you tell them you need to cancel because you already have another flight that month and your wife won’t let you travel more than once a month anymore?” “Yeah right” I said. Just what I need – for people to realize I’m totally whipped. I still need an ounce of dignity. “I told those bastards that if they want me they have to make an effort. People like me don’t come cheap, you know?” “You’re an idiot for not telling them the real reason.” She handed me the baby.  “What if they found a flight that matches their budget? How would you have gotten away from that engagement?” . She put on “Lost” on the media center and sat next to me. I did not reply.

    Read the article

  • What’s new in IIS8, Perf, Indexing Service-Week 49

    - by OWScott
    You can find this week’s video here. After some delays in the publishing process, week 49 is finally live. This week I'm taking Q&A from viewers, starting with what's new in IIS8, a question on enable32BitAppOnWin64, performance settings for ASP.NET, the ARR Helper, and Indexing Services.

    Starting this week, for the remaining four weeks of the 52 week series, I'll be taking questions and answers from the viewers. Already a number of questions have come in. This week we look at five topics.

    Pre-topic: We take a look at the new features in IIS8. Last week Internet Information Services (IIS) 8 Beta was released to the public. This week's video touches on the upcoming features in the next version of IIS. Here’s a link to the blog post which was mentioned in the video.

    Question 1: In a number of places (http://learn.iis.net/page.aspx/201/32-bit-mode-worker-processes/, http://channel9.msdn.com/Events/MIX/MIX08/T06), I've seen that enable32BitAppOnWin64 is recommended for performance reasons. I'm guessing it has to do with memory usage... but I never could find a detailed explanation of why this is recommended (even Microsoft books are vague on this topic - they just say - do it, but provide no reason why it should be done). Do you have any insight into this? (Predrag Tomasevic)

    Question 2: Do you have any recommendations on modifying aspnet.config and machine.config to deliver better performance when it comes to a "high number of concurrent connections"? I've implemented recommendations for modifying machine.config from this article (http://www.codeproject.com/KB/aspnet/10ASPNetPerformance.aspx - ASP.NET Process Configuration Optimization section)... but I would gladly listen to more recommendations if you have them. (Predrag Tomasevic)

    Question 3: Could you share more of your experience with the ARR Helper? I'm specifically interested in configuring the ARR Helper (for example - how to accept X-Forwarded-For only from certain IPs (proxies you trust)). (Predrag Tomasevic)

    Question 4: What is the replacement for Indexing Service to use in coding web search pages on a Windows 2008 R2 server? (Susan Williams) Here’s the link that was mentioned: http://technet.microsoft.com/en-us/library/ee692804.aspx

    This is now week 49 of a 52 week series for the web pro. You can view past and future weeks here: http://dotnetslackers.com/projects/LearnIIS7/ You can find this week’s video here.
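    Question 3 is about honouring X-Forwarded-For only when it comes from proxies you control. The ARR Helper takes care of this for you; purely as a sketch of the idea (this is not the ARR Helper's own code, the class name and proxy addresses are hypothetical), an ASP.NET module can apply the same rule itself:

        using System;
        using System.Web;

        public class ClientIpModule : IHttpModule
        {
            // Hypothetical addresses of the reverse proxies we trust to set X-Forwarded-For.
            private static readonly string[] TrustedProxies = { "10.0.0.5", "10.0.0.6" };

            public void Init(HttpApplication application)
            {
                application.BeginRequest += (sender, e) =>
                {
                    HttpRequest request = ((HttpApplication)sender).Request;

                    // Default to the socket address; only honour the header when the
                    // direct caller is one of our own proxies.
                    string clientIp = request.UserHostAddress;
                    string forwardedFor = request.Headers["X-Forwarded-For"];

                    if (forwardedFor != null && Array.IndexOf(TrustedProxies, clientIp) >= 0)
                    {
                        // The left-most entry in X-Forwarded-For is the original client.
                        clientIp = forwardedFor.Split(',')[0].Trim();
                    }

                    HttpContext.Current.Items["ClientIp"] = clientIp;
                };
            }

            public void Dispose() { }
        }

    The module would be registered under system.webServer/modules in web.config; with the ARR Helper installed you would not hand-roll this, since restoring the client address is essentially what it does for you.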

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g indexThis is the final article in the quick guide to Oracle IRM. If you've followed everything prior you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide as this next article is common to both. ContentsWhy this is the most important part... Understanding the classification and standard rights model Identifying business use cases Creating an effective IRM classification modelOne single classification across the entire businessA context for each and every possible granular use caseWhat makes a good context? Deciding on the use of roles in the context Reviewing the features and security for context roles Summary Why this is the most important part...Now the real work begins, installing and getting an IRM system running is as simple as following instructions. However to actually have an IRM technology easily protecting your most sensitive information without interfering with your users existing daily work flows and be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle have over 10 years of experience in helping customers and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports. No matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve.Securing the content at the earliest point possible and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results a more secure deployment. K.I.S.S. (Keep It Simple Stupid) Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience. Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them the value of the technology to both your business and them. Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments. 
No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in difficult to use and manage, and ultimately insecure solutions. The advice and information that follows was born with Martin and he's still delivering IRM consulting with customers and can be found at www.thinkers.co.uk. It is from Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions. Understanding the classification and standard rights model The goal of any successful IRM deployment is to balance the increase in security the technology brings without over complicating the way people use secured content and avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try and print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between.A group of related documents The people that use the documents The roles that these people perform The rights that these people need to perform their role The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification and a user needs only one right assigned to be able to access all documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts to which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails. Identifying business use cases Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, financial documents. For this and every subsequent use case you must understand how users create and work with documents, to who they are distributed and how the recipients should interact with them. 
A successful IRM deployment should start with one well identified use case (we go through some examples towards the end of this article). After letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and whether the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further.

Creating an effective IRM classification model

Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built-in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates.

The first task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents, and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity. The question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum...

One single classification across the entire business

Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in considering this.
- Document security classification decisions are simple. You only have one context to choose from!
- User provisioning is simple: just make sure everyone has a role in the only context in the business.
- Administration is very low: if you assign rights to groups from the business user repository you probably never have to touch IRM administration again.

There are however some obvious downsides to this model.
- All users have access to all IRM secured content. So potentially a sales person could access sensitive mergers and acquisitions documents, if they can get their hands on a copy that is.
- You cannot delegate control of different documents to different parts of the business; this may not satisfy your regulatory requirements for the separation and delegation of duties.
- Changing a user's role affects every single document ever secured.

Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value.
Once secured, IRM protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, and imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.

A context for each and every possible granular use case

Now let's think about the total opposite of a single context design. What if you created a context for each and every single defined business need and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity: restricted, confidential and top secret. Then let's say that each group and department needs to define access to information for both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge number of contexts. For example, let's just look at the resulting contexts for one engineering group.
- Q1FY2010 Restricted Internal - Engineering Group 1 - Research
- Q1FY2010 Restricted Internal - Engineering Group 1 - Design
- Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
- Q1FY2010 Restricted External - Engineering Group 1 - Research
- Q1FY2010 Restricted External - Engineering Group 1 - Design
- Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
- Q1FY2010 Confidential Internal - Engineering Group 1 - Research
- Q1FY2010 Confidential Internal - Engineering Group 1 - Design
- Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
- Q1FY2010 Confidential External - Engineering Group 1 - Research
- Q1FY2010 Confidential External - Engineering Group 1 - Design
- Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
- Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
- Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
- Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
- Q1FY2010 Top Secret External - Engineering Group 1 - Research
- Q1FY2010 Top Secret External - Engineering Group 1 - Design
- Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing

That is 18 contexts for just one engineering group in one quarter; multiply by the 6 groups and you have 108. And because the rights model is reviewed every financial quarter, each group is creating or reviewing another 18 contexts every 3 months, so after a year a single group alone has 72 contexts. What would be the advantages of such a complex classification model?
- You can satisfy very granular rights requirements, for example only an authorized engineering group 1 researcher can create a top secret report for internal access, and his role will be reviewed on a very frequent basis.
- Your business may have very complex rights requirements, and mapping this directly to IRM may be an obvious exercise.

The disadvantages of such a classification model are significant...
- Huge administrative overhead. Someone in the business must manage, review and administer each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year.
- From an end user's perspective, life will be very confusing. Imagine if a user has rights in just 6 of these contexts.
They may be able to print content from one but not another, or be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology.
- Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights.
- Hard to understand who can do what with what. Imagine being the VP of engineering and, as part of an internal security audit, you are asked the question, "What rights do researchers have to our top secret information?". In this complex model the answer is not simple; it would depend on many roles in many contexts.

Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable and too few contexts do not give fine enough granularity.

What makes a good context?

Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created or exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which helps you define a context. First ask these questions about a set of documents:
- What is the topic?
- Who are the legitimate contributors on this topic?
- Who are the authorized readership?

If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors; therefore it is better to seal to a broad context than not to seal at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try to tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business.

Deciding on the use of roles in the context

Once you have decided on that first initial use case and a context to create, let's look at the details you need to decide upon.
For each context, identify:

Administrative roles
- Business owner: the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are usually the person with the most at risk when sensitive information is lost.
- Point of contact: the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
- Context administrator: the person who will enact the decisions of the business owner. Sometimes the point of contact, sometimes a trusted IT person.

Document related roles
- Contributors: the people who create and edit documents in this context.
- Reviewers: the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See the later discussion on Published-work and Work-in-Progress.)
- Readers: the people who read documents from this context.

Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles obviously map directly to roles available in Oracle IRM.

Reviewing the features and security for context roles

At this point we have decided on a classification of information, and we understand what roles people in the business will play when administering this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First, think about why you are protecting the documents in the first place. It is to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent people from doing legitimate work. This is an important point: with IRM you can erect many barriers to prevent access to content, yet with too many restrictions, authorized users will often find ways to circumvent the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security later. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works, you can start to introduce more restrictions and complexity.

Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community.
These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.

The big print issue...

Printing is often an issue of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, and it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However, it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to give print rights.
- Oracle IRM sealed documents can contain watermarks that expose information about the user, the time and location of access and the classification of the document. This information would reside in the printed copy, making it easier to trace who printed it.
- Printed documents are slower to distribute in comparison to their digital counterparts, so time sensitive information in printed format may present a lower risk.
- Print activity is audited, therefore you can monitor and react to users abusing print rights.

Summary

In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group to be secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is introducing the complexity to deliver them at the right level. In another article I'm working on I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.

    Read the article

  • June Oracle Technology Network NEW Member Benefits - books books and more books!!!

    - by Cassandra Clark
As we mentioned a few posts ago, we are working to bring Oracle Technology Network members NEW benefits each month. Listed below are several discounts on technology books brought to you by Apress, Pearson, CRC Press and Packt Publishing. Happy reading!!!

Apress Offers - Get 50% off the eBook below using promo code ORACLEJUNEJCCF.
Pro ODP.NET for Oracle Database 11g | By Edmund T. Zehoo
This book is a comprehensive and easy-to-understand guide for using the Oracle Data Provider (ODP) version 11g on the .NET Framework. It also outlines the core GoF (Gang of Four) design patterns and coding techniques employed to build and deploy high-impact mission-critical applications using advanced Oracle database features through the ODP.NET provider.

Pearson Offers - Get 35% off all titles listed below using code OTNMEMBER.
SOA Design Patterns | Thomas Erl | ISBN: 0136135161
In cooperation with experts and practitioners throughout the SOA community, best-selling author Thomas Erl brings together the de facto catalog of design patterns for SOA and service-orientation.
Oracle Performance Survival Guide | Guy Harrison | ISBN: 9780137011957
The fast, complete, start-to-finish guide to optimizing Oracle performance.
Core JavaServer Faces, Third Edition | David Geary and Cay S. Horstmann | ISBN: 9780137012893
Provides everything you need to master the powerful and time-saving features of JSF 2.0.
Solaris Security Essentials | ISBN: 9780137012336
A superb guide to deploying and managing secure computer environments.
Effective C#, Second Edition | Bill Wagner | ISBN: 9780321658708
Respected .NET expert Bill Wagner identifies fifty ways you can leverage the full power of the C# 4.0 language to express your designs concisely and clearly.

CRC Press Offers - Use 813DA to get 20% off the title below.
Secure and Resilient Software Development
This book illustrates all phases of the secure software development life cycle. It details quality software development strategies that stress resilience requirements with precise, actionable, and ground-level inputs.

Packt Publishing Offers - Use the promo code "Java35June" to save 35% off each eBook mentioned below.
JSF 2.0 Cookbook | By Anghel Leonard | ISBN: 978-1-847199-52-2
Packed with fast, practical solutions and techniques for JavaServer Faces developers who want to push past the JSF basics.
JavaFX 1.2 Application Development Cookbook | By Vladimir Vivien | ISBN: 978-1-847198-94-5
Fast, practical solutions and techniques for building powerful, responsive Rich Internet Applications in JavaFX.

    Read the article

  • SQL SERVER – WRITELOG – Wait Type – Day 17 of 28

    - by pinaldave
WRITELOG is one of the most interesting wait types. So far we have seen a lot of different wait types, but this one is associated with the log file, which makes it interesting to deal with.

From Book On-Line: WRITELOG occurs while waiting for a log flush to complete. Common operations that cause log flushes are checkpoints and transaction commits.

WRITELOG Explanation: This wait type is usually seen in heavily transactional databases. When data is modified, it is written to both the log cache and the buffer cache. This wait type occurs when data in the log cache is being flushed to disk. During this time, the session has to wait due to WRITELOG. I have recently seen this wait type's persistence at my client's place, where one of the long-running transactions was stopped by the user, causing it to roll back. In the future, I will see if I can re-create this situation on my machine to validate the relation.

Reducing WRITELOG wait: There are several suggestions to reduce this wait stat:
- Move the transaction log to a separate disk from the mdf and other files.
- Avoid cursor-like coding methodology and frequent committing of statements.
- Find the most active file based on IO stall time, using the script written over here. You can also use fn_virtualfilestats to find IO-related issues using the script mentioned over here.
- Check the IO-related counters (PhysicalDisk:Avg. Disk Queue Length, PhysicalDisk:Disk Read Bytes/sec and PhysicalDisk:Disk Write Bytes/sec) for additional details. Read about them over here.
- There are two excellent resources by Paul Randal; I suggest you understand the subject from those videos. The links to the videos are here and here.

Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
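If you want to see how much WRITELOG is actually costing on an instance, the numbers can be read straight from the sys.dm_os_wait_stats DMV. The following is a minimal C# sketch of that query, not code from the post itself; the connection string is a hypothetical placeholder you would adjust for your own server, and it assumes the System.Data.SqlClient provider.

using System;
using System.Data.SqlClient;

class WriteLogWaitCheck
{
    static void Main()
    {
        // Hypothetical connection string - point it at your own instance.
        const string connectionString = "Server=localhost;Database=master;Integrated Security=true";

        const string query =
            "SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms " +
            "FROM sys.dm_os_wait_stats WHERE wait_type = 'WRITELOG';";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(query, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // wait_time_ms includes signal_wait_time_ms; a large gap between the two
                    // usually points at I/O latency on the transaction log drive.
                    Console.WriteLine("{0}: tasks={1}, wait_ms={2}, signal_ms={3}",
                        reader.GetString(0), reader.GetInt64(1),
                        reader.GetInt64(2), reader.GetInt64(3));
                }
            }
        }
    }
}

Running this before and after a workload gives a rough delta for the WRITELOG wait, which you can compare against the counters and scripts mentioned above.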

    Read the article

  • Measuring Social Media Efforts

    - by David Dorf
So you're on the bandwagon and you've created a Facebook page, you're tweeting every day, and maybe you've even got a YouTube channel. Now what? After you put any program in place, you need to measure, set new goals, then execute, and this is no different. But how does one measure social media efforts? First, I guess we need some goals. Typical ones might be to acquire customers, engage them, then convert them. So that translates to:
- Increase Facebook fans and Twitter followers
- Increase comments/postings and retweets
- Increase redemption of offers via Facebook and Twitter

Counting fans and followers is easy, and tracking the redemption of coupons isn't that hard either, but measuring engagement is a tough one. How do you know whether your fans are reading your posts, and whether your posts have any meaning to them? For Facebook, the fan page administrator has access to analytics called Facebook Insights. There you can check weekly metrics such as total fans, new fans, lost fans, demographics of fans, number of postings, number of clicks, etc. Not nearly as comprehensive as Google Analytics, but well on its way. For Twitter, getting information is a little tougher. Again, it's easy to track followers, and you can use tools like TweetMeme to encourage and track retweets. An interesting website called WeFollow tries to measure influence for certain topics. For example, the top three influencers for the topic "retail" are retailweek, retailwire, and retailerdaily. Other notables are #10 BestBuy, #11 GapOfficial, #12 JeffPR, and #17 OracleRetail. I assume influence is calculated based on number of followers, number of retweets, frequency of tweets, and perhaps depth of dialogs. If you want to get serious about monitoring and measuring social marketing efforts, you'd be wise to invest in a strong tool. Several are listed on this wiki, including big ones like Radian6, Nielsen, Omniture, and Buzzient. Buzzient might be particularly interesting because it's integrated with Oracle CRM OnDemand -- see the demo. As always, I'm interested in hearing how others approach goal setting and monitoring of social media efforts, so feel free to post comments.

    Read the article

  • TFS Auto Shelve - New Visual Studio 2010 / TFS 2010 Extension

    - by MikeParks
We've been working with the Visual Studio 2010 SDK and the TFS 2010 SDK a lot recently to create new Visual Studio Extensions. You can find these extensions in the Visual Studio Gallery. If you're a developer/programmer, you should check it out; they have some pretty cool tools out there. I'd be surprised if you told me you went there and couldn't find any tools that could help you. One of the new extensions Cory and I made is called TFS Auto Shelve. Check out the description and read about it below. If you're interested and you have VS 2010 w/TFS 2010, feel free to try it out and let us know what you think. You can download it here: http://visualstudiogallery.msdn.microsoft.com/en-us/080540cb-e35f-4651-b71c-86c73e4a633d

Here's a description and screenshots of what it does: Automatically shelves the latest version of all pending changes from local TFS workspaces to the TFS Server every "x" number of minutes when solutions are opened.

Purpose
- Created for Team Foundation Server 2010 and Visual Studio 2010
- This tool is mainly aimed at the Programmer/Developer audience so they can always have the latest copy of their pending changes backed up to the TFS Server while coding

Functionality
- Menu options become active and automatic shelving begins when a solution that is mapped to a TFS Workspace is opened in Visual Studio
- In Tools > TFS Auto Shelve (Running/NotRunning): Automatic shelving can be turned on/off
- In Tools > TFS Auto Shelve Now: Shelving of all code can be manually triggered
- Each TFS workspace has its own shelveset which is re-used to save the latest version of pending changes
- Shelvesets are named as Base Name + Workspace Name
- Shelveset comment contains item count
- If there are no pending changes, no shelvesets will be created/updated
- If a solution is opened that is not mapped to a TFS Workspace, menu options are disabled since shelving only works for mapped workspaces.

Configuration
- In Tools > Options > TFS Auto Shelve Options: Base Name is configurable
- In Tools > Options > TFS Auto Shelve Options: "x" number of minutes is configurable

Logging
- Custom Visual Studio Activity Logging is implemented. If you run into any errors, please start up Visual Studio with the /log switch, re-create the error, then close Visual Studio. You can browse to "%AppData%\Microsoft\VisualStudio\10.0\ActivityLog.XML" to view the log. Please feel free to inform us of any errors you see and we can work it out via email.

Other Helpful Information
- To view shelvesets, open Source Control Explorer, then click on File > Source Control > Unshelve Pending Changes
- Workspaces can be modified by opening the Source Control Explorer, clicking the Workspaces drop down, clicking Workspaces…, then clicking Add / Edit / Remove

Thanks! - Mike
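For readers curious what the shelving itself looks like, here is a rough C# sketch of the kind of TFS 2010 SDK calls involved. This is not the extension's actual source; the collection URL, local path and shelveset base name are hypothetical placeholders, and error handling is omitted.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

class AutoShelveSketch
{
    static void Main()
    {
        // Hypothetical values - substitute your own collection URL and a mapped local path.
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var versionControl = collection.GetService<VersionControlServer>();

        // Resolve the workspace that maps the given local folder.
        Workspace workspace = versionControl.GetWorkspace(@"C:\Source\MySolution");

        PendingChange[] pendingChanges = workspace.GetPendingChanges();
        if (pendingChanges.Length == 0)
        {
            return; // nothing to back up, so no shelveset is created or updated
        }

        // One shelveset per workspace, re-used on every run (Base Name + Workspace Name).
        var shelveset = new Shelveset(versionControl,
            "TFS Auto Shelve - " + workspace.Name, workspace.OwnerName);
        shelveset.Comment = pendingChanges.Length + " pending change(s)";

        // ShelvingOptions.Replace overwrites the previous snapshot instead of failing.
        workspace.Shelve(shelveset, pendingChanges, ShelvingOptions.Replace);
    }
}

Re-using one shelveset name per workspace is what keeps the server tidy: each timer tick simply replaces the previous snapshot rather than piling up new shelvesets.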

    Read the article

  • Access Your favorite RSS Feeds in Windows Media Center

    - by Mysticgeek
There are a lot of apps out there that help you organize and view your favorite RSS feeds. If you subscribe to a lot, sitting at a computer to view them all can be overwhelming. Today we take a look at accessing them from the couch with WMC.

Using Media Center RSS Feeds

To get RSS feeds to work with this plugin you need to subscribe to them through Internet Explorer. The first thing you'll need to do is activate Media Center RSS Reader (link below) on their site. Next, install the Media Center RSS Reader plugin (link below). Installation is easy, just select the defaults when going through the wizard. Now when you open Media Center you'll see the RSS icon in the main menu under Accessories. You can also find it in the Extras section. Enter the username and activation code you received when you activated the plugin earlier. After activation you'll see a list of the RSS feeds you currently subscribe to through Internet Explorer. Click on the site feed you want to read and you'll get a list of the different items available. Next you get an overview of the contents of the item you selected. From there you can show the page of the website containing that item. For any audio or video feeds you subscribe to, at the overview screen, click on Play to watch it. Then just sit back and watch your favorite video RSS feeds on WMC. The Media Center RSS Reader plugin will work with Vista and Windows 7. If you're looking for a way to check out your RSS feeds in WMC, this is a cool plugin for it. Download Media Center RSS Reader – You can activate it here as well.

    Read the article

  • Craftsmanship Tour: Day 3 &amp; 4 8th Light

    - by Liam McLennan
Thursday morning the Illinois public transport system came through for me again. I took the Metra train north from Union Station (which was seething with inbound commuters) to Prairie Crossing (Libertyville). At Prairie Crossing I met Paul and Justin from 8th Light and then Justin drove us to the office. The 8th Light office is in a small business park, in a semi-rural area, surrounded by ponds. Upstairs there are two spacious, open areas for developers. At one end of the floor is Doug Bradbury's walk-and-code station: a treadmill with a desk and computer so that a developer can get exercise at work. At the other end of the floor is a hammock. This irregular office furniture is indicative of the 8th Light philosophy: to pursue excellence without being limited by conventional wisdom. 8th Light have a wall covered in posters, each illustrating one person's software craftsmanship journey. The posters are a fascinating visualisation of the similarities and differences between each of our progressions. The first thing I did Thursday morning was to create my own poster and add it to the wall. Over two days at 8th Light I did some pairing with the 8th Lighters and we shared thoughts on software development. I am not accustomed to such a progressive and enlightened environment and I found the experience inspirational. At 8th Light TDD, clean code, pairing and kaizen are deeply ingrained in the culture. Friday, during lunch, 8th Light hosted a 'lunch and learn' event. Paul Pagel led us through a coding exercise using micro-pomodori. We worked in pairs, focusing on the pedagogy of pair programming and TDD. After lunch I recorded this interview with Paul Pagel and Justin Martin. We discussed 8th Light, craftsmanship, apprenticeships and the Limelight framework.
Interview with Paul Pagel and Justin Martin
My time at Didit, Obtiva and 8th Light has convinced me that I need to give up some of my independence and go back to working in a team. Craftsmen advance their skills by learning from each other, and I can't do that working at home by myself. The challenge is finding the right team, and becoming a part of it.

    Read the article

  • SOA, Cloud and Service Technology Symposium a super success!

    - by JuergenKress
SOA, Cloud and Service Technology Symposium in London was a huge success. More than 600 international attendees participated. Our SOA & BPM Community had a great presence there. At a joint booth with the Specialized partners link consulting, eProseed and Griffiths Waite, we presented the latest product updates and had many interesting discussions with customers and speakers. Special thanks to our HQ product management team Demed, Tim and Manas for coming over right before OOW. Also a very big thank you to Matthias Ziegler from Accenture for presenting our joint presentation individually! If you missed the conference, here are the key presentation links for your reference:
- Big Data and its impact on SOA | Demed L'Her [View PDF]
- Building 21st Century Service-Oriented Airports | Shyam Kumar [View PDF]
- Building Cloudy Services | Anne Thomas Manes [View PDF]
- Community Management: The Next Wave of SOA Governance and API Management | Tim E. Hall [View PDF]
- Elastic SOA in the Cloud | Steve Millidge [View PDF]
- Governing Shared Services: On-Premise & In the Cloud | Thomas Erl [View Video]
- Introducing the Cloud Computing Design Patterns Catalogue | Thomas Erl and Amin Naserpour [CloudPatterns.org]
- Lost in Translation - Common Mistakes Interpreting Patterns | Mark Simpson [View PDF]
- Moving Applications to the Cloud: Migration Options | Anne Thomas Manes [View PDF]
- New Paradigms for Application Architecture: From Applications to IT Services | Anne Thomas Manes [View PDF]
- NoSQL for Data Services, Data Virtualization & Big Data | Guido Schmutz [View PDF]
- A Pragmatic Approach to Cloud Computing | Andrea Morena [View PDF]
- The Successful Execution of the SOA and BPM Vision Using a Business Capability Framework: Concepts and Examples | Clemens Utschig and Manas Deb [View PDF]
- Service Modeling & BPM Business Value Patterns | Matthias Ziegler [View PDF] [Podcast]
- SOA Adoption in the Brazilian Ministry of Health - Case Study | Ricardo Puttini and Andre Toffanello [PDF Coming Soon]
- SOA Environments are a Big Data Problem | Markus Zirn, Splunk and Maciej Barcz [View PDF]
- SOA Governance at EDP: A Global Energy Company | Manuel Rosa [View PDF]

For all presentations please visit the SOA, Cloud and Service Technology Symposium Website.

SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Symposium,Thomas Erl,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • The embarrassingly obvious about SQL Server CE

    - by Edward Boyle
I have been working with SQL servers in one form or another for almost two decades now. But I am new to SQL Server Compact Edition. In the past weeks I have been working with SQL Server CE a lot. The SQL is not a problem, but the engine itself is very new to me. One of the issues I ran into was a simple SQL statement taking excessive amounts of time; by excessive, I mean over one second. I wrote a little code to time the method. Sometimes it took under one second, other times as long as three seconds. But it was a simple update statement! As embarrassing as it is, why it was slow eluded me. I posted my issue to MSDN and I got a reply from ErikEJ (MS MVP) who runs the blog "Everything SQL Server Compact". I know little to nothing about SQL Server Compact. This guy is completely obsessed with, and very well versed in, CE. If you spend any time in the MSDN forums, it seems that this guy single-handedly has the answer for every CE question that comes up. Anyway, he said: "Opening a connection to a SQL Server Compact database file is a costly operation, keep one connection open per thread (incl. your UI thread) in your app, the one on the UI thread should live for the duration of your app." It hit me: all databases have some connection overhead, and SQL Server CE is not a database engine running as a service, drinking Jolt Cola, waiting for someone to talk to him so he can spring into action and show off his quarter-mile sprint capabilities. Imagine if you had to start the SQL Server process every time you needed to make a database connection. Principally, that is what you are doing with SQL Server CE. For someone who has worked with enterprise-level SQL Servers a lot, I had to come to the mental image that my open connection to SQL Server CE is basically starting a service, my own private service, and by closing the connection, I am shutting down my little private service. After making the changes in my code, I lost any reservations I had about using CE. At present, my Data Access Layer class has a constructor; in that constructor I open my connection. I also have OpenConnection and CloseConnection methods, and I implemented IDisposable to clean up any connections in Dispose(). I am still finalizing how this assembly will function, but that's beside the point. All I'm trying to say is: "Opening a connection to a SQL Server Compact database file is a costly operation"
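To make that concrete, here is a minimal C# sketch of a data access layer that pays the connection cost once and holds the SqlCeConnection open until the object is disposed. It is an illustration only, not the author's actual class; the database path is a hypothetical placeholder.

using System;
using System.Data.SqlServerCe;

// Opens one SQL Server CE connection up front and keeps it alive for the life
// of the object, instead of paying the engine start-up cost on every statement.
public sealed class CeDataAccess : IDisposable
{
    private readonly SqlCeConnection _connection;

    public CeDataAccess(string databasePath)
    {
        // Hypothetical connection string; adjust the path (and password) for your .sdf file.
        _connection = new SqlCeConnection("Data Source=" + databasePath);
        _connection.Open();
    }

    public int Execute(string sql)
    {
        // Commands are cheap; it is the connection open/close that is expensive in CE.
        using (var command = new SqlCeCommand(sql, _connection))
        {
            return command.ExecuteNonQuery();
        }
    }

    public void Dispose()
    {
        // Closing the connection is effectively shutting down your private engine instance.
        _connection.Dispose();
    }
}

Keeping one such object per thread, including one that lives for the duration of the UI thread, matches the advice quoted above.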

    Read the article

  • Unable to transfer data to or from mounted hard drive

    - by user210335
So usually I'm good at sorting out issues, but this one has me at a loss! This issue has occurred since upgrading my Ubuntu, so this was working prior to the upgrade. I use mounted hard drives to manage my downloads, which are then copied over accordingly by a Python based app. I found it was having issues with permissions to create anything on these mounted hard drives. I'm able to play and access the content of these drives, so they're not faulty. My mount script looks like the following: rw,user,exec,auto. I really am stuck. Could anyone shed any light on how to fix this and allow me to access it? I've checked the properties and all groups should have read and write access, so I'm very confused! Thanks.

Edit: here's the output of my mount options:
/dev/sda2 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
none on /sys/firmware/efi/efivars type efivarfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
none on /sys/fs/pstore type pstore (rw)
/dev/sda1 on /boot/efi type vfat (rw)
/dev/sdc1 on /mnt/tv type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096)
/dev/sdb1 on /mnt/B88A30E88A30A4B2 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096)
systemd on /sys/fs/cgroup/systemd type cgroup (rw,noexec,nosuid,nodev,none,name=systemd)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,user=simon)
/dev/sdd1 on /media/simon/New Volume3 type fuseblk (rw,nosuid,nodev,allow_other,default_permissions,blksize=4096)

The main mount in question is:
/dev/sdc1 on /mnt/tv type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096)

Here's my dmesg output. I tried changing permissions in a terminal and I got an I/O error.
[52803.343417] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.343420] sd 2:0:0:0: [sdc] CDB: [52803.343422] Read(10): 28 00 00 60 9e 3f 00 00 08 00 [52803.343805] sd 2:0:0:0: [sdc] Unhandled error code [52803.343808] sd 2:0:0:0: [sdc] [52803.343810] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.343812] sd 2:0:0:0: [sdc] CDB: [52803.343813] Read(10): 28 00 00 67 64 67 00 00 08 00 [52803.344389] sd 2:0:0:0: [sdc] Unhandled error code [52803.344392] sd 2:0:0:0: [sdc] [52803.344394] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.344396] sd 2:0:0:0: [sdc] CDB: [52803.344397] Read(10): 28 00 09 bd e7 6f 00 00 08 00 [52803.344584] sd 2:0:0:0: [sdc] Unhandled error code [52803.344587] sd 2:0:0:0: [sdc] [52803.344589] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.344591] sd 2:0:0:0: [sdc] CDB: [52803.344592] Read(10): 28 00 07 3a cf b7 00 00 08 00 [52803.344776] sd 2:0:0:0: [sdc] Unhandled error code [52803.344779] sd 2:0:0:0: [sdc] [52803.344781] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.344783] sd 2:0:0:0: [sdc] CDB: [52803.344784] Read(10): 28 00 09 bd e7 97 00 00 08 00 [52803.344973] sd 2:0:0:0: [sdc] Unhandled error code [52803.344976] sd 2:0:0:0: [sdc] [52803.344978] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.344980] sd 2:0:0:0: [sdc] CDB: [52803.344981] Read(10): 28 00 08 dd 57 ef 00 00 08 00 [52803.346745] sd 2:0:0:0: [sdc] Unhandled error code [52803.346748] sd 2:0:0:0: [sdc] [52803.346750] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.346752] sd 2:0:0:0: [sdc] CDB: [52803.346754] Read(10): 28 00 07 1a c1 0f 00 00 08 00 [52803.349939] sd 2:0:0:0: [sdc] Unhandled error code [52803.349942] sd 2:0:0:0: [sdc] [52803.349944] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.349946] sd 2:0:0:0: [sdc] CDB: [52803.349948] Read(10): 28 00 00 67 64 9f 00 00 08 00 [52803.350147] sd 2:0:0:0: [sdc] Unhandled error code [52803.350150] sd 2:0:0:0: [sdc] [52803.350152] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.350154] sd 2:0:0:0: [sdc] CDB: [52803.350155] Read(10): 28 00 00 67 64 97 00 00 08 00 [52803.351302] sd 2:0:0:0: [sdc] Unhandled error code [52803.351305] sd 2:0:0:0: [sdc] [52803.351307] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.351309] sd 2:0:0:0: [sdc] CDB: [52803.351311] Read(10): 28 00 00 a4 1d cf 00 00 08 00 [52803.351894] sd 2:0:0:0: [sdc] Unhandled error code [52803.351897] sd 2:0:0:0: [sdc] [52803.351899] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.351901] sd 2:0:0:0: [sdc] CDB: [52803.351902] Read(10): 28 00 00 67 67 3f 00 00 08 00 [52803.353163] sd 2:0:0:0: [sdc] Unhandled error code [52803.353166] sd 2:0:0:0: [sdc] [52803.353168] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.353170] sd 2:0:0:0: [sdc] CDB: [52803.353172] Read(10): 28 00 00 67 64 ef 00 00 08 00 [52803.353917] sd 2:0:0:0: [sdc] Unhandled error code [52803.353920] sd 2:0:0:0: [sdc] [52803.353922] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.353924] sd 2:0:0:0: [sdc] CDB: [52803.353925] Read(10): 28 00 00 67 65 17 00 00 08 00 [52803.354484] sd 2:0:0:0: [sdc] Unhandled error code [52803.354487] sd 2:0:0:0: [sdc] [52803.354489] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.354491] sd 2:0:0:0: [sdc] CDB: [52803.354492] Read(10): 28 00 07 1a d8 9f 00 00 08 00 [52803.355005] sd 2:0:0:0: [sdc] Unhandled error code [52803.355010] sd 2:0:0:0: [sdc] [52803.355013] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.355017] 
sd 2:0:0:0: [sdc] CDB: [52803.355019] Read(10): 28 00 00 67 65 3f 00 00 08 00 [52803.355293] sd 2:0:0:0: [sdc] Unhandled error code [52803.355298] sd 2:0:0:0: [sdc] [52803.355301] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.355305] sd 2:0:0:0: [sdc] CDB: [52803.355308] Read(10): 28 00 00 a4 20 27 00 00 08 00 [52803.355575] sd 2:0:0:0: [sdc] Unhandled error code [52803.355580] sd 2:0:0:0: [sdc] [52803.355583] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.355587] sd 2:0:0:0: [sdc] CDB: [52803.355589] Read(10): 28 00 00 5d dc 67 00 00 08 00 [52803.356647] sd 2:0:0:0: [sdc] Unhandled error code [52803.356650] sd 2:0:0:0: [sdc] [52803.356652] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.356654] sd 2:0:0:0: [sdc] CDB: [52803.356655] Read(10): 28 00 07 1a dd 3f 00 00 08 00 [52803.357108] sd 2:0:0:0: [sdc] Unhandled error code [52803.357111] sd 2:0:0:0: [sdc] [52803.357113] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.357115] sd 2:0:0:0: [sdc] CDB: [52803.357116] Read(10): 28 00 00 67 65 97 00 00 08 00 [52803.357298] sd 2:0:0:0: [sdc] Unhandled error code [52803.357300] sd 2:0:0:0: [sdc] [52803.357302] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.357304] sd 2:0:0:0: [sdc] CDB: [52803.357306] Read(10): 28 00 07 1a 04 d7 00 00 08 00 [52803.360374] sd 2:0:0:0: [sdc] Unhandled error code [52803.360377] sd 2:0:0:0: [sdc] [52803.360379] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.360382] sd 2:0:0:0: [sdc] CDB: [52803.360383] Read(10): 28 00 00 67 65 b7 00 00 08 00 [52803.360581] sd 2:0:0:0: [sdc] Unhandled error code [52803.360584] sd 2:0:0:0: [sdc] [52803.360586] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.360588] sd 2:0:0:0: [sdc] CDB: [52803.360589] Read(10): 28 00 00 67 65 c7 00 00 08 00 [52803.361352] sd 2:0:0:0: [sdc] Unhandled error code [52803.361355] sd 2:0:0:0: [sdc] [52803.361357] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.361359] sd 2:0:0:0: [sdc] CDB: [52803.361360] Read(10): 28 00 09 bd e1 af 00 00 08 00 [52803.362096] sd 2:0:0:0: [sdc] Unhandled error code [52803.362099] sd 2:0:0:0: [sdc] [52803.362101] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.362103] sd 2:0:0:0: [sdc] CDB: [52803.362104] Read(10): 28 00 07 0a 64 e7 00 00 08 00 [52803.362555] sd 2:0:0:0: [sdc] Unhandled error code [52803.362558] sd 2:0:0:0: [sdc] [52803.362560] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.362562] sd 2:0:0:0: [sdc] CDB: [52803.362563] Read(10): 28 00 00 67 65 d7 00 00 08 00 [52803.362747] sd 2:0:0:0: [sdc] Unhandled error code [52803.362750] sd 2:0:0:0: [sdc] [52803.362752] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.362754] sd 2:0:0:0: [sdc] CDB: [52803.362755] Read(10): 28 00 01 4c 12 6f 00 00 08 00 [52803.362977] sd 2:0:0:0: [sdc] Unhandled error code [52803.362980] sd 2:0:0:0: [sdc] [52803.362982] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.362984] sd 2:0:0:0: [sdc] CDB: [52803.362985] Read(10): 28 00 03 85 43 7f 00 00 08 00 [52803.365197] sd 2:0:0:0: [sdc] Unhandled error code [52803.365200] sd 2:0:0:0: [sdc] [52803.365202] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.365204] sd 2:0:0:0: [sdc] CDB: [52803.365206] Read(10): 28 00 07 15 46 4f 00 00 08 00 [52803.365524] sd 2:0:0:0: [sdc] Unhandled error code [52803.365527] sd 2:0:0:0: [sdc] [52803.365528] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.365531] sd 2:0:0:0: [sdc] CDB: [52803.365532] Read(10): 28 00 07 11 78 8f 00 00 08 00 
[52803.369355] sd 2:0:0:0: [sdc] Unhandled error code [52803.369360] sd 2:0:0:0: [sdc] [52803.369362] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.369365] sd 2:0:0:0: [sdc] CDB: [52803.369366] Read(10): 28 00 09 bd e2 8f 00 00 08 00 [52803.370806] sd 2:0:0:0: [sdc] Unhandled error code [52803.370809] sd 2:0:0:0: [sdc] [52803.370811] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.370814] sd 2:0:0:0: [sdc] CDB: [52803.370815] Read(10): 28 00 07 1a c6 37 00 00 08 00 [52803.371630] sd 2:0:0:0: [sdc] Unhandled error code [52803.371634] sd 2:0:0:0: [sdc] [52803.371636] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.371639] sd 2:0:0:0: [sdc] CDB: [52803.371640] Read(10): 28 00 00 67 66 57 00 00 08 00 [52803.371863] sd 2:0:0:0: [sdc] Unhandled error code [52803.371867] sd 2:0:0:0: [sdc] [52803.371868] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.371871] sd 2:0:0:0: [sdc] CDB: [52803.371872] Read(10): 28 00 00 64 0b df 00 00 08 00 [52803.373467] sd 2:0:0:0: [sdc] Unhandled error code [52803.373470] sd 2:0:0:0: [sdc] [52803.373472] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.373474] sd 2:0:0:0: [sdc] CDB: [52803.373476] Read(10): 28 00 00 60 83 7f 00 00 08 00 [52803.373655] sd 2:0:0:0: [sdc] Unhandled error code [52803.373658] sd 2:0:0:0: [sdc] [52803.373660] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.373662] sd 2:0:0:0: [sdc] CDB: [52803.373663] Read(10): 28 00 00 60 83 7f 00 00 08 00 [52803.374063] sd 2:0:0:0: [sdc] Unhandled error code [52803.374066] sd 2:0:0:0: [sdc] [52803.374068] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.374070] sd 2:0:0:0: [sdc] CDB: [52803.374071] Read(10): 28 00 08 db d5 5f 00 00 08 00 [52803.374602] sd 2:0:0:0: [sdc] Unhandled error code [52803.374605] sd 2:0:0:0: [sdc] [52803.374607] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.374609] sd 2:0:0:0: [sdc] CDB: [52803.374611] Read(10): 28 00 07 1a bf a7 00 00 08 00 [52803.375259] sd 2:0:0:0: [sdc] Unhandled error code [52803.375264] sd 2:0:0:0: [sdc] [52803.375267] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.375270] sd 2:0:0:0: [sdc] CDB: [52803.375272] Read(10): 28 00 00 67 66 87 00 00 08 00 [52803.375515] sd 2:0:0:0: [sdc] Unhandled error code [52803.375520] sd 2:0:0:0: [sdc] [52803.375522] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.375526] sd 2:0:0:0: [sdc] CDB: [52803.375527] Read(10): 28 00 00 62 54 8f 00 00 08 00 [52803.378506] sd 2:0:0:0: [sdc] Unhandled error code [52803.378513] sd 2:0:0:0: [sdc] [52803.378516] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.378520] sd 2:0:0:0: [sdc] CDB: [52803.378522] Read(10): 28 00 00 67 66 bf 00 00 08 00 [52803.381048] sd 2:0:0:0: [sdc] Unhandled error code [52803.381054] sd 2:0:0:0: [sdc] [52803.381057] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.381061] sd 2:0:0:0: [sdc] CDB: [52803.381063] Read(10): 28 00 00 60 ae 77 00 00 08 00 [52803.381238] sd 2:0:0:0: [sdc] Unhandled error code [52803.381242] sd 2:0:0:0: [sdc] [52803.381245] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.381248] sd 2:0:0:0: [sdc] CDB: [52803.381250] Read(10): 28 00 00 60 ae 77 00 00 08 00 [52803.381382] sd 2:0:0:0: [sdc] Unhandled error code [52803.381386] sd 2:0:0:0: [sdc] [52803.381388] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.381392] sd 2:0:0:0: [sdc] CDB: [52803.381394] Read(10): 28 00 00 60 ae 77 00 00 08 00 [52803.381569] sd 2:0:0:0: [sdc] Unhandled error code [52803.381573] sd 2:0:0:0: 
[sdc] [52803.381575] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.381579] sd 2:0:0:0: [sdc] CDB: [52803.381581] Read(10): 28 00 00 60 ae 77 00 00 08 00 [52803.382295] sd 2:0:0:0: [sdc] Unhandled error code [52803.382300] sd 2:0:0:0: [sdc] [52803.382302] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.382306] sd 2:0:0:0: [sdc] CDB: [52803.382307] Read(10): 28 00 00 67 6a 87 00 00 08 00 [52803.382552] sd 2:0:0:0: [sdc] Unhandled error code [52803.382556] sd 2:0:0:0: [sdc] [52803.382558] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.382562] sd 2:0:0:0: [sdc] CDB: [52803.382564] Read(10): 28 00 00 67 6a af 00 00 08 00 [52803.382794] sd 2:0:0:0: [sdc] Unhandled error code [52803.382798] sd 2:0:0:0: [sdc] [52803.382801] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.382804] sd 2:0:0:0: [sdc] CDB: [52803.382806] Read(10): 28 00 00 67 6a c7 00 00 08 00 [52803.383269] sd 2:0:0:0: [sdc] Unhandled error code [52803.383274] sd 2:0:0:0: [sdc] [52803.383277] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.383280] sd 2:0:0:0: [sdc] CDB: [52803.383282] Read(10): 28 00 00 67 6a f7 00 00 08 00 [52803.383556] sd 2:0:0:0: [sdc] Unhandled error code [52803.383560] sd 2:0:0:0: [sdc] [52803.383563] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.383566] sd 2:0:0:0: [sdc] CDB: [52803.383568] Read(10): 28 00 00 67 6b 2f 00 00 08 00 [52803.386185] sd 2:0:0:0: [sdc] Unhandled error code [52803.386191] sd 2:0:0:0: [sdc] [52803.386194] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.386198] sd 2:0:0:0: [sdc] CDB: [52803.386200] Read(10): 28 00 01 4c 1b bf 00 00 08 00 [52803.386454] sd 2:0:0:0: [sdc] Unhandled error code [52803.386458] sd 2:0:0:0: [sdc] [52803.386461] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.386465] sd 2:0:0:0: [sdc] CDB: [52803.386467] Read(10): 28 00 07 1a b4 1f 00 00 08 00 [52803.388320] sd 2:0:0:0: [sdc] Unhandled error code [52803.388324] sd 2:0:0:0: [sdc] [52803.388326] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.388328] sd 2:0:0:0: [sdc] CDB: [52803.388329] Read(10): 28 00 09 bd de 17 00 00 08 00 [52803.388836] sd 2:0:0:0: [sdc] Unhandled error code [52803.388838] sd 2:0:0:0: [sdc] [52803.388839] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.388841] sd 2:0:0:0: [sdc] CDB: [52803.388842] Read(10): 28 00 07 57 9f ff 00 00 08 00 [52803.389124] sd 2:0:0:0: [sdc] Unhandled error code [52803.389126] sd 2:0:0:0: [sdc] [52803.389127] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.389129] sd 2:0:0:0: [sdc] CDB: [52803.389130] Read(10): 28 00 00 67 6b 8f 00 00 08 00 [52803.389244] sd 2:0:0:0: [sdc] Unhandled error code [52803.389246] sd 2:0:0:0: [sdc] [52803.389248] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.389249] sd 2:0:0:0: [sdc] CDB: [52803.389250] Read(10): 28 00 07 e9 ee ff 00 00 08 00 [52803.390386] sd 2:0:0:0: [sdc] Unhandled error code [52803.390389] sd 2:0:0:0: [sdc] [52803.390390] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.390392] sd 2:0:0:0: [sdc] CDB: [52803.390393] Read(10): 28 00 07 1a be 0f 00 00 08 00 [52803.390682] sd 2:0:0:0: [sdc] Unhandled error code [52803.390685] sd 2:0:0:0: [sdc] [52803.390686] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.390688] sd 2:0:0:0: [sdc] CDB: [52803.390689] Read(10): 28 00 00 67 6b e7 00 00 08 00 [52803.390804] sd 2:0:0:0: [sdc] Unhandled error code [52803.390806] sd 2:0:0:0: [sdc] [52803.390808] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK 
[52803.390809] sd 2:0:0:0: [sdc] CDB:
[52803.390810] Read(10): 28 00 07 ed 17 bf 00 00 08 00
[52803.391449] sd 2:0:0:0: [sdc] Unhandled error code
[52803.391451] sd 2:0:0:0: [sdc]
[52803.391452] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[52803.391454] sd 2:0:0:0: [sdc] CDB:
[52803.391455] Read(10): 28 00 09 bd e5 9f 00 00 08 00
[... the same three-line "Unhandled error code / Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK / CDB: Read(10) ..." sequence repeats for dozens of further read requests between timestamps 52803.391956 and 52803.404006 ...]
[52832.950225] sd 2:0:0:0: [sdc] Unhandled error code
[52832.950230] sd 2:0:0:0: [sdc]
[52832.950233] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[52832.950235] sd 2:0:0:0: [sdc] CDB:
[52832.950237] Write(10): 2a 00 00 60 bf 7f 00 00 08 00
[52832.950247] blk_update_request: 1077 callbacks suppressed
[52832.950250] end_request: I/O error, dev sdc, sector 6340479
[52832.950253] quiet_error: 1077 callbacks suppressed
[52832.950256] Buffer I/O error on device sdc1, logical block 792552
[52832.950258] lost page write due to I/O error on sdc1
[52832.950269] sd 2:0:0:0: [sdc] Unhandled error code
[52832.950272] sd 2:0:0:0: [sdc]
[52832.950273] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[52832.950276] sd 2:0:0:0: [sdc] CDB:
[52832.950277] Write(10): 2a 00 01 a5 f1 4f 00 00 08 00
[52832.950285] end_request: I/O error, dev sdc, sector 27652431
[52832.950287] Buffer I/O error on device sdc1, logical block 3456546
[52832.950289] lost page write due to I/O error on sdc1

    Read the article

  • Add a Sleep Timer to Windows 7 Media Center

    - by DigitalGeekery
    Do you make it a habit of falling asleep at night while watching Windows Media Center? Today we take a look at the MC7 Sleep Timer for Windows 7 Media Center, a simple little plugin that lets you schedule an automatic shutdown time in Media Center. Note: at this point MC7 Sleep Timer doesn't work with extenders. If you're using ClamAV or Panda, either may flag this plugin as a virus; we've tested it, and this is a false positive for those two antivirus apps.

    Installation and Usage

    Download and install MC7 Sleep Timer (see download below). After the installation is finished, you will find MC7 Sleep Timer in the Media Center Extras Library. Click the tile to open the timer and configure your settings. MC7 Sleep Timer opens in full-screen mode. You can choose to shut down the PC after 30 or 60 minutes, create a custom-length shutdown timer at any 5-minute interval, or select the exact time you want the PC to shut down. After setting your PC to shut down, you'll get an audio confirmation.

    To set a custom timer length, scroll to the "Custom timer" option and press right or left on your Media Center remote, or the right and left arrow keys, to choose how many minutes before shutdown. To schedule a shutdown for a certain time, browse to the "Shutdown at time" button and scroll right or left with the arrow keys on the keyboard or remote. When you've chosen your time, press "Enter" on the keyboard or "OK" on the remote. Clicking the "Monitor Off" button turns off only the monitor, and "Cancel Timer" cancels your shutdown request.

    Conclusion

    If you often find yourself falling asleep every night watching Media Center and then fumbling and stumbling in the middle of the night to shut down your computer, MC7 Sleep Timer might just be a perfect addition to your Media Center setup. Download MC7 Sleep Timer.

    Read the article

  • HPC Server Dynamic Job Scheduling: when jobs spawn jobs

    - by JoshReuben
    HPC Job Types

    HPC has three types of jobs (http://technet.microsoft.com/en-us/library/cc972750(v=ws.10).aspx):

    · Task Flow – a vanilla sequence
    · Parametric Sweep – concurrently run multiple instances of the same program, each with a different work-unit input
    · MPI – message passing between master and slave tasks

    But when you try to go outside the box – job tasks that spawn jobs, blocking the parent task – you run the risk of resource starvation, deadlocks, and recursive, non-converging or exponential blow-up. The solution is to write some performance-monitoring and job-scheduling code. You can do this in two ways: manually control scheduling – allocate and de-allocate resources, change job priorities, pause and resume tasks, restrict long-running tasks to specific compute clusters – or semi-automatically, by setting threshold parameters for scheduling.

    How – control job scheduling

    In order to manage the tasks and resources associated with a job, you will need to access the ISchedulerJob interface (http://msdn.microsoft.com/en-us/library/microsoft.hpc.scheduler.ischedulerjob_members(v=vs.85).aspx). This really allows you to control how a job is run – you can access and tweak the following: max/min resource values; whether job resources can grow or shrink, and whether jobs can be pre-empted; whether the job is exclusive per node; the creator process id and the job pool; timestamps of job creation and completion; job priority, hold time and run-time limit; re-queue count; job progress; max/min number of cores, nodes, sockets and RAM; the dynamic task list – tasks can be added or cancelled on the fly; and job counters.

    When – poll perf counters

    Tweaking the job scheduler should be done on the basis of resource utilization according to PerfMon counters – HPC exposes two perf objects: Compute Clusters and Compute Nodes (http://technet.microsoft.com/en-us/library/cc720058(v=ws.10).aspx). You can monitor running jobs against dynamic thresholds – use your own discretion: percentage processor time, number of running jobs, number of running tasks, total number of processors, number of processors in use, number of processors idle, number of serial tasks, number of parallel tasks. A minimal example of this polling loop follows at the end of this article.

    Design your algorithms correctly

    Finally, don't assume you have unlimited compute resources in your cluster – design your algorithms with the following factors in mind:

    · Branching factor (http://en.wikipedia.org/wiki/Branching_factor) – dynamically optimize the number of children per node
    · Cutoffs to prevent explosions (http://en.wikipedia.org/wiki/Limit_of_a_sequence) – not all functions converge after n attempts; you also need a "good enough" threshold to avoid diminishing returns
    · Heuristic shortcuts (http://en.wikipedia.org/wiki/Heuristic) – sometimes an exhaustive search is impractical and shortcuts are suitable
    · Pruning (http://en.wikipedia.org/wiki/Pruning_(algorithm)) – remove or de-prioritize unnecessary tree branches
    · Avoid local minima/maxima (http://en.wikipedia.org/wiki/Local_minima) – sometimes an algorithm can't converge because it gets stuck in a local saddle; try simulated annealing, hill climbing or genetic algorithms to get out of these ruts
    · Watch out for rounding errors (http://en.wikipedia.org/wiki/Round-off_error) – many iterations running in parallel can quickly amplify errors and blow up your algorithm; use an epsilon and avoid floating-point errors, truncations and approximations

    Happy coding!
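    As an illustration of the semi-automatic approach, here is a minimal C# sketch that polls a performance counter and de-prioritizes a running job through ISchedulerJob. Treat it as a starting point rather than a drop-in implementation: the head-node name, job id, threshold and polling interval are placeholders, and it assumes the Microsoft.Hpc.Scheduler assemblies from the HPC Pack SDK are referenced.

        // Sketch: lower a running job's priority when overall CPU pressure is high.
        // Placeholder values: "headnode", job id 42, the 90% threshold, the 5 s poll interval.
        using System;
        using System.Diagnostics;
        using System.Threading;
        using Microsoft.Hpc.Scheduler;
        using Microsoft.Hpc.Scheduler.Properties;

        class JobThrottler
        {
            static void Main()
            {
                IScheduler scheduler = new Scheduler();
                scheduler.Connect("headnode");              // cluster head node (placeholder)

                ISchedulerJob job = scheduler.OpenJob(42);  // job to manage (placeholder)

                // A generic processor counter is used here for simplicity; in practice you
                // would poll the cluster-level "Compute Clusters" / "Compute Nodes" objects.
                var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");

                while (true)
                {
                    job.Refresh();                          // pick up the latest job state
                    if (job.State != JobState.Running)
                        break;

                    if (cpu.NextValue() > 90.0f && job.Priority != JobPriority.BelowNormal)
                    {
                        job.Priority = JobPriority.BelowNormal; // de-prioritize under pressure
                        job.Commit();                           // push the change to the scheduler
                    }
                    Thread.Sleep(5000);                     // poll every 5 seconds
                }
            }
        }

    The same loop could instead grow or shrink the job's minimum and maximum core counts, or pause and requeue it, depending on which of the thresholds above is breached.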

    Read the article

  • E 2.0 Value Metaphors

    - by Tom Tonkin
    I guess I have been doing this too long. I can easily see the value of Enterprise 2.0 technology for an organization, but find it a challenge at times to convey that same value to others. I also know that I'm not the only one with that issue. Others who have the same passion also suffer from being, perhaps, too close to the market. I was having this discussion with a few colleagues when one of them suggested that metaphors might be a good vehicle for communicating the value to those who are less familiar. One such metaphor was discussed.

    Apparently, back in the early 1950s, there was a great Air Force aviator and military strategist by the name of John Boyd. Without going into a ton of detail (you can search for him on the internet), what made Colonel Boyd great was that he never lost a dogfight. As a matter of fact, they called him 'Forty-Second Boyd', since he claimed to be able to beat anyone in any type of aircraft in less than forty seconds, even if his aircraft was inferior to his opponent's.

    His approach was unique. He observed over time that there was a pattern in how aviators engaged in a dogfight. He called this method OODA. It describes how a person or, in our case, an organization, reacts to an event. OODA stands for Observation, Orientation, Decision and Action. Again, there is a lot more on the internet about this. A pilot would go through this loop several times during a dogfight, and Boyd would try to predict this loop and interrupt it by changing the landscape of the actual dogfight. This gave Boyd an advantage: he could predict what his opponent would do and then counterattack.

    Boyd went on to say that many companies have a similar reaction loop and that, by understanding that loop, organizations would be able to adjust better to market conditions, predict what the competition is doing and reposition themselves to gain competitive advantages.

    So, our metaphor would be that Enterprise 2.0 provides companies greater visibility of their business by connecting to employees, customers and partners in a collaborative fashion. This, in turn, helps them navigate through the tough times and provides lines of sight to more innovative ideas. Innovation is that last tool for companies to achieve competitive advantage (maybe a discussion for another post). Perhaps this is wordier than some other metaphors, but it does allow an interesting dialogue to start, and maybe even a framework to fulfill the promise of E 2.0.

    So, I'm sure there are many more metaphors for the value that E 2.0 brings to organizations. Do you have one to share? Please comment below, and thanks for stopping by.

    Read the article

  • Challenges in multi-player Android Game Server with RESTful Nature

    - by Kush
    I'm working on an Android game based on Contract Bridge, as part of my college summer internship project. The game will be multi-player, with four Android devices playing, so there is no bot or CPU player to be developed. When I received the project I realized that several students had already worked on it, but none of their work is reusable now (for a variety of reasons, such as undocumented code and design architecture, or a different platform implementation). I have experience working on several open-source projects, so I want to build this project so that the components I make are as reusable as possible.

    Since the game is multi-player and the entire game progress will be handled on the server, I'm currently working on the server's design. I wanted to make the game server reusable so that any client platform can use it; I was previously torn between sockets and REST for the server's design, but finally settled on REST APIs.

    Because I have to keep all players in sync while they make moves in the game, I plan to use a database on the server to track each player's progress, per game table (in Bridge, four players play at a single table, and the server will handle many such game tables). I don't know whether a database is an appropriate choice as the shared medium for tracking each game table's progress (let me know if there is a better option). When a game at a table finishes, that table's data in the server's database is discarded.

    Now the problem is that access to a REST service is an HTTP call, so as long as a client doesn't make any request, the server remains idle. Consider a situation where player A has played a card on his device and the device sends a request to apply this change on the server. I now need to let the other three devices know that the player has played a card, and also update the view on their devices. As far as I know, REST cannot provide a push-notification mechanism, since the connection to the server is not persistent.

    One solution I thought of was to have each device constantly poll the server for changes (say, every 56 ms) and, when changes are found, reflect them on the device. But this doesn't feel elegant, as every HTTP request is expensive. (I chose REST to make the gameplay experience robust, since a mobile device tends to get disconnected from the Internet, and with a socket-style persistent connection the entire game progress is subject to loss. Portability on the client end is also important.) Also, imagining a situation where 10 game tables are in progress and 40 players are playing, the server must be able to handle the flood of HTTP requests arriving from all the devices every 56 ms; I wonder whether that would effectively look like a DoS attack.

    So, given the situation, am I on the right track for the server design? I wanted to be sure before I proceed much further with the code.
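    As a sketch of the polling approach described here, the snippet below shows the basic "fetch state, compare with what we last saw, re-render on change" loop. It is written in C# purely for brevity (an Android client would do the same thing in Java with HttpURLConnection or a REST library), and the endpoint path, table id and poll interval are hypothetical.

        // Sketch of a client polling a hypothetical /table/{id}/state endpoint.
        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class TablePoller
        {
            static async Task Main()
            {
                var http = new HttpClient { BaseAddress = new Uri("http://gameserver.example/") };
                string previous = "";

                while (true)
                {
                    // Fetch the current state of table 7 (placeholder id).
                    string state = await http.GetStringAsync("table/7/state");

                    if (state != previous)
                    {
                        previous = state;
                        // ... deserialize and re-render the table on this device ...
                    }

                    // A 500 ms interval is far cheaper than 56 ms and still feels responsive
                    // for a turn-based card game; tune to taste.
                    await Task.Delay(500);
                }
            }
        }

    A common refinement of this pattern is long polling: the server holds each request open until something actually changes (or a timeout expires), which cuts the request volume dramatically while keeping the client purely HTTP-based and portable.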

    Read the article

  • JRockit R28/JRockit Mission Control 4.0 is out!

    - by Marcus Hirt
    The next major release of JRockit is finally out! Here are some highlights:

    · Includes the all-new JRockit Flight Recorder – supersedes the old JRockit Runtime Analyser. The new flight recorder is inspired by the "black box" in airplanes. It uses a highly efficient recording engine and thread-local buffers to capture data about the runtime and the application running in the JVM. It can be configured to always be on, so that whenever anything "interesting" happens, data can be dumped for some time back. Think of it as your own personal profiling time machine.
    · Automatic shortest-path calculation in Memleak – no longer any need for running around in circles when trying to find your way back to a thread root from an instance. Memleak can now show class loader related information and split graphs on a per class loader basis.
    · More easily configured JMX agent – the default port for both the RMI Registry and the RMI Server can be configured, and is by default the same, allowing easier configuration of firewalls.
    · Up to 64 GB (was 4 GB) compressed references.
    · Per-thread allocation profiling in the Management Console.
    · Native Memory Tracking – it is now possible to track native memory allocations with very high resolution. The information can be accessed either using JRCMD or via the dedicated Native Memory Tracking experimental plug-in for the Management Console (alas, only available for the upcoming 4.0.1 release).
    · JRockit can now produce heap dumps in HPROF format.
    · Cooperative suspension – JRockit is no longer using system signals for stopping threads, which could lead to hangs if signals were lost or blocked (for example, bad NFS shares). Now threads check periodically to see if they are suspended.
    · VPAT/Section 508 compliant JRMC – greatly improved keyboard navigation and screen reader support.

    See New and Noteworthy for more information. JRockit Mission Control 4.0.0 can be downloaded from here: http://www.oracle.com/technology/software/products/jrockit/index.html

    <shameless ad> There is even a book to go with JRMC 4.0.0/JRockit R28! http://www.packtpub.com/oracle-jrockit-the-definitive-guide/book/ </shameless ad>

    Read the article

  • Quickly Preview Songs in Windows Media Player 12 in Windows 7

    - by DigitalGeekery
    Do you ever wish you could quickly preview a song without having to play it? Today we look at a quick and easy way to do that in Windows Media Player 12.

    Open Windows Media Player in Library mode and select your Music library. Hover your cursor over the title of a song and a Preview pop-up window will appear after a few seconds. Click Preview in the pop-up window and the song will begin to play. As the preview begins to play, you will see a Skip link and a song timer. Click Skip to jump ahead 15 seconds in the song. When you are finished previewing the song, simply move your mouse away from the preview window to stop playback.

    Automatically Preview Songs

    You can adjust settings in Windows Media Player to automatically preview songs when you hover your cursor over the title. Select Tools from the menu and click Options. On the Options window, select the Library tab, click "Automatically preview songs on title hover", and click OK. Now, when you simply hover your cursor over a song title, the preview window appears and playback begins automatically. This feature works just as well in Details view as it does in Expanded Tile view.

    Would you like to stream your music to other computers on your network? Check out our article on how to stream media to other Windows 7 computers.

    Read the article

  • MVC Architecture

    Model-View-Controller (MVC) is an architectural design pattern first written about and implemented by Trygve Reenskaug in 1978. Trygve developed this pattern during the year he spent working with Xerox PARC on a Smalltalk application. According to Trygve, "The essential purpose of MVC is to bridge the gap between the human user's mental model and the digital model that exists in the computer. The ideal MVC solution supports the user illusion of seeing and manipulating the domain information directly. The structure is useful if the user needs to see the same model element simultaneously in different contexts and/or from different viewpoints." – Trygve Reenskaug on MVC

    The MVC pattern is composed of three core components: Model, View and Controller.

    The Model component encapsulates the core application data and functionality. The primary goal of the Model is to maintain its independence from the View and Controller components, which together form the user interface of the application. The View component retrieves data from the Model and displays it to the user; it represents the output of the application. Traditionally the View has read-only access to the Model because it should not change the Model's data. The Controller component receives and translates input into requests on the Model or View components, and is responsible for invoking the methods on the Model that can change its state.

    The primary benefit of using MVC as an architectural pattern, compared to other patterns, is flexibility. This flexibility comes from the distinct separation of concerns established by the three components. Because of that separation, interaction between the components is limited to interfaces rather than concrete classes, which allows each component to be hot-swapped when the needs of the application, or its availability requirements, change.

    MVC can easily be applied to C# and the .NET Framework. In fact, Microsoft provides an MVC project template that creates a new project with the standard MVC structure in place before any coding begins; the template also creates folders for the three key components along with default Model, View and Controller classes.

    Personally, I think MVC is a great pattern for web applications because they can be viewed from a myriad of devices: standard web browsers, text-only web browsers, mobile phones, smart phones, iPads and iPhones, just to get started. With accessibility needs potentially increasing, hot-swappable components are a perfect fit: the core functionality of the application can be retained while the View component is altered or swapped based on the calling device, so that the display is targeted to that specific device.
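    To make the separation concrete, here is a deliberately tiny C# sketch of the three roles and their one-way dependencies. It uses plain classes rather than the ASP.NET MVC template, and all of the names are illustrative only.

        // Model: owns the data and knows nothing about views or controllers.
        using System;
        using System.Collections.Generic;

        class TaskModel
        {
            private readonly List<string> _tasks = new List<string>();
            public IReadOnlyList<string> Tasks => _tasks;
            public void Add(string task) => _tasks.Add(task);
        }

        // View: reads from the model and renders it; it never mutates the model.
        class TaskView
        {
            public void Render(IReadOnlyList<string> tasks)
            {
                Console.WriteLine("-- Tasks --");
                foreach (var t in tasks) Console.WriteLine(" * " + t);
            }
        }

        // Controller: translates user input into operations on the model,
        // then asks the view to refresh.
        class TaskController
        {
            private readonly TaskModel _model;
            private readonly TaskView _view;
            public TaskController(TaskModel model, TaskView view) { _model = model; _view = view; }

            public void AddTask(string input)
            {
                _model.Add(input);            // change model state
                _view.Render(_model.Tasks);   // view re-reads (but never changes) the model
            }
        }

        class Program
        {
            static void Main()
            {
                var controller = new TaskController(new TaskModel(), new TaskView());
                controller.AddTask("Write the design document");
                controller.AddTask("Review the build");
            }
        }

    Swapping the console view for, say, a mobile-specific view only requires a new View implementation; the Model and Controller are untouched, which is exactly the flexibility described above.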

    Read the article

  • Oracle ADF Core Functionality Now Available for Free - Presenting Oracle ADF Essentials

    - by Shay Shmeltzer
    We are happy to announce the new Oracle ADF Essentials – a free-to-develop-and-deploy version of the core technologies at the base of Oracle ADF, Oracle's strategic development framework that was used, among other things, to build the new generation of the enterprise Oracle Fusion Applications. This release is aligned with the new Oracle JDeveloper 11.1.2.3 version that we released today.

    Oracle ADF Essentials enables developers to use the following for free:

    · Oracle ADF Faces Rich Client components – over 150 JSF 2.0 components that include extensive charting and data visualization components; support skinning, internationalization, accessibility and touch gestures; and provide advanced Ajax, windowing, drag-and-drop and other UI capabilities in a declarative way.
    · Oracle ADF Controller – an extension on top of the JSF controller providing complete process flow definition and enabling advanced reusability of flows inside a page's regions.
    · Oracle ADF Binding – a declarative way to bind various business services to JSF user interfaces, eliminating tedious managed-bean coding.
    · Oracle ADF Business Components – a declarative layer for building Java-based business services on top of relational databases.

    The main goal of Oracle ADF Essentials is to bring the benefits of Oracle ADF to a broader community of developers. If you are already using Oracle ADF, a key new capability for you is the ability to deploy your application on GlassFish. Several other interesting points:

    · We provide instructions for deployment of Oracle ADF Essentials on GlassFish and officially support this platform for Oracle ADF Essentials deployment.
    · Developers can choose to use the whole of Oracle ADF Essentials, or just pieces of the technology.
    · Visual development for Oracle ADF Essentials is provided in Oracle JDeveloper. Eclipse support via Oracle Enterprise Pack for Eclipse (OEPE) is also planned.

    Want to learn more? Here is a quick overview and development demo of Oracle ADF Essentials. For more, visit the Oracle ADF Essentials page on OTN.

    Read the article

  • Emoti-phrases

    - by Tony Davis
    Surely the next radical step in the development of user-interface design is for applications to react appropriately to the rising tide of bewilderment, frustration or antipathy of their users. When an application understands that rapport is lost, it should respond accordingly. When we, for example, become confused by an unforgiving interface, shouldn't there be a way of signalling our bewilderment and having the application respond appropriately? There is surprisingly little in the current interface standards that would help.

    If we're getting frustrated with an unresponsive application, perhaps we could let it know of our increasing irritation by means of an "I'm getting angry and exasperated" slider. Although, by 'responding appropriately', I don't include playing a "we are experiencing unusually heavy traffic: your application usage is important to us" message, accompanied by calming muzak. When confronted with a tide of wizards, 'are you sure?' messages, or page after page of tiresome and barely relevant options, how one yearns for a handy 'JFDI' (Just Flaming Do It) button. One click, and the application miraculously desists with its annoying questions and just gets on with the job, using the defaults or whatever we selected last time.

    Much more satisfying, and more direct for most developers and DBAs, however, would be the facility to communicate with the application via a twitter-style input field, or via parameters to command-line applications ("I don't want a wide-ranging debate with you; just open the bl**dy PDF!" or "Don't forget which of us has the close button"). Although, to avoid too much cultural dependence, perhaps we should take the lead from emoticons and use a set of standardized emoti-phrases such as 'sez you', 'huh?', 'Pshaw!', or 'meh', which could be used to vent a range of feelings in any given application, whether it be SQL Server stubbornly refusing to give us the result we are expecting, or an online survey getting too personal. Or a 'Lingua Glaswegia' perhaps:

    'Atsabelter' ("very good")
    'Atspish' ("must try harder")
    'AnThenYerArsFellAff' ("I don't quite trust these results")
    'BileYerHeid' or 'ShutYerGub' ("please stop these inane questions")

    There would, of course, have to be an ANSI standards body to define the phrases that were acceptable. Presumably there would be a tussle amongst the different international standards organizations, while Oracle, Microsoft and Apple would each release non-standard extensions. Time, then, surely, to plant emoti-phrases on the lot of them and develop a user-driven consensus. Send us your suggestions! The best one will win an iPod nano! Cheers! Tony.

    Read the article

< Previous Page | 232 233 234 235 236 237 238 239 240 241 242 243  | Next Page >