Search Results

Search found 7085 results on 284 pages for 'termine updates'.


  • SQL SERVER – How to Roll Back SQL Server Database Changes

    - by Pinal Dave
    In a perfect scenario, no unexpected and unplanned changes occur. There are no unpleasant surprises, no inadvertent changes. However, even with all precautions and testing, there is sometimes a need to revert a structure or data change. One method is to use an older database backup that has the records or database object structure you want to revert to. For this method, you must have an adequate full database backup, and a tool that helps with comparison and synchronization is preferred. In this article, we will focus on another method: rolling back the changes. This can be done by using:

      - an option in SQL Server Management Studio,
      - T-SQL, or
      - ApexSQL Log

    The first two solutions have been described in this article. Their disadvantage is that you have to know exactly when the change you want to revert happened, and that all transactions executed on the database in that time range are rolled back: the ones you want to undo and the ones you don't.

    How do you easily roll back SQL Server database changes using ApexSQL Log? The biggest challenge is to roll back just specific changes, not all changes that happened in a specific time range. While the SQL Server Management Studio option and T-SQL read and roll forward all transactions in the transaction log files, I will show you a solution that finds and scripts only the specific changes that match your criteria. Therefore, you don't need to worry about other database changes that you don't want to roll back.

    ApexSQL Log is a SQL Server disaster recovery tool that reads transaction logs and provides a wide range of filters that enable you to easily roll back only specific data changes. First, connect to the online database where you want to roll back the changes. Once you select the database, ApexSQL Log will show its recovery model. Note that changes can be rolled back even for a database in the Simple recovery model, when no database and transaction log backups are available. However, ApexSQL Log achieves the best results when the database is in the Full recovery model and you have a chain of subsequent transaction log backups reaching back to the moment when the change occurred. In this example, we will use only the online transaction log.

    In the next step, use filters to read only the transactions that happened in a specific time range. To remove noise, it's recommended to use as many filters as possible. Besides filtering by the time of the transaction, ApexSQL Log can filter by:

      - operation type,
      - table name,
      - transaction state (committed, aborted, running, and unknown),
      - name of the user who committed the change,
      - specific field values,
      - server process IDs, and
      - transaction description.

    You can select only the tables affected by the changes you want to roll back. However, if you're not certain which tables were affected, you can leave them all selected and, once the results are shown in the main grid, analyze them to find the ones you want to roll back.

    When you set the filters, you can select how to present the results. ApexSQL Log can automatically create undo or redo scripts, export the transactions into an XML, HTML, CSV, SQL, or SQL Bulk file, and create a batch file that you can use for unattended transaction log reading. In this example, I will open the results in the grid, as I want to analyze them before rolling back the transactions. The results contain information about the transaction, as well as who made it and when.

    For UPDATEs, ApexSQL Log shows both old and new values, so you can easily see what has happened. To create an UNDO script that rolls back the changes, select the transactions you want to roll back and click Create undo script in the menu. For the DELETE statement selected in the screenshot above, the undo script is:

        INSERT INTO [Sales].[PersonCreditCard] ([BusinessEntityID], [CreditCardID], [ModifiedDate])
        VALUES (297, 8010, '20050901 00:00:00.000')

    When it comes to rolling back database changes, ApexSQL Log has a big advantage: it rolls back only specific transactions, while leaving all other transactions that occurred in the same time range intact. That makes ApexSQL Log a good solution for rolling back inadvertent data and schema changes on your SQL Server databases.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
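    For comparison, the T-SQL method mentioned above boils down to a point-in-time restore. The following is only a rough sketch with hypothetical backup paths and logical file names; note that it recovers an entire copy of the database to a moment just before the change, so every later transaction is lost, which is exactly the drawback the log-reading approach avoids:

        -- Restore a copy of the database from the full backup, without recovering yet
        RESTORE DATABASE AdventureWorks_Copy
        FROM DISK = N'C:\Backup\AdventureWorks_Full.bak'            -- hypothetical path
        WITH MOVE 'AdventureWorks_Data' TO N'C:\Data\AW_Copy.mdf',  -- hypothetical file names
             MOVE 'AdventureWorks_Log'  TO N'C:\Data\AW_Copy.ldf',
             NORECOVERY;

        -- Roll the log forward, stopping just before the unwanted change
        RESTORE LOG AdventureWorks_Copy
        FROM DISK = N'C:\Backup\AdventureWorks_Log.trn'             -- hypothetical path
        WITH STOPAT = '2005-09-01 00:00:00', RECOVERY;

    The affected rows or objects can then be copied back manually from the restored copy into the production database.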

    Read the article

  • New Rapid Install StartCD 12.2.0.48 for EBS 12.2 Now Available

    - by Max Arderius
    A new Rapid Install startCD (Patch 18086193) for Oracle E-Business Suite Release 12.2 is now available. We recommend that all EBS customers installing or upgrading to EBS 12.2 use this latest update. The startCD updates are distributed to customers via a My Oracle Support patch, which can be uncompressed on top of any previous 12.2 startCD under the main staging area. This patch replaces any previous startCDs.

    What's New in This Update?

    This new startCD version 12.2.0.48 includes important fixes for multi-node installs, RAC, pre-install checks, platform-specific issues, and upgrade scenario failures:

      18703814 - QREP:122:RI:ISSUE WITH CHECKOS.CMD
      18689527 - QREP:122:RI:ISSUE WITH FNDCORE.DLL SHIPPED AS PART OF R122 PACKAGE
      18548485 - QREP1224:4:JAR SIGNER ISSUE DUE TO THE RI UPGRADE AUTOCONFIG CHANGES
      18535812 - QREP:1220.48_4: 12.2.0 UPGRADE FILE SYSTEM LAY OUT IS AFFECTING THE DB TABLES
      18507545 - WIN: UNABLE TO LAY DOWN FS PRIOR TO 12.2 UPGRADE WITHOUT AFFECTING RUNNING DB
      18476041 - UNABLE TO LAY DOWN FS PRIOR TO 12.2 UPGRADE WITHOUT AFFECTING PRODUCTION DB
      18459887 - R12.2 INSTALLATION FAILURE - OPMNCTL: NOT FOUND
      18436053 - START CD 48_4 - ISSUES WITH TEMP SPACE CHECK
      18424747 - QREP1224.3:ADD SERVER BROWSE BUTTON NOT WORKING
      18421132 - *RW-50010: ERROR: - SCRIPT HAS RETURNED AN ERROR: 1
      18403700 - QREP122.48:RI:UPGRADE RI PRECHECK HUNG IN SPLIT TIER APPS NODE ( NO SILENT )
      18383075 - ADD VERBOSE OPTION TO RAC VALIDATION
      18363584 - UPTAKE INSTALL SCRIPTS FOR XB48_4
      18336093 - QREP:122:RI:PATCH FS ADMIN SERVICE RUNNING AFTER RI UPGRADE CONFIGURE MODE
      18320278 - QREP:1224.3:PLATFORM SPECIFIC SYNTAX ERRORS WITH DATE COMMAND IN DB CHECKER
      18314643 - DISABLE SID=DB_NAME FOR RI UPGRADE FLOW IN RAC
      18298977 - RI: EXCEPTION WHILE CLICKING RAC NODES BUTTON ON A NON-RAC SERVER
      18286816 - QREP122:STARTCD48_3:TRAVERSING FROM VISION PASSW SCREEN TO PROD
      18286371 - QREP122:STARTCD48_3:AMBIGUOUS MESSAGE DURING STAGE AREA CHECK ON HP
      18275403 - QREP122:48:RI UPGRADE WITH EOH POST CHECKS HANGS IN SPLIT TIER DB NODE
      18270631 - QREP122.48:MULTI-NODE RI USING NON-DEFAULT PASSWORDS NOT WORKING
      18266046 - QREP122:48:RI NOT ALLOWING TO IGNORE THE RAC PRE-CHECK FAILURE
      18242201 - UPTAKE TXK INSTALL SCRIPTS AND PLATFORMS.ZIP INTO STARTCD XB48_3
      18236428 - QREP122.47:RI UPGRADE EXISTING OH FOR NON-DEFAULT APPS PASSWORD NOT WORKING
      18220640 - INCONSISTENT DATABASE PORTS DURING EBS 12.2 INSTALLATION FOR STARTCD 12.2.0.47
      18138796 - QREP122:47:RI 10.1.2 TECHSTACK NOT WORKING IF WE RUN RI FROM NEW STARTCD LOC
      18138396 - TST1220: CONTROL FILE NAMING IN RAPID INSTALL SEEMS TO HAVE ISSUES
      18124144 - IMPROVE HANDLING ERRORS FOUND IN CLUVFY LOG DURING PREINSTALL CHECKS
      18111361 - VALIDATE ASM DB DATA FILES PATH AS +<DATA GROUP>/<PATH>
      18102504 - QREP1220.47_5: UNZIP PANEL DOES NOT CREATE THE CORRECT STAGE
      18083342 - 12.2 UPGRADE JAVA.NET.BINDEXCEPTION: CANNOT ASSIGN REQUESTED ADDRESS
      18082140 - QREP122:47:RAC DB VALIDATION IS FAILS WITH EXIT STATUS IS 6
      18062350 - 12.2.3 UPG: 12.2.0 INSTALLATION LOGS
      18050840 - RI: UPGRADE WITH EXISTING RAC OH:SECONDARY DB NODE NAME IS BLANK
      18049813 - RAC LOV DEFAULTS NOT SAVED UNLESS "SELECT" IS CLICKED
      18003592 - TST1220:ADDITIONAL FREE SPACE CHECK FOR RI NEEDS TO BE CHECKED
      17981471 - REMOVE ASM SPACE CHECK FROM RACVALIDATIONS.SH
      17942179 - R12.2 INSTALL FAILING AT ADRUN11G.SH WITH ERRORS RW-50004 & RW-50010
      17893583 - QREP1220.47:VALIDATION OF O.S IN RAPIDWIZ IN THE DB NODE CONFIGURATION SCREEN
      17886258 - CLEANUP FND_NODES DURING UPGRADE FLOW
      17858010 - RI POST INSTALL CHECKS (SSH VERIFICATION) STEP IS FAILING
      17799807 - GEOHR: 12.2.0 - ERRORS IN RAPIDWIZ AND ADCONFIG LOGS
      17786162 - QREP1223.4:RI:SERVICE_NAMES IS PRINTED AS SERVICE_NAME IN RI SCREEN
      17782455 - RI: CONFIRM DEFAULT APPS PASSWORD IN SILENT MODE KICKOFF
      17778130 - RI:ADMIN SERVER TO BE UP ON PRIMARY MID-TIER IN MULTI-NODE UPGRADE FS CREATION
      17773989 - UN-SUPPORTED PLATFORM SHOWS 32 BIT AS HARD-CODED
      17772655 - RELEVANT MESSAGE DURING THE RAPDIWIZ -TECHSTACK
      17759279 - VERIFICATION PANEL DOES NOT EXPAND TECHNOLOGY STACK
      17759183 - BUILDSTAGE SCRIPT MENU NEEDS TO BE ADJUSTED
      17737186 - DATABASE PRE-REQ CHECK INCORRECTLY REPORTS SUCCESS ON AIX
      17708082 - 12.2 INSTALLATION - OS PRE-REQUISITES CHECK
      17701676 - TST122: GENERATE WRONG S_DBSID FOR PATCH FILE SYSTEM AT PHASE PREPARE
      17630972 - /TMP PRE-REQ INSTALLATION CHECK
      17617245 - 12.2 VISION INSTALL FAILS ON AIX
      17603342 - OMCS: DB STAGING COMPLAINS WHILE MOVING IT TO FINAL LOCATION
      17591171 - OMCS: DB STAGING FAILS WITH FRESH INSTALL R12.2
      17588765 - CHECKER VERSION AND PLUGIN VERSION
      17561747 - BUILDSTAGE.SH FAILS WITH ERROR WHEN STAGE HOSTED ON 32BIT LINUX
      17539198 - RAPID INSTALL NEEDS TO IGNORE NON-REQUIRED STAGE ELEMENTS
      17272808 - APPS USERS THAT HAVE DEFAULT PASSWORD AFTER 12.2 RAPID INSTALL

    References

      12.2 Documentation Library
      1581299.1 : EBS 12.2 Product Information Center
      1320300.1 : Oracle E-Business Suite Release Notes, Release 12.2
      1606170.1 : Oracle E-Business Suite Technology Stack and Applications DBA Release Notes for Release 12.2.3
      1624423.1 : Oracle E-Business Suite Technology Stack and Applications DBA Release Notes for R12.TXK.C.Delta.4 and R12.AD.C.Delta.4
      1594274.1 : Oracle E-Business Suite Release 12.2: Consolidated List of Patches and Technology Bug Fixes

    Related Articles

      Oracle E-Business Suite 12.2 Now Available
      startCD options to install Oracle E-Business Suite Release 12.2
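    As a rough illustration of the distribution model described above, applying the new startCD amounts to unzipping the patch over the existing stage. This is a sketch only; the staging directory and patch file name below are hypothetical, so substitute your own paths:

        # Unzip the startCD patch (18086193) on top of the existing 12.2 stage
        cd /u01/StageR122                        # hypothetical main staging area
        unzip -o p18086193_R12_GENERIC.zip       # hypothetical patch file name; overwrites prior startCD files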

    Read the article

  • Get Pop Up Notifications for Your RSS Feeds with Feed Notifier

    - by DigitalGeekery
    Are you looking for a way to get updates from your favorite websites right to your desktop? If so, you'll want to check out Feed Notifier. This free Windows application runs in the system tray and delivers pop-up notifications to your desktop when your subscribed RSS feeds are updated.

    Download and install Feed Notifier. (Download link below) When you are finished installing, the Feed Notifier Preferences window will open. Click on the Add... button to add an RSS feed. Copy and paste the feed URL into the text box and click Next. Choose your polling interval. This is how often your feed will be checked for new items. You can set your polling interval in days, hours, minutes, or even seconds. Click Finish.

    At your configured interval, Feed Notifier will check your feeds for new items. If new items are present, they will pop up above your system tray, showing an intro portion of the article. Simply click the headline in the feed pop-up to open the full article in your default browser.

    Setting Preferences

    Open the preferences of Feed Notifier by going to Start > All Programs > Feed Notifier, or by right-clicking on the system tray icon and selecting Preferences. On the Pop-ups tab you can configure the duration in seconds that each article stays displayed on your screen. The default is five seconds. You can also change the size of the display, the theme, and the amount of content displayed.

    The Options tab offers additional configuration such as article caching and using a proxy server. The Filters tab allows you to filter in or out certain content. To add a filter, click Add..., then type in the filter rule. You can even choose to apply it to only certain feeds. Click OK. Feed Notifier will display on the Filters tab the number of times each filter is applied. Click OK when finished.

    You can scroll through the articles by using the forward and back buttons at the lower left, or use the play / pause buttons to move through the articles in a slideshow-type fashion.

    Feed Notifier is a nice way to get your updated feeds directly to your desktop in a timely fashion. It supports all RSS and Atom feeds and features a clean look and feel with plenty of customizable options.

    Download Feed Notifier

    Read the article

  • Free RAM disappears - Memory leak?

    - by Izzy
    On a freshly started system, free reports about 1.5G used RAM (8G RAM altogether, Ubuntu 12.04 with lightdm and plasma desktop, one konsole window started). With the apps I use running, it still consumes no more than 2G. However, after the system has been running for a couple of days, more and more of my free RAM disappears, without showing up in the list of used apps: while smem --pie=name reports less than 20% used (and 80% being available), everything else says differently. free -m, for example, reports on about day 7:

                     total       used       free     shared    buffers     cached
        Mem:          7459       7013        446          0        178        997
        -/+ buffers/cache:       5836       1623
        Swap:         9536        296       9240

    (so you can see, it's not the buffers or the cache). Today this finally ended with the system crashing completely: the window manager being gone, apps "hanging in the air" (frameless), and a popup notifying me about "too many open files". Syslog reports:

        kernel: [856738.020829] VFS: file-max limit 752838 reached

    So I closed those applications I was able to close, and killed X using Ctrl-Alt-Backspace. X tried to come up again after that with failsafeX, but was unable to do so as it could no longer detect its configuration. So I switched to a console using Ctrl-Alt-F2, captured all information I could think of (vmstat, free, smem, /proc/meminfo, lsof, ps aux), and finally rebooted. X again came up with failsafeX; this time I told it to "recover from my backed-up configuration", then switched to a console and successfully used startx to bring up the graphical environment.

    I have no real clue as to what is causing this issue, though it must have to do either with X itself or with some user processes running on X, as after killing X, free -m output looked like this:

                     total       used       free     shared    buffers     cached
        Mem:          7459       2677       4781          0         62        419
        -/+ buffers/cache:       2195       5263
        Swap:         9536         59       9477

    (~3.5GB being freed) -- to compare with the output after a fresh start:

                     total       used       free     shared    buffers     cached
        Mem:          7459       1483       5975          0         63        730
        -/+ buffers/cache:        689       6769
        Swap:         9536          0       9536

    Two more helpful outputs are provided by memstat -u. Shortly before the crash:

        User      Count     Swap       USS       PSS       RSS
        mail          1        0       200       207       616
        whoopsie      1      764       740       817      2300
        colord        1     3200       836       894      2156
        root         62    70404    352996    382260    569920
        izzy         80   177508   1465416   1519266   1851840

    After having X killed:

        User      Count     Swap       USS       PSS       RSS
        mail          1        0       184       188       356
        izzy          1     1400       708       739      1080
        whoopsie      1      848       668       826      1772
        colord        1     3204       804       888      1728
        root         62    54876    131708    149950    267860

    And after a restart, back in X:

        User      Count     Swap       USS       PSS       RSS
        mail          1        0       212       217       628
        whoopsie      1        0      1536      1880      5096
        colord        1        0      3740      4217      7936
        root         54        0    148668    180911    345132
        izzy         47        0    370928    437562    915056

    Edit: Just added two graphs from my monitoring system. Interesting to see: every time there is a "jump" in memory consumption, CPU peaks as well. Just found this right now, and it reminds me of another indicator pointing to X itself: often when returning to my machine and unlocking the screen, I found something doing heavy work on my CPU. Checking with top, it always turned out to be /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none.

    So after this long explanation, finally my questions:

      - What could be the possible causes?
      - How can I better identify involved processes/applications?
      - What steps could be taken to avoid this behaviour, short of rebooting the machine every X days?

    I was running 8.04 (Hardy) for about 5 years on my old machine, never having experienced the like (always more than 100 days uptime before rebooting, e.g. for kernel updates). This is now a completely new machine with a fresh install of 12.04. In case it matters, some specs: AMD A4-3400 APU with Radeon(tm) HD Graphics, using the open-source ati/radeon driver (so no fglrx installed), 8GB RAM, WDC WD1002FAEX-0 hdd (1TB), Asus F1A75-V Evo mainboard, Ubuntu 12.04 64-bit with KDE4/Plasma. Apps usually open more or less permanently include Evolution, Firefox, konsole (with Midnight Commander running inside, about 4 tabs), and LibreOffice, plus occasionally Calibre, Gimp and Moneyplex (banking software I have been using for almost 20 years now, in a version which did fine on Hardy).
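    A minimal sketch of the capture step mentioned above, in case someone wants to reproduce it over time (assumes smem is installed; the log path is hypothetical). Running it from cron, e.g. every 15 minutes, yields a timeline showing which processes grow:

        #!/bin/sh
        # Append a timestamped memory snapshot to a log for later comparison
        LOG="$HOME/memwatch.log"                 # hypothetical location
        {
            echo "=== $(date) ==="
            free -m
            vmstat
            smem --sort=rss | tail -n 20         # 20 biggest processes by resident size
            cat /proc/meminfo
        } >> "$LOG" 2>&1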

    Read the article

  • SQL Azure: Notes on Building a Shard Technology

    - by Herve Roggero
    In Chapter 10 of the book on SQL Azure (http://www.apress.com/book/view/9781430229612) I am co-authoring, I am digging deeper into what it takes to write a Shard. It's actually a pretty cool exercise, and I wanted to share some thoughts on how I am designing the technology.

    A Shard is a technology that spreads the load of database requests over multiple databases, as transparently as possible. The type of shard I am building is called a Vertical Partition Shard (VPS). A VPS is a mechanism by which the data is stored in one or more databases behind the scenes, but your code has no idea at design time which data is in which database. It's like having a mini cloud for records instead of services.

    Imagine you have three SQL Azure databases that have the same schema (DB1, DB2 and DB3), and you would like to issue a SELECT * FROM Users on all three databases, concatenate the results into a single resultset, and order by last name. Imagine you want to ensure your code doesn't need to change if you add a new database to the shard (DB4). Now imagine that you want to make sure all three databases are queried at the same time, in a multi-threaded manner, so your code doesn't have to wait for three database calls sequentially. Then, imagine you would like to obtain a breadcrumb (in the form of a new, virtual column) that gives you a hint as to which database a record came from, so that you could update it if needed. Now imagine all that is done through the standard SqlClient library... and you have the Shard I am currently building.

    Here are some lessons learned and techniques I am using with this shard:

      - Parallel Processing: Querying databases in parallel is not too hard using the Task Parallel Library; all you need is to lock your resources when needed.
      - Deleting/Updating Data: That's not too bad either, as long as you have a breadcrumb. However, it becomes more difficult if you need to update a single record and you don't know which database it is in.
      - Inserting Data: I am using a round-robin approach in which each new insert request is directed to the next database in the shard. Not sure how to deal with Bulk Loads just yet...
      - Shard Databases: I use a static collection of SqlConnection objects which needs to be loaded once; from there on, all the Shard commands use this collection.
      - Extension Methods: In order to make it look like the Shard commands are part of the SqlClient class, I use extension methods. For example, I added ExecuteShardQuery and ExecuteShardNonQuery methods to SqlClient.
      - Exceptions: Capturing exceptions in multi-threaded code is interesting... but I kept it simple for now. I am using the ConcurrentQueue to store my exceptions.
      - Database GUID: Every database in the shard is given a GUID, which is calculated based on the connection string's values.
      - DataTable: The Shard methods return a DataTable object which can be bound to objects.

    I will be sharing the code soon as an open-source project on CodePlex. Please stay tuned on twitter to know when it will be available (@hroggero). Or check www.bluesyntax.net for updates on the shard. Thanks!
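    To make the parallel-query, breadcrumb, and extension-method ideas above concrete, here is a minimal sketch, not the book's actual implementation. It assumes the shard is exposed as a collection of SqlConnection objects, and the names ExecuteShardQuery and __SourceDb are illustrative only:

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;
        using System.Security.Cryptography;
        using System.Text;
        using System.Threading.Tasks;

        public static class ShardExtensions
        {
            // Runs the same query against every database in the shard in parallel,
            // tags each row with a breadcrumb GUID, and merges the results.
            public static DataTable ExecuteShardQuery(
                this IEnumerable<SqlConnection> shard, string sql)
            {
                var tables = new ConcurrentQueue<DataTable>();
                var errors = new ConcurrentQueue<Exception>();

                Parallel.ForEach(shard, conn =>
                {
                    try
                    {
                        // Clone the connection so each thread owns its own.
                        using (var db = new SqlConnection(conn.ConnectionString))
                        using (var cmd = new SqlCommand(sql, db))
                        {
                            db.Open();
                            var t = new DataTable();
                            t.Load(cmd.ExecuteReader());

                            // Breadcrumb: a virtual column identifying the source database,
                            // derived deterministically from the connection string.
                            t.Columns.Add("__SourceDb", typeof(Guid));
                            Guid id = DatabaseGuid(db.ConnectionString);
                            foreach (DataRow row in t.Rows) row["__SourceDb"] = id;
                            tables.Enqueue(t);
                        }
                    }
                    catch (Exception ex) { errors.Enqueue(ex); } // collect, don't crash the loop
                });

                if (!errors.IsEmpty) throw new AggregateException(errors);

                var merged = new DataTable();
                foreach (var t in tables) merged.Merge(t);
                return merged; // caller can sort, e.g. merged.DefaultView.Sort = "LastName"
            }

            static Guid DatabaseGuid(string connectionString)
            {
                using (var md5 = MD5.Create())
                    return new Guid(md5.ComputeHash(Encoding.UTF8.GetBytes(connectionString)));
            }
        }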

    Read the article

  • ffmpeg unmet dependencies

    - by Nikki
    I faced an issue recently when I tried to install ffmpeg on my Ubuntu computer. I am running Ubuntu 11.10 64-bit, all the latest updates are installed and the system runs perfectly; however, I need to record my desktop, and I have read in many articles that ffmpeg is one of the best recording tools for it (besides providing video packages). So I tried to run:

        sudo apt-get install ffmpeg

    However, I wasn't able to do this because the packages have unmet dependencies. Here is the full text I receive after trying to install the package above:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have requested
        an impossible situation or if you are using the unstable distribution that
        some required packages have not yet been created or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         ffmpeg : Depends: libavcodec53 (< 4:0.7.3-99) but it is not going to be installed or
                           libavcodec-extra-53 (< 4:0.7.3.99) but 4:0.8.0.1~ppa2 is to be installed
                  Depends: libavdevice53 (>= 4:0.7.3-0ubuntu0.11.10.1) but it is not going to be installed or
                           libavdevice-extra-53 (>= 4:0.7.3) but it is not going to be installed
                  Depends: libavdevice53 (< 4:0.7.3-99) but it is not going to be installed or
                           libavdevice-extra-53 (< 4:0.7.3.99) but it is not going to be installed
                  Depends: libavfilter2 (>= 4:0.7.3-0ubuntu0.11.10.1) but it is not going to be installed or
                           libavfilter-extra-2 (>= 4:0.7.3) but it is not going to be installed
                  Depends: libavfilter2 (< 4:0.7.3-99) but it is not going to be installed or
                           libavfilter-extra-2 (< 4:0.7.3.99) but it is not going to be installed
                  Depends: libavformat53 (< 4:0.7.3-99) but 4:0.8-1u1~ppa2 is to be installed or
                           libavformat-extra-53 (< 4:0.7.3.99) but it is not going to be installed
                  Depends: libavutil51 (< 4:0.7.3-99) but it is not going to be installed or
                           libavutil-extra-51 (< 4:0.7.3.99) but 4:0.8.0.1~ppa2 is to be installed
                  Depends: libpostproc52 (< 4:0.7.3-99) but 4:0.8-1u1~ppa2 is to be installed or
                           libpostproc-extra-52 (< 4:0.7.3.99) but it is not going to be installed
                  Depends: libswscale2 (< 4:0.7.3-99) but 4:0.8-1u1~ppa2 is to be installed or
                           libswscale-extra-2 (< 4:0.7.3.99) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    This problem didn't exist on my previous laptop, which ran the same Ubuntu 11.10 64-bit as my new one. Can anyone please help me find a solution without "messing up and breaking" the whole system? Thank you in advance for helping.
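    The "~ppa2" version suffixes in the output above suggest the conflicting libav* packages come from a third-party PPA rather than the Ubuntu archive. As a hedged diagnostic sketch (not a guaranteed fix), the following shows which repository supplies each candidate version and whether anything is on hold:

        # Show every available version of the conflicting packages and their origin
        apt-cache policy libavcodec53 libavcodec-extra-53 libavformat53

        # List any packages that have been placed on hold
        dpkg --get-selections | grep hold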

    Read the article

  • Launch 7: Windows Phone 7 Style Live Tiles On Android Mobiles

    - by Gopinath
    Android is a great mobile OS, but one thought lingers in the minds of a few Android owners: am I using a cheap iPhone? This is a valid thought for many low-end Android users, as their phones run sluggishly and the user interface of Android looks like an imitation of iOS. When it comes to Windows Phone 7 users, even though their operating system's features are not as great as those of iPhone/Android, it has a unique user interface; the Windows Phone 7 user interface is very intuitive and fresh, and its constantly updating Live Tiles show all the required information on the home screen. Android has the best mobile operating system features except the UI, and Windows Phone 7 has an excellent user interface. How about porting the Windows Phone 7 Tiles interface to an Android? That would be great.

    The Launcher 7 app brings the best of the Windows Phone 7 look and feel to Android OS. Once the Launcher 7 app is installed and activated, it brings Live Tiles, constantly updating controls that show information, to the Android home screen. Apart from simple and smooth tiles, a handful of customization options are provided. Users can change the colour of the tiles, add new tiles, and enable/disable transitions.

    The reviews on Android Market are on the positive side, with 4.4 stars from 10,000+ reviewers. Here are a few user reviews:

      1. Does what it says. only issue for me is that the app drawer doesn't rotate. And I would like the UI to rotate when my KB is opened. HTC desire z - Jonathan
      2. Works great on atrix. Kudos to developers. Awesome. Though needs: Better notification bar More stock images of tiles Better fitting of widgets on tiles - Manny
      3. Looks really good like it much more than I thought I would runs real smooth running royal ginger 2.1 - Jay
      4. Omg amazing i am definetly keeping it as my default best of android and windows - Devon
      5. Man! An update every week! Very very responsive developer! - Andrew

    You can read more reviews on Android Market here. There is no doubt that this application is receiving rave reviews. After scanning a while through the reviews, a few complaints throw light on the negative side: battery drains a bit faster, and low-end mobiles run a bit sluggishly. The application is available in two versions: an ad-supported free version and a $1.41 ad-free version.

    Download Launcher 7 from Android Market

    This article, titled Launch 7: Windows Phone 7 Style Live Tiles On Android Mobiles, was originally published at Tech Dreams.

    Read the article

  • BI&EPM in Focus June 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    General News

      - Thomas Kurian Discusses Oracle Exalytics, SAP HANA (replay | preso | press)
      - Accenture & Oracle Study: The Challenges of Corporate Financial Reporting (link)
      - Flash Demo: Oracle Hyperion Planning on Exalytics in the Public Sector (link)
      - Flash Demo: OBIEE & Exalytics in Retail (link)

    Customers

      - Italian partner Alfa Sistemi implemented at Autovie Venete S.p.A.: Integrates Business Intelligence and Performance Management to Improve Efficiency and Speed for Managing Public Works Projects (English version) / Autovie Venete implementa un sistema integrato di Business Intelligence e Performance Management per migliorare l'efficienza e la tempestività dell'attività di Controlling di Commessa (Italian version)
      - FANCL Gains 360-Degree View of Customers across Multiple Sales Channels, Reduces Reports by 75%
      - Korea Yakult Improves Profit & Loss Analysis with Oracle Hyperion Planning and OBIEE
      - Hill International Streamlines Forecasting, Improves Visibility into Project Productivity and Profitability
      - Children's Rights in Society Better Supports Organizational Mission with Advanced, Integrated, and Streamlined Business Intelligence Tools
      - Profit: International utility Enel monitors the performance of global subsidiaries with Oracle Hyperion Applications (link)
      - Profit: Charting a New Course: Korean Air gains altitude by leveraging its greatest asset: information (link)

    Events

      - June 12: Breaking Away from the Excel Add-In: Welcome to Hyperion Smart View 11.1.2.2 (link)
      - June 13: Upgrading OBIEE 10g to 11g: Best Practices and Lessons Learned (Performance Architects) (link)
      - June 14, The Netherlands: Strategies for Business Excellence, New Release of Oracle Hyperion EPM Suite (link)
      - June 21: Comprehensive and Accurate Forecasting for Healthcare (link)
      - June 26: What Exactly is Exalytics? (KPI Partners) (link)
      - Webcast Replay: Is Your Company Able to Navigate Through Market Volatility? (link)
      - Webcast Replay: Is Hope and Email The Core of Your Reconciliation Process? (link)
      - Webcast Replay: Troubleshooting EPM Reporting & Analysis 11.1.2.x (link)
      - Webcast Replay: Is your Organization Flying Blind when it comes to Understanding Profitability? (link)

    Enterprise Performance Management

      - Final Oracle EPM Information Panel (CIP) survey on cost, profitability and performance reporting/scorecards is now OPEN (link)
      - New on EPM Blog: What's Going on With IFRS? (link)
      - How does Crystal Ball integrate with EPM Solutions? New collateral and demos on Crystal Ball Solution Factory! (link)
      - New YouTube Video: Business Case Analysis with Oracle Crystal Ball (link)
      - Crystal Ball 11.1.2.2 is released! Grouped Assumptions in Sensitivity Charts, Data Filtering When Fitting Distributions, and Parameter Edits When Fitting Distributions, to name a few. Get full details from the online New Features Guide (link)
      - New DRM Oracle-by-Examples now available (link)
      - Support Blog: Hyperion Ledgerlink Sample Record and Windows 7: Now you see it, now you don't (link)
      - Use Enterprise Manager FMW Control to Troubleshoot Oracle EPM 11.1.2 Family of Products (link)

    Business Intelligence

      - Whitepaper: Real-Time Operational Reporting for E-Business Suite via GoldenGate Replication to an Operational Data Store. How Oracle enabled real-time operational reporting for its $20B services contract business with GoldenGate & OBIEE (link)
      - KPI Partners ebook: Understanding Oracle BI Components and Repository Modeling Basics (link)
      - "Getting Started with Oracle Endeca Information Discovery" video tutorials now available (link)
      - Oracle BI Publisher Conversion Center: Convert from Crystal, Actuate, or Oracle Reports to Oracle BI Publisher (link)
      - Oracle Fusion Applications: Monthly Partner Updates Webcast Replays to help BI partners understand how OBI, Essbase, BI-Apps and Fusion work together:
          - More on Fusion CRM: Fusion Marketing
          - More on Fusion CRM: Fusion CRM Sales Start-Up Packs and Expert Services for Implementation Partners
          - Introducing the Oracle Fusion Accounting Hub
          - Implementing Fusion Applications using Oracle's Composers
          - Oracle Fusion Applications Co-Existence

    Read the article

  • RC of Entity Framework 4.1 (which includes EF Code First)

    - by ScottGu
    Last week the data team shipped the Release Candidate of Entity Framework 4.1. You can learn more about it and download it here.

    EF 4.1 includes the new "EF Code First" option that I've blogged about several times in the past. EF Code First provides a really elegant and clean way to work with data, and enables you to do so without requiring a designer or XML mapping file. Below are links to some tutorials I've written in the past about it:

      - Code First Development with Entity Framework 4.x
      - EF Code First: Custom Database Schema Mapping
      - Using EF Code First with an Existing Database

    The above tutorials were written against the CTP4 release of EF Code First (and so some APIs might be a little different), but the concepts and scenarios outlined in them are the same as with the RC.

    Go Live License

    Last week's EF 4.1 RC ships with a "go live" license that enables you to use it in production environments. The final release of EF 4.1 will ship within the next 4 weeks and will be 100% API compatible with the RC release.

    Improvements with the RC

    The RC includes several improvements and enhancements. The EF team has a good blog post summarizing the RC changes. Scott Hanselman also has a nice video interview with the data team that talks more about the release. One of my favorite improvements introduced with last week's RC is its support for medium trust security. This enables you to use EF 4.1 (and code-first) within low-cost ASP.NET shared hosting web environments, without requiring a hoster to install anything to use it. EF 4.1 also now supports validation with not only code-first scenarios, but also model-first and database-first workflows.

    Upgrading from previous releases

    The RC does include a few API tweaks and changes from the prior CTP builds. Read the release notes that come with the release to get a more detailed listing of the changes. John Papa also has an excellent Upgrading to EF 4.1 RC blog post that describes the steps he took when upgrading a large project he wrote with the previous CTP5 release. The work to upgrade is pretty straightforward and easy; use his write-up as a guide on how to quickly update projects of your own.

    NuGet Package Rename

    One of the changes that the data team made between the CTP5 and RC releases was to rename the NuGet package from "EFCodeFirst" to "EntityFramework". They decided to make this change since the EF 4.1 release now includes several additions above and beyond just code first. If you have already installed the "EFCodeFirst" NuGet package, you'll want to uninstall it and then install the new "EntityFramework" NuGet package. John Papa's blog post details the exact steps on how to do this (it only takes ~20 seconds).

    More EF Tutorials

    Julie Lerman has created some nice whitepapers and tutorials for MSDN that show using the new EF4 and EF 4.1 feature set. Click here to find links to read and watch them.

    Summary

    I'm really excited about the EF 4.1 release that will be shipping next month. It significantly improves the Entity Framework, and makes it even easier and cleaner to work with data inside of .NET. You can take advantage of it within all ASP.NET projects (including both Web Forms and MVC), within client projects using Windows Forms and WPF, and within other project types like WCF, Console and Services. You can use NuGet to easily install it within all of them.

    Hope this helps,

    Scott

    P.S. I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
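    For readers who haven't seen EF Code First yet, here is a minimal sketch of the pattern the post describes (illustrative model names, not taken from the tutorials above). The whole model is plain classes plus a DbContext; no designer or XML mapping file is involved:

        using System.Collections.Generic;
        using System.Data.Entity;   // from the "EntityFramework" NuGet package

        public class Blog
        {
            public int Id { get; set; }
            public string Title { get; set; }
            public virtual ICollection<Post> Posts { get; set; }
        }

        public class Post
        {
            public int Id { get; set; }
            public string Content { get; set; }
        }

        public class BlogContext : DbContext
        {
            // Tables and relationships are inferred from the classes by convention.
            public DbSet<Blog> Blogs { get; set; }
            public DbSet<Post> Posts { get; set; }
        }

    If you previously installed the old package, the switch described under "NuGet Package Rename" is just "Uninstall-Package EFCodeFirst" followed by "Install-Package EntityFramework" in the Package Manager Console.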

    Read the article

  • Announcing the Winnipeg VS.NET 2012 Community Launch Event!

    - by D'Arcy Lussier
    Back in May 2010 the local Winnipeg technical community got together and put on a launch event for VS.NET 2010. That event was such a good time that we're doing it again this year for the VS.NET 2012 launch!

    On December 6th, the Winnipeg .NET User Group is hosting a full day VS.NET 2012 Community Launch Event at the IMAX theatre in Portage Place! We have 4 sessions planned covering dev tools, ALM/TFS, web development, and cloud development, presented by Dylan Smith, Tyler Doerksen, and myself. You can get all the details and register on our Eventbrite site: http://wpgvsnet2012launch.eventbrite.ca/

    I've included the details below as well for convenience:

    Winnipeg VS.NET 2012 Community Launch Event

    Join us for a full day of sessions highlighting the new features and capabilities of Visual Studio .NET 2012 and the .NET 4.5 Framework! Hosted by the Winnipeg .NET User Group, this community event is FREE thanks to the generous support from our event sponsors: Imaginet, Online Business Systems, and Prairie Developer Conference.

    Event Details

      When: Thursday, December 6th from 8:00 AM - 4:00 PM
      Where: IMAX Theatre, Portage Place
      Cost: *FREE!*

    Agenda

      8:00 - 9:00    Continental Breakfast and Registration
      9:00 - 9:15    Welcome
      9:15 - 10:30   End-To-End Application Lifecycle Management with TFS 2012
      10:30 - 10:45  Break
      10:45 - 12:00  Improving Developer Productivity with Visual Studio 2012
      12:00 - 1:00   Lunch Break (Lunch Not Provided)
      1:00 - 2:15    Web Development in Visual Studio 2012 and .NET 4.5
      2:15 - 2:30    Break
      2:30 - 3:45    Microsoft Cloud Development with Azure and Visual Studio 2012
      3:45 - 4:00    Prizes and Thanks

    Session Abstracts

    End-To-End Application Lifecycle Management with TFS 2012 (Dylan Smith, Imaginet)

    In this session we'll walk through the application development lifecycle from end to end and see how some of the new capabilities in TFS 2012 help streamline the software delivery process. There are some exciting new capabilities around Agile project management, gathering feedback, code reviews, unit testing, version control, storyboarding, etc. During this session we'll follow a fictional software development team through the process of planning, developing, testing, and deployment, focusing on where the new functionality in VS/TFS 2012 fits in to make teams more effective.

    Improving Developer Productivity with Visual Studio 2012 (Dylan Smith, Imaginet)

    Microsoft Visual Studio 2012 enables developers to take full advantage of the capability of Windows using the skills and technologies developers already know and love to deliver exceptional and compelling apps. Whether working individually or in a small, medium or large development team, Visual Studio 2012 sets a new standard for development tools, helping teams deliver superior results for their customers that help set them apart from their competitors. In this session we'll walk through new features in Visual Studio 2012, specifically focusing on how these improve developer productivity.

    Web Development in Visual Studio 2012 and .NET 4.5 (D'Arcy Lussier, Online Business Systems)

    It's an exciting time to be a web developer in the Microsoft ecosystem! The launch of Visual Studio 2012 and .NET 4.5 brings new tooling and features, and the ASP.NET team is continually releasing updates for MVC, SignalR, Web API, and other platform features. In this session we'll take a tour of the new features and technologies available for Microsoft web developers here in 2012!

    Microsoft Cloud Development with Azure and Visual Studio 2012 (Tyler Doerksen, Imaginet)

    Microsoft's public cloud platform is nearing its third year of public availability, supporting web site/service hosting, storage, relational databases, virtual machines, virtual networks and much more. Windows Azure provides both power and flexibility. But to capture this power you need to have the right tools! This session will demonstrate the primary ways you can harness Windows Azure with the .NET platform. We'll explain cloud service development, packaging, deployment, and testing, and show how Visual Studio 2012 with the Windows Azure SDK and other Microsoft tools can be used to develop for and manage Windows Azure. Harness the power of the cloud from the comfort of Visual Studio 2012!

    Read the article

  • Slow Ubuntu 10.04 after a long time unused

    - by Winston Ewert
    I'm at spring break, so I'm back at my parents' house. I've turned my computer on, which has been off since January, and it's unusably slow. This was not the case when I last used the computer in January. It is running 10.04.

      Memory: 875.5 MB
      CPU: AMD Athlon 64 X2 Dual Core Processor 4400+
      Available Disk Space: 330.8 GB

    I'm not seeing large usage of either memory or disk I/O. If I look at my list of processes, there is only a very small amount of CPU usage. However, if I hover over the CPU usage graph on the top bar, I sometimes get really high readings like 100%. It took a long time to boot, to open Firefox, and to open a link in Firefox. As far as I can tell, everything the computer tries to do is just massively slow. Right now, I'm running apt-get dist-upgrade to install any updates that I will have missed since the last time this computer was on. Any ideas as to what is going on here?

    UPDATE: I thought to check dmesg and it has a lot of entries like this:

        [ 1870.142201] ata3.00: exception Emask 0x0 SAct 0x7 SErr 0x0 action 0x0
        [ 1870.142206] ata3.00: irq_stat 0x40000008
        [ 1870.142210] ata3.00: failed command: READ FPDMA QUEUED
        [ 1870.142217] ata3.00: cmd 60/08:10:c0:4a:65/00:00:03:00:00/40 tag 2 ncq 4096 in
        [ 1870.142218]          res 41/40:00:c5:4a:65/00:00:03:00:00/40 Emask 0x409 (media error) <F>
        [ 1870.142221] ata3.00: status: { DRDY ERR }
        [ 1870.142223] ata3.00: error: { UNC }
        [ 1870.143981] ata3.00: SB600 AHCI: limiting to 255 sectors per cmd
        [ 1870.146758] ata3.00: SB600 AHCI: limiting to 255 sectors per cmd
        [ 1870.146761] ata3.00: configured for UDMA/133
        [ 1870.146777] ata3: EH complete
        [ 1872.092269] ata3.00: exception Emask 0x0 SAct 0x7 SErr 0x0 action 0x0
        [ 1872.092274] ata3.00: irq_stat 0x40000008
        [ 1872.092278] ata3.00: failed command: READ FPDMA QUEUED
        [ 1872.092285] ata3.00: cmd 60/08:00:c0:4a:65/00:00:03:00:00/40 tag 0 ncq 4096 in
        [ 1872.092287]          res 41/40:00:c5:4a:65/00:00:03:00:00/40 Emask 0x409 (media error) <F>
        [ 1872.092289] ata3.00: status: { DRDY ERR }
        [ 1872.092292] ata3.00: error: { UNC }
        [ 1872.094050] ata3.00: SB600 AHCI: limiting to 255 sectors per cmd
        [ 1872.096795] ata3.00: SB600 AHCI: limiting to 255 sectors per cmd
        [ 1872.096798] ata3.00: configured for UDMA/133
        [ 1872.096814] ata3: EH complete
        [ 1874.042279] ata3.00: exception Emask 0x0 SAct 0x7 SErr 0x0 action 0x0
        [ 1874.042285] ata3.00: irq_stat 0x40000008
        [ 1874.042289] ata3.00: failed command: READ FPDMA QUEUED
        [ 1874.042296] ata3.00: cmd 60/08:10:c0:4a:65/00:00:03:00:00/40 tag 2 ncq 4096 in
        [ 1874.042297]          res 41/40:00:c5:4a:65/00:00:03:00:00/40 Emask 0x409 (media error) <F>
        [ 1874.042300] ata3.00: status: { DRDY ERR }
        [ 1874.042302] ata3.00: error: { UNC }
        [ 1874.044048] ata3.00: SB600 AHCI: limiting to 255 sectors per cmd
        [ 1874.046837] ata3.00: SB600 AHCI: limiting to 255 sectors per cmd
        [ 1874.046840] ata3.00: configured for UDMA/133
        [ 1874.046861] sd 2:0:0:0: [sda] Unhandled sense code
        [ 1874.046863] sd 2:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [ 1874.046867] sd 2:0:0:0: [sda] Sense Key : Medium Error [current] [descriptor]
        [ 1874.046872] Descriptor sense data with sense descriptors (in hex):
        [ 1874.046874]         72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00
        [ 1874.046883]         03 65 4a c5
        [ 1874.046886] sd 2:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed
        [ 1874.046892] sd 2:0:0:0: [sda] CDB: Read(10): 28 00 03 65 4a c0 00 00 08 00
        [ 1874.046900] end_request: I/O error, dev sda, sector 56969925
        [ 1874.046920] ata3: EH complete

    I'm not certain, but that looks like my problem may be a failing hard drive. But the drive is less than a year old; it really shouldn't be failing now...
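    One hedged way to check that suspicion, assuming the smartmontools package is installed and the drive is /dev/sda as in the log above:

        # Install the SMART utilities (Ubuntu package name)
        sudo apt-get install smartmontools

        # Dump the drive's SMART health data; look at Reallocated_Sector_Ct
        # and Current_Pending_Sector in particular
        sudo smartctl -a /dev/sda

        # Optionally run a short self-test, then re-read the results a few minutes later
        sudo smartctl -t short /dev/sda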

    Read the article

  • How can I best manage making open source code releases from my company's confidential research code?

    - by DeveloperDon
    My company (let's call them Acme Technology) has a library of approximately one thousand source files that originally came from its Acme Labs research group, was incubated in a development group for a couple of years, and has more recently been provided to a handful of customers under non-disclosure. Acme is getting ready to release perhaps 75% of the code to the open source community. The other 25% would be released later, but for now is either not ready for customer use or contains code related to future innovations they need to keep out of the hands of competitors.

    The code is presently formatted with #ifdefs that permit the same code base to work with the pre-production platforms that will be available to university researchers and a much wider range of commercial customers once it goes to open source, while at the same time being available for experimentation, prototyping, and forward compatibility testing with the future platform. Keeping a single code base is considered essential for the economics (and sanity) of my group, who would have a tough time maintaining two copies in parallel.

    Files in our current base look something like this:

        // Copyright 2012 (C) Acme Technology, All Rights Reserved.
        // Very large, often varied and restrictive copyright license in English and French,
        // sometimes also embedded in make files and shell scripts with varied
        // comment styles.

        ... Usual header stuff...

        void initTechnologyLibrary() {
            nuiInterface(on);
        #ifdef UNDER_RESEARCH
            holographicVisualization(on);
        #endif
        }

    And we would like to convert them to something like:

        // GPL Copyright (C) Acme Technology Labs 2012, Some rights reserved.
        // Acme appreciates your interest in its technology, please contact [email protected]
        // for technical support, and www.acme.com/emergingTech for updates and RSS feed.

        ... Usual header stuff...

        void initTechnologyLibrary() {
            nuiInterface(on);
        }

    Is there a tool, parse library, or popular script that can replace the copyright and strip out not just #ifdefs, but variations like #if defined(UNDER_RESEARCH), etc.? The code is presently in Git and would likely be hosted somewhere that uses Git. Would there be a way to safely link repositories together so we can efficiently reintegrate our improvements with the open source versions? Advice about other pitfalls is welcome.
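    For the #ifdef-stripping half of the question, one widely used tool worth evaluating is unifdef, which removes conditionals for the symbols you name, including the #if defined(UNDER_RESEARCH) variants. A hedged sketch, assuming a src/ tree and that .public suffixes are acceptable staging output:

        # Remove all code guarded by UNDER_RESEARCH, leaving other #ifdefs intact
        find src \( -name '*.c' -o -name '*.h' \) | while read -r f; do
            unifdef -UUNDER_RESEARCH -o "$f.public" "$f"
        done

    The copyright-header swap is a separate, simpler text substitution (e.g. a sed or Perl script keyed on the first comment block), since unifdef only touches preprocessor conditionals.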

    Read the article

  • How to Make the Gnome Panels in Ubuntu Totally Transparent

    - by The Geek
    We all love transparency, since it makes your desktop so beautiful and lovely, so today we're going to show you how to apply transparency to the panels in your Ubuntu Gnome setup. It's an easy process, and here's how to do it.

    This article is the first part of a multi-part series on how to customize the Ubuntu desktop, written by How-To Geek reader and ubergeek, Omar Hafiz.

    Making the Gnome Panels Transparent

    Of course we all love transparency, since it makes your desktop so beautiful and lovely. So you go to enable transparency in your panels: you right-click on your panel, choose Properties, go to the Background tab, and make your panel transparent. Easy, right? But instead of getting a lovely transparent panel, you often get a cluttered, ugly panel like this:

    Fortunately it can be easily fixed; all we need to do is edit the theme files. If your theme is one of those that came with Ubuntu, like Ambiance, then you'll have to copy it from /usr/share/themes to your own .themes directory in your Home folder. You can do so by typing the following command in the terminal:

        cp -r /usr/share/themes/theme_name ~/.themes

    Note: don't forget to substitute theme_name with the name of the theme you want to fix. But if your theme is one you downloaded, then it is already in your .themes folder.

    Now open your file manager and navigate to your home folder, then go to the .themes folder. If you can't see it, then you probably have the "View hidden files" option disabled. Press Ctrl+H to enable it. In .themes you'll find your previously copied theme folder; enter it, then go to the gtk-2.0 folder. There you may find a file named "panel.rc", which is a configuration file that tells your panel how it should look. If you find it there, rename it to "panel.rc.bak".

    If you don't find it, don't panic! There's nothing wrong with your system; it's just that your theme decided to put the panel configuration in the "gtkrc" file. Open this file with your favorite text editor, and at the end of the file there is a line that looks like this: include "apps/gnome-panel.rc". Comment out this line by putting a hash mark # in front of it, so it looks like this: # include "apps/gnome-panel.rc". Save and exit the text editor.

    Now change your theme to any other one, then switch back to the one you edited. Now your panel should look like this:

    Stay tuned for the second part in the series, where we'll cover how to change the color and fonts on your panels.
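    The manual steps above can also be scripted. A rough sketch, assuming a stock theme and GNU sed; substitute theme_name as before:

        # Copy the system theme into your local themes directory
        cp -r /usr/share/themes/theme_name ~/.themes/

        cd ~/.themes/theme_name/gtk-2.0

        # If the theme ships a separate panel.rc, set it aside...
        [ -f panel.rc ] && mv panel.rc panel.rc.bak

        # ...otherwise comment out the panel include in gtkrc
        sed -i 's|^include "apps/gnome-panel.rc"|# &|' gtkrc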

    Read the article

  • Graduate Life in Oracle by Ramakrishna Nalabothula

    - by david.talamelli
    Preparation for the BIG interview:

    I prepared in both technical and logical aspects to face the Oracle interview. I had to cover almost all main areas on the technical side and many types of problems on the logical side. We attended mock interviews and written tests at our college, and browsed websites and communities. Having started with such rigorous preparation before Oracle visited the college, it was possible for me to make it into the final selection list. I put in a big effort to reach this position and I am very happy to achieve this.

    Why I chose Oracle:

    Oracle is one of the best technology providers and has a large customer base. I think it is not an easy job to offer services to that many customers. So, the company needs young and dynamic people, and I wanted to be one among them. This gave me spirit and led me to walk into Oracle. I am working on different technologies and learning something new in the field. Having many customers is challenging Oracle, and my work is challenging me. I am confident that customers will never lose their faith in either me or Oracle.

    Learning at Oracle:

    The style of learning is good and never resembles a classroom session in a college. It is always fun to learn here. There is no exam to track performance, and there is enough time to learn things completely. There is no concept of stiff competition. Peers help through KT (Knowledge Transfers), and there are good resources in Oracle to learn from; people are always there to direct you to them. There are lots of opportunities for web learning too!

    My Work Area:

    My team gives me a great opportunity to offer service to the entire product. There were no situations when I got tense about my work, targets, or deliverables. I work with a global team, and my manager is based in the UK. I have a lot of freedom and flexibility. I use the work-from-home option in case of any disturbances in the city or due to personal problems. I have a weekly meeting with my manager and use Instant Messenger for status updates. My manager plans very well to give me enough time to complete tasks. I have good coordination from the team towards our deadlines. My work has also brought me close to many people across various technology and product teams. I am glad to have made many friends across Oracle. I am enjoying my time and work here. I cover all the major activities in the team. I am thankful to everyone, from the Development to the Quality Assurance Team, for placing high confidence in me by assigning such big responsibilities.

    My primary tasks are maintaining environments that are very unstable at times. This really requires a lot of time and effort to trace the root causes. I am working on, and still learning in, all these areas. The happiest thing is that I got chances to travel to the USA and the UK for training and for supporting a few customer demo projects. I got to explore two countries, and was sponsored to visit the places around, thanks to Oracle's policies. I am very thankful for all that I have from Oracle, and for the cooperation of my colleagues.

    Fun at Work:

    Oracle has an employee club that conducts games and events. I had the opportunity to participate in competitions and tournaments inside Oracle and at the inter-corporate level, across all games. I thank Oracle for providing me all these opportunities, and I would like to extend my thanks to Senior Management for their confidence in me. I thank the Oracle HR Recruiting Team too, for selecting me into Oracle and giving me this opportunity to share my experience and feelings.

    Ramakrishna Nalabothula

    Read the article

  • Standards Corner: Preventing Pervasive Monitoring

    - by independentid
    Phil Hunt is an active member of multiple industry standards groups and committees and has spearheaded discussions, creation and ratification of industry standards, including the Kantara Identity Governance Framework, among others. Being an active voice in the industry standards development world, we have invited him to share his discussions, thoughts, news & updates, and discuss use cases, implementation success stories (and even failures) around industry standards in this monthly column.

    Author: Phil Hunt

    On Wednesday night, I watched NBC's interview of Edward Snowden. The past year has been a tumultuous one in the IT security industry. There have been some amazing revelations about the activities of governments around the world, and we have had several instances of major security bugs in key security libraries: Apple's 'gotofail' bug, the OpenSSL Heartbleed bug, not to mention Java's zero-day bug, and others. Snowden's information showed the IT industry has been underestimating the need for security, and highlighted a general trend of lax use of TLS and poorly implemented security on the Internet. This did not go unnoticed in the standards community, and in particular the IETF.

    Last November, the IETF (Internet Engineering Task Force) met in Vancouver, Canada, where the issue of "Internet hardening" was discussed in a plenary session. Presentations were given by Bruce Schneier, Brian Carpenter, and Stephen Farrell describing the problem, the work done so far, and potential IETF activities to address pervasive monitoring. At the end of the presentation, the IETF called for consensus on the issue. If you know engineers, you know that it takes a while for a large group to arrive at a consensus, and this group numbered approximately 3000. When asked whether the IETF should respond to pervasive surveillance attacks, there was an overwhelming response for 'Yes'. When it came to 'No', the room echoed in silence. This was just the first of several consensus questions that were each overwhelmingly in favour of a response. This is the equivalent of a unanimous opinion for the IETF.

    Since the meeting, the IETF has followed through with the recent publication of a new "best practices" document on Pervasive Monitoring (RFC 7258). This document is extremely sensitive in its approach and separates the politics of monitoring from the technical ones.

        Pervasive Monitoring (PM) is widespread (and often covert) surveillance through intrusive gathering of protocol artefacts, including application content, or protocol metadata such as headers. Active or passive wiretaps and traffic analysis, (e.g., correlation, timing or measuring packet sizes), or subverting the cryptographic keys used to secure protocols can also be used as part of pervasive monitoring. PM is distinguished by being indiscriminate and very large scale, rather than by introducing new types of technical compromise.

        The IETF community's technical assessment is that PM is an attack on the privacy of Internet users and organisations. The IETF community has expressed strong agreement that PM is an attack that needs to be mitigated where possible, via the design of protocols that make PM significantly more expensive or infeasible. Pervasive monitoring was discussed at the technical plenary of the November 2013 IETF meeting [IETF88Plenary] and then through extensive exchanges on IETF mailing lists. This document records the IETF community's consensus and establishes the technical nature of PM.

    The draft goes on to further qualify what it means by "attack", clarifying that:

        The term is used here to refer to behavior that subverts the intent of communicating parties without the agreement of those parties. An attack may change the content of the communication, record the content or external characteristics of the communication, or through correlation with other communication events, reveal information the parties did not intend to be revealed. It may also have other effects that similarly subvert the intent of a communicator.

    The past year has shown that Internet specification authors need to put more emphasis on information security and integrity. The year also showed that specifications alone are not good enough: the implementations of security and protocol specifications have to be of high quality, with superior testing. I'm proud to say Oracle has been a strong proponent of this, having already established its own secure coding practices.

    Read the article

  • Java EE @ No Fluff Just Stuff Tour

    - by reza_rahman
    If you work in the US and still don't know what the No Fluff Just Stuff (NFJS) Tour is, you are doing yourself a very serious disservice. NFJS is by far the cheapest and most effective way to stay up to date through some world-class speakers and talks. This is most certainly true for US enterprise Java developers in particular. Following the US cultural tradition of old-fashioned roadshows, NFJS is basically a set program of speakers and topics offered at major US cities year round. Many now-famous technology speakers can trace their humble roots to NFJS. Via NFJS you basically get amazing training without paying for an expensive venue, lodging, or travel. The events are usually on the weekends, so you don't even need to skip work if you don't want to (a great feature for consultants on tight budgets and deadlines).

    I am proud to share with you that I recently joined the NFJS troupe. My hope is that this will help solve the lingering problem of effectively spreading the Java EE message here in the US. For NFJS, I hope my joining will help beef up perhaps much-desired Java content. In any case, simply being accepted into this legendary program is an honor I could have perhaps only dreamed of a few years ago. I am very grateful to Jay Zimmerman for seeing the value in me and the Java EE content. The current speaker line-up consists of the likes of Neal Ford, Venkat Subramaniam, Nathaniel Schutta, Tim Berglund and many other great speakers.

    I actually had my tour debut on April 4-5 with the NFJS New York Software Symposium, basically a short train commute away from my home office. The show is traditionally one of the smaller ones, and it was not that bad for a start. I look forward to doing a few more in the coming months (more on that a bit later). I had four talks back to back (really my four most favorite at the moment).

    The first one was a talk on JMS 2; some of you might already know JMS is one of my most favored Java EE APIs. The slides for the talk are posted below:

      What's New in Java Message Service 2, from Reza Rahman

    The next talk I delivered was my Cargo Tracker/Java EE + DDD talk. This talk basically overviews DDD and describes how DDD maps to Java EE, using code examples/demos from the Cargo Tracker Java EE Blue Prints project.

      Applied Domain-Driven Design Blue Prints for Java EE, from Reza Rahman

    The third talk I delivered was our flagship Java EE 7/8 talk. As you may know, currently the talk is basically about Java EE 7. I'll probably slowly evolve this talk to gradually transform it into a Java EE 8 talk as we move forward (I'll blog about that separately shortly). The following is the slide deck for the talk:

      JavaEE.Next(): Java EE 7, 8, and Beyond, from Reza Rahman

    My last talk for the show was my JavaScript + Java EE 7 talk. This talk is basically about aligning EE 7 with the emerging JavaScript ecosystem (specifically AngularJS). The slide deck for the talk is here:

      JavaScript/HTML5 Rich Clients Using Java EE 7, from Reza Rahman

    Unsurprisingly this talk was well attended. The demo application code is posted on GitHub. The code should be a helpful resource if this development model is something that interests you. Do let me know if you need help with it, but the instructions should be fairly self-explanatory.

    My next NFJS show is the Central Ohio Software Symposium in Columbus on June 6-8 (sorry for the late notice; it's been a really crazy few weeks). Here's my tour schedule so far; I'll keep you up to date as the tour goes forward:

      - June 6-8, Columbus, Ohio
      - June 24-27, Denver, Colorado (UberConf) - my most extensive agenda on the tour so far
      - July 18-20, Austin, Texas

    I hope you'll take this opportunity to get some updates on Java EE, as well as the other awesome content on the tour!

    Read the article

  • Moving monarchs and dragons: migrating the JDK bugs to JIRA

    - by darcy
    Among insects, monarch butterflies and dragonflies have the longest migrations; migrating JDK bugs involves a long journey as well! As previously announced by Mark back in March, we've been working according to a revised plan to transition JDK bug management from Sun's legacy system to an Oracle-internal JIRA instance, which will afterward be made visible and usable externally. I've been busily working on this project for the last few months, and the team has made good progress on many aspects of the effort.
    JDK bugs will be imported into JIRA regardless of age; bugs will also be imported regardless of state, including closed bugs. Consequently, the JDK bug project will start pre-populated with over 100,000 existing bugs, some dating all the way back to 1994. This will allow a continuity of information and allow new issues to be linked to old ones.
    Using a custom import process, the Sun bug numbers will be preserved in JIRA. For example, the Sun bug with bug number 4040458 will become "JDK-4040458" in JIRA; in JIRA the project name, "JDK" in our case, is part of the bug's identifier. Bugs created after the JIRA migration will be numbered starting at 8000000; bugs imported from the legacy system have numbers ranging between 1000000 and 7999999.
    We're working with the bugs.sun.com team to try to maintain continuity of the ability both to read JDK bug information and to file new incidents. At least for now, the overall architecture of bugs.sun.com will be the same as it is today: it will be a gateway bridging to an Oracle-internal system, but the internal system will change from the legacy database to JIRA. Generally we are aiming to preserve the visibility of bugs currently viewable on bugs.sun.com; however, bugs in areas not related to the JDK will not be visible after the transition to JIRA. New incoming incidents will be sent to a separate JIRA project for initial triage before possibly being moved into the JDK project.
    JDK bug management leans heavily on being able to track the state of bugs in multiple releases, especially to coordinate delivering synchronized security releases (known as CPUs, critical patch updates, in Oracle parlance). For a security release, it is common for half a dozen or more release trains to be affected (for example, JDK 5, JDK 6 update, OpenJDK 6, JDK 7 update, JDK 8, virtual releases for HotSpot Express, etc.). We've determined we need to track at least the tuple of (release, responsible engineer/assignee for the release, status in the release) for the release trains a fix is going into. To do this in JIRA, we are creating a separate port/backport issue type along with a custom link type to allow the multiple-release information to be easily grouped and presented together.
    The Sun legacy system had a three-level classification scheme: product, category, and subcategory. Out of the box, JIRA has only a one-level classification, component. We've implemented a custom second-level classification, subcomponent. As part of the bug migration we've taken the opportunity to think about how bugs should be grouped under a two-level system, and the new system will be simpler and more regular. The main top-level components of the JDK product will include: core-libs, client-libs, deploy, install, security-libs, other-libs, tools, and hotspot.
    For the libs areas, the primary name of the subcomponent will be the package of the API in question. In the core-libs component, there will be subcomponents like java.lang, java.lang.class_loading, java.math, java.util, and java.util:i18n. In the tools component, subcomponents will primarily correspond to command names in $JDK/bin like jar, javac, and javap.
    The first several bulk imports of the JDK bugs into JIRA have gone well, and we're continuing to refine the import for greater fidelity to the current data, including by reconstructing information not brought over in a structured fashion during the previous large JDK bug system migration back in 2004. We don't currently have a firm timeline for when the new system will be usable externally, but as it becomes available, I'll share further information in follow-up blog posts.

    Read the article

  • Visual Studio 2012 and .NET 4.5 now Live!

    - by Tarun Arora
    Today was the formal launch event for Visual Studio 2012 and .NET 4.5, a state-of-the-art development solution for building modern applications that span connected devices and continuous services, from the client to the cloud. The event was streamed live from http://visualstudiolaunch.com. S. Somasegar, corporate vice president of the Developer Division, opened the keynote; Jason Zander dived deeper into how to leverage Visual Studio 2012 and .NET 4.5 to build modern applications; and Brian Harry showed off all the awesome features in Visual Studio 2012 that improve application lifecycle management.
    I. Summary of the announcements made today
    1. Visual Studio updates coming this fall - The VS Update will better support agile teams, enable continuous quality, elevate SharePoint development with application lifecycle management (ALM) tools, and expand Visual Studio 2012 Windows development capabilities. It will be available as a community technology preview (CTP) later this month and in final release later this calendar year. A comprehensive list of what will be on offer can be found here.
    2. Visual Studio Express 2012 for Windows Desktop - This brings the newest desktop development capabilities in Visual Studio 2012 to Express users, too. You will be excited to know that the Express SKU will support integration with TFS; among the other cool features I would like to mention are unit testing, code analysis, and dependency management with NuGet. A full list and download links can be found here.
    3. F# tools for Visual Studio Express 2012 for Web - This F# tools release adds F# 3.0 components, such as the F# 3.0 compiler, F# Interactive, IDE support, and new F# features such as type providers and query expressions, to Visual Studio 2012 Express for Web. More details and download links can be found here.
    4. Visual Studio TFS 2012 Power Tools - The TFS 2012 Power Tools bring the goodness of the Best Practice Analyzer, Process Template Editor, Storyboard Shapes, Team Explorer enhancements, the TFPT command line, TFS server backups, and more to your TFS 2012 installation. They can be downloaded right away from here.
    II. Roadshows
    There will be many more community roadshows this month, packed with hours of demos and discussions. The Visual Studio UK team has just announced four UK launch events - face-to-face sessions including a product group speaker and partner sessions:
    Edinburgh, 1st October
    Manchester, 3rd October
    London, 4th October
    Reading, 5th October
    III. Get started
    Download Visual Studio 2012 and the additional supporting software from here. The Visual Studio development team has put together over 60 videos to help you learn about the new Visual Studio 2012 capabilities in more detail, and all of these will be available for watching here.
    IV. What's next
    A lot more exciting stuff is lined up:
    Windows 8 - Anticipated release: Oct. 26 (UPDATED 9/12)
    Windows Server 2012 - Released (UPDATED 9/4)
    System Center 2012 - Released (UPDATED 9/11)
    SQL Server 2012 - Released (UPDATED 4/2)
    Internet Explorer 10 - Anticipated release: between Q3 2012 and early 2013 (UPDATED 5/3)
    Office 2013 - Anticipated release: Q4 2012 or Q1 2013 (UPDATED 9/12)
    Exchange 2013 - Anticipated release: Q4 2012 (UPDATED 7/26)
    Visual Studio 2012 - Released (UPDATED 9/12)
    Kinect for Windows - Released (UPDATED 9/4)
    Windows Phone "Tango" and 8 - "Tango": released; anticipated "Windows Phone 8" release: Q4 2012 (UPDATED 9/5)
    Dynamics ERP Online - Anticipated release: September or October 2012 (UPDATED 7/20)
    Office 365 - Anticipated update schedule: "almost weekly" (UPDATED 9/12)
    Windows Azure - Rumored CTP release: Spring 2012 (UPDATED 9/7)
    SharePoint 2013 - Anticipated release: Q4 2012 (UPDATED 8/21)
    Enjoy!

    Read the article

  • SQL SERVER – Asynchronous Update and Timestamp – Check if Row Values are Changed Since Last Retrieve

    - by pinaldave
    Here is the question received just this morning. "Pinal, our application is much different from other applications you might have come across. In simple words, I would like to call it an asynchronously updated application. We need your quick opinion about one of the situations we are facing.
    From the business side: We have a bidding system (similar to eBay, but not exactly) where multiple parties bid on one item. During the last few minutes of bidding, many parties try to bid at the same time with the same price. When they hit submit, we would like to check whether the original data they retrieved has changed or not. If the original data they retrieved is the same, we will accept their new proposed price. If the original data has changed, they will have to resubmit the data with a new price.
    From the technical side: We have a row which we retrieve in our application. Multiple users are retrieving the same row. Some of the users will update the value of the row and submit. However, only the very first user should be allowed to update the row; all remaining users will have to re-fetch the row and update it once again. We do not want to lock any record, as that will create other problems. Do you have any solution for this kind of situation?"
    Fantastic question. I believe there is a good chance that we can use the TIMESTAMP datatype in this kind of application. Before we continue, let us see the following simple example.
        USE tempdb
        GO
        CREATE TABLE SampleTable (ID INT, Col1 VARCHAR(100), TimeStampCol TIMESTAMP)
        GO
        INSERT INTO SampleTable (ID, Col1)
        VALUES (1, 'FirstVal')
        GO
        SELECT ID, Col1, TimeStampCol
        FROM SampleTable st
        GO
        UPDATE SampleTable
        SET Col1 = 'NextValue'
        GO
        SELECT ID, Col1, TimeStampCol
        FROM SampleTable st
        GO
        DROP TABLE SampleTable
        GO
    Now let us see the resultset. Here is the simple explanation of the scenario: we created a table with a simple column of the TIMESTAMP datatype. When we inserted the very first value, a timestamp was generated. When we updated any value in that row, the timestamp was updated with a new value. Every single time any value in the row is updated, a new timestamp value is generated.
    Now let us apply this to the original question's scenario. In that case, multiple users are retrieving the same row, so everybody will have the same timestamp with them. Before any user updates any value, they should once again retrieve the timestamp from the table and compare it with the timestamp they already have. If both timestamps have the same value, the original row has not been updated, and we can safely update the row with the new value. After the initial update, the row will contain a new timestamp. Any subsequent update to the same row should go through the same process of checking the timestamp value held in memory. In that case, the timestamp from memory will be different from the timestamp in the row, which indicates that the row in the table has changed and the new update should not be allowed.
    I believe TIMESTAMP can be very useful in this kind of scenario (a minimal sketch of the check-and-update pattern follows below). Is there any better alternative? Please leave a comment with your suggestion and I will post it on the blog with due credit.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
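    As referenced above, here is a minimal sketch of the check-and-update pattern. Rather than comparing the timestamp in application memory, the comparison can be pushed into the UPDATE's WHERE clause so the check and the write happen atomically; the @OriginalTimeStamp variable and the 'NewBidValue' literal are illustrative assumptions, not part of the original example.
        -- Assumed sketch: optimistic concurrency via the row's TIMESTAMP value.
        DECLARE @OriginalTimeStamp BINARY(8);

        -- At fetch time: remember the row version along with the data.
        SELECT @OriginalTimeStamp = TimeStampCol
        FROM SampleTable
        WHERE ID = 1;

        -- At submit time: the update succeeds only if no one changed the row
        -- since it was read.
        UPDATE SampleTable
        SET Col1 = 'NewBidValue'
        WHERE ID = 1
          AND TimeStampCol = @OriginalTimeStamp;

        -- Zero rows affected means another user updated the row first;
        -- the client must re-fetch and resubmit.
        IF @@ROWCOUNT = 0
            PRINT 'Row changed since last retrieve - please re-fetch and try again.';
    Because the comparison happens inside the single UPDATE statement, there is no window between checking the timestamp and writing the row in which another user could slip in.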

    Read the article

  • Silverlight ProgressBar issues with Binding

    - by Chris Skardon
    The ProgressBar pretty much does what it says on the tin: it displays progress, in a bar form (well, by default anyhow). It's pretty simple to use:
        <ProgressBar Minimum="0" Maximum="100" Value="50"/>
    This gives you a progress bar with 50% of it filled. Easy! But of course we want to use binding to change the value. Again, pretty easy: have a ViewModel with a 'Value' in it, and bind:
        <ProgressBar Minimum="0" Maximum="100" Value="{Binding Value}"/>
    Spiffy. And whilst we're at it, why not bind the Maximum value as well - after all, we can't be sure of the size of the progress, and it's a pain to have to work out the percentage (when the progress bar can do it for us):
        <ProgressBar Minimum="0" Maximum="{Binding MaximumValue}" Value="{Binding Value}"/>
    Right, this will work absolutely fine. Or will it??? On the face of it, it looks good, and testing shows no issues, until at one point we go from:
        Maximum = 100; Value = 90;
    to
        Maximum = 60; Value = 50;
    On the face of it, not unreasonable. The problem is more obvious if we look at the states of the properties after each set (initially Maximum is set at 1, Value = 0):
        Code             Maximum    Value    Value < Maximum
        Maximum = 100;   100        0        True
        Value = 90;      100        90       True
        Maximum = 60;    60         90       False
        Value = 50;      60         50       True
    Everything is good until Value exceeds Maximum, and at that point the ProgressBar breaks. That's right, it no longer updates itself: it will always look 100% full. The simple solution - always ensuring you set Value before Maximum - is fine unless you're using a ProgressBar in a less controlled environment, where, for example, you're setting a 'container' with both values at the same time.
    The example I have is in a DataTemplate. I have a DataTemplate for a BusyIndicator (specifically the BusyContentTemplate). The binding works this way:
        <BusyIndicator BusyContent="{Binding BusyContent}" BusyContentTemplate="{Binding ProgressTemplate}"/>
    With the template as the ProgressBar defined above, I was setting my BusyContent like this:
        BusyContent = content;
    And finally, 'content' is a class:
        public class ContentClass : INotifyPropertyChanged
        {
            // Obviously this is properly implemented...
            public double Maximum { get; set; }
            public double Value { get; set; }
        }
    So, as I was replacing the BusyContent wholesale, the order in which the bindings were set was outside of my control. How to go about it? Basically? Fudge it. Modify the ContentClass to include a method:
        public void Update(double value, double max)
        {
            Value = value;
            Maximum = max;
        }
    and change the setting code to:
        BusyContent.Update(content.Value, content.Maximum);
    Thereby getting the order correct. Obvious really. Meh :|

    Read the article

  • Indexed view deadlocking

    - by Dave Ballantyne
    Deadlocks can be a really tricky thing to track down the root cause of. There are lots of articles on the subject, but seldom do I find that in a production system the cause is as straightforward. That being said, deadlocks are always caused by process A needing a resource that process B has locked while process B needs a resource that process A has locked. There may be a longer chain of processes involved, but that is the basic premise.
    Here is one such (much simplified) scenario whose cause was at first non-obvious: The system has two tables, Product and Stock. The Product table holds the description and price of a product, whilst Stock records the current stock level.
        USE tempdb
        GO
        CREATE TABLE Product
        (
            ProductID INTEGER IDENTITY PRIMARY KEY,
            ProductName VARCHAR(255) NOT NULL,
            Price MONEY NOT NULL
        )
        GO
        CREATE TABLE Stock
        (
            ProductId INTEGER PRIMARY KEY,
            StockLevel INTEGER NOT NULL
        )
        GO
        INSERT INTO Product
        SELECT TOP(1000) CAST(NEWID() AS VARCHAR(255)),
               ABS(CAST(CAST(NEWID() AS VARBINARY(255)) AS INTEGER)) % 100
        FROM sys.columns a CROSS JOIN sys.columns b
        GO
        INSERT INTO Stock
        SELECT ProductID, ABS(CAST(CAST(NEWID() AS VARBINARY(255)) AS INTEGER)) % 100
        FROM Product
    There is a single stored procedure, GetStock:
        CREATE PROCEDURE GetStock
        AS
        SELECT Product.ProductID, Product.ProductName
        FROM dbo.Product
        JOIN dbo.Stock ON Stock.ProductId = Product.ProductID
        WHERE Stock.StockLevel <> 0
    Analysis of the system showed that this procedure was causing a performance overhead, and as reads of this data were many times more frequent than writes, an indexed view was created to lower the overhead.
        CREATE VIEW vwActiveStock
        WITH SCHEMABINDING
        AS
        SELECT Product.ProductID, Product.ProductName
        FROM dbo.Product
        JOIN dbo.Stock ON Stock.ProductId = Product.ProductID
        WHERE Stock.StockLevel <> 0
        GO
        CREATE UNIQUE CLUSTERED INDEX PKvwActiveStock ON vwActiveStock(ProductID)
    This worked perfectly: performance was improved, the team was cheered to the rafters, and beers all round. Then, after a while, something else happened... The system updating the data changed. The update pattern of both the Stock update and the Product update used to be:
        BEGIN TRAN
        UPDATE...
        COMMIT
        BEGIN TRAN
        UPDATE...
        COMMIT
        BEGIN TRAN
        UPDATE...
        COMMIT
    It changed to:
        BEGIN TRAN
        UPDATE...
        UPDATE...
        UPDATE...
        COMMIT
    Nothing that would raise an eyebrow in even the closest of code reviews. But after this change we saw deadlocks occurring. You can reproduce this by opening two sessions. In session 1:
        BEGIN TRANSACTION
        UPDATE Product SET ProductName = 'Test' WHERE ProductID = 998
    Then in session 2:
        BEGIN TRANSACTION
        UPDATE Stock SET StockLevel = 5 WHERE ProductID = 999
        UPDATE Stock SET StockLevel = 5 WHERE ProductID = 998
    Hop back to session 1 and:
        UPDATE Product SET ProductName = 'Test' WHERE ProductID = 999
    Looking at the deadlock graphs, we could see the contention was between two processes, one updating Stock and the other updating Product, but we knew that all the processes do to the tables is update them. Period. There are separate processes that handle the update of stock and product, and never the twain shall meet; there is no reason why one should require data from the other. Then it struck us: AH, the indexed view!
    Naturally, when you make an update to any table involved in an indexed view, the view has to be updated. When this happens, the data in all the tables involved has to be read, and that explains our deadlocks: the data from Stock is read when you update Product, and vice versa.
    The fix, once you understand the problem fully, is pretty simple: the apps did not guarantee the order in which data was updated. Luckily it was a relatively simple fix to order the updates, and the deadlocks went away (a sketch of the idea follows below). Note that there is still a *slight* risk of a deadlock occurring, if both a stock update and a product update occur at *exactly* the same time.
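    As referenced above, here is a minimal sketch of this kind of ordering fix, reusing the Product and Stock tables from the example; the particular convention shown (ascending ProductID, Product before Stock) is my assumption, not necessarily the one the original system adopted. The point is that every writer follows the same fixed order, so two sessions can never acquire locks in opposite orders and the circular wait a deadlock requires cannot form.
        -- Assumed convention: process ProductIDs in ascending order, and for
        -- each ID update Product before Stock. Applied consistently by all
        -- writers, lock acquisition order is the same in every session.
        BEGIN TRAN

        UPDATE Product SET ProductName = 'Test' WHERE ProductID = 998
        UPDATE Stock   SET StockLevel  = 5      WHERE ProductID = 998

        UPDATE Product SET ProductName = 'Test' WHERE ProductID = 999
        UPDATE Stock   SET StockLevel  = 5      WHERE ProductID = 999

        COMMIT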

    Read the article

  • Security and the Mobile Workforce

    - by tobyehatch
    Now that many organizations are moving to the BYOD philosophy (bring your own device), security for phones and tablets accessing company-sensitive information is of paramount importance. I had the pleasure of interviewing Brian MacDonald, Principal Product Manager for Oracle Business Intelligence (BI) Mobile Products, about this subject, and he shared some wonderful insight about how the Oracle BI Mobile Security Toolkit is addressing mobile security and doing some pretty cool things.
    With the rapid proliferation of phones and tablets, there is a perception that mobile devices are a security threat to corporate IT, that mobile operating systems are not secure, and that there are simply too many ways to inadvertently provide access to critical analytic data outside the firewall. Every day, I see employees working on mobile devices at the airport while waiting for their airplanes, and using public WiFi connections at coffee houses and in restaurants. These methods are not typically secure ways to access confidential company data. I asked Brian to explain why. "The native controls for mobile devices and applications are indeed insufficiently secure for corporate deployments of Business Intelligence, and most certainly for businesses where data is extremely critical - such as financial services or defense - although it really applies across the board. The traditional approach for accessing data from outside a firewall is using a VPN connection, which is not a viable solution for mobile. The problem is that once you open up a VPN connection on your phone or tablet, you are creating an opening for the whole device, for all the software and installed applications. Often the VPN connection by itself provides insufficient encryption - if any - which means that data can potentially be intercepted."
    For this reason, most organizations that deploy Business Intelligence data via mobile devices will only do so with some additional level of control. So how has the industry responded? What are companies doing to address this very real threat? Brian explained that "Mobile Device Management (MDM) and Mobile Application Management (MAM) software vendors have rapidly created solutions for mobile devices that provide a vast array of services for controlling, managing and establishing enterprise mobile usage policies. On the device front, vendors now support full levels of encryption behind the firewall, encrypted local data storage, credential management such as federated single sign-on, as well as remote wipe, geo-fencing and other risk-reducing features (should a device be lost or stolen). More importantly, these software vendors have created methods for providing these capabilities on a per-application basis, allowing for complete isolation of the application from the mobile operating system. Finally, there are tools which allow the applications themselves to be distributed through enterprise application stores, allowing IT organizations to manage who has access to the apps, control when updates to the applications will happen, and revoke access after an employee leaves." So even though an employee may be using a personal device, access to company data can be controlled while on or near the company premises.
    So, do the Oracle BI mobile products integrate with the MDM and MAM vendors? Brian explained that our customers use a wide variety of mobile security vendors and may even have more than one in-house. Therefore, Oracle is ensuring that users have a choice and a mechanism for linking together Oracle's BI offering with their chosen vendor's secure technology. The Oracle BI Mobile Security Toolkit, which is a version of the Oracle BI Mobile HD application delivered through the Oracle Technology Network (OTN) in its component parts, helps Oracle users build their own version of the Mobile HD application, sign it with their own enterprise development certificates, link with their security vendor of choice, and then deploy the combined application through whichever means they feel most appropriate, including enterprise application stores.
    Brian further explained that Oracle currently supports most of the major mobile security vendors, has close relationships with each, and maintains strong partnerships enabling both Oracle and the vendors to test, update and release a cooperating solution in lock-step. Oracle also ensures that as new versions of the Oracle HD application are made available on the Apple iTunes store, the same version is also immediately made available through the Security Toolkit on OTN.
    Rest assured that as our workforce continues down the mobile path, company-sensitive information can be secured. To listen to the entire podcast, click here. To learn more about Oracle BI Mobile HD, click here. To learn more about the BI Mobile Security Toolkit, click here.

    Read the article

  • 13 Solutions for Better Security in an Oracle Database (Best Practices)

    - by C.Muetzlitz
    External drivers such as legislation require IT to protect (our) data. But how do you actually verify the security configuration of an Oracle database? Is the required security adequately implemented, ideally in line with the actual protection needs? When did you last check the security of your Oracle database? And, even better: do you know the threats and the risks derived from them? These are all questions a responsible application owner should be able to answer immediately - or do you see it differently?
    How can you best protect yourself against threats? The only correct answer to this question is: through information and the knowledge derived from it. That information, and the knowledge hidden within it, spans a great many sources, so it is becoming ever harder to acquire the right knowledge and apply it to the protection of data and databases. Looking at the Oracle database, I recommend two essential areas you need to address or at least know about: knowing the best-practice solutions you should (and in part must) implement to guarantee good security - I call this area "13 solutions for better security in an Oracle database (best practices)" - and knowing the actual security state of an Oracle database - I call this area "Check Oracle DB Security". In this post I would like to introduce you to the fundamentals of good Oracle database security and enable you to determine the security state of your database yourself.
    13 solutions for better security in an Oracle database (best practices):
    1. Activate password management: Be aware that weak passwords represent a serious threat. Activate sensible password management (a minimal sketch follows at the end of this post).
    2. Know the feature set of your current database version, including the features that are no longer supported. The "New Features and Upgrade Guide" should become mandatory reading.
    3. Implement an appropriate security baseline. Oracle provides many guidelines here.
    4. Keep role and account management under control: this means controlled privilege assignment (least privilege), separation of duties in account management, and an ongoing review of the role management and access concept.
    5. Implement a secure database link concept: particularly in data integration scenarios, DB links are frequently configured in the database. These links can open up uncontrolled access to remote databases. Track their usage and implement a secure DB link concept. Oracle provides the corresponding guidelines.
    6. Define protection policies for your applications. This includes, for example, a proper application-owner and application-user setup.
    7. Implement the necessary protection for important data: know which data must be protected, and protect it appropriately.
    8. Control resource consumption in your database.
    9. Implement a sensible separation of duties within the database itself.
    10. Enable sensible, legally compliant auditing: legislation requires it, and Oracle specifies a minimum level of auditing.
    11. Implement processes that preserve the good state of the database.
    12. Run regular health checks. Oracle ships a complete library for this, for example with the Enterprise Manager.
    13. Define a working patch management process: know the Critical Patch Updates and act when necessary.
    Check Oracle DB Security - or: those who do not know the security state will take no action. Checking the security state of an Oracle database is very important. Various applications available on the market can be used for this. A good decision would be, for example, to use the Oracle Enterprise Manager (Cloud Control) with its Lifecycle Management, which periodically determines the security state for you. A manual check is also possible, but it requires deep knowledge. Even so, despite the high knowledge requirement, understanding how to check an Oracle database for security manually is important. Stop relying on assumptions; take the security of your database seriously and get to know the real state of your database. Knowledge of real states and knowledge of suitable concepts protect. Only then can you decide which measures are actually necessary.
    Further information: the Oracle online documentation for the database; various articles in the Knowledge Base of Oracle Support; the new book "Oracle Security in der Praxis. Vollständige Sicherheitsüberprüfung Ihrer Oracle Datenbank".
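    As referenced in item 1 above, here is a minimal sketch of password management via an Oracle profile. The profile name, the limit values, and the account name are illustrative assumptions to be aligned with your own policy; ora12c_verify_function is the complexity-check function shipped with Oracle Database 12c and later, available once the supplied password-verification script has been installed.
        -- Assumed hardening profile; adjust the limits to your own policy.
        CREATE PROFILE secure_users LIMIT
          FAILED_LOGIN_ATTEMPTS    5    -- lock the account after 5 failed logins
          PASSWORD_LOCK_TIME       1    -- keep it locked for 1 day
          PASSWORD_LIFE_TIME       90   -- force a password change every 90 days
          PASSWORD_REUSE_MAX       10   -- remember the last 10 passwords
          PASSWORD_VERIFY_FUNCTION ora12c_verify_function;

        -- Assign the profile to an (assumed) application account.
        ALTER USER app_user PROFILE secure_users;
    Accounts without an explicit profile fall back to the DEFAULT profile, so reviewing and tightening DEFAULT is part of the same exercise.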

    Read the article

  • NetworkManager Applet shows no networks

    - by Kkelk
    I am "the friend" referred to in the questions here and here. I decided to come and ask a question myself, as I can still not connect to the wireless network. I downloaded Keryx, as suggested here, and managed to download the necessary package and its dependencies. When I attempted to install the packages on Ubuntu using Keryx, Keryx just closed. Following this, I installed the packages manually using dpkg, and as far as I can tell, this was successful: kieran@ubuntu:~$ cd /host/wifi/Keryx/keryx/projects/Kieran/packages kieran@ubuntu:/host/wifi/Keryx/keryx/projects/Kieran/packages$ sudo dpkg -i *.deb [sudo] password for kieran: Selecting previously deselected package bcmwl-kernel-source. (Reading database ... 118296 files and directories currently installed.) Unpacking bcmwl-kernel-source (from bcmwl-kernel-source_5.60.48.36+bdcom-0ubuntu5_i386.deb) ... Selecting previously deselected package dkms. Unpacking dkms (from dkms_2.1.1.2-3ubuntu1.1_all.deb) ... Selecting previously deselected package fakeroot. Unpacking fakeroot (from fakeroot_1.14.4-1ubuntu1_i386.deb) ... Selecting previously deselected package linux-image. Unpacking linux-image (from linux-image_2.6.35.22.23_i386.deb) ... Selecting previously deselected package menu. Unpacking menu (from menu_2.1.44ubuntu1_i386.deb) ... Selecting previously deselected package patch. Unpacking patch (from patch_2.6-2ubuntu1_i386.deb) ... Setting up fakeroot (1.14.4-1ubuntu1) ... update-alternatives: using /usr/bin/fakeroot-sysv to provide /usr/bin/fakeroot (fakeroot) in auto mode. Setting up linux-image (2.6.35.22.23) ... Setting up menu (2.1.44ubuntu1) ... Setting up patch (2.6-2ubuntu1) ... Processing triggers for man-db ... Setting up dkms (2.1.1.2-3ubuntu1.1) ... Setting up bcmwl-kernel-source (5.60.48.36+bdcom-0ubuntu5) ... Loading new bcmwl-5.60.48.36+bdcom DKMS files... First Installation: checking all kernels... Building only for 2.6.35-22-generic Building for architecture i686 Building initial module for 2.6.35-22-generic Done. wl.ko: Running module version sanity check. - Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/2.6.35-22-generic/updates/dkms/ depmod..... DKMS: install Completed. update-initramfs: deferring update (trigger activated) Processing triggers for install-info ... Processing triggers for doc-base ... Processing 31 changed 1 added doc-base file(s)... Registering documents with scrollkeeper... Processing triggers for menu ... Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-2.6.35-22-generic Warning: No support for locale: en_GB.utf8 After rebooting, however, there were still no wireless networks in the NetworkManager Applet list. I opened the file /var/lib/NetworkManager/NetworkManager.state, and both NetworkEnabled and WirelessEnabled were set to True. While i'm very concious I may be asking a stupid question here, both my friend and I have nothing left to suggest, and as such - I would be very grateful for any answers as to how to get wireless working.

    Read the article

  • Does your analytic solution tell you what questions to ask?

    - by Manan Goel
    Analytic solutions exist to answer business questions. Conventional wisdom holds that if you can answer business questions quickly and accurately, you can make better business decisions, and therefore achieve better business results and outperform the competition. Most business questions are well understood (read: structured), so they are relatively easy to ask and answer. Questions like "what were the revenues, cost of goods sold, and margins?" and "which regions and products outperformed or underperformed?" are relatively well understood, and as a result most analytics solutions are well equipped to answer them.
    Things get really interesting when you are looking for answers but don't know what questions to ask in the first place. That's like an explorer setting out to make new discoveries. An example of this scenario is the Centers for Disease Control (CDC) in the United States trying to find the vaccine for the latest strain of the swine flu virus. The researchers at the CDC may try hundreds of options before finally discovering the vaccine. The exploration process is inherently messy and complex: it is fraught with false starts, one question or hunch leads to another, and the final result may look entirely different from what was envisioned in the beginning. Speed and flexibility are key - speed so the hundreds of possible options can be explored quickly, and flexibility because almost everything about the problem, the solutions and the process is unknown.
    Come to think of it, most organizations operate in an increasingly unknown or uncertain environment. Business leaders have to make decisions based on a largely unknown view of the future. And since the value proposition of analytic solutions is to help business leaders make better business decisions, for best results consider adding information exploration and discovery capabilities to your analytic solution. Such exploratory analysis capabilities will help business leaders perform even better by empowering them to refine their hunches, ask better questions and make better decisions. That's your analytic system not only answering questions but also suggesting what questions to ask in the first place.
    Today, most leading analytic software vendors offer exploratory analysis products as part of their analytic solutions offerings. So, what characteristics should be top of mind while evaluating the various solutions? The answer is quite simply the same characteristics that are essential for exploration and analysis: speed and flexibility. Speed is required because the system has to be agile enough to handle hundreds of different scenarios with large volumes of data across large user populations. Exploration happens at the speed of thought, so make sure your system is capable of operating at the speed of thought. Flexibility is required because the exploration process, from start to finish, is full of unknowns: unknown questions, answers and hunches. So make sure the system is capable of managing and exploring all relevant data - structured or unstructured, such as databases, enterprise applications, tweets, social media updates, documents, texts, emails, etc. - and provides a flexible, Google-like user interface to quickly explore all relevant data.
    Getting started: You can help business leaders become "Decision Masters" by augmenting your analytic solution with information discovery capabilities. For best results, make sure the solution you choose is enterprise-class and allows advanced yet intuitive exploration and analysis of complex and varied data, including structured, semi-structured and unstructured data. You can learn more about Oracle's exploratory analysis solutions by clicking here.

    Read the article
