Search Results

Search found 2127 results on 86 pages for 'simon martin'.


  • Intel Dual Band Wireless-AC 7260 keeps dropping wifi

    - by Rick T
    My Intel Dual Band Wireless-AC 7260 keeps dropping its Wi-Fi connection, and the network I was connected to disappears from the list of available networks in Network Manager. The only way to fix it is to disable Wi-Fi and re-enable it. How can I fix this? I'm using Ubuntu 14.04 64-bit. It mostly drops connections on the 5 GHz network; my other devices don't drop connections over Wi-Fi. Logs and versions:
    rt@simon:~$ uname -a
    Linux simon 3.13.0-34-generic #60-Ubuntu SMP Wed Aug 13 15:45:27 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
    rt@simon:~$ dmesg | grep iwl
    [ 3.370777] iwlwifi 0000:03:00.0: irq 46 for MSI/MSI-X
    [ 3.381089] iwlwifi 0000:03:00.0: loaded firmware version 22.24.8.0 op_mode iwlmvm
    [ 3.414637] iwlwifi 0000:03:00.0: Detected Intel(R) Dual Band Wireless AC 7260, REV=0x144
    [ 3.414695] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
    [ 3.414913] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
    [ 3.630208] ieee80211 phy0: Selected rate control algorithm 'iwl-mvm-rs'
    [ 9.304838] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
    [ 9.305068] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
    [ 605.483174] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
    [ 605.483396] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
    rt@simon:~$ cat /var/log/syslog | grep -e iwl -e 80211 | tail -n25
    Aug 14 08:13:02 simon kernel: [ 3.452780] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
    Aug 14 08:13:02 simon kernel: [ 3.630208] ieee80211 phy0: Selected rate control algorithm 'iwl-mvm-rs'
    Aug 14 08:13:06 simon NetworkManager[1125]: <info> rfkill1: found WiFi radio killswitch (at /sys/devices/pci0000:00/0000:00:1c.2/0000:03:00.0/ieee80211/phy0/rfkill1) (driver iwlwifi)
    Aug 14 08:13:06 simon NetworkManager[1125]: <info> (wlan0): using nl80211 for WiFi device control
    Aug 14 08:13:06 simon NetworkManager[1125]: <info> (wlan0): new 802.11 WiFi device (driver: 'iwlwifi' ifindex: 3)
    Aug 14 08:13:06 simon kernel: [ 9.304838] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
    Aug 14 08:13:06 simon kernel: [ 9.305068] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
    Aug 14 08:14:18 simon kernel: [ 81.230162] cfg80211: Calling CRDA to update world regulatory domain
    Aug 14 08:14:18 simon kernel: [ 81.232330] cfg80211: World regulatory domain updated:
    Aug 14 08:14:18 simon kernel: [ 81.232332] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
    Aug 14 08:14:18 simon kernel: [ 81.232333] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
    Aug 14 08:14:18 simon kernel: [ 81.232334] cfg80211: (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
    Aug 14 08:14:18 simon kernel: [ 81.232335] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
    Aug 14 08:14:18 simon kernel: [ 81.232336] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
    Aug 14 08:14:18 simon kernel: [ 81.232337] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
    Aug 14 08:23:02 simon kernel: [ 605.483174] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
    Aug 14 08:23:02 simon kernel: [ 605.483396] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S
    Aug 14 08:23:18 simon kernel: [ 621.223905] cfg80211: Calling CRDA to update world regulatory domain
    Aug 14 08:23:18 simon kernel: [ 621.228945] cfg80211: World regulatory domain updated:
    Aug 14 08:23:18 simon kernel: [ 621.228950] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
    Aug 14 08:23:18 simon kernel: [ 621.228954] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
    Aug 14 08:23:18 simon kernel: [ 621.228956] cfg80211: (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
    Aug 14 08:23:18 simon kernel: [ 621.228959] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
    Aug 14 08:23:18 simon kernel: [ 621.228961] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
    Aug 14 08:23:18 simon kernel: [ 621.228963] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
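    A workaround often suggested for 7260 connection drops on this kernel (offered only as a sketch, not a guaranteed fix; the parameter names should be verified against the installed driver with modinfo) is to disable 802.11n and the driver's power saving via module options, then reboot or reload the module:

      # list the module parameters this iwlwifi build actually supports
      modinfo iwlwifi | grep parm
      # assumed parameter names; both appear in common iwlwifi builds of this era
      echo "options iwlwifi 11n_disable=1 power_save=0" | sudo tee /etc/modprobe.d/iwlwifi-opt.conf

    If the drops continue, the syslog lines from the moment of disconnection (rather than from boot) are usually the more telling ones to capture.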

    Read the article

  • Why is Robert C. Martin called Uncle Bob?

    - by Lernkurve
    Is there a story behind it? I did a Google search for "Why is Robert C. Martin called Uncle Bob?" but didn't find an answer. More context: there is a pretty well-known person in the software engineering world named Robert C. Martin. He speaks at conferences and has published many excellent books, one of which is Clean Code (Amazon). He is the founder and CEO of Object Mentor Inc. Robert C. Martin is also called Uncle Bob, but I can't figure out why.

    Read the article

  • OOW 2013 Summary for Fusion Middleware Architects & Administrators by Simon Haslam

    - by JuergenKress
    OOW 2013 Summary for Fusion Middleware Architects & Administrators by Simon Haslam. This September during Oracle OpenWorld 2013 the weather in San Francisco, as you can see from the photo, was exceptionally sunny. The dramatic final few days of the America's Cup sailing competition, held every day in the bay, coincided with the conference and meant that there was almost a holiday feel to the whole event. Here's my annual round-up of what I think was most interesting at OpenWorld 2013 for Fusion Middleware architects and administrators; I hope you find it useful, and if you think I've missed something please add a comment!

    WebLogic and Cloud Application Foundation (CAF): The big WebLogic release of the year already happened a few months ago with 12.1.2, so I won't duplicate that here. Will Lyons discussed the WebLogic and Coherence roadmap, which essentially is that 12.1.3 will probably be released to coincide with SOA 12c next year and that 12.1.4, the next feature-rich WebLogic release, is more likely to be in 2015. This latter release will probably include full Java EE 7 support, have enhancements for multi-tenancy and further auto-scaling features to support increased density (i.e. more WebLogic usage for the same amount of hardware). There's a new Oracle Virtual Assembly Builder (OVAB) out already and an Oracle Traffic Director (OTD) 12c release round the corner too. Also of relevance to administrators is that Oracle has increased the support lifetime for Fusion Middleware 11g (e.g. WebLogic 10.3.6) so that Premier Support will now run to the end of 2018 and Extended Support until 2021 - this should remove any Oracle-driven pressure to upgrade, at least.

    Java Mission Control: Java Mission Control (JMC) is the HotSpot Java 7 version of JRockit 6 Mission Control, a very nice performance monitoring tool from Oracle's BEA acquisition. Flight Recorder is a feature built into the JVM which records diagnostic events into, typically, a circular buffer which can then be used for historical analysis, particularly in the case of a JVM crash or hang. It's been available separately for WebLogic only for perhaps a year now but, more significantly, it now includes JVM events and was bundled with JDK 7 Update 40 a few weeks ago. I attended a couple of interesting JavaOne sessions on JMC/Flight Recorder and have to say it's looking really good - it has all the previous JRMC features except for the memory leak detector, plus some enhancements around operative sets and ECID filtering, I think. Marcus also showed how you could add your own events into Flight Recorder by building your own event class - they are then available for graphing alongside all the other events in JMC. This currently uses an unsupported/undocumented API, but it's also the same one that WebLogic uses for WLDF events, so I imagine it is stable. I'm not sure whether this would be useful to custom applications, as opposed to infrastructure services or ISV packaged applications, but it was a very nice demonstration. I've been testing JMC/FR enabling on several environments recently and my confidence is growing - it feels robust and I think it could very soon be part of my standard builds. Read the full article here.

    WebLogic Partner Community: For regular information become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
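    For anyone who wants to try the Flight Recorder feature mentioned above, here is a hedged example of how it is typically switched on for a HotSpot JVM of that vintage (Oracle JDK 7u40 or later; the flag names should be checked against your exact JDK, and WebLogic layers its own WLDF integration on top):

      # hypothetical server start; only the -XX flags are the point of the example
      java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder \
           -XX:StartFlightRecording=duration=60s,filename=/tmp/sample.jfr \
           -jar myapp.jar

    The resulting .jfr file can then be opened in Java Mission Control (jmc) for the kind of historical analysis the article describes.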
    Technorati Tags: OOW, Simon Haslam, Oracle OpenWorld, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Talking JavaOne with Rock Star Simon Ritter

    - by Janice J. Heiss
    Oracle's Java Technology Evangelist Simon Ritter is well known at JavaOne for his quirky and fun-loving sessions, which this year include: CON4644 -- "JavaFX Extreme GUI Makeover" (with Angela Caicedo, on how to improve UIs in JavaFX); CON5352 -- "Building JavaFX Interfaces for the Real World" (Kinect gesture tracking and mind reading); CON5348 -- "Do You Like Coffee with Your Dessert?" (some cool demos of Java on the Raspberry Pi); CON6375 -- "Custom JavaFX Charts" (how to extend JavaFX Chart controls with some interesting things).

    I recently asked Ritter about the significance of the Raspberry Pi, the topic of one of his sessions: a credit card-sized single-board computer developed in the UK with the intention of stimulating the teaching of basic computer science in schools. "I don't think there's one definitive thing that makes the RP significant," observed Ritter, "but a combination of things that really makes it stand out. First, it's the cost: $35 for what is effectively a completely usable computer. OK, so you have to add a power supply, SD card for storage and maybe a screen, keyboard and mouse, but this is still way cheaper than a typical PC. The choice of an ARM processor is also significant, as it avoids problems like cooling (no heat sink or fan) and can use a USB power brick. Combine these two things with the immense groundswell of community support and it provides a fantastic platform for teaching young and old alike about computing, which is the real goal of the project." He informed me that he'll be at the Raspberry Pi meetup on Saturday (not part of JavaOne). Check out the details here.

    JavaFX Interfaces: When I asked about how JavaFX can interface with the real world, he said that there are many ways. "JavaFX provides you with a simple set of programming interfaces that can create complex, cool and compelling user interfaces," explained Ritter. "Because it's just Java code you can combine JavaFX with any other Java library to provide data to display and control the interface. What I've done for my session is look at some of the possible ways of doing this using some of the amazing hardware that's available today at very low cost. The Kinect sensor has added a new dimension to gaming in terms of interaction; there's a Java API to access this so you can easily collect skeleton tracking data from it. Some clever people have also written libraries that can track gestures like swipes, circles, pushes, and so on. We use these to control parts of the UI. I've also experimented with a Neurosky EEG sensor that can in some ways 'read your mind' (well, at least measure some of the brain functions like attention and meditation). I've written a Java library for this that I include as a way of controlling the UI. We're not quite at the stage of just thinking a command though!"

    Here Comes Java Embedded: And what, from Ritter's perspective, is the most exciting thing happening in the world of Java today? "I think it's seeing just how Java continues to become more and more pervasive," he said. "One of the areas that is growing rapidly is embedded systems. We've talked about the 'Internet of things' for many years; now it's finally becoming a reality. With the ability of more and more devices to include processing, storage and networking we need an easy way to write code for them that's reliable, has high performance, and is secure. Java fits all these requirements. With Java Embedded being a conference within a conference, I'm very excited about the possibilities of Java in this space." Check out Ritter's sessions or say hi if you run into him. Originally published on blogs.oracle.com/javaone.

    Read the article

  • JavaOne Afterglow by Simon Ritter

    - by JuergenKress
    Last week was the eighteenth JavaOne conference and I thought it would be a good idea to write up my thoughts about how things went. Firstly, thanks to Yoshio Terada for the photos; I didn't bother bringing a camera with me, so it's good to have some pictures to add to the words. Things kicked off full-throttle on Sunday. We had the Java Champions and JUG leaders breakfast, which was a great way to meet up with a lot of familiar faces and start talking all things Java. At midday the show really started with the Strategy and Technical Keynotes. This was always going to be a tougher job than some years because there was no big shiny ball to reveal to the audience. With the Java EE 7 spec having been finalised a few months ago and Java SE 8, Java ME 8 and JDK 8 not due until the start of next year, there was not going to be any big announcement. I thought both keynotes worked really well, each focusing on the things most important to Java developers.

    Strategy: One of the things that is becoming more and more prominent in many companies' marketing is the Internet of Things (IoT). We've moved from the conventional desktop/laptop environment to much more mobile connected computing with smart phones and tablets. The next wave of the internet is not just billions of people connected, but tens or hundreds of billions of devices connected to the network, all generating data and providing much more precise control of almost any process you can imagine. This ties into the ideas of Big Data and Cloud Computing, but implementation is certainly not without its challenges. As Peter Utzschneider explained, it's about three Vs: Volume, Velocity and Value. All these devices will create huge volumes of data at very high speed; to avoid being overloaded these devices will need some sort of processing capability that can filter the useful data from the redundant. The raw data then needs to be turned into useful information that has value. To make this happen will require applications on devices, at gateways and on the back-end servers, all very tightly integrated. This is where Java plays a pivotal role: write once, run everywhere becomes essential, and having nine million developers fluent in the language makes it the de facto lingua franca of IoT. There will be lots more information on how this will become a reality, so watch this space.

    Technical: How do we make the IoT a reality, technically? Using the game of chess, Mark Reinhold, with the help of people like John Ceccarelli, Jasper Potts and Richard Bair, showed what you could do. Using Java EE on the back end, Java SE and JavaFX on the desktop and Java ME Embedded and JavaFX on devices, they showed a complete end-to-end demo. This was really impressive, using 3D features from JavaFX 8 (that's included with JDK 8) to make a 3D animated Duke chess board. Jasper also unveiled the "DukePad", a home-made tablet using a Raspberry Pi, touch screen and accelerometer. Although the Raspberry Pi doesn't have earth-shattering CPU performance (about the same level as a mid-1990s Pentium), it does have really quite good GPU performance, so the GUI works really well. The plans are all open sourced and available here. One small but very significant announcement was that Java SE will now be included with the NOOBS and Raspbian Linux distros provided by the Raspberry Pi Foundation (these can be found here). No more hassle having to download and install the JDK after you've flashed your SD card OS image. The finale was the Raspberry Pi powered chess-playing robot. Really very, very cool. I talked to Jasper about this and he told me each of the chess pieces had been 3D printed and then he had to use acetone to give them a glossy finish (not sure what his wife thought of him spending hours in the kitchen in a gas mask!). The way the robot arm worked was very impressive, as it did not have any positioning data (like a potentiometer connected to each motor) but relied purely on carefully calibrated timings to get the arm to the right place. Having done things like this myself in the past, I know how easy it is for a small error to get magnified into very big mistakes.

    Here are some pictures from the keynote: the "DukePad" architecture; a nice clear perspex case so you can see the innards; the very nice 3D chess set (Maya's obviously a great tool). Read the full article here.

    WebLogic Partner Community: For regular information become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: Simon Ritter, JavaOne, OOW, Oracle OpenWorld, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Simon Sabin has a great discount for the SQL Server Masterclass

    - by Testas
    Check out Simon's blog post to get a £100 discount for this event: http://sqlblogcasts.com/blogs/simons/archive/2010/05/14/paul-and-kimberly-are-coming-the-uk.aspx Remember as well: pencil the 17th of June into your diary and send an email to [email protected] with "Masterclass" in the subject line. On Friday 25th May we will draw out a name, and the winner will get free entrance to a must-see seminar on SQL Server from two of the industry's leading experts. Thanks, Chris

    Read the article

  • Not so long ago in a city not so far away by Carlos Martin

    - by Maria Sandu
    This is the story of how the EMEA Presales Center turned an Oracle intern into a trusted technology advisor for both Oracle's Sales teams and customers. It was the summer of 2011, as I was finishing my Computer Engineering studies as well as my internship at Oracle, when I was offered what could possibly be THE dream job for any young European Computer Engineer. Apart from that, it also seemed like the role was particularly tailored to me, as I could leverage almost everything I learned at University and during the internship. And all of it in one of the best cities to live in, not only in my home country but arguably in Europe: Malaga!

    A day at EPC: As part of the EPC Technology pillar, and later on completely focused on WebCenter, there was no way to describe a normal day on the job, as each day had something unique. Some days I was researching documentation in order to elaborate accurate answers for a customer's question within a Request for Information or Proposal (RFI/RFP), other days I was doing heavy programming in order to bring a Proof of Concept (PoC) for a customer to life, and last but not least, some days I presented to the customer via web conference the demo I had built for them over the past weeks. So as you can see, the role has research, development and presentation; could you ask for more? Well, don't worry, because there IS more!

    Internationality: As the organization's name suggests, the EMEA Presales Center is the center of presales within Europe, the Middle East and Africa, so I got the chance to work with great professionals from all these regions, expanding my network and learning things from one country to apply them to others. In addition to that, the teams based in the Malaga office are comprised of many young professionals hailing mainly from Western and Central European countries (although there are a couple of exceptions!) with very different backgrounds and personalities, which guaranteed many laughs and stories during lunch or coffee breaks (or even while working on projects!). Furthermore, having EPC offices in Bucharest and Bangalore and thanks to today's tele-presence technologies, I was working every day with people from India or Romania as if they were sitting right next to me, and the bonding with them got stronger day by day.

    Career development: Apart from the research and self-study I mentioned earlier, one of the EPC's Key Performance Indicators (KPIs) is that 15% of your time is spent on training, so you get lots and lots of training in order to develop both your technical product knowledge and your presentation, negotiation and other soft skills. Sometimes the training is via webcast, sometimes the trainer comes to the office and sometimes, the best times, you get to travel abroad to attend a training, which also helps you to further develop your network by meeting face to face with many people you only know from some email or instant messaging interaction. And as the months go by, your skills improving at a very fast pace, your relevance increasing with each new project you successfully deliver, it's only a matter of time (and a bit of self-promotion!) before you get the attention of the manager of a more senior team and are offered the opportunity to take a new step in your professional career. For me it took 2 years to move to my current position, Technology Sales Consultant in the Oracle Direct organization.
    During those 2 years I had built a good relationship with the Oracle Direct Spanish sales reps and sales managers, who are also based in the Malaga office. I supported their former Sales Consultant in a couple of presentations and demos, and they were very happy with my overall performance and attitude, so even before the position eventually became vacant I got a heads-up from them in advance that their current Sales Consultant was going to move to a different position. To me it felt like a natural step; the same as when I joined EPC, I had at least 50% of the "homework" already done but wanted to experience that extra 50% to add new product and soft skills to my arsenal. The rest is history: I've been in the role for more than half a year as I'm writing this, have already achieved some important wins, gained a lot of trust and confidence in front of customers and broadened my view of Oracle's Fusion Middleware portfolio. I look back at the 2 years I spent in EPC and think: "boy, I'd recommend that experience to absolutely anyone with the slightest interest in IT; there are as many different things you can do as there are different kinds of roles you can end up taking thanks to the experience gained at EPC."

    Read the article

  • Learn to analyse an RSIT (Random System Information Tool) report, by Simon-Sayce

    Hello, Sayce, one of the members of the editorial team, would like to invite you to read the following article: Random System Information Tool. This article is aimed at anyone who wants to check the integrity of their PC using the RSIT tool. The course explains, line by line, the report generated by the tool. Don't hesitate to share your comments. We wish you a pleasant read.

    Read the article

  • "Agile Principles, Patterns, and Practices in C#": Is this just a .NET-translation of the popular Uncle Bob book?

    - by Louis Rhys
    I found this book on Amazon: Agile Principles, Patterns, and Practices in C#, written by Robert C. Martin and Micah Martin. Is it merely a .NET port of the older, more popular Agile Software Development, Principles, Patterns, and Practices? Or is it just a new book trying to take advantage of the other book's popularity? If I am a .NET developer who hasn't read either book, which one would you recommend?

    Read the article

  • Source code for the Payroll example in Robert C. Martin's book

    - by bitbonk
    Throughout the book "Agile Principles, Patterns, and Practices in C#" by Robert C. Martin, a small Payroll application is built. While most of the source code is printed in place, some classes are missing and some are incomplete. The book says on the first page: "The book includes many source code examples that are also available for download from the authors' Web site." Unfortunately this seems to be a lie, unless either this is not the authors' website (the book forgets to mention the authors' website address) or I am blind. Does anyone have the complete source code for that book, preferably in the form of a Visual Studio project, or know where I can find it?

    Read the article

  • Question About Example In Robert C Martin's _Clean Code_

    - by Jonah
    This is a question about the concept of a function doing only one thing. It won't make sense without some relevant passages for context, so I'll quote them here. They appear on pgs 37-38:

    To say this differently, we want to be able to read the program as though it were a set of TO paragraphs, each of which is describing the current level of abstraction and referencing subsequent TO paragraphs at the next level down. To include the setups and teardowns, we include setups, then we include the test page content, and then we include the teardowns. To include the setups, we include the suite setup if this is a suite, then we include the regular setup. It turns out to be very difficult for programmers to learn to follow this rule and write functions that stay at a single level of abstraction. But learning this trick is also very important. It is the key to keeping functions short and making sure they do "one thing." Making the code read like a top-down set of TO paragraphs is an effective technique for keeping the abstraction level consistent.

    He then gives the following example of poor code:

    public Money calculatePay(Employee e) throws InvalidEmployeeType {
        switch (e.type) {
            case COMMISSIONED:
                return calculateCommissionedPay(e);
            case HOURLY:
                return calculateHourlyPay(e);
            case SALARIED:
                return calculateSalariedPay(e);
            default:
                throw new InvalidEmployeeType(e.type);
        }
    }

    and explains the problems with it as follows: There are several problems with this function. First, it's large, and when new employee types are added, it will grow. Second, it very clearly does more than one thing. Third, it violates the Single Responsibility Principle (SRP) because there is more than one reason for it to change. Fourth, it violates the Open Closed Principle (OCP) because it must change whenever new types are added.

    Now my questions. To begin, it's clear to me how it violates the OCP, and it's clear to me that this alone makes it poor design. However, I am trying to understand each principle, and it's not clear to me how the SRP applies. Specifically, the only reason I can imagine for this method to change is the addition of new employee types. There is only one "axis of change." If details of the calculation needed to change, this would only affect the submethods like calculateHourlyPay(). Also, while in one sense it is obviously doing 3 things, those three things are all at the same level of abstraction, and can all be put into a TO paragraph no different from the example one: TO calculate pay for an employee, we calculate commissioned pay if the employee is commissioned, hourly pay if he is hourly, etc. So aside from its violation of the OCP, this code seems to conform to Martin's other requirements of clean code, even though he's arguing it does not. Can someone please explain what I am missing? Thanks.
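    For readers trying to see the OCP argument in code, here is a minimal, hypothetical sketch of the polymorphic alternative that Clean Code points towards (the class names and the use of double instead of a Money type are illustrative, not the book's source): each employee type carries its own pay rule, so adding a type means adding a class rather than editing an existing switch.

      // Hypothetical sketch; names and the double type are illustrative only.
      abstract class Employee {
          abstract double calculatePay();   // each subclass owns its own pay rule
      }

      class HourlyEmployee extends Employee {
          private final double hours, rate;
          HourlyEmployee(double hours, double rate) { this.hours = hours; this.rate = rate; }
          @Override double calculatePay() { return hours * rate; }
      }

      class SalariedEmployee extends Employee {
          private final double monthlySalary;
          SalariedEmployee(double monthlySalary) { this.monthlySalary = monthlySalary; }
          @Override double calculatePay() { return monthlySalary; }
      }

      public class PayrollDemo {
          public static void main(String[] args) {
              // Callers never switch on a type code; they just ask each employee for its pay.
              Employee[] staff = { new HourlyEmployee(160, 25.0), new SalariedEmployee(5000.0) };
              for (Employee e : staff) {
                  System.out.println(e.calculatePay());
              }
          }
      }

    In the book itself the one remaining switch is pushed down into a single abstract factory that constructs the right Employee subclass, so even that last OCP violation is confined to one place; the sketch above only shows the polymorphic half of that idea.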

    Read the article

  • Game of Thrones: George R. R. Martin's secret weapon against viruses, a DOS PC running WordStar 4.0 with no internet connection

    Game of Thrones: George R. R. Martin's secret weapon against viruses. George R. R. Martin, author of the Game of Thrones saga and co-producer of the TV series of the same name, revealed on an American talk show that he has a secret weapon against the viruses that could attack his computer and destroy his documents. To write his books, he uses a computer that is not connected to the internet, runs DOS, and uses WordStar 4.0 as its word processor! Asked why he sticks with this old...

    Read the article

  • Installing MySQL without root access

    - by vinay
    I am trying to install MySQL without root permissions. I ran through the following steps:
    1. Download MySQL Community Server 5.5.8, Linux - Generic, Compressed TAR Archive.
    2. Unpack it, for example to /home/martin/mysql.
    3. Create a my.cnf file in your home directory. The file contents should be:
    [server]
    user=martin
    basedir=/home/martin/mysql
    datadir=/home/martin/sql_data
    socket=/home/martin/socket
    port=3666
    4. Go to the /home/martin/mysql directory and execute:
    ./scripts/mysql_install_db --defaults-file=~/my.cnf --user=martin --basedir=/home/martin/mysql --datadir=/home/martin/sql_data --socket=/home/martin/socket
    5. Your MySQL server is ready. Start it with this command:
    ./bin/mysqld_safe --defaults-file=~/my.cnf &
    When I try to change the MySQL password it gives the error: Cannot connect to mysql server through socket '/tmp/mysql.sock'. How can I change this path and check whether mysql.sock is created or not?
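    A guess at what is happening, since the client command isn't shown: the mysql and mysqladmin clients read the [client]/[mysql] option groups, not [server], so with only a [server] section they fall back to the default /tmp/mysql.sock. Two common ways to point them at the custom socket (a sketch reusing the paths from the steps above):

      # pass the socket explicitly when changing the password or connecting
      ./bin/mysqladmin --socket=/home/martin/socket -u root password 'new-password'
      ./bin/mysql --socket=/home/martin/socket -u root -p

      # or add a client section, and either pass --defaults-file=~/my.cnf to the
      # client as well or put this in ~/.my.cnf, which the client reads automatically
      [client]
      socket=/home/martin/socket

    Whether the socket file was created at all can be checked with ls -l /home/martin/socket while mysqld_safe is running.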

    Read the article

  • Adding a user to Samba

    - by JustMaximumPower
    I'm trying to set up some Samba shares in my home network on an Ubuntu 12.04 machine. Everything works fine for my user account (max), but I cannot add any new user: every time I try to add a new user, they cannot use the shares. It's likely that the error is something very basic in the concept of Samba, but please don't just tell me to read the docs - I've been trying that for about 2 weeks now. I set up the server with my user max, who can mount transfer and the share max. Then I added the user simon with

    sudo adduser --no-create-home --disabled-login --shell /bin/false simon

    because the user should not be able to ssh into the machine. I did a sudo smbpasswd -a simon, set a (Samba) password for simon and added a share for simon. I also added simon to transferusers to give him access to the share transfer. But simon can't connect to transfer or simons.

    ---- output of testparm: -------
    Load smb config files from /etc/samba/smb.conf
    rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
    Processing section "[printers]"
    Processing section "[print$]"
    Processing section "[max]"
    Processing section "[simons]"
    Processing section "[transfer]"
    Loaded services file OK.
    Server role: ROLE_STANDALONE
    Press enter to see a dump of your service definitions

    [global]
    server string = %h server (Samba, Ubuntu)
    map to guest = Bad User
    obey pam restrictions = Yes
    pam password change = Yes
    passwd program = /usr/bin/passwd %u
    passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
    unix password sync = Yes
    syslog = 0
    log file = /var/log/samba/log.%m
    max log size = 1000
    dns proxy = No
    usershare allow guests = Yes
    panic action = /usr/share/samba/panic-action %d
    idmap config * : backend = tdb

    [printers]
    comment = All Printers
    path = /var/spool/samba
    create mask = 0700
    printable = Yes
    print ok = Yes
    browseable = No

    [print$]
    comment = Printer Drivers
    path = /var/lib/samba/printers

    [max]
    comment = Privater share von Max
    path = /media/Main/max
    read only = No
    create mask = 0700

    [simons]
    comment = Privater share von Simon
    path = /media/Main/simon
    read only = No
    create mask = 0700

    [transfer]
    comment = Transferlaufwerk
    path = /media/Main/transfer
    read only = No
    create mask = 0755

    ---- The files in /media/Main: ------
    drwxrwxr-x 17 max   max             4096 Oct  4 19:13 max/
    drwx------  5 simon max             4096 Aug  4 15:18 simon/
    drwxrwxr-x  7 max   transferusers 258048 Oct  1 22:55 transfer/
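    Not an answer, but two checks that follow directly from the listing above (the group and share names are taken from the post; the commands themselves are standard Ubuntu/Samba tools): /media/Main/transfer is only writable by the transferusers group, so simon must actually be a member of it, and smbclient lets you test his Samba login without involving any client-side mounting.

      # confirm simon's group membership; add him to the group if it is missing
      id simon
      sudo adduser simon transferusers

      # test the Samba credentials directly against the local server
      smbclient //localhost/transfer -U simon
      smbclient //localhost/simons -U simon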

    Read the article

  • Ubuntu 14.04: After login, the top and side panels don't load and System Settings doesn't open

    - by Löwe Simon
    I have Ubuntu 14.04 on a Medion Erazer with the Nvidia GTX570M card. At first, before I had installed the latest Nvidia driver, each time I tried to suspend my PC it would freeze on the login screen. Once I managed to install the Nvidia drivers everything seemed to work, until I noticed that System Settings would not open any more. I then rebooted my PC, but when I logged in I just ended up with my cursor and no top panel or side panel. Thankfully I had installed the GNOME desktop, which works, except that System Settings does not open in the GNOME desktop either. I have tried the steps in "Problems after upgrading to 14.04 (only background and pointer after login)", except for reinstalling Nvidia, since that would put me back at square one with suspend not working. When I try to launch System Settings through the terminal, I get:

    simon@simon-X681X:~$ unity-control-center
    libGL error: No matching fbConfigs or visuals found
    libGL error: failed to load driver: swrast
    (unity-control-center:2806): Gdk-ERROR **: The program 'unity-control-center' received an X Window System error.
    This probably reflects a bug in the program.
    The error was 'BadLength (poly request too large or internal Xlib length erro'.
    (Details: serial 229 error_code 16 request_code 155 (GLX) minor_code 1)
    (Note to programmers: normally, X errors are reported asynchronously; that is, you will receive the error a while after causing it. To debug your program, run it with the GDK_SYNCHRONIZE environment variable to change this behavior. You can then get a meaningful backtrace from your debugger if you break on the gdk_x_error() function.)
    Trace/breakpoint trap (core dumped)

    I know the two problems seem unrelated, but since they occurred together, I am wondering if that is really the case. Your help is much appreciated, Simon
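    The "failed to load driver: swrast" lines usually indicate that the session is not picking up the Nvidia OpenGL libraries. Two read-only diagnostic commands (they change nothing on the system, and glxinfo assumes the mesa-utils package is installed) can show which libGL and which renderer are actually in use:

      # which libGL libraries the dynamic linker sees, and in what order
      ldconfig -p | grep libGL

      # which OpenGL renderer the running X session reports
      glxinfo | grep -i "opengl renderer"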

    Read the article

  • One to many too much data returned - MySQL

    - by Evan McPeters
    I have 2 related MySQL tables in a one-to-many relationship:
    Customers: cust_id, cust_name, cust_notes
    Orders: order_id, cust_id, order_comments
    So, if I do a standard join to get all customers and their orders via PHP, I return something like:
    Jack Black, jack's notes, comments about jack's 1st order
    Jack Black, jack's notes, comments about jack's 2nd order
    Simon Smith, simon's notes, comments about simon's 1st order
    Simon Smith, simon's notes, comments about simon's 2nd order
    The problem is that cust_notes is a text field and can be quite large (a couple of thousand words), so returning that field for every order seems inefficient. I could use GROUP_CONCAT and joins to return all order_comments on a single row, BUT order_comments is a large text field too, so it seems like that could create a problem. Should I just use two separate queries, one for the customers table and one for the orders table? Is there a better way?
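    As a sketch of the two-query option mentioned above (not necessarily the best answer for every workload): fetch the wide customer rows once and the narrow order rows once, then group the orders by cust_id in PHP, so the large cust_notes text crosses the wire only once per customer.

      -- one row per customer, including the large notes column
      SELECT cust_id, cust_name, cust_notes FROM Customers;

      -- orders only; group these by cust_id on the PHP side
      SELECT cust_id, order_id, order_comments FROM Orders ORDER BY cust_id;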

    Read the article

  • Error creating Rails DB using rake db:create

    - by Simon
    Hi, I'm attempting to get my first "hello world" Rails example going using the Rails getting started guide on my OS X 10.6.3 box. When I go to execute the first rake db:create command (I'm using MySQL) I get:

    simon@/Users/simon/source/rails/blog/config: rake db:create
    (in /Users/simon/source/rails/blog)
    Couldn't create database for {"reconnect"=>false, "encoding"=>"utf8", "username"=>"root", "adapter"=>"mysql", "database"=>"blog_development", "pool"=>5, "password"=>nil, "socket"=>"/opt/local/var/run/mysql5/mysqld.sock"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)

    I found plenty of Stack Overflow questions addressing this problem with the following advice:
    Verify that the user and password are correct (I'm running with no password for root on my dev box).
    Verify that the socket is correct - I can cat the socket, so I assume it's correct.
    Verify that the user can create a DB (as you can see, root can connect and create this DB no problem):

    simon@/Users/simon/source/rails/blog/config: mysql -uroot -hlocalhost
    Welcome to the MySQL monitor. Commands end with ; or \g.
    Your MySQL connection id is 16
    Server version: 5.1.45 Source distribution
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    mysql> create database blog_development;
    Query OK, 1 row affected (0.00 sec)

    Any idea on what might be going on here?
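    One assumption worth checking, since config/database.yml isn't shown: the socket path Rails is using (/opt/local/var/run/mysql5/mysqld.sock) has to match the socket the running server actually created. You can ask the server where its socket is, and if the paths differ, either correct the socket line or switch that environment to TCP:

      mysql -uroot -e "SHOW VARIABLES LIKE 'socket'"

      # config/database.yml - hypothetical development block using TCP instead of a socket
      development:
        adapter: mysql
        database: blog_development
        username: root
        password:
        host: 127.0.0.1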

    Read the article

  • unmet dependencies and broken count>0 problem

    - by Simon
    I tried installing fbreader, following all the steps, but ended up with unmet dependencies. I also think a file is referenced by two packages at once, and that is what is killing it. Any ideas how I can fix it? I've done a lot of research and tried:

    simon@simon-Studio-1558:~$ sudo apt-get -f install
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following packages were automatically installed and are no longer required:
    dkms patch
    Use 'apt-get autoremove' to remove them.
    The following extra packages will be installed:
    libzlcore0.12
    The following NEW packages will be installed:
    libzlcore0.12
    0 upgraded, 1 newly installed, 0 to remove and 61 not upgraded.
    6 not fully installed or removed.
    Need to get 0 B/270 kB of archives.
    After this operation, 811 kB of additional disk space will be used.
    Do you want to continue [Y/n]? y
    (Reading database ... 179860 files and directories currently installed.)
    Unpacking libzlcore0.12 (from .../libzlcore0.12_0.12.10dfsg-4_i386.deb) ...
    dpkg: error processing /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb (--unpack):
    trying to overwrite '/usr/lib/libzlcore.so.0.12.10', which is also in package libzlcore 0.12.10-1
    No apport report written because MaxReports is reached already
    dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
    Errors were encountered while processing:
    /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    The part it really isn't liking is:

    dpkg: error processing /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb (--unpack):
    trying to overwrite '/usr/lib/libzlcore.so.0.12.10', which is also in package libzlcore 0.12.10-1

    Any ideas? Also, I don't care about keeping the program, but the error is stopping sudo apt-get remove fbreader from working too.
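    A commonly suggested way past this particular "trying to overwrite ... which is also in package libzlcore" conflict (a suggestion only; --force-overwrite can mask real packaging bugs, and removing the old package may itself be refused if something still depends on it) is either to remove the older libzlcore package or to let dpkg overwrite the one clashing file, then let apt finish:

      # option 1: remove the older package that owns the clashing file
      sudo dpkg -r libzlcore
      sudo apt-get -f install

      # option 2: install the cached .deb while permitting the single file overwrite
      sudo dpkg -i --force-overwrite /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb
      sudo apt-get -f install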

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index. This is the final article in the quick guide to Oracle IRM. If you've followed everything prior, you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide, as this next article is common to both.

    Contents: Why this is the most important part... | Understanding the classification and standard rights model | Identifying business use cases | Creating an effective IRM classification model | One single classification across the entire business | A context for each and every possible granular use case | What makes a good context? | Deciding on the use of roles in the context | Reviewing the features and security for context roles | Summary

    Why this is the most important part... Now the real work begins. Installing and getting an IRM system running is as simple as following instructions. However, to actually have an IRM technology easily protecting your most sensitive information without interfering with your users' existing daily workflows, and to be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle has over 10 years of experience in helping customers, and it is important you understand the following to be successful in securing access to your most confidential information.

    Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports, and no matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve:
    Secure the content at the earliest point possible, and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results in a more secure deployment.
    K.I.S.S. (Keep It Simple, Stupid). Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured, which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience.
    Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them of the value of the technology to both your business and them.

    Before getting into the detail, I must pay homage to Martin White, Vice President of client services at SealedMedia, the company Oracle acquired and which created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple-to-use IRM deployments.
    No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in solutions that are difficult to use and manage, and ultimately insecure. The advice and information that follows was born with Martin, who is still delivering IRM consulting with customers and can be found at www.thinkers.co.uk. It is thanks to Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but that Oracle and their partners have the most experience in delivering successful document security solutions.

    Understanding the classification and standard rights model. The goal of any successful IRM deployment is to balance the increase in security the technology brings against over-complicating the way people use secured content and causing a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different from an insecure one. That is, until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try to print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between:
    A group of related documents
    The people that use the documents
    The roles that these people perform
    The rights that these people need to perform their role
    The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts, but none of the rights, user or group information is stored within the content itself. Sealing only places in the document information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification, and a user needs only one right assigned to be able to access all of those documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts to which your sensitive information will be secured. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails.

    Identifying business use cases. Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property or financial documents. For this and every subsequent use case you must understand how users create and work with documents, to whom they are distributed and how the recipients should interact with them.
    A successful IRM deployment should start with one well-identified use case (we go through some examples towards the end of this article); then, after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and whether the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further.

    Creating an effective IRM classification model. Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built-in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates.

    The first task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents, and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity. The question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum...

    One single classification across the entire business. Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in considering this:
    Document security classification decisions are simple. You only have one context to choose from!
    User provisioning is simple: just make sure everyone has a role in the only context in the business.
    Administration is very low: if you assign rights to groups from the business user repository you probably never have to touch IRM administration again.
    There are however some obvious downsides to this model:
    All users have access to all IRM-secured content. So potentially a sales person could access sensitive mergers and acquisitions documents, if they can get their hands on a copy that is.
    You cannot delegate control of different documents to different parts of the business; this may not satisfy your regulatory requirements for the separation and delegation of duties.
    Changing a user's role affects every single document ever secured.
    Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value.
    Once secured, IRM-protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, and imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.

    A context for each and every possible granular use case. Now let's think about the total opposite of a single-context design. What if you created a context for each and every single defined business need and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity: restricted, confidential and top secret. Then let's say that each group and department needs to define access to information for both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge number of contexts. For example, let's just look at the resulting contexts for one engineering group:
    Q1FY2010 Restricted Internal - Engineering Group 1 - Research
    Q1FY2010 Restricted Internal - Engineering Group 1 - Design
    Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
    Q1FY2010 Restricted External - Engineering Group 1 - Research
    Q1FY2010 Restricted External - Engineering Group 1 - Design
    Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
    Q1FY2010 Confidential Internal - Engineering Group 1 - Research
    Q1FY2010 Confidential Internal - Engineering Group 1 - Design
    Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
    Q1FY2010 Confidential External - Engineering Group 1 - Research
    Q1FY2010 Confidential External - Engineering Group 1 - Design
    Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
    Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
    Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
    Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
    Q1FY2010 Top Secret External - Engineering Group 1 - Research
    Q1FY2010 Top Secret External - Engineering Group 1 - Design
    Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing
    That is already 18 contexts for a single engineering group in one quarter; multiply by the 6 groups and you have 108 to look after, and because each group is creating/reviewing another 18 every 3 months, a single group alone accumulates 72 contexts over a year. What would be the advantages of such a complex classification model?
    You can satisfy very granular rights requirements, for example that only an authorized engineering group 1 researcher can create a top secret report for internal access, and his role will be reviewed on a very frequent basis.
    Your business may have very complex rights requirements, and mapping these directly to IRM may be an obvious exercise.
    The disadvantages of such a classification model are significant...
    Huge administrative overhead. Someone in the business must manage, review and administrate each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year.
    From an end user's perspective life will be very confusing. Imagine if a user has rights in just 6 of these contexts.
    They may be able to print content from one but not another, or be able to edit content in 2 contexts but not the other 4. Such confusion at the end-user level causes frustration and resistance to the use of the technology.
    Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights.
    Hard to understand who can do what with what. Imagine being the VP of engineering and, as part of an internal security audit, being asked the question, "What rights do researchers have to our top secret information?". In this complex model the answer is not simple; it would depend on many roles in many contexts.
    Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable and too few contexts do not give fine enough granularity.

    What makes a good context? Good context design derives mainly from how well you understand your business requirements for securing access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However, there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created, or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which helps you define a context. First ask these questions about a set of documents:
    What is the topic?
    Who are the legitimate contributors on this topic?
    Who are the authorized readership?
    If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors; therefore it is better for a document to be sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine it as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try to tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling the technology out wider across the business.

    Deciding on the use of roles in the context. Once you have decided on that first initial use case and a context to create, let's look at the details you need to decide upon.
Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine it as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try to tackle many different problems from the outset. Do one, learn from the process, refine it, and then take what you have learned into the next use case; refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business.

Deciding on the use of roles in the context

Once you have decided on that first initial use case and a context to create, let's look at the details you need to decide upon. For each context, identify:

Administrative roles
Business owner: the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase, and usually the person with the most at risk when sensitive information is lost.
Point of contact: the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
Context administrator: the person who will enact the decisions of the business owner. Sometimes the point of contact, sometimes a trusted IT person.

Document related roles
Contributors: the people who create and edit documents in this context.
Reviewers: the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See the later discussion on Published-work and Work-in-Progress.)
Readers: the people who read documents from this context.

Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles map directly to the roles available in Oracle IRM; a simple way of writing such a plan down is sketched below.
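As a purely illustrative sketch, and not the Oracle IRM API (the class, fields and example values below are my own shorthand), capturing these decisions in one small structure gives the business owner a single page to review before anything is configured on the server:

    from dataclasses import dataclass, field

    @dataclass
    class ContextPlan:
        name: str
        business_owner: str            # decides who may or may not see content
        point_of_contact: str          # handles requests for access
        context_administrator: str     # enacts the business owner's decisions
        contributors: set = field(default_factory=set)  # create and edit sealed documents
        reviewers: set = field(default_factory=set)     # review, but cannot seal to this classification
        readers: set = field(default_factory=set)       # open and read only

    plan = ContextPlan(
        name="Engineering - Confidential",
        business_owner="VP Engineering",
        point_of_contact="engineering admin team",
        context_administrator="IT security",
        contributors={"design leads"},
        reviewers={"programme office"},
        readers={"all engineering staff"},
    )

The same person can legitimately appear in more than one field; the value of writing the plan down is that the business owner can see at a glance exactly who is being trusted with what before the context is created.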
Reviewing the features and security for context roles

At this point we have decided on a classification of information, and we understand what roles people in the business will play when administering this classification and how they will interact with content. The final piece of the puzzle for our first context is to look at the permissions people will have to sealed documents. First, think about why you are protecting the documents in the first place: to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent people from doing legitimate work. This is an important point; with IRM you can erect many barriers to accessing content, yet with too many restrictions authorized users will often find ways to circumvent the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However, I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security later. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works, you can start to introduce more restrictions and complexity.

Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for granting appropriate access to people with legitimate needs will be rewarded with acceptance from the user community. These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.

The big print issue...

Printing is often an issue of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, and it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However, it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to give print rights.

Oracle IRM sealed documents can contain watermarks that expose information about the user, the time and location of access, and the classification of the document. This information resides in the printed copy, making it easier to trace who printed it.
Printed documents are slower to distribute than their digital counterparts, so time sensitive information in printed format may present a lower risk.
Print activity is audited, so you can monitor and react to users abusing print rights.

Summary

In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group to be secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases; the trick is introducing the complexity needed to deliver them at the right level. In another article I'm working on I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.

    Read the article

  • NetBeans Podcast 69

    - by TinuA
    Podcast Guests: Terrence Barr, Simon Ritter, Jaroslav Tulach (It's an all-Oracle lineup!)
    Download mp3: 47 Minutes – 39.5 MB
    Subscribe on iTunes

    NetBeans Community News with Geertjan and Tinu
    If you missed the first two Java Virtual Developer Day events in early May, there's still one more LIVE training left on May 28th. Sign up here to participate live in the APAC time zone or watch later ON DEMAND.
    Video: Get started with Vaadin development using NetBeans IDE
    NetBeans IDE was at JavaCro 2014 and at Hippo Get-together 2014
    Another great lineup is in the works for NetBeans Day at JavaOne 2014. More details coming soon!
    NetBeans' Facebook page is almost at 40,000 Likes! Help us crack that milestone in the next few weeks!
    Other great ways to stay updated about NetBeans? Twitter and Google+.

    09:28 / Terrence Barr - What to Know about Java Embedded
    Terrence Barr, a Senior Technologist and Principal Product Manager for Embedded and Mobile technologies at Oracle, discusses new features of the Java SE Embedded and Java ME Embedded platforms, and sheds some light on the differences between them and what they have to offer to developers.
    Learn more about Java SE Embedded
    Tutorial: Using Oracle Java SE Embedded Support in NetBeans IDE
    Learn more about Java ME Embedded
    Video: NetBeans IDE Support for Java ME 8
    Video: Installing and Using Java ME SDK 8.0 Plugins in NetBeans IDE
    Follow Terrence Barr to keep up with news in the Embedded space: Blog and Twitter

    26:02 / Simon Ritter - A Massive Serving of Raspberry Pi
    Oracle's Raspberry Pi virtual course is back by popular demand! Simon Ritter, the head of Oracle's Java Technology Evangelism team, chats about the second run of the free Java Embedded course (starting May 30th), what participants can expect to learn, NetBeans' support for Java ME development, and other Java trainings coming to a desktop, laptop or user group near you.
    Sign up for the Oracle MOOC: Develop Java Embedded Applications Using Raspberry Pi
    Find out when Simon Ritter and the Java Evangelism team are coming to a Java event or JUG in your area--follow them on Twitter: Simon Ritter, Angela Caicedo, Steven Chin, Jim Weaver

    36:58 / Jaroslav Tulach - A Perfect Translation
    Jaroslav Tulach returns to the NetBeans podcast with tales about the Japanese translation of the Practical API Design book, which he contends surpasses all previous translations, including the English edition!
    Order "Practical API Design" (Japanese Version)
    Find out why the Japanese translation is the best edition yet

    *Have ideas for NetBeans Podcast topics? Send them to nbpodcast at netbeans dot org.
    *Subscribe to the official NetBeans page on Facebook! Check us out as well on Twitter, YouTube, and Google+.

    Read the article

  • Visual Studio ALM MVP of the Year 2011

    - by Martin Hinshelwood
    For some reason this year some of my peers decided to vote for me as a contender for Visual Studio ALM MVP of the year. I am not sure what I did to deserve this, but a number of people have commented that I have a rather useful blog. I feel wholly unworthy to join the ranks of previous winners: Ed Blankenship (2010) and Martin Woodward (2009). Thank you to everyone who voted, regardless of who you voted for. If there were a prize for the best group of MVPs then the Visual Studio ALM MVPs would be clear winners, as would that product group of product groups, the Visual Studio ALM Group. To use a phrase that I have learned since moving to Seattle and probably use too much: you guys are all just awesome.
    I have tried my best in the last year to document not only every problem that I have had with Team Foundation Server (TFS), but also as many of the things I am doing as possible. I have taken some of Adam Cogan's rules to heart, and when a customer asks me a question I always blog the answer and send them a link. This allows both my blog and my understanding of TFS to grow while creating a useful bank of content. The idea is that if one customer asks, all benefit. When writing for my blog, I try to capture both the essence and the context of the problem being solved. This allows more people to benefit, as they do not need to understand the specifics of an environment to gain value.
    I have a number of goals for this year that I think will help increase value in the community:
    Persuade my new colleagues at Northwest Cadence to do more blogging (Steve, Jeff, Shad and Rennie)
    Rangers Project – TFS Iteration Automation with Willy-Peter Schaub, Bill Essary, Martin Hinshelwood, Mike Fourie, Jeff Bramwell and Brian Blackman
    Write a book on the Team Foundation Server API with Willy-Peter Schaub, Mike Fourie and Jeff Bramwell
    Write more useful blog posts
    I do not think that these things are beyond the realms of do-ability, but we will see…

    Read the article
