Search Results


  • ArchBeat Facebook Friday: Top 10 Posts - August 8-14, 2014

    - by Bob Rhubart-Oracle
    5,307 people pay attention to the OTN ArchBeat Facebook Page. Here are the Top 10 posts from that page for the last seven days, August 8-14, 2014.

    - Podcast: ODTUG Kscope 2014: Anatomy of a User Conference - Part 3. There is more to a great user conference than a shared interest in Oracle products. In the final segment of this three-part OTN ArchBeat Podcast, panelists Danny Bryant, Chet "ORACLENERD" Justice, Cameron Lackpour, Debra Lilley, and Mike Riley discuss the nature and importance of community.
    - Oracle SOA Suite 12c: The LDAP Adapter quick and easy | Maarten Smeets. Maarten Smeets' how-to post describes the installation and configuration of an LDAP server and browser (ApacheDS and Apache Directory Studio).
    - Process level Exception Handling in BPM12c | Abhishek Mittal. When an exception occurs while running a process flow you have two choices: 1) retry the flow object that caused the exception, or 2) move the process instance to the next flow object in the main process flow. Abhishek Mittal shows you how to do both.
    - Building a Responsive WebCenter Portal Application | JayJay Zheng. Oracle ACE JayJay Zheng's article addresses the essentials of responsive web design, shows you how to design and develop a responsive WebCenter Portal application, and reviews key development considerations.
    - Cloud Control authorization with Active Directory | Jeroen Gouma. Jeroen Gouma takes you step-by-step through the user authorization process in Oracle Enterprise Manager Cloud Control 12c.
    - Video: CIO's Guide to Oracle Products and Solutions | Jessica Keyes. The CIO's Guide to Oracle Products and Solutions author Jessica Keyes talks about why input from users and developers is essential to CIOs who want to avoid being escorted out of the building by security guards. Read A CIO's Guide to Oracle Cloud Computing, a sample chapter from the book.
    - Twitter Tuesday - Top 10 @ArchBeat Tweets - August 5-11, 2014. @OTNArchBeat followers from across the galaxy have spoken! Here are the Top 10 tweets for the past seven days. Topics include Hyperion, OBIEE, ODI, Oracle MAF, and SOA Suite.
    - Recap: Fusion Middleware Summer Camps - Lisbon 2014 | Simon Haslam. Oracle ACE Director Simon Haslam's recap of his experience at the Oracle Fusion Middleware Summer Camp in Lisbon, Portugal will make you wish you had been there.
    - WebLogic Data Source Connection Labeling | Steve Felts. The connection labeling feature was added in WLS release 10.3.6 and enhanced in WLS release 12.1.3. This post by Steve Felts describes two new connection properties that can be configured on the data source descriptor.
    - Why Mobile Apps <3 REST/JSON | Martin Jarvis. Martin Jarvis explores the preference for REST and JSON over SOAP and XML for mobile web services.

    Read the article

  • Oracle User Productivity Kit Translation

    - by ultan o'broin
    Oracle's customers just love the User Productivity Kit (UPK). I hear only great things about it from our international customers at the Oracle Usability Advisory Board meetings too. The UPK is the perfect solution for enterprise applications training needs (I previously reviewed a fine book about UPK btw). One question I am often asked is how source content created using the UPK can be translated into another language. I spoke with Peter Maravelias, Principal Product Strategy Manager for UPK, about this recently. UPK is already optimized for easy source-target translation. There is even a solution for re-recording demos. Here's what you can do to get your source content into another language:

    - Use UPK's ability to automatically translate events and actions. UPK comes with XML templates that allow you to accomplish this in 21 languages with a simple publishing action switch. These templates even deal with the tricky business of using gender-based translations. (The original post shows Spanish and Japanese localization template samples.)
    - Use the Import and Export localization features to export additional custom content in a format like XLIFF, easily handled by translation tools. You could also export and import in Word format.
    - Re-record the sound (audio) files that go with the recordings, one per screen. UPK's granular approach to the sound files means that timing isn't an issue; retiming demos isn't required. A tip here with sound files and XLIFF-exported custom content is to facilitate translation context by avoiding explicit references to actions going on in the screen recordings. A text-based storyboard with screenshots accompanying the sound files should also be provided to the translators. Provide a glossary of terms too.
    - Use the re-record option in UPK to record any demo from a translated application. This allows all the translated UI labels to be captured automatically. You may need to resize some action events here due to text expansion issues. Of course, you will need translated data in the translated application too, so plan for this in advance. However, source-target language skills aren't required for the re-recording.

    The UPK Player itself, of course, is also available from Oracle along with content and documentation in 21 languages. The Developer and Setup components are translated into a smaller number of languages. Check the Oracle UPK website for the latest details. UPK is a super solution for global enterprise applications training deployments, allowing source content to be translated into multiple languages easily. See this post on the UPK blog for more insight too!

    Read the article

  • Work at Oracle as a Fresh Student by Ang Sun

    - by Nadiya
    The past months have flown by since I started working at Oracle, but at the same time it feels like I've been here forever. I came to Beijing to find a job after I graduated from the University of Southampton with an MSc in Software Engineering. I got an offer the day after my interview with my manager. This new style of working life hasn't been a problem for me. The atmosphere here is fantastic and everyone is so friendly and easy to talk to. I am the first member of our AIE China Team. We do appreciate those colleagues from the Core I/O team who helped us a lot to familiarize ourselves with the new environment. After new-hire orientation training I got to know many new people from various teams including Middleware, PeopleSoft and Solaris. Also, Oracle provides weekly online system training as additional training for those people who need it. The best thing about working at Oracle is that there is a balance between work and rest. It's good to have a really nice park and green space near the Oracle buildings. Most of us like to walk around the riverside after lunch before we get back to work. I like to grab a cup of latte before discussing issues and the schedule of our projects in a weekly conference call with my US colleagues. It has been a great experience; I am working alongside talented colleagues from different countries and nationalities.

    Read the article

  • How to Name Linked Servers

    - by Bill Graziano
    I did another SQL Server migration over the weekend that dealt with linked servers. I've seen all kinds of odd naming schemes, and there are a few I like and a few I suggest you avoid.

    - Don't name your linked server for its IP address. At some point whatever is on the other end of that IP address will move. You'll probably need to point your linked server to a new IP address but not change the name of the linked server. And then you've completely lost any context around this. Bonus points if a new SQL Server eventually ends up at the old IP address, further adding confusion when you're trying to troubleshoot.
    - Don't name your linked server based on its instance name. This one is less obvious. It sounds nice to have a linked server named [VSRV1\SQLTRAN01]. You know what it is and it's easy to use. It's less nice when you've got 200 stored procedures that all reference this linked server but the database they reference has moved to a new instance. Now when you query this you're actually querying a different instance. (Please note: I'm not saying it's a good idea to have 200 stored procedures that all reference a linked server. I'm just saying it's not all that uncommon.)
    - Consider naming your linked server something that you can easily search on. See my note above. You can also get around this by always enclosing the name in brackets. That is harder to enforce unless you use some odd characters in it.
    - Consider naming your linked server based on the function. For example, I've had some luck having a linked server named [DW] that points to our data warehouse server. That server can change names or physically move and all I need to do is update the linked server to point to the new destination. The descriptive name of the linked server is still accurate. No code needs to change and people still know what it is just by looking at it.
    - Consider naming your linked server for the database. I'm still thinking through this one. It may mean you have multiple linked servers that point to the same instance. I've found that database names rarely change. It also makes it easier to move individual databases to new servers.
    - Consider pointing your linked servers to DNS entries and not IP addresses. I've done this for reporting databases and had some success, especially for read-only snapshots that can get created on the main database or on the mirror.

    What issues have you had with linked server names? What has worked for you? Where are the holes in my approach?
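    For illustration, here is a minimal C# sketch combining the last two suggestions (the connection string, provider choice, and DNS alias are hypothetical): it provisions a linked server named for its function and pointed at a DNS entry, so the underlying host can move without renaming anything. The same sp_addlinkedserver call can of course be run directly in T-SQL.

        using System.Data.SqlClient;

        class LinkedServerSetup
        {
            static void Main()
            {
                // Hypothetical connection string; adjust for your environment.
                const string connectionString =
                    "Server=.;Database=master;Integrated Security=true";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    @"EXEC sp_addlinkedserver
                          @server     = N'DW',                   -- functional name, not an IP or instance
                          @srvproduct = N'',
                          @provider   = N'SQLNCLI',
                          @datasrc    = N'dw.corp.example.com';  -- DNS alias, repointable later",
                    connection))
                {
                    connection.Open();
                    command.ExecuteNonQuery();
                }
            }
        }

    When the warehouse moves, only @datasrc (or the DNS record behind it) changes; the [DW] name referenced by code stays put.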

    Read the article

  • Convert DVDs and ISO Files to MKV with MakeMKV

    - by DigitalGeekery
    Looking for a quick and easy way to convert your DVDs or ISOs to MKV files? Today we take a look at the MakeMKV Beta, which gets the job done very well.

    Installing and Using MakeMKV

    Download and install MakeMKV (see download link below). If converting a DVD, place it into your optical drive. When you open MakeMKV you will be greeted by its minimalistic interface. Click on the DVD-to-hard-drive button to open the DVD, or the folder icon on the top menu to browse for an ISO file. MakeMKV will open the disc or file. Once the disc or file is opened, you'll see the titles listed in the window on the left. Double-click on the titles to expand the tree structure. Remove any title or tracks you don't want to convert by unselecting the check box to the left. On the right side of the window, click the folder icon to browse for your file output directory. When ready, click the MakeMkv button to begin the conversion process. Conversion will proceed. When the conversion is finished, click OK. That's all there is to it! Your MKV file is ready to play.

    Conclusion

    MakeMKV is currently still in beta, and during the beta phase it will rip both DVD and Blu-ray for free. However, the DVD ripping functionality will always remain free. After 30 days, if you want to continue ripping Blu-ray discs you'll need to purchase a license. DVD rips are very quick, typically around 15-20 minutes depending on the length of the movie. MakeMKV is available for Windows, Mac and Linux, and will rip and convert DVDs to MKV files. Not all media players natively support MKV playback, so if you're having trouble playing MKV files, try downloading VLC Media Player or the latest version of the DivX codec.

    Download MakeMKV

    Read the article

  • Adding Attributes to Generated Classes

    ASP.NET MVC 2 adds support for data annotations, implemented via attributes on your model classes. Depending on your design, you may be using an OR/M tool like Entity Framework or LINQ to SQL to generate your entity classes, and you may further be using these entities directly as your Model. This is fairly common, and alleviates the need to do mapping between POCO domain objects and such entities (though there are certainly pros and cons to using such entities directly). As an example, the current version of the NerdDinner application (available on CodePlex at nerddinner.codeplex.com) uses Entity Framework for its model. Thus, there is a NerdDinner.edmx file in the project, and a generated NerdDinner.Models.Dinner class. Fortunately, these generated classes are marked as partial, so you can extend their behavior via your own partial class in a separate file. However, if for instance the generated Dinner class has a property Title of type string, you can't then add your own Title of type string for the purpose of adding data annotations to it, like this:

        public partial class Dinner
        {
            [Required]
            public string Title { get; set; }
        }

    This will result in a compilation error, because the generated Dinner class already contains a definition of Title. How then can we add attributes to this generated code? Do we need to go into the T4 template and add a special case that says if we're generating a Dinner class and it has a Title property, add this attribute? Ick.

    MetadataType to the Rescue

    The MetadataType attribute can be used to define a type which contains attributes (metadata) for a given class. It is applied to the class you want to add metadata to (Dinner), and it refers to a totally separate class to which you're free to add whatever methods and properties you like. Using this attribute, our partial Dinner class might look like this:

        [MetadataType(typeof(Dinner_Validation))]
        public partial class Dinner { }

        public class Dinner_Validation
        {
            [Required]
            public string Title { get; set; }
        }

    In this case the Dinner_Validation class is public, but if you were concerned about muddying your API with such classes, it could instead have been created as a private class within Dinner. Having the validation attributes specified in their own class (with no other responsibilities) complies with the Single Responsibility Principle and makes it easy for you to test that the validation rules you expect are in place via these annotations/attributes. Thanks to Julie Lerman for her help with this. Right after she showed me how to do this, I realized it was also already being done in the project I was working on.
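    One wrinkle worth knowing if you exercise these rules outside of MVC: the Validator API does not see buddy-class metadata unless it is registered first. The following is a small sketch, not from the original article; the Dinner and Dinner_Validation classes are as above, and the console harness is an assumption:

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;

        class ValidationCheck
        {
            static void Main()
            {
                // Register the buddy class; without this registration,
                // Validator.TryValidateObject ignores [MetadataType].
                TypeDescriptor.AddProviderTransparent(
                    new AssociatedMetadataTypeTypeDescriptionProvider(
                        typeof(Dinner), typeof(Dinner_Validation)),
                    typeof(Dinner));

                var dinner = new Dinner(); // Title left null
                var results = new List<ValidationResult>();
                bool isValid = Validator.TryValidateObject(
                    dinner,
                    new ValidationContext(dinner, null, null),
                    results,
                    true /* validateAllProperties */);

                Console.WriteLine(isValid); // False: the required Title is missing
            }
        }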

    Read the article

  • Role based access control in Oracle VM using Enterprise Manager 12c

    - by Ronen Kofman
    Enterprise Manager lets you control any element in the environment and define which users can do what on each element. We will show here an example of how to set up RBAC (Role-Based Access Control) for Oracle VM using Enterprise Manager. This will be a very simplified explanation to help you get going; for more comprehensive explanations please refer to the Enterprise Manager User Guide.

    OK, first some basic Enterprise Manager terminology:

    - Target: any element in the environment is a target (server, pool, zone, VM, etc.)
    - Administrators: these are the Enterprise Manager users who can log in to the platform.
    - Roles: privilege profiles which can be applied to Administrators.

    The first step will be to discover the virtual environment and bring it into Enterprise Manager. This process is simple and can be done in two ways: work on your Oracle VM Manager, set it up until you feel comfortable, and then register it in Enterprise Manager; or use Enterprise Manager and build it all from there. In both cases we will be able to see the same picture from Oracle VM and from Enterprise Manager; any change made in one will be reflected in the other. (The original post shows matching screenshots from Oracle VM Manager and Enterprise Manager.)

    Once you have your virtual environment set up in Enterprise Manager, it is time to start associating VMs with users (or Administrators, as they are called in Enterprise Manager). Enterprise Manager allows us to connect to multiple different identity services and import users from them, but the simplest way to add Administrators is to go to setup->security->Administrators and create a new Administrator. The creation wizard will walk you through several stages and allow you to assign role(s) to your newly created Administrator; using roles can really shorten the process if done multiple times. When you get to the "Target Privileges" stage, scroll down to the bottom to the "Target Privileges" section. In this section you can add targets (virtual machines in our case) and define the type of privileges you would like to assign to the Administrator you are creating. In this example I chose one of the VMs and granted full privileges to the newly created Administrator. (The original post shows the "Target Privileges" step of the Administrator creation wizard.)

    Now when you log in as the newly created administrator, you will only see the VM that was assigned to you and will be able to have full control over it. That's it, simple and straightforward. Enterprise Manager offers many more things which I skipped here, but the point is that if you need role-based access control, Enterprise Manager can give it to you in a very easy way. Oh, and one more thing: virtualization management in Enterprise Manager has no license cost. Sweet.

    Read the article

  • Database Developer - October 2013 issue: Download Database 12c and related products

    - by Javier Puerta
    The October issue of the Database Application Developer newsletter is now available. The focus of this issue is on downloads of Database 12c and related products. (Full newsletter here)

    Get Ready to Download, Deploy and Develop for Oracle Database 12c

    This month we're focused on downloads. We've rounded up the top developer releases (both early adopter and beta releases) and the articles that will help you do more with Oracle 12c. See the technical content that will help you get started. If you're ready... away we go! — Laura Ramsey, Database and Developer Community, Oracle Technology Network Team

    FEATURED DOWNLOADS

    - Download: Oracle Database 12c. According to Tom Kyte, the Oracle 12c version has some of the biggest enhancements to the core database since version 6. Check it out for yourself.
    - Download: Oracle SQL Developer 4.0 Early Adopter 2 is here. Oracle SQL Developer is a free IDE that simplifies the development and management of Oracle Database. It is a complete end-to-end development platform for your PL/SQL applications that features a worksheet for running queries and scripts, a DBA console for managing the database, a reports interface, a complete data modeling solution, and a migration platform for moving your 3rd party databases to Oracle. If you are interested in checking out this new early adopter version, Oracle SQL Developer 4.0 EA is the place to go.
    - Download: Oracle 12c Multitenant Self Provisioning Application (beta). The beta is here. The Multitenant Self Provisioning Application is an easy and productive way for DBAs and developers to get familiar with powerful PDB features including create, clone, plug and unplug. No better time to start playing with PDBs.
    - Download: New! Updates to the Oracle Data Integration portfolio. Oracle GoldenGate 12c and Oracle Data Integrator 12c are now available. From real-time data integration, transactional change data capture, data replication, and transformations to high-volume, high-performance batch loads and event-driven, trickle-feed integration processes, it's now available. Go here for all the details and links to downloads... and congratulations, Data Integration Team!
    - Download: Oracle VM Templates for Oracle 12c. Features support for Single Instance, Oracle Restart and Oracle RAC, and support for all current Oracle Database 11.2 versions as well as Oracle 12c on Oracle Linux 5 Update 9 and Oracle Linux 6 Update 4. The Oracle 12c templates allow end-to-end automation for Flex Cluster, Flex ASM and PDBs. See how the Deploycluster tool was updated to support Single Instance and the new Oracle 12c features.
    - Download: Oracle SQL Developer Data Modeler 4.0 EA 3. If you're looking for a data modeling and database design tool that provides an environment for capturing, modeling, managing and exploiting metadata, it's time to check out Oracle SQL Developer Data Modeler. Oracle SQL Developer Data Modeler 4.0 EA V3 is here.

    Read the article


  • Space partitioning when everything is moving

    - by Roy T.
    Background

    Together with a friend I'm working on a 2D game that is set in space. To make it as immersive and interactive as possible we want there to be thousands of objects freely floating around, some clustered together, others adrift in empty space.

    Challenge

    To unburden the rendering and physics engine we need to implement some sort of spatial partitioning. There are two challenges we have to overcome. The first challenge is that everything is moving, so reconstructing/updating the data structure has to be extremely cheap since it will have to be done every frame. The second challenge is the distribution of objects: as said before, there might be clusters of objects together and vast bits of empty space, and to make it even worse there is no boundary to space.

    Existing technologies

    I've looked at existing techniques like BSP trees, quadtrees, kd-trees and even R-trees, but as far as I can tell these data structures aren't a perfect fit, since updating a lot of objects that have moved to other cells is relatively expensive.

    What I've tried

    I made the decision that I need a data structure that is geared more toward rapid insertion/update than toward giving back the least possible number of hits for a query. For that purpose I made the cells implicit, so each object, given its position, can calculate in which cell(s) it should be. Then I use a HashMap that maps cell coordinates to an ArrayList (the contents of the cell). This works fairly well since there is no memory lost on 'empty' cells and it's easy to calculate which cells to inspect. However, creating all those ArrayLists (worst case N) is expensive, and so is growing the HashMap a lot of times (although that is slightly mitigated by giving it a large initial capacity).

    Problem

    OK, so this works but still isn't very fast. Now I can try to micro-optimize the Java code. However, I'm not expecting too much of that, since the profiler tells me that most time is spent in creating all those objects that I use to store the cells. I'm hoping that there are some other tricks/algorithms out there that make this a lot faster, so here is what my ideal data structure looks like:

    - The number one priority is fast updating/reconstructing of the entire data structure.
    - It's less important to finely divide the objects into equally sized bins; we can draw a few extra objects and do a few extra collision checks if that means that updating is a little bit faster.
    - Memory is not really important (PC game).
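    For reference, here is a minimal sketch of the pooling idea that usually attacks the allocation cost described above. It is in C# (the question's code is Java, but the idea ports directly) and every name is illustrative: per-cell lists are recycled between frames, so the per-frame rebuild mostly clears and refills existing containers instead of creating new ones.

        using System;
        using System.Collections.Generic;

        // Sketch: implicit uniform grid rebuilt every frame, with pooled
        // per-cell lists so steady-state frames allocate almost nothing.
        class SpatialHash<T>
        {
            private static readonly List<T> Empty = new List<T>();
            private readonly float _cellSize;
            private readonly Dictionary<long, List<T>> _cells;
            private readonly Stack<List<T>> _pool = new Stack<List<T>>();

            public SpatialHash(float cellSize, int expectedCells)
            {
                _cellSize = cellSize;
                _cells = new Dictionary<long, List<T>>(expectedCells);
            }

            // Pack both cell coordinates into one long: no tuple or key
            // object is allocated per lookup.
            private long KeyFor(float x, float y)
            {
                long cx = (long)Math.Floor(x / _cellSize);
                long cy = (long)Math.Floor(y / _cellSize);
                return (cx << 32) ^ (cy & 0xffffffffL);
            }

            // Call once per frame before re-inserting every object; lists
            // are recycled, not discarded.
            public void Clear()
            {
                foreach (List<T> list in _cells.Values)
                {
                    list.Clear();
                    _pool.Push(list);
                }
                _cells.Clear();
            }

            public void Insert(float x, float y, T item)
            {
                long key = KeyFor(x, y);
                List<T> list;
                if (!_cells.TryGetValue(key, out list))
                {
                    list = _pool.Count > 0 ? _pool.Pop() : new List<T>();
                    _cells[key] = list;
                }
                list.Add(item);
            }

            // Objects sharing the cell containing (x, y); widen to the 3x3
            // neighborhood for collision queries as needed.
            public List<T> Query(float x, float y)
            {
                List<T> list;
                return _cells.TryGetValue(KeyFor(x, y), out list) ? list : Empty;
            }
        }

    The packed long key also sidesteps allocating a key object per lookup, which is often the other hidden cost in HashMap-based grids.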

    Read the article

  • Voxel terrain rendering with marching cubes

    - by JavaJosh94
    I was working on making procedurally generated terrain using normal cubish voxels (like Minecraft), but then I read about marching cubes and decided to convert to using those. I managed to create a working marching cubes class and cycle through the densities, and everything in it seemed to be working, so I went on to work on actual terrain generation. I'm using XNA (C#) and a ported libnoise library to generate noise for the terrain generator. But instead of rendering smooth terrain I get a 64x64 chunk (I specified 64 but can change it) of seemingly random marching cubes using different triangles. This is the code I'm using to generate a "chunk":

        public MarchingCube[,,] getTerrainChunk(int size, float dMultiplyer, int stepsize)
        {
            MarchingCube[,,] temp = new MarchingCube[size / stepsize, size / stepsize, size / stepsize];
            for (int x = 0; x < size; x += stepsize)
            {
                for (int y = 0; y < size; y += stepsize)
                {
                    for (int z = 0; z < size; z += stepsize)
                    {
                        float[] densities = {
                            (float)terrain.GetValue(x, y, z) * dMultiplyer,
                            (float)terrain.GetValue(x, y + stepsize, z) * dMultiplyer,
                            (float)terrain.GetValue(x + stepsize, y + stepsize, z) * dMultiplyer,
                            (float)terrain.GetValue(x + stepsize, y, z) * dMultiplyer,
                            (float)terrain.GetValue(x, y, z + stepsize) * dMultiplyer,
                            (float)terrain.GetValue(x, y + stepsize, z + stepsize) * dMultiplyer,
                            (float)terrain.GetValue(x + stepsize, y + stepsize, z + stepsize) * dMultiplyer,
                            (float)terrain.GetValue(x + stepsize, y, z + stepsize) * dMultiplyer
                        };
                        Vector3[] corners = {
                            new Vector3(x, y, z),
                            new Vector3(x, y + stepsize, z),
                            new Vector3(x + stepsize, y + stepsize, z),
                            new Vector3(x + stepsize, y, z),
                            new Vector3(x, y, z + stepsize),
                            new Vector3(x, y + stepsize, z + stepsize),
                            new Vector3(x + stepsize, y + stepsize, z + stepsize),
                            new Vector3(x + stepsize, y, z + stepsize)
                        };
                        if (x == 0 && y == 0 && z == 0)
                        {
                            temp[x / stepsize, y / stepsize, z / stepsize] = new MarchingCube(densities, corners, device);
                        }
                        temp[x / stepsize, y / stepsize, z / stepsize] = new MarchingCube(densities, corners);
                    }
                }
            }
            return temp;
        }

    (terrain is Perlin noise generated using libnoise.) I'm sure there's probably an easy solution to this, but I've been drawing a blank for the past hour. I'm just wondering if the problem is how I'm reading in the data from the noise, or if I may be generating the noise wrong? Or maybe the noise is just not good for this kind of generation? If I'm reading it wrong, does anyone know the right way? The answers on Google were somewhat ambiguous, but I'm going to keep searching. Thanks in advance!

    Read the article

  • Is inline SQL still classed as bad practice now that we have Micro ORMs?

    - by Grofit
    This is a bit of an open-ended question but I wanted some opinions. I grew up in a world where inline SQL scripts were the norm; then we were all made very aware of SQL injection based issues, and of how fragile the SQL was when doing string manipulations all over the place. Then came the dawn of the ORM, where you were explaining the query to the ORM and letting it generate its own SQL, which in a lot of cases was not optimal but was safe and easy. Another good thing about ORMs or database abstraction layers was that the SQL was generated with its database engine in mind, so I could use Hibernate/NHibernate with MSSQL or MySQL and my code never changed; it was just a configuration detail.

    Now fast forward to the current day, where micro ORMs seem to be winning over more developers, and I was wondering why we have seemingly taken a U-turn on the whole inline SQL subject. I must admit I do like the idea of no ORM config files and being able to write my query in a more optimal manner, but it feels like I am opening myself back up to the old vulnerabilities such as SQL injection, and I am also tying myself to one database engine, so if I want my software to support multiple database engines I would need to do some more string hackery, which seems to start to make code unreadable and more fragile. (Just before someone mentions it, I know you can use parameter-based arguments with most micro ORMs, which offers protection in most cases from SQL injection.)

    So what are people's opinions on this sort of thing? I am using Dapper as my micro ORM in this instance and NHibernate as my regular ORM in this scenario; however, most in each field are quite similar. What I term as inline SQL is SQL strings within source code. There used to be design debates over SQL strings in source code detracting from the fundamental intent of the logic, which is why statically typed LINQ-style queries became so popular: it's still just one language, but with, let's say, C# and SQL in one page you have two languages intermingled in your raw source code.

    Just to clarify, SQL injection is just one of the known issues with using SQL strings; I already mentioned that you can stop this from happening with parameter-based queries. However, I highlight other issues with having SQL queries ingrained in your source code, such as the lack of DB vendor abstraction, as well as losing any level of compile-time error capturing on string-based queries. These are all issues which we managed to sidestep with the dawn of ORMs and their higher-level querying functionality, such as HQL or LINQ etc. (not all of the issues, but most of them). So I am less focused on the individually highlighted issues and more on the bigger picture: is it now becoming more acceptable to have SQL strings directly in your source code again, as most micro ORMs use this mechanism?

    Here is a similar question which has a few different viewpoints, although it is more about inline SQL without the micro ORM context: http://stackoverflow.com/questions/5303746/is-inline-sql-hard-coding
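    For readers who haven't used Dapper, here is a minimal sketch of the parameter-based style mentioned above (the Product type, table, and connection string are illustrative assumptions): the SQL lives in the source as a string, but values are bound as parameters rather than concatenated, which is what closes the classic injection vector.

        using System.Collections.Generic;
        using System.Data.SqlClient;
        using Dapper;

        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        public static class ProductQueries
        {
            public static IEnumerable<Product> ByCategory(string connectionString, int categoryId)
            {
                using (var connection = new SqlConnection(connectionString))
                {
                    // Inline SQL, but @categoryId travels as a parameter,
                    // so user input cannot rewrite the statement.
                    return connection.Query<Product>(
                        "SELECT Id, Name, Price FROM Products WHERE CategoryId = @categoryId",
                        new { categoryId });
                }
            }
        }

    Note this addresses injection only; the vendor lock-in and compile-time-checking concerns raised above still apply to the string itself.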

    Read the article

  • AWS .NET SDK v2: the message-pump pattern

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/11/aws-.net-sdk-v2--the-message-pump-pattern.aspx

    Version 2 of the AWS SDK for .NET has had a few pre-release iterations on NuGet and is stable, if a bit lacking in step-by-step guides. There's at least one big reason to try it out: the SQS queue client now supports asynchronous reads, so you don't need a clumsy polling mechanism to retrieve messages. The new approach is easy to use, and lets you work with AWS queues in a similar way to the message-pump pattern used in the latest Azure SDK for Service Bus queues and topics. I've posted a simple wrapper class for subscribing to an SQS hub on gist here: A wrapper for the SQS client in the AWS SDK for .NET v2, which uses the message-pump pattern. Here's the core functionality in the subscribe method:

        private async void Subscribe()
        {
            if (_isListening)
            {
                var request = new ReceiveMessageRequest { MaxNumberOfMessages = 10 };
                request.QueueUrl = QueueUrl;
                var result = await _sqsClient.ReceiveMessageAsync(request, _cancellationTokenSource.Token);
                if (result.Messages.Count > 0)
                {
                    foreach (var message in result.Messages)
                    {
                        if (_receiveAction != null && message != null)
                        {
                            _receiveAction(message.Body);
                            DeleteMessage(message.ReceiptHandle);
                        }
                    }
                }
            }
            if (_isListening)
            {
                Subscribe();
            }
        }

    which you call with something like this:

        client.Subscribe(x => Log.Debug(x));

    The async SDK call returns when there is something in the queue, and will run your receive action for every message it gets in the batch (defaults to the maximum size of 10 messages per call). The listener will sit there awaiting messages until you stop it with:

        client.Unsubscribe();

    Internally it has a cancellation token which it sets when you call unsubscribe, which cancels any in-flight call to SQS and stops the pump. The wrapper will also create the queue if it doesn't exist at runtime. The Ensure() method gets called in the constructor, so when you first use the client for a queue (sending or subscribing), it will set itself up:

        if (!Exists())
        {
            var request = new CreateQueueRequest();
            request.QueueName = QueueName;
            var response = _sqsClient.CreateQueue(request);
            QueueUrl = response.QueueUrl;
        }

    The Exists() check has to make a call to ListQueues on the SQS client, as it doesn't provide its own method to check if a queue exists. That call also populates the Amazon Resource Name, the unique identifier for this queue, which will be useful later. To use the wrapper, just instantiate and go:

        var queueClient = new QueueClient("ProcessWorkflow");
        queueClient.Subscribe(x => Log.Debug(x));
        var message = {}; //etc.
        queueClient.Send(message);

    Read the article

  • Creating metadata value relationships

    - by kyle.hatlestad
    I was recently asked a question about an interesting use case. They wanted content to be submitted into UCM with a particular ID in a custom metadata field, but they wanted that ID to be translated into an employee name in another metadata field upon submission. My initial thought was that this could be done with a dependent choice list (DCL), one option list field driving the choices in another. But this didn't work in this case for a couple of reasons. First, the number of IDs could potentially be very large, so making that into a drop-down list would not be practical; the preference was for that field to simply be a text field to type in the ID. Secondly, data could be submitted through methods other than the web-based check-in form, and without an interface to select the DCL choices, the system needed a way to determine and populate the name field. So instead I went with the approach of having the value of the ID field drive the value of the Name field using the derived field approach in my rule. In looking at it, though, it was easy to simply copy the value of the ID field into the Name field... but to have it look up and translate the value proved to be the tricky part. So here is the approach I took...

    First I created my two metadata fields as standard text fields in the Configuration Manager applet. Next I created a table that stores the relationship between the IDs and Names. I then created a View into that table and set the column to the EmployeeID. I then created a new Application Field and set it as an option list using the View I created in the previous step. The reason I created it as an Application field is that I don't need to display the field or store a value in it; I simply need to make use of the option list in the next step...

    Finally, I created a Rule in which I selected the Employee Name field and turned on the 'Is derived field' checkbox. I edited the derived value and added a new condition. Because the option list is an Application field and not an Information field, I can't use the Compute button. Instead, I insert this line directly in the Value field:

        @getFieldViewValue("EmployeeMapping", #active.xEmployeeID, "EmployeeName")

    The "EmployeeMapping" parameter designates that the value should be pulled from the EmployeeMapping Application field that I created in the previous step. The #active.xEmployeeID field is the ID value that should be pulled from what the user entered. "EmployeeName" is the column name in the table which has the value that corresponds to the ID. The extracted name then becomes the value within our Employee Name field. That's it. You can then add additional Rules to make the Name field read-only/hidden on the check-in page and such.

    Read the article

  • Issue 15: Oracle Exadata Marketing Campaigns

    - by rituchhibber
    PARTNER FOCUS: Oracle Exadata Marketing Campaign

    Steve McNickle, VP Europe, cVidya

    Steve McNickle is VP Europe for cVidya, an innovative provider of revenue intelligence solutions for telecom, media and entertainment service providers including AT&T, BT, Deutsche Telekom and Vodafone. The company's product portfolio helps operators and service providers maximise margins, improve customer experience and optimise ecosystem relationships through revenue assurance, fraud and security management, sales performance management, pricing analytics, and inter-carrier services. cVidya has partnered with Oracle for more than a decade.

    Please could you tell us a little about cVidya's partnering history with Oracle, and expand on your Oracle Exastack accreditations?

    "cVidya was established just over ten years ago and we've had a strong relationship with Oracle almost since the very beginning. Through our Revenue Intelligence work with some of the world's largest service providers we collect tremendous amounts of information, amounting to billions of records per day. We help our clients to collect, store and analyse that data to ensure that their end customers are getting the best levels of service, are billed correctly, and are happy that they are on the correct price plan. We have been an Oracle Gold level partner for seven years, and crucially just two months ago we were also accredited as Oracle Exastack Optimized for MoneyMap, our core Revenue Assurance solution. Very soon we also expect to be Oracle Exastack Optimized for DRMap, our Data Retention solution."

    What unique capabilities and customer benefits does Oracle Exastack add to your applications?

    "Oracle Exastack enables us to deliver radical benefits to our customers. A typical mobile operator in the UK might handle between 500 million and two billion call data record details daily. Each transaction needs to be validated, billed correctly and fraud checked. Because of the enormous volumes involved, our clients demand scalable infrastructure that allows them to efficiently acquire, store and process all that data within controlled cost, space and environmental constraints. We have proved that the Oracle Exadata system can process data up to seven times faster and load it as much as 20 times faster than other standard best-of-breed server approaches. With the Oracle Exadata Database Machine they can reduce their datacentre equipment from, say, the six or seven cabinets that they needed in the past down to just one. This dramatic simplification delivers incredible value to the customer by cutting down enormously on all of their significant cost, space, energy, cooling and maintenance overheads."

    "The Oracle Exastack Program has given our clients the ability to switch their focus from reactive to proactive. Traditionally they may have spent 80 percent of their day processing, and just 20 percent enabling end customers to see advanced analytics and avoiding issues before they occur. With our solutions and Oracle Exadata they can now switch that balance around entirely, resulting not only in reduced revenue leakage, but a far higher focus on proactive leakage prevention."

    How has the Oracle Exastack Program transformed your customer business?

    "We can already see the impact. Oracle solutions allow our delivery teams to achieve successful deployments, happy customers and self-satisfaction, and the power of Oracle's Exa solutions is easy to measure in terms of their transformational ability. We gained our first sale into a major European telco by demonstrating the major performance gains that would transform their business. Clients can measure the ease of organisational change, the early prevention of business issues, the reduction in manpower required to provide protection and coverage across all their products and services, plus of course end customer satisfaction. If customers know that that service is provided accurately and that their bills are calculated correctly, then over time this satisfaction can be attributed to revenue intelligence and the underlying systems which provide it. Combine this with the further integration we have with the other layers of the Oracle stack, including the telecommunications offerings such as NCC, OCDM and BRM, and the result is even greater customer value—not to mention the increased speed to market and the reduced project risk."

    What does the Oracle Exastack community bring to cVidya, both in terms of general benefits, and also tangible new opportunities and partnerships?

    "A great deal. We have participated in the Oracle Exastack community heavily over the past year, and have had lots of meetings with Oracle and our peers around the globe. It brings us into contact with like-minded, innovative partners, who like us are not happy to just stand still and want to take fresh technology to their customer base in order to gain enhanced value. We identified three new partnerships in each of two recent meetings, and hope these will open up new opportunities, not only in areas that exactly match where we operate today, but also in some new associative areas that will expand our reach into new business sectors. Notably, thanks to the Exastack community we were invited on stage at last year's Oracle OpenWorld conference. Appearing so publicly with Oracle senior VP Judson Althoff elevated awareness and visibility of cVidya and has enabled us to participate in a number of other events with Oracle over the past eight months. We've been involved in speaking opportunities, forums and exhibitions, providing us with invaluable opportunities that we wouldn't otherwise have got close to."

    How has Exastack differentiated cVidya as an ISV, and helped you to evolve your business to the next level?

    "When we are selling to our core customer base of Tier 1 telecommunications providers, we know that they want more than just software. They want an enduring partnership that will last many years, they want innovation, and a forward thinking partner who knows how to guide them on where they need to be to meet market demand three, five or seven years down the line. Membership of respected global bodies, such as the TeleManagement Forum, enables us to lead standard adherence in our area of business, giving us a lot of credibility, but Oracle is also involved in this forum with its own telecommunications portfolio, strengthening our position still further. When we approach CEOs, CTOs and CIOs at the very largest Tier 1 operators, not only can we easily show them that our technology is fantastic, we can also talk about our strong partnership with Oracle, and our joint embracing of today's standards and tomorrow's innovation."

    Where would you like cVidya to be in one year's time?

    "We want to get all of our relevant products Oracle Exastack Optimized. Our MoneyMap Revenue Assurance solution is already Exastack Optimised, our DRMap Data Retention solution should be Exastack Optimised within the next month, and our FraudView Fraud Management solution within the next two to three months. We'd then like to extend our Oracle accreditation out to include other members of the Oracle Engineered Systems family. We are moving into the 'Big Data' space, and so we're obviously very keen to work closely with Oracle to conduct pilots, map new technologies onto Oracle Big Data platforms, and embrace and measure the benefits of other Oracle systems, namely Oracle Exalogic Elastic Cloud, the Oracle Exalytics In-Memory Machine and the Oracle SPARC SuperCluster. We would also like to examine how the Oracle Database Appliance might benefit our Tier 2 service provider customers. Finally, we'd also like to continue working with the Oracle Communications Global Business Unit (CGBU), furthering our integration with Oracle billing products so that we are able to quickly deploy fraud solutions into Oracle's Engineered System stack, giving our clients operational benefits that are pre-integrated and more cost-effective, and that can be deployed rapidly, producing benefits in three months, not nine."

    Chris Baker, Senior Vice President, Oracle Worldwide ISV-OEM-Java Sales

    Chris Baker is the Global Head of ISV/OEM Sales, responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners' business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills.

    Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA?

    "Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straightforward. We want all of our ISV partners and OEMs to concentrate on the things that they do the best – building applications to meet the unique industry and functional requirements of their customer. We want to ensure that we deliver a best-in-class application platform so the ISV is free to concentrate their effort on their application functionality and user experience. We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification, because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments.
It's really important that architecture is the same in order to keep cost and time overheads at a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success." How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need? "We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems including Exadata and Exalogic. So, for example, they can become 'Database Ready' which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready and Oracle Solaris Ready which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic. Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized' for partners whose applications run best on the Oracle stack and have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use our technology and the standardisation it delivers. To date we have had several hundred organisations successfully work through our Exastack Optimized programme." How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly? "One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in the cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build-in to our products to help them to fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to be implemented at an on-premise environment. 
But it wasn't just the configuration of the application that took the time, it was actually the infrastructure - the different hardware configurations, operating systems and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry." What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships? "My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities - for example, the Machine-to-Machine (M2M) market or 'The Internet of Things'. Over the next few years, many millions, indeed billions of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications. The only way that our partners will be able to provide a single vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data – a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world based on Oracle." What opportunities are immediately opened to new ISV partners joining the OPN? "As you know OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation." Finally, are there any other messages that you would like to share with the Oracle ISV community? "The crucial message that I always like to reinforce is architecture, architecture and architecture! 
    The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: "Will I be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud?" The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that: to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous - just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them."

    Gergely Strbik is Oracle Hardware and Software Product Manager for Avnet in Hungary. Avnet Technology Solutions is an Oracle Value Added Distributor focused on the development of the existing Oracle channel. This includes the recruitment and enablement of Oracle partners as well as driving deeper adoption of Oracle's technology and application products within the IT channel.

    "The main business benefits of ODA for our customers and partners are scalability, flexibility, a great price point for the high performance delivered, and the easily configurable embedded Linux operating system. People welcome a lower point of entry and the ability to grow capacity on demand as their business expands."

    "Marketing and selling the ODA requires another way of thinking because it is an appliance. We have to transform the ways in which our partners and customers think, from buying hardware and software independently to buying complete solutions. Successful early adopters and satisfied customer reactions will certainly help us to sell the ODA. We will have more experience with the product after the first deliveries and installations—end users need to see the power and benefits for themselves."

    "Our typical ODA customers will be those looking for complete solutions from a single reseller partner who is also able to manage the appliance. They will have enjoyed using Oracle Database but now want a new product that is able to unlock new levels of performance. A higher proportion of potential customers will come from our existing Oracle base, with around 30% from new business, but we intend to evangelise the ODA on the market to see how we can change this balance as all our customers adjust to the concept of 'Hardware and Software, Engineered to Work Together'."

    Read the article

  • Top 5 Sites and Activities in San Francisco to Experience During Oracle OpenWorld

    - by kgee
    While Oracle OpenWorld may provide solutions and information on topics like how to simplify your IT, the importance of cloud, and what types of storage may satisfy your enterprise needs, who is going to tell you more about San Francisco? Here are some suggested sites and activities to experience after OpenWorld that aren’t too far from the Moscone Center. It is recommended to take a cab for the sake of time, but the roughly seven-by-seven miles that make up San Francisco make for a quick trek to any of the following destinations:
    The Golden Gate Bridge
    An image often associated with San Francisco, this bridge is one of the most impressive in the world. Take a walk across it or view it from nearby Crissy Field; it is a sight that floors even the most veteran of San Franciscans.
    The Ferry Building
    Located at the end of Market Street in the Embarcadero, the Ferry Building once served as a hub of water transport and trade. The building has a bay-front view and an array of food choices and restaurants. It is easily accessible via the Muni, BART, trolley, or cab. It is a must-see in San Francisco, and not too far from the Moscone Center.
    Ride the Trolley to the Castro
    For only $2, you can go back in history for a moment on the trolley. Take the F-line from the Embarcadero and ride it all the way to the Castro district. During the ride, you will get an overview of the landscape and cultures that are prevalent in San Francisco, but be aware that some areas may beg for an open mind more than others.
    Golden Gate Park
    When you tire of the concrete jungle, the lucky part of being in San Francisco is that you can escape to a natural refuge, this park being one of the favorites. The park is known for its hiking trails, cultural attractions, monuments, lakes and gardens. It is one good reason to bring your sneakers to San Francisco, and it is also a great place to picnic. Be warned that it is easy to get lost, so it is advisable to bring a map (just in case) if you go.
    Haight Ashbury
    For a complete change of scenery, Haight Ashbury is known as one of the places hippies used to live and the location of "The Summer of Love." It is now a more affluent neighborhood with boutique shops and the occasional drum circle. While it may be perceived as grungy in certain spots, it is one of the most photographed places in San Francisco and an integral part of San Franciscan history.

    Read the article

  • Coming to a City Near You: Oracle Business Analytics Summits

    - by Rob Reynolds
    More and more organizations use analytics to identify new business opportunities, reduce costs, and optimize business processes. How? By making business information available throughout the enterprise—and making sure that it is relevant, actionable, and easy to access. Oracle invites you to join us for an information-packed event where you’ll learn about the latest trends, best practices, and innovations in business intelligence, analytic applications, and data warehousing. If you are an IT professional involved in BI strategy, program management, systems management, architecture, or deployment, this event is for you. You’ll find out about:
    - New ways of deploying and delivering business intelligence on premise, in the cloud, and on mobile devices to a diverse base of business users
    - New approaches for integrating, storing, managing, securing, and accessing your ever-growing volumes of structured and unstructured data
    - The latest strategies for dramatically increasing the ROI of your ERP and CRM deployments
    Click here to view the presentation abstracts.
    Agenda (breakout sessions run in two parallel tracks: the Technology and Architecture Strategy Track and the Business Insight and Analytic Delivery Track):
    9:00 a.m. - Registration
    10:00 a.m. - Keynote: Business Analytics—Be the First to Know
    11:00 a.m. - Break
    11:15 a.m. - Emerging Trends in Enterprise BI Platforms (Technology) / Mobile BI—More than Dashboards on a Tablet (Business Insight)
    12:00 noon - Networking Lunch
    1:00 p.m. - Is Your Business Intelligence Data at Risk? (Technology) / Geospatial Intelligence—Location, Location, Location! (Business Insight)
    1:45 p.m. - What Extreme Performance Means for Your Business (Technology) / The Role of BI in Your ERP and Performance Management Initiatives (Business Insight)
    2:30 p.m. - Become a BI Architect (Technology) / BI Applications: Step 1 in Your ERP Upgrade or Expansion (Business Insight)
    3:00 p.m. - Partner Spotlight
    Registration links for each city are below:
    New York, NY - July 26
    Miami, FL - July 27
    Reston, VA - July 27
    Atlanta, GA - July 28
    Boston, MA - July 28
    Rochester, NY - August 2 (event link coming soon!)
    Menlo Park, CA - August 2
    Charlotte, NC - August 3
    Newport Beach, CA - August 3
    Register online at the links above or call 1.800.820.5592 ext. 9218 to reserve your place.

    Read the article

  • Finance: Friends, not foes!

    - by red@work
    After reading Phil's blog post about his experiences of working on reception, I thought I would let everyone in on one of the other customer-facing roles at Red Gate... When you think of a Credit Control team, most people might imagine money-hungry (and often impolite) types who will do nothing short of hunting people down until they pay up. Well, as with so many things, not at Red Gate! Here we do things a little bit differently. Since joining the Licensing, Invoicing and Credit Control team at Red Gate (affectionately nicknamed LICC!), I have found it fantastic to work with people who know that often the best way to get what you want is by being friendly, reasonable and as helpful as possible. The best bit about this is that, because everyone is in a good mood, we have a great working atmosphere! We are definitely a very happy team. We laugh a lot, even when dealing with the serious matter of playing table football after lunch. The most obvious part of my job is bringing in money. There are few things quite as satisfying as receiving a big payment or one that you've been chasing for a long time. That being said, it's just as nice to encounter the companies that surprise you with a payment bang on time after little or no chasing. It's always a pleasure to find these people who are generous and easy to work with, and so they always make me smile, too. As I'm in one of the few customer-facing roles here, I get to experience firsthand just how much Red Gate customers love our software and are equally impressed with our customer service. We regularly get replies from people thanking us for our help in resolving a problem or just to simply say that they think we're great. Or, as is often the case, that we 'rock and are awesome'! When those are the kinds of emails you have to deal with for most of the day, I would challenge anyone to be unhappy! The best thing about my work is that, much like Phil and his counterparts on reception, I get to talk to people from all over the world, and experience their unique (and occasionally unusual) personality traits. I deal predominantly with customers in the US, so I'll be speaking to someone from a high-flying multi-national in New York one minute, and the next phone call will be to a small office on the outskirts of Alabama. This level of customer involvement has led to a lot of interesting anecdotes and plenty of in-jokes to keep us amused! Obviously there are customers who are infuriating, like those who simply tell us that they will pay "one day", and that we should stop chasing them. Then there are the people who say that they ordered the tools because they really like them, but they just can't afford to actually pay for them at the moment. Thankfully these situations are relatively few and far between, and for every one customer that makes you want to scream, there are far, far more that make you smile!

    Read the article

  • What is a reasonable workflow for designing webapps?

    - by Evan Plaice
    It has been a while since I have done any substantial web development and I'd like to take advantage of the latest practices, but I'm struggling to visualize the workflow that incorporates everything. Here's what I'm looking to use: CakePHP framework, jsmin (JavaScript Minify), SASS (Syntactically Awesome StyleSheets) and Git.
    CakePHP: Pretty self-explanatory; make modifications and update the source.
    jsmin: When you modify a script, do you manually run jsmin to output the new minified code, or would it be better to run a pre-commit hook that automatically generates jsmin output for the JavaScript files that have changed (see the sketch at the end of this post)? Assume that I have no knowledge of implementing commit hooks.
    SASS: I really like what SASS has to offer, but I'm also aware that SASS code isn't supported by browsers by default, so at some point the SASS code needs to be transformed to normal CSS. At what point in the workflow is this done?
    Git: I'm terrified to admit it but, the last time I did any substantial web development, I didn't use SCM source control (i.e., I did use source control, but it consisted of a very detailed change log with backups). I have since had plenty of experience using Git (as well as Mercurial and SVN) for desktop development, but I'm wondering how best to implement it for web development. Is it common practice to implement a remote repository on the web host so I can push changes directly to the production server, or is there some cross-platform (Windows/Linux) tool that makes it easy to upload only changed files to the production server? Are there web hosting companies that make it easy to implement a remote repository? Do I need SSH access, etc.? I know how to accomplish this on my own testing server with a remote repository and a separate remote-tracking branch, but I've never done it on a remote production web hosting server before, so I'm not aware of the options yet.
    Extra: I was considering implementing a JavaScript framework where the separate JavaScript files used on a page are compiled into a single file for each page on the production server, to limit the number of file downloads needed per page. Does something like this already exist? Is there an open source project out in the wild that implements something similar that I could use and contribute to? Considering how paranoid web devs are about performance (and the fact that the number of file requests on a website is a big hit to performance), I'm guessing that some wizard hacker on the net has already addressed this issue.
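    To make the pre-commit idea concrete, here is the kind of minimal hook I have in mind. It is untested and assumes the jsmin and sass command-line tools are installed and on the PATH; the foo.min.js naming convention is just an illustration, not a requirement.

    #!/bin/sh
    # .git/hooks/pre-commit (make it executable with chmod +x)
    # Minify staged .js files and compile staged .scss files, then
    # re-stage the generated output so it lands in the same commit.
    changed=$(git diff --cached --name-only --diff-filter=ACM)
    for f in $changed; do
      case "$f" in
        *.min.js) ;;                              # skip files that are already minified
        *.js)   jsmin < "$f" > "${f%.js}.min.js"  # jsmin reads stdin and writes stdout
                git add "${f%.js}.min.js" ;;      # stage the minified copy
        *.scss) sass "$f" "${f%.scss}.css"        # compile SASS to the plain CSS browsers need
                git add "${f%.scss}.css" ;;
      esac
    done

    A hook like this would also answer the SASS timing question: the transform to plain CSS happens at commit (or build/deploy) time, so the production server only ever serves ordinary CSS.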

    Read the article

  • View Images and Videos in 3D in Firefox

    - by Asian Angel
    Different websites have their own formats for viewing images and videos, but they may not be a lot of fun to use. The Cooliris extension for Firefox lets you view those same images and videos in a dynamic 3D format.
    Before
    For our example we conducted a search for nature photos at Flickr. You could view them in a static format or even as a slideshow, but what about something more dynamic looking?
    After
    As soon as the extension has finished installing, you will notice a new toolbar button used for launching the Cooliris tab. When you launch the Cooliris tab you will have an expandable menu system in the upper left corner, a speed dial setup in the center, and a small toolbar in the lower right corner. Before going further, you should check and make any desired adjustments in the preferences to enhance your viewing experience. In the upper right corner you can start your search by selecting from the available sources. The same search for nature images is more focused and clean looking this time. Clicking on an image will bring it forward and enlarge it. You can use the slider tool at the bottom of the tab to browse left or right through the images and videos. And when you find one that interests you, click on the popout button to open it in a new tab.
    Conclusion
    The Cooliris extension makes viewing images and videos fun and interactive with its 3D-style format.
    Links
    Download the Cooliris extension (Mozilla Add-ons)
    Download Cooliris for Firefox, Internet Explorer, Safari (Mac Only), & Chrome

    Read the article

  • Installed Ubuntu 14.04 LTS

    - by user291729
    On my laptop, which came pre-installed with Windows 8.1. I felt I needed to see the competition for myself to establish which was the better OS, so I followed the usual channels to dual boot. All seemed fine, and I accessed Ubuntu with no issues after selecting it from the OS selection menu. I should add that the boot method was changed to legacy. However, since using Ubuntu, I no longer have the ability to select the OS: the laptop simply boots straight into Ubuntu. I therefore attempted to access the recovery options, only it appears the Windows 8 bootloader has somehow been corrupted, as I am now told to use the Windows 8 recovery disc (which, as Windows was pre-installed, I do not have). Left with no other alternative, I have scoured these forums without success, and so I am hoping someone in the know (or who has experienced similar) can help. I have tried boot repair again without success. On rebooting I am only presented with a basic splash screen asking me to select Ubuntu, Memtest, Windows 8 Recovery or Windows 8 Bootloader (the bootloader options again require that I insert the disc). I have tried: Code: cat /boot/grub/grub.cfg df -h sudo fdisk -l cat /proc/partitions
    # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi if [ "${next_entry}" ] ; then set default="${next_entry}" set next_entry= save_env next_entry set boot_once=true else set default="0" fi if [ x"${feature_menuentry_id}" = xy ]; then menuentry_id_option="--id" else menuentry_id_option="" fi export menuentry_id_option if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { if [ x$feature_all_video_module = xy ]; then insmod all_video else insmod efi_gop insmod efi_uga insmod ieee1275_fb insmod vbe insmod vga insmod video_bochs insmod video_cirrus fi } if [ x$feature_default_font_path = xy ] ; then font=unicode else insmod part_gpt insmod ext2 set root='hd0,gpt9' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt9 --hint-efi=hd0,gpt9 --hint-baremetal=ahci0,gpt9 d2f10f36-e3bb-4d83-a9b8-5d456fc454ad else search --no-floppy --fs-uuid --set=root d2f10f36-e3bb-4d83-a9b8-5d456fc454ad fi font="/usr/share/grub/unicode.pf2" fi if loadfont $font ; then set gfxmode=800x600 load_video insmod gfxterm set locale_dir=$prefix/locale set lang=en_GB insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ] ; then set timeout=-1 else if [ x$feature_timeout_style = xy ] ; then set timeout_style=menu set timeout=20 # Fallback normal timeout code in case the timeout_style feature is # unavailable.
else set timeout=20 fi fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="${1}" if [ "${1}" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ "${recordfail}" != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-d2f10f36-e3bb-4d83-a9b8-5d456fc454ad' { recordfail load_video gfxmode $linux_gfx_mode insmod gzio insmod part_gpt insmod ext2 set root='hd0,gpt9' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt9 --hint-efi=hd0,gpt9 --hint-baremetal=ahci0,gpt9 d2f10f36-e3bb-4d83-a9b8-5d456fc454ad else search --no-floppy --fs-uuid --set=root d2f10f36-e3bb-4d83-a9b8-5d456fc454ad fi linux /boot/vmlinuz-3.13.0-29-generic root=UUID=d2f10f36-e3bb-4d83-a9b8-5d456fc454ad ro vga=789 quiet quiet splash $vt_handoff initrd /boot/initrd.img-3.13.0-29-generic } submenu 'Advanced options for Ubuntu' $menuentry_id_option 'gnulinux-advanced-d2f10f36-e3bb-4d83-a9b8-5d456fc454ad' { menuentry 'Ubuntu, with Linux 3.13.0-29-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-29-generic-advanced-d2f10f36-e3bb-4d83-a9b8-5d456fc454ad' { recordfail load_video gfxmode $linux_gfx_mode insmod gzio insmod part_gpt insmod ext2 set root='hd0,gpt9' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt9 --hint-efi=hd0,gpt9 --hint-baremetal=ahci0,gpt9 d2f10f36-e3bb-4d83-a9b8-5d456fc454ad else search --no-floppy --fs-uuid --set=root d2f10f36-e3bb-4d83-a9b8-5d456fc454ad fi echo 'Loading Linux 3.13.0-29-generic ...' linux /boot/vmlinuz-3.13.0-29-generic root=UUID=d2f10f36-e3bb-4d83-a9b8-5d456fc454ad ro vga=789 quiet quiet splash $vt_handoff echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.13.0-29-generic } menuentry 'Ubuntu, with Linux 3.13.0-29-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-29-generic-recovery-d2f10f36-e3bb-4d83-a9b8-5d456fc454ad' { recordfail load_video insmod gzio insmod part_gpt insmod ext2 set root='hd0,gpt9' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt9 --hint-efi=hd0,gpt9 --hint-baremetal=ahci0,gpt9 d2f10f36-e3bb-4d83-a9b8-5d456fc454ad else search --no-floppy --fs-uuid --set=root d2f10f36-e3bb-4d83-a9b8-5d456fc454ad fi echo 'Loading Linux 3.13.0-29-generic ...' linux /boot/vmlinuz-3.13.0-29-generic root=UUID=d2f10f36-e3bb-4d83-a9b8-5d456fc454ad ro recovery nomodeset vga=789 quiet echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-3.13.0-29-generic } menuentry 'Ubuntu, with Linux 3.13.0-24-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-24-generic-advanced-d2f10f36-e3bb-4d83-a9b8-5d456fc454ad' { recordfail load_video gfxmode $linux_gfx_mode insmod gzio insmod part_gpt insmod ext2 set root='hd0,gpt9' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt9 --hint-efi=hd0,gpt9 --hint-baremetal=ahci0,gpt9 d2f10f36-e3bb-4d83-a9b8-5d456fc454ad else search --no-floppy --fs-uuid --set=root d2f10f36-e3bb-4d83-a9b8-5d456fc454ad fi echo 'Loading Linux 3.13.0-24-generic ...' linux /boot/vmlinuz-3.13.0-24-generic root=UUID=d2f10f36-e3bb-4d83-a9b8-5d456fc454ad ro vga=789 quiet quiet splash $vt_handoff echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.13.0-24-generic } menuentry 'Ubuntu, with Linux 3.13.0-24-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-24-generic-recovery-d2f10f36-e3bb-4d83-a9b8-5d456fc454ad' { recordfail load_video insmod gzio insmod part_gpt insmod ext2 set root='hd0,gpt9' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt9 --hint-efi=hd0,gpt9 --hint-baremetal=ahci0,gpt9 d2f10f36-e3bb-4d83-a9b8-5d456fc454ad else search --no-floppy --fs-uuid --set=root d2f10f36-e3bb-4d83-a9b8-5d456fc454ad fi echo 'Loading Linux 3.13.0-24-generic ...' linux /boot/vmlinuz-3.13.0-24-generic root=UUID=d2f10f36-e3bb-4d83-a9b8-5d456fc454ad ro recovery nomodeset vga=789 quiet echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.13.0-24-generic } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry 'Memory test (memtest86+)' { insmod part_gpt insmod ext2 set root='hd0,gpt9' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt9 --hint-efi=hd0,gpt9 --hint-baremetal=ahci0,gpt9 d2f10f36-e3bb-4d83-a9b8-5d456fc454ad else search --no-floppy --fs-uuid --set=root d2f10f36-e3bb-4d83-a9b8-5d456fc454ad fi knetbsd /boot/memtest86+.elf } menuentry 'Memory test (memtest86+, serial console 115200)' { insmod part_gpt insmod ext2 set root='hd0,gpt9' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt9 --hint-efi=hd0,gpt9 --hint-baremetal=ahci0,gpt9 d2f10f36-e3bb-4d83-a9b8-5d456fc454ad else search --no-floppy --fs-uuid --set=root d2f10f36-e3bb-4d83-a9b8-5d456fc454ad fi linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry 'Windows Recovery Environment (loader) (on /dev/sda2)' --class windows --class os $menuentry_id_option 'osprober-chain-7A6A69D66A698FA5' { insmod part_gpt insmod ntfs set root='hd0,gpt2' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 --hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2 7A6A69D66A698FA5 else search --no-floppy --fs-uuid --set=root 7A6A69D66A698FA5 fi drivemap -s (hd0) ${root} chainloader +1 } menuentry 'Windows 8 (loader) (on /dev/sda3)' --class windows --class os $menuentry_id_option 'osprober-chain-8C88-80F7' { insmod part_gpt insmod fat set root='hd0,gpt3' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt3 
--hint-efi=hd0,gpt3 --hint-baremetal=ahci0,gpt3 8C88-80F7 else search --no-floppy --fs-uuid --set=root 8C88-80F7 fi drivemap -s (hd0) ${root} chainloader +1 } set timeout_style=menu if [ "${timeout}" = 0 ]; then set timeout=10 fi ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/30_uefi-firmware ### ### END /etc/grub.d/30_uefi-firmware ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f ${config_directory}/custom.cfg ]; then source ${config_directory}/custom.cfg elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ###
john@john-SVE1713Y1EB:~$ ^C
john@john-SVE1713Y1EB:~$ ^C
john@john-SVE1713Y1EB:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda9 84G 7.1G 73G 9% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 3.9G 4.0K 3.9G 1% /dev
tmpfs 794M 1.4M 793M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 3.9G 80K 3.9G 1% /run/shm
none 100M 52K 100M 1% /run/user
/dev/sdc1 7.5G 2.2G 5.4G 29% /media/john/DYLANMUSIC
/dev/sr0 964M 964M 0 100% /media/john/Ubuntu 14.04 LTS amd64
/dev/sdb1 1.9T 892G 972G 48% /media/john/Storage Main
WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x4e2ccf75
Device Boot Start End Blocks Id System
/dev/sda1 1 1953525167 976762583+ ee GPT
Partition 1 does not start on physical sector boundary.
Disk /dev/sdc: 8011 MB, 8011120640 bytes
41 heads, 41 sectors/track, 9307 cylinders, total 15646720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc3072e18
Device Boot Start End Blocks Id System
/dev/sdc1 8064 15646719 7819328 b W95 FAT32
Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc7d968ff
Device Boot Start End Blocks Id System
/dev/sdb1 64 3907029119 1953514528 7 HPFS/NTFS/exFAT
major minor #blocks name
8 0 976762584 sda
8 1 266240 sda1
8 2 1509376 sda2
8 3 266240 sda3
8 4 131072 sda4
8 5 841012780 sda5
8 6 358400 sda6
8 7 35376128 sda7
8 8 1024 sda8
8 9 89501696 sda9
8 10 8337408 sda10
11 0 987136 sr0
8 32 7823360 sdc
8 33 7819328 sdc1
8 16 1953514584 sdb
8 17 1953514528 sdb1
I am no expert on this and I'm at a loss as to how to correct it without having to re-format everything and reinstall Windows 8. However, if I try using Ubuntu again, there is the risk that this problem may come back. Again, I did not do anything manually - the installer did everything (with the exception of changing the boot to Legacy to allow the booting of another bootloader). The LiveCD works but doesn't give me the options that I've seen described here; as mentioned earlier, boot recovery only presents the options listed above.
Also, this fails to load via USB (possibly because the HDD comes before USB in the boot order?). Being used to a Windows environment, the Ubuntu (and Linux) environment is a dive to a less-than-comfortable depth at present (but one I fully intend to get to grips with - especially as commands are more commonly run via the Terminal). I very much appreciate the help with this, guys.
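    One check that seems worth making (a sketch only, assuming a stock Ubuntu install): confirm whether the current session booted in UEFI or legacy mode, since a machine that shipped with Windows 8.1 pre-installed almost always boots Windows via UEFI, and switching the firmware to legacy is a common reason the Windows entry disappears from the boot menu.

    # If this directory exists, the running session booted via UEFI;
    # if it is absent, the session booted in legacy/BIOS mode.
    [ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "Legacy/BIOS boot"

    # On a UEFI boot, list the firmware's boot entries; a surviving
    # "Windows Boot Manager" entry suggests the Windows loader is intact
    # and is simply not offered by the legacy-mode boot menu.
    sudo efibootmgr -v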

    Read the article

  • Importing PKCS#12 (.p12) files into Firefox From the Command Line

    - by user11165
    I’ve posted this question up on the #Ubuntu and #Firefox forums, and I could really do with some help. Does anyone know where I could look, or can anyone help with the answer? I’m hoping the power of social media will come through… I have a need to perform the following action: Firefox 3.6.x: Quote: open Edit - Preferences - Advanced - Encryption - View Certificates - Your Certificates - Import However, I need the same functionality from the bash command line. So far I’ve established that the following command is supposed to be used: Quote: certutil -A -t “u,u,u” -d /home/df001/.mozilla/firefox/qe5y5lht.tc.default/ -n “mycert” -i client.p12 This executes with no issues; however, the certificate doesn’t show up in any Firefox certificate store. I have noted that prior to running this command, I had cert8.db, key3.db and secmod.db files in the above folder. After running the command, certutil seems to have created cert9.db, key4.db and pkcs12.txt files. Listing the contents using the command: Quote: certutil -L -d sql:/home/df001/.mozilla/firefox/qe5y5lht.tc.default/ does seem to confirm that my attempts at importing files into a certificate store of some kind have worked, because I get: Quote:
    Certificate Nickname                        Trust Attributes (SSL,S/MIME,JAR/XPI)
    Thawte SSL CA                               „
    Go Daddy Secure Certification Authority     „
    Thawte SGC CA                               „
    Entrust Certification Authority - L1C       „
    My Nero                                     CT,C,c
    mynero                                      P„
    davidfield - Internet Widgits Pty Ltd       u,u,u
    So, having tried this and having headed back over to the www, I came across this command: Quote: pk12util -d /home/df001/.mozilla/firefox/qe5y5lht.tc.default/ -i client.p12 -n “David Field” -P “cert8.db” This again appears to be importing something somewhere; however, again, viewing certs from the Firefox interface doesn’t show the imported cert. I’m surmising, from reading around, that certutil and pk12util are creating a new NSS database which Firefox isn’t reading. So my question is: how can I import the p12 cert from the command line so that it displays in the Firefox certificate manager interface? Why have I posted this here? Why not post on the Firefox forum? Well, I will copy and post the same question there as well, but the ability to use the command line to do this is important, as I have potentially 2000 machines which will need a user cert imported into Firefox via a p12 file. I need to do this in the form of a script; I thought the hard part was going to be making the p12 file from the Microsoft 2003 CA, but it turns out that's easy. I can’t just import via the GUI and copy over cert8.db x 2000, and I can’t ask users to use the CA web interface as it's for VPN access - the users are off site, and they need the VPN to get to the cert server. Is there anyone out there who can help? By the way, I don't have the Tor button installed.
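    Following that surmise, one avenue worth testing (a sketch only, based on NSS's dbm:/sql: database prefixes, not a confirmed fix): certutil and pk12util treat a -d argument prefixed with dbm: as the legacy cert8.db/key3.db database that Firefox 3.6 reads, while sql: (or, on newer NSS builds, a bare path) targets the cert9.db/key4.db format that it ignores.

    # Import the .p12 into the legacy (cert8.db) database that Firefox 3.6 reads;
    # the profile path is the one from the question and would need substituting.
    pk12util -i client.p12 -d dbm:/home/df001/.mozilla/firefox/qe5y5lht.tc.default/

    # Then list the same legacy database to confirm the import:
    certutil -L -d dbm:/home/df001/.mozilla/firefox/qe5y5lht.tc.default/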

    Read the article

  • Keep taking the tablets

    - by Roger Hart
    A guest editorial for the SimpleTalk newsletter. So why would Red Gate build an iPad game? Is it just because tablet devices are exciting and cool? OK, maybe a little. Mostly, it was seeing that the best existing tablet and smartphone apps do simple, intuitive things, using simple, intuitive interfaces to solve single problems. That's pretty close to what we call our own "intuitively simple" approach to software. Tablets and mobile could be fantastic for us, if we can identify those problems that a tablet device can solve. How do you create THE next tool for a completely new technology? We're glad we don't face that problem every day, but it's pretty exciting when we do. We figure we should learn by doing. We created "MobileFoo" (a Red Gate company), we picked up some shiny Apple tech, and we got to grips with Objective-C and life in the App Store ecosystem. The result so far is an iPad game: Stacks and Heaps. It's Rob and Marine's spin on Snakes and Ladders. Instead of snakes we have unhandled exceptions, a blue screen of death, and other hazards. We wanted something compellingly geeky on mobile, and we're pretty sure we've got it. It's trudging through App Store approval as we speak, but if you want to get an idea of what it is like to switch from .NET to Objective-C, take a look at Rob's post; Android and iOS are quite a culture change for Windows developers. So, to give them a feel for the problems real users might have, we needed some real users - we offered our colleagues subsidised tablets. The only conditions were that they get used at work, and we get the feedback. Seeing tablets around the office is starting to give us some data points: Is typing the bottleneck? Will tablets ever cut it as text-entry devices, and could we fix it? Is mobile working held up by the pain of connecting to work LANs? How about security? Multi-tasking will let tablets do more. They're small, easy to use, almost instant to switch on, and connect by Wi-Fi. There's plenty on that list to make a sysadmin twitchy. We'll find out as people spend more time working with these devices, and we'd love to hear what you think about tablet devices too. (Comments are filtered, what with the spam.)

    Read the article

  • How valuable are you to your organization?

    - by Lance Shaw
    I don't know about you, but I find it easy to get bogged down with the daily list of tasks and deliverables. We all have lots to do and it all seems to be due tomorrow. If you are reading this blog, then your to-do list is almost certainly filled with tasks related to the management, processing and publishing of information. As we get mired in the daily routine of making sure that the content management needs of the organization are met, we can easily lose sight of the value that we bring. After all, if information and content are the lifeblood of our organizations, then surely maintaining the healthy flow of that information has real value. But how can you measure that value and bring it forward on your résumé or your list of achievements in time for your next performance review? The AIIM organization has spent a lot of time recently researching the value of certification for "information professionals". When it comes to enterprise content management (ECM) there are many areas of specialization, including records management, content archivist, digital asset manager, content librarian and more. Specialization can clearly drive up your value, but it can also lock you into a narrow niche area of focus. AIIM has found that what companies also need is someone who can apply their knowledge of how information is managed within the operational scope of the business in order to drive real, measurable strategic value. When you can showcase the value of a broader, business-wide mindset to your management, you have more opportunity to make professional progress and drive real growth where it counts: your paycheck. We here on the Oracle WebCenter team partnered with AIIM on the research they performed around the value of an information professional certification program. In a webinar this week, Doug Miles of AIIM and I will be talking about the results of that recent survey and what it is going to mean in the future to be recognized as a "Certified Information Professional" (CIP). Oracle sponsored this research to help individuals and companies understand the value of enterprise content management and what it means across the entire organization. I hope you will join us. If any of us were stopped in the street and asked about it, I bet most of us would think of ourselves as an "Information Professional". Now we have a way to actually prove it! There's only one downside that I can see... you will have to get your business cards updated to include the "CIP" acronym after your name. I think you will agree that is a price worth paying!

    Read the article

  • Overload Avoidance

    - by mikef
    A little under a year ago, Matt Simmons wrote a rather reflective article about his terrifying brush with stress-induced ill health. SysAdmins and DBAs have always been prime victims of work-related stress, but I wonder if that predisposition is perhaps getting worse, despite the best efforts of Matt and his trusty side-kick, HR. The constant pressure from shareholders and CFOs to 'streamline' the workforce is partially to blame, but the more recent culprit is technology itself. I can't deny that the rise of technologies like virtualization, PowerCLI, PowerShell, and a host of others has been a tremendous boon. As a result, individual IT professionals are now able to handle more and more tasks and manage increasingly large and complex environments. But, without a doubt, this is a two-edged sword; the reward for competence is invariably more work. Unfortunately, SysAdmins play such a pivotal role in modern business that it's easy to see how they can very quickly become swamped in conflicting demands coming from different directions. However, that doesn't justify the ridiculous hours many are asked (or volunteer) to devote to their work. Admirable though their commitment is, it isn't healthy for them, it sets a dangerous expectation, and eventually something will snap. There are times when everyone needs to step up to the plate outside of 'normal' work hours, but that time isn't all the time. Naturally, with all that lovely technology, you can automate more and more of those tricky tasks to keep on top of the workload, but you are still only human. Clever though you may be, there is a very real limit to how far technology can take you. I'm not suggesting that you avoid these technologies, or deliberately aim for mediocrity; I'm just saying that you need to be more than just technically skilled (and Wesley Nonapeptide riffs on and around this topic in his excellent 'Telepathic Robot Drones' blog post). You need to be able to manage expectations, not just Exchange. Specifically, that means your own expectations of what you are capable of, because those come before everyone else's. After all, how can you keep your work-life balance under control if you're the one setting the bar way too high? Talking to your manager, or discussing issues with your users, is only going to be productive if you have some facts to work with. "Know Thyself" is the first law of managing work overload, and this is obviously a skill which people develop over time; the fact that veteran SysAdmins exist at all is testament to this. I'd just love to know how you get to that point. Personally, I'm using RescueTime to keep myself honest, but I'm open to recommendations for better methods. Do you track your own time, do you have an intuitive sense of what is possible, or do you just rely on someone else to handle that all for you? Cheers, Michael

    Read the article

< Previous Page | 369 370 371 372 373 374 375 376 377 378 379 380  | Next Page >