Search Results

Search found 16971 results on 679 pages for 'blogs'.


  • Adam Bien Testimonial at GlassFish Community Event, JavaOne 2012

    - by arungupta
    Adam Bien, a self-employed enterprise Java consultant, author of five-star-rated books, presenter, Java Champion, NetBeans Dream Team member, JCP member, and JCP Expert Group member for several Java EE groups, is one of the most vocal advocates of the Java EE platform. His code-driven workshops using Java EE 6, NetBeans, and GlassFish have won accolades at developer conferences around the world. Adam has been using GlassFish for all his projects for many years. One of the reasons he uses GlassFish is his high confidence that any Java EE compliance bug will be fixed quickly. He finds GlassFish a very capable application server for fast development and continuous deployment, and his own media properties run on GlassFish with an Apache front-end. Good documentation, accessible source code, and REST/Web/CLI administration and monitoring facilities are some of his other reasons to pick GlassFish. He presented at the recently concluded GlassFish community event at JavaOne 2012. You can watch the video (with transcript) below showing him in full action:

    Read the article

  • Fibonacci numbers in F#

    - by BobPalmer
    As you may have gathered from some of my previous posts, I've been spending some quality time at Project Euler. Normally I do my solutions in C#, but since I have also started learning F#, it only made sense to switch over to F# to get my math coding fix. This week's post is just a small snippet - specifically, a simple function to return a Fibonacci number given its place in the sequence. One popular example uses recursion:

        let rec fib n = if n < 2 then 1 else fib (n-2) + fib (n-1)

    While this is certainly elegant, the recursion is absolutely brutal on performance. So I decided to spend a little time and find an option that achieved the same functionality, but used a tail-recursive function. And since this is F#, I wanted to make sure I did it without the use of any mutable variables. Here's the solution I came up with:

        let rec fib n1 n2 c =
            if c = 1 then
                n2
            else
                fib n2 (n1+n2) (c-1);;

        let GetFib num =
            (fib 1 1 num);;

        printfn "%A" (GetFib 1000);;

    Essentially, this function works through the sequence moving forward, passing the two most recent numbers and a counter to the recursive calls until it has achieved the desired number of iterations. At that point, it returns the latest Fibonacci number. Enjoy!
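    For comparison, here is the same forward-walking approach in C#. This is my own sketch, not part of the original post; it assumes System.Numerics.BigInteger, since the 1000th Fibonacci number overflows any fixed-width integer type:

        using System;
        using System.Numerics;

        class FibDemo
        {
            // Returns the n-th Fibonacci number (1-based, matching the F# version above:
            // GetFib 1 = 1, GetFib 2 = 2, GetFib 3 = 3, GetFib 4 = 5, ...).
            static BigInteger GetFib(int n)
            {
                BigInteger n1 = 1, n2 = 1;
                for (int c = n; c > 1; c--)
                {
                    // Slide the window forward: (n1, n2) -> (n2, n1 + n2)
                    BigInteger next = n1 + n2;
                    n1 = n2;
                    n2 = next;
                }
                return n2;
            }

            static void Main()
            {
                Console.WriteLine(GetFib(1000));
            }
        }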

    Read the article

  • Oracle ZFSSA Hybrid Storage Pool Demo

    - by Darius Zanganeh
    The ZFS Hybrid Storage Pool (HSP) has been around since the ZFSSA first launched. It is one of the main contributors to the high performance we see on the Oracle ZFSSA, both in benchmarks and in many production environments. Below is a short video I made to show, at a high level, just how impactful this HSP is on storage performance. We squeeze a ton of performance out of our drives with our unique use of cache, write-optimized SSD, and read-optimized SSD. Many have written and blogged about this technology; here it is in action. Demo of the Oracle ZFSSA Hybrid Storage Pool and how it speeds up workloads.

    Read the article

  • Bluetooth drivers for Dell Latitude D830

    - by blomqvist
    I hadn't been using Bluetooth on my D830 before, but then I got myself a Bluetooth mouse in order to get rid of the receiver for a wireless mouse. And then I discovered that Bluetooth no longer worked after I installed Win7. Browsing the Dell support pages for a driver (by computer model, etc.) found me one, but that driver does not work: installation looks OK and everything, but no devices are found. I kept looking and found this driver, also at Dell's site: http://support.us.dell.com/support/downloads/download.aspx?c=us&l=en&s=gen&releaseid=R231570&formatcnt=1&libid=0&typeid=-1&dateid=-1&formatid=-1&fileid=333170 And that one worked! Problem solved, and I am now happily using my Bluetooth mouse instead.

    Read the article

  • Improving Plant Reliability and Uptime with Oracle Asset Lifecycle

    Successful factories around the world leverage information to drive their production and supply chains. New tools are available today to further accelerate the data collection, analysis, contextualization, and collaboration available to the various stakeholders involved in the manufacturing process. Oracle Manufacturing Operations Center (MOC) addresses the factory's need for accurate and timely information about product and process quality, insight into shop floor operations, and the performance of production assets. It solves the complex problem of connecting fragmented, disconnected shop floor data to the business context of your ERP, and provides a solid foundation for running Continuous Improvement (CI) programs such as Lean and Six Sigma.

    Read the article

  • What Works in Data Integration?

    - by dain.hansen
    TDWI recently put out this paper on "What Works in Data Integration". I invite you especially to take a look at the section on "Accelerating your Business with Real-time Data Integration" and the DIRECTV case study. The article discusses some of the technology considerations for BI/DW and how data integration plays a role in delivering timely, accessible, and high-quality data. It goes on to outline the three key requirements, high performance, low impact, and reliability, and how meeting them can translate into faster results. The DIRECTV webinar is something you definitely want to take a look at; you'll hear how DIRECTV successfully transformed their data warehouse investments into a competitive advantage with Oracle GoldenGate.

    Read the article

  • How do I Fix SQL Server error: Order by items must appear in the select list if SELECT DISTINCT is specified

    - by Paula DiTallo
    There's more than one reason why you may receive this error, but the most common is that your ORDER BY column list doesn't correlate with the values specified in your SELECT column list when you happen to be using DISTINCT. This is usually easy to spot and resolve. A more obscure cause is that you are using a function around one of the selected columns, but omitting the same function around the same column name in the ORDER BY clause. Here's an example:

        select distinct upper(columnA)
        from [evaluate].[testTable]
        order by columnA asc

    This statement will cause the "Order by items must appear in the select list if SELECT DISTINCT is specified." error to appear, not because DISTINCT was used, but because the ORDER BY clause did not apply the upper() function around columnA. To correct this error, do this:

        select distinct upper(columnA)
        from [evaluate].[testTable]
        order by upper(columnA) asc

    Read the article

  • Are You In The Know About Knowledge?

    - by [email protected]
    "Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information on it." To me, this simple and elegant quote from the great English author Samuel Johnson is a reflection of Oracle's knowledge base strategy. The knowledge base in the My Oracle Support portal (https://support.oracle.com) hosts nearly a half million documents, including how-to instructions, problem-solution descriptions, code samples, FAQs, critical alerts, technical whitepapers, and so on. AutoVue's footprint in the Oracle knowledge base - although relatively small at just around 400 documents - is a steadily-expanding assortment of valuable info. This information is designed to complement what you have already learned from the AutoVue documentation, or in some cases, to examine topics not yet covered in the documentation. Similar to the documentation, the knowledge base is one of the highest-value self-service avenues, since it delivers answers in real-time and is driven by the topics most relevant to customers. There are many different ways to leverage the AutoVue knowledge content, or what Oracle often refers to as "KM Notes": 1. Knowledge Browser: To browse the knowledge hierarchy, click on the 'Knowledge' tab at the top of the My Oracle Support webpage. In the list of product areas at the left, click on 'More Applications', then on 'Oracle AutoVue'. From here, you can either view the full set of KM Notes under the AutoVue product family (AutoVue, VueLink, Web Services, Document Print Services, etc) by clicking on 'All of Oracle AutoVue', or you can drill down further by clicking on 'Enterprise Visualization'. 2. Search: To execute simple keyword searches, use the Search bar at the top-right of the My Oracle Support webpage: 3. Advanced Search: Beside the same Search bar at the top-right of the My Oracle Support webpage, click on the 'Advanced' link in order to increase your control over the search string as well as the product to search against: 4. In your Dashboard: By clicking on the 'Customize' link at the top-right of the Dashboard page in My Oracle Support, you can drag & drop multiple "Knowledge Articles" widgets onto your dashboard. Then, click on the pencil icon at the top-right of the widget to customize it by product. This allows you to keep an active monitor on the most recently updated KM Notes across any product: 5. During SR Creation: As you submit a new Service Request, after entering the product information, SR title, and SR description, you will be presented with a frame at the left containing KM Note suggestions based on the information entered: Let Oracle know what you think! If you like or dislike an article, or would like to comment on how easy/difficult it was to find the article, click on the "Rate this document" link at the bottom of the KM Note. Similarly, during SR creation if one of the suggested KM Notes resolves your question/issue, you can click the "This article solved my problem" link at the bottom of the page. I hope these approaches improve your ability find knowledge content within the My Oracle Support portal, and I encourage you to continue to build your knowledge to further your success with the AutoVue product family.

    Read the article

  • C#: Why Decorate When You Can Intercept

    - by James Michael Hare
    We've all heard of the old Decorator Design Pattern (here) or used it at one time or another, either directly or indirectly. A decorator is a class that wraps a given abstract class or interface and presents the same (or a superset) public interface, but "decorated" with additional functionality. As a really simplistic example, consider System.IO.BufferedStream: it is itself a descendent of System.IO.Stream and wraps the given stream with buffering logic while still presenting System.IO.Stream's public interface:

        Stream buffStream = new BufferedStream(rawStream);

    Now, let's take a look at a custom-code example. Let's say that we have a class in our data access layer that retrieves a list of products from a database:

        // a class that handles our CRUD operations for products
        public class ProductDao
        {
            ...

            // a method that would retrieve all available products
            public IEnumerable<Product> GetAvailableProducts()
            {
                var results = new List<Product>();

                // must create the connection
                using (var con = _factory.CreateConnection())
                {
                    con.ConnectionString = _productsConnectionString;
                    con.Open();

                    // create the command
                    using (var cmd = _factory.CreateCommand())
                    {
                        cmd.Connection = con;
                        cmd.CommandText = _getAllProductsStoredProc;
                        cmd.CommandType = CommandType.StoredProcedure;

                        // get a reader and pass back all results
                        using (var reader = cmd.ExecuteReader())
                        {
                            while (reader.Read())
                            {
                                results.Add(new Product
                                {
                                    Name = reader["product_name"].ToString(),
                                    ...
                                });
                            }
                        }
                    }
                }

                return results;
            }
        }

    Yes, you could use EF or any of a myriad of other choices for this sort of thing, but the germane point is that you have some operation that takes a non-trivial amount of time. What if, during the production day, I notice that my application is performing slowly and I want to see how much of that slowness is in the query versus my code? Well, I could easily wrap the logic block in a System.Diagnostics.Stopwatch and log the results to log4net or another logging flavor of choice:

        // a class that handles our CRUD operations for products
        public class ProductDao
        {
            private static readonly ILog _log = LogManager.GetLogger(typeof(ProductDao));
            ...

            // a method that would retrieve all available products
            public IEnumerable<Product> GetAvailableProducts()
            {
                var results = new List<Product>();
                var timer = Stopwatch.StartNew();

                // must create the connection
                using (var con = _factory.CreateConnection())
                {
                    con.ConnectionString = _productsConnectionString;

                    // and all that other DB code...
                    ...
                }

                timer.Stop();

                if (timer.ElapsedMilliseconds > 5000)
                {
                    _log.WarnFormat("Long query in GetAvailableProducts() took {0} ms",
                        timer.ElapsedMilliseconds);
                }

                return results;
            }
        }

    In my eye, this is very ugly. It violates the Single Responsibility Principle (SRP), which says that a class should only ever have one responsibility, where responsibility is often defined as a reason to change. This class (and in particular this method) has two reasons to change: if the method of retrieving products changes, and if the method of logging changes. Well, we could "simplify" this using the Decorator Design Pattern (here). If we followed the pattern to the letter, we'd need to create a base decorator that implements the DAO's public interface and forwards to the wrapped instance. So let's assume we break out the ProductDao interface into IProductDao using your refactoring tool of choice (Resharper is great for this). Now ProductDao will implement IProductDao and get rid of all logging logic:

        public class ProductDao : IProductDao
        {
            // this reverts back to the original version except for the interface added
        }

    And we create the base decorator that also implements the interface and forwards all calls:

        public class ProductDaoDecorator : IProductDao
        {
            private readonly IProductDao _wrappedDao;

            // constructor takes the dao to wrap
            public ProductDaoDecorator(IProductDao wrappedDao)
            {
                _wrappedDao = wrappedDao;
            }

            ...

            // and then all methods just forward their calls
            public virtual IEnumerable<Product> GetAvailableProducts()
            {
                return _wrappedDao.GetAvailableProducts();
            }
        }

    This defines our base decorator; then we can create decorators that add items of interest, and for any methods we don't decorate, we'll get the default behavior, which just forwards the call to the wrapped instance in the base decorator:

        public class TimedThresholdProductDaoDecorator : ProductDaoDecorator
        {
            private static readonly ILog _log =
                LogManager.GetLogger(typeof(TimedThresholdProductDaoDecorator));

            public TimedThresholdProductDaoDecorator(IProductDao wrappedDao)
                : base(wrappedDao)
            {
            }

            ...

            public override IEnumerable<Product> GetAvailableProducts()
            {
                var timer = Stopwatch.StartNew();

                var results = base.GetAvailableProducts();

                timer.Stop();

                if (timer.ElapsedMilliseconds > 5000)
                {
                    _log.WarnFormat("Long query in GetAvailableProducts() took {0} ms",
                        timer.ElapsedMilliseconds);
                }

                return results;
            }
        }

    Well, it's a bit better. Now the logging is in its own class, and the database logic is in its own class. But we've essentially multiplied the number of classes: we now have three classes and one interface! Now if you want to do that same logging decoration on all your DAOs, imagine the code bloat! Sure, you can simplify and avoid creating the base decorator, or chuck it all and just inherit directly. But regardless, all of these have the problem of tying the logging logic into the code itself. Enter the interceptors. Things like this are, to me, a perfect example of when it's good to write an interceptor using your class library of choice. Sure, you could design your own perfectly generic decorator with delegates and all that, but personally I'm a big fan of Castle's DynamicProxy (here), which is actually used by many projects including Moq. What DynamicProxy allows you to do is intercept calls into any object by wrapping it with a proxy on the fly that intercepts the method and allows you to add functionality. Essentially, the code would now look like this using DynamicProxy:

        // Note: I like hiding DynamicProxy behind the scenes so users
        // don't have to explicitly add a reference to Castle's libraries.
        public static class TimeThresholdInterceptor
        {
            // Our logging handle
            private static readonly ILog _log = LogManager.GetLogger(typeof(TimeThresholdInterceptor));

            // Handle to Castle's proxy generator
            private static readonly ProxyGenerator _generator = new ProxyGenerator();

            // generic form for those who prefer it
            public static TInterface Create<TInterface>(object target, TimeSpan threshold)
                where TInterface : class
            {
                return (TInterface)Create(typeof(TInterface), target, threshold);
            }

            // form that uses a Type instead
            public static object Create(Type interfaceType, object target, TimeSpan threshold)
            {
                return _generator.CreateInterfaceProxyWithTarget(interfaceType, target,
                    new TimedThreshold(threshold));
            }

            // The interceptor that is created to intercept the interface calls.
            // Hidden as a private inner class so we're not exposing Castle libraries.
            private class TimedThreshold : IInterceptor
            {
                // The threshold as a positive timespan that triggers a log message.
                private readonly TimeSpan _threshold;

                // interceptor constructor
                public TimedThreshold(TimeSpan threshold)
                {
                    _threshold = threshold;
                }

                // Intercept functor for each method invocation
                public void Intercept(IInvocation invocation)
                {
                    // time the method invocation
                    var timer = Stopwatch.StartNew();

                    // the Castle magic that tells the method to go ahead
                    invocation.Proceed();

                    timer.Stop();

                    // check if threshold is exceeded
                    if (timer.Elapsed > _threshold)
                    {
                        _log.WarnFormat("Long execution in {0} took {1} ms",
                            invocation.Method.Name,
                            timer.ElapsedMilliseconds);
                    }
                }
            }
        }

    Yes, it's a bit longer, but notice that this class ONLY deals with logging long method calls (no DAO interface leftovers), and it can be used to time ANY class that has an interface or virtual methods. Personally, I like to wrap and hide the usage of DynamicProxy and IInterceptor so that anyone who uses this class doesn't need to know to add a Castle library reference. As far as they are concerned, they're using my interceptor. If I change to a new library when a better one comes along, they're insulated. Now, all we have to do to use this is to tell it to wrap our ProductDao and it does the rest:

        // wraps a new ProductDao with a timing interceptor with a threshold of 5 seconds
        IProductDao dao = TimeThresholdInterceptor.Create<IProductDao>(
            new ProductDao(), TimeSpan.FromSeconds(5));

    Automatic decoration of all methods! You can even refine the proxy so that it only intercepts certain methods. This is ideal for so many things. These are just some of the interceptors we've dreamed up and use: log parameters and returns of methods to XML for auditing; block invocations to methods and return a default value (stubbing); throw an exception if certain methods are called (good for blocking access to deprecated methods); log the entrance and exit of a method and the duration; log a message if a method takes more than a given time threshold to execute. Whether you use DynamicProxy or some other technology, I hope you see the benefits this adds. Does it completely eliminate all need for the Decorator pattern? No, there may still be cases where you want to decorate a particular class with functionality that doesn't apply to the world at large. But for all those cases where you are using Decorator to add functionality that's truly generic, I strongly suggest you give this a try!
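    The author mentions that you can refine the proxy so that it only intercepts certain methods. As a hedged sketch of one way to do that with Castle DynamicProxy (this is not from the article; it assumes the IProxyGenerationHook interface as shipped in recent Castle.Core releases, and reuses the IProductDao/ProductDao types defined above):

        using System;
        using System.Reflection;
        using Castle.DynamicProxy;

        // Tells DynamicProxy which methods to proxy. Here: only methods starting with "Get".
        public class GetMethodsOnlyHook : IProxyGenerationHook
        {
            public bool ShouldInterceptMethod(Type type, MethodInfo methodInfo)
            {
                return methodInfo.Name.StartsWith("Get", StringComparison.Ordinal);
            }

            // Called for members that cannot be proxied; nothing to do in this sketch.
            public void NonProxyableMemberNotification(Type type, MemberInfo memberInfo) { }

            // Called once type inspection is complete; nothing to do in this sketch.
            public void MethodsInspected() { }
        }

        // A trivial pass-through interceptor, standing in for TimedThreshold above.
        public class NoOpInterceptor : IInterceptor
        {
            public void Intercept(IInvocation invocation)
            {
                invocation.Proceed();
            }
        }

    And wiring it up would look roughly like this:

        var generator = new ProxyGenerator();
        var options = new ProxyGenerationOptions(new GetMethodsOnlyHook());
        IProductDao dao = (IProductDao)generator.CreateInterfaceProxyWithTarget(
            typeof(IProductDao), new ProductDao(), options, new NoOpInterceptor());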

    Read the article

  • links for 2010-05-20

    - by Bob Rhubart
    @pevansgreenwood: People don't like change. (Or do they?) "Creating a culture that embraces change, means changing the way we think about and structure our organisations and our careers. It means rethinking the rules of enterprise IT." -- Peter Evans Greenwood (tags: enterprisearchitecture change innovation)

    Karim Berrah: After IRON MAN 2. "Nice demo of a robot serving a cup of coffee, from a Swiss based engineering company, NOSAKI, I visited last week. This movie is not a fiction (like IRON MAN 2) and is really powered by an Oracle Database." -- Karim Berrah (tags: oracle solaris ironman2 nosake)

    @myfear: Spring and Google vs. Java EE 6. "While Spring and Rod Johnson in particular have been extremely valuable in influencing the direction of Java (2)EE after the 1.4 release to the new, much more pragmatic world of Java EE 5, Spring has also caused polarization and fragmentation. Instead of helping forge the Java community together, it has sought to advance its own cause." -- Oracle ACE Director Markus Eisele (tags: google javaee spring oracleace java)

    Arup Nanda: Mining Listener Logs. Listener logs contain a wealth of information on security events. Oracle ACE Director Arup Nanda shows you how to create an external table to read the listener logs using simple SQL. (tags: otn oracle oracleace sql security)

    Read the article

  • Oracle Database 11g Underground Advice for Database Administrators, by April C. Sims

    - by alejandro.vargas
    Recently I received a request to review the book "Oracle Database 11g Underground Advice for Database Administrators" by April C. Sims. I was happy to have the opportunity to learn some details about the author; she is an active contributor to the Oracle DBA community through her blog, "Oracle High Availability". The book is a serious and interesting work. I think it provides a good study and reference guide for DBAs who want to understand and implement highly available environments. She starts by walking over the more general aspects and skills required of a DBA, and then goes on to explain the steps required to implement Data Guard, use RMAN, upgrade to 11g, etc.

    Read the article

  • When cooling in the server room is not enough: a Sun Cooling Door for the Database Machine

    - by Fekete Zoltán
    Because of the Database Machine's enormous performance, considerably less cooling is generally needed than if we were cooling a separate high-end server and separate high-end storage! If, however, the remaining cooling capacity of our server room is not sufficient, and we are not content with the "ordinary detergent" approach, then a further cooling trick is needed. The Sun Cooling Door models offer a solution for this, for example the 5200 and 5600 models.

    Read the article

  • Monitoring Database disk space

    - by Michael Freidgeim
    An article, "Data files: To Autogrow Or Not To Autogrow?", recommends NOT relying on auto-grow, because it causes delays at unplanned times. We should monitor database files (both data and log), and if they are close to maximum capacity, manually increase their size. However, the article doesn't give references for how to monitor the free space inside databases, so I've tried to work out how to do it. It can be done manually by executing sp_spaceused for the database in question, or with sp_SOS (which can be downloaded from http://searchsqlserver.techtarget.com/tip/Find-size-of-SQL-Server-tables-and-other-objects-with-stored-procedure). Alternatively, you can run SQL commands as suggested in http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=82359 by Michael Valentine Jones:

        select [FREE_SPACE_MB] =
            convert(decimal(12,2),
                round((a.size - fileproperty(a.name,'SpaceUsed'))/128.000, 2))
        from dbo.sysfiles a

    A more useful article, "Monitor database file sizes with SQL Server Jobs", describes how to set up that monitoring. Finally, I found the excellent article "Managing Database Data Usage With Custom Space Alerts", which can be followed even by support personnel without much DBA experience.
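    If you would rather poll that free-space number from application code than from an Agent job, a minimal C# sketch along the following lines should work (my own illustration, not from the article; the connection string is a placeholder, and the query is the same FILEPROPERTY-based one quoted above):

        using System;
        using System.Data.SqlClient;

        class FreeSpaceCheck
        {
            static void Main()
            {
                // Placeholder; point this at the database you want to check.
                const string connectionString =
                    "Server=.;Database=MyDatabase;Integrated Security=true";

                // Free space per file, in MB, via FILEPROPERTY as in the query above.
                const string query = @"
                    select a.name,
                           convert(decimal(12,2),
                               round((a.size - fileproperty(a.name,'SpaceUsed'))/128.000, 2))
                    from dbo.sysfiles a";

                using (var con = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(query, con))
                {
                    con.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0}: {1} MB free",
                                reader.GetString(0).TrimEnd(), reader.GetDecimal(1));
                        }
                    }
                }
            }
        }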

    Read the article

  • Prepping a conference

    - by Laurent Bugnion
    I have had the chance to talk at many conferences these past few years, and came up with a way to prepare for them which works really well for me. Most importantly, it makes it quite easy to overcome an emergency (for example if my laptop were to suddenly lose data). The whole code as well as the slides and other documents are in the cloud. I also use source control for my demos, so that I always have the latest and the greatest, but also a history of changes I made to my demos. Finally, I have a system of code snippets which works great, and I have often had very positive remarks from the audience about it.

    Putting everything in the cloud. The one thing I used to be most scared of was a sudden crash of my laptop and being unable to restore in time for a conference. Most conferences ask speakers to send slides a few days (or weeks...) in advance, but let's face it, we all have last-minute changes to our talks, and I always come to the conference with updated slides that I pass to the management team. The answer to that dilemma used to be working off memory sticks, and that worked not badly. However, last year I started putting all the documents relating to a conference in a DropBox folder, and that works great too. Obviously DropBox works only if you have connectivity, so if I, for instance, update slides while on an international flight, I cannot save to the cloud. The obvious answer to that is to back everything up on a memory stick... but I have to admit, I have been trusting my luck, working off my laptop HD and then syncing everything to the cloud after landing. Of course, on some US national flights you get WiFi on board, so in that case it is even simpler :) Usually after the conference is done, I remove the files from DropBox and copy them to their "final destination". They are backed up from there to BackBlaze, the great online backup service I am using routinely (I currently have about 90GB of data in BackBlaze).

    Outlining the presentations. I like to have a written outline of my presentations somewhere. I keep it simple: I just write the various sections of the presentation with timing. I guess it is a remnant of the time when I was a private pilot, using checklists for flight preparation. For example:

        Demo about designability 15' (0:37)
        Switch to Blend
        Open MainPage.xaml
        Create a DataTemplate
        ...

    Here I can immediately see during the presentation if I am taking too much time for my demo (0:37 is where I need to be when I am done with this section of the presentation, and 15' is the time that this particular section takes). I keep these sections reasonable; I don't detail every step of the preparation. Typically I have one such section for every 10-15 minutes of my talks. Yes, I am timing my presentations. I keep adjusting these numbers when I rehearse, and this really helps me feel more confident during the presentations. This is especially important for presentations that are long, like my MIX11 demo which clocked in at 57 minutes (I had a lot of stuff to show...). Such presentations are risky, because if anything goes wrong, you will have to cut stuff, so the answer to that is: rehearse, rehearse, and when you're done rehearsing, rehearse a little more. I also have a "Preparation" section where I outline what I need to do before a presentation. For instance:

        Preparation
        Reboot in VHD
        Make sure MSN and Twitter are not running.
        Open VS10 and load demo
        Open Blend and load demo
        Run the WP7 emulator
        ...

    I typically start preparing my laptop an hour before the talk, starting everything I need to start and then putting my laptop to sleep.

    Saving and printing the outline, timing. Printing is a real problem because it is really hard to find a printer at most conference venues, and also quite hard in hotels. To solve that, I simply write everything in OneNote (synced to the cloud; now you start to know what I like ;) and then I print it to a PDF (I use CutePDF Writer) that I save to my Kindle. During the presentation, I read the outline off the Kindle (I mostly just need a quick check to see how my timing is). For timing during the presentation, I use the free tool ChronoGPS on my Windows Phone 7, but of course any phone these days has a clock/chrono application. At some conferences they even have timers that the presenters can see, but they tend to count down and I prefer to count up... so I just use my own :)

    Source control for demos. For demos, I create a separate folder and use Mercurial as source control. Mercurial has the huge advantage (over SVN or TFS) of working offline too, so I can commit while on a plane, and all the history is saved. Then, when I have connectivity, I push everything to the cloud (I am using the fantastic Trunksapp.com for my private repositories). Here too the obvious downside is the risk of losing my last changes if my laptop crashes before I can push to the cloud, and here too the obvious answer would be to work from a memory stick... though I have to admit I haven't done that lately (except when I was writing Silverlight 4 Unleashed, where I was really paranoid...).

    And code snippets? I am one of those presenters who hates to type in front of an audience. I can type really fast (writing two books has this advantage: it really teaches you to touch type and be fast at it), but in the context of an audience, on a stage where it is often damn cold (an issue I have had a lot at past conferences; air conditioning can freeze your fingers and make it really hard to type), it doesn't work as well. I don't know about you, but I really dislike seeing a presentation where the speaker uses the backspace key more often than the others ;) To solve that, I like to have my code ready in snippets and drag them to the screen. Then I can spend time explaining each code snippet while highlighting portions of the code (always highlight what you talk about; the audience often doesn't even see the cursor and doesn't know where you are on the screen!). Over the years I have used various solutions for code snippets, and now I have one which works really well... if you take a few precautions! I use the Visual Studio Toolbox.

    Preparing the code snippets. You can store code snippets in the Toolbox for anything: XAML, C#, etc. I arrange the snippets in the order in which I need them, which is a great way to remember what comes next in the presentation. I also separate them by topic, to make them easier to find, for example when I switch to the slides and then back to the code. Remember that no matter how experienced you are, you will feel more nervous on stage than while you are preparing, so anything that makes it easier for you is going to be beneficial to the audience. To store a code snippet, I do the following:

    1. Open the final demo that you want to show to the audience in Visual Studio.
    2. In your code, select a snippet of code that you want to explain in particular.
    3. Make sure that the Visual Studio Toolbox is open (menu View, Toolbox or Ctrl-Alt-X).
    4. Drag the selected snippet from the code window to the Toolbox.
    5. (If needed) drag the snippet to the correct location (for example between two other code snippets, so that you can access it as you speak through the demo).
    6. Right-click on the snippet and select Rename Item from the context menu. Select a meaningful name. I use the following conventions: if it is a method, I use the method's name; if it is not a whole method, I use a descriptive name; if it is the content of a method (i.e. the body only, without the method's signature), I use "-> MethodName". This reminds me during the presentation that this is only the body, and that I need to insert it into an existing signature. This is the case, for instance, when I use Visual Studio to automatically generate the members of an interface's implementation; then I only need to insert my snippet inside the generated method body.

    Saving the snippets. This is the most important part!! It has happened to me a few times that VS10 lost its settings. When that happens, the snippets are lost too! Yeah, that really sucks, especially (as happened once) about an hour before a talk... Stress and sweat follow; not good conditions in which to start a talk in front of an audience, believe me. Thankfully, saving snippets is really easy with the following steps:

    1. Select the menu Tools, Import and Export Settings.
    2. Select Export selected environment settings and press Next.
    3. Uncheck All Settings, then expand General Settings and select Toolbox (only!). Press Next.
    4. Select your source control folder and save under a meaningful name (for instance Snippets.vssettings).
    5. Commit to source control and push to the cloud.

    By the way, this also has the advantage of applying source control to the snippets file (which is an XML file), so you get history for free on that file!

    Reimporting the snippets. If VS loses its settings and you need to reimport the snippets, this can be done super easily and very fast. Make sure that the Toolbox is empty: when you import snippets, they are merged with existing ones; they do not replace the content of the Toolbox. Unless merging is really what you want, make sure that your Toolbox is clean before you import; it is really easier.

    1. Select the menu Tools, Import and Export Settings.
    2. Select Import selected environment settings and press Next.
    3. Select No, just import new settings and press Next.
    4. Press Browse, select the Snippets.vssettings file and press Finish.

    Et voila, all your snippets appear again in the Toolbox. Whew, the worst was averted and you can start your demo without sweating! (I had to do that once literally 5 minutes before the start of a demo, while my laptop was already hooked to the projector, and it went just fine.)

    What about special tools? When using special tools (for example beta versions of tools you have early access to), or a special configuration of your laptop, things can get tricky, because you cannot really be sure that you will get a laptop with the same tools and the same configuration at the conference. To solve that, I use the following precautions: I make my demos from a Virtual Hard Disk. The great John Papa made a very easy-to-follow web page where he explains how to create a VHD and install Win7 on it. This gives you the full power of your laptop (as fast as booting from the metal). For me, I have a basic configuration that I saved on a USB hard drive (Win7 plus drivers, basic settings for desktop, folder options, taskbar, etc.) and Visual Studio 2010 SP1 on it. When preparing, I start by copying this "basis VHD" to my laptop, I install additional tools and configurations, and I save the VHD back to the USB hard drive in a different folder. This would allow me to reinstall my demo environment quite fast, for example in case of hard drive failure: replace the hard drive, copy the VHD to it, configure the BCD, and you can start. Unfortunately this only works if the laptop itself still works. In the worst case of total failure, my safety net is to back up all the installers: the installers I use are synced on all my laptops and backed up to BackBlaze. If the worst happens and my laptop is absolutely broken, I can download the installers from BackBlaze and install on another laptop. This of course takes some time, and if that happens 5 minutes before a presentation, well... I don't have an answer to that, except of course crossing my fingers. Still, all that gives me additional security.

    Conclusion. Remember folks, talking to an audience, large or small, will make you nervous. Just ask Scott Hanselman :) The goal here is to create the best possible conditions for you, and to create an environment where everything is saved and easy to restore, where everything is well known and provides you with additional confidence. The cooler you feel before the presentation (and during ;)), the better your presentation will be. Here too, the goal is to provide the best user experience you can, which in turn will make it more enjoyable for your audience! Happy presenting :)

    Laurent Bugnion (GalaSoft)

    Read the article

  • Toughest Developer Puzzle Ever

    - by Josh Holmes
    For the second year in a row, my friend and colleague Jeff Blankenburg has created what is quickly proving to live up to its name - the Toughest Developer Puzzle Ever. Some of the puzzles are technical, some are not, but all require that you understand the web, development, and technology to solve them. Even if you don't get in on the fantastic prizes that Jeff has lined up, there are great bragging rights in being able to solve the Toughest Developer Puzzle Ever. This year, I was honored enough to get to create three of the puzzles myself - let me know what you think of them. I'm not going to tell you which ones I created now, and definitely don't ask me for hints - Jeff has threatened me if I give any of the puzzles away... ;) All I can say now is "Good luck!"

    Read the article

  • Solving Euler Project Problem Number 1 with Microsoft Axum

    - by Jeff Ferguson
    Note: The code below applies to version 0.3 of Microsoft Axum. If you are not using this version of Axum, then your code may differ from that shown here. I have just solved Problem 1 of Project Euler using Microsoft Axum. The problem statement is as follows: "If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000." My Axum-based solution is as follows:

        namespace EulerProjectProblem1
        {
            // http://projecteuler.net/index.php?section=problems&id=1
            //
            // If we list all the natural numbers below 10 that are multiples of 3 or 5,
            // we get 3, 5, 6 and 9. The sum of these multiples is 23.
            // Find the sum of all the multiples of 3 or 5 below 1000.

            channel SumOfMultiples
            {
                input int Multiple1;
                input int Multiple2;
                input int UpperBound;
                output int Sum;
            }

            agent SumOfMultiplesAgent : channel SumOfMultiples
            {
                public SumOfMultiplesAgent()
                {
                    int Multiple1 = receive(PrimaryChannel::Multiple1);
                    int Multiple2 = receive(PrimaryChannel::Multiple2);
                    int UpperBound = receive(PrimaryChannel::UpperBound);
                    int Sum = 0;

                    for (int Index = 1; Index < UpperBound; Index++)
                    {
                        if ((Index % Multiple1 == 0) || (Index % Multiple2 == 0))
                            Sum += Index;
                    }

                    PrimaryChannel::Sum <-- Sum;
                }
            }

            agent MainAgent : channel Microsoft.Axum.Application
            {
                public MainAgent()
                {
                    var SumOfMultiples = SumOfMultiplesAgent.CreateInNewDomain();

                    SumOfMultiples::Multiple1 <-- 3;
                    SumOfMultiples::Multiple2 <-- 5;
                    SumOfMultiples::UpperBound <-- 1000;

                    var Sum = receive(SumOfMultiples::Sum);

                    System.Console.WriteLine(Sum);
                    System.Console.ReadLine();

                    PrimaryChannel::ExitCode <-- 0;
                }
            }
        }

    Let's take a look at the various parts of the code. I begin by setting up a channel called SumOfMultiples that accepts three inputs and one output. The first two of the three inputs represent the two possible multiples, which are three and five in this case. The third input represents the upper bound of the problem scope, which is 1000 in this case. The lone output of the channel represents the sum of all of the matching multiples:

        channel SumOfMultiples
        {
            input int Multiple1;
            input int Multiple2;
            input int UpperBound;
            output int Sum;
        }

    I then set up an agent that uses the channel. The agent, called SumOfMultiplesAgent, receives the three inputs from the channel sent to the agent, stores the results in local variables, and runs the for loop that iterates from 1 to the received upper bound. The agent keeps track of the sum in a local variable and stores the sum in the output portion of the channel:

        agent SumOfMultiplesAgent : channel SumOfMultiples
        {
            public SumOfMultiplesAgent()
            {
                int Multiple1 = receive(PrimaryChannel::Multiple1);
                int Multiple2 = receive(PrimaryChannel::Multiple2);
                int UpperBound = receive(PrimaryChannel::UpperBound);
                int Sum = 0;

                for (int Index = 1; Index < UpperBound; Index++)
                {
                    if ((Index % Multiple1 == 0) || (Index % Multiple2 == 0))
                        Sum += Index;
                }

                PrimaryChannel::Sum <-- Sum;
            }
        }

    The application's main agent, therefore, simply creates a new SumOfMultiplesAgent in a new domain, prepares the channel with the inputs that we need, and then receives the Sum from the output portion of the channel:

        agent MainAgent : channel Microsoft.Axum.Application
        {
            public MainAgent()
            {
                var SumOfMultiples = SumOfMultiplesAgent.CreateInNewDomain();

                SumOfMultiples::Multiple1 <-- 3;
                SumOfMultiples::Multiple2 <-- 5;
                SumOfMultiples::UpperBound <-- 1000;

                var Sum = receive(SumOfMultiples::Sum);

                System.Console.WriteLine(Sum);
                System.Console.ReadLine();

                PrimaryChannel::ExitCode <-- 0;
            }
        }

    The result of the calculation (which, by the way, is 233,168) is sent to the console using good ol' Console.WriteLine().
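    As a quick cross-check (my addition, not part of the original post), the same computation is a one-liner in plain C# with LINQ, which makes it easy to verify the agent's answer:

        using System;
        using System.Linq;

        class Euler1
        {
            static void Main()
            {
                // Sum of the natural numbers below 1000 that are multiples of 3 or 5.
                int sum = Enumerable.Range(1, 999)
                                    .Where(i => i % 3 == 0 || i % 5 == 0)
                                    .Sum();

                Console.WriteLine(sum); // 233168, matching the Axum solution
            }
        }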

    Read the article

  • FTP Upload ftpWebRequest Proxy

    - by Rodney Vinyard
    Searchable: FTP Upload ftpWebRequest Proxy; "FTP command is not supported when using HTTP proxy"

    In the article below I will cover 2 topics:

    1. C# & Windows Command-Line FTP Upload with No Proxy Server
    2. C# & Windows Command-Line FTP Upload with Proxy Server

    Not covered here: Secure FTP / SFTP

    Sample attributes: UploadFilePath = "\\servername\folder\file.name"; Proxy Server = "ftp://proxy.server/"; FTP Target Server = ftp.target.com; FTP User = "User"; FTP Password = "Password".

    With No Proxy Server

    Windows Command-Line:

        > ftp ftp.target.com
        User: User
        Password: Password
        ftp> put \\servername\folder\file.name
        ftp> dir          (result: file.name listed)
        ftp> del file.name
        ftp> dir          (result: file.name deleted)
        ftp> quit

    C#:

        //-----------------
        // Start FTP upload (no proxy)
        //-----------------
        string relPath = Path.GetFileName(@"\\servername\folder\file.name");
        // result: relPath = "file.name"

        FtpWebRequest ftpWebRequest =
            (FtpWebRequest)WebRequest.Create("ftp://ftp.target.com/" + relPath);
        ftpWebRequest.Method = WebRequestMethods.Ftp.UploadFile;

        //-----------------
        // user - password
        //-----------------
        ftpWebRequest.Credentials = new NetworkCredential("User", "Password");

        //-----------------
        // set proxy = null!
        //-----------------
        ftpWebRequest.Proxy = null;

        //-----------------
        // Copy the contents of the file to the request stream.
        //-----------------
        StreamReader sourceStream = new StreamReader(@"\\servername\folder\file.name");
        byte[] fileContents = Encoding.UTF8.GetBytes(sourceStream.ReadToEnd());
        sourceStream.Close();
        ftpWebRequest.ContentLength = fileContents.Length;

        //-----------------
        // transfer the stream.
        //-----------------
        Stream requestStream = ftpWebRequest.GetRequestStream();
        requestStream.Write(fileContents, 0, fileContents.Length);
        requestStream.Close();

        //-----------------
        // Look at the response results
        //-----------------
        FtpWebResponse response = (FtpWebResponse)ftpWebRequest.GetResponse();
        Console.WriteLine("Upload File Complete, status {0}", response.StatusDescription);

    With Proxy Server

    Windows Command-Line (note the user@host form of the user name, which tells the FTP proxy which target server to connect to):

        > ftp proxy.server
        User: User@ftp.target.com
        Password: Password
        ftp> put \\servername\folder\file.name
        ftp> dir          (result: file.name listed)
        ftp> del file.name
        ftp> dir          (result: file.name deleted)
        ftp> quit

    C#:

        //-----------------
        // Start FTP upload via the FTP proxy
        //-----------------
        string relPath = Path.GetFileName(@"\\servername\folder\file.name");
        // result: relPath = "file.name"

        FtpWebRequest ftpWebRequest =
            (FtpWebRequest)WebRequest.Create("ftp://proxy.server/" + relPath);
        ftpWebRequest.Method = WebRequestMethods.Ftp.UploadFile;

        //-----------------
        // user - password (again in the user@host form for the FTP proxy)
        //-----------------
        ftpWebRequest.Credentials = new NetworkCredential("User@ftp.target.com", "Password");

        //-----------------
        // set proxy = null!
        //-----------------
        ftpWebRequest.Proxy = null;

        // The remainder (copying the file contents to the request stream and
        // reading the response) is identical to the no-proxy version above.

    Read the article

  • Converting Creole to HTML, PDF, DOCX, ..

    - by Marko Apfel
    Challenge: We documented a project on Github with the Wiki there. For most articles we used Creole as the markup language. Now we have to deliver a lot of the content to our client in a usual format like PDF or DOCX. So we need an automated way to extract all relevant content, merge it together, and convert the result to a new format.

    Problem: One of the most popular toolsets to convert between several formats is Pandoc. But unfortunately Pandoc does not support Creole (see the converting matrix).

    Approach: So we need an intermediate step: converting from Creole to a format Pandoc supports. Creole/c is a Creole to Html converter and does exactly what we need. After converting our Creole content to Html, we can use Pandoc for all the subsequent tasks.

    Solution: Getting the Creole stuff. First of all we need the Creole content on our local machines. This is easy: because the Github Wiki is itself a Git repository, we can clone it to our machine. In the working copy we now see all the files, and the suffix gives us the hint for the markup language.

    Converting and merging Creole content to Html. Because we would like all content from several Creole files in one HTML file, we have to convert and merge all the input files into one output file. Creole/c has an option (-b) to generate only the Html below a Html <body> tag, and this is the hook for us to start. We have to create the preluding Html tags (<html>, <head>, ..) manually, then we merge all needed Creole content into our output file, and last we add the closing tags. This can be done straightforwardly with a little bit of old DOS magic:

        REM === Generate the intro tags ===
        ECHO ^<html^> > %TMP%\output.html
        ECHO ^<head^> >> %TMP%\output.html
        ECHO ^<meta name="generator" content="creole/c"^> >> %TMP%\output.html
        ECHO ^</head^> >> %TMP%\output.html
        ECHO ^<body^> >> %TMP%\output.html

        REM === Mix in all interesting Creole stuff with creole/c ===
        .\Creole-C\bin\creole.exe -b .\..\datamodel+overview.creole >> %TMP%\output.html
        .\Creole-C\bin\creole.exe -b .\..\datamodel+domain+CvdCaptureMode.creole >> %TMP%\output.html
        .\Creole-C\bin\creole.exe -b .\..\datamodel+domain+CvdDamageReducingActivity.creole >> %TMP%\output.html
        .\Creole-C\bin\creole.exe -b .\..\datamodel+lookup+IncidentDamageCodes.creole >> %TMP%\output.html
        .\Creole-C\bin\creole.exe -b .\..\datamodel+table+Attachments.creole >> %TMP%\output.html
        .\Creole-C\bin\creole.exe -b .\..\datamodel+table+TrafficLights.creole >> %TMP%\output.html

        REM === Generate the outro tags ===
        ECHO ^</body^> >> %TMP%\output.html
        ECHO ^</html^> >> %TMP%\output.html

        REM === Convert the Html file to Docx with Pandoc ===
        .\Pandoc\bin\pandoc.exe -o .\Database-Schema.docx %TMP%\output.html

    Some explanation for this: The first ECHO call creates the file; the beginning <html> tag is sent via > to a temporary working file. All following calls append content to the existing file via >>. The tag characters < and > must be escaped, which is done with the caret sign (^). We use a file in the default temporary folder (%TMP%) to avoid writing to our current folders (better for continuous integration). Both toolsets (Creole/c and Pandoc) are copied to a versioned tools folder in the Wiki. This is committable and no problem after pushing; Github does not do anything with it. In this folder is also the batch file (Export-Docx.bat) for all the steps. Pandoc recognizes the conversion by the suffixes of the file names, so it is enough to specify only the input and output files.
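    If you would rather drive the same pipeline from C# instead of a batch file, a rough sketch could look like the following. This is my own illustration, not part of the original post; the page list and tool paths are assumptions carried over from the batch file above:

        using System;
        using System.Diagnostics;
        using System.IO;

        class CreoleExport
        {
            // Assumed inputs; in the batch file these are the Wiki's .creole pages.
            static readonly string[] Pages =
            {
                @"..\datamodel+overview.creole",
                @"..\datamodel+table+Attachments.creole",
            };

            static void Main()
            {
                string html = Path.Combine(Path.GetTempPath(), "output.html");

                using (var writer = new StreamWriter(html))
                {
                    // Intro tags, exactly as the ECHO lines build them.
                    writer.WriteLine("<html><head><meta name=\"generator\" content=\"creole/c\"></head><body>");

                    foreach (string page in Pages)
                    {
                        // creole.exe -b emits only the HTML body fragment for one page.
                        var psi = new ProcessStartInfo(@".\Creole-C\bin\creole.exe", "-b " + page)
                        {
                            RedirectStandardOutput = true,
                            UseShellExecute = false
                        };
                        using (var proc = Process.Start(psi))
                        {
                            writer.Write(proc.StandardOutput.ReadToEnd());
                            proc.WaitForExit();
                        }
                    }

                    // Outro tags.
                    writer.WriteLine("</body></html>");
                }

                // Pandoc infers the formats from the file suffixes, as noted above.
                Process.Start(@".\Pandoc\bin\pandoc.exe",
                    "-o Database-Schema.docx " + html).WaitForExit();
            }
        }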

    Read the article

  • A Good .NET Quiz ...

    - by lavanyadeepak
    Whilst casually surfing this afternoon, I came across a good .NET quiz on the website of an Indian software company. It is a general technology quiz rather than just a .NET quiz, because it also contains options for the following technologies: .NET, PHP, Javascript, MySql, Testing, QTP, English, Aptitude, General Knowledge, Science, HTML, CSS, or All. It also has the following levels for the quiz: Easy, Medium, Hard, or All. The URL of the quiz is: http://www.qualitypointtech.net/quiz/listing_sub_level.php

    Read the article

  • New Horizons now less than 6 Au from Pluto

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/06/22/new-horizons-now-less-than-6-au-from-pluto.aspx
    The New Horizons spacecraft, as of 13:00 UTC yesterday, is now within 6 AU of its next target, Pluto. While this is still a long way from Pluto, it is the closest spacecraft to Pluto. Closest approach is now some 752 days away, on 14 July 2015. There are very interesting articles on the investigation work the New Horizons team has done to plan the path of New Horizons through the Plutonian system:
    http://pluto.jhuapl.edu/overview/piPerspective.php
    http://pluto.jhuapl.edu/news_center/news/20130614.php
    Well done, New Horizons team!

    Read the article

  • How to recreate spfile on Exadata?

    - by Bandari Huang
    1. Copy the spfile from the ASM diskgroup to local disk using the ASMCMD command-line tool:

        ASMCMD> pwd
        +DATA_DM01/EDWBASE
        ASMCMD> ls -l
        Type  Redund  Striped  Time  Sys  Name
                                     Y    CONTROLFILE/
                                     Y    DATAFILE/
                                     Y    ONLINELOG/
                                     Y    PARAMETERFILE/
                                     Y    TEMPFILE/
                                     N    spfileedwbase.ora => +DATA_DM01/EDWBASE/PARAMETERFILE/spfile.355.800017117
        ASMCMD> cp +DATA_DM01/EDWBASE/spfileedwbase.ora /home/oracle/spfileedwbase.ora.bak

    2. Copy the contents of spfileedwbase.ora.bak into initedwbase.ora, excluding any garbled characters.

    3. Using the initedwbase.ora above, start one of the RAC instances to the mount phase:

        SQL> startup mount pfile=/home/oracle/initedwbase.ora

    4. Ensure one of the database instances is mounted before attempting to recreate the spfile:

        SQL> select INSTANCE_NAME, HOST_NAME, STATUS from v$instance;

        INSTANCE_NAME HOST_NAME  STATUS
        ------------- ---------  ------
        edwbase1      dm01db01   MOUNTED

    5. Create the new spfile:

        SQL> create spfile='+DATA_DM01/EDWBASE/spfileedwbase.ora' from pfile='/home/oracle/initedwbase.ora';

    ASMCMD will show that a new spfile has been created, as the alias spfileedwbase.ora now points to a new spfile under the PARAMETERFILE directory in ASM:

        ASMCMD> pwd
        +DATA_DM01/EDWBASE
        ASMCMD> ls -l
        ...
        N    spfileedwbase.ora => +DATA_DM01/EDWBASE/PARAMETERFILE/spfile.356.800013581

    6. Shut down the instance and restart the database using srvctl, with the newly created spfile:

        SQL> shutdown immediate
        ORA-01109: database not open
        Database dismounted.
        ORACLE instance shut down.
        SQL> exit

        [oracle@dm01db01 ~]$ srvctl start database -d edwbase
        [oracle@dm01db01 ~]$ srvctl status database -d edwbase
        Instance edwbase1 is running on node dm01db01
        Instance edwbase2 is running on node dm01db02

    7. ASMCMD will now show a number of spfiles in the PARAMETERFILE directory for this database. The spfile containing the parameter preventing startups should be removed from ASM. In this case the file spfile.355.800017117 can be removed, because spfile.356.800013581 is the current spfile:

        ASMCMD> cd +DATA_DM01/EDWBASE/PARAMETERFILE
        ASMCMD> ls -l
        Type           Redund  Striped  Time             Sys  Name
        PARAMETERFILE  UNPROT  COARSE   FEB 19 08:00:00  Y    spfile.355.800017117
        PARAMETERFILE  UNPROT  COARSE   FEB 19 08:00:00  Y    spfile.356.800013581
        ASMCMD> rm spfile.355.800017117
        ASMCMD> ls
        spfile.356.800013581

    Reference: Recreating the Spfile for RAC Instances Where the Spfile is Stored in ASM [ID 554120.1]

    Read the article

  • Executing a Batch file from remote Ax client on an AOS server

    - by Anisha
    Is it possible to execute a batch file on an AOS server from a remote AX client? The answer is yes, provided you have the necessary permissions for this execution on the server. Please create a batch file on your AOS server, something like the following, for creating a directory on the server. Insert a command like this in a .BAT file (batch file) and place it anywhere on the server:

        mkdir "c:\test"

    Copy the following code into a server static method of your class and call this piece of code from a button click on an AX form. Please execute this button click from a remote AX client and see the result. This should execute the batch file on the server and create a directory called 'test' in the root directory of the server.

        server static void AOS_batch_file_create()
        {
            boolean b;
            System.Diagnostics.Process process;
            System.Diagnostics.ProcessStartInfo processStartInfo;
            ;
            b = Global::isRunningOnServer();
            infolog.add(0, int2str(b));

            new InteropPermission(InteropKind::ClrInterop).assert();

            process = new System.Diagnostics.Process();
            processStartInfo = new System.Diagnostics.ProcessStartInfo();
            processStartInfo.set_FileName("C:\\create_dir.bat"); // batch file path on the AOS server
            process.set_StartInfo(processStartInfo);
            process.Start();
            //process.Refresh();
            //process.Close();
            //process.WaitForExit();

            info("Finished");
        }

    Read the article

  • Recommended Approach to Secure your ADFdi Spreadsheets

    - by juan.ruiz
    ADF Desktop Integration leverages ADF Security to provide access to published spreadsheets within your application. In this article I discuss a good security practice for your existing spreadsheets, as well as for any new spreadsheets that you create. ADF Desktop Integration uses the adfdiRemoteServlet to process requests and send them back and forth between your client and the ADF Model, which lives in the Java EE container where our application is deployed. In other words, this is one of the entry points to the application server. Having said that, we need to make sure that container-based security is in place, to avoid vulnerabilities. So what is needed? For existing as well as new ADFdi applications, you need to create a security constraint for the ADFdi servlet in the web.xml file of your application. Fortunately, JDeveloper 11g provides a nice visual editor to do this: open the web.xml file and go to the Security category; add a new Web Resource Collection and give it a meaningful name, and under URL Pattern add /adfdiRemoteServlet; then click on the Authorization tab and make sure the valid-users role is selected for authorization. And voila! Your application is now more secure. (A sketch of the resulting web.xml entry is shown below.)
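    For reference, the security constraint the visual editor produces in web.xml is the standard Java EE deployment-descriptor element; a hand-written sketch of the expected result follows (the web-resource-name is arbitrary, and your file will also contain whatever login-config and security-role entries your application already uses):

        <security-constraint>
          <web-resource-collection>
            <web-resource-name>adfdiRemoteServlet</web-resource-name>
            <url-pattern>/adfdiRemoteServlet</url-pattern>
          </web-resource-collection>
          <auth-constraint>
            <role-name>valid-users</role-name>
          </auth-constraint>
        </security-constraint>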

    Read the article

  • Oracle Linux 6 Implementation Essentials Certification Exam Now Available

    - by Antoinette O'Sullivan
    Get proof of your Linux system administration skills by taking the Oracle Linux 6 Implementation Essentials certification exam. This certification is available to all candidates; Oracle Partner members earning this certification will be recognized as OPN Certified Specialists. The exam takes under 3 hours and asks you between 120 and 150 questions on areas including: Introduction to Oracle Linux; Installing Oracle Linux 6; the Linux Boot Process; Oracle Linux System Configuration and Process Management; Oracle Linux Package Management; Ksplice Zero Downtime Updates; Automating Tasks and System Logging; User and Group Administration; Oracle Linux File Systems and Storage Administration; Network Administration; and Oracle Linux System Monitoring and Troubleshooting. Oracle certifications are among the most sought-after badges of credibility for expertise in the Information Technology marketplace. See Benefits of Oracle Certification for more information. To prepare for this exam, you can take the Oracle Linux System Administration training.

    Read the article
