Search Results

Search found 1821 results on 73 pages for 'converted'.


  • Silverlight Cream for March 11, 2010 -- #812

    - by Dave Campbell
    In this Issue: Walter Ferrari, Viktor Larsson, Bill Reiss (-2-, -3-, -4-), Jonathan van de Veen, Walt Ritscher, Jobi Joy, Pete Brown, Mike Taulty, and Mark Miller.

    Shoutouts: Going to MIX10? John Papa announced Got Questions? Ask the Experts at MIX10. Pete Brown listed The Essential WPF/Silverlight/XNA Developer and Designer Toolbox.

    From SilverlightCream.com:
    - How to extend Bing Maps Silverlight with an elevation profile graph - Part 2: in this second and final tutorial, Walter Ferrari adds elevation to his previous Bing Maps post. I'm glad someone else worked this out for me :)
    - Navigating AWAY from your Silverlight page: Viktor Larsson has a post up on how to navigate to something other than your Silverlight page, like maybe a mailto...
    - SilverSprite: Not just for XNA games any more: Bill Reiss has a new version of SilverSprite up on CodePlex, and if you're planning on doing any game development you should check this out for sure.
    - Space Rocks game step 1: The game loop: Bill Reiss is beginning a tutorial series on game development... looks like a good thing to jump in on and play along. This first one is all about the game loop.
    - Space Rocks game step 2: Sprites (part 1): in part 2, Bill Reiss begins a series on sprites in game development and how to position them.
    - Space Rocks game step 3: Sprites (part 2): Bill Reiss's part 3 is a follow-on tutorial on sprites and moving them according to velocity... fun stuff :)
    - Adventures while building a Silverlight Enterprise application part No. 32: Jonathan van de Veen discusses debugging and the evil you can get yourself wrapped up in... his scenario is definitely one to remember.
    - Streaming Silverlight media from a Dropbox.com account: read the comments and the agreements, but I think Walt Ritscher's idea of using Dropbox to serve up streaming media is pretty cool!
    - UniformGrid for Silverlight: Jobi Joy wanted a UniformGrid like the one he's familiar with from WPF. Not finding one in the SDK or Toolkit, he converted the WPF one to Silverlight... all good for you and me :)
    - How to Get Started in WPF or Silverlight: A Learning Path for New Developers: Pete Brown has a nice post up describing resources, tutorials, blogs, and books for devs just getting into Silverlight or WPF. Thanks for the shoutout, Pete!
    - Silverlight 4, MEF and the DeploymentCatalog (again :-)): Mike Taulty is revisiting the DeploymentCatalog to wrap it up in a class, like he did the PackageCatalog previously.
    - MVVM with Prism 101 - Part 6b: Wrapping IClientChannel: Mark Miller is back with part 6b on MVVM with Prism, answering some questions from the previous post and stating his case against the client service proxy.

    Stay in the 'Light!

    Read the article

  • Site Studio Mobile Example - WCM Reuse

    - by john.brunswick
    Mobile internet usage is growing by leaps and bounds, and it is theorized that in the not-too-distant future it will eclipse traditional access via desktop browsers. Mary Meeker, a managing director at Morgan Stanley and head of their global technology research team, recently predicted in an Events@Google series presentation that mobile usage will eclipse desktop usage within the next 5 years. In order for organizations to reach their prospects, customers and business partners, they will need to make their content readily available on mobile devices. A few years ago it was fairly challenging to provide a special, separate site to cater to mobile users, using technologies like WML (Wireless Markup Language). Modern mobile browsers have made that approach unnecessary, and the focus has now moved toward providing a browsing experience that works well on small screen sizes and performs well.

    What does all of this mean for Oracle UCM? Taking site content from an existing Site Studio site and targeting it for consumption on mobile devices is a very straightforward process, aided by a number of native capabilities in the product. The example highlighted in this post takes advantage of the dynamic conversion capabilities in Oracle UCM to enable site content to be created and updated via MS Office documents. These documents are then converted to a simple, clean HTML format for consumption in both the desktop and mobile browsing experiences. To help better understand how this is possible, the example below shows a fictional .COM and its mobile site counterpart that both leverage the same underlying content. The scenario is not complete or production ready, but it highlights that a mobile experience may be best delivered by omitting portions of a site that would be present in the version served to desktop clients.

    If you have browsed CNet (news.com) on a mobile device, it becomes quickly apparent that they serve an optimized version for your mobile device. An iPhone-style version can be accessed at http://iphone.cnet.com/. To do that they leveraged work done for the iPhone iUi project developed by Joe Hewitt, which gives mobile browsers an experience similar to what users find in a native iPhone application. Our example uses parts of this framework (the CSS), and this approach provides a page that will degrade nicely over a wide range of mobile browsers, since it is comprised of lightweight HTML markup and CSS. The iPhone iUi framework also provides some nice JavaScript to enable animated transitions between pages, but for the widest range of mobile browser compatibility we will only incorporate the CSS and the HTML DIV / UL based page markup in our example.

    Read the article

  • Create Custom Sized Thumbnail Images with Simple Image Resizer [Cross-Platform]

    - by Asian Angel
    Are you looking for an easy way to create custom sized thumbnail images for use in blog posts, photo albums, and more? Whether it is a single image or a CD full, Simple Image Resizer is the right app to get the job done for you.

    To add the new PPA for Simple Image Resizer, open the Ubuntu Software Center, go to the Edit menu, and select Software Sources. Access the Other Software tab in the Software Sources window and add the first of the PPAs shown below (outlined in red). The second PPA will be added to your system automatically. Once you have the new PPAs set up, go back to the Ubuntu Software Center and click on the PPA listing for Rafael Sachetto on the left (highlighted with red in the image). The listing for Simple Image Resizer will be right at the top... click Install to add the program to your system. After the installation is complete you can find Simple Image Resizer listed as Sir in the Graphics sub-menu.

    When you open Simple Image Resizer you will need to browse for the directory containing the images you want to work with, select a destination folder, choose a target format and prefix, enter the desired pixel size for converted images, and set the quality level. Convert your image(s) when ready... Note: You will need to determine the image size that best suits your needs beforehand. For our example we chose to convert a single image. A quick check shows our new "thumbnailed" image looking very nice.

    Simple Image Resizer can convert "into and from" the following image formats: .jpeg, .png, .bmp, .gif, .xpm, .pgm, .pbm, and .ppm

    Command Line Installation
    Note: For older Ubuntu systems (9.04 and previous) see the link provided below.

        sudo add-apt-repository ppa:rsachetto/ppa
        sudo apt-get update && sudo apt-get install sir

    Links
    Note: Simple Image Resizer is available for Ubuntu, Slackware Linux, and Windows.
    Simple Image Resizer PPA at Launchpad
    Simple Image Resizer Homepage
    Command Line Installation for Older Ubuntu Systems

    Bonus
    The anime wallpaper shown in the screenshots above can be found here: The end where it begins [DesktopNexus]

    Read the article

  • clear explanation sought: throw() and stack unwinding

    - by Jerry Gagelman
    I'm not a programmer but have learned a lot watching others. I am writing wrapper classes to simplify things with a really technical API that I'm working with. Its routines return error codes, and I have a function that converts those to strings:

        static const char* LibErrString(int errno);

    For uniformity I decided to have members of my classes throw an exception when an error is encountered. I created a class:

        struct MyExcept : public std::exception {
            const char* errstr_;
            const char* what() const throw() { return errstr_; }
            MyExcept(const char* errstr) : errstr_(errstr) {}
        };

    Then, in one of my classes:

        class Foo {
        public:
            void bar() {
                int err = SomeAPIRoutine(...);
                if (err != SUCCESS)
                    throw MyExcept(LibErrString(err));
                // otherwise...
            }
        };

    The whole thing works perfectly: if SomeAPIRoutine returns an error, a try-catch block around the call to Foo::bar catches a standard exception with the correct error string in what(). Then I wanted the member to give more information:

        void Foo::bar() {
            char adieu[128];
            int err = SomeAPIRoutine(...);
            if (err != SUCCESS) {
                std::strcpy(adieu, "In Foo::bar... ");
                std::strcat(adieu, LibErrString(err));
                throw MyExcept((const char*)adieu);
            }
            // otherwise...
        }

    However, when SomeAPIRoutine returns an error, the what() string returned by the exception contains only garbage. It occurred to me that the problem could be due to adieu going out of scope once the throw is called. I changed the code by moving adieu out of the member definition and making it an attribute of the class Foo. After this, the whole thing worked perfectly: a try-catch block around a call to Foo::bar that catches an exception has the correct (expanded) string in what().

    Finally, my question: what exactly is popped off the stack (in sequence) when the exception is thrown in the if-block and the stack "unwinds"? As I mentioned above, I'm a mathematician, not a programmer. I could use a really lucid explanation of what goes onto the stack (in sequence) when this C++ gets converted into running machine code.

    Read the article

  • How to pad number with leading zero with C#

    - by Jalpesh P. Vadgama
    Recently I was working on a project where I needed to format a number in a way that applies leading zeros up to a particular width. After some R&D I found a clean way to apply this leading-zero format. I needed to pad numbers to 5 digits, so the following table shows the output I was after:

        1     -> 00001
        20    -> 00020
        300   -> 00300
        4000  -> 04000
        50000 -> 50000

    So in the above example you can see that 1 becomes 00001, 20 becomes 00020, and so on. To display an integer value in this decimal format I applied the integer.ToString(String) method, passing "Dn" as the value of the format parameter, where n represents the minimum length of the string. So if we pass 5, the value will be padded up to 5 digits. Let's create a simple console application and see how it works. Following is the code for that:

        using System;

        namespace LeadingZero
        {
            class Program
            {
                static void Main(string[] args)
                {
                    int a = 1;
                    int b = 20;
                    int c = 300;
                    int d = 4000;
                    int e = 50000;
                    Console.WriteLine(string.Format("{0}------>{1}", a, a.ToString("D5")));
                    Console.WriteLine(string.Format("{0}------>{1}", b, b.ToString("D5")));
                    Console.WriteLine(string.Format("{0}------>{1}", c, c.ToString("D5")));
                    Console.WriteLine(string.Format("{0}------>{1}", d, d.ToString("D5")));
                    Console.WriteLine(string.Format("{0}------>{1}", e, e.ToString("D5")));
                    Console.ReadKey();
                }
            }
        }

    As you can see in the above code, I use the string.Format function to display the value of each integer alongside the result of its ToString("D5") call. Now let's run the console application; the output is exactly as expected. Here you can see the integer numbers converted into exactly the output we require. That's it - you can see it's very easy. We have written the code in a nice clean way, without any extra code or loops. Hope you liked it. Stay tuned for more... Till then, happy programming.
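    As a side note (my addition, standard .NET rather than anything from the original post), the same padding can also be written with an inline composite format string or with String.PadLeft:

        int value = 42;

        // The ":D5" inside a composite format string applies the same format inline.
        Console.WriteLine(string.Format("{0}------>{0:D5}", value)); // 42------>00042

        // PadLeft pads the already-converted string with a chosen character.
        Console.WriteLine(value.ToString().PadLeft(5, '0'));         // 00042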

    Read the article

  • How one decision can turn web services to hell

    - by DigiMortal
    In this posting I will show you how one stupid decision can turn developers' lives into hell. There is a project where a bunch of complex applications exchange data frequently, and it is very hard to change anything without additional expense. Well, one analyst thought that strings are the silver bullet of web services. Read what happened.

    Bad bad mistake
    In the early stages of the integration project there was an analyst who also established the architecture and technical design for the web services. There was one very bad mistake this analyst made: All data must be converted to strings before exchange! Yes, that's correct, this was the requirement. All integers, decimals and dates come in and go out as strings. There was also an explanation for this requirement: This way we can avoid data type conversion errors! Well, this guy works somewhere else already, and I hope he works in some burger restaurant - far away from computers.

    Consequences
    If you first look at this requirement it may seem like a little annoying piece of crap you can easily survive. But let's see the real consequences one stupid decision can cause:
    - a hell of a load of data conversions must be done by the receiving applications and SSIS packages,
    - the SSIS packages are not error-proof and depend heavily on the strings they get from the different services,
    - there is more than one format per type in use across the different services,
    - for larger amounts of data, all these conversion tasks slow down the work of the integration packages,
    - practically all developers have been in a hurry with some SSIS import tasks, and some fields that are not used in calculations in the SSAS cube are imported without data conversions (for example, some prices are strings in the format "1.021 $").

    The most painful problem for developers is the data conversion part, because they don't expect such a stupid requirement to be in place and therefore cannot estimate the time their tasks will take on these web services. Developers must also be prepared for cases when some service suddenly sends data that is not in an acceptable format, and they must solve the problems ASAP. This puts an unexpected load on developers, and they are not very happy with it because they can't understand why they have to live with this horror if it is possible to fix.

    What to do if you see something like this? Well, explain the problem to the customer and demand special tasks in the project schedule to get this mess solved before going on with new development. It is cheaper to solve the problems now than later.
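    To see the kind of fragility described above in code, here is a small hypothetical C# sketch (the price formats are invented for illustration): the same stringly-typed price field parses correctly, parses only with the right culture, or fails outright, depending on which format a service happens to send.

        using System;
        using System.Globalization;

        class StringlyTypedPrices
        {
            static void Main()
            {
                // Three "price" strings, as three hypothetical services might send them.
                string priceA = "1021.50";  // plain invariant decimal
                string priceB = "1.021,50"; // the same value in European formatting
                string priceC = "1.021 $";  // a value with a currency suffix glued on

                // Parsing with the invariant culture only works for the first format.
                Console.WriteLine(decimal.Parse(priceA, CultureInfo.InvariantCulture)); // 1021.50

                // The European format needs a matching culture, otherwise it throws or is misread.
                Console.WriteLine(decimal.Parse(priceB, new CultureInfo("de-DE")));     // 1021.50

                // The suffixed format fails outright unless it is cleaned up first.
                decimal value;
                if (!decimal.TryParse(priceC, NumberStyles.Any, CultureInfo.InvariantCulture, out value))
                    Console.WriteLine("Could not parse '" + priceC + "'"); // this branch runs
            }
        }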

    Read the article

  • Back-sliding into Unmanaged Code

    - by Laila
    It is difficult to write about Microsoft's ambivalence to .NET without mentioning clichés about dog food. In case you've been away a long time, you'll remember that Microsoft surprised everyone with the speed and energy with which it introduced and evangelised the .NET Framework for managed code. There was good reason for this. Once it became obvious to all that it had sleepwalked into third place as a provider of development languages, behind Borland and Sun, it reacted quickly to attract the best talent in the industry to produce a Windows version of the Java runtime, with bounds checking, automatic garbage collection, structured exception handling and common data types. To develop applications for this managed runtime, it produced several excellent languages, and more are being provided. The only thing Microsoft ever got wrong was to give it a stupid name.

    The logical step for Microsoft would be to base the entire operating system on the .NET Framework, and to re-engineer its own applications. In 2002, Bill Gates, then Microsoft Chairman and Chief Software Architect, said about their plans for .NET: "This is a long-term approach. These things don't happen overnight." Now, eight years later, we're still waiting for signs of the 'long-term approach'. Microsoft's vision of an entirely managed operating system has subsided since the Vista fiasco, but stays alive yet dormant as Midori, still being developed by Microsoft Research. This is an Internet-centric fork of the Singularity operating system, a research project started in 2003 to build a highly dependable operating system in which the kernel, device drivers, and applications are all written in managed code. Midori is predicated on the prevalence of connected systems, with provisions for distributed concurrency where application components exist 'in the cloud', and supports a programming model that can tolerate cancellation, intermittent connectivity and latency. It features an entirely new security model that sandboxes applications for increased security.

    So has Microsoft converted its existing applications to the .NET Framework? It seems not. What Windows applications can run on Mono? Very few, it seems. We all thought that .NET spelt the end of DLL Hell and the need for COM interop, but it looks as if Bill Gates' idea of 'not overnight' might stretch to a decade or more. The operating system has shown only minimal signs of migrating to .NET. Even where the use of .NET has come to dominate, as for server applications with IIS, IIS itself is still entirely developed in unmanaged code. This is an irritation to Microsoft's greatest supporters, who committed themselves fully to the .NET Framework, only to find parts of the ambivalent Microsoft empire quietly backsliding into unmanaged code and the awful C++. It is a strategic mistake that the invigorated Apple didn't make with the Mac OS X architecture.

    Cheers,
    Laila

    Read the article

  • const vs. readonly for a singleton

    - by GlenH7
    First off, I understand there are folk who oppose the use of singletons. I think it's an appropriate use in this case as it's constant state information, but I'm open to differing opinions / solutions. (See The singleton pattern and When should the singleton pattern not be used?) Second, for a broader audience: C++/CLI has a keyword similar to readonly with initonly, so this isn't strictly a C# question. (Literal field versus constant variable in C++/CLI) Sidenote: a discussion of some of the nuances of using const or readonly.

    My Question: I have a singleton that anchors together some different data structures. Part of what I expose through that singleton are some lists and other objects, which represent the necessary keys or columns in order to connect the linked data structures. I doubt that anyone would try to change these objects through a different module, but I want to explicitly protect them from that risk. So I'm currently using a "readonly" modifier on those objects. I'm using readonly instead of const with the lists, as I read that using const will embed those items in the referencing assemblies and will therefore trigger a rebuild of those referencing assemblies if / when the list(s) is / are modified. This seems like tighter coupling than I would want between the modules, but I wonder if I'm obsessing over a moot point. (This is question #2 below)

    The alternative I see to using "readonly" is to make the variables private and then wrap them with a public get. I'm struggling to see the advantage of this approach, as it seems like wrapper code that doesn't provide much additional benefit. (This is question #1 below) It's highly unlikely that we'll change the contents or format of the lists - they're a compilation of things to avoid using magic strings all over the place. Unfortunately, not all the code has been converted over to using this singleton's presentation of those strings. Likewise, I don't know that we'd change the containers / classes for the lists. So while I normally argue for the encapsulation advantages a get wrapper provides, I'm just not feeling it in this case.

    A representative sample of my singleton:

        public sealed class mySingl
        {
            private static volatile mySingl sngl;
            private static object lockObject = new Object();

            public readonly Dictionary<string, string> myDict = new Dictionary<string, string>()
            {
                {"I", "index"},
                {"D", "display"},
            };

            public enum parms
            {
                ABC = 10,
                DEF = 20,
                FGH = 30
            };

            public readonly List<parms> specParms = new List<parms>()
            {
                parms.ABC,
                parms.FGH
            };

            public static mySingl Instance
            {
                get
                {
                    if (sngl == null)
                    {
                        lock (lockObject)
                        {
                            if (sngl == null)
                                sngl = new mySingl();
                        }
                    }
                    return sngl;
                }
            }

            private mySingl()
            {
                doSomething();
            }
        }

    Questions:
    1. Am I taking the most reasonable approach in this case?
    2. Should I be worrying about const vs. readonly?
    3. Is there a better way of providing this information?
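    A side note on the readonly approach above (my addition, not part of the original question): readonly only prevents the field itself from being reassigned; callers can still mutate the Dictionary and List contents. A minimal sketch of how the private-field-plus-getter alternative can make the contents read-only as well, assuming .NET 4.5+ where ReadOnlyDictionary is available (shown for the dictionary case; singleton plumbing omitted):

        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        public sealed class mySinglReadOnly
        {
            // The mutable backing store stays private to the class.
            private readonly Dictionary<string, string> dict =
                new Dictionary<string, string> { { "I", "index" }, { "D", "display" } };

            // A cached read-only view: any write attempt through it throws NotSupportedException.
            public readonly ReadOnlyDictionary<string, string> myDict;

            private mySinglReadOnly()
            {
                myDict = new ReadOnlyDictionary<string, string>(dict);
            }
        }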

    Read the article

  • Oracle Joins XBRL US To Help Drive Adoption

    - by Theresa Hickman
    Recently, Oracle joined XBRL US, the national consortium for XML business reporting standards, to stay ahead of the technology and help increase XBRL adoption by U.S. companies by 2011. Large accelerated filers were mandated to use XBRL starting in 2009; other large filers started in 2010, and all other public companies must comply in June 2011. Here is a list of other organizations that recently joined XBRL US:
    - Oracle
    - Citi
    - Federal Filings LLC
    - Edgar Agents LLC
    - XSP

    For those of you who have been living under a rock, XBRL stands for eXtensible Business Reporting Language. Simply put, it's reporting electronically. Just like PDFs or spreadsheets are a type of output, XBRL is another output option in electronic form. Right now, the transition to XBRL means extra work for publicly traded companies because they need to file their financial statements in both EDGAR and XBRL formats. Once the SEC phases out the EDGAR system, XBRL will be the primary way to deliver financial information with footnotes and supporting schedules to multiple audiences without having to re-key or reformat the information. A single XBRL document can be converted to printed output, published via the Web, fed into an SEC database (e.g. EDGAR) or forwarded to a creditor for analysis.

    Question: How does Oracle support XBRL reporting?
    Answer: The latest XBRL 2.1 specifications are supported by Oracle Hyperion Disclosure Management, which is part of Oracle's Hyperion Financial Close Suite along with Hyperion Financial Management, Hyperion Financial Data Quality Management and Hyperion Financial Close Management. Hyperion Disclosure Management supports the authoring of financial filings in Microsoft Office, with "hot links" to reports and data stored in Hyperion Financial Management or Oracle Essbase. It supports the XBRL tagging of financial statements as well as the disclosures and footnotes within your 10-K and 10-Q filings. Because many of our customers use Hyperion Financial Management (HFM) for their consolidation needs, they simply generate XBRL statements from their consolidated financial results.

    Question: What if you don't use Hyperion Financial Management, and you only use E-Business Suite General Ledger or PeopleSoft General Ledger?
    Answer: No problem, all you need is Hyperion Disclosure Management to generate XBRL from your general ledger. Here are the steps:
    1. Upload the XBRL taxonomy from the SEC or XBRL website into Hyperion Disclosure Management.
    2. Publish your financial statements out of the general ledger to Excel.
    3. Perform the XBRL tag mapping from the Excel output to Hyperion Disclosure Management.

    For more information and some interesting background on XBRL, I recommend reading What You Need To Know About XBRL, written by our EPM expert, John O'Rourke.

    Read the article

  • Multiple render targets and gamma correctness in Direct3D9

    - by Mario
    Let's say in a deferred renderer, when building your G-Buffer, you're going to render texture color, normals, depth and whatever else to your multiple render targets at once. Now if you want to have a gamma-correct rendering pipeline and you use regular sRGB textures as well as rendertargets, you'll need to apply some conversions along the way, because your filtering, sampling and calculations should happen in linear space, not sRGB space. Of course, you could store linear color in your textures and rendertargets, but this might very well introduce precision and banding issues.

    Reading from sRGB textures is easy: just set SRGBTexture = true; on your texture sampler in your HLSL effect code, and the hardware does the sRGB-to-linear conversion for you. Writing to an sRGB rendertarget is theoretically easy too: just set SRGBWriteEnable = true; in your effect pass in HLSL, and your linear colors will be converted to sRGB space automatically.

    But how does this work with multiple rendertargets? I only want to apply these corrections to the color textures and rendertarget, not to the normals, depth, specularity or whatever else I'll be rendering to my G-Buffer. Ok, so I just don't apply SRGBTexture = true; to my non-color textures, but with SRGBWriteEnable = true; I'll be gamma-correcting all the values I write out to my rendertargets, no matter what I actually store there. I found some info on gamma over at Microsoft (http://msdn.microsoft.com/en-us/library/windows/desktop/bb173460%28v=vs.85%29.aspx):

        "For hardware that supports Multiple Render Targets (Direct3D 9) or Multiple-element Textures (Direct3D 9), only the first render target or element is written."

    If I understand correctly, SRGBWriteEnable should only be applied to the first rendertarget, but according to my tests it isn't - it is applied to all rendertargets instead. Now the only alternative seems to be to handle these corrections manually in my shader and only correct the actual color output, but I'm not totally sure that this won't have any negative impact on color correctness. E.g. what if the GPU does any blending or filtering or multisampling after the linear-to-sRGB conversion...

    Do I even need gamma correction in this case, if I'm just writing texture color without lighting to my rendertarget? As far as I know I DO need it, because otherwise the texture filtering and mip sampling happen in sRGB space, if I don't correct for it. Anyway, it'd be interesting to hear other people's solutions or thoughts about this.
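    For reference, the manual correction mentioned above boils down to the standard sRGB transfer functions; here is a small C# sketch of that math (my addition - in practice the shader would apply the same formulas per channel, and where exactly to apply them in the pipeline is the open question of this post):

        using System;

        static class Srgb
        {
            // Decode an sRGB-encoded channel value (0..1) into linear light.
            public static double ToLinear(double c)
            {
                return c <= 0.04045 ? c / 12.92 : Math.Pow((c + 0.055) / 1.055, 2.4);
            }

            // Encode a linear channel value (0..1) back to sRGB for storage/display.
            public static double FromLinear(double c)
            {
                return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.Pow(c, 1.0 / 2.4) - 0.055;
            }
        }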

    Read the article

  • SQL SERVER – Reseting Identity Values for All Tables

    - by pinaldave
    Sometimes an email requesting help generates more questions than it does motivation to answer them. Let us go over one such example. I have converted the complete email conversation to chat format for easy consumption. I almost got a headache after around 20 email exchanges. I am sure that if you read it, you will feel my pain.

    DBA: "I deleted all of the data from my database and now it contains the table structure only. However, when I tried to insert new data in my tables I noticed that my identity values start from the same number where they actually were before I deleted the data."
    Pinal: "How did you delete the data?"
    DBA: "Running DELETE in a loop."
    Pinal: "What was the need for that?"
    DBA: "It was my development server and I needed to repopulate the database."
    Pinal: "Oh, so why did you not use TRUNCATE, which would have reset the identity of your table to the original value when the data got deleted? This will work only if you want your database to reset to the original value. If you want to set any other value this may not work."
    DBA: (silence for 2 days)
    DBA: "I did not realize it. Meanwhile I regenerated every table's schema and dropped the table and re-created it."
    Pinal: "Oh no, that would be an extremely long and incorrect way. Very bad solution."
    DBA: "I understand. Should I just take a backup of the database before I insert the data, and when I need to, use the original backup to restore the database? This way I will have identity beginning with 1."
    Pinal: "This is going totally downhill. It is wrong to do so on multiple levels. Did you even read my earlier email about TRUNCATE?"
    DBA: "Yeah. I found it in my spam folder."
    Pinal: (I decided to stay silent)
    DBA: (After 2 days) "Can you provide me a script to reseed the identity for all of my tables to the value 1, without asking further questions?"
    Pinal:

        USE DATABASE; -- replace DATABASE with the name of your database
        -- Reseed the identity of every table that has one.
        EXEC sp_MSForEachTable '
            IF OBJECTPROPERTY(object_id(''?''), ''TableHasIdentity'') = 1
                DBCC CHECKIDENT (''?'', RESEED, 1)'
        GO

    Our conversation ended here. If you jumped directly to this statement, I encourage you to read the conversation one time. There is a difference between reseeding an identity value to 1 and reseeding it to its original value - I will write another blog post on this subject in the future.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • SQL SERVER – Free eBook Download – EPUB, MOBI, PDF Format

    - by pinaldave
    Microsoft has recently released free eBooks on various Microsoft technologies. The best part is that all these books are available in ePub, Mobi and PDF. You can download them to your local machine or eBook reader and read them. This is a great start, as many important subjects are now covered and converted into eBooks. I personally read through a few of the books and found them very comprehensive and detailed. The goal is not to cover a complete technology in a single book, but rather to pick a single topic and discuss it in detail. The source of each book is a white paper, the TechNet wiki, or Books Online, and it is clearly listed right below the book title. The following books are available for SQL Server technology, and I encourage all of you to have a look at them, as they are great resources:
    - Master Data Services Capacity Guidelines
    - Microsoft SQL Server AlwaysOn Solutions Guide for High Availability and Disaster Recovery
    - Microsoft SQL Server Analysis Services Multidimensional Performance and Operations Guide
    - QuickStart: Learn DAX Basics in 30 Minutes
    - SQL Server 2012 Transact-SQL DML Reference

    You can download the above eBooks from here. This is indeed a great attempt, as each book talks about a single subject in depth, keeping the author's focus on a single and simple subject. I have previously written two books focusing on a single subject each, and I had great pleasure writing them. Writing on a focused subject gives the author complete freedom to explore that single subject without the burden of covering everything associated with the technology at large. Just like the eBooks mentioned earlier, my SQL Server Wait Stats book was inspired by my article series on SQL wait stats. The latest book, SQL Server Interview Questions and Answers, was derived from my article series on SQL interview questions and answers. Writing a book is an absolutely different concept than writing blog posts. When I was converting my blog posts to books, I ended up writing 50% new material and removing much of the repetitive content that shows up in a blog series. It was indeed fun, and at the same time a great learning experience as an individual.

    Reference: TechNet Wiki, Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • MPEG-2 playback inconsistent

    - by DustByte
    Many years ago I gave up on Linux because video playback was choppy. Now I'm back, and video playback is still playing up...

    I have two MPEG files: good.mpg and bad.mpg; I inspected both with avprobe. My machine is an Intel Core 2 Duo E8400 @ 3.00GHz x 2, 64-bit. I do not know what graphics card I have. I run Ubuntu 12.04. So far I have had no problems with YouTube and playback of various video files, including the file good.mpg. However, the file bad.mpg gives me a headache!

    The file bad.mpg was produced by a respectable "old video tapes to DVD" company. I converted over 10 Video-8 tapes to MPEG through them, and today I collected my hard drive containing the MPEG files. Unfortunately I have problems watching them! Here are some details:

    Using Totem Movie Player 3.0.1, playback works well for several seconds, then it gets choppy and is no longer smooth. The player also easily freezes for a while when trying to jump to another position in the file. Most strangely, though, the total time is shown as 0:42 (42 seconds) instead of the true 00:39:11.

    VLC media player does a better job. It shows the correct total length, but as soon as I jump to a new position in the video, it stalls. Playback also stalls after 30 seconds if I press play and leave it.

    Using Handbrake and choosing bad.mpg as the source, there is only one title to choose, and it is 6 min 53 seconds. I would have guessed the full 39 minutes of the video should have shown.

    Lastly, putting the file bad.mpg in Dropbox and viewing it on my iPad with the Dropbox app seems fine (disregard the lack of easy jumping forward due to real-time encoding when streaming it).

    My question is simple: What is going on?! Why do I have problems playing the MPEG-2 files I just paid good money for (the issue with bad.mpg applies to all the files I had encoded)? Is it an issue with my particular Linux machine? The graphics card? But why has everything worked fine so far, and why does the good.mpg file not cause any problems?

    Read the article

  • Windows Phone 7 Development Updates – March 8th 2011

    - by Nikita Polyakov
    Here are the latest updates from the Windows Phone 7 developer world that went live this month.

    Here are some of the latest numbers:
    - Windows Phone Marketplace currently offers more than 9,000 quality apps and games and enjoys a base of over 32,000 registered developers, delivering an average of 100 new apps every day.
    - There have been over 1 million downloads of the developer tools for Windows Phone 7.

    Trial versions help you sell more
    Trials result in higher sales, by the numbers:
    - Users like trials: paid apps with trial functionality are downloaded 70 times more than paid apps that don't offer one.
    - Nearly 1 out of 10 trial apps downloaded convert to a purchase, and they generate 10 times more revenue on average than paid apps that don't include trial functionality.
    - Trial downloads convert to paid downloads quickly. More than half of trial downloads that convert to a sale do so within the first 24 hours of the trial download, and mostly within 2 hours of the trial download.

    Microsoft Ad Control is gaining traction
    By the numbers, for ad-supported Windows Phone 7 apps:
    - Roughly 1/4 of all registered U.S. WP7 developers have downloaded the free Ad SDK for Silverlight and XNA.
    - Of ad-funded apps, over 95 percent use the free Microsoft Advertising Ad Control.
    - Monthly impressions from our Ad Exchange have continued to grow by double digits; impressions increased by 376 percent since January.

    For the Ad Control, the first wave of "How Do I" videos is now available on MSDN:
    - Create an Ad in a Windows Phone 7 XNA Game App
    - Register Ad-Enabled Windows Phone 7 Apps
    - Measure Ad Performance of Windows Phone 7 Apps

    Broader international app submission for free apps through Yalla Apps
    As of today you can start submitting your free applications in developer markets that are currently not covered by Microsoft. To submit your free application if you DO NOT belong to one of the Marketplace-supported countries, go to Yalla Apps.

    Marketplace policy updates: free app Marketplace submissions upped to 100, and other news
    Microsoft has been revisiting a few of its Marketplace policies based on feedback from developers, to reduce friction and cost. Word for word:
    1. We have raised the limit on the number of certifications that can be performed for FREE apps at no cost to the registered developer from five to 100. This was a common request from developers, which we are glad to implement after building alternate methods to ensure that users can find and download high quality apps.
    2. We have converted policy 5.6, related to the inclusion of contact information for support, from a mandatory to an optional policy. This is still a strongly recommended best practice, but we recognized and responded to developer feedback that this policy was creating excessive drag on the certification process for developers without commensurate user benefit for all apps.
    3. We also understand the desire for clarification with regard to our policy on applications distributed under open source licenses. The Marketplace Application Provider Agreement (APA) already permits applications under the BSD, MIT, Apache Software License 2.0 and Microsoft Public License. We plan to update the APA shortly to clarify that we also permit applications under the Eclipse Public License, the Mozilla Public License and other, similar licenses, and we continue to explore the possibility of accommodating additional OSS licenses.

    Enjoy and happy coding! Official Blog Post for reference.

    Read the article

  • SceneManagers as systems in entity system or as a core class used by a system?

    - by Hatoru Hansou
    It seems entity systems are really popular here. Links posted by other users convinced me of the power of such systems, and I decided to try one. (Well, that and my original code getting messy.)

    In my project I originally had a SceneManager class that maintained the logic and structures needed to organize the scene (a QuadTree, for my 2D game). Before rendering, I call selectRect() and pass the x, y of the camera and the width and height of the screen, and then obtain a minimized list containing only visible entities, ordered from back to front.

    Now with systems, in my first attempt my Render system required that all entities it should handle be added to it. This may sound like the correct approach, but I realized it was not efficient. Trying to optimize it, I reused the SceneManager class internally in the Renderer system, but then I realized I needed methods such as selectRect() in other systems too (AI principally) and had to make the SceneManager accessible globally again.

    Currently I have converted SceneManager to a system, and ended up with the following interface (only relevant methods shown):

        /// Base system interface
        class System
        {
        public:
            virtual void tick(double delta_time) = 0;
            // (methods to add and remove entities)
        };

        typedef std::vector<Entity*> EntitiesVector;

        /// Specialized system interface that allows querying the scene
        class SceneManager : public System
        {
        public:
            virtual EntitiesVector& cull() = 0;

            /// Sets the entity to be used as the camera, replacing any previous one.
            virtual void setCamera(Entity* entity) = 0;
        };

        class SceneRenderer // Not a system
        {
            virtual void render(EntitiesVector& entities) = 0;
        };

    Also, I could not work out how to convert renderers to systems. My game separates logic updates from screen updates; my main class has a tick() method and a render() method that may not be called the same number of times. In my first attempt renderers were systems, but they were saved in a separate manager, updated only in render() and not in tick() like all other systems. I realized that was silly and simply created a SceneRenderer interface, and gave up on converting them to systems - but that may be for another question.

    Then... something does not feel right, does it? If I understood correctly, a system should not depend on another system, or even count on another system exposing a specific interface. Each system should care only about its own entities, or nodes (as an optimization, so they have direct references to the relevant components without having to constantly call the component() or getComponent() method of the entity).
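    One common way out of this bind (my sketch, with invented names, and in C# rather than the poster's C++) is to treat the spatial index as a plain shared service that systems receive explicitly, rather than as a system itself; each system then depends on a narrow query interface instead of on another system:

        using System.Collections.Generic;

        // Narrow query interface: systems depend on this, not on each other.
        interface ISpatialIndex
        {
            List<int> SelectRect(float x, float y, float width, float height);
        }

        abstract class GameSystem
        {
            public abstract void Tick(double deltaTime);
        }

        // AI (and likewise rendering) receives the same index through its constructor.
        class AiSystem : GameSystem
        {
            private readonly ISpatialIndex index;
            public AiSystem(ISpatialIndex index) { this.index = index; }

            public override void Tick(double deltaTime)
            {
                // Query only nearby entities instead of iterating everything.
                List<int> nearby = index.SelectRect(0f, 0f, 100f, 100f);
                // ... run AI against 'nearby' ...
            }
        }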

    Read the article

  • Why is my laptop so sluggish? Or Damn You Facebook and Twitter! Or All Hail Chrome!

    - by John Conwell
    In the past three weeks, I've noticed that my laptop (dual core 2.1GHz, 2Gb RAM) has become amazingly sluggish. I only use it for communications and data-lookup workflows, so the slowness was tolerable. But today I finally got fed up with the suckiness and decided to get to the root of the problem (I do have strong performance roots, after all).

    It actually didn't take all that long to figure it out. About a year ago I converted to Google Chrome (away from Firefox). One of the great tools Chrome has is a "Task Manager" tool that gives you Windows Task Manager-like details for all the tabs open in the browser (Shift + Esc). Since every tab runs in its own process, it's easy from Task Manager (both Windows' and Chrome's) to identify and kill a single performance-offending tab. This is unlike IE, where you only get aggregate data about all open tabs. Anyway, I digress.

    Today my laptop sucked. Windows Task Manager told me that I had two memory-hogging Chrome tabs, but couldn't tell me which web pages those tabs were showing. Enter Chrome's Task Manager, which tells you the page title, along with the CPU, memory and network utilization of each tab. Enter my amazement. Turns out Facebook was using just shy of half a Gb of RAM. Half a Gigabyte! That's 512 Megabytes! 524,288 Kilobytes! 536,870,912 Bytes! Or 4,294,967,296 Bits! In other words, that's a frackin' boatload of memory. Now consider that Facebook is running on pretty much 96.3% (statistic based on absolutely nothing) of every household desktop, laptop, netbook, and mobile device in America; that is pretty horrific!

    And I wasn't playing any Facebook games like FarmWars or MafiaVille. I just had my normal, default home page up, showing me who just had breakfast or just got finished with their morning run.

    I'm sorry... let me say that again... HALF A GIG OF RAM! That is just unforgivable. I can just see my mom calling me up:

    Mom: "John... I think I need a new computer. Mine is really slow these days."
    John: "What do you have running?"
    Mom: "Oh, just Facebook."
    John: "Ok, close Facebook and tell me how fast your computer feels."
    Mom: "Well... I don't know how fast it is. All I do is use Facebook."
    John: "Ok Mom, I'll send you a new computer by Tuesday."

    Oh yeah... and the other offending web page? It was Twitter, using a quarter of a Gigabyte. God I love social networks!

    Read the article

  • Function Folding in #PowerQuery

    - by Darren Gosbell
    Originally posted on: http://geekswithblogs.net/darrengosbell/archive/2014/05/16/function-folding-in-powerquery.aspx

    Looking at a typical Power Query query, you will notice that it's made up of a number of small steps. As an example, take a look at the query I did in my previous post about joining a fact table to a slowly changing dimension. It was roughly built up of the following steps:
    1. Get all records from the fact table.
    2. Get all records from the dimension table.
    3. Do an outer join between these two tables on the business key (resulting in an increase in the row count, as there are multiple records in the dimension table for each business key).
    4. Filter out the excess rows introduced in step 3.
    5. Remove extra columns that are not required in the final result set.

    If Power Query were to execute a query like this literally, following the same steps in the same order, it would not be overly efficient, particularly if your two source tables were quite large. However, Power Query has a feature called function folding, where it can take a number of these small steps and push them down to the data source. The degree of function folding that can be performed depends on the data source. As you might expect, relational data sources like SQL Server, Oracle and Teradata support folding, but so do some of the other sources, like OData, Exchange and Active Directory.

    To explore how this works, I took the data from my previous post and loaded it into a SQL database. Then I converted my Power Query expression to source its data from that database. Below is the resulting Power Query, which I edited by hand so that the whole thing can be shown as a single expression:

        let
            SqlSource = Sql.Database("localhost", "PowerQueryTest"),
            BU = SqlSource{[Schema="dbo",Item="BU"]}[Data],
            Fact = SqlSource{[Schema="dbo",Item="fact"]}[Data],
            Source = Table.NestedJoin(Fact,{"BU_Code"},BU,{"BU_Code"},"NewColumn"),
            LeftJoin = Table.ExpandTableColumn(Source, "NewColumn"
                                          , {"BU_Key", "StartDate", "EndDate"}
                                          , {"BU_Key", "StartDate", "EndDate"}),
            BetweenFilter = Table.SelectRows(LeftJoin, each (([Date] >= [StartDate]) and ([Date] <= [EndDate])) ),
            RemovedColumns = Table.RemoveColumns(BetweenFilter,{"StartDate", "EndDate"})
        in
            RemovedColumns

    If the above query were run step by step in a literal fashion, you would expect it to run two queries against the SQL database, doing "SELECT * ..." from both tables. However, a profiler trace shows just the following single SQL query:

        select [_].[BU_Code],
            [_].[Date],
            [_].[Amount],
            [_].[BU_Key]
        from (
            select [$Outer].[BU_Code],
                [$Outer].[Date],
                [$Outer].[Amount],
                [$Inner].[BU_Key],
                [$Inner].[StartDate],
                [$Inner].[EndDate]
            from [dbo].[fact] as [$Outer]
            left outer join
            (
                select [_].[BU_Key] as [BU_Key],
                    [_].[BU_Code] as [BU_Code2],
                    [_].[BU_Name] as [BU_Name],
                    [_].[StartDate] as [StartDate],
                    [_].[EndDate] as [EndDate]
                from [dbo].[BU] as [_]
            ) as [$Inner] on ([$Outer].[BU_Code] = [$Inner].[BU_Code2] or [$Outer].[BU_Code] is null and [$Inner].[BU_Code2] is null)
        ) as [_]
        where [_].[Date] >= [_].[StartDate] and [_].[Date] <= [_].[EndDate]

    The resulting query is a little strange; you can probably tell that it was generated programmatically. But if you look closely, you'll notice that every single part of the Power Query formula has been pushed down to SQL Server. Power Query itself ends up just constructing the query and passing the results back to Excel; it does not do any of the data transformation steps itself. So now you can feel a bit more comfortable showing Power Query to your less technical colleagues, knowing that the tool will do its best to fold all the small steps in Power Query down to the most efficient query it can against the source systems.

    Read the article

  • TouchDevelop: The Fast Path to Windows 8 and Phone Apps

    - by Clint Edmonson
    Are you looking for a little extra cash for the upcoming holidays? Then you might be interested in creating some cool apps to sell in the Windows Store. Or maybe you're simply curious and want to try your hand at developing for Windows 8 and Windows Phone. In either case, the newly released TouchDevelop Web App is for you.

    TouchDevelop Web App is a development environment to create apps on your tablet or smartphone, without requiring a separate PC. Scripts written by using TouchDevelop can access data, media, and sensors on the phone, tablet, and PC. The script can interact with cloud services, including storage, computing, and social networks. TouchDevelop lets you quickly create fun games and useful tools, turning your scripts into true Windows Phone and Windows 8 apps.

    A year ago, Microsoft Research released TouchDevelop for Windows Phone, which is being used by enthusiasts, students, and researchers to program their phones in fun, inventive, and interesting ways. These scripts are available at TouchDevelop for anyone to download and use. Ever since we released TouchDevelop, we've been eyeing the tablet form factor and working on a version for the browser. Now, with the release of TouchDevelop Web App, the wait is over: the tablet version is ready, so go play around with it. All TouchDevelop scripts that are developed on the smartphone can be downloaded to the tablet and run (if hardware allows). Any script that is developed on the tablet can also be accessed on the phone. And scripts can be converted to Windows Phone or Windows 8 apps and submitted to the Windows Phone Store or Windows Store, respectively.

    TouchDevelop Web App's editor and programming language have been designed for tablet devices with touchscreens, but you can also use a keyboard and a mouse. So grab your web-enabled device and give the TouchDevelop Web App a try. It's fun and easy, and could even put a little cash in your holiday-depleted wallet. Or at least give you bragging rights at family get-togethers.

    Are you interested in further tips on Windows 8 development? Sign up for the 30 to launch program, which will help you build a Windows Store application in 30 days. You will receive a tip per day for 30 days, along with potential free design consultations and technical support from a Windows 8 expert. As always, stay tuned to my twitter feed for Windows 8, Windows Azure and other Microsoft announcements, updates, and links: @clinted

    Read the article

  • What algorithms can I use to detect if articles or posts are duplicates?

    - by michael
    I'm trying to detect whether an article or forum post is a duplicate entry within the database. I've given this some thought and come to the conclusion that someone who duplicates content will do so using one of three methods (in descending difficulty to detect):
    1. simple copy-paste of the whole text
    2. copy and paste of parts of the text, merged with their own
    3. copying an article from an external site and masquerading it as their own

    Prepping Text For Analysis
    Basically, I strip any anomalies; the goal is to make the text as "pure" as possible. For more accurate results, the text is "standardized" by:
    - Stripping duplicate white space and trimming leading and trailing white space.
    - Standardizing newlines to \n.
    - Removing HTML tags.
    - Stripping URLs, using a RegEx known as the Daring Fireball pattern. I use BB code in my application, so that is stripped too.
    - Converting accented characters (e.g. ä) and other non-English characters to their plain equivalents.

    I store information about each article in (1) a statistics table and (2) a keywords table.

    (1) Statistics Table
    The following statistics are stored about the textual content (much like this post):
    - text length
    - letter count
    - word count
    - sentence count
    - average words per sentence
    - automated readability index
    - Gunning fog score

    For European languages, Coleman-Liau and the Automated Readability Index should be used, as they do not use syllable counting and so should produce reasonably accurate scores.

    (2) Keywords Table
    The keywords are generated by excluding a huge list of stop words (common words), e.g. 'the', 'a', 'of', 'to', etc.

    Sample Data
    - text_length: 3963
    - letter_count: 3052
    - word_count: 684
    - sentence_count: 33
    - word_per_sentence: 21
    - gunning_fog: 11.5
    - auto_read_index: 9.9
    - keyword 1: killed
    - keyword 2: officers
    - keyword 3: police

    It should be noted that once an article gets updated, all of the above statistics are regenerated and could be completely different values.

    How could I use the above information to detect whether an article that's being published for the first time already exists within the database?

    I'm aware that anything I design will not be perfect, the biggest risks being that (1) content that is not a duplicate gets flagged as a duplicate and (2) the system lets duplicate content through. So the algorithm should generate a risk-assessment number, from 0 being no duplicate risk, through 5 being a possible duplicate, to 10 being a duplicate. Anything above 5 means there is a good possibility that the content is a duplicate; in this case the content could be flagged and linked to the articles that are possible duplicates, and a human could decide whether to delete or allow it.

    As I said before, I'm storing keywords for the whole article; however, I wonder if I could do the same on a per-paragraph basis. This would also mean further separating my data in the DB, but it would make it easier to detect case (2) from my initial post. I'm thinking a weighted average between the statistics, but in what order, and what would the consequences be...?
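    As a starting point for the scoring question (my sketch, not from the original question), the keyword tables could be compared with a Jaccard index and scaled onto the 0-10 risk range described above:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class DuplicateScorer
        {
            // Returns a 0..10 risk score: 10 = identical keyword sets, 0 = disjoint sets.
            static double KeywordRisk(ISet<string> a, ISet<string> b)
            {
                if (a.Count == 0 && b.Count == 0) return 0;
                double intersection = a.Intersect(b).Count();
                double union = a.Union(b).Count();
                return 10.0 * intersection / union; // Jaccard index scaled to 0..10
            }

            static void Main()
            {
                var existing = new HashSet<string> { "killed", "officers", "police" };
                var incoming = new HashSet<string> { "police", "officers", "injured" };
                Console.WriteLine(KeywordRisk(existing, incoming)); // 5 -> flag for human review
            }
        }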

    Read the article

  • Making A Photo-Sharing App For Android In Eclipse [on hold]

    - by user3694394
    I've only just started developing mobile apps, which is something that I've been wanting to learn for a while now. I'm from an indie games studio, making PC games for around the last 3 years, and I finally decided to move into Android app development. The only problem I'm having is that I don't know where to start.

    The project I'm aiming to create will be something similar to Instagram: basically a photo-sharing app which allows users to take new photos, or pull them from their device, and add filters to them before posting them. I have a rough idea of how I could go about doing this, but I need pointing towards any tutorials available for each specific step. So, here's my idea:
    1. Create a UI in Eclipse (this won't be a problem for me; I should be able to do this fine through XML files).
    2. Set up a server-side database to store all user info and uploaded images (the images will need to be converted into byte arrays, and I have no idea how to do this through a database). My best idea would be to use a MySQL database for this.
    3. Add user interactions (likes, favourites, reposts, etc.). These would, again, have to be stored in the database (or, at least, I think they would).
    4. Add the ability to take new photos using the phone's camera (I can probably do this anyway, using the Camera API).
    5. Add the ability to pull existing photos from the device (again, pretty simple to do).
    6. Add the ability to add filters to any photos (I had a look around, and there are some repos and resources which allow you to do this, but they're mainly for iOS development).
    7. Add Facebook/Twitter integration (possibly) to allow photos to be shared to other social networks.
    8. Create a news feed which shows users all of the latest photos from their friends and allows them to post their own images to it.
    9. Give all registered users their own wall/page which has their latest posts/images displayed on it.
    10. Add the ability to allow users to follow other users, and display their followed users' posts on their news feed.

    Yep. It's not going to be easy, and I don't even know if it's possible for me to do alone in Eclipse. However, this is the plan, and I'm going to do my best to learn everything I need to know to do this successfully. My actual question would be: how should I start doing this? Where do I begin learning how to do all of this? I've had a look at Snapify, which can be edited via Parse, but I won't be spending hundreds of dollars on software (since I'm 15 and just don't have the available funds). I have extensive knowledge of Java (again, I've been making games for around 3 years, mainly in Java) and various scripting languages. So, hopefully, this will be of some use here.

    Thanks in advance,
    Josh.

    Read the article

  • How can I achieve a 3D-like effect with spritebatch's rotation and scale parameters

    - by Alic44
    I'm working on a 2d game with a top-down perspective similar to Secret of Mana and the 2D Final Fantasy games, with one big difference being that it's an action rpg using a 3-dimensional physics engine. I'm trying to draw an aimer graphic (basically an arrow) at my characters' feet when they're aiming a ranged weapon.

    At first I just converted the character's aim vector to radians and passed that into spritebatch, but there was a problem. The position of every object in my world is scaled for perspective when it's drawn to the screen. So if the physics engine coordinates are (1, 0, 1), the screen coords are actually (1, .707) -- the Y and Z axis are scaled by a perspective factor of .707 and then added together to get the screen coordinates. This meant that the direction the aimer graphic pointed (thanks to its rotation value passed into spritebatch) didn't match up with the direction the projectile actually traveled over time. Things looked fine when the characters fired left, right, up, or down, but if you fired on a diagonal the perspective of the physics engine didn't match with the simplistic way I was converting the character's aim direction to a screen rotation.

    Ok, fast forward to now: I've got the aimer's rotation matched up with the path the projectile will actually take, which I'm doing by decomposing a transform matrix which I build from two rotation matrices (one to represent the aimer's rotation, and one to represent the camera's 45 degree rotation on the x axis).

    My question is, is there a way to get not just rotation from a series of matrix transformations, but to also get a Vector2 scale which would give the aimer the appearance of being a 3d object, being warped by perspective? Orthographic perspective is what I'm going for, I think. So, the aimer arrow would get longer when facing sideways, and shorter when facing north and south because of the perspective. At the same time, it would get wider when facing north and south, and less wide when facing right or left.

    I'd like to avoid actually drawing the aimer texture in 3d because I'm still using spritebatch's layerdepth parameter at this point in my project, and I don't want to have to figure out how to draw a 3d object within the depth sorting system I already have. I can provide code and more details if this is too vague as a question... This is my first post on stack exchange. Thanks a lot for reading!

    Note: (I think) I realize it can't be a technically correct 3D perspective, because the spritebatch's vector2 scaling argument doesn't allow for an object to be skewed the way it actually should be. What I'm really interested in is, is there a good way to fake the effect, or should I just drop it and not scale at all?

    Edit to clarify without the help of a picture (apparently I can't post them yet): I want the aimer arrow to look like it has been painted on the ground at the character's feet, so it should appear to be drawn on the ground plane (in my case the XZ plane), which should be tilted at a 45 degree angle (around the X axis) from the viewing perspective.

    Alex
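    One way to fake it (a sketch under my own assumptions, not a tested answer): with an orthographic camera tilted 45 degrees around the X axis, a ground-plane direction (x, z) lands on screen as (x, 0.707 * z). The on-screen rotation and the length/width scales can then be computed from the projected aim direction and its perpendicular:

        using System;

        static class AimerProjection
        {
            const double Perspective = 0.707; // how much the Z (depth) axis is squashed on screen

            // Given a unit aim direction on the ground plane (x, z), returns the
            // rotation (radians) plus length/width scales for the sprite.
            public static void Project(double x, double z,
                                       out double rotation, out double lengthScale, out double widthScale)
            {
                // Project the aim direction and measure how much it shrinks on screen.
                double px = x, py = Perspective * z;
                rotation = Math.Atan2(py, px);
                lengthScale = Math.Sqrt(px * px + py * py);

                // The arrow's width follows the perpendicular ground direction (-z, x).
                double qx = -z, qy = Perspective * x;
                widthScale = Math.Sqrt(qx * qx + qy * qy);
            }
        }

    For a texture whose arrow points along +X, the two scales would go into SpriteBatch.Draw's Vector2 scale parameter as (lengthScale, widthScale): sideways aim gives length 1 and width 0.707, while north/south aim gives length 0.707 and width 1, matching the behavior described above.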

    Read the article

  • Why can't Java/C# implement RAII?

    - by mike30
    Question: Why can't Java/C# implement RAII?

    Clarification: I am aware the garbage collector is not deterministic, so with the current language features it is not possible for an object's Dispose() method to be called automatically on scope exit. But could such a deterministic feature be added?

    My understanding: I feel an implementation of RAII must satisfy two requirements:

    1. The lifetime of a resource must be bound to a scope.
    2. Implicit. The freeing of the resource must happen without an explicit statement by the programmer, analogous to a garbage collector freeing memory without an explicit statement. The "implicitness" only needs to occur at the point of use of the class. The class library creator must, of course, explicitly implement a destructor or Dispose() method.

    Java/C# satisfy point 1. In C# a resource implementing IDisposable can be bound to a "using" scope:

        void test() {
            using (Resource r = new Resource()) {
                r.foo();
            } // resource released on scope exit
        }

    This does not satisfy point 2. The programmer must explicitly tie the object to a special "using" scope, and programmers can (and do) forget to do so, creating a leak. In fact the "using" blocks are converted to try-finally-dispose() code by the compiler, so they have the same explicit nature as the try-finally-dispose() pattern. Without an implicit release, the hook to a scope is syntactic sugar.

        void test() {
            // Programmer forgot (or was not aware of the need) to explicitly
            // bind Resource to a scope.
            Resource r = new Resource();
            r.foo();
        } // resource leaked!!!

    I think it is worth creating a language feature in Java/C# allowing special objects that are hooked to the stack via a smart pointer. The feature would allow you to flag a class as scope-bound, so that it is always created with a hook to the stack. There could be options for different types of smart pointers.

        class Resource - ScopeBound {
            /* class details */
            void Dispose() {
                // free resource
            }
        }

        void test() {
            // class Resource was flagged as ScopeBound, so the tie to the stack is implicit.
            Resource r = new Resource(); // r is a smart pointer
            r.foo();
        } // resource released on scope exit

    I think the implicitness is "worth it", just as the implicitness of garbage collection is "worth it". Explicit using blocks are refreshing on the eyes, but offer no semantic advantage over try-finally-dispose(). Is it impractical to implement such a feature in the Java/C# languages? Could it be introduced without breaking old code?
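
    For comparison, Java's counterpart to C#'s using block is the try-with-resources statement built on Java 7's AutoCloseable, and it illustrates the same objection raised above: the binding to a scope is still an explicit opt-in at the point of use. A minimal sketch:

        class Resource implements AutoCloseable {
            void foo() { /* use the resource */ }

            @Override
            public void close() {
                // free the resource deterministically
            }
        }

        class Demo {
            static void test() {
                // Explicit opt-in, just like C#'s "using": declare r outside
                // a try-with-resources block and the release instead waits
                // on the garbage collector (which may never run a finalizer).
                try (Resource r = new Resource()) {
                    r.foo();
                } // close() is called here, on scope exit
            }
        }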

    Read the article

  • MobaXterm - SSH Key authentication

    - by Chip Sprague
    I have a key that I converted, and it works fine with PuTTY. I have tried these formats:

        ssh -p 1111 -i id_rsa [email protected]
        ssh -i id_rsa -p 1111 [email protected]

    The key is in the same folder as the MobaXterm executable. Thanks!

    EDIT:

        [chip.client] $ ssh -p 1111 -i id_rsa [email protected] -v
        Warning: Identity file id_rsa not accessible: No such file or directory.
        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to 192.168.0.9 [192.168.0.100] port 1111.
        debug1: Connection established.
        debug1: identity file /home/chip/.ssh/id_rsa type -1
        debug1: identity file /home/chip/.ssh/id_rsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3p1 Debian-3ubuntu7
        debug1: match: OpenSSH_5.3p1 Debian-3ubuntu7 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 [email protected]
        debug1: kex: client->server aes128-ctr hmac-md5 [email protected]
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: checking without port identifier
        Warning: Permanently added '[192.168.0.100]:1111' (RSA) to the list of known hosts.
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /home/chip/.ssh/id_rsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).
        [01/09/2011 - 09:15.38] ~
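
    A hedged reading of the log: the relative path id_rsa is not resolved from the session's working directory ("Identity file id_rsa not accessible"), so ssh falls back to the default /home/chip/.ssh/id_rsa, which evidently contains no usable key ("type -1"). Assuming the converted key is in OpenSSH format rather than PuTTY's .ppk (PuTTYgen can export the OpenSSH form via Conversions > Export OpenSSH key), copying it into the home directory and passing an absolute path should rule out the path problem; the source path below is hypothetical:

        cp /drives/c/keys/id_rsa /home/chip/.ssh/id_rsa   # source location is illustrative
        chmod 600 /home/chip/.ssh/id_rsa                  # OpenSSH rejects keys with loose permissions
        ssh -p 1111 -i /home/chip/.ssh/id_rsa [email protected]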

    Read the article

  • WatchGuard SSL Certificate problems

    - by Bill Best
    We recently purchased a WatchGuard XTM 510. The hope is to replace our ISA 2006 proxy with this UTM product, but we are having some issues with secured sites in our test setup.

    Currently we are still running traffic through the ISA server, and I have the WatchGuard also connected to the network. Where we run into problems is when I set the HTTPS sites' location in ISA to be forwarded through the XTM: I get a "certificate could not be validated" error. I think I've narrowed it down to two possibilities.

    One: the certificate needs to be installed on the XTM. I'm not 100% sure this is the case, as I believe the XTM should be acting strictly as a proxy and forwarding all the traffic through, no questions asked. Either way, if I try to import a certificate to the XTM, I always get a "certificate validation failed" error message. These are generally pfx files converted to pem.

    Two: the XTM CA certificate needs to be installed on the ISA server so that the two can communicate. I have done this, but it didn't seem to do anything.

    I believe this should be working and was hoping someone has struggled through this before.
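
    On the first possibility: a PEM import that always fails is often a format issue rather than a trust issue, since appliances tend to want the private key and the certificate chain as separate, unencrypted PEM files. A hedged sketch of the pfx-to-pem split, assuming OpenSSL is available and using placeholder file names:

        # Pull the private key out of the PKCS#12 bundle, without a passphrase
        openssl pkcs12 -in exported.pfx -nocerts -nodes -out server.key

        # Pull out the server certificate itself
        openssl pkcs12 -in exported.pfx -clcerts -nokeys -out server.crt

        # Pull out any intermediate/CA certificates in the bundle
        openssl pkcs12 -in exported.pfx -cacerts -nokeys -out chain.pem

    If the XTM really is acting as a plain pass-through proxy, it should not need the site certificate at all, which would point back at the second possibility; whether it terminates or inspects HTTPS depends on how the proxy action is configured.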

    Read the article

  • Cannot reactivate RAID-5 volume: The size of the plex member is invalid

    - by Ian Boyd
    We had a 3-drive Windows Server 2008 R2 RAID-5 fail (operating in redundancy mode):

    - WDC 1 TB
    - WDC 1 TB
    - WDC 1 TB

    We removed the failed hard drive and put a WDC 1 TB drive (that we had standing by) into the machine. When launched, Disk Manager asked permission to "initialize" the disk as either:

    - Master Boot Record (MBR)
    - GUID Partition Table (GPT)

    We initialized the disk as GPT, converted it to dynamic, and tried to use the Repair Volume command, except it was greyed out (which is a terrifying thing on a failed production server hosting 3 virtual servers).

    I tried from the diskpart command-line tool. First we look for our RAID-5 volume that is in Failed Rd mode:

        DISKPART> list volume

          Volume ###  Ltr  Label        Fs     Type       Size     Status     Info
          ----------  ---  -----------  -----  ---------  -------  ---------  --------
          Volume 0     E   VMs (Raid5)  NTFS   RAID-5     1863 GB  Failed Rd
          Volume 1     D                       DVD-ROM       0 B   No Media
          Volume 2         System Rese  NTFS   Partition   100 MB  Healthy    System
          Volume 3     C                NTFS   Partition  1862 GB  Healthy    Boot

    There, Volume 0. Make that our active context:

        DISKPART> select volume 0

        Volume 0 is the selected volume.

    Now we need to find the disk we will be repairing the volume with:

        DISKPART> list disk

          Disk ###  Status         Size     Free     Dyn  Gpt
          --------  -------------  -------  -------  ---  ---
          Disk 0    Online          931 GB     0 B    *
          Disk 1    Online          931 GB   931 GB   *
          Disk 2    Online         1863 GB     0 B
          Disk 3    Online          931 GB     0 B    *
          Disk M0   Missing           0 B     0 B    *

    The disk with 931 GB free: Disk 1. Now we just need to repair the volume:

        DISKPART> repair disk=1

        Virtual Disk Service error:
        The size of the plex member is invalid.
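
    A hedged guess at the error, based on the session above: the surviving RAID members appear to be MBR dynamic disks, while the replacement was initialized as GPT, and GPT reserves space for its own metadata, which can leave the candidate plex member fractionally smaller than the plex requires. Since Disk 1 is still empty, one sketch of a fix is to wipe it and re-initialize it as an MBR dynamic disk before retrying the repair (clean destroys everything on the selected disk, so triple-check the disk number first):

        DISKPART> select disk 1
        DISKPART> clean
        DISKPART> convert mbr
        DISKPART> convert dynamic
        DISKPART> select volume 0
        DISKPART> repair disk=1

    All of these are standard diskpart commands; whether the GPT size difference is really the cause here is an assumption worth verifying against the exact byte counts of the member disks.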

    Read the article

< Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >