Search Results

Search found 2612 results on 105 pages for 'jd pack'.


  • HTML5 and Visual Studio 2010

    - by Harish Ranganathan
    All of us work with Visual Studio (or the free Visual Web Developer Express Edition) for developing web applications targeting ASP.NET / ASP.NET MVC or Silverlight etc. Over the years, Visual Studio has grown to a great extent. From being a simple, limited-functionality tool in VS.NET 2002 to the multi-faceted, MEF-driven Visual Studio 2010, it has come a long way. And as much as Visual Studio supports rapid web development by generating HTML markup, it has also added IntelliSense for some of the HTML boilerplate that one would otherwise have to type monotonously every time. For example, in Visual Studio 2010 one can just type the angle bracket "<" and then the first letter "h" or "x" (for HTML or XHTML respectively), press Tab twice, and it renders the entire markup required for the XHTML 1.0/1.1 or HTML strict/transitional DOCTYPE, including the fully qualified W3C URL. The same holds for specifying the HTML type declaration. The difference between HTML and XHTML has been discussed in detail already; if you are interested, you can read about it at http://www.w3schools.com/xhtml/xhtml_html.asp

    But the industry trend, and the buzz around, is HTML5. With browsers like IE9 Beta, Google Chrome and Firefox 4 supporting HTML5 standards big time, everyone wants to start developing HTML5-based websites. VS developers (like me) often get asked when VS will start supporting HTML5. VS 2010 was released last year, and HTML5 is still a specification under development. Clearly, on the timeline when we started developing Visual Studio (way back in 2008), the HTML5 specs were almost non-existent. Even today, the HTML5 working group recommends not depending fully on the entire markup set, as the specs are still under development and might change in the future. However, with Visual Studio 2010 SP1 Beta, there is quite a bit of support for HTML5-based web development. In fact, one of my colleagues pointed out that SP1 Beta's major enhancement is its ability to support HTML5 tags and even add server mode to them.

    Let's look at the existing validation schemas available in Visual Studio (Tools – Options – Validation). This is before installing Visual Studio 2010 SP1 Beta. Clearly, the validation options are restricted to HTML 4.01 and XHTML 1.1 transitional and below. Also, let's consider using some of the new HTML5 input type elements (something I found out just today from a friend who is also on the ASP.NET team). <input type="email"> is one of the new input type elements according to the HTML5 specification. Now, this works well if you type it as is in Visual Studio, and the page renders without any issue (since the default behaviour is that if an undefined type is specified for an input tag, it falls back to the default mode, which is text). The moment you add <input type="email" runat="server">, however, you get an error. Naturally, you don't get IntelliSense support for these new tags either. Once you install Visual Studio 2010 Service Pack 1 Beta from here (it takes a while, so you need to be patient for the installation to complete), you will start getting additional validation templates for HTML5, as below. Once you set this, you can start using HTML5 elements in your web page without getting errors/warnings.
    Look at the screenshot below for the new "video" tag showing up in IntelliSense (video is part of the new HTML5 specification). Note that you still need to hook up the <!DOCTYPE html> at the top manually, as it doesn't change automatically (from the default XHTML 1.0 strict) when you create a new page. The new input type tags in HTML5 are supported as well. One can also use <asp:TextBox type="email">, which in turn generates the <input type="email"> markup when the page is rendered. In fact, as of SP1 Beta, this is the only way to use the new input type tags with the runat="server" attribute (otherwise you get the parser error mentioned above; this issue should be fixed by the final release of SP1). Going further, there may be more support for server tags for some of the common HTML5 elements, but this is currently work in progress. So, other than not having runat="server" support for the new HTML5 input tags, you can pretty much build and target HTML5 websites with Visual Studio 2010 SP1 Beta today. For those who are running Visual Studio 2008, there is also the "HTML5 intellisense for Visual Studio 2010 and 2008" package available for download from http://visualstudiogallery.msdn.microsoft.com/d771cbc8-d60a-40b0-a1d8-f19fc393127d/ Note that if you are running Visual Studio 2010, the recommended approach is to install the SP1 Beta, which will be the way forward for HTML5 support in Visual Studio. Of course, you need to test these on a browser supporting HTML5 such as IE9 Beta, Chrome or Firefox 4. You can download IE9 Beta from here. You can also follow the Visual Web Developer Team Blog for more updates on the stuff they are building. Cheers!
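    To make the two approaches above concrete, here is a minimal page sketch based on the behaviour described in the post (the page directive, control ID and titles are illustrative additions, not from the original article):

        <%@ Page Language="C#" %>
        <!DOCTYPE html>
        <html>
        <head runat="server">
            <title>HTML5 input types with VS 2010 SP1 Beta</title>
        </head>
        <body>
            <form id="form1" runat="server">
                <!-- Plain HTML5 input: renders fine, but adding runat="server" here still errors in SP1 Beta -->
                <input type="email" />
                <!-- Server-side workaround: per the article, this is rendered as <input type="email"> -->
                <asp:TextBox ID="EmailBox" runat="server" type="email" />
            </form>
        </body>
        </html>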

    Read the article

  • Oracle Linux and Oracle VM pricing guide

    - by wcoekaer
    A few days ago someone showed me a pricing guide from a Linux vendor and I was a bit surprised at the complexity of it, especially when you look at larger servers (4 or 8 sockets) and when adding virtual machine use into the mix. I think we have a very compelling and simple pricing model for both Oracle Linux and Oracle VM. Let me see if I can explain it in 1 page, not 10 pages. This pricing information is publicly available on the Oracle store; I am using the current public list prices. Also keep in mind that this is for customers using non-Oracle x86 servers. When a customer purchases an Oracle x86 server, the annual systems support includes full use (all you can eat) of Oracle Linux, Oracle VM and Oracle Solaris (no matter how many VMs you run on that server, in case you deploy guests on a hypervisor). This support level is the equivalent of Premier support in the list below.

    Let's start with Oracle VM (x86). Oracle VM support subscriptions are per physical server on which you deploy the Oracle VM Server product.
    (1) Oracle VM Premier Limited - 1- or 2-socket server : $599 per server per year
    (2) Oracle VM Premier - more than 2-socket server (4, 8 or more) : $1199 per server per year
    The above includes the use of Oracle VM Manager and Oracle Enterprise Manager Cloud Control's Virtualization management pack (including the self-service cloud portal, etc.), 24x7 support, and access to bugfixes, updates and new releases. It also includes all options: live migrate, dynamic resource scheduling, high availability, dynamic power management, etc. If you want to play with the product, or even use the product without access to support services, the product is freely downloadable from edelivery.

    Next, Oracle Linux. Oracle Linux support subscriptions are per physical server. If you plan to run Oracle Linux as a guest on Oracle VM, VMware or Hyper-V, you only have to pay for a single subscription per system; we do not charge per guest or per number of guests. In other words, you can run any number of Oracle Linux guests per physical server and count it as just a single subscription.
    (1) Oracle Linux Network Support - any number of sockets per server : $119 per server per year
    Network support does not offer support services. It provides access to the Unbreakable Linux Network and also offers full indemnification for Oracle Linux.
    (2) Oracle Linux Basic Limited Support - 1- or 2-socket servers : $499 per server per year
    This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA, and use of Oracle Enterprise Manager Cloud Control for Linux OS management. It includes ocfs2 as a clustered filesystem.
    (3) Oracle Linux Basic Support - more than 2-socket server (4, 8 or more) : $1199 per server per year
    This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA, and use of Oracle Enterprise Manager Cloud Control for Linux OS management. It includes ocfs2 as a clustered filesystem.
    (4) Oracle Linux Premier Limited Support - 1- or 2-socket servers : $1399 per server per year
    This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA, use of Oracle Enterprise Manager Cloud Control for Linux OS management, and XFS filesystem support. It also offers Oracle Lifetime Support, backporting of patches for critical customers into previous versions of a package, and Ksplice zero-downtime updates.
    (5) Oracle Linux Premier Support - more than 2-socket servers : $2299 per server per year
    This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA, use of Oracle Enterprise Manager Cloud Control for Linux OS management, and XFS filesystem support. It also offers Oracle Lifetime Support, backporting of patches for critical customers into previous versions of a package, and Ksplice zero-downtime updates.
    (6) Freely available Oracle Linux - any number of sockets
    You can freely download Oracle Linux, install it on any number of servers and use it for any reason, without support, without the right to use extra features like Oracle Clusterware or Ksplice, and without indemnification. However, you do have full access to all errata. Need support? Then use options (1)-(5).

    So that's it. Count the number of 2-socket boxes and the number of larger boxes, decide on the Basic or Premier support level, and you are done. For example, 20 two-socket web servers on Basic Limited support plus two 8-socket database servers on Premier support would come to 20 x $499 + 2 x $2299 = $14,578 per year. You don't have to worry about different levels based on how many virtual instances you deploy or want to deploy. A very simple menu of choices. We offer, inclusive: Linux OS clusterware, Linux OS management, provisioning and monitoring, a cluster filesystem (OCFS2), a high-performance filesystem (XFS), DTrace, Ksplice, and OFED (the InfiniBand stack for high-performance networking). No separate add-on menus.

    NOTE: a socket/CPU can have any number of cores. So whether you have a 4-, 6-, 8-, 10- or 12-core CPU doesn't matter; we count the number of physical CPUs.

    Read the article

  • Trash Destination Adapter

    The Trash Destination and this article came from early experiences of using SSIS and community feedback at the time. When developing a package it is very useful to have a destination adapter that does nothing but consume rows, with no setup requirement. You often want to run a package part way through development, or just add a path so you can set a Data Viewer. There are stock tasks that can be used, but with the Trash Destination all columns are treated as selected automatically (usage type of read-only), so the pipeline knows they are required. It is also obvious that this is for development or diagnostic purposes, and is clearly not a part of the functional design of the package. It is also ideal for just playing around and exploring concepts in SSIS, and is often used in conjunction with the Data Generator Source. Using these two components it is easy to set up a test of an expression in the Derived Column Transformation, for example. The Data Generator Source provides some dummy data, and the Trash Destination allows you to anchor the output path and set a Data Viewer to examine the results. It can also be used when performance tuning packages. It is a consistent and known quantity that has no external influences, so it is ideal as a destination when breaking the data flow into sections to isolate a bottleneck. The adapter is really simple to use and requires no setup. Simply drop it onto the pipeline designer and use it to terminate your data flow path.

    Installation
    The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally, for 2005/2008, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trash Destination transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations.

    Downloads
    The Trash Destination is available for SQL Server 2005, SQL Server 2008 (including R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.
    Trash Destination for SQL Server 2005
    Trash Destination for SQL Server 2008
    Trash Destination for SQL Server 2012

    Version History
    SQL Server 2012
    Version 3.0.0.34 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)
    SQL Server 2008
    Version 2.0.0.33 - SQL Server 2008 release. Includes support for upgrade of 2005 packages. RTM compatible, previously February 2008 CTP. (4 Mar 2008)
    Version 2.0.0.31 - SQL Server 2008 November 2007 CTP. (14 Feb 2008)
    SQL Server 2005
    Version 1.0.2.18 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. (12 Jun 2006)
    Version 1.0.1.1 - SQL Server 2005 IDW 15 June CTP. Minor enhancements over v1.0.1.0. (11 Jun 2005)
    Version 1.0.1.0 - SQL Server 2005 IDW 14 April CTP. First Public Release. (30 May 2005)

    Troubleshooting
    Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012. If you get an error when you try to use the component along the lines of "The component could not be added to the Data Flow task. Please verify that this component is properly installed. ... The data flow object "Konesans ..." is not installed correctly on this computer", this usually indicates that the internal cache of SSIS components needs to be updated. This cache is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. The full error message is shown below for reference:

    TITLE: Microsoft Visual Studio
    ------------------------------
    The component could not be added to the Data Flow task.
    Please verify that this component is properly installed.
    ------------------------------
    ADDITIONAL INFORMATION:
    The data flow object "Konesans.Dts.Pipeline.TrashDestination.Trash, Konesans.Dts.Pipeline.TrashDestination, Version=1.0.1.0, Culture=neutral, PublicKeyToken=b8351fe7752642cc" is not installed correctly on this computer. (Microsoft.DataTransformationServices.Design)

    For 2005/2008, once installation is complete you need to manually add the task to the toolbox before you will see it and are able to add it to packages - see How do I install a task or transform component? This is not necessary for SQL Server 2012, as the new SSIS toolbox automatically detects components. If you are still having issues then contact us, but please provide as much detail as possible about the error, as well as which version of the task you are using and details of the SSIS tools installed.
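    If you would rather restart the SSIS service from a command prompt than from the Services applet, something along these lines works (the service names below are the usual defaults and are not taken from the article; check the Services applet on your machine if in doubt):

        rem Restart the SSIS service so it refreshes its cache of installed pipeline components.
        rem Typical default service names: MsDtsServer (2005), MsDtsServer100 (2008/R2), MsDtsServer110 (2012).
        net stop MsDtsServer100
        net start MsDtsServer100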

    Read the article

  • Red Gate does Byte Night 2012

    - by red(at)work
    On the 5th of October 2012, a team of nine plucky Red Gaters braved the howling wind and the driving rain to sleep outside. No tents or mattresses were allowed – all we took for protection were sleeping bags, groundsheets, plastic sacks and Colin's enormous fishing umbrella (a godsend in umbrella-y disguise). Why would we do such a thing? For Byte Night, an annual tech sector sleepout in support of Action for Children, who tackle the causes as well as the consequences of youth homelessness. Byte Night encourages technology professionals to do for one night a year what thousands of young people have to do every night – sleep rough.

    We signed up for Byte Night in the warm, heady midst of the British summer, thinking it couldn't possibly be all that bad. Even on the night itself – before the rain began to fall, sat in the comfort and warmth of a company canteen, drinking wine and eating chilli and preparing to win the pub quiz – we were excited and optimistic about the night that lay ahead of us. All of that changed as soon as we stepped out into one of the worst rainstorms of the year. Brian, the team's birthday boy, describes it best:

    Picture the scene: it's 3 am on a Friday. I'm lying outside, fully clothed in a sleeping bag, wearing a raincoat, trussed up inside a large plastic pocket, on a ground sheet beneath a giant umbrella, wedged so tightly between two of my colleagues that I can't move my arms. I'm wide awake, staring up at the grey sky beyond the edge of the umbrella; a limp, flickering white glow hints at a moon somewhere behind the drifting clouds. I haven't slept since we first moved outside at 11 pm. Outside. Did I mention we were outside? I'm hung over. I need the loo. But there is no way on earth that I'm getting out of this sleeping bag. It's cold. It's raining. Not just raining, but chucking it down. It's been doing this non-stop since 10 pm. The rain sounds like a hyperactive drummer on the fishing umbrella, and the noise is loud and relentless. Puddles of water are forming all over the groundsheet, and, despite being ensconced inside the plastic pouch, I am wet. The fishing umbrella is protecting me from the worst of the driving rain, but not all of me is under it, and five hours of rain is no match for it. Everything is wet. My left side has become horribly damp. My trainers, which I placed next to my sleeping bag, are now completely soaked through. Mmm. That'll be fun in the morning. My head is next to Colin's head on one side, and a multi-pack of McCoy's cheddar and onion crisps on the other. Don't ask about the tub of hummus. That's somewhere down by my ankles, abandoned to the night. Jess, who is lying next to me, rolls over onto her side. A mini waterfall cascades from her rain-pouch onto my face. Bah. I continue to stare into the heavens, willing the dawn to hurry up. Something lands on my face. It's a mosquito. Great. Midnight, when this still seemed like fun – when we opened some champagne and my colleagues presented me with a caterpillar birthday cake, when everyone was drunk and jolly and full of stoic resolve – feels like a long time ago. Did I mention that today is my birthday? The remains of the caterpillar cake endure the same fate as the hummus, left out in the rain like a metaphor for sadness. It's getting colder. I can see my breath. Silence has descended on the group, apart from the rustle of plastic. And the rain, obviously. Someone snores, and I envy whoever it is the sweet escape of sleep.
I try to wriggle a bit further down inside my sleeping bag, but it doesn’t want to be wriggled into. Only 3 hours till dawn. 180 minutes. I begin to count them off, one at a time.  All nine of us got to go home in the morning, but thousands of children across the UK don’t have that luxury. If you’d like to sponsor the Red Gate Byte Night team, our JustGiving page can be found here.   Chris, before the outside bit actually happened. More photos from Byte Night Cambridge 2012 can be found here.

    Read the article

  • Forcing an External Activation with Service Broker

    - by Davide Mauri
    In these last days I've been working quite a lot with Service Broker, a technology I'm really happy to work with, since it can give a lot of satisfaction. The scale-out solution one can easily build is simply astonishing. I'm helping a company to build a very scalable and – yet almost inexpensive – invoicing system that has to be able to scale out using commodity hardware. To offload the work from the main server to satellite "compute nodes" (yes, I've borrowed this term from PDW) we're using Service Broker and the External Activator application available in the SQL Server Feature Pack.

    For those who are not used to working with SSB, External Activation is a feature that allows you to intercept the arrival of a message in a queue right from your application code: http://msdn.microsoft.com/en-us/library/ms171617.aspx (look for "Event-Based Activation"). To make life even easier, Microsoft released the External Activator application, which saves you from writing even this code: http://blogs.msdn.com/b/sql_service_broker/archive/tags/external+activator/ The External Activator application can be configured to execute your own application so that each time a message – an invoice in my case – arrives in the target queue, the invoking application is executed and the invoice is calculated. The very nice feature of the External Activator is that it can automatically execute as many instances of the configured application as needed, in order to process as many messages as your system can handle. This helps a lot in creating a scale-out solution, leaving the developer with only a fraction of the problems that usually come with asynchronous programming. Developers are also shielded from Service Broker, since everything can be encapsulated in Stored Procedures, so that – for them – developing such a scale-out asynchronous solution is not much more complex than just executing a bunch of Stored Procedures.

    Now, if everything works correctly, you don't have to bother about anything else. You put messages in the queue and your application, invoked by the External Activator, processes them. But what happens if, for some reason, your application fails to process the messages? For example, what if it crashes? The message is safe in the queue, so you just need to process it again. But your application is invoked by the External Activator application, so now the question is: how do you wake up that app? Service Broker will engage the activation process only if certain conditions are met: http://msdn.microsoft.com/en-us/library/ms171601.aspx But how can we invoke the activation process manually, without having to wait for another message to arrive (the arrival of a new message being one of the conditions that can fire the activation process)?

    The "trick" is to do manually what the activation process does: send a system message to the queue in charge of handling External Activation messages:

        declare @conversationHandle uniqueidentifier;
        declare @n xml = N'
        <EVENT_INSTANCE>
          <EventType>QUEUE_ACTIVATION</EventType>
          <PostTime>' + CONVERT(CHAR(24), GETDATE(), 126) + '</PostTime>
          <SPID>' + CAST(@@SPID AS VARCHAR(9)) + '</SPID>
          <ServerName>[your_server_name]</ServerName>
          <LoginName>[your_login_name]</LoginName>
          <UserName>[your_user_name]</UserName>
          <DatabaseName>[your_database_name]</DatabaseName>
          <SchemaName>[your_queue_schema_name]</SchemaName>
          <ObjectName>[your_queue_name]</ObjectName>
          <ObjectType>QUEUE</ObjectType>
        </EVENT_INSTANCE>';

        begin dialog conversation @conversationHandle
            from service [<your_initiator_service_name>]
            to service   '<your_event_notification_service>'
            on contract  [http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]
            with encryption = off, lifetime = 6000;

        send on conversation @conversationHandle
            message type [http://schemas.microsoft.com/SQL/Notifications/EventNotification] (@n);

        end conversation @conversationHandle;

    That's it! Put the code in a Stored Procedure and you can add to your application a button that says "Force Queue Processing" (or something similar) in order to start the activation process whenever you need it (which should not occur too frequently, but it may happen).

    PS: I know that the "fire-and-forget" technique (ending the conversation without waiting for an answer) is not a best practice, but in this case I don't see how it can hurt, so I decided to stay very close to the KISS principle.
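    For context, the '<your_event_notification_service>' above is the notification service that the External Activator watches. A minimal sketch of that plumbing is shown below; the queue, service and target-queue names are illustrative rather than taken from the post, and the External Activator's configuration file must point at the same notification service:

        -- Queue and service that receive QUEUE_ACTIVATION event notifications (names are illustrative).
        CREATE QUEUE dbo.ExternalActivatorQueue;

        CREATE SERVICE ExternalActivatorService
            ON QUEUE dbo.ExternalActivatorQueue
            ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);

        -- Route QUEUE_ACTIVATION events from the application's target queue
        -- (dbo.InvoiceQueue here) to that service.
        CREATE EVENT NOTIFICATION InvoiceQueueActivation
            ON QUEUE dbo.InvoiceQueue
            FOR QUEUE_ACTIVATION
            TO SERVICE 'ExternalActivatorService', 'current database';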

    Read the article

  • The Case of the Extra Page: Rendering Reporting Services as PDF

    - by smisner
    I had to troubleshoot a problem with a mysterious extra page appearing in a PDF this week. My first thought was that it was likely caused by one of the most common problems people encounter when developing reports that eventually get rendered as PDF: blank pages getting inserted into the PDF document. The cause of the blank pages is usually related to sizing. You can learn more at Understanding Pagination in Reporting Services in Books Online.

    When designing a report, you have to be really careful with the layout of items in the body. As you move items around, the body will expand to accommodate the space you're using, and although you might eventually tighten everything back up again, the body doesn't automatically collapse. One of my favorite things to do in Reporting Services 2005 - which I dubbed the "vacu-pack" method - was to just erase the size property of the Body and let it auto-calculate the new size, squeezing out all the extra space. Alas, that method no longer works beginning with Reporting Services 2008. Even when you make sure the body size is as small as possible (with no unnecessary extra space along the top, bottom, left, or right side of the body), it's important to calculate the body size plus header plus footer plus the margins, and ensure that the calculated height and width do not exceed the report's height and width (shown as the page in the illustration above). This won't matter if users always render reports online, but they'll get extra pages in a PDF document if the report's height and width are smaller than the calculated space.

    Beginning the Investigation
    In the situation that I was troubleshooting, I checked the properties:

    Item          Property             Value
    Body          Height               6.25in
                  Width                10.5in
    Page Header   Height               1in
    Page Footer   Height               0.25in
    Report        Left Margin          0.1in
                  Right Margin         0.1in
                  Top Margin           0.05in
                  Bottom Margin        0.05in
                  Page Size - Height   8.5in
                  Page Size - Width    11in

    So I calculated the total width using Body Width + Left Margin + Right Margin and came up with a value of 10.7 inches. Then I calculated the total height using Body Height + Page Header Height + Page Footer Height + Top Margin + Bottom Margin and got 7.6 inches. Page sizing couldn't be the reason for the extra page in my report, because 10.7 inches is smaller than the report's width of 11 inches, and 7.6 inches is smaller than the report's height of 8.5 inches. I had to look elsewhere to find the culprit.

    Conducting the Third Degree
    My next thought was to focus on the rendering size of the items in the report. I've adapted my problem to use the Adventure Works database. At the top of the report are two charts, and below each chart is a rectangle that contains a table. In the real-life scenario, there were some graphics present as a background for the tables, which fit within the rectangles that were about 3 inches high, so the visual space of the rectangles matched the visual space of the charts - also about 3 inches high. But there was also a huge amount of white space at the bottom of the page and, as I mentioned at the beginning of this post, a second page which was blank except for the footer that appeared at the bottom. Placing a textbox beneath the rectangles to see if it would appear on the first page resulted in the textbox appearing on the second page. For some reason, the rectangles wanted a buffer zone beneath them. What's going on?

    Taking the Suspect into Custody
    My next step was to see what was really going on with the rectangle.
The graphic appeared to be correctly sized, but the behavior in the report indicated the rectangle was growing. So I added a border to the rectangle to see what it was doing. When I added borders, I could see that the size of each rectangle was growing to accommodate the table it contains. The rectangle on the right is slightly larger than the one on the left because the table on the right contains an extra row. The rectangle is trying to preserve the whitespace that appears in the layout, as shown below. Closing the Case Now that I knew what the problem was, what could I do about it? Because of the graphic in the rectangle (not shown), I couldn't eliminate the use of the rectangles and just show the tables. But fortunately, there is a report property that comes to the rescue: ConsumeContainerWhitespace (accessible only in the Properties window). I set the value of this property to True. Problem solved. Now the rectangles remain fixed at the configured size and don't grow vertically to preserve the whitespace. Case closed.

    Read the article

  • GlassFish Clustering with DCOM on Windows

    - by ByronNevins
    DCOM - Distributed COM, a Microsoft protocol for communicating with Windows machines.

    Why use DCOM? In GlassFish 3.1, SSH is used as the standard way to run commands on remote nodes for clustering. It is very difficult for users to get SSH configured properly on Windows. SSH does not come with Windows, so we have to depend on third-party tools, and the user is then forced to install and configure these tools - which can be tricky. DCOM is available on all supported platforms; it is built in to Windows. The idea is to use DCOM to communicate with remote Windows nodes. This has the huge advantage that the user has to do minimal, if any, configuration on the Windows nodes.

    Implementation Highlights
    Two open source libraries have been added to GlassFish:
    Jcifs – a SAMBA implementation in Java
    J-Interop – a Java implementation for making DCOM calls to remote Windows computers
    Note that any supported platform can use DCOM to work with Windows nodes - not just Windows. E.g. you can have a Linux DAS work with Windows remote instances. All existing SSH commands now have a corresponding DCOM command – except for setup-ssh, which isn't needed for DCOM. validate-dcom is an all-new command.

    New DCOM Commands
    create-node-dcom
    delete-node-dcom
    install-node-dcom
    list-nodes-dcom
    ping-node-dcom
    uninstall-node-dcom
    update-node-dcom
    validate-dcom
    setup-local-dcom (this is only available via Update Center for GlassFish 3.1.2)
    These commands are in place in the trunk (4.0) and in the branch (3.1.2).

    Windows Configuration Challenges
    There are an infinite number of possible configurations of Windows if you look at it as a combination of main release, service pack, special drivers, software, configuration, etc. Later versions of Windows err on the side of tightening security by default. This means that the Windows host may need to have configuration changes made, and these configuration changes mostly need to be made by the user. setup-local-dcom will assist you in making the required changes to the Windows Registry. See the reference blogs for details.

    The validate-dcom Command
    validate-dcom is a crucial command. It should be run before any other commands. If it does not run successfully then there is no point in running other commands. The validate-dcom command must be used from a DAS machine to test a different Windows machine. If validate-dcom runs successfully you can be confident that all the DCOM commands will work. Conversely, the opposite is also true: if validate-dcom fails, then no DCOM commands will work.

    What validate-dcom does:
    Verifies that the remote host is not the local machine
    Resolves the remote host name
    Checks that the remote DCOM port is being listened on (135, 139)
    Checks that the remote host's File Sharing is enabled (port 445)
    Copies a file (a script) to the remote host to verify that SAMBA is working and authorization is correct
    Runs the script that it copied on the fly to the remote host

    Tips and Tricks
    The bread-and-butter commands that use DCOM are existing commands like create-instance, start-instance, etc. All of the commands that have dcom in their name are for dealing with the actual nodes. The way the software works is to call asadmin.bat on the remote machine and run a command. This means that you can track these commands easily on the remote machine with the usual tools, e.g. using AS_LOGFILE, looking at log files, etc. It's easy to attach a debugger to the remote asadmin process, "just in time", if necessary.

    How to debug the remote commands: edit the asadmin.bat file that is in the glassfish/bin folder (use glassfish/lib/nadmin.bat in GlassFish 4.0+) and add these options to the java call: -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1234. Now if you run, say, start-instance on the DAS, you can attach your debugger, at your leisure, to the remote machine's port 1234. It will be running start-local-instance and patiently waiting for you to attach.
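    As a rough sketch of the typical flow from the DAS, something like the following is what the commands above are used for. The host name, node name, instance name and option names here are illustrative from memory, not from the post; check asadmin help validate-dcom and the related commands for the exact syntax on your build:

        asadmin validate-dcom --windowsuser Administrator winhost1
        asadmin create-node-dcom --nodehost winhost1 --windowsuser Administrator --installdir C:\glassfish3 winnode1
        asadmin create-instance --node winnode1 instance1
        asadmin start-instance instance1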

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox. It allows you to read 2005 and 2008 Profiler traces stored as .trc files and bring them into the Data Flow. From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection; this just uses the trace file.

    Example Usages
    Cache warming for SQL Server Analysis Services
    Reading the flight recorder
    Find out the longest running queries on a server
    Analyze statements for CPU, memory by user or some other criteria you choose

    Properties
    The Trace File Source adapter has two properties, both of which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported, for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio.

    Property    Type         Description
    AccessMode  Enumeration  Determines how the Filename property is interpreted. The values available are: Direct Input, Variable.
    Filename    String       Holds the path of the trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property.

    Trace Column Definition
    Hopefully the majority of you can skip this section entirely, but if you encounter some problems processing a trace file this may explain it and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately the API methods that expose the schema of a trace file have known issues and are unreliable; put simply, the data often differs from what was specified. To overcome these limitations the component uses some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example, SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string.
    TraceDataColumnsSQL.xml - SQL Server Database Engine Trace Columns
    TraceDataColumnsAS.xml - SQL Server Analysis Services Trace Columns
    The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g.
    "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml"
    "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml"
    If at runtime the component encounters a type conversion or sizing error, it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files, we have implemented specific exception traps to direct you to the files to enable you to fix any further issues due to different usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below.
    Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml".

    Installation
    The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations.

    Please note that the Microsoft trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available, and that the way you execute your package is set up to use them; please see the help topic 64-bit Considerations for Integration Services for more details.

    Downloads
    Trace Sources for SQL Server 2005
    Trace Sources for SQL Server 2008

    Version History
    SQL Server 2008
    Version 2.0.0.382 - SQL Server 2008 public release. (9 Apr 2009)
    SQL Server 2005
    Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008)

    Read the article

  • A new SQL, a new Analysis Services, a new Workshop! #ssas #sql2012

    - by Marco Russo (SQLBI)
    One week ago Microsoft SQL Server 2012 finally debuted with a virtual launch event, and you can find many intro sessions there (20 minutes each). There is a lot of new content available if you want to learn more about SQL 2012, and in this blog post I'd like to provide a few links to sessions, documents, bits and courses that are available now or very soon.

    First of all, the release of Analysis Services 2012 has finally brought us PowerPivot 2012 (many of us called it PowerPivot v2 before this official name) and also the new Data Mining Add-in for Microsoft Office 2010, now available also for Excel 64-bit! And, of course, don't miss the Microsoft SQL Server 2012 Feature Pack; there are a lot of upgrades for both DBAs and developers. I just discovered there is a new LocalDB version of SQL Express that can run in user mode without any setup. Is this the end of SQL CE?

    But now, back to Analysis Services: if you want some tutorials on Tabular, the Microsoft Virtual Academy has a whole track dedicated to Analysis Services 2012, but you will probably be interested also in the one about Reporting Services 2012.

    If you think that virtual is good but it's not enough, there are plenty of conferences in the coming months – these are just those where Alberto and I will deliver some SSAS Tabular presentations:
    SQLBits X, London, March 29-31, 2012: if you are in London or want a good reason to go, this is the most important SQL Server event in Europe this year, no doubt about it. And not only because of the high number of attendees, but also because there is an impressive number of speakers (excluding me, of course) coming from all over the world. This is an event second only to the PASS Summit in Seattle, so there are no good reasons not to attend it.
    Microsoft SQL Server & Business Intelligence Conference 2012, Milan, March 28-29, 2012: this is an Italian conference, so the language might be a barrier, but many of us also speak English and the food is good! Just a few seats still available.
    TechEd North America, Orlando, June 11-14, 2012: you know, this is a big event and it contains everything – if you want to spend a whole day learning the SSAS Tabular model with me and Alberto, don't miss our pre-conference day "Using BISM Tabular in Microsoft SQL Server Analysis Services 2012" (be careful, it is on June 10, a nice study-Sunday!).
    TechEd Europe, Amsterdam, June 26-29, 2012: the European version of TechEd provides almost the same content and you don't have to go overseas. We also run the same pre-conference day "Using BISM Tabular in Microsoft SQL Server Analysis Services 2012" (in this case it is on June 25, a regular Monday).

    Alberto and I will also speak at some user group meetings around Europe during… well, we're going to travel a lot in the next months. In fact, if you want to get complete training on SSAS Tabular, you should spend two days with us in one of our SSAS Tabular Workshops! We prepared a 2-day seminar, a very intense one, that starts from simple tabular modeling and covers architecture, DAX, querying, advanced modeling, security, deployment, optimization, monitoring, and relationships with PowerPivot and Multidimensional… Really, there is a lot of stuff here!

    We announced the first dates in Europe and also an online edition optimized for America's time zone:
    Apr 16-17, 2012 – Amsterdam, Netherlands
    Apr 26-27, 2012 – Copenhagen, Denmark
    May 7-8, 2012 – Online for America's time zone
    May 14-15, 2012 – Brussels, Belgium
    May 21-22, 2012 – Oslo, Norway
    May 24-25, 2012 – Stockholm, Sweden
    May 28-29, 2012 – London, United Kingdom
    May 31-Jun 1, 2012 – Milan, Italy (Italian language)

    Chris Webb will also join us in these workshops, and for every date you can find who the speaker is on the web site. The course is based on our upcoming book, almost 600 pages (!) about SSAS Tabular, an incredible effort that will be available very soon in preview (Rough Cuts from O'Reilly) and will be on the shelf in May. I will provide a link to order it as soon as we have one!

    And if you think that this is not enough… you're right! Do you know what is the only thing you can do to optimize your Tabular model? Optimize your DAX code. Learning DAX is easy; mastering DAX requires some knowledge… and our DAX Advanced Workshop will provide exactly the required content. Public classes will be available later this year; for now we just deliver it on demand.

    Read the article

  • Visual Studio Load Testing using Windows Azure

    - by Tarun Arora
    In my opinion the biggest adoption barrier in performance testing on smaller projects is not the tooling but the high infrastructure and administration cost that comes with this phase of testing. If only a reusable solution were possible and infrastructure management weren't as expensive, adoption would certainly spike. It certainly is possible if you bring Visual Studio and Windows Azure into the equation. It is possible to run your test rig in the cloud without getting tangled in SCVMM or Lab Management. All you need is an active Azure subscription, a Windows Azure Connect endpoint-enabled developer workstation running Visual Studio Ultimate on premise, and Windows Azure Connect endpoint-enabled worker roles on Azure compute instances set up to run as test controllers and test agents. My test rig is running SQL Server 2012 and Visual Studio 2012 RC agents. The beauty is that the solution is reusable: you can open the Azure project, change the subscription and certificate, click publish and *BOOM*, in less than 15 minutes you could have your own test rig running in the cloud.

    In this blog post I intend to show you how you can use the power of Windows Azure to effectively abstract away the administration cost of infrastructure management and lower the total cost of load and performance testing. As a bonus, I will share a reusable solution that you can use to automate test rig creation for both VS 2010 agents as well as VS 2012 agents.

    Introduction
    The slide show below should help you understand the high-level details of what we are trying to achieve: Leveraging Azure for Performance Testing (slides from Avanade).

    Scenario 1 – Running a Test Rig in Windows Azure
    To start off with the basics, in the first scenario I plan to discuss how to:
    - Automate deployment & configuration of Windows Azure worker roles for the Test Controller and Test Agent
    - Automate deployment & configuration of the SQL database on the Test Controller worker role
    - Scale Test Agents on demand
    - Create a Web Performance Test and a simple Load Test
    - Manage Test Controllers right from the Visual Studio on-premise developer workstation
    - View the results of the Load Test
    - Clean up
    - Have the above work in the shape of a reusable solution for both VS 2010 and VS 2012 test rigs

    Scenario 2 – The Scaled-Out Test Rig and Sharing Data Using SQL Azure
    A scaled-out version of this implementation would involve running multiple test rigs in the cloud. In this scenario I will show you how to sync the load test database from these distributed test rigs into one SQL Azure database using Azure Sync. The selling point for this scenario is being able to collate the load test efforts from across the organization into one data store.
    - Deploy multiple test rigs using the reusable solution from scenario 1
    - Set up and configure Windows Azure Sync
    - Test the SQL Azure load test result database created as a result of Windows Azure Sync
    - Clean up
    - Have the above work in the shape of a reusable solution for both VS 2010 and VS 2012 test rigs

    The Ingredients
    Though with an active MSDN Ultimate subscription you would already have access to everything and more, you will essentially need the below to try out the scenarios:
    1. Windows Azure subscription
    2. Windows Azure Storage – blob storage
    3. Windows Azure Compute – worker role
    4. SQL Azure database
    5. SQL Data Sync
    6. Windows Azure Connect – endpoints
    7. SQL 2012 Express or SQL 2008 R2 Express
    8. Visual Studio All Agents 2012 or Visual Studio All Agents 2010
    9. A developer workstation set up with Visual Studio 2012 Ultimate or Visual Studio 2010 Ultimate
    10. Visual Studio Load Test Unlimited Virtual User Pack

    Walkthrough
    To set up the test rig in the cloud, the test controller, test agent and SQL Express installers need to be available when the worker role setup starts; the easiest and most efficient way is to pre-upload the required software into Windows Azure blob storage. SQL Express, the test controller and the test agent expose various switches which we can take advantage of, including the quiet install switch. Once all three have been installed, the test controller needs to be registered with the test agents and the SQL database needs to be associated with the test controller. By enabling Windows Azure Connect on the machines in the cloud and the developer workstation on premise, we successfully create a virtual network amongst the machines enabling two-way communication. All of the above can be done programmatically; let's see step by step how…

    Scenario 1
    Video walkthrough – Leveraging Windows Azure for Performance Testing
    Scenario 2
    Work in progress, watch this space for more…

    Solution
    If you are still reading and are interested in the solution, drop me an email with your Windows Live ID. I'll add you to my TFS Preview project, which has a reusable solution for both VS 2010 and VS 2012 test rigs, as well as guidance and demo performance tests.

    Conclusion
    Other posts and resources are available here. Possibilities…. Endless!

    Read the article

  • Adding a DLL to the GAC in Windows 7

    - by Jim Giercyk
    I recently created a DLL and I wanted to reference it from a project I was developing in Visual Studio. In previous versions of Windows, doing so was simply a matter of dropping the DLL file in the C:\Windows\assembly folder. That would add the DLL to the Global Assembly Cache (GAC) and make it accessible in Visual Studio. However, as is often the case, Windows 7 is different. Even if you have Administrator privileges on your machine, you still do not have permission to drop a file in the assembly folder. Undaunted, I thought about using the old DOS command-line utility gacutil.exe. Microsoft developed the tool as part of the .NET Framework, and it is available in the Windows SDK Framework Tools. If you have never used gacutil.exe before, you can find out everything you ever wanted to know but were afraid to ask here: http://msdn.microsoft.com/en-us/library/ex0ss12c(v=vs.80).aspx . Unfortunately, if you do not have the Windows SDK loaded on your development machine, you will need to install it to use gacutil, but it is relatively quick and painless, and the framework tools are very useful. Look here for the latest SDK: http://www.microsoft.com/download/en/search.aspx?q=Windows%20SDK .

    After installing the SDK, I tried installing my DLL to the GAC by running gacutil from a DOS command line: That's odd. Microsoft is shipping a tool that cannot be executed even with Administrator rights? Let me stop here and say that I am by no means a Windows security expert, so I actually did contact my system administrators, and they were not sure how to fix the problem… there must be a super-administrator access level, but it isn't available to your average developer in my company. The solution outlined here works within the boundaries of a normal Windows Administrator.

    So, now the hacker in me bubbles to the surface. What if I were to create a simple BAT file containing the gacutil command? It's so crazy it just might work! Ugh! I was starting to think this would never work, but then I realized that simply executing a batch program did not change my level of access. Typically in Windows 7, you would select the "Run As Administrator" option to temporarily act as an administrator for the purpose of executing a process. However, that option is not available for BAT files run from the command line.

    SOLUTION: Create a desktop shortcut to execute the BAT file, which in turn will execute the command-line utility… are you still with me? I created a shortcut and pointed it to my batch file. Theoretically, all I need to do now is right-click on the shortcut and select "Run As Administrator" and we're good, right? Well, kinda. If you notice the syntax of my BAT file, the name of the DLL is passed in as a parameter. Therefore, I either have to hard-code the file name in the BAT program (YUCK!!), or I can leave the parameter and drag the DLL file to the shortcut and drop it. Sweet, drag-and-drop works for me… but if I use the drag-and-drop method, there is no way for me to right-click and select "Run As Administrator". That is not a problem… I simply have to adjust the properties of the shortcut I created and I am in business. I right-clicked on the shortcut and selected "Properties". Under the "Shortcut" tab there is an "Advanced" button… I clicked it. All I needed to do was check the "Run As Administrator" box.

    In summary, what I have done is create a BAT file to execute a command-line utility, gacutil.exe. Then, rather than executing the BAT file from the command line, I created a desktop shortcut to run it and set the shortcut properties to "Run As Administrator". This will effectively mean I am executing the command-line utility with Administrator privileges. Pretty sneaky. Now, when I drag the DLL file over to the shortcut, it starts the BAT file and adds the DLL to the assembly cache. I created another BAT file to remove a DLL from the GAC in case the need should arise; the code for both is shown below. Give it a try. I can't imagine why updating the GAC has been made into such a chore in Windows 7. Hopefully there is a service pack in the works that will give developers the functionality they had in Windows XP, but in the meantime, this workaround is extremely useful.
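    For reference, here is a minimal sketch of what the two BAT files described above might look like. The gacutil.exe path varies with the SDK version you installed, and the file names are my own, so treat both as illustrative rather than exact:

        rem AddToGac.bat - installs the assembly whose path is passed as the first argument (%1) into the GAC
        "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\gacutil.exe" /i %1
        pause

        rem RemoveFromGac.bat - uninstalls an assembly from the GAC by its assembly name (e.g. MyLibrary)
        "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\gacutil.exe" /u %1
        pause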

    Read the article

  • InnoDB Compression Improvements in MySQL 5.6

    - by Inaam Rana
    MySQL 5.6 comes with significant improvements to the compression support inside InnoDB. The enhancements that we'll talk about in this piece are also a good example of community contributions: the work on these was conceived, implemented and contributed by the engineers at Facebook. Before we plunge into the details, let us familiarize ourselves with some of the key concepts surrounding InnoDB compression. In InnoDB, compressed pages are a fixed size; supported sizes are 1, 2, 4, 8 and 16K. The compressed page size is specified at table creation time. InnoDB uses zlib for compression. The InnoDB buffer pool will attempt to cache compressed pages like normal pages. However, whenever a page is actively used by a transaction, we'll always have the uncompressed version of the page as well; that is, we can have a page in the buffer pool in compressed-only form, or in a state where we have both the compressed page and the uncompressed version, but we'll never have a page in uncompressed-only form. On disk we'll always have only the compressed page. When both compressed and uncompressed images are present in the buffer pool they are always kept in sync, i.e. changes are applied to both atomically. Recompression happens when changes are made to the compressed data. In order to minimize recompressions, InnoDB maintains a modification log within a compressed page. This is the extra space available in the page after compression, and it is used to log modifications to the compressed data, thus avoiding recompressions. DELETE (and ROLLBACK of DELETE) and purge can be performed without recompressing the page. This is because the delete-mark bit and the system fields DB_TRX_ID and DB_ROLL_PTR are stored in uncompressed format on the compressed page. A record can be purged by shuffling entries in the compressed page directory. This can also be useful for updates of indexed columns, because an UPDATE of a key is mapped to INSERT+DELETE+purge. A compression failure happens when we attempt to recompress a page and it does not fit in the fixed size. In such a case, we first try to reorganize the page and attempt to recompress, and if that fails as well we split the page into two and recompress both pages. Now let's talk about the three major improvements that we made in MySQL 5.6.

Logging of Compressed Page Images: InnoDB used to log the entire compressed data on the page to the redo logs when recompression happens. This was an extra safety measure to guard against the rare case where an attempt is made to do recovery using a different zlib version from the one that was used before the crash. Because recovery is a page-level operation in InnoDB, we have to be sure that all recompress attempts succeed without causing a B-tree page split. However, writing entire compressed data images to the redo log files not only makes the operation heavy duty but can also adversely affect flushing activity. This happens because redo space is used in a circular fashion; when we generate much more redo than normal we fill up the space much more quickly, and in order to reuse the redo space we have to flush the corresponding dirty pages from the buffer pool. Starting with MySQL 5.6 there is a new global configuration parameter, innodb_log_compressed_pages. The default value is true, which is the same as the previous behavior. If you are sure that you are not going to attempt to recover from a crash using a different version of zlib, then you should set this parameter to false. This is a dynamic parameter.

Compression Level: You can now set the compression level that zlib should use to compress the data. The global parameter is innodb_compression_level; the default value is 6 (the zlib default) and allowed values are 1 to 9. Again the parameter is dynamic, i.e. you can change it on the fly.

Dynamic Padding to Reduce Compression Failures: Compression failures are expensive in terms of CPU. We go through the hoops of recompress, failure, reorganize, recompress, failure and finally page split. At the same time, how often we encounter compression failures depends largely on the compressibility of the data. In MySQL 5.6, courtesy of the Facebook engineers, we have an adaptive algorithm based on per-index statistics that we gather about compression operations. The idea is that if a certain index/table is experiencing too many compression failures, then we should try to pack the 16K uncompressed version of the page less densely, i.e. we let some space in the 16K page go unused in the hope that the recompression won't end up in a failure. In other words, we dynamically keep adding 'pad' to the 16K page until compression failures stay within an agreeable range. It works the other way as well: we'll keep removing the pad if the failure rate is fairly low. To tune the padding effort, two configuration variables are exposed. innodb_compression_failure_threshold_pct: default 5, range 0 - 100, dynamic; the percentage of compression operations that must fail before we start padding. The value 0 has the special meaning of disabling padding. innodb_compression_pad_pct_max: default 50, range 0 - 75, dynamic; the maximum percentage of the uncompressed data page that can be reserved as pad.
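As a quick illustrative sketch (the table and column names here are made up; the parameters are the ones described above, and ROW_FORMAT=COMPRESSED additionally requires innodb_file_per_table and the Barracuda file format):

    -- the compressed page size is chosen per table, at creation time
    CREATE TABLE t_example (id INT PRIMARY KEY, payload VARCHAR(255))
        ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

    -- the new MySQL 5.6 parameters are global and dynamic
    SET GLOBAL innodb_compression_level = 9;                  -- 1 to 9, default 6
    SET GLOBAL innodb_log_compressed_pages = OFF;             -- only if the zlib version will not change across recovery
    SET GLOBAL innodb_compression_failure_threshold_pct = 10; -- start padding after 10% of compressions fail
    SET GLOBAL innodb_compression_pad_pct_max = 50;           -- reserve at most 50% of the page as pad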

    Read the article

  • Unable to fix broken packages with sudo apt-get install -f

    - by Bob
    Here's my result, of sudo apt-get install -f. i have Ran it twice and got negative result. I believe there is an error at "error in Version string '0:3.6.1-dates for language English Translation data updates for all supported packages for: English" This same statement "error in Version string, caused me three days of attempting to download version 12.04. There is a bug report concerning the quoted text as well. Is there anyway to download the version without the language packs, why would I corrupt version 11.10? Also, when attempting to download Synaptic using sudo apt-get install synaptic, I get the same error message. Again I point out the initial download problems and the same error message receipt. Thanks b0b@b0b-IC780M-A:~$ sudo apt-get install -f [sudo] password for b0b: Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 298 not upgraded. b0b@b0b-IC780M-A:~$ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 298 not upgraded. b0b@b0b-IC780M-A:~$ sudo apt-get upgrade install Reading package lists... Done Building dependency tree Reading state information... Done The following packages have been kept back: linux-headers-generic software-center The following packages will be upgraded: accountsservice acpi-support acpid aisleriot alsa-utils app-install-data-partner appmenu-qt apport apport-gtk apt-transport-https apt-utils aptdaemon aptdaemon-data apturl apturl-common banshee banshee-extension-soundmenu banshee-extension-ubuntuonemusicstore baobab bind9-host binutils bluez-alsa bluez-cups bluez-gstreamer brasero brasero-cdrkit brasero-common checkbox checkbox-gtk command-not-found command-not-found-data compiz compiz-core compiz-gnome compiz-plugins-default compiz-plugins-main-default cups cups-bsd cups-client cups-common cups-ppdc deja-dup desktop-file-utils dnsutils empathy empathy-common eog evince evince-common evolution-data-server evolution-data-server-common file-roller firefox firefox-globalmenu firefox-gnome-support gbrainy gcalctool gconf2 gconf2-common gedit gedit-common ghostscript ghostscript-cups ghostscript-x gir1.2-atspi-2.0 gir1.2-gconf-2.0 gir1.2-gnomebluetooth-1.0 gir1.2-gtk-3.0 gir1.2-gtksource-3.0 gir1.2-totem-1.0 gir1.2-unity-4.0 gir1.2-webkit-3.0 gnome-accessibility-themes gnome-bluetooth gnome-control-center gnome-control-center-data gnome-desktop3-data gnome-font-viewer gnome-games-common gnome-icon-theme gnome-mahjongg gnome-online-accounts gnome-orca gnome-power-manager gnome-screenshot gnome-search-tool gnome-session gnome-session-bin gnome-session-canberra gnome-session-common gnome-settings-daemon gnome-sudoku gnome-system-log gnome-system-monitor gnome-utils-common gnomine gstreamer0.10-gconf gstreamer0.10-plugins-good gstreamer0.10-pulseaudio gvfs gvfs-backends gvfs-bin gvfs-fuse gwibber gwibber-service gwibber-service-facebook gwibber-service-identica gwibber-service-twitter hpijs hplip hplip-cups hplip-data indicator-datetime indicator-session indicator-sound isc-dhcp-client isc-dhcp-common jockey-common jockey-gtk language-selector-common language-selector-gnome libaccountsservice0 libapt-inst1.3 libarchive1 libasound2-plugins libatk-adaptor libbind9-60 libbrasero-media3-1 libcamel-1.2-29 libcanberra-gtk-module libcanberra-gtk0 libcanberra-gtk3-0 libcanberra-gtk3-module libcanberra-pulse libcanberra0 libdecoration0 libdns69 libebackend-1.2-1 libebook1.2-12 
libecal1.2-10 libedata-book-1.2-11 libedata-cal-1.2-13 libedataserver1.2-15 libedataserverui-3.0-1 libevince3-3 libgconf2-4 libgnome-bluetooth8 libgnome-control-center1 libgnome-desktop-3-2 libgoa-1.0-0 libgrip0 libgs9 libgs9-common libgtk-3-bin libgtksourceview-3.0-0 libgtksourceview-3.0-common libgweather-3-0 libgweather-common libgwibber-gtk2 libgwibber2 libhpmud0 libimobiledevice2 libisc62 libisccc60 libisccfg62 libjasper1 liblightdm-gobject-1-0 liblwres60 libmetacity-private0 libmission-control-plugins0 libmono-zeroconf1.0-cil libnautilus-extension1 libnm-glib-vpn1 libnm-glib4 libnm-util2 libnotify0.4-cil libnux-1.0-0 libnux-1.0-common libpam-gnome-keyring libreoffice-emailmerge libreoffice-style-human libsane-hpaio libsmbclient libsnmp-base libsnmp15 libsyncdaemon-1.0-1 libt1-5 libtotem0 libubuntuone-1.0-1 libubuntuone1.0-cil libunity-2d-private0 libunity-core-4.0-4 libunity6 libusbmuxd1 libwbclient0 libwebkitgtk-1.0-0 libwebkitgtk-1.0-common libwebkitgtk-3.0-0 libwebkitgtk-3.0-common libxml2 linux-generic linux-image-generic metacity metacity-common mobile-broadband-provider-info modemmanager mousetweaks multiarch-support nautilus nautilus-data nautilus-sendto-empathy network-manager nux-tools onboard openssl pulseaudio pulseaudio-esound-compat pulseaudio-module-bluetooth pulseaudio-module-gconf pulseaudio-module-x11 pulseaudio-utils python-apport python-aptdaemon python-aptdaemon-gtk python-aptdaemon.gtk3widgets python-aptdaemon.gtkwidgets python-brlapi python-cups python-cupshelpers python-gobject-cairo python-httplib2 python-launchpadlib python-libxml2 python-pam python-papyon python-pkg-resources python-problem-report python-pyatspi2 python-software-properties python-ubuntuone-client python-ubuntuone-storageprotocol samba-common samba-common-bin seahorse shotwell simple-scan smbclient sni-qt software-properties-common software-properties-gtk sudo system-config-printer-common system-config-printer-gnome system-config-printer-udev telepathy-indicator telepathy-mission-control-5 thunderbird thunderbird-globalmenu thunderbird-gnome-support tomboy totem totem-common totem-mozilla totem-plugins ttf-opensymbol ubuntu-desktop ubuntu-minimal ubuntu-standard ubuntuone-client ubuntuone-client-gnome ubuntuone-couch unity unity-2d unity-2d-launcher unity-2d-panel unity-2d-places unity-2d-spread unity-common unity-lens-applications unity-services update-manager update-manager-core update-notifier update-notifier-common usbmuxd vim-common vim-tiny vinagre vino xorg xserver-xorg xserver-xorg-input-all xserver-xorg-video-all xserver-xorg-video-intel xserver-xorg-video-openchrome xul-ext-ubufox 296 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. Need to get 0 B/159 MB of archives. After this operation, 10.1 MB of additional disk space will be used. Do you want to continue [Y/n]? y Extracting templates from packages: 100% Preconfiguring packages ... dpkg: error: parsing file '/var/lib/dpkg/available' near line 4131 package 'python-zope.interface': error in Version string '0:3.6.1-dates for language English Translation data updates for all supported packages for: English . language-pack-en-base provides the bulk of translation data and is updated only seldom. This package provides frequent translation updates.': version string has embedded spaces E: Sub-process /usr/bin/dpkg returned an error code (2) b0b@b0b-IC780M-A:~$

    Read the article

  • Oracle Fusion Middleware gives you Choice and Portability for Public and Private Cloud

    - by Michelle Kimihira
    Author: Margaret Lee, Senior Director, Product Management, Oracle Fusion Middleware Cloud Computing allows customers to quickly develop and deploy applications in a shared environment.  The environment can span across hardware (IaaS), foundation layer software (PaaS), and end-user software (SaaS). Cloud Computing provides compelling benefits in terms of business agility and IT cost savings.  However, with complex, existing heterogeneous architectures, and concerns for security and manageability, enterprises are challenged to define their Cloud strategy.  For most enterprises, the solution is a hybrid of private and public cloud.  Fusion Middleware supports customers' Cloud requirements through choice and portability. Fusion Middleware supports a variety of cloud development and deployment models: Oracle [Public] Cloud; customer private cloud; a hybrid of these two; and the traditional dedicated, on-premise model. Customers can develop applications in any of these models and deploy them in another, providing the flexibility and portability they need. Oracle Cloud is a public cloud offering.  Within Oracle Cloud, Fusion Middleware provides two key offerings: the Developer Cloud Service and the Java Cloud Service. Developer Cloud Service Simplify Development: Automated provisioned environment; pre-configured and integrated; web-based administration Deploy Automatically: Fully integrated with Oracle Cloud for Java deployment; workflow ensures build & test Collaborate & Manage: Fits any size team; integrated team source repository; continuous integration; task/defect tracking Integrated with all major IDEs: Oracle JDeveloper; NetBeans; Eclipse Java Cloud Service Java Cloud Service provides a flexible Java deployment environment for departmental applications and development, staging, QA, training, and demo environments.  It also supports customization deployments for SaaS-based Fusion Applications customers.  Some key features of Java Cloud Service include: WebLogic Server on Exalogic, secure, highly available infrastructure Database Service & IDE Integration Open, Standard-based Deploy Web Apps, Web Services, REST Services Fully managed and supported by Oracle For more information, please visit Oracle Cloud, Oracle Cloud Java Service and Oracle Cloud Developer Service. If your enterprise prefers a private cloud, for reasons such as security, control, manageability, and complex integration that prevent your applications from being deployed on a public cloud, Fusion Middleware also provides you with the products and tools you need.  Sometimes called Private PaaS, private clouds have their predecessors in the shared-services arrangements many large companies have been building in the past decade.  The differences, however, are in the scope of the services and the depth of their capabilities.  In terms of vertical stack depth, private clouds not only provide hardware and software infrastructure to run your applications, they also provide services, such as integration and security, that your applications need.  Horizontally, private clouds provide monitoring, management, lifecycle, and charge-back capabilities out-of-box that shared-services platforms did not have before.
Oracle Fusion Middleware includes the complete stack of hardware and software for you to build private clouds: SOA Suite and BPM Suite to support systems integration and process flow between applications deployed on your private cloud and the rest of your organization; Identity and Access Management Suite to provide security, provisioning, and access services for applications deployed on your private cloud; WebLogic Server to run your applications; Enterprise Manager's Cloud Management pack to monitor, manage, and upgrade applications running on your private cloud; and Exalogic or optimized Oracle-Sun hardware to build out your private cloud. The most important key differentiator for Oracle's cloud solutions is portability between private and public clouds.  This is unique to Oracle because portability requires the vendor to have product depth and breadth in both public cloud services and private cloud product offerings.  Most public cloud vendors cannot provide the infrastructure and tools customers need to build their own private clouds.  In reverse, traditional software tools vendors typically do not have the product and expertise breadth to build out and offer a public cloud.  Oracle can.  It is important for customers that the products and technologies Oracle uses to build its public cloud are the same set that it sells to customers for them to build private clouds.  Fundamentally, that enables skills reuse, as well as application portability. For more information on Oracle PaaS offerings, please visit Oracle's product information page.    Resources Follow us on Twitter and Facebook Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • VirtualBox appliance for the Oracle Communications Service Delivery Platform (SDP) Products

    - by chlander
    It's been quite awhile since we last blogged. This blog is written by Leif Lourie, a Curriculum Developer for the Oracle Communications Service Delivery Platform (SDP) products. For the last 8 years, Leif has worked as a Curriculum Developer for many of the telecom-oriented products that Oracle offers. He has been working in the telecom industry for about 25 years and has also worked as a software developer, project manager, and solutions architect. He is currently working on courseware for an upcoming release for one of the Service Delivery Platform products. Thanks to Leif not only for this blog, but for making the VM described in the blog available. Cheryl Lander, Oracle Communications InfoDev Senior Director To be able to download, install and test a product within a day is many times very important for people that are doing the primary evaluation of a software product. If it takes longer, it will require a bigger effort, like a proof-of-concept project with many people involved. Of course, if the product is chosen for a more thorough test, it will probably happen anyway, but then maybe with focus on integration instead of product features. We have a long tradition of creating complex software that is easy to install and test and we have often been praised for the ease of getting our products up and running. One key for this has been that there has always been an installer for Windows, as well as for the production environments that usually are Unix and Linux. And, the windows installer has, in most cases, been released for developing and testing purposes. Lately, this has changed. Our products are very seldom released for the Windows platform, at all. And even the Linux versions are almost always released for 64-bit systems. This is creating problems for many of the people that want to try out our products, since few have access to a 64-bit Linux system of the right platform. Most of us are using a laptop with Windows or Mac OS. Some of us are using Linux or Solaris, but probably a non certified distribution for the product you want to test. My job, among other things, is to develop hands-on practices for our products. For me, it is crucial to have access to environments for installing and using our products. For this reason I have been using virtual machines for many years.I have a ready-made base system, with the necessary tools installed for all the products I create hands-on practices for. Whenever I start working on hands-on practices for a new product or a new version, I just copy the base system and start working with a clean slate. This saves me a lot of time! Now, I would like to start saving time for my favorite student: You! If you are using our products and regularly test new versions you might benefit from the virtual machine that is now available on Oracle Technology Network: The Virtual Machine for the Oracle Communications Service Delivery Platform (SDP) Products. This virtual machine contains an installation of the 64-bit version of Oracle Enterprise Linux, version 6. It also has Oracle Database Express Edition (XE), Oracle Java and Oracle Enterprise Pack for Eclipse installed. By using Oracle VM VirtualBox you may use Windows, OS X, Linux or Solaris on your laptop. VirtualBox can be installed on top of any of these platforms and give you the ability to run virtual machines in your laptop. 
After downloading and starting the virtual machine you will also need to download the installation files for the product you want to test; for example Oracle Communications Services Gatekeeper or Oracle Communications Online Mediation Controller. In some cases there are lessons and practices available for the products. The freely available courses are listed in Oracle Learning Library as a Collection of Oracle Communications Service Delivery Platform Courses. As time goes by, we will make this list collection bigger. Also, the goal is to update the virtual machine about one to two times per year. So you will always be able to get a well maintained virtual machine for the Service Delivery Platform products from us. We Value Your Feedback If you would like to suggest improvements or report issues on any of the product documentation, curriculum, or training produced by the Oracle Communications Information Development team, you can use these channels: Email [email protected]. Post a comment on this blog. Thanks for reading!

    Read the article

  • My Feelings About Microsoft Surface

    - by Valter Minute
    Advice: read the title carefully, I’m talking about “feelings” and not about advanced technical points proved in a scientific and objective way I still haven’t had a chance to play with a MS Surface tablet (I would love to, of course) and so my ideas just came from reading different articles on the net and MS official statements. Remember also that the MVP motto begins with “Independent” (“Independent Experts. Real World Answers.”) and this is just my humble opinion about a product and a technology. I know that, being an MS MVP you can be called an “MS-fanboy”, I don’t care, I hope that people can appreciate my opinion, even if it doesn’t match theirs. The “Surface” brand can be confusing for techies that knew the “original” surface concept but I think that will be a fresh new brand name for most of the people out there. But marketing department are here to confuse people… so I can understand this “recycle” of an existing name. So Microsoft is entering the hardware arena… for me this is good news. Microsoft developed some nice hardware in the past: the xbox, zune (even if the commercial success was quite limited) and, last but not least, the two arc mices (old and new model) that I use and appreciate. In the past Microsoft worked with OEMs and that model lead to good and bad things. Good thing (for microsoft, at least) is market domination by windows-based PCs that only in the last years has been reduced by the return of the Mac and tablets. Google is also moving in the hardware business with its acquisition of Motorola, and Apple leveraged his control of both the hardware and software sides to develop innovative products. Microsoft can scare OEMs and make them fly away from windows (but where?) or just lead the pack, showing how devices should be designed to compete in the market and bring back some of the innovation that disappeared from recent PC products (look at the shelves of your favorite electronics store and try to distinguish a laptop between the huge mass of anonymous PCs on displays… only Macs shine out there…). Having to compete with MS “official” hardware will force OEMs to develop better product and bring back some real competition in a market that was ruled only by prices (the lower the better even when that means low quality) and no innovative features at all (when it was the last time that a new PC surprised you?). Moving into a new market is a big and risky move, but with Windows 8 Microsoft is playing a crucial move for its future, trying to be back in the innovation run against apple and google. MS can’t afford to fail this time. I saw the new devices (the WinRT and Pro) and the specifications are scarce, misleading and confusing. The first impression is that the device looks like an iPad with a nice keyboard cover… Using “HD” and “full HD” to define display resolution instead of using the real figures and reviving the “ClearType” brand (now dead on Win8 as reported here and missed by people who hate to read text on displays, like myself) without providing clear figures (couldn’t you count those damned pixels?) seems to imply that MS was caught by surprise by apple recent “retina” displays that brought very high definition screens on tablets.Also there are no specifications about the processors used (even if some sources report NVidia Tegra for the ARM tablet and i5 for the x86 one) and expected battery life (a critical point for tablets and the point that killed Windows7 x86 based tablets). 
Also nothing about the price, and this will be another critical point because other platform out there already provide lots of applications and have a good user base, if MS want to enter this market tablets pricing must be competitive. There are some expansion ports (SD and USB), so no fixed storage model (even if the specs talks about 32-64GB for RT and 128-256GB for pro). I like this and don’t like the apple model where flash memory (that it’s dirt cheap used in thumdrives or SD cards) is as expensive as gold (or cocaine to have a more accurate per gram measurement) when mounted inside a tablet/phone. For big files you’ll be able to use external media and an SD card could be used to store files that don’t require super-fast SSD-like access times, I hope. To be honest I really don’t like the marketplace model and the limitation of Windows RT APIs (no local database? from a company that based a good share of its success on VB6+Access!) and lack of desktop support on the ARM (even if the support is here and has been used to port office). It’s a step toward the consumer market (where competitors are making big money), but may impact enterprise (and embedded) users that may not appreciate Windows 8 new UI or the limitations of the new app model (if you aren’t connected you are dead ). Not having compatibility with the desktop will require brand new applications and honestly made all the CPU cycles spent to convert .NET IL into real machine code in the past like a huge waste of time… as soon as a new processor architecture is supported by Windows you still have to rewrite part of your application (and MS is pushing HTML5+JS and native code more than .NET in my perception). On the other side I believe that the development experience provided by Visual Studio is still miles (or kilometres) ahead of the competition and even the all-uppercase menu of VS2012 hasn’t changed this situation. The new metro UI got mixed reviews. On my side I should say that is very pleasant to use on a touch screen, I like the minimalist design (even if sometimes is too minimal and hides stuff that, in my opinion, should be visible) but I should also say that using it with mouse and keyboard is like trying to pick your nose with boxing gloves… Metro is also very interesting for embedded devices where touch screen usage is quite common and where having an application taking all the screen is the norm. For devices like kiosks, vending machines etc. this kind of UI can be a great selling point. I don’t need a new tablet (to be honest I’m pretty happy with my wife’s iPad and with my PC), but I may change my opinion after having a chance to play a little bit with those new devices and understand what’s hidden under all this mysterious and generic announcements and specifications!

    Read the article

  • JPRT: A Build & Test System

    - by kto
    DRAFT A while back I did a little blogging on a system called JPRT, the hardware used and a summary on my java.net weblog. This is an update on the JPRT system. JPRT ("JDK Putback Reliablity Testing", but ignore what the letters stand for, I change what they mean every day, just to annoy people :\^) is a build and test system for the JDK, or any source base that has been configured for JPRT. As I mentioned in the above blog, JPRT is a major modification to a system called PRT that the HotSpot VM development team has been using for many years, very successfully I might add. Keeping the source base always buildable and reliable is the first step in the 12 steps of dealing with your product quality... or was the 12 steps from Alcoholics Anonymous... oh well, anyway, it's the first of many steps. ;\^) Internally when we make changes to any part of the JDK, there are certain procedures we are required to perform prior to any putback or commit of the changes. The procedures often vary from team to team, depending on many factors, such as whether native code is changed, or if the change could impact other areas of the JDK. But a common requirement is a verification that the source base with the changes (and merged with the very latest source base) will build on many of not all 8 platforms, and a full 'from scratch' build, not an incremental build, which can hide full build problems. The testing needed varies, depending on what has been changed. Anyone that was worked on a project where multiple engineers or groups are submitting changes to a shared source base knows how disruptive a 'bad commit' can be on everyone. How many times have you heard: "So And So made a bunch of changes and now I can't build!". But multiply the number of platforms by 8, and make all the platforms old and antiquated OS versions with bizarre system setup requirements and you have a pretty complicated situation (see http://download.java.net/jdk6/docs/build/README-builds.html). We don't tolerate bad commits, but our enforcement is somewhat lacking, usually it's an 'after the fact' correction. Luckily the Source Code Management system we use (another antique called TeamWare) allows for a tree of repositories and 'bad commits' are usually isolated to a small team. Punishment to date has been pretty drastic, the Queen of Hearts in 'Alice in Wonderland' said 'Off With Their Heads', well trust me, you don't want to be the engineer doing a 'bad commit' to the JDK. With JPRT, hopefully this will become a thing of the past, not that we have had many 'bad commits' to the master source base, in general the teams doing the integrations know how important their jobs are and they rarely make 'bad commits'. So for these JDK integrators, maybe what JPRT does is keep them from chewing their finger nails at night. ;\^) Over the years each of the teams have accumulated sets of machines they use for building, or they use some of the shared machines available to all of us. But the hunt for build machines is just part of the job, or has been. And although the issues with consistency of the build machines hasn't been a horrible problem, often you never know if the Solaris build machine you are using has all the right patches, or if the Linux machine has the right service pack, or if the Windows machine has it's latest updates. Hopefully the JPRT system can solve this problem. When we ship the binary JDK bits, it is SO very important that the build machines are correct, and we know how difficult it is to get them setup. 
Sure, if you need to debug a JDK problem that only shows up on Windows XP or Solaris 9, you'll still need to hunt down a machine, but not as a regular everyday occurance. I'm a big fan of a regular nightly build and test system, constantly verifying that a source base builds and tests out. There are many examples of automated build/tests, some that trigger on any change to the source base, some that just run every night. Some provide a protection gateway to the 'golden' source base which only gets changes that the nightly process has verified are good. The JPRT (and PRT) system is meant to guard the source base before anything is sent to it, guarding all source bases from the evil developer, well maybe 'evil' isn't the right word, I haven't met many 'evil' developers, more like 'error prone' developers. ;\^) Humm, come to think about it, I may be one from time to time. :\^{ But the point is that by spreading the build up over a set of machines, and getting the turnaround down to under an hour, it becomes realistic to completely build on all platforms and test it, on every putback. We have the technology, we can build and rebuild and rebuild, and it will be better than it was before, ha ha... Anybody remember the Six Million Dollar Man? Man, I gotta get out more often.. Anyway, now the nightly build and test can become a 'fetch the latest JPRT build bits' and start extensive testing (the testing not done by JPRT, or the platforms not tested by JPRT). Is it Open Source? No, not yet. Would you like to be? Let me know. Or is it more important that you have the ability to use such a system for JDK changes? So enough blabbering on about this JPRT system, tell me what you think. And let me know if you want to hear more about it or not. Stay tuned for the next episode, same Bloody Bat time, same Bloody Bat channel. ;\^) -kto

    Read the article

  • Is IE9 really good?

    - by anirudha
    IE9 started a campaign for kill IE6 from the core because they know that IE6 is a big trouble or  problem for them for promote 9 version of IE. so they started a campaign for killing IE6. next time they kill IE 7 , 8,9 whenever they found this old version have a big problem for them to promote next version of IE.   Why they not make a update system who automatically update the browser and tell user to restart and update goes installed in the user system. well IE9 should learn from all other that they have very well design auto-update system who never give user in trouble that your browser goes old. Chrome and Firefox both update themselves and say user restart to enjoy another good version. in IE6 a big problem is that updates. no one sure that they installed new version of IE6 without any hassles and update goes install without any problem because they really know or care about “you need this to install this and this for this” so they thing “why I update IE whenever I am unsure that my browser goes update and I have no problem again” so they do nothing because their work done with no problem because common person used high profile application who work even in IE6. so they do nothing.    IE6 countdown website have designed a banner for warn or force user to upgrade to next version of IE. well there is no good reason for put the banner on website some of reason are:-   Windows 7 comes with pre-installed IE8 and Vista comes with upgrade version them IE6 so that is sure that you force a user who have Windows XP [luna] and if they want to upgrade IE then they can get IE8 not version 9 because IE9 is design for Windows 7 or Vista Service pack 2. so What is the use of update when user still have a outdate version too because IE8 is old version and not have any capability of HTML5 so forcing user by using the banner have no sense. I am not know why they all listed on website put the banner on their own website. it’s good that you offer user what they want instead of giving them a outdate version of IE again. My means to give a user list of browser they can try to enhance their browser experience instead of only IE.   IE9 build upon WPF and they spent more time on using WPF in IE instead of making user experience browser.  many thing is designed wrongly in IE first thing is tabs. the tabs in chrome are bigger and easily to move and same in Firefox even not have smooth tabbing. IE have same tabbing as chrome have but leak a point that it’s too small. if you really  want to move then sometime they create a problem that they going elsewhere from the current instance of IE.   Chrome have a big buttons, tabs and menu to enhance browser experience and Firefox have a good feature that you can make them bigger or small. you can put the icon for add-ons on the toolbar for easily use but IE have no relation with customization so we never can thinking about that.   When chrome provide lot’s of extensions and a  webstore for browser application and same feature in Firefox can be seen then there is no plugin in IE. really you can see their IE addons Website where no plugin listed for web development. even in the category or tag. as a response from many blog there is new for developer that new version of IE9 developer tool. well IE9 have three new tabs a blogger tell on their blog. when I trying them I found many thing but I still unable to edit the Css from the HTML tab and no plugin I found I can get to enhance IE9 web development. 
something more other provide never IE9 give me like personas , customization , browser extension or any other they used to tell a small thing customization  .   IE9 still have some problem with JavaScript that when I use Firefox and chrome and logout in both then my cookie is deleted but in IE it’s not done. it’s show me that IE9 still have different from other not for good thing even some bad thing too. When I trying to read a article that is written in Hindi using Unicode font I found that they show many thing misspelled. there is three Sha in Hindi but they all goes wrong in IE. the misprint thing is not that the writing  for the articles goes wrong. it’s problem or browser to rendering a font. the Firefox and chrome not give me this problem even opera render the font in italic style by decrease the font-size but all those work perfect.   in Pwn2Own the apple’s safari  and IE9 both are hacked. this is a awesome news for whose who thing that  open-source is lose in  Security and close-source is highly-secured software. well this is not a good parameter for talking about software. it’s should depend how much application tested and used. because more testing and more use of application make them better.   I  appreciate IE to making their new version 9 and good luck for them. there is a another matter that I personally found nothing on them.

    Read the article

  • What to leave when you're leaving

    - by BuckWoody
    There's already a post on this topic - sort of. I read this entry, where the author did a good job on a few steps, but I found that a few other tips might be useful, so if you want to check that one out and then this post, you might be able to put together your own plan for when you leave your job.  I once took over the system administrator (of which the Oracle and SQL Server servers were a part) at a mid-sized firm. The outgoing administrator had about a two- week-long scheduled overlap with me, but was angry at the company and told me "hey, I know this is going to be hard on you, but I want them to know how important I was. I'm not telling you where anything is or what the passwords are. Good luck!" He then quit that day. It took me about three days to find all of the servers and crack the passwords. Yes, the company tried to take legal action against the guy and all that, but he moved back to his home country and so largely got away with it. Obviously, this isn't the way to leave a job. Many of us have changed jobs in the past, and most of us try to be very professional about the transition to a new team, regardless of the feelings about a particular company. I've been treated badly at a firm, but that is no reason to leave a mess for someone else. So here's what you should put into place at a minimum before you go. Most of this is common sense - which of course isn't very common these days - and another good rule is just to ask yourself "what would I want to know"? The article I referenced at the top of this post focuses on a lot of documentation of the systems. I think that's fine, but in actuality, I really don't need that. Even with this kind of documentation, I still perform a full audit on the systems, so in the end I create my own system documentation. There are actually only four big items I need to know to get started with the systems: 1. Where is everything/everybody?The first thing I need to know is where all of the systems are. I mean not only the street address, but the closet or room, the rack number, the IU number in the rack, the SAN luns, all that. A picture here is worth a thousand words, which is why I really like Visio. It combines nice graphics, full text and all that. But use whatever you have to tell someone the physical locations of the boxes. Also, tell them the physical location of the folks in charge of those boxes (in case you aren't) or who share that responsibility. And by "where" in this case, I mean names and phones.  2. What do they do?For both the servers and the people, tell them what they do. If it's a database server, detail what each database does and what application goes to that, and who "owns" that application. In my mind, this is one of hte most important things a Data Professional needs to know. In the case of the other administrtors or co-owners, document each person's responsibilities.   3. What are the credentials?Logging on/in and gaining access to the buildings are things that the new Data Professional will need to do to successfully complete their job. This means service accounts, certificates, all of that. The first thing they should do, of course, is change the passwords on all that, but the first thing they need is the ability to do that!  4. What is out of the ordinary?This is the most tricky, and perhaps the next most important thing to know. Did you have to use a "special" driver for that video card on server X? 
Is the person that co-owns an application with you mentally unstable (like me) or have special needs, like "don't talk to Buck before he's had coffee. Nothing will make any sense"? Do you have service pack requirements for a specific setup? Write all that down. Anything that took you a day or longer to make work is probably a candidate here. This is my short list - anything you care to add? Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • Can't get gitosis and ssh to play nice on cygwin

    - by Noel Kennedy
    I have followed this guide to setting up gitosis on a windows 2003 server via cygwin. I have now got to a point where it largely works. I can clone, pull and push. The problem I am having is that I think I have not got the ssh bit right at all. When I connect via msysgit from machines and accounts where I have not created or uploaded ssh keys it works. Every time I clone, pull or push I get a password challenge for the 'git' user running on the server but basically I can execute git commands. When I connect with users with an ssh key in the ~/.ssh folder, I don't get the password challange and instead I get a permissions failure: DEBUG:gitosis.serve.main:Got command "git-upload-pack '/cris.git'" DEBUG:gitosis.access.haveAccess:Access check for 'teamcity@hhit24808' as 'writable' on 'cris.git'... DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'cris.git', new value 'cris' DEBUG:gitosis.access.haveAccess:Access check for 'teamcity@hhit24808' as 'writeable' on 'cris.git'... DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'cris.git', new value 'cris' DEBUG:gitosis.access.haveAccess:Access check for 'teamcity@hhit24808' as 'readonly' on 'cris.git'... DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'cris.git', new value 'cris' ERROR:gitosis.serve.main:Repository read access denied fatal: The remote end hung up unexpectedly I have uploaded the public rsa key into the key_dir folder. Here is my conf file: [gitosis] loglevel = DEBUG [group gitosis-admin] writable = gitosis-admin members = myemail@mydomain [group cris-developers] members = myemail@mydomain TeamCity@HHIT24808 writable = cris If it matters, I have generated a key without a passphrase as I believe this is necessary to enable ssh for automated scripts. When I use keys with a passphrase, I get challanged for the phrase but then get the same permissions problem. I have tried 'writable' and 'writeable' for permissions. Help!! Update 1: When I try to clone a non-existant repo, I get the same error message, co-incidence? Update 2: Wierd, I've got one machine and one login working. It seems to be something to do with the syntax for addressing git over ssh. This now works on one machine for one login: git clone git@servername:cris.git The same command fails for a user on another machine without an uploaded ssh key. But this command works (after being challanged for git@servername's password) git clone git@servername:/home/git/repositories/cris.git neither command works on a 2nd login whose ssh key has been uploaded

    Read the article

  • availability of Win32_MountPoint and Win32_Volume on Windows XP?

    - by SteveC
    From the MSDN articles I've found -- http://msdn.microsoft.com/en-us/library/aa394515(v=VS.85).aspx -- Win32_Volume and Win32_MountPoint aren't available on Windows XP. However, I'm developing a C# app on Windows XP (64bit), and I can get to those WMI classes just fine. Users of my app will be on Windows XP sp2 with .Net 3.5 sp1. Googling around, I can't determine whether I can count on this or not. Am I successful on my system because of one or more of the following: - windows xp service pack 2? - visual studio 2008 sp1 was installed? - .Net 3.5 sp1? Should I use something other than WMI to get at the volume/mountpoint info? Below is sample code that's working... public static Dictionary<string, NameValueCollection> GetAllVolumeDeviceIDs() { Dictionary<string, NameValueCollection> ret = new Dictionary<string, NameValueCollection>(); // retrieve information from Win32_Volume try { using (ManagementClass volClass = new ManagementClass("Win32_Volume")) { using (ManagementObjectCollection mocVols = volClass.GetInstances()) { // iterate over every volume foreach (ManagementObject moVol in mocVols) { // get the volume's device ID (will be key into our dictionary) string devId = moVol.GetPropertyValue("DeviceID").ToString(); ret.Add(devId, new NameValueCollection()); //Console.WriteLine("Vol: {0}", devId); // for each non-null property on the Volume, add it to our NameValueCollection foreach (PropertyData p in moVol.Properties) { if (p.Value == null) continue; ret[devId].Add(p.Name, p.Value.ToString()); //Console.WriteLine("\t{0}: {1}", p.Name, p.Value); } // find the mountpoints of this volume using (ManagementObjectCollection mocMPs = moVol.GetRelationships("Win32_MountPoint")) { foreach (ManagementObject moMP in mocMPs) { // only care about adding directory // Directory prop will be something like "Win32_Directory.Name=\"C:\\\\\"" string dir = moMP["Directory"].ToString(); // find opening/closing quotes in order to get the substring we want int first = dir.IndexOf('"') + 1; int last = dir.LastIndexOf('"'); string dirSubstr = dir.Substring(first , last - first); // use GetFullPath to normalize/unescape any extra backslashes string fullpath = Path.GetFullPath(dirSubstr); ret[devId].Add(MOUNTPOINT_DIRS_KEY, fullpath); } } } } } } catch (Exception ex) { Console.WriteLine("Problem retrieving Volume information from WMI. {0} - \n{1}",ex.Message,ex.StackTrace); return ret; } return ret; }
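As a quick usage sketch for the method above (purely illustrative - it assumes the MOUNTPOINT_DIRS_KEY constant referenced in the method is defined in the same class):

    // hypothetical caller: dump every volume's device ID and its mount points
    var volumes = GetAllVolumeDeviceIDs();
    foreach (var entry in volumes)
    {
        Console.WriteLine("Volume: {0}", entry.Key);
        string[] dirs = entry.Value.GetValues(MOUNTPOINT_DIRS_KEY);
        if (dirs == null) continue; // volume has no mount point (e.g. not mounted to a folder or drive letter)
        foreach (string dir in dirs)
            Console.WriteLine("  mounted at: {0}", dir);
    }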

    Read the article

  • Why does the interpreted order seem different from what I expect?

    - by inspectorG4dget
    I have a problem that I have not faced before: It seems that the order of interpretation in my program is somehow different from what I expect. I have written a small Twitter client. It takes a few seconds for my program to actually post a tweet after I click the "GO" button (which can also be activated by hitting ENTER on the keyboard). I don't want to click multiple times within this time period thinking that I hadn't clicked it the first time. Therefore, when the button is clicked, I would like the label text to display something that tells me that the button has been clicked. I have implemented this message by altering the label text before I send the tweet across. However, for some reason, the message does not display until the tweet has been attempted. But since I have a confirmation message after the tweet, I never get to see this message and my original problem goes unsolved. I would really appreciate any help. Here is the relevant code: class SimpleTextBoxForm(Form): def __init__(self): # set window properties self.Text = "Tweeter" self.Width = 235 self.Height = 250 #tweet away self.label = Label() self.label.Text = "Tweet Away..." self.label.Location = Point(10, 10) self.label.Height = 25 self.label.Width = 200 #get the tweet self.tweetBox = TextBox() self.tweetBox.Location = Point(10, 45) self.tweetBox.Width = 200 self.tweetBox.Height = 60 self.tweetBox.Multiline = True self.tweetBox.WordWrap = True self.tweetBox.MaxLength = 140; #ask for the login ID self.askLogin = Label() self.askLogin.Text = "Login:" self.askLogin.Location = Point(10, 120) self.askLogin.Height = 20 self.askLogin.Width = 60 self.login = TextBox() self.login.Text= "" self.login.Location = Point(80, 120) self.login.Height = 40 self.login.Width = 100 #ask for the password self.askPass = Label() self.askPass.Text = "Password:" self.askPass.Location = Point(10, 150) self.askPass.Height = 20 self.askPass.Width = 60 # display password box with character hiding self.password = TextBox() self.password.Location = Point(80, 150) self.password.PasswordChar = "x" self.password.Height = 40 self.password.Width = 100 #submit button self.button1 = Button() self.button1.Text = 'Tweet' self.button1.Location = Point(10, 180) self.button1.Click += self.update self.AcceptButton = self.button1 #pack all the elements of the form self.Controls.Add(self.label) self.Controls.Add(self.tweetBox) self.Controls.Add(self.askLogin) self.Controls.Add(self.login) self.Controls.Add(self.askPass) self.Controls.Add(self.password) self.Controls.Add(self.button1) def update(self, sender, event): if not self.password.Text: self.label.Text = "You forgot to enter your password..." else: self.tweet(self.tweetBox.Text, self.login.Text, self.password.Text) def tweet(self, msg, login, password): self.label.Text = "Attempting Tweet..." # this should be executed before sending the tweet is attempted. But this seems to be executed only after the try block try: success = 'Tweet successfully completed... yay!\n' + 'At: ' + time.asctime().split()[3] ServicePointManager.Expect100Continue = False Twitter().UpdateAsXML(login, password, msg) except: error = 'Unhandled Exception. Tweet unsuccessful' self.label.Text = error else: self.label.Text = success self.tweetBox.Text = ""

    Read the article

  • Managed (.net) library with html-tidy like functionality?

    - by Eamon Nerbonne
    Does anybody know of an html cleaner for .NET that can parse html and (for instance) convert it to a more machine friendly format such as xhtml? I've tried the HTML Agility Pack, but that fails to correctly parse even fairly simple examples. To give an example of html that should be parsed correctly: <html><body> <ul><li>TestElem1 <li>TestElem2 <li>TestElem3 List: <ul><li>Nested1 <li>Nested2</li> <li>Nested3 </ul> <li>TestElem4 </ul> <p>paragraph 1 <p>paragraph 2 <p>paragraph 3 </body></html> li tags don't need to be closed (see spec), and neither do P tags. In other words, the above sample should be parsed as: <html><body> <ul><li>TestElem1</li> <li>TestElem2</li> <li>TestElem3 List: <ul><li>Nested1</li> <li>Nested2</li> <li>Nested3</li> </ul></li> <li>TestElem4</li> </ul> <p>paragraph 1</p> <p>paragraph 2</p> <p>paragraph 3</p> </body></html> Since the aim is to use the library on various machines, it's a big disadvantage to need to fall back to native code (such as a wrapper around html tidy) which would require extra deployment hassle and sacrifice platform independance. Any suggestions? To recap, I'm looking for: An html cleaner ala HTML tidy Must be able to deal with real world html, not just xhtml, at the very least correctly reading valid html 4 Must be able to convert to a more easily processable xml format Should be a purely managed app.

    Read the article

< Previous Page | 89 90 91 92 93 94 95 96 97 98 99 100  | Next Page >