Search Results



  • Government Mandates and Programming Languages

    A recent SEC proposal (which, at over 600 pages, I haven't read in any detail) includes the following:

        "We are proposing to require the filing of a computer program (the 'waterfall computer program', as defined in the proposed rule) of the contractual cash flow provisions of the securities in the form of downloadable source code in Python, a commonly used computer programming language that is open source and interpretive. The computer program would be tagged in XML and required to be filed with the Commission as an exhibit. Under our proposal, the filed source code for the computer program, when downloaded and run (by loading it into an open Python session on the investor's computer), would be required to allow the user to programmatically input information from the asset data file that we are proposing to require as described above. We believe that, with the waterfall computer program and the asset data file, investors would be better able to conduct their own evaluations of ABS and may be less likely to be dependent on the opinions of credit rating agencies."

        "With respect to any registration statement on Form SF-1 (Section 239.44) or Form SF-3 (Section 239.45) relating to an offering of an asset-backed security that is required to comply with Item 1113(h) of Regulation AB, the Waterfall Computer Program (as defined in Item 1113(h)(1) of Regulation AB) must be written in the Python programming language and able to be downloaded and run on a local computer properly configured with a Python interpreter. The Waterfall Computer Program should be filed in the manner specified in the EDGAR Filer Manual."

    I don't see how it can be in investors' best interests that the SEC demand a particular programming language be used for software related to investment data. I have a feeling that investors who use computers at all already have software with which they are familiar, and that the vast majority of them are not running an open source scripting language on their machines to do their financial analysis. In fact, I would wager that most of them are using tools like Excel, and if they really need to script anything, it's being done with VBA in Excel.

    Now, I'm not proposing that the SEC should require that the data be provided in Excel format with VBA scripts included so everyone can easily access the data (despite the fact that this would actually be pretty useful generally). Rather, I think it is ill-advised for a government agency to make recommendations of this nature, period. If the goal of the recommendation is to ensure that the way things work is codified in a transparent manner, then I can certainly respect that. It seems to me that this could be accomplished without dictating the technology to use. To wit:

    - An Excel document could contain all of the data as well as the formulae necessary, and most likely would not require the end user to install anything on their machine
    - The SEC could simply create a calculator in the cloud such that any/all investors could use a single canonical web-based (or web-service-based) tool
    - Millions of Java and .NET developers could write their own implementations

    You can read more about this issue, including the favorable position on it, on Jayanth Varma's blog.
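    For what it's worth, here is a minimal, hypothetical Python sketch of what a cash-flow "waterfall" program does: available cash is paid out to each tranche in order of seniority until it runs out. The tranche names and amounts below are invented for illustration and are not taken from the SEC proposal.

        # Hypothetical sketch of a cash-flow "waterfall": available cash is
        # distributed to tranches in order of seniority. The tranche data is
        # invented; a real filing would encode the deal's actual provisions.

        def run_waterfall(available_cash, tranches):
            """Pay each tranche in order until the cash runs out.

            tranches: list of (name, amount_due) pairs, most senior first.
            Returns a list of (name, amount_paid) pairs.
            """
            payments = []
            for name, amount_due in tranches:
                paid = min(available_cash, amount_due)
                payments.append((name, paid))
                available_cash -= paid
            return payments

        if __name__ == "__main__":
            tranches = [("Class A", 600.0), ("Class B", 300.0), ("Class C", 200.0)]
            for name, paid in run_waterfall(1000.0, tranches):
                print(f"{name} receives {paid:.2f}")  # A: 600, B: 300, C: 100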


  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
      In January I published an article about how to optimize high-cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now "xVelocity in-memory analytics engine (VertiPaq)", but using xVelocity or VertiPaq when we talk about Analysis Services has the same meaning. In this post I'll show how to investigate the size of the columns in an existing Tabular database, so that you can find the most important columns to optimize.

      A first approach can be to look in the DataDir of Analysis Services for the folder containing the database, and then look for the biggest files in all subfolders; the name of such a file contains the name of the most expensive column. However, this heuristic process is not very efficient. A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in, and execute it) you will see all database objects sorted by used size in descending order:

          SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
          ORDER BY used_size DESC

      You can look at the first rows in order to understand what the most expensive columns in your tabular model are. The interesting data provided are:

      - TABLE_ID: the name of the object; it can also be a dictionary or an index
      - COLUMN_ID: the column name the object belongs to; you may also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes
      - RECORDS_COUNT: the number of rows in the column
      - USED_SIZE: the memory used by the object

      By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your tabular model. Your options are:

      - Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model.
      - Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (e.g. a temperature), round the number to the nearest decimal that is relevant to you.
      - Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article (a minimal sketch of the idea appears at the end of this post).
      - Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don't have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower-density columns (those with a small number of distinct values) and going to the higher-density columns (those with high cardinality) is the technique that provides the best compression ratio.

      After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq, and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session "VertiPaq Under the Hood" on March 21 at 08:00 GMT.
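      To illustrate the "split the column" option, here is a small, hypothetical Python sketch (not from the article) showing how a high-cardinality integer can be split into two lower-cardinality parts that recombine losslessly; the divisor and values are invented for illustration. Each of the two resulting columns has far fewer distinct values, so each compresses with a much smaller dictionary.

          # Hypothetical illustration of the "split the column" optimization:
          # a high-cardinality integer is split into quotient and remainder
          # columns, each with far fewer distinct values, and recombined
          # losslessly whenever the original value is needed.

          SPLIT = 1000

          def split_value(value):
              return value // SPLIT, value % SPLIT

          def combine(high, low):
              return high * SPLIT + low

          if __name__ == "__main__":
              original = 123456789
              high, low = split_value(original)   # (123456, 789)
              assert combine(high, low) == original
              print(high, low)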


  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels where in their corresponding region. Is this even accurate enough? Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy and if it is even possible using non-power-of-two textures. The problem wit these approaches is, that the pipeline gets stalled which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints on a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it so that there may be hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something simila is available in OpenGL ES). However, this would also result in a pipeline stall.


  • SR Activity Summaries Via Direct Email? You Bet!

    - by PCat
    Courtesy of Ken Walker. I'm a "bottom line" kind of guy. My friends and co-workers will tell you that I'm a "Direct Communicator" when it comes to work or my social life. For example, if I were to come up with a fantastic new recipe for a low-fat pan-fried chicken, I'd Tweet, email, or find a way to blast the recipe directly to you so that you could enjoy it immediately. My friends would see the subject, "Awesome New Fried Chicken", and they'd click and see the recipe there before them.

    Others are "Indirect Communicators." My friend Joel is like this. He would post the recipe in his blog, and then Tweet or email a link back to his blog with a subject, "Fried Chicken." Then Joel would sit back and expect his friends to read the email, AND click the link to his blog, and then read the recipe. As a fan of the "Direct" method, I wish there was a way for me to opt in for immediate updates from Joel so I could see the recipe without having to click over to his blog to search for it.

    The same is true for MOS. If you've ever opened a Service Request through My Oracle Support (MOS), you know that most of the communication between you and the Oracle Support Engineer, with respect to the issue in the SR, is done via email. Which type of email would you rather receive in your email account?

    Example 1: "Your SR has been updated. Click HERE to see the update."

    Example 2: "Your SR has been updated. Here is the update: 'Hi John, Oracle Development has completed the patch we've been waiting for! Here's a direct LINK to the patch that should resolve your issue. Please download and install the patch via the instructions (included with the link) and let me know if it does, in fact, resolve your issue!'"

    Example 2 is available to you! All you need to do is to opt in for the direct email updates. The default is the indirect update, as seen in Example 1. To turn on "Service Request Details in Email", simply follow these instructions:

    1. Log into MOS, and click on your name in the upper right corner. Select "My Account."
    2. Make sure "My Account" is highlighted in bold on the left.
    3. Turn ON "Service Request Details in Email".

    That's it! You will now receive the SR updates directly in your email account without having to log into MOS, click the SR, scroll down to the updates, etc. That's better than Fried Chicken! (Well; almost better....)


  • Make your TSQL easier to read during a presentation

    - by Jonathan Allen
    SQL Server Management Studio 2012 has some neat settings that you can use to make your presentations at a SQL event better for the attendees, if you are willing to spend a few minutes making some settings changes. Historically, I have been reluctant to make changes to my SSMS settings, as it is such a tedious process and it's not 100% clear that what you think you are changing is actually what gets changed. With SSMS 2012 this has become a lot easier and a lot less risky.

    In any session that involves TSQL there is a trade-off between the speaker having all the code on screen and the attendees being able to read any of what is on screen. You (the speaker) might be able to read this when you are working on the code, but plenty of your audience won't be able to make head or tail of it. SSMS 2012 has a zoom facility that can help: but don't go nuts... Having the font too big means you will be scrolling a lot and the code will again be rendered unreadable.

    There is more, though, but you need to take a deep breath, open the Tools menu and delve into the SSMS options. In previous versions of SSMS this is a deep, dark and scary place where changing values can be obscure and sometimes catastrophic to the UI when you get back to the code editor. First things first: we set out as a good DBA and save our current (and presumably acceptable) SSMS configuration. From the Import and Export Settings wizard you can set up a file to hold all of the settings that you currently have. The wizard will open and ask you to pick an option; this time around, choose to export settings. Hit Next and Next again, name your settings profile in the final step of the wizard, and then click Finish. Once this is done you can change whatever you like and always get back to this configuration in a couple of clicks.

    So what can you change to make for a good experience? Well, there are plenty of things that can be altered, but don't go too mad and change too many things without taking a look at the results. For every item in the Display Items list you can change font, size, weight, colour, background colour etc., but consider what you are trying to achieve and take it slowly. I have seen presenters with their settings set to have a yellow highlight and black font rather than the default pale blue background and slightly darker font; to achieve that, select Text Editor and then select "Selected Text" in the Display Items listbox. As you change things the Sample area gives you an idea of what effect you are going to have. Black and yellow is the colour combination with the highest contrast – that's why bees and wasps* are that colour.

    What next? How about increasing the default font for your demo scripts? This means that any script you open, and any new ones that you start, will take on this font. No more zooming (or forgetting to) in the middle of sessions. Now don't forget to save this profile – follow the same steps as above but give the profile a different name; something like PresentationBigFontHighContrast might be appropriate. Once you are done making changes, export the settings once more, and then go into the Import Export wizard and import settings from the first profile you created. Everything will be back to normal. Now making changes to suit your environment can be done very easily and with confidence.

    * – and warning tape and safety signs and so forth – Health and Safety officers simply copy nature!


  • Four Easy Ways to Save a Rocky CRM Relationship

    - by Divya Malik
    Today, I am pleased to introduce our guest blogger, Luke Christianson. Luke is an Application Sales rep based out of Minneapolis, MN. You can find him on LinkedIn and follow him on Twitter.

    In any relationship, sooner or later, the excitement fades away. The honeymoon period gives way to the old routines you had before you committed to each other, and you eventually begin doing things apart from one another. I'm not talking about a marriage... Well, I guess I am. Commitment to a CRM tool and building a deep and lasting relationship is not much different from the basics of a traditional love story. After your controlled CRM pilot program, and maybe the National Sales Meeting where you couldn't escape those three wonderful letters, CRM, you will soon find that if you haven't designed an environment where it's going to enable your reps to make more money, the relationship is doomed.

    If you're currently in a dysfunctional CRM relationship, here are 4 simple tips to re-engaging users and getting that spark back.

    1. Shadow a sales rep: Chances are you can find out exactly what is preventing your sales reps from using the application by simply watching how they go about their day. Sales reps are driven by money, not by additional administrative duties. Your system needs to be set up so that they can get the information they need quickly, facilitate making key updates, and run their business out of one easy-to-use application.

    2. Increase your sales team's productivity by 5% automatically: Cancel the weekly forecast calls with your reps and require them to update their opportunities in CRM. Something else that I've seen work extremely well: when you do monthly or quarterly reviews, do not let your sales reps bring anything into the room with them; no spreadsheets, notebooks, or computers. Everything they need to tell you should be able to be put into CRM and fully accessible by the Sales Manager at any time.

    3. Tool time: Make sure the tools that you have selected meet both your short-term goals and your long-term goals. You need tools that can adapt like your business does. You probably can't wait two months for an update to a picklist value or for the addition of a simple workflow rule. Do you feel the tools that are in place can create the experience you want for your users?

    And finally, if all else fails...

    4. Keep It Simple, Stupid: Do you really need to require 15 fields to create an Opportunity? Do you need to clutter the interface with different reports that don't add daily value? Most CRM systems on the market today are flexible enough that your admin could clean up most of the unnecessary interface 'noise' in a few hours. If they're not, see #3.

    Every strong relationship can be tedious at times; you'll fight and eventually make amends, you may even threaten to upgrade to a newer model... But be patient and think about what you want to achieve, and you'll find a partner for life.


  • How to safely copy an object?

    - by Prog
    This question is going to be a little long. Please bear with me. Something that happened in a project of mine made me think about how to safely copy objects. I'll present the situation I had and then ask a question.

    There was a class SomeClass:

        class SomeClass {
            Thing[] things;

            public SomeClass(Thing[] things) {
                this.things = things;
            }

            // irrelevant stuff omitted

            public SomeClass copy() {
                return new SomeClass(things);
            }
        }

    There was another class Processor that takes SomeClass objects, copies them (via someClassInstance.copy()), manipulates the copy's state, and returns the copy. Here it is:

        class Processor {
            public SomeClass processObject(SomeClass object) {
                SomeClass copy = object.copy();
                manipulateTheCopy(copy);
                return copy;
            }

            // irrelevant stuff omitted
        }

    I ran this, and it had bugs. I looked into these bugs, and it turned out that the manipulations Processor does on copy actually affect not only the copy, but also the original SomeClass object that was passed into processObject. I found out that it was because the original and the copy shared state: the original passed its field things into the copy when creating it.

    This made me realize that copying objects is harder than simply instantiating them with the same fields as the original. For the two objects to be completely disconnected, without any shared state, each of the fields passed to the copy also has to be copied. And if that object contains other objects, they have to be copied too. And so on. So basically, in order to be able to actually copy an object, each class in the system must have a copy() method that also invokes copy() on all of its fields, and so on. So for example, for copy() in SomeClass to work, it needs to look like this:

        public SomeClass copy() {
            Thing[] copyThings = new Thing[things.length];
            for (int i = 0; i < things.length; i++)
                copyThings[i] = things[i].copy();
            return new SomeClass(copyThings);
        }

    And if Thing has object fields of its own, then its own copy() method must be appropriate:

        class Thing {
            Apple apple;
            Pencil pencil;
            int number;

            public Thing(Apple apple, Pencil pencil, int number) {
                this.apple = apple;
                this.pencil = pencil;
                this.number = number;
            }

            public Thing copy() {
                // 'number' is a primitive, so it needs no copying.
                return new Thing(apple.getCopy(), pencil.getCopy(), number);
            }
        }

    And so on. Of course, instead of all classes having a copy() method, the copying mechanism could happen in all of the getters and the constructors of classes (except in places where it isn't suitable, for example when the field points to an external object, not to an object that 'is part' of this object). Still, that means that in order to be able to safely copy an object, most classes would have to have copying mechanisms in their getters.

    My question is divided into two parts:

    - How frequently do you need to get a copy of an object? Is this a regular issue?
    - Is the technique described common and/or reasonable? Or is there a better way to make safe copies of objects? Or is there an easier way to safely copy objects, without them sharing any state?
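    For comparison, here is a short, hypothetical Python sketch (not from the question) of the same shared-state bug and the usual fix, a recursive deep copy, which Python's standard library provides as copy.deepcopy:

        # Sketch of the shared-state bug described above, and the deep-copy fix.
        # copy.deepcopy recursively copies every field, which is exactly the
        # "copy() that invokes copy() on all of its fields" idea.
        import copy

        class SomeClass:
            def __init__(self, things):
                self.things = things

            def shallow_copy(self):
                # reuses self.things, so copy and original share state (the bug)
                return SomeClass(self.things)

            def deep_copy(self):
                return copy.deepcopy(self)

        if __name__ == "__main__":
            original = SomeClass([["apple"], ["pencil"]])

            bad = original.shallow_copy()
            bad.things[0].append("worm")
            print(original.things[0])   # ['apple', 'worm'] - original affected!

            good = original.deep_copy()
            good.things[1].append("eraser")
            print(original.things[1])   # ['pencil'] - original untouched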


  • Antenna Aligner Part 4: Role'ing in the deep

    - by Chris George
    Since last time I've been trying to sort out the general workflow of the app. It's fundamentally not hard: there is a list of transmitters, you select a transmitter and it shows the compass view. Having done quite a bit of ajax/asp.net/html in the past, I immediately started off by creating two divs within my 'page', one for the list, one for the compass. Then, using the onClick event in the list, I would switch the display attribute on the divs. This seemed to work, but did lead to some dodgy transitional redrawing artefacts which I was not happy with. So after some Googling I realised I was doing it all wrong!

    jQuery Mobile has the concept of giving an object in html a data-role. By giving a div the attribute data-role="page" it is then treated as a separate page on the mobile device. Within the code, this is referenced like a html anchor in the form #mypage. Using this system, page transitions such as fade or slide are automatically applied, which adds to the whole authenticity of the app! Here is a simple example:

        <a href="#compasspage">compass</a>

        <div data-role="page" id="compasspage" data-add-back-btn="true">

    But I don't want just a static link; I want to dynamically create my list, and get each list element to switch to the compass page with the right information. So here is the jQuery that I used to dynamically inject new <li> rows into the <ul> block:

        $('ul').append($('<li/>', {            // here appending `<li>`
            'data-role': "list-divider"
        }).append($('<a/>', {                  // here appending `<a>` into `<li>`
            'href': '#compasspage',
            'data-transition': 'none',
            'onclick': 'selectTx(' + i + ')',
            'html': buttonHtml
        })));
        $('ul').listview('refresh');

    This is called within a for loop so the first 5 appropriate transmitters are used. There are several things of interest to note here. Firstly, I could not find a more elegant way to tell the target page which transmitter I've clicked on, so I have used the onclick event as well as the href attribute. The onclick event fires selectTx, which simply sets a global member variable to the specific index number I've clicked on. Yes, it's not nice, but it works.

    Secondly, the data-transition attribute is set to 'none'. I wanted the transition between the pages to be a whooshy, slidey effect. However, this worked going to the compass page, but returning to the list page gave some undesirable visual artefacts (flickering, redrawing etc.). So I decided to remove the transitions altogether, which was a shame.

    Thirdly, rather than embedding loads of html into the append command, I moved this out into a variable, buttonHtml. Doing this really tidied up my code. Until next time!


  • Best triple head display setup

    - by dgel
    I'm currently running Ubuntu 12.04 with a darn good triple head display setup. I've got a VisionTek 900530 Radeon HD 5450 512MB DDR3 PCI Express video card that has two DVI outputs and one Mini DisplayPort that I have connected to a HDMI adapter. I'm running three identical Asus 1920x1080 monitors that each have a DVI, VGA, and HDMI input. I'm using the xorg-edgers ppa, so I'm using the open source radeon driver version 6.99.99. I tried using the ATI binary fglrx driver, but I wasn't able to get the three monitors working properly: the monitor connected via HDMI / DisplayPort wouldn't run at full resolution. The setup is almost perfect:

    - Compiz runs fine and is quite snappy.
    - I'm not able to use that great Compiz feature where you can drag a window to the side of a display and it will half-maximize.
    - I occasionally experience display corruption weirdness with Unity and need to restart it.
    - When I use a dropdown menu in LibreOffice it often pops the menu down in another window. For example, if I'm using the center monitor and click the Insert menu, the menu pulls down on the monitor to my right, forcing me to chase it. If I chase down the menu and choose Manual Break, the dialog appears over on my left monitor. This absurdity is mildly entertaining but has lost its novelty.

    I've decided to build a new system and have spared no expense: latest i7 processor, SSD, etc. I really like the performance of the Nvidia binary drivers, so I put two ZOTAC ZT-40707-10L GeForce GT 440 cards in the system, figuring I'd have four DVI outputs and an awesome triple (or even eventually quad) head setup. Unfortunately it appears that I didn't do sufficient research before my purchase. It seems that Nvidia TwinView only supports two monitors on one card (I guess that's why they call it TwinView...). I messed around with running two X servers, but I really don't want that: being able to drag windows to any monitor is critical. It doesn't sound like Xinerama is an option because, from what I understand, it simply doesn't support Compiz. I've seen a BaseMosaic option that can be used with the Nvidia drivers that appears to support an almost unlimited number of heads; unfortunately my cheap little cards don't support it. I'm also not sure whether you'd still have all the nice maximizing and snapping that TwinView provides, or whether Ubuntu would only see it as one massive display.

    I put my old trusty ATI card into my new system and installed 12.10. I'm using the open source radeon drivers again because even in 12.10 I can't get the fglrx binary drivers to do triple head. Unfortunately, even with an unbelievably powerful system the experience is extremely sluggish (much more so than my experience in 12.04). The menu scattering problem appears to be fixed, but I get a lot of nasty Unity display corruption.

    So finally, my question is this: What hardware / drivers should I use? I'm willing to buy (almost) any video card(s). I have two PCI-Express 3.0 slots on my motherboard (which has an integrated Intel HD card). I'm willing to use ATI or Nvidia cards and willing to run Ubuntu 12.04.1 or 12.10. I'm not a gamer, but I do want beautiful and snappy Compiz effects. Does anyone out there have the perfect triple head setup in 12.04 or 12.10? What hardware / drivers are you using? I have those two Nvidia cards but will probably be returning them unless someone knows a way to use them together for a triple head setup. Since I'm having pretty good luck with a single ATI card providing three displays, should I just buy a beefier one in the hope that it will fix the horrible sluggishness I'm experiencing in 12.10?


  • broken upgrade from 10.04 to 12.04 on a VPS - recoverable?

    - by HorusKol
    I have a VPS hosted 1500 km away. It originally came with 9.10, and this morning I decided that I really should get to an LTS release, and figured I'd jump to 12.04. Researching, I discovered that there is no direct path between 9.10 and 12.04, but that I could upgrade via 10.04. After backing up my data, I dove in. The upgrade to 10.04 was successful, and I proceeded to upgrade to 12.04.

    Things started to go wrong. First, I got an error with GLIBC; I retried and got the same error. That's when I stopped the upgrade. I then tried another round of apt-get update && apt-get upgrade and got a list of "unmet dependencies":

        apt: Depends: ubuntu-keyring but it is not going to be installed
             Depends: libc6 (>= 2.15) but 2.11.1-0ubuntu7.11 is to be installed
             Depends: libstdc++6 (>= 4.6) but 4.4.3-4ubuntu5.1 is to be installed
             PreDepends: dpkg (>= 1.15.7.2) but 1.15.5.6ubuntu4.6 is to be installed
        apt-utils: Depends: libapt-pkg-libc6.10-6-4.8
        libapt-inst1.4: Depends: libc6 (>= 2.14) but 2.11.1-0ubuntu7.11 is to be installed
        libapt-pkg4.12: Depends: libc6 (>= 2.15) but 2.11.1-0ubuntu7.11 is to be installed
                        Depends: libstdc++6 (>= 4.6) but 4.4.3-4ubuntu5.1 is to be installed
        libc6: Depends: libc-bin (= 2.11.1-0ubuntu7.11) but 2.15-0ubuntu10.2 is to be installed
        libept0: Depends: libapt-pkg-libc6.10-6-4.8
        libnih-dbus1: Depends: libnih1 (= 1.0.3-4ubuntu9) but 1.0.1-1 is to be installed

    I tried to see if I could do something about these, using apt-get -f install. This told me that I would need to upgrade my kernel. I found instructions on how to do this, but when I ran apt-get to install the new linux headers, I got the same dependency errors. I found another answer here where someone else had had an interruption in their upgrade, and tried the solution that worked for them:

        sudo apt-get -f dist-upgrade

    This resulted in the error:

        E: Could not perform immediate configuration on 'python2.7-minimal'. Please see man 5 apt.conf under APT::Immediate-Configure for details. (2)

    I tried to resolve this by:

        apt-get install -o APT::Immediate-Configure=false -f apt python-minimal

    But this simply ended up with this last list of dependency errors:

        apt: Depends: ubuntu-keyring but it is not going to be installed
             Depends: libc6 (>= 2.15) but 2.11.1-0ubuntu7.11 is to be installed
             Depends: libstdc++6 (>= 4.6) but 4.4.3-4ubuntu5.1 is to be installed
             PreDepends: dpkg (>= 1.15.7.2) but 1.15.5.6ubuntu4.6 is to be installed
        apt-utils: Depends: libapt-pkg-libc6.10-6-4.8
        libapt-inst1.4: Depends: libc6 (>= 2.14) but 2.11.1-0ubuntu7.11 is to be installed
        libapt-pkg4.12: Depends: libc6 (>= 2.15) but 2.11.1-0ubuntu7.11 is to be installed
                        Depends: libstdc++6 (>= 4.6) but 4.4.3-4ubuntu5.1 is to be installed
        libc6: Depends: libc-bin (= 2.11.1-0ubuntu7.11) but 2.15-0ubuntu10.2 is to be installed
        libept0: Depends: libapt-pkg-libc6.10-6-4.8
        libnih-dbus1: Depends: libnih1 (= 1.0.3-4ubuntu9) but 1.0.1-1 is to be installed
        python: Depends: python-minimal (= 2.6.5-0ubuntu1) but 2.7.3-0ubuntu2 is to be installed
        python-apt: Depends: libapt-pkg-libc6.10-6-4.8
        python-minimal: Depends: python2.7-minimal (>= 2.7.3) but it is not going to be installed
                        Breaks: python-support (< 1.0.10ubuntu2) but 1.0.4ubuntu1 is to be installed
        synaptic: Depends: libapt-pkg-libc6.10-6-4.8

    Any ideas on how to dig out of this hole?


  • WiX, MSDeploy and an appealing configuration/deployment paradigm

    - by alexhildyard
    I do a lot of application and server configuration; I've done this for many years and have tended to view the complexity of this strictly in terms of the complexity of the ultimate configuration to be deployed. For example, specific APIs aside, I would tend to regard installing a server certificate as a more complex activity than, say, copying a file or adding a Registry entry.

    My prejudice revolved around the idea of a sequential deployment script that not only had the explicit prescription to apply a specific server configuration, but also made the implicit presumption that the server in question was in a good known state. Scripts like this fail for hundreds of reasons: the Default Website didn't exist; the application had already been deployed; the application had already been partially deployed and failed to roll back fully, and so on. And so the problem is that the more complex the configuration activity, the more scope for error in any individual part of that activity, and therefore the greater the chance the server in question will not end up at exactly the desired configuration level.

    Recently I was introduced to a completely different mindset, which, for want of a better turn of phrase, I will call the "make it so" mindset. It's extremely simple both to explain and to implement. In place of the head-down, imperative script you used to use, you substitute a set of checks, much like exception handlers, around each configuration activity, starting with a check of the current system state. Thus the configuration logic becomes: "IF these services aren't started then start them, and IF XYZ website doesn't exist then create it, and IF these shares don't exist then create them, and IF these shares aren't permissioned in some particular way, then permission them so." This works. Really well, in my experience. (A minimal sketch of the pattern appears at the end of this post.)

    - Scenario 1: You want to get a system into a good known state; it's already in a good known state; you quickly realise there is nothing to do.
    - Scenario 2: You want to get the system into a good known state; your script is flawed or the system is bust; it cannot be put into that state. You know exactly where (at least part of) the problem is, and why.
    - Scenario 3: You want to get the system into a good known state; people are fiddling around with the system just now. That's fine. You do what you can, and later you come back and try it again.
    - Scenario 4: No one wants to deploy anything; they want you to prove that the previous deployment was successful. So you re-run the deployment script with the "-WhatIf" flag. It reports that there was nothing to change. There's your proof.

    I mentioned two technologies in the title: MSI and MSDeploy. I am thinking specifically of the conversation that took place here. Having worked with both technologies, I think Rob Mensching's response is appropriately nuanced, and in essence the difference is this: sometimes your target is either to achieve a specific new server state, or to roll back to a known good one. Then again, your target may be to configure what you can, and to understand what you can't. Implicitly, MSDeploy's "rollback" is simply to redeploy the previous version, whereas a well-crafted MSI will actively put your system into that state without further intervention. Either way, if all goes well the MSI will leave you with a system in one of two states, whereas MSDeploy could leave your system in one of many states. The key is that MSDeploy and MSI are complementary technologies; which suits you best depends as much on Operational guidance as on your Configuration remit.

    What I wanted to say was that I have always been for atomic, transactional-based configuration, but having worked with the "make it so" paradigm, I have been favourably impressed by the actual results. I'm tempted to put a more technical post up on this in due course.
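    To make the "make it so" pattern concrete, here is a minimal, hypothetical Python sketch (my own illustration, not from any of the tools mentioned): every step checks the current state before acting, so the script is safe to re-run, and a "-WhatIf"-style dry run simply reports what would change. The resource names and the toy state dictionary are invented.

        # Minimal sketch of idempotent "make it so" configuration: check state,
        # then act only if needed. Safe to re-run; what_if=True reports without
        # changing anything. The "system state" here is a toy stand-in.

        def ensure(description, in_desired_state, make_it_so, what_if=False):
            if in_desired_state():
                print(f"OK:      {description}")
            elif what_if:
                print(f"WOULD:   {description}")
            else:
                make_it_so()
                print(f"CHANGED: {description}")

        state = {"service_started": False, "website_exists": True}

        def apply_configuration(what_if=False):
            ensure("start the XYZ service",
                   lambda: state["service_started"],
                   lambda: state.update(service_started=True),
                   what_if)
            ensure("create the XYZ website",
                   lambda: state["website_exists"],
                   lambda: state.update(website_exists=True),
                   what_if)

        if __name__ == "__main__":
            apply_configuration(what_if=True)  # prove the deployment, change nothing
            apply_configuration()              # converge the system
            apply_configuration()              # re-run: nothing left to do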


  • 3 Incredibly Useful Projects to jump-start your Kinect Development.

    - by mbcrump
    I've been playing with the Kinect SDK Beta for the past few days and have noticed a few projects on CodePlex worth checking out. I decided to blog about them to help spread awareness. If you want to learn more about the Kinect SDK then check out my "Busy Developer's Guide to the Kinect SDK Beta". Let's get started:

    KinectContrib is a set of VS2010 templates that will help you get started building a Kinect project very quickly. Once you have it installed you will have the option to select the following templates:

    - KinectDepth
    - KinectSkeleton
    - KinectVideo

    Please note that KinectContrib requires the Kinect for Windows SDK beta to be installed. The reference to Microsoft.Research.Kinect is added automatically. Here is a sample of the code for the MainWindow.xaml in the "Video" template:

        <Window x:Class="KinectVideoApplication1.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="MainWindow" Height="480" Width="640">
            <Grid>
                <Image Name="videoImage"/>
            </Grid>
        </Window>

    and MainWindow.xaml.cs:

        using System;
        using System.Windows;
        using System.Windows.Media;
        using System.Windows.Media.Imaging;
        using Microsoft.Research.Kinect.Nui;

        namespace KinectVideoApplication1
        {
            public partial class MainWindow : Window
            {
                // Instantiate the Kinect runtime. Required to initialize the device.
                // IMPORTANT NOTE: You can pass the device ID here, in case more than
                // one Kinect device is connected.
                Runtime runtime = new Runtime();

                public MainWindow()
                {
                    InitializeComponent();

                    // Runtime initialization is handled when the window is opened.
                    // When the window is closed, the runtime MUST be uninitialized.
                    this.Loaded += new RoutedEventHandler(MainWindow_Loaded);
                    this.Unloaded += new RoutedEventHandler(MainWindow_Unloaded);

                    // Handle the content obtained from the video camera, once received.
                    runtime.VideoFrameReady += new EventHandler<Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs>(runtime_VideoFrameReady);
                }

                void MainWindow_Unloaded(object sender, RoutedEventArgs e)
                {
                    runtime.Uninitialize();
                }

                void MainWindow_Loaded(object sender, RoutedEventArgs e)
                {
                    // Since only a color video stream is needed, RuntimeOptions.UseColor is used.
                    runtime.Initialize(Microsoft.Research.Kinect.Nui.RuntimeOptions.UseColor);

                    // You can adjust the resolution here.
                    runtime.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);
                }

                void runtime_VideoFrameReady(object sender, Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs e)
                {
                    PlanarImage image = e.ImageFrame.Image;
                    BitmapSource source = BitmapSource.Create(image.Width, image.Height, 96, 96, PixelFormats.Bgr32, null, image.Bits, image.Width * image.BytesPerPixel);
                    videoImage.Source = source;
                }
            }
        }

    You will find this template pack very handy, especially if you are new to Kinect development.

    Next up is the Coding4Fun Kinect Toolkit, which contains extension methods and a WPF control to help you develop with the Kinect SDK. After downloading the package, simply add a reference to the .dll using either the WPF or WinForms version. You will then have access to several extension methods, for example ones that can help you save an image. For a full list of extension methods and properties, please visit the site at http://c4fkinect.codeplex.com/.

    Kinductor - This is a great application for just learning how to use the Kinect SDK. The project uses MVVM Light and is a great start for those looking at how to structure their first Kinect application.
    Conclusion: Things are already getting easier for those working with the Kinect SDK. I imagine that after a few more months we will see the SDK go out of beta and allow commercial applications to run using it. I am very excited and hope that you continue reading my blog for more Kinect, WPF and Silverlight news.


  • Get Ready for Anytime, Anywhere Engagement

    - by Christie Flanagan
    Are you ready for 2015? According to IDC, 2015 is the year when more users are projected to access the internet using mobile devices than with PCs or other wired devices. There's no doubt that mobile devices are a critical means of communication today, and they are on track to become increasingly more important in the coming years. However, device formats are so varied that delivering a mobile web experience that will engage site visitors and enhance your brand can be a daunting task. Solutions that empower organizations to easily extend their web presence to the mobile channel, while saving significant time and effort in managing mobile sites, are now essential in our ever-connected mobile world. So what are some of the things organizations should look for in such a solution?

    Mobile device form factors, networks, protocols, and browsers vary widely, and reformatting web content for thousands of different device and software combinations is a prohibitive task. An effective mobile solution can make this process seamless by automatically formatting designated web content for mobile delivery. By automatically detecting a site visitor's device configuration, the selected web content can be sized and formatted for optimal display on that particular device. This can save tremendous time involved in building, formatting, and maintaining individual websites or mobile applications for different mobile devices.

    It's not enough to simply support the thousands of different mobile device types that are out there. It's also critical to make it easy for marketers and other business users to manage mobile sites and mobile content. Those responsible for maintaining an organization's web and mobile experiences need the ability to edit content using rich text editor tools and then preview that content directly in the context of the mobile website and the traditional website, ideally from the same business user interface. Powerful capabilities such as these make managing the web experience for mobile devices easy, even with frequently changing content, across a multitude of different devices. When content or business needs change, the business user needs only to change site content once, and it is seamlessly deployed to the web and all mobile channels.

    Geo-location is another critical input to making the online experience engaging and relevant for web visitors who are increasingly mobile. A mobile solution should enable use of device GPS data to deliver location-based content and services to mobile website visitors. Organizations can provide mobile site visitors with location-sensitive search results, location-based offers and recommendations, integration of maps and directions into site content, and much more: all critical for meeting the needs of those on the go.

    To hear more about how mobile is changing the game, check out our recent webcast with Ted Schadler, Vice President, Principal Analyst, Forrester, where he discussed why mobile is the new face of engagement, or learn more about how to extend your web presence to the mobile channel with Oracle WebCenter Sites and Oracle WebCenter Sites Mobility Server.


  • Documentation and Test Assertions in Databases

    - by Phil Factor
    When I first worked with Sybase/SQL Server, we thought our databases were impressively large, but they were, by today's standards, pathetically small. We had one script to build the whole database. Every script I ever read was richly annotated; it was more like reading a document. Every table had a comment block, and every line would be commented too. At the end of each routine (e.g. procedure) was a quick integration test, or series of test assertions, to check that nothing in the build was broken. We simply ran the build script, stored in the Version Control System, and it pulled everything together in a logical sequence that not only created the database objects but pulled in the static data. This worked fine at the scale we had. The advantage was that one could, by reading the source code, reach a rapid understanding of how the database worked and how one could interface with it. The problem was that it was a system that meant that only one developer at a time could work on the database. It was very easy for a developer to accidentally execute the entire build script rather than the selected section on which he or she was working, thereby cleansing the database of everyone else's work-in-progress and data.

    It soon became the fashion to work at the object level, so that programmers could check out individual views, tables, functions, constraints and rules and work on them independently. It was then that I noticed the trend to generate the source for the VCS retrospectively from the development server. Tables were worst affected. You can, of course, add or delete a table's columns and constraints retrospectively, which means that the existing source no longer represents the current object. If, after your development work, you generate the source from the live table, then you get no block or line comments, and the source script is sprinkled with silly square brackets and other confetti, thereby rendering it visually indigestible. Routines, too, were affected. In our system, every routine had a directly attached string of unit tests. A retro-generated routine has no unit tests or test assertions. Yes, one can still commit our test code to the VCS, but it's a separate module, and teams end up running the whole suite of tests for every individual change, rather than just the tests for that routine, which doesn't scale for database testing.

    With extended properties, one can get the best of both worlds, and even use them to put blame, praise or annotations into your VCS. It requires a lot of work, though, particularly the script to generate the table. The problem is that there are no conventional names beyond 'MS_Description' for the special use of extended properties. This makes it difficult to do splendid things such as ensuring the integrity of the build by running a suite of tests that are actually stored in extended properties within the database, and therefore in the VCS.

    We have lost the readability of database source code over the years, and largely jettisoned the use of test assertions as part of the database build. This is not unexpected in view of the increasing complexity of the structure of databases and the number of programmers working on them. There must, surely, be a way of getting them back, but I sometimes wonder if I'm one of very few who miss them.
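    As a rough illustration of the idea of test assertions stored in extended properties, here is a hypothetical Python sketch (my own, not from the article). It assumes an invented naming convention, 'Test_...', for properties attached to routines, where each property value is a SQL expression expected to return 1; the connection string and driver name are also assumptions.

        # Hypothetical sketch: run test assertions stored as extended properties
        # on SQL Server objects. Assumes properties named 'Test_...' whose value
        # is a SQL query expected to return 1. Connection details are invented.
        import pyodbc

        conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                              "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")
        cursor = conn.cursor()

        cursor.execute("""
            SELECT o.name, ep.name, CAST(ep.value AS nvarchar(max))
            FROM sys.extended_properties AS ep
            JOIN sys.objects AS o ON ep.major_id = o.object_id
            WHERE ep.name LIKE 'Test[_]%'
        """)

        for routine, test_name, assertion_sql in cursor.fetchall():
            result = cursor.execute(assertion_sql).fetchone()[0]
            status = "PASS" if result == 1 else "FAIL"
            print(f"{status}: {routine}.{test_name}")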


  • Olympics data available for all on Windows Azure SQL Database and Power View

    - by jamiet
    Are you looking around for some decent test data for your BI demos? Well, if so, Microsoft have provided some data about all medals won at the Olympic Games (1900 to 2008) in the "OlympicsData workbook - Excel, SSIS, Azure" sample; it provides analysis over athletes, countries, medal type, sport, discipline and various other dimensions. The data has been provided in an Excel workbook along with instructions on how to load the data into a Windows Azure SQL Database using SQL Server Integration Services (SSIS).

    Frankly though, the rigmarole of standing up your own Windows Azure SQL Database ok, SQL Azure database, is both costly (SQL Azure isn't free) and time consuming (the provided instructions aren't exactly an idiot's guide, and getting SSIS to work properly with Excel isn't a barrel of laughs either). To ease the pain for all you BI folks out there that simply want to party on the data, I have loaded it all into the SQL Azure database that I use for hosting AdventureWorks on Azure. You can read more about AdventureWorks on Azure below; however, I'll summarise here by saying it is a SQL Azure database provided for the use of the SQL Server community, and it is supported by voluntary donations. To view the data, the credentials you need are:

        Server:   mhknbn2kdz.database.windows.net
        Database: AdventureWorks2012
        User:     sqlfamily
        Password: sqlf@m1ly

    Type those into SSMS and away you go; the data is provided in four tables, [olympics].[Sport], [olympics].[Discipline], [olympics].[Event] and [olympics].[Medalist].

    I figured this would be a good candidate for a Power View report, so I fired up Excel 2013 and built such a report to slice'n'dice through the data. Here are some screenshots that should give you a flavour of what is available:

    - A view of all the available data
    - Where do all the gymnastics medals go?
    - Which countries do the top ten all-time medal winners come from?

    You get the idea. There is masses of information here, and if you have Excel 2013 handy, Power View provides a quick and easy way of surfing through it. To save you the bother of setting up the Power View report yourself, you can have the one that I took these screenshots from; it is available on my SkyDrive at OlympicsAnalysis.xlsx, so just hit the link and download to play to your heart's content. Party on, people!

    As I said above, the data is hosted on a SQL Azure database that I use for hosting "AdventureWorks on Azure", which I first announced in March 2013 in "AdventureWorks2012 now available for all on SQL Azure". I'll repeat the pertinent parts of that blog post here:

        I am pleased to announce that as of today ... [AdventureWorks2012] now resides on SQL Azure and is available for anyone, absolutely anyone, to connect to and use for their own means. This database is free for you to use, but SQL Azure is of course not free, so before I give you the credentials please lend me your eyes for a short while longer. AdventureWorks on Azure is being provided for the SQL Server community to use, and so I am hoping that that same community will rally around to support this effort by making a voluntary donation to support the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year, please donate via PayPal to [email protected]. Any amount, no matter how small, will help. If those 50+ people that retweeted me beforehand all contributed $2 then that would just about be enough to keep this up for a year.

        If the community contributes more than we need then there are a number of additional things that could be done:

        - Host additional databases (Northwind anyone??)
        - Host in more datacentres (this first one is in Western Europe)
        - Make a charitable donation

        That last one, a charitable donation, is something I would really like to do. The SQL community have proved before that they can make a significant contribution to charitable organisations through purchasing the SQL Server MVP Deep Dives book, and I harbour hopes that AdventureWorks on Azure can continue in that vein.

    So please, if you think AdventureWorks on Azure is something that is worth supporting, please make a contribution. I'd like to emphasize that last point: if my hosting this Olympics data is useful to you, please support this initiative by donating. Thanks in advance.

    @Jamiet
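    If you'd rather poke at the data from code than from SSMS, here is a hypothetical Python sketch using pyodbc and the credentials given above; the ODBC driver name is an assumption, and any recent SQL Server ODBC driver should do.

        # Hypothetical sketch: connect to the community-hosted database with the
        # credentials from the post and count the rows in each olympics table.
        # The ODBC driver name is an assumption; use whichever SQL Server ODBC
        # driver is installed on your machine.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=mhknbn2kdz.database.windows.net;"
            "DATABASE=AdventureWorks2012;UID=sqlfamily;PWD=sqlf@m1ly"
        )
        cursor = conn.cursor()

        for table in ("Sport", "Discipline", "Event", "Medalist"):
            count = cursor.execute(
                f"SELECT COUNT(*) FROM [olympics].[{table}]").fetchone()[0]
            print(f"olympics.{table}: {count} rows")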


  • I am not able to delete a corrupt NTFS partition on my pen drive. How can I force its deletion?

    - by yesuraj
    I formatted my 16GB pen drive with the NTFS file system in Windows Vista. After that I started copying some files. However, only a few files were copied to the pen drive before the copy operation hung, so I cancelled the copy operation. Now I am unable to use the pen drive. I DON'T REALLY NEED ANY FILES THAT I COPIED TO THE PENDRIVE. I JUST WANT TO USE THE PENDRIVE AGAIN.

    I have tried using Ubuntu to format the pen drive. But when I use fdisk to delete the partition, it looks like it is working fine, but in fact it does not delete the partition. Also, I am unable to format it with any other file system. When I tried to use gparted, it threw the following error:

        Error mounting: mount exited with exit code 14: The disk contains an unclean file system (0,0). The file system wasn't safely closed on Windows. Fixing ntfs_attr_pread_i: ntfs_pread failed: Input/output error Failed to read NTFS $Bitmap: Input/output error NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the dmraid documentation for more details.

    When I searched the Internet I found help on how to recover. But I don't want to recover, I want to format it again. When I pressed w after deleting the partition, it took more time than previously. After that I removed the pen drive and re-inserted it, but the partition I had deleted was still present. If I simply type the command fdisk /dev/sdb without removing the pen drive after the partition is deleted, then it returns the error message "Unable to open /dev/sdb". Here are the steps that I followed:

        root@yesuraj-ubuntu:~# fdisk /dev/sdb
        Command (m for help): d
        Selected partition 1
        Command (m for help): w
        The partition table has been altered!
        Calling ioctl() to re-read partition table.
        Syncing disks

    The dmesg prints are as follows:

        [ 6139.774753] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
        [ 6154.816941] usb 2-1.3: device descriptor read/64, error -110
        [ 6169.968908] usb 2-1.3: device descriptor read/64, error -110
        [ 6170.158427] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
        [ 6185.200638] usb 2-1.3: device descriptor read/64, error -110
        [ 6200.352572] usb 2-1.3: device descriptor read/64, error -110
        [ 6200.542093] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
        [ 6205.559460] usb 2-1.3: device descriptor read/8, error -110

    I used the dd command and it erased the partition table. But now when I connect the pen drive, dmesg contains this error message: [88143.437001] sdb: unknown partition table. I am not able to create a partition using fdisk /dev/sdb; the error message says that it is unable to find the node. Other messages from dmesg follow below:

        [87100.531596] usb 2-1.3: new high speed USB device number 39 using ehci_hcd
        [87130.915257] usb 2-1.3: new high speed USB device number 40 using ehci_hcd
        [87135.932647] usb 2-1.3: device descriptor read/8, error -110


  • About the K computer

    - by nospam(at)example.com (Joerg Moellenkamp)
    Okay, after getting yet another mail because of the new #1 on the Top500 list, I want to add some comments from my side:

    - Yes, the system is using SPARC processors. And that is great news for a SPARC fan like me. It is using the SPARC64 VIIIfx processor from Fujitsu, clocked at 2 GHz.

    - No, it isn't the only one. Most people are saying there are two systems in the Top500 list using SPARC (#77 JAXA and #1 K), but in fact there are three. The Tianhe-1 (#2 on the Top500 list) supercomputer contains 2048 Galaxy "FT-1000" 1 GHz 8-core processors. Don't know it? The FeiTeng-1000: this proc is an 8-core, 8-threads-per-core, 1 GHz processor made in China. And it's SPARC based. By the way, this sounds really familiar to me; perhaps the people just took the open-sourced UltraSPARC T2 design, because some of the parameters sound just too similar. However, it looks like Tianhe-1 is using the SPARCs as input nodes and not as compute nodes.

    - No, I don't see it as the next M-series processor. Simple reason: you can't create SMP systems out of these processors; they simply don't have the functionality to do so. Even when there are multiple CPUs on a single board, they are not connected like an SMP/NUMA machine to form a shared memory machine; they are connected with the cluster interconnect (in this case the Tofu interconnect) and work like a large cluster.

    - Yes, it has a lot of oomph in Linpack; however, I assume a lot came from the extensions to the SPARCv9 standard.

    - No, Linpack has no relevance for any commercial workload. Linpack is such a special load that even some HPC people are arguing that it isn't really a good benchmark for HPC. It's embarrassingly parallel, and it can work with relatively small interconnects compared to the interconnects in SMP systems (however, we are now getting into the spheres where SMP interconnects were just a few years ago). Amdahl isn't hitting that hard when running Linpack.

    - Yes, it's a good move to use SPARC. At some time in the last 10 years, there was an interesting twist in perception: SPARC was considered the proprietary architecture and x86 the open architecture. However, it's vice versa: try to create an x86 clone and you have a lot of intellectual property problems; create a SPARC clone and you have to spend 100 bucks or so to get the specification from the SPARC Foundation and develop your own SPARC processor. Fujitsu has been doing this for a long time now. So they had their own processor and their own know-how. So why was SPARC a good choice? Well, essentially Fujitsu can do what they want with their core, as it is their core, for example adding the extensions to the SPARCv9 instruction set; getting Intel to create extensions to x86 to help you with your product is a little bit harder. So Fujitsu could do what they needed to do with their processor in order to create such a supercomputer.

    - No, the K is really using no FPGAs or GPUs as accelerators. The K is really using the CPU for doing this job. Yes, it has a significantly enhanced FPU capable of executing 8 instructions in parallel.

    - No, it doesn't run Solaris. Yes, it uses Linux. No, that doesn't hurt me... as my colleague Roland Rambau (he knows a lot about HPC) once said to me: it doesn't matter which OS is staying out of the way of the workload in HPC.

    Read the article

  • Hyper-V for Developers Part 1 Internal Networks

    Over the last year, we've been working with Microsoft to build training and demo content for the next version of Office Communications Server, code-named Microsoft Communications Server 14. This involved building multi-server demo environments in Hyper-V, getting them running on demo servers which we took to TechEd, PDC, and other training events, and sometimes connecting the demo servers to the show networks at those events. ITPro stuff that should scare the hell out of a developer! It can get ugly when I occasionally have to venture into ITPro land. Let's leave it at that. Having gone through this process about 10 to 15 times in the last year, I finally have it down. This blog series is my attempt to put all that knowledge in one place, if anything so I can find it somewhere when I need it again. I'll start with the most simple scenario and then build on top of it in future blog posts. If you're an ITPro, please resist the urge to laugh at how trivial this is.

    Internal Hyper-V Networks

    Let's start simple. An internal network is one that is intended only for the virtual machines that are going to be on that network; it enables them to communicate with each other.

    Create an Internal Network

    On your host machine, fire up the Hyper-V Manager and click the Virtual Network Manager in the Actions panel. Select Internal and leave all the other default values. Give the virtual network a name, and leave all the other default values. After the virtual network is created, open the Network and Sharing Center and click Change Adapter Settings to see the list of network connections. The only thing I recommend that you do is to give this connection a friendly label, e.g. "Hyper-V Internal". When you have multiple networks and virtual networks on the host machine, this helps group the networks so you can easily differentiate them from each other. Otherwise, don't touch it; only bad things can happen.

    Connect the Virtual Machines to the Internal Network

    I'm assuming that you have more than one virtual machine already configured in Hyper-V, for example a Domain Controller, an Exchange Server, and a SharePoint Server. What you need to do is basically plug the network into the virtual machine. In order to do this, the machine needs to have a virtual network adapter. If the VM doesn't have a network adapter, open the VM's Settings and click Add Hardware in the left pane. Choose the virtual network to bind the adapter to. If you already have a virtual network adapter on the VM, simply connect it to the virtual network.

    Assign IP Addresses to the Virtual Machines on the Internal Network

    Open the Network and Sharing Center on your VM; there should only be one network at this time. Open the Properties of the connection, select Internet Protocol Version 4 (TCP/IPv4) and hit Properties. In this environment, I'm assigning IP addresses as 192.168.0.xxx. This particular VM has an IP address of 192.168.0.40 with a subnet mask of 255.255.255.0, and a DNS server of 192.168.0.18. DNS is running on the Domain Controller VM, which has an IP address of 192.168.0.18. Repeat this process on every VM in your environment, obviously assigning a unique IP address to each. In an environment with a domain controller, you should now be able to ping the machines from each other.

    What Next?
After completing this process, here's what you still cannot do:

    Access the internet from any of the VMs
    Remote desktop to a VM from the host
    Remote desktop to a VM over the network

In the next post, we'll take a look at configuring an External network adapter on the virtual machines. We'll then build on top of that so that you can RDP into the VMs from the host machine and over the network.
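    Once the addresses are assigned, the final ping check can also be scripted rather than run by hand from each console. Here is a minimal sketch in C# using the .NET Ping class; the two addresses are just the example values from this walkthrough, so substitute your own:

        using System;
        using System.Net.NetworkInformation;

        class PingCheck
        {
            static void Main()
            {
                // Example addresses from the walkthrough above; adjust for your environment.
                string[] hosts = { "192.168.0.18", "192.168.0.40" };

                using (var ping = new Ping())
                {
                    foreach (var host in hosts)
                    {
                        // 1000 ms timeout per host.
                        PingReply reply = ping.Send(host, 1000);
                        Console.WriteLine("{0}: {1}", host, reply.Status);
                    }
                }
            }
        }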

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high-cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now “xVelocity in-memory analytics engine (VertiPaq)”, but using xVelocity and VertiPaq when we talk about Analysis Services has the same meaning. In this post I’ll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to be optimized.

    A first approach can be looking in the DataDir of Analysis Services for the folder containing the database. Then look for the biggest files in all subfolders, and you will find the name of a file that contains the name of the most expensive column. However, this heuristic process is not very efficient. A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in, and execute it) you will see all database objects sorted by used size in descending order:

        SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
        ORDER BY used_size DESC

    You can look at the first rows in order to understand what the most expensive columns in your tabular model are. The interesting data provided are:

        TABLE_ID: the name of the object; it can also be a dictionary or an index
        COLUMN_ID: the column the object belongs to; you may also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes
        RECORDS_COUNT: the number of rows in the column
        USED_SIZE: the memory used by the object

    By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your tabular model. Your options are:

        Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model.
        Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating-point number but two decimals are good enough (e.g. a temperature), round the number to the nearest decimal that is relevant to you.
        Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article.
        Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don’t have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower-density columns (those with a small number of distinct values) and going to higher-density columns (those with high cardinality) is the technique that provides the best compression ratio.

    After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session “VertiPaq Under the Hood” on March 21 at 08:00 GMT.
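    If you would rather pull these numbers into code than read them in SSMS, the same DMV can be queried over ADOMD.NET. The following is a rough sketch, not part of the original article: the connection string is a placeholder for your own Tabular instance and database, and the bytes-per-row arithmetic simply restates the USED_SIZE/RECORDS_COUNT ratio discussed above.

        using System;
        using Microsoft.AnalysisServices.AdomdClient;

        class ColumnSizes
        {
            static void Main()
            {
                // Hypothetical connection string; point it at your Tabular instance and database.
                var connStr = "Data Source=localhost;Catalog=MyTabularDb";

                using (var conn = new AdomdConnection(connStr))
                {
                    conn.Open();
                    var cmd = conn.CreateCommand();
                    cmd.CommandText =
                        "SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS " +
                        "ORDER BY used_size DESC";

                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            long records = Convert.ToInt64(reader["RECORDS_COUNT"]);
                            long used = Convert.ToInt64(reader["USED_SIZE"]);

                            // The ratio discussed above: bytes used per row of the column.
                            double bytesPerRow = records > 0 ? (double)used / records : 0;

                            Console.WriteLine("{0} / {1}: {2} bytes, {3} rows, {4:F2} B/row",
                                reader["TABLE_ID"], reader["COLUMN_ID"], used, records, bytesPerRow);
                        }
                    }
                }
            }
        }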

    Read the article

  • Bios Memory settings and Virtualization + Ubuntu (Unofficial Answers Welcome) [closed]

    - by TardisGuy
    Attempting to optimize my (main, Windows-less) Ubuntu system for my uses, I will detail my questions below. I understand this might be the wrong place to ask these questions; if so, my apologies, and I thank you so much for your patience. Thanks to all the volunteers that have helped me learn Ubuntu over the years (since 5.10). This is a "short" list of questions I have been trying to figure out for some time. If you feel you can answer one but not another, that's already more than I could ask for. I have written this up in a format for easy navigation to the important points, hopefully to annoy your eyes less. You're welcome :) or I'm sorry I annoy you. :( If you would be so kind, please format answers as follows: "question 1: _ _ _ _ _" or "question 1-a: _ _ _ _ _". If you want to simply link me to relevant information rather than type up something really detailed, that would be more than awesome!

    Memory-Specific Questions

    Goal: maximizing memory bandwidth to perform better in virtualization and large-file compression. (Possible conflict?)

    1. Ganged vs. unganged. "Which is better?" is relative, I know. But what about ganged vs. unganged, with or without bank/channel interleaving?
    a. Speculation: if I understand correctly, channel interleaving has something to do with using both channels to read or write in a kind of "striping" pattern, as opposed to a standard half-duplex operation (probably wrong). But wouldn't ganged channels make this irrelevant?

    2. Memory interleaving (bank). Does it have a downside? Does it require a ratio of clocks? (If I run 4x4 GB DDR3.)
    a. If I'm reading correctly (trying to learn), this is designed to spread operations between latency cycles to work around the higher latency of "normal" operation.
    b. However, it seems to me that it has to be divisible by fractions of a master clock? So if I run memory at 1333 MHz, then the mean between 2 (physical) banks would operate every (roughly) 600 MHz? Warning! Possibly utter nonsense: (1333/2 interleaving to act like 1 memory module per 2 sticks of a total of 4 sticks, meaning 2x channels@4)
    c. Which makes me wonder if there would be leftover clock cycles the system would have to "truncate/balance" or something? But I'm certain there's a feature somewhere I don't understand.

    Virtualization Questions

    3. AMD-V, option of IOMMU. I turned it on; why do I have an extra option of "64MB"? If IOMMU is on but "64MB" is "disabled", is it on? (I have scoured Google; I still don't know.)
    a. I think I understand that it's supposed to (kind of) set aside a part of RAM to act as a faster interactive zone for "stuff" (USB, graphics, and... what?).
    b. I am using Nvidia graphics on AMD. (I used the kernel options "iommu=pt iommu=1"; pt for "passthrough"? No idea what they do; I found them on Google to solve a boot-up issue.)
    c. Will this option help me use low-latency sound hardware, like my MIDI keyboard?

    4. Can you recommend any additional tweaks?
    a. sysctl settings?
    b. swap settings?

    Congrats, you've reached the end. Thanks for reading.

    Read the article

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end users. Among those offerings is the “Data Hub” – a codename for a project that ironically actually does what the codename says.

    In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases made changes to one of the copies. It’s difficult to know which location or version of the data is authoritative. Then there’s the problem of accessing the data. It’s fairly straightforward to publish a database, share or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, bringing up the security, connection-string, and exploration issues all over again.

    Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it’s available to you - and only you and your organization, if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you have multiple options by which you or your users can access that data. You’re then able, if you wish, to combine that data with other data in one location.

    So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) who know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I’ll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you’ll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage. After the data is entered in, and you’ve assigned metadata to describe it, your users have multiple options to access it. They can simply use the portal – which actually has powerful visualizations you can use on any platform, even mobile phones or tablets.

    Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924. You can make HTTP calls instead of code, and the data can even be exposed as an OData feed. As you can see, there are a lot of options.

    You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
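    Because the data can surface as an OData feed, any HTTP-capable client can consume it. Here is a minimal sketch in C#; note that the feed URL is a made-up placeholder, not a real Data Hub endpoint, and the credential handling is only illustrative; use whatever authentication your portal assigns:

        using System;
        using System.Net;

        class DataHubFeed
        {
            static void Main()
            {
                // Hypothetical OData feed URL; replace with the endpoint for your data area.
                var feedUrl = "https://example-datahub.cloudapp.net/odata/MyDataSet";

                using (var client = new WebClient())
                {
                    // Placeholder credentials; use the ones your Data Hub administrator assigned.
                    client.Credentials = new NetworkCredential("user", "password");

                    // The raw payload is an Atom/OData document; parse it as needed.
                    string feed = client.DownloadString(feedUrl);
                    Console.WriteLine(feed);
                }
            }
        }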

    Read the article

  • WF4 – Guess the number game!

    - by MarkPearl
    I posted yesterday about how really good WF4 is looking. Today I thought I would show some real basics that I was able to figure out. This will be a simple example: I am going to make a flowchart workflow which will prompt the user to guess a number until they guess the right one. Let's begin…

    Make a new project and make it a Workflow Console Application. Then select the Workflow file and drag a Flowchart (2) to point 3. This will now show a green start circle in the designer form. We are going to work with primitives to start with. We now drag a few objects onto the workflow: the WriteLine, Assign & Decision items. Once they are dragged onto the designer we want to link them up. The order in which they are linked is critical, since this determines the order of execution. In this case, we want the system to first ask "Guess a number", then to wait for the user to input some number, and then to display "You got it" if they got it right, and "Try again" if they got it wrong. So we now link the arrows to the objects. This is done by moving the mouse pointer over the source object, clicking on one of the toggles, dragging it to the next object, and releasing the button over one of its toggles. This places an arrow from the source object to the target object.

    Okay, pretty simple stuff; now we just need these primitive objects to do stuff. Let's start with the WriteLine primitive. We place the text in inverted commas in the Text field. Because this field accepts any valid VB expression, we could have put variables etc. in there if we wanted to. The next thing we want to do is allow the user to input a number. This brings up an interesting problem: if a user were to type in a number, there would need to be some way to declare a variable to hold that value for the life of the workflow. We can achieve this by declaring a variable. To declare a variable, move your cursor over the Variables tab at the bottom of the workflow, then type the name of the new variable in the "Create Variable" field and set it as shown in the image above. Now that we have a variable, we want to call the Console.ReadLine method and assign the inputted value from the console to that variable. The code that cannot be seen is actually this: Convert.ToInt32(Console.ReadLine())

    We now have a workflow that first prompts the user for a number, then allows the user to type in a number. We are almost done; we just need to make the system react to the value entered. There are a few ways we could do this; I am going to use the Decision item. So select the Decision object on the designer, view its properties (F4 for me), and in the Condition field place a condition. For simplicity's sake I have decided that if the user guesses 10, they will have guessed the number. This is now the completed workflow. It's really easy to understand and shows some really powerful principles for business applications. You can run the application and see what it does.

    Imagine writing business solutions that do not worry about the exact flow of objects but simply allow a business analyst or someone to configure the solution to work exactly as the business rules dictate. And if the rules changed six months later, all they would need to do is re-drag some of the flows. Now I do not know if WF4 will allow for this, but it feels like a step in the right direction.
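    For anyone curious what the same flowchart looks like when composed in code instead of the designer, here is a rough sketch using the WF4 object model. This is my own illustration rather than anything generated by the designer; the variable name and the hard-coded winning number of 10 simply mirror the walkthrough above:

        using System;
        using System.Activities;
        using System.Activities.Expressions;
        using System.Activities.Statements;

        class Program
        {
            static void Main()
            {
                var guess = new Variable<int>("guess");

                // The three primitives from the designer: WriteLine, Assign and Decision.
                var ask = new FlowStep { Action = new WriteLine { Text = "Guess a number" } };
                var read = new FlowStep
                {
                    Action = new Assign<int>
                    {
                        To = new OutArgument<int>(guess),
                        Value = new InArgument<int>(ctx => Convert.ToInt32(Console.ReadLine()))
                    }
                };
                var gotIt = new FlowStep { Action = new WriteLine { Text = "You got it" } };
                var tryAgain = new FlowStep { Action = new WriteLine { Text = "Try again" } };

                // The Decision: 10 is the "right" number, as in the walkthrough.
                var check = new FlowDecision
                {
                    Condition = new LambdaValue<bool>(ctx => guess.Get(ctx) == 10),
                    True = gotIt,
                    False = tryAgain
                };

                // Wire the arrows: ask -> read -> check, with "Try again" looping back to read.
                ask.Next = read;
                read.Next = check;
                tryAgain.Next = read;

                var flowchart = new Flowchart { Variables = { guess }, StartNode = ask };
                WorkflowInvoker.Invoke(flowchart);
            }
        }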

    Read the article

  • Profiling Silverlight Applications after installing Visual Studio 2010 Service Pack 1

    - by mbcrump
    Introduction

    Now that the dust has settled and everyone has downloaded and installed Visual Studio 2010 Service Pack 1, it's time to talk about a new feature included that will help Silverlight developers profile their applications. Let's take a look at what the official documentation says about it:

    Performance Wizard for Silverlight – taken from the VS2010 SP1 KB: "Visual Studio 2010 SP1 enables you to tune the Silverlight application performance by profiling the code. A traditional code profiler cannot tune the rendering performance for Silverlight applications. Many higher-level profilers are added to Visual Studio 2010 SP1 so that you can better determine which parts of the application consume time."

    So, how do you do it? After you finish installing VS2010 SP1, make sure it took by going to Help –> About. You should see SP1Rel under Visual Studio 2010, as shown below. Now that we have verified you are on the most current release, let's load up a Silverlight application. I'm going to take my hobby Silverlight project that I created a month or so ago. The reason I'm picking this project is that I didn't focus much on performance; it was just built for fun and to see what I could do with Silverlight. I believe this makes it the perfect application to profile.

    After the project is loaded, click on Analyze, then Launch Performance Wizard. Go ahead and click on CPU Sampling (recommended). You will notice that it asks which application to target. By default, it will select the .Web project in a Silverlight application. Go ahead and leave the default web project checked. We are going to leave the client as Internet Explorer. Now, go ahead and click Finish. Your Silverlight application will launch. While it is running, you will see the following inside of Visual Studio 2010. Here is where you will need to attach your Silverlight application to the web application that is currently being profiled. Simply click on the Attach/Detach button below and find your application to attach to the profiler. In my case, I am using IE8 and could find it by the title. After you close your browser, you will notice it generated a report; these files end with .VSP. If you click on the .VSP file, you will see that it generated the following report.

    We could turn off "Just My Code", but it may pick up things that we didn't want to profile, as shown below. One other feature to note is that you may want to export the data to a CSV or XML file. You can do that from the toolbar by clicking the button highlighted below.

    Conclusion

    The profiler for Silverlight is a great addition to an already great product. So before you ship a Silverlight application, run it through the profiler and see what comes up. Since it's included and free, I can't see a reason not to do this. Thanks again for reading, and I hope you subscribe to my blog or follow me on Twitter for more Silverlight/WP7 fun.

    Read the article

  • Consumer Oriented Search In Oracle Endeca Information Discovery - Part 2

    - by Bob Zurek
    As discussed in my last blog posting on this topic, Information Discovery, a core capability of the Oracle Endeca Information Discovery solution, enables businesses to search, discover and navigate through a wide variety of big data including structured, unstructured and semi-structured data. With search as a core advanced capability of our product, it is important to understand some of the key differences and capabilities in the underlying data store of Oracle Endeca Information Discovery, and that is our Endeca Server. In the last post on this subject, we talked about exploratory search capabilities along with support for cascading relevance. Additional search capabilities in the Endeca Server, which differentiate it from the simple keyword-based "search boxes" in other Information Discovery products, include:

    The Endeca Server Supports Set Search. The Endeca Server is organized around set retrieval, which means that it looks at groups of results (all the documents that match a search), as well as the relationship of each individual result to the set. Other approaches only compute the relevance of a document by comparing the document to the search query, not by comparing the document to all the others. For example, a search for "U.S." in another approach might match the title of a document and get a high ranking. But what if it were a collection of government documents in which "U.S." appeared in many titles, making that clue less meaningful? A set analysis would reveal this and be used to adjust relevance accordingly.

    The Endeca Server Supports Second-Order Relevance. Unlike simple search interfaces in traditional BI tools, which provide limited relevance ranking, such as a list of results based on keyword matching, Endeca enables users to determine the most salient terms to divide up the results. Determining this second-order relevance is the key to providing effective guidance.

    Support for Queries and Filters. Search is the most common query type, but hardly the only one, and users need to express a wide range of queries. Oracle Endeca Information Discovery also includes navigation, interactive visualizations, analytics, range filters, geospatial filters, and other query types that are more commonly associated with BI tools. Unlike other approaches, these queries operate across structured, semi-structured and unstructured content stored in the Endeca Server. Furthermore, this set is easily extensible because the core engine allows for pluggable features to be added. Like a search engine, queries are answered with a results list, ranked to put the most likely matches first. Unlike "black box" relevance solutions, which generalize one strategy for everyone, we believe that optimal relevance strategies vary across domains. Therefore, the product provides line-of-business owners with a set of relevance modules that let them tune the best results based on their content. The Endeca Server query result sets are summarized, which gives users guidance on how to refine and explore further. Summaries include Guided Navigation® (a form of faceted search), maps, charts, graphs, tag clouds, concept clusters, and clarification dialogs. Users don't explicitly ask for these summaries; Oracle Endeca Information Discovery analytic applications provide the right ones, based on configurable controls and rules. For example, the analytic application might guide a procurement agent filtering for in-stock parts by visualizing the results on a map and calculating their average fulfillment time.
Furthermore, the user can interact with summaries and filters without resorting to writing complex SQL queries; the user can simply click to add filters. Within Oracle Endeca Information Discovery, all parts of the summaries are clickable and searchable. We are living in a search-driven society where business users really seem to enjoy entering information into a search box. We do this every day as consumers, and therefore we have gotten used to looking for that box. However, the key to getting the right results is to guide the user in a way that provides additional discovery, beyond what they may have anticipated. This is why these advanced search features inside the Endeca Server have been so important: they have helped to guide our great customers to success.

    Read the article

  • XNA 2D Board game - trouble with the cursor

    - by Adorjan
    I have just started making a simple 2D board game using XNA, but I got stuck on the movement of the cursor. This is my problem: I have a 10x10 table on which I should use a cursor to navigate. I simply made that table with the spriteBatch.Draw() function because I couldn't do it any other way. Here is what I did with the cursor:

        public override void LoadContent()
        {
            ...
            mutato.Position = new Vector2(X, Y); // X=103, Y=107;
            mutato.Sebesseg = 45;
            ...
            mutato.Initialize(content.Load<Texture2D>("cursor"), mutato.Position, mutato.Sebesseg);
            ...
        }

        public override void HandleInput(InputState input)
        {
            if (input == null)
                throw new ArgumentNullException("input");

            // Look up inputs for the active player profile.
            int playerIndex = (int)ControllingPlayer.Value;

            KeyboardState keyboardState = input.CurrentKeyboardStates[playerIndex];

            if (input.IsPauseGame(ControllingPlayer) || gamePadDisconnected)
            {
                ScreenManager.AddScreen(new PauseMenuScreen(), ControllingPlayer);
            }
            else
            {
                // Otherwise move the player position.
                if (keyboardState.IsKeyDown(Keys.Down))
                {
                    Y = (int)mutato.Position.Y + mutato.Move;
                }
                if (keyboardState.IsKeyDown(Keys.Up))
                {
                    Y = (int)mutato.Position.Y - mutato.Move;
                }
                if (keyboardState.IsKeyDown(Keys.Left))
                {
                    X = (int)mutato.Position.X - mutato.Move;
                }
                if (keyboardState.IsKeyDown(Keys.Right))
                {
                    X = (int)mutato.Position.X + mutato.Move;
                }
            }
        }

        public override void Draw(GameTime gameTime)
        {
            mutato.Draw(spriteBatch);
        }

    Here's the cursor's (mutato) class:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        namespace Battleship.Components
        {
            class Cursor
            {
                public Texture2D Cursortexture;
                public Vector2 Position;
                public int Move;

                public void Initialize(Texture2D texture, Vector2 position, int move)
                {
                    Cursortexture = texture;
                    Position = position;
                    Move = move;
                }

                public void Update()
                {
                }

                public void Draw(SpriteBatch spriteBatch)
                {
                    spriteBatch.Draw(Cursortexture, Position, Color.White);
                }
            }
        }

    And here is the part of the InputState class where I think I should change something:

        public bool IsNewKeyPress(Keys key, PlayerIndex? controllingPlayer, out PlayerIndex playerIndex)
        {
            if (controllingPlayer.HasValue)
            {
                // Read input from the specified player.
                playerIndex = controllingPlayer.Value;
                int i = (int)playerIndex;
                return (CurrentKeyboardStates[i].IsKeyDown(key) && LastKeyboardStates[i].IsKeyUp(key));
            }
        }

    If I leave the movement operation like this, it doesn't make any sense:

        X = (int)mutato.Position.X - mutato.Move;

    However, if I modify it to this, it moves smoothly:

        X = (int)mutato.Position.X--;

    Instead of this I need to move the cursor by fields (45 pixels), but I don't have any idea how to manage it.
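    A sketch of one common fix, assuming the usual XNA edge-detection pattern that the quoted IsNewKeyPress helper hints at: move the cursor one whole field only on the frame a key first goes down, and actually write the result back into the cursor's Position. The HandleCursorInput method name and the previousKeyboardState field are made up for illustration; the 45-pixel step comes from the question:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Input;

        // Inside the screen class: remember last frame's keyboard state so a
        // held key moves the cursor once per press, not once per frame.
        KeyboardState previousKeyboardState;

        public void HandleCursorInput(KeyboardState keyboardState)
        {
            const int step = 45; // one board field in pixels

            if (keyboardState.IsKeyDown(Keys.Down) && previousKeyboardState.IsKeyUp(Keys.Down))
                mutato.Position += new Vector2(0, step);
            if (keyboardState.IsKeyDown(Keys.Up) && previousKeyboardState.IsKeyUp(Keys.Up))
                mutato.Position -= new Vector2(0, step);
            if (keyboardState.IsKeyDown(Keys.Right) && previousKeyboardState.IsKeyUp(Keys.Right))
                mutato.Position += new Vector2(step, 0);
            if (keyboardState.IsKeyDown(Keys.Left) && previousKeyboardState.IsKeyUp(Keys.Left))
                mutato.Position -= new Vector2(step, 0);

            previousKeyboardState = keyboardState;
        }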

    Read the article

< Previous Page | 382 383 384 385 386 387 388 389 390 391 392 393  | Next Page >