Search Results

Search found 6020 results on 241 pages for 'valid'.

Page 211/241

  • SPECIALIZATION: DIFFERENTIATE YOURSELF AND BE RECOGNIZED

    - by michaela.seika(at)oracle.com
    SPECIALIZATION: DIFFERENTIATE YOURSELF AND BE RECOGNIZED
    The market is asking us for more and more solutions and commitments. To build these solutions we rely on you, Oracle Partners. In terms of commitments, Oracle must communicate to the market about the specialization of its partners and their skills, in line with the projects that customers ask us to address. More than 50 specializations are available today to Gold, Platinum and Diamond partners:
    • on Technology products such as the Database, the Database options, SOA, Business Intelligence, ...
    • on Applications products such as ERP, CRM, ...
    • on Hardware products and operating systems.
    To help you specialize, and therefore become certified, our two value-added distributors, Altimate and Arrow ECS, assist you in this process. ALTIMATE invites you to take part in the Lunch & Specialization tour: take advantage of these programs, set up for you, to get specialized and enjoy all the benefits that specialization gives you access to. ARROW ECS invites you to take part in the Oracle Specialization School by Arrow ("L'Ecole de la spécialisation Oracle by Arrow") and in the Oracle Solutions Tour: discover the Oracle solution during this tour of France, with roadmaps, product and solution workshops, and certifications on the agenda.
    BENEFITS (learn more):
    • Oracle's commitment alongside its partners to address major deals
    • visibility with customers, to be identified as an Expert on an offering, recognized and validated by Oracle
    • support (free access to support) and Oracle University (vouchers to certify your Implementation Consultant teams free of charge)
    • Marketing budgets (lead generation, creation of Marketing campaigns, sponsoring customer events)
    Differentiate yourself by specializing in your area of expertise and accelerate your success! Oracle and its Value-Added Distributors. Eric Fontaine, Alliances & Channel Technology Director for Southern Europe, presents specialization and its advantages in a video.
    CONTACTS: ORACLE – Jean-Jacques Panissié, Oracle Partner Development A&C Technology, +33 157 60 28 52 | ALTIMATE – Sophie Daval, +33 1 34 58 47 68 | ARROW – [email protected], +33 1 49 97 59 63

    Read the article

  • Configure IPv6 on your Linux system (Ubuntu)

    After the presentation on IPv6 at the first event of the Emtel Knowledge Series and some recent discussion on social media networks with other geeks and Linux-interested IT people here in Mauritius, I thought that I should give it a try (finally) and tweak my local network infrastructure. Honestly, I have been too busy with contractual project work and it never really occurred to me to set up IPv6 in my LAN. Well, the following paragraphs are going to shed some light on those aspects of modern computer and network technology. This is the first article in a series on IPv6 configuration:
    Configure IPv6 on your Linux system
    DHCPv6: Provide IPv6 information in your local network
    Enabling DNS for IPv6 infrastructure
    Accessing your web server via IPv6
    Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man pages on my local system.
    Let's embrace IPv6
    The basic configuration on Linux is actually very simple, as the kernel, operating system, and user-space programs support the protocol natively. If your system is ready to go for IP (aka IPv4), then you are good to go for anything else. At least, I didn't have to install any additional packages on my system(s). We are going to assign a static IPv6 address to the system. Hence, we have to modify the definition of interfaces and check whether we have an inet6 entry specified. Open your favourite text editor and check the following entries (it should be at least similar to this):
    $ sudo nano /etc/network/interfaces
    auto eth0
    # IPv4 configuration
    iface eth0 inet static
      address 192.168.1.2
      network 192.168.1.0
      netmask 255.255.255.0
      broadcast 192.168.1.255
    # IPv6 configuration
    iface eth0 inet6 static
      pre-up modprobe ipv6
      address 2001:db8:bad:a55::2
      netmask 64
    Of course, you might have to adjust your interface device (eth0) or you might be interested in having multiple directives for additional devices (eth1, eth2, etc.). The auto instruction takes care that your device is enabled and configured during the boot phase. The use of the pre-up directive depends on your kernel configuration, but in most scenarios this might be an optional line. Anyway, it doesn't hurt to have it enabled after all - just to be on the safe side. Next, either restart your network subsystem like so:
    $ sudo service networking restart
    Or you might prefer to do it manually with identical parameters, like so:
    $ sudo ifconfig eth0 inet6 add 2001:db8:bad:a55::2/64
    In case you're logged in remotely to your PC (i.e. via SSH), it is highly advised to opt for the second choice and add the device manually. You can check your configuration afterwards with one of the following commands (depending on which is installed):
    $ sudo ifconfig eth0
    eth0      Link encap:Ethernet  HWaddr 00:21:5a:50:d7:94
              inet addr:192.168.160.2  Bcast:192.168.160.255  Mask:255.255.255.0
              inet6 addr: fe80::221:5aff:fe50:d794/64 Scope:Link
              inet6 addr: 2001:db8:bad:a55::2/64 Scope:Global
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
    $ sudo ip -6 address show eth0
    3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
        inet6 2001:db8:bad:a55::2/64 scope global
           valid_lft forever preferred_lft forever
        inet6 fe80::221:5aff:fe50:d794/64 scope link
           valid_lft forever preferred_lft forever
    In both cases, it confirms that our network device has been assigned a valid IPv6 address. That's it in general for your setup on one system.
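    As a quick sanity check (this sketch is not part of the original write-up, and it assumes the example address 2001:db8:bad:a55::2 and the interface eth0 used above), you can confirm that the address answers and that the /64 prefix is routed over the interface:
    $ ping6 -c 3 2001:db8:bad:a55::2               # the global address should reply locally
    $ ping6 -c 3 -I eth0 fe80::221:5aff:fe50:d794  # link-local addresses need the interface given explicitly
    $ ip -6 route show dev eth0                    # the 2001:db8:bad:a55::/64 prefix should be listed here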
But of course, you might be interested in enabling more services for IPv6, especially if you're already running a couple of them in your IPv4 network. More details are available on the official Ubuntu Wiki. Next, continue by configuring your network to provide IPv6 address information automatically in your local infrastructure.

    Read the article

  • SQL SERVER – Not Possible – Delete From Multiple Table – Update Multiple Table in Single Statement

    - by pinaldave
    There are two questions which I get every single day, multiple times. In my Gmail, I have created a standard canned reply for them. Let us see the questions here. "I want to delete from multiple tables in a single statement - how will I do it?" "I want to update multiple tables in a single statement - how will I do it?" The answer is - no, you cannot, and you should not. SQL Server does not support deleting or updating two tables in a single statement. If you want to delete or update two different tables, you will need to write two different delete or update statements for it. Trying to do it in one statement raises many issues - from the consistency of the data to the SQL syntax itself. Now here is the real reason for this blog post - yesterday I was asked this question again and I replied with my canned answer saying it is not possible and it should not be implemented that way. In response to my reply I was pointed to my own blog posts, where the user suggested that I had previously mentioned this is possible, with a demo example. Let us go over my conversation - you may find it interesting. Let us call the user DJ. DJ: Pinal, can we delete multiple tables in a single statement or with a single delete statement? Pinal: No, you cannot and you should not. DJ: Oh okay, if that is the case, why do you suggest doing that? Pinal: (baffled) I am not suggesting that. I am rather suggesting that it is not possible and it should not be possible. DJ: Hmm... but in that case why did you blog about it earlier? Pinal: (What?) No, I did not. I am pretty confident. DJ: Well, I am confident as well. You did. Pinal: In that case, it is my word against your word. Isn't it? DJ: I have proof. Do you want to see proof that you suggested it is possible? Pinal: Yes, I would be delighted to. (After 10 minutes) DJ: Here are not one but two of your blog posts which talk about it - SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2, and SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – T-SQL Example – Part 2 of 2. Pinal: Oh! DJ: I knew I was correct. Pinal: Well, oh man, I did not mean there what you mean here. DJ: I did not understand; can you explain it further? Pinal: Here we go. The examples in those blog posts are examples of cascading delete and cascading update. I think you may want to understand the concept of foreign keys and cascading update/delete. The concept of cascading exists to maintain data integrity. If primary key rows get deleted or updated, the delete or update is reflected in the foreign key table to maintain key integrity and data consistency. SQL Server follows ANSI Entry SQL with regard to referential integrity between primary key and foreign key columns, which requires the inserting, updating, and deleting of data in related tables to be restricted to values that preserve the integrity. This is an altogether different concept from deleting or updating multiple tables in a single statement. When I hear that someone wants to delete or update multiple tables in a single statement, what I assume is something very similar to the following: DELETE/UPDATE Table1 (cols) Table2 (cols) VALUES ... which is not a valid statement/syntax, nor is it ANSI standard. I guess, after this discussion with DJ, I realized I needed to write this blog post so I can add its link to my canned answer. Well, it was a fun conversation with DJ and I hope the message is very clear now.
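    To make the recommended approach concrete, here is a minimal sketch (using hypothetical Orders/OrderDetails tables, not from the original post): write one DELETE per table and wrap both in a single transaction so the two statements succeed or roll back together.
    BEGIN TRANSACTION;
        -- delete the child rows first so the foreign key is not violated
        DELETE FROM dbo.OrderDetails WHERE OrderID = 42;
        -- then delete the parent row
        DELETE FROM dbo.Orders WHERE OrderID = 42;
    COMMIT TRANSACTION;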
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Internet Explorer 8 Standards Mode Results In Broken Blank Page

    - by Agent_9191
    I'm running into a weird issue that I'm struggling to figure out what's causing the page to break. I have an internal website that's still under development (thus no link to the page) that works great in Firefox and Internet Explorer 8 in IE 7 Standards mode. But when I force it to IE 8 Standards mode the page will only display the title text in the browser tab and an otherwise completely blank page. It seems so broken that the blank page doesn't even have a context menu. The page generally looks like this: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta content="IE=8" http-equiv="X-UA-Compatible" /> <title>Page Title</title> <link rel="shortcut icon" href="/Images/favicon.ico" type="image/x-icon" /> <link href="/Style/main.less" rel="stylesheet" type="text/css" /> </head> <body> ... </body> </html> You may notice the .less extension for the stylesheet. This is an ASP.NET MVC application and I'm making use of DotLess. I have the HttpHandler hooked up for it in the web.config. Of course there's some additional info on the page, but (in theory) it shouldn't be causing this issue. I've run the CSS and the HTML through the W3C validators and both have come back as completely valid. I'm trying the arduous task of removing/re-adding elements until it displays, but any insight into what could cause this would help. EDIT: it appears to be something related to the DotLess stylesheet. The resulting CSS is valid according to the W3C CSS validator. EDIT 2: Digging further, and making use of IE's Developer Tools to control the styles, it appears that IE is reading a single statement twice even though it only occurs once in the output. Here's the output of the Less file: a, abbr, acronym, address, applet, b, big, caption, center, cite, code, dd, dfn, div, dl, dt, em, fieldset, font, form, html, i, iframe, img, kbd, label, legend, li, object, pre, s, samp, small, span, strike, strong, sub, sup, tbody, td, tfoot, th, thead, tr, tt, u, var { margin: 0; padding: 0; border: 0; outline: 0; font-size: 100%; vertical-align: baseline; background: transparent; } blockquote, q { margin: 0; padding: 0; border: 0; outline: 0; font-size: 100%; vertical-align: baseline; background: transparent; quotes: none; } body { margin: 0; padding: 0; border: 0; outline: 0; font-size: 100%; vertical-align: baseline; line-height: 1; width: 100%; background: #efebde; min-width: 600px; } del { margin: 0; padding: 0; border: 0; outline: 0; font-size: 100%; vertical-align: baseline; background: transparent; text-decoration: line-through; } h1 { border: 0; outline: 0; vertical-align: baseline; background: transparent; font-size: 2em; margin: .8em 0 .2em 0; padding: 0; } h2 { border: 0; outline: 0; vertical-align: baseline; background: transparent; font-size: 1.8em; margin: .8em 0 .2em 0; padding: 0; } h3 { border: 0; outline: 0; vertical-align: baseline; background: transparent; font-size: 1.6em; margin: .8em 0 .2em 0; padding: 0; } h4 { margin: 0; padding: 0; border: 0; outline: 0; vertical-align: baseline; background: transparent; font-size: 1.4em; } h5 { margin: 0; padding: 0; border: 0; outline: 0; vertical-align: baseline; background: transparent; font-size: 1.2em; } h6 { margin: 0; padding: 0; border: 0; outline: 0; vertical-align: baseline; background: transparent; font-size: 1em; } ins { margin: 0; padding: 0; border: 0; outline: 0; font-size: 100%; vertical-align: baseline; background: transparent; 
text-decoration: none; } ol, ul { margin: 0; padding: 0; border: 0; outline: 0; font-size: 100%; vertical-align: baseline; background: transparent; list-style: none; } p { border: 0; outline: 0; font-size: 100%; vertical-align: baseline; background: transparent; margin: .4em 0 .8em 0; padding: 0; } table { margin: 0; padding: 0; border: 0; outline: 0; font-size: 100%; vertical-align: baseline; background: transparent; border-collapse: collapse; border-spacing: 0; } blockquote:before, blockquote:after, q:before, q:after { content: none; } :focus { outline: 0; } .bold { font-weight: bold; } .systemFont { font-family: Arial; } .labelled { font-style: italic; } .groovedBorder { border-color: #adaa9c; border-style: groove; border-width: medium; } #header, #footer { clear: both; float: left; width: 100%; } #header p, #header h1, #header h2 { padding: .4em 15px 0 15px; margin: 0; } #header ul { clear: left; float: left; width: 100%; list-style: none; margin: 10px 0 0 0; padding: 0; } #header ul li { display: inline; list-style: none; margin: 0; padding: 0; } #header ul li a { background: #eeeeee; display: block; float: left; left: 15px; line-height: 1.3em; margin: 0 0 0 1px; padding: 3px 10px; position: relative; text-align: center; text-decoration: none; } #header ul li a span { display: block; } #header ul li a:hover { background: #336699; } #header ul li a.active, #header ul li a.active:hover { background: black; font-weight: bold; } #header #logindisplay { float: right; padding-top: .5em; padding-bottom: .5em; padding-right: 1em; padding-left: 1em; } #title h1 { font-family: Arial; font-style: italic; font-size: 175%; text-align: center; margin-top: 1%; } .col1 { font-family: Arial; border-color: #adaa9c; border-style: groove; border-width: medium; min-height: 350px; float: left; overflow: hidden; position: relative; padding-top: 0; padding-bottom: 1em; padding-left: 0; padding-right: 0; } .col1 div.logo { text-align: center; } .col3 { font-family: Arial; border-color: #adaa9c; border-style: groove; border-width: medium; float: left; overflow: hidden; position: relative; } #layoutdims { clear: both; background: #eeeeee; margin: 0; padding: 6px 15px !important; text-align: right; } #company { padding-left: 10px; padding-top: 10px; margin: 0; } #company span { display: block; padding-left: 1em; } #version { padding-right: 1em; padding-top: 1em; text-align: center; } #menu li { padding: 6px; border-color: #adaa9c; border-style: groove; border-width: medium; min-width: 108px; } #menu li a.ciApp { text-decoration: none; font-size: 112.5%; font-weight: bold; font-family: Arial; color: black; } #menu li a.ciApp span { vertical-align: top; } .welcomemessage { font-size: 60.95%; } .newFeatures { overflow-y: scroll; max-height: 300px; } #newsfeed div .newsLabel { color: red; font-size: 60.95%; font-style: italic; } /************************************************************************************** This statement appears twice in Developer Tools. Disabling one disables both. Disabling it also causes the page to render. 
Turning it on and the page disappears again **************************************************************************************/ #newsfeed div .newFeatures { margin-left: 1em; margin-right: 1em; font-size: 60.95%; } /************************************************************************************** **************************************************************************************/ .colmask { clear: both; float: left; position: relative; overflow: hidden; width: 100%; } .colright, .colmid, .colleft { float: left; position: relative; width: 100%; } .col2 { float: left; overflow: hidden; position: relative; padding-top: 0; padding-bottom: 1em; padding-left: 0; padding-right: 0; } .threecol .colmid { right: 33%; } .threecol .colleft { right: 34%; } .threecol .col1 { width: 33%; left: 100%; } .threecol .col2 { width: 32%; left: 34%; } .threecol .col3 { width: 32%; left: 68.5%; } Notice the #newsfeed div .newFeatures selector near the end. I don't know what's causing the duplication, as the selector appears only once in the output stream. (A screenshot of the duplicated selector in Developer Tools accompanied the original question.) EDIT 3: It appears that even though IE duplicates that particular selector, if I change the font-size to a whole number like 61% instead of the current 60.95% (a value chosen specifically to match the existing desktop app as closely as possible), the page works fine. So the combination of IE duplicating that selector block and the font-size being a percentage specified to two decimal places appears to kill IE8 Standards mode completely.
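    As a hedged follow-up (not part of the original question): if the two-decimal value still needs to be kept around for comparison, one way to contain the problem in a DotLess stylesheet is to move it behind a Less variable so the whole-number workaround lives in a single place:
    // hypothetical variable name; 60.95% is the value IE8 Standards mode appears to choke on
    @small-text-size: 61%; // was 60.95%
    #newsfeed div .newFeatures {
      margin-left: 1em;
      margin-right: 1em;
      font-size: @small-text-size;
    }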

    Read the article

  • Reference Data Management and Master Data: How Are They Related?

    - by Mala Narasimharajan
    Submitted By: Rahul Kamath
    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise Master Data Management (MDM) solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or restructuring sales territories to enable equitable distribution of leads to sales teams following the acquisition of new products, or adding additional cost centers to enable fine-grained control over expenses. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.
    What is reference data? How does it relate to master data?
    Reference data is a close cousin of master data. While master data is challenged with problems of unique identification, may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler, more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and gives them contextual value. In fact, the creation of a new master data element may require new reference data to be created. For example, when a European company acquires a US business, chances are that they will now need to adapt their product line taxonomy to include a new category to describe the newly acquired US product line. Further, the cross-border transaction will also result in a revised geo hierarchy. The addition of new products represents changes to master data, while changes to product categories and geo hierarchy are examples of reference data changes.1
    The following list contains illustrative examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships & cross-domain mappings, or standards.
    Types & Codes: Transaction Codes; Lookup Tables (e.g., Gender, Marital Status, etc.); Status Codes; Role Codes; Domain Values
    Taxonomies: Industry Classification Categories and Codes, e.g., North American Industry Classification System (NAICS); Product Categories; Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense); Market Segments; Universal Standard Products and Services Classification (UNSPSC), eCl@ss
    Relationships / Mappings: Product / Segment; Product / Geo; City -> State -> Postal Codes; Customer / Market Segment; Business Unit / Channel; Country Codes / Currency Codes / Financial Accounts; International Classification of Diseases (ICD), e.g., ICD-9 -> ICD-10 mappings
    Standards: Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601); Currency Codes (e.g., ISO); Country Codes (e.g., ISO 3166, UN); Date/Time, Time Zones (e.g., ISO 8601); Tax Rates
    Why manage reference data?
    Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.
    Sample Use Cases of Reference Data Management
    Healthcare: Diagnostic Codes
    The reference data challenges in the healthcare industry offer a case in point.
Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since both code sets ICD-9 and ICD-10 offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders much in the same way as does master data management. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to “cross-walk” mappings -- either forward to ICD-10 or backwards to ICD-9 depending upon the reporting time horizon. Spend Management: Product, Service & Supplier Codes Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager’s time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value added tasks. Change Management: Point of Sales Transaction Codes and Product Codes In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released at a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes with the appropriate product and GL codes to comply with the changing regulations. Multi-National Companies: Industry Classification Schemes As companies grow and expand across geographies, a typical challenge they encounter with reference data represents reconciling various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assess which applications were impacted by a given change. References 1 Master Data versus Reference Data, Malcolm Chisholm, April 1, 2006.
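    Returning to the ICD-9/ICD-10 cross-walk discussed above, here is a minimal sketch (hypothetical table and column names, not from the article) of how such a human-curated mapping is often persisted so reports can "cross-walk" in either direction:
    -- Each row records a curated code mapping plus who approved it,
    -- capturing the consensus-building step the article describes.
    CREATE TABLE icd_crosswalk (
        icd9_code      VARCHAR(10)  NOT NULL,
        icd10_code     VARCHAR(10)  NOT NULL,
        map_direction  CHAR(1)      NOT NULL,  -- 'F' = forward (9 -> 10), 'B' = backward (10 -> 9)
        approved_by    VARCHAR(100) NULL,      -- the human judgment behind the mapping
        effective_date DATE         NOT NULL,
        PRIMARY KEY (icd9_code, icd10_code, map_direction)
    );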

    Read the article

  • WIF, ADFS 2 and WCF – Part 5: Service Client (more Flexibility with WSTrustChannelFactory)

    - by Your DisplayName here!
    See the previous posts first. WIF includes an API to manually request tokens from a token service. This gives you more control over the request and more flexibility since you can use your own token caching scheme instead of being bound to the channel object lifetime. The API is straightforward. You first request a token from the STS and then use that token to create a channel to the relying party service. I’d recommend using the WS-Trust bindings that ship with WIF to talk to ADFS 2 – they are pre-configured to match the binding configuration of the ADFS 2 endpoints. The following code requests a token for a WCF service from ADFS 2: private static SecurityToken GetToken() {     // Windows authentication over transport security     var factory = new WSTrustChannelFactory(         new WindowsWSTrustBinding(SecurityMode.Transport),         stsEndpoint);     factory.TrustVersion = TrustVersion.WSTrust13;       var rst = new RequestSecurityToken     {         RequestType = RequestTypes.Issue,         AppliesTo = new EndpointAddress(svcEndpoint),         KeyType = KeyTypes.Symmetric     };       var channel = factory.CreateChannel();     return channel.Issue(rst); } Afterwards, the returned token can be used to create a channel to the service. Again WIF has some helper methods here that make this very easy: private static void CallService(SecurityToken token) {     // create binding and turn off sessions     var binding = new WS2007FederationHttpBinding(         WSFederationHttpSecurityMode.TransportWithMessageCredential);     binding.Security.Message.EstablishSecurityContext = false;       // create factory and enable WIF plumbing     var factory = new ChannelFactory<IService>(binding, new EndpointAddress(svcEndpoint));     factory.ConfigureChannelFactory<IService>();       // turn off CardSpace - we already have the token     factory.Credentials.SupportInteractive = false;       var channel = factory.CreateChannelWithIssuedToken<IService>(token);       channel.GetClaims().ForEach(c =>         Console.WriteLine("{0}\n {1}\n  {2} ({3})\n",             c.ClaimType,             c.Value,             c.Issuer,             c.OriginalIssuer)); } Why is this approach more flexible? Well – some don’t like the configuration voodoo. That’s a valid reason for using the manual approach. You also get more control over the token request itself since you have full control over the RST message that gets send to the STS. One common parameter that you may want to set yourself is the appliesTo value. When you use the automatic token support in the WCF federation binding, the appliesTo is always the physical service address. This means in turn that this address will be used as the audience URI value in the SAML token. Well – this in turn means that when you have an application that consists of multiple services, you always have to configure all physical endpoint URLs in ADFS 2 and in the WIF configuration of the service(s). Having control over the appliesTo allows you to use more symbolic realm names, e.g. the base address or a completely logical name. Since the URL is never de-referenced you have some degree of freedom here. In the next post we will look at the necessary code to request multiple tokens in a call chain. This is a common scenario when you first have to acquire a token from an identity provider and have to send that on to a federation gateway or Resource STS. Stay tuned.
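    Tying the two snippets together, a minimal calling sequence might look like the sketch below (it assumes the stsEndpoint and svcEndpoint fields and the IService contract referenced in the snippets are defined elsewhere in the same sample):
    // request the token once, then reuse it for the service call;
    // caching the SecurityToken until it expires is exactly the flexibility
    // the manual WSTrustChannelFactory approach buys you.
    static void Main(string[] args)
    {
        SecurityToken token = GetToken();   // WS-Trust issue request against ADFS 2
        CallService(token);                 // channel created with the issued token
    }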

    Read the article

  • What Keeps You from Changing Your Public IP Address and Wreaking Havoc on the Internet?

    - by Jason Fitzpatrick
    What exactly is preventing you (or anyone else) from changing their IP address and causing all sorts of headaches for ISPs and other Internet users? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader Whitemage is curious about what’s preventing him from wantonly changing his IP address and causing trouble: An interesting question was asked of me and I did not know what to answer. So I’ll ask here. Let’s say I subscribed to an ISP and I’m using cable internet access. The ISP gives me a public IP address of 60.61.62.63. What keeps me from changing this IP address to, let’s say, 60.61.62.75, and messing with another consumer’s internet access? For the sake of this argument, let’s say that this other IP address is also owned by the same ISP. Also, let’s assume that it’s possible for me to go into the cable modem settings and manually change the IP address. Under a business contract where you are allocated static addresses, you are also assigned a default gateway, a network address and a broadcast address. So that’s 3 addresses the ISP “loses” to you. That seems very wasteful for dynamically assigned IP addresses, which the majority of customers are. Could they simply be using static arps? ACLs? Other simple mechanisms? Two things to investigate here, why can’t we just go around changing our addresses, and is the assignment process as wasteful as it seems? The Answer SuperUser contributor Moses offers some insight: Cable modems aren’t like your home router (ie. they don’t have a web interface with simple point-and-click buttons that any kid can “hack” into). Cable modems are “looked up” and located by their MAC address by the ISP, and are typically accessed by technicians using proprietary software that only they have access to, that only runs on their servers, and therefore can’t really be stolen. Cable modems also authenticate and cross-check settings with the ISPs servers. The server has to tell the modem whether it’s settings (and location on the cable network) are valid, and simply sets it to what the ISP has it set it for (bandwidth, DHCP allocations, etc). For instance, when you tell your ISP “I would like a static IP, please.”, they allocate one to the modem through their servers, and the modem allows you to use that IP. Same with bandwidth changes, for instance. To do what you are suggesting, you would likely have to break into the servers at the ISP and change what it has set up for your modem. Could they simply be using static arps? ACLs? Other simple mechanisms? Every ISP is different, both in practice and how close they are with the larger network that is providing service to them. Depending on those factors, they could be using a combination of ACL and static ARP. It also depends on the technology in the cable network itself. The ISP I worked for used some form of ACL, but that knowledge was a little beyond my paygrade. I only got to work with the technician’s interface and do routine maintenance and service changes. What keeps me from changing this IP address to, let’s say, 60.61.62.75 and mess with another consumer’s internet access? Given the above, what keeps you from changing your IP to one that your ISP hasn’t specifically given to you is a server that is instructing your modem what it can and can’t do. 
Even if you somehow broke into the modem, if 60.61.62.75 is already allocated to another customer, then the server will simply tell your modem that it can’t have it. David Schwartz offers some additional insight with a link to a white paper for the really curious: Most modern ISPs (the last 13 years or so) will not accept traffic from a customer connection with a source IP address they would not route to that customer were it the destination IP address. This is called “reverse path forwarding”. See BCP 38. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
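    As a hedged aside (not from the original answers): on a Linux-based router the same reverse-path check can be switched on via the rp_filter sysctl, which drops packets whose source address would not be routed back out the interface they arrived on:
    # enable strict reverse path filtering (a sketch; tune per interface as needed)
    sysctl -w net.ipv4.conf.all.rp_filter=1
    sysctl -w net.ipv4.conf.default.rp_filter=1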

    Read the article

  • Database Owner Conundrum

    - by Johnm
    Have you ever restored a database from a production environment on Server A into a development environment on Server B and had some items, such as Service Broker, mysteriously cease functioning? You might want to consider reviewing the database owner property of the database. The Scenario Recently, I was developing some messaging functionality that utilized the Service Broker feature of SQL Server in a development environment. Within the instance of the development environment resided two databases: One was a restored version of a production database that we will call "RestoreDB". The second database was a brand new database that has yet to exist in the production environment that we will call "DevDB". The goal is to setup a communication path between RestoreDB and DevDB that will later be implemented into the production database. After implementing all of the Service Broker objects that are required to communicate within a database as well as between two databases on the same instance I found myself a bit confounded. My testing was showing that the communication was successful when it was occurring internally within DevDB; but the communication between RestoreDB and DevDB did not appear to be working. Profiler to the rescue After carefully reviewing my code for any misspellings, missing commas or any other minor items that might be a syntactical cause of failure, I decided to launch Profiler to aid in the troubleshooting. After simulating the cross database messaging, I noticed the following error appearing in Profiler: An exception occurred while enqueueing a message in the target queue. Error: 33009, State: 2. The database owner SID recorded in the master database differs from the database owner SID recorded in database '[Database Name Here]'. You should correct this situation by resetting the owner of database '[Database Name Here]' using the ALTER AUTHORIZATION statement. Now, this error message is a helpful one. Not only does it identify the issue in plain language, it also provides a potential solution. An execution of the following query that utilizes the catalog view sys.transmission_queue revealed the same error message for each communication attempt: SELECT     * FROM        sys.transmission_queue; Seeing the situation as a learning opportunity I dove a bit deeper. Reviewing the database properties  The owner of a specific database can be easily viewed by right-clicking the database in SQL Server Management Studio and selecting the "properties" option. The owner is listed on the "General" page of the properties screen. In my scenario, the database in the production server was created by Frank the DBA; therefore his server login appeared as the owner: "ServerName\Frank". While this is interesting information, it certainly doesn't tell me much in regard to the SID (security identifier) and its existence, or lack thereof, in the master database as the error suggested. I pulled together the following query to gather more interesting information: SELECT     a.name     , a.owner_sid     , b.sid     , b.name     , b.type_desc FROM        master.sys.databases a     LEFT OUTER JOIN master.sys.server_principals b         ON a.owner_sid = b.sid WHERE     a.name not in ('master','tempdb','model','msdb'); This query also helped identify how many other user databases in the instance were experiencing the same issue. In this scenario, I saw that there were no matching SIDs in server_principals to the owner SID for my database. 
    What login should be used as the database owner instead of Frank's? The system stored procedure sp_helplogins will provide a list of the valid logins that can be used. Here is an example of its use, revealing all available logins:
    EXEC sp_helplogins;
    Fixing a hole
    The error message stated that the recommended solution was to execute the ALTER AUTHORIZATION statement. The full statement for this scenario would appear as follows:
    ALTER AUTHORIZATION ON DATABASE::[Database Name Here] TO [Login Name];
    Another option is to execute the following statement using the sp_changedbowner system stored procedure; but please keep in mind that this stored procedure has been deprecated and will likely disappear in future versions of SQL Server:
    EXEC dbo.sp_changedbowner @loginname = [Login Name];
    And They Lived Happily Ever After
    Upon changing the database owner to an existing login and simulating the inner and cross database messaging the errors have ceased. More importantly, all messages sent through this feature now successfully complete their journey. I have added the ownership change to my restoration script for the development environment.
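    For example, a minimal sketch using the restored database from this scenario (assuming you want RestoreDB owned by the sa login; any valid login returned by sp_helplogins works):
    -- re-point the owner of the restored database to a login that exists on this server
    ALTER AUTHORIZATION ON DATABASE::RestoreDB TO [sa];
    -- verify the owner SID now matches a server principal
    SELECT a.name, b.name AS owner_login
    FROM master.sys.databases a
    JOIN master.sys.server_principals b ON a.owner_sid = b.sid
    WHERE a.name = 'RestoreDB';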

    Read the article

  • Swap partition not recognized (The disk drive with UUID=... is not ready yet or not present)

    - by ladaghini
    I think I had an encrypted swap partition, because I chose to encrypt my home directory during the installation. I believe that's what the line with /dev/mapper/cryptswap1 ... in my /etc/fstab is all about. I did something to bork my swap because on the next boot, I got a message (paraphrased): The disk drive for /dev/mapper/cryptswap1 is not ready yet or not present. Wait to continue. Press S to skip or M to manually recover. (As a side note, pressing S or M seemed to do nothing different from just waiting.) Here's what I've tried: This tutorial on how to fix the swap partition not mounting. However, in the above, the mkswap command fails because the device is busy. So I booted from a live USB, ran GParted to reformat the swap partition (which showed up as an unknown fs type), and chrooted into the broken system to try that tutorial again. I also adjusted /etc/initramfs-tools/conf.d/resume and /etc/fstab to reflect the new UUID generated from formatting the partition as a swap. That still didn't work; instead of /dev/mapper/cryptswap1 not present, "The disk drive with UUID=[swap partition's UUID] is not ready yet or not present..." So I decided to start afresh as though I never had created a swap partition in the first place. From the Live USB, I deleted the swap partition altogether (which, again showed up in GParted as an unknown fs type), removed the swap and cryptswap entries in /etc/fstab as well as removed /etc/initramfs-tools/conf.d/resume and /etc/crypttab. At this point the main system shouldn't be considered broken because there is no swap partition and no instructions to mount one, right? I didn't get any errors during startup. I followed the same instructions to create and encrypt the swap partition, starting with creating a partition for the swap, though I think fdisk said a reboot was necessary to see changes. I was confident the 3rd process above would work, but the problem yet persists. Some relevant info (/dev/sda8 is the swap partition): /etc/fstab file: # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda6 during installation UUID=4c11e82c-5fe9-49d5-92d9-cdaa6865c991 / ext4 errors=remount-ro 0 1 # /boot was on /dev/sda5 during installation UUID=4031413e-e89f-49a9-b54c-e887286bb15e /boot ext4 defaults 0 2 # /home was on /dev/sda7 during installation UUID=d5bbfc6f-482a-464e-9f26-fd213230ae84 /home ext4 defaults 0 2 # swap was on /dev/sda8 during installation UUID=5da2c720-8787-4332-9317-7d96cf1e9b80 none swap sw 0 0 /dev/mapper/cryptswap1 none swap sw 0 0 output of sudo mount: /dev/sda6 on / type ext4 (rw,errors=remount-ro) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) /dev/sda5 on /boot type ext4 (rw) /dev/sda7 on /home type ext4 (rw) /home/undisclosed/.Private on /home/undisclosed type ecryptfs (ecryptfs_check_dev_ruid,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs,ecryptfs_sig=cbae1771abd34009,ecryptfs_fnek_sig=7cefe2f59aab8e58) gvfs-fuse-daemon on /home/undisclosed/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=undisclosed) output of sudo blkid (note that /dev/sda8 is missing): /dev/sda1: LABEL="SYSTEM" UUID="960490E80490CC9D" TYPE="ntfs" /dev/sda2: UUID="D4043140043126C0" TYPE="ntfs" /dev/sda3: LABEL="Shared" UUID="80F613E1F613D5EE" TYPE="ntfs" /dev/sda5: UUID="4031413e-e89f-49a9-b54c-e887286bb15e" TYPE="ext4" /dev/sda6: UUID="4c11e82c-5fe9-49d5-92d9-cdaa6865c991" TYPE="ext4" /dev/sda7: UUID="d5bbfc6f-482a-464e-9f26-fd213230ae84" TYPE="ext4" /dev/mapper/cryptswap1: UUID="41fa147a-3e2c-4e61-b29b-3f240fffbba0" TYPE="swap" output of sudo fdisk -l: Disk /dev/mapper/cryptswap1 doesn't contain a valid partition table Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xdec3fed2 Device Boot Start End Blocks Id System /dev/sda1 * 2048 409599 203776 7 HPFS/NTFS/exFAT /dev/sda2 409600 210135039 104862720 7 HPFS/NTFS/exFAT /dev/sda3 210135040 415422463 102643712 7 HPFS/NTFS/exFAT /dev/sda4 415424510 625141759 104858625 5 Extended /dev/sda5 415424512 415922175 248832 83 Linux /dev/sda6 415924224 515921919 49998848 83 Linux /dev/sda7 515923968 621389823 52732928 83 Linux /dev/sda8 621391872 625141759 1874944 82 Linux swap / Solaris Disk /dev/mapper/cryptswap1: 1919 MB, 1919942656 bytes 255 heads, 63 sectors/track, 233 cylinders, total 3749888 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xaf5321b5 /etc/initramfs-tools/conf.d/resume file: RESUME=UUID=5da2c720-8787-4332-9317-7d96cf1e9b80 /etc/crypttab file: cryptswap1 /dev/sda8 /dev/urandom swap,cipher=aes-cbc-essiv:sha256 output of sudo swapon -as: Filename Type Size Used Priority /dev/mapper/cryptswap1 partition 1874940 0 -1 output of sudo free -m: total used free shared buffers cached Mem: 
1476 1296 179 0 35 671 -/+ buffers/cache: 590 886 Swap: 1830 0 1830 So, how can this be fixed?

    Read the article

  • Scripting custom drawing in a Delphi application with IF/THEN/ELSE statements?

    - by Jerry Dodge
    I'm building a Delphi application which displays a blueprint of a building, including doors, windows, wiring, lighting, outlets, switches, etc. I have implemented a very lightweight script of my own to call drawing commands to the canvas, which is loaded from a database. For example, one command is ELP 1110,1110,1290,1290,3,8388608 which draws an ellipse, parameters are 1110x1110 to 1290x1290 with pen width of 3 and the color 8388608 converted from an integer to a TColor. What I'm now doing is implementing objects with common drawing routines, and I'd like to use my scripting engine, but this calls for IF/THEN/ELSE statements and such. For example, when I'm drawing a light, if the light is turned on, I'd like to draw it yellow, but if it's off, I'd like to draw it gray. My current scripting engine has no recognition of such statements. It just accepts simple drawing commands which correspond with TCanvas methods. Here's the procedure I've developed (incomplete) for executing a drawing command on a canvas: function DrawCommand(const Cmd: String; var Canvas: TCanvas): Boolean; type TSingleArray = array of Single; var Br: TBrush; Pn: TPen; X: Integer; P: Integer; L: String; Inst: String; T: String; Nums: TSingleArray; begin Result:= False; Br:= Canvas.Brush; Pn:= Canvas.Pen; if Assigned(Canvas) then begin if Length(Cmd) > 5 then begin L:= UpperCase(Cmd); if Pos(' ', L)> 0 then begin Inst:= Copy(L, 1, Pos(' ', L) - 1); Delete(L, 1, Pos(' ', L)); L:= L + ','; SetLength(Nums, 0); X:= 0; while Pos(',', L) > 0 do begin P:= Pos(',', L); T:= Copy(L, 1, P - 1); Delete(L, 1, P); SetLength(Nums, X + 1); Nums[X]:= StrToFloatDef(T, 0); Inc(X); end; Br.Style:= bsClear; Pn.Style:= psSolid; Pn.Color:= clBlack; if Inst = 'LIN' then begin Pn.Width:= Trunc(Nums[4]); if Length(Nums) > 5 then begin Br.Style:= bsSolid; Br.Color:= Trunc(Nums[5]); end; Canvas.MoveTo(Trunc(Nums[0]), Trunc(Nums[1])); Canvas.LineTo(Trunc(Nums[2]), Trunc(Nums[3])); Result:= True; end else if Inst = 'ELP' then begin Pn.Width:= Trunc(Nums[4]); if Length(Nums) > 5 then begin Br.Style:= bsSolid; Br.Color:= Trunc(Nums[5]); end; Canvas.Ellipse(Trunc(Nums[0]),Trunc(Nums[1]),Trunc(Nums[2]),Trunc(Nums[3])); Result:= True; end else if Inst = 'ARC' then begin Pn.Width:= Trunc(Nums[8]); Canvas.Arc(Trunc(Nums[0]),Trunc(Nums[1]),Trunc(Nums[2]),Trunc(Nums[3]), Trunc(Nums[4]),Trunc(Nums[5]),Trunc(Nums[6]),Trunc(Nums[7])); Result:= True; end else if Inst = 'TXT' then begin Canvas.Font.Size:= Trunc(Nums[2]); Br.Style:= bsClear; Pn.Style:= psSolid; T:= Cmd; Delete(T, 1, Pos(' ', T)); Delete(T, 1, Pos(',', T)); Delete(T, 1, Pos(',', T)); Delete(T, 1, Pos(',', T)); Canvas.TextOut(Trunc(Nums[0]), Trunc(Nums[1]), T); Result:= True; end; end else begin //No space found, not a valid command end; end; end; end; What I'd like to know is what's a good lightweight third-party scripting engine I could use to accomplish this? I would hate to implement parsing of IF, THEN, ELSE, END, IFELSE, IFEND, and all those necessary commands. I need simply the ability to tell the scripting engine if certain properties meet certain conditions, it needs to draw the object a certain way. The light example above is only one scenario, but the same solution needs to also be applicable to other scenarios, such as a door being open or closed, locked or unlocked, and draw it a different way accordingly. This needs to be implemented in the object script drawing level. 
I can't hard-code any of these scripting/drawing rules, the drawing needs to be controlled based on the current state of the object, and I may also wish to draw a light a certain shade or darkness depending on how dimmed the light is.

    Read the article

  • AppHarbor - Azure Done Right AKA Heroku for .NET

    - by Robz / Fervent Coder
    Easy and Instant deployments and instant scale for .NET? Awhile back a few of us were looking at Ruby Gems as the answer to package management for .NET. The gems platform supported the concept of DLLs as packages although some changes would have needed to happen to have long term use for the entire community. From that we formed a partnership with some folks at Microsoft to make v2 into something that would meet wider adoption across the community, which people now call NuGet. So now we have the concept of package management. What comes next? Heroku Instant deployments and instant scaling. Stupid simple API. This is Heroku. It doesn’t sound like much, but when you think of how fast you can go from an idea to having someone else tinker with it, you can start to see its power. In literally seconds you can be looking at your rails application deployed and online. Then when you are ready to scale, you can do that. This is power. Some may call this “cloud-computing” or PaaS (Platform as a Service). I first ran into Heroku back in July when I met Nick of RubyGems.org. At the time there was no alternative in the .NET-o-sphere. I don’t count Windows Azure, mostly because it is not simple and I don’t believe there is a free version. Heroku itself would not lend itself well to .NET due to the nature of platforms and each language’s specific needs (solution stack).  So I tucked the idea in the back of my head and moved on. AppHarbor Enters The Scene I’m not sure when I first heard about AppHarbor as a possible .NET version of Heroku. It may have been in November, but I didn’t actually try it until January. I was instantly hooked. AppHarbor is awesome! It still has a ways to go to be considered Heroku for .NET, but it already has a growing community. I created a video series (at the bottom of this post) that really highlights how fast you can get a product onto the web and really shows the power and simplicity of AppHarbor. Deploying is as simple as a git/hg push to appharbor. From there they build your code, run any unit tests you have and deploy it if everything succeeds. The screen on the right shows a simple and elegant UI to getting things done. The folks at AppHarbor graciously gave me a limited number of invites to hand out. If you are itching to try AppHarbor then navigate to: https://appharbor.com/account/new?inviteCode=ferventcoder  After playing with it, send feedback if you want more features. Go vote up two features I want that will make it more like Heroku. Disclaimer: I am in no way affiliated with AppHarbor and have not received any funds or favors from anyone at AppHarbor. I just think it is awesome and I want others to know about it. From Zero To Deployed in 15 Minutes (Or Less) Now I have a challenge for you. I created a video series showing how fast I could go from nothing to a deployed application. It could have been from Zero to Deployed in Less than 5 minutes, but I wanted to show you the tools a little more and give you an opportunity to beat my time. And that’s the challenge. Beat my time and show it in a video response. The video series is below (at least one of the videos has to be watched on YouTube). The person with the best time by March 15th @ 11:59PM CST will receive a prize. 
    Ground rules:
    • .NET Application with a valid database connection
    • Start from Zero
    • Deployed with AppHarbor or an alternative
    • A timer displayed in the video that runs during the entire process
    • Video response published on YouTube or acceptable alternative
    • Video(s) must be published by March 15th at 11:59PM CST. Either post the link here as a comment or on YouTube as a response (also by 11:59PM CST March 15th)
    From Zero To Deployed In 15 Minutes (Or Less) Part 1
    From Zero To Deployed In 15 Minutes (Or Less) Part 2
    From Zero To Deployed In 15 Minutes (Or Less) Part 3

    Read the article

  • Microsoft Forcing Dev/Partners Hands on Win 8 Through Certification

    - by D'Arcy Lussier
    I remember 2.5 years ago when Microsoft dropped a bomb on the Microsoft Partner community: all Gold competencies would require .NET 4 based premiere certifications (MCPD). Problem was, this gave a window of about 6 months for partners to update their employees’ certifications. At the place I was working, I put together an aggressive plan and we were able to attain the certs needed. Microsoft is always open that the certification requirements will change as the industry changes. .NET 1.0 certifications are useless here in 2012, and rightfully so they’ve been retired for a long time now. But now we’re seeing a new tactic by Microsoft – shifting gears away from certifications that speak to what industry needs and more to the Windows 8 agenda. Consider that currently the premiere development certification is the Microsoft Certified Professional Developer, which comes in three flavours – Web, Windows, and Azure. All require WCF and Data Access exams, as well as one that deals with the associated base technologies (ASP.NET, WinForms/WPF, Azure), and one that ties all three together in a solution-based exam. For Microsoft-based organizations, these skills aren’t just valid but necessary in building Microsoft applications. But the MCPD is being replaced with our old friend Microsoft Certified Solutions Developer (MCSD). So far, Microsoft has only released two types of MCSD – Web and Windows Store Apps. Windows Store Apps?! In a push to move developers to create WinRT-based applications, desktop development is now considered a second-class citizen in the eyes of Redmond. Also interesting are the language options for the exams: HTML5 and C#. Sorry VB folks, its time to embrace curly braces whether they be JavaScript or C#. Consider too the skills being assessed for the Windows Store Apps: Get your MCSD: Windows Store Apps Using HTML5 Get your MCSD: Windows Store Apps Using C# *Image Source: http://www.microsoft.com/learning/en/us/certification/mcsd-windows-store-apps.aspx Nov 21/2012 If you look at the skills being tested in each exam, you’ll find that skills like WCF and Data Access are downplayed compared to things like integrating Charms, facilitating Search, programming for the microphone and camera – all very Windows 8 focussed items. Where this becomes maddening is that Microsoft is still pushing Windows 7 with enterprise clients. According to a ZDNet article, Microsoft wants to see Windows 7 on 70% of enterprise desktops by mid 2013. Assuming they somehow meet that (its a pretty lofty goal), there’s years of traditional desktop-based development that will still be required at some level. For those thinking they’ll just write and stick with the MCPD certification, note that most exams that go towards that certification will be retired at the end of July 2013! (Read the small print). And while details haven’t been finalized, its a safe bet that MCPD certifications eventually won’t count towards Gold-level competencies in the Microsoft Partner program. What this means for Microsoft Partners and Developers is that certification for desktop development is going to be limited to Windows Store Apps unless Microsoft re-introduces a traditional desktop (WPF) based MCSD cert. Web Application Development – It’s Not All Bad There’s big changes on the web side of certification, but I actually see these changes as being for the good! 
Check out the new exam requirements for MCSD – Web Applications: Get your MCSD: Web Applications certification *Image Source: http://www.microsoft.com/learning/en/us/certification/cert-mcsd-web-applications.aspx Nov 21, 2012 We now *start* with HTML5, JavaScript, and CSS3! Now I’m sure that these will be slanted towards web development in IE, and I can hear designers everywhere bemoaning the CSS/IE combination. Still, I applaud Microsoft for adopting HTML5 as the go-to web technology and requiring certified developers to prove they have skills in the basics of web dev. The fact that the second exam clearly states “MVC Web Applications” shows that Web Forms is truly legacy and deprecated. That’s not to say there aren’t those out there that are still supporting or (for whatever reason) doing new dev with Web Forms, but this move by Microsoft is telling the community they better get on the MVC bandwagon if they want to stay current. Fantastic! And of course Azure needs to be here as well, and this is where the Microsoft agenda fits in. It’s no secret that there’s been a huge push in getting developers on to Azure. I don’t see this as being a bad thing either, as cloud computing (whether Azure, private, or 3rd party) is a necessary skill for developers to have here in 2012. The cynic in me realizes that the HTML5/JavaScript/CSS push wouldn’t be as prominent though if not for the Windows 8 Store App play, where HTML5 is a first class citizen (and an available language for the MCSD Windows Store App cert). In this case, the desktop developers loss is the web developers gain. Get Ready for Changes In addition to the changes in certifications, the Microsoft Partner competencies are going through changes as well. Web and Software Development are being merged into a single competency, meaning that licenses you would have received from having both as Gold are reduced. Other competencies are either being removed or changed, as are the exam requirements. In the same way that we’re seeing faster release cycles from Microsoft, so too will we see the Microsoft Partner Program and MS Certifications evolve faster than ever before. Many of us got caught in the last wave of changes, but this time we can see the wave coming – and it looks pretty big!

    Read the article

  • Creating ADF Faces Comamnd Button at Runtime

    - by Frank Nimphius
    In ADF Faces, the command button is an instance of RichCommandButton and can be created at runtime. While creating the button is not difficult at all, adding behavior to it requires knowing how to dynamically create and add an action listener reference. The example code below shows two methods: The first method, handleButtonPress, is a public method exposed on a managed bean. public void handleButtonPress(ActionEvent event){   System.out.println("Event handled");   //optional: partially refresh changed components if command   //issued as a partial submit } The second method is called in response to a user interaction or on page load and dynamically creates and adds a command button. When the button is pressed, the managed bean method – the action handler – defined above is called. The action handler is referenced using EL in the created MethodExpression instance. If the managed bean is in viewScope, backingBeanScope or pageFlowScope, then you need to add the scope as a prefix to the EL (as you would when configuring the managed bean reference at design time). //Create command button and add it as a child to the parent component that is passed as an //argument to this method private void createCommandButton(UIComponent parent){   RichCommandButton edit = new RichCommandButton();   //make the request partial   edit.setPartialSubmit(true);   edit.setText("Edit");   //compose the method expression to invoke the event handler   FacesContext fctx = FacesContext.getCurrentInstance();   Application application = fctx.getApplication();   ExpressionFactory elFactory = application.getExpressionFactory();   ELContext elContext = fctx.getELContext();   MethodExpression methodExpression = null;   //Make sure the EL expression references a valid managed bean method. Ensure   //the bean scope is properly addressed   methodExpression = elFactory.createMethodExpression(                             elContext, "#{myRequestScopeBean.handleButtonPress}",                             Object.class, new Class[] {ActionEvent.class});   //Create the command button action listener reference   MethodExpressionActionListener al = null;   al = new MethodExpressionActionListener(methodExpression);   edit.addActionListener(al);   //add the new command button to the parent component and PPR the parent for   //the button to show   parent.getChildren().add(edit);   AdfFacesContext adfFacesContext = AdfFacesContext.getCurrentInstance();   adfFacesContext.addPartialTarget(parent); }

    Read the article

  • Node Serialization in NetBeans Platform 7.0

    - by Geertjan
    Node serialization makes sense when you're not interested in the data (since that should be serialized to a database), but in the state of the application. For example, when the application restarts, you want the last selected node to automatically be selected again. That's not the kind of information you'll want to store in a database, hence node serialization is not about data serialization but about application state serialization. I've written about this topic in October 2008, here and here, but want to show how to do this again, using NetBeans Platform 7.0. Somewhere I remember reading that this can't be done anymore and that's typically the best motivation for me, i.e., to prove that it can be done after all. Anyway, in a standard POJO/Node/BeanTreeView scenario, do the following: Remove the "@ConvertAsProperties" annotation at the top of the class, which you'll find there if you used the Window Component wizard. We're not going to use property-file based serialization, but plain old java.io.Serializable  instead. In the TopComponent, assuming it is named "UserExplorerTopComponent", typically at the end of the file, add the following: @Override public Object writeReplace() { //We want to work with one selected item only //and thanks to BeanTreeView.setSelectionMode, //only one node can be selected anyway: Handle handle = NodeOp.toHandles(em.getSelectedNodes())[0]; return new ResolvableHelper(handle); } public final static class ResolvableHelper implements Serializable { private static final long serialVersionUID = 1L; public Handle selectedHandle; private ResolvableHelper(Handle selectedHandle) { this.selectedHandle = selectedHandle; } public Object readResolve() { WindowManager.getDefault().invokeWhenUIReady(new Runnable() { @Override public void run() { try { //Get the TopComponent: UserExplorerTopComponent tc = (UserExplorerTopComponent) WindowManager.getDefault().findTopComponent("UserExplorerTopComponent"); //Get the display text to search for: String selectedDisplayName = selectedHandle.getNode().getDisplayName(); //Get the root, which is the parent of the node we want: Node root = tc.getExplorerManager().getRootContext(); //Find the node, by passing in the root with the display text: Node selectedNode = NodeOp.findPath(root, new String[]{selectedDisplayName}); //Set the explorer manager's selected node: tc.getExplorerManager().setSelectedNodes(new Node[]{selectedNode}); } catch (PropertyVetoException ex) { Exceptions.printStackTrace(ex); } catch (IOException ex) { Exceptions.printStackTrace(ex); } } }); return null; } } Assuming you have a node named "UserNode" for a type named "User" containing a property named "type", add the bits in bold below to your "UserNode": public class UserNode extends AbstractNode implements Serializable { static final long serialVersionUID = 1L; public UserNode(User key) { super(Children.LEAF); setName(key.getType()); } @Override public Handle getHandle() { return new CustomHandle(this, getName()); } public class CustomHandle implements Node.Handle { static final long serialVersionUID = 1L; private AbstractNode node = null; private final String searchString; public CustomHandle(AbstractNode node, String searchString) { this.node = node; this.searchString = searchString; } @Override public Node getNode() { node.setName(searchString); return node; } } } Run the application and select one of the user nodes. Close the application. Start it up again. 
The user node is not automatically selected; in fact, the window does not open, and you will see this in the output: Caused: java.io.InvalidClassException: org.serialization.sample.UserNode; no valid constructor. Read this article and then you'll understand the need for this class: public class BaseNode extends AbstractNode { public BaseNode() { super(Children.LEAF); } public BaseNode(Children kids) { super(kids); } public BaseNode(Children kids, Lookup lkp) { super(kids, lkp); } } Now, instead of extending AbstractNode in your UserNode, extend BaseNode. Then the first non-serializable superclass of the UserNode has an explicitly declared no-args constructor, which is what the deserialization mechanism requires. Do the same as the above for each node in the hierarchy that needs to be serialized. If you have multiple nodes needing serialization, you can share the "CustomHandle" inner class above between all the other nodes, while all the other nodes will also need to extend BaseNode (or provide their own non-serializable super class that explicitly declares a no-args constructor). Now, when I run the application, I select a node, then I close the application, restart it, and the previously selected node is automatically selected when the application has restarted.

    Read the article

  • SQLIO Writes

    - by Grant Fritchey
    SQLIO is a fantastic utility for testing the abilities of the disks in your system. It has a very unfortunate name though, since it's not really a SQL Server testing utility at all. It really is a disk utility. They ought to call it DiskIO because they'd get more people using it, I think. Anyway, branding is not the point of this blog post. Writes are the point of this blog post. SQLIO works by slamming your disk. It performs as many reads as it can, or it performs as many writes as it can, depending on how you've configured your tests. There are much smarter people than me who will get into all the various types of tests you should run. I'd suggest reading a bit of what Jonathan Kehayias (blog|twitter) has to say or wade into Denny Cherry's (blog|twitter) work. They're going to do a better job than I can describing all the benefits and mechanisms around using this excellent piece of software. My concerns are very focused. I needed to set up a series of tests to see how well our product SQL Storage Compress worked. I wanted to know the effects it would have on a system: the disk for sure, but also memory and CPU. How to stress the system? SQLIO of course. But when I set it up and ran it, following the documentation that comes with it, I was seeing better than 99% compression on the files. Don't get me wrong. Our product is magnificent, wonderful, all things great and beautiful, gets you coffee in the morning and is made mostly from bacon. But 99% compression. No, it's not that good. So what's up? Well, it's the configuration. The default mechanism is to load up a file, something large that will overwhelm your disk cache. You're instructed to load the file with the character 0x0. I never got a computer science degree. I went to film school. Because of this, I didn't memorize ASCII tables, so when I saw this, I thought it was zeros or something. Nope. It's NULL. That's right, you're making a very large file, but you're filling it with NULL values. That's actually ok when all you're testing is the disk sub-system. But, when you want to test compression and decompression, that can be an issue. I got around this fairly quickly. Instead of generating a file filled with NULL values, I just copied a database file for my tests. And to test it with SQL Storage Compress, I used a database file that had already been run through compression (about 40% compression on that file if you're interested). Now the reads were taken care of. I am seeing very realistic performance from decompressing the information for reads through SQLIO. But what about writes? Well, the issue is, what does SQLIO write? I don't have access to the code. But I do have access to the results. I did two different tests, just to be sure of what I was seeing. First test, use the .DAT file as described in the documentation. I opened the .DAT file after I was done with SQLIO, using WordPad. Guess what? It's a giant file full of air. SQLIO writes NULL values. What does that do to compression? I did the test again on a copy of an uncompressed database file. Then I ran the original and the SQLIO-modified copy through ZIP to see what happened. I got better than 99% compression out of the SQLIO-modified file (the original file of 624,896kb went to 275,871kb compressed; after SQLIO it went to 608kb compressed). So, what does SQLIO write? It writes air. If you're trying to test it with compression or maybe some other type of file storage mechanism like dedupe, you need to know this because your tests really won't be valid.
Should I find some other mechanism for testing? Yeah, if all I'm interested in is establishing performance to my own satisfaction, yes. But, I want to be able to compare my results with other people's results and we all need to be using the same tool in order for that to happen. SQLIO is the common mechanism that most people I know use to establish disk performance behavior. It'd be better if we could get SQLIO to do writes in some other fashion. Oh, and before I go, I get to brag a bit. Measuring IOPS, SQL Storage Compress outperforms my disk alone by about 30%.
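As a footnote to the read-side workaround described above: if you don't have a spare database file to copy, a file of pseudo-random bytes is another way to get realistic, non-NULL data for compression or dedupe testing. The sketch below is mine, not the author's; the path and size are placeholders, and it does nothing to change what SQLIO itself writes during its write tests.
```csharp
// Sketch: generate a large test file filled with pseudo-random (incompressible)
// bytes, as an alternative to copying an existing database file.
using System;
using System.IO;

class TestFileGenerator
{
    static void Main()
    {
        const string path = @"C:\Temp\sqlio_test.dat";      // hypothetical path
        const long targetBytes = 8L * 1024 * 1024 * 1024;    // 8 GB test file (example)
        var rng = new Random(42);
        var chunk = new byte[4 * 1024 * 1024];                // write in 4 MB chunks

        using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write,
                                       FileShare.None, chunk.Length))
        {
            long written = 0;
            while (written < targetBytes)
            {
                rng.NextBytes(chunk);          // random data compresses poorly, unlike NULLs
                fs.Write(chunk, 0, chunk.Length);
                written += chunk.Length;
            }
        }
        Console.WriteLine("Wrote {0:N0} bytes to {1}", targetBytes, path);
    }
}
```
Running a ZIP pass over a file produced this way should show compression ratios far closer to real data than the 99%+ seen with the NULL-filled .DAT file.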

    Read the article

  • How to Implement Single Sign-On between Websites

    - by hmloo
    Introduction Single sign-on (SSO) is a way to control access to multiple related but independent systems, a user only needs to log in once and gains access to all other systems. a lot of commercial systems that provide Single sign-on solution and you can also choose some open source solutions like Opensso, CAS etc. both of them use centralized authentication and provide more robust authentication mechanism, but if each system has its own authentication mechanism, how do we provide a seamless transition between them. Here I will show you the case. How it Works The method we’ll use is based on a secret key shared between the sites. Origin site has a method to build up a hashed authentication token with some other parameters and redirect the user to the target site. variables Status Description ssoEncode required hash(ssoSharedSecret + , + ssoTime + , + ssoUserName) ssoTime required timestamp with format YYYYMMDDHHMMSS used to prevent playback attacks ssoUserName required unique username; required when a user is logged in Note : The variables will be sent via POST for security reasons Building a Single Sign-On Solution Origin Site has function to 1. Create the URL for your Request. 2. Generate required authentication parameters 3. Redirect to target site. using System; using System.Web.Security; using System.Text; public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { string postbackUrl = "http://www.targetsite.com/sso.aspx"; string ssoTime = DateTime.Now.ToString("yyyyMMddHHmmss"); string ssoUserName = User.Identity.Name; string ssoSharedSecret = "58ag;ai76"; // get this from config or similar string ssoHash = FormsAuthentication.HashPasswordForStoringInConfigFile(string.Format("{0},{1},{2}", ssoSharedSecret, ssoTime, ssoUserName), "md5"); string value = string.Format("{0}:{1},{2}", ssoHash,ssoTime, ssoUserName); Response.Clear(); StringBuilder sb = new StringBuilder(); sb.Append("<html>"); sb.AppendFormat(@"<body onload='document.forms[""form""].submit()'>"); sb.AppendFormat("<form name='form' action='{0}' method='post'>", postbackUrl); sb.AppendFormat("<input type='hidden' name='t' value='{0}'>", value); sb.Append("</form>"); sb.Append("</body>"); sb.Append("</html>"); Response.Write(sb.ToString()); Response.End(); } } Target Site has function to 1. Get authentication parameters. 2. Validate the parameters with shared secret. 3. If the user is valid, then do authenticate and redirect to target page. 4. If the user is invalid, then show errors and return. 
using System; using System.Web.Security; using System.Text; public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { if (User.Identity.IsAuthenticated) { Response.Redirect("~/Default.aspx"); } } if (Request.Params.Get("t") != null) { string ticket = Request.Params.Get("t"); char[] delimiters = new char[] { ':', ',' }; string[] ssoVariable = ticket.Split(delimiters, StringSplitOptions.None); string ssoHash = ssoVariable[0]; string ssoTime = ssoVariable[1]; string ssoUserName = ssoVariable[2]; DateTime appTime = DateTime.MinValue; int offsetTime = 60; // get this from config or similar try { appTime = DateTime.ParseExact(ssoTime, "yyyyMMddHHmmss", null); } catch { //show error return; } if (Math.Abs(appTime.Subtract(DateTime.Now).TotalSeconds) > offsetTime) { //show error return; } bool isValid = false; string ssoSharedSecret = "58ag;ai76"; // get this from config or similar string hash = FormsAuthentication.HashPasswordForStoringInConfigFile(string.Format("{0},{1},{2}", ssoSharedSecret, ssoTime, ssoUserName), "md5"); if (string.Compare(ssoHash, hash, true) == 0) { if (Math.Abs(appTime.Subtract(DateTime.Now).TotalSeconds) > offsetTime) { //show error return; } else { isValid = true; } } if (isValid) { //Do authenticate; } else { //show error return; } } else { //show error } } } Summary This is a very simple and basic SSO solution, and its main advantage is its simplicity, only needs to add a single page to do SSO authentication, do not need to modify the existing system infrastructure.
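One design note on the token used above: FormsAuthentication.HashPasswordForStoringInConfigFile with MD5 is an unkeyed hash of a string that merely contains the secret. A keyed hash such as HMAC-SHA256 is a common hardening option. The sketch below is an assumption on my part, not part of the original solution; it only shows how the same ssoTime/ssoUserName pair could be signed, with both the origin and target sites computing and comparing the same value.
```csharp
// Sketch: compute the SSO token with HMAC-SHA256 instead of the MD5 helper above.
// The shared secret becomes the HMAC key rather than part of the hashed string.
using System;
using System.Security.Cryptography;
using System.Text;

static class SsoToken
{
    public static string Compute(string sharedSecret, string ssoTime, string userName)
    {
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(sharedSecret)))
        {
            byte[] hash = hmac.ComputeHash(
                Encoding.UTF8.GetBytes(string.Format("{0},{1}", ssoTime, userName)));
            // Hex-encode the result so it can travel in the hidden form field.
            return BitConverter.ToString(hash).Replace("-", string.Empty);
        }
    }
}
```
The rest of the flow (timestamp window check, POSTing the value, comparing on the target site) stays exactly as shown in the original code.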

    Read the article

  • VS2010 crashes when opening a vsp generated using VS 2012

    - by Tarun Arora
    I recently profiled some web applications using Visual Studio 2012, a vsp (Visual Studio Profile) file was generated as a result of the profiling session. I could successfully open the vsp file in Visual Studio 2012 as expected but when I tried to open the vsp file in Visual Studio 2010 the VS2010 IDE crashed. As a responsible citizen I raised bug # 762202 on Microsoft Connect site using the Microsoft Visual Studio 2012 Feedback Client. Note – In case you didn’t already know, VSP generated in Visual Studio 2012 is not backward compatible. Please refer below for the steps to reproduce the issue and the resolution of the connect bug. 1. Behaviour and Steps to Reproduce the Issue Description I have generated a vsp file by using the Visual Studio 2012 Standalone profiler. When I try and open the vsp file in Visual Studio 2010 the IDE crashes. I understand that a vsp generated by using VS 2012 cannot be opened in VS 2010, but the IDE crashing is not the behaviour I would expect to see. Steps to Reproduce the Issue 1. Pick up the Stand lone profiler from the VS 2012 installation media. The folder has both x 64 and x86 installer, since the machine I am using is x64 bit. I have installed the x64 version of the standalone profiler. 2. I have configured the system path by setting the 'environment variable' path to where the profiler is installed. In my case this is, C:\Program Files (x86)\Microsoft Visual Studio 11.0\Team Tools\Performance Tools 3. Created a new environment variable _NT_SYMBOL_PATH and set its value to CACHE*C:\SYMBOLSCACHE;SRV*C:\SYMBOLSCACHE*HTTP://MSDL.MICROSOFT.COM/DOWNLOAD/SYMBOLS;\\FOO\BUILD1234 4. Open up CMD as an administrator and run 'VSPerfASPNETCmd /tip http://localhost:56180/ /o:C:\Temp\SampleEISK.vsp' 5. This generates the following message on the cmd       Microsoft (R) VSPerf ASP.NET Command, Version 11.0.0.0     Copyright (C) Microsoft Corporation. All rights reserved.     Configuring and attaching to ASP.NET process. Please wait.     Setting up profiling environment.     Starting monitor.     Launching ASP.NET service.     Attaching Monitor to process.     Launching Internet Explorer.     The profiler is attached to ASP.net. Please run your application scenario now.     Press Enter to stop data collection...   6. I perform certain actions and then I come back to the cmd and hit enter to shut down the profiling. Once I do this, the following message is written to the cmd, Press Enter to stop data collection... Profiling now shut down. Report file "C:\Temp\SampleEISK.vsp" was generated. Running VsPerfReport, packing symbols into the .VSP. Shutting down profiling and restarting ASP.NET. Please wait. Restarting w3wp.exe.   7. I look in the C:\Temp folder and I can see the SampleEISK.vsp file generated. I can successfully open this file in Visual Studio 2012. 8. When I am trying to open the vsp file in VS 2010 the VS 2010 IDE crashes. Kaboooom! What I would expect to happen I expect to receive a message "VS 2010 does not support the vsp file generated by VS 2012". What actually happened The VS 2010 IDE crashed 2. Resolution This is a valid bug! However, there isn’t much value in releasing a hotfix for this issue. Refer below to the resolution provided by the Visual Studio Profiler Team.  Thank you for taking the time to report this issue. We completely agree that Visual Studio 2010 should not crash. However in this particular case this is not a bug we are going to retroactively release a fix to 2010 for at this point. 
Given that a fix would not unblock the scenario of opening a 2012 created file on Visual Studio 2010, and there is not an active update channel for Visual Studio 2010 other than manually locating and installing hot fixes, we will not be fixing this particular issue. Best Regards, Visual Studio Profiler Team   Though it would be great to improve the behaviour however, this is not a defect that would stop you from progressing in any way. It’s important to note however that VSP files generated by Visual Studio 2012 are not backward compatible so you should refrain from opening these files in Visual Studio 2010.

    Read the article

  • Why bother writing an Windows 8 app?

    - by Dennis Vroegop
    So you want to know more about development for Window 8. Great! There are lots of reasons you should be excited about this. Since I don’t know why YOU are interested in this, I’ll make a list of reasons people can choose from. (as a side note: whenever I talk about Win8 development I am referring to the Metro Style / WinRt side of things. Apps for the ‘classic’ desktop side of Win8 on Intel are business as usual…) So… Why would you care about making an app for Windows 8? 1. It’s cool. Let’s not beat around the bush: if you like development for a hobby then you’ll love to work on this new platform. You can create apps in a relative short time (short time as in compared to writing a new CRM system) and that makes it great for a hobby product. 2. You’ll stand out. Hey, we all need an ego boost every now and then. We all need to feel special. So if you can manage to be one of the first to have you app in the Store then you’ll likely to be noticed. Just close your eyes for a moment and image you standing in a bar. It’s crowded, and then you casually say “Oh yeah, I just had my app certified and it’s in the Win8 store now”. People will stop talking, will offer you drinks and beautiful women / gorgeous man / furry creatures from Alpha Centauri (whatever your preferences are) will propose. Or maybe not. Anyway…. 3. Make some cash! IDC predicts there will be about 350,000,000 Windows 8 licenses sold in the next year. Think about that number. 350,000,000. And they all have access to the Store. Where you’re app will be. With one little click they can select it, download and somehow magically $1.00 or $2.00 from their bank account is transferred to yours. Now, I am not saying that all of those people will download and buy your app but what if only 1% of them did? Remember: there aren’t that many apps available yet….. 4. Learn. Creating new small apps is a great way to learn new stuff. Yes, you could read about it (on this blog for instance) but the only way to learn something is to do it. So be prepared for the future and learn something new by doing it.Write an app! Now! 5. The biggie (for me at least): it’s fun. Even if you remove the points above it’s still fun to write for these devices and this platform. Now some of you will say : “But why not write a great app for IOS or Android?” I think this is a valid question. Of course the novelty of the platform wears out and points 2 and 3 from above list will not be as relevant as it is today. But still 1 4 and 5 remain. And don’t forget: if you already work on the Microsoft platform it’s not that hard to learn this new Win8 stuff. If you have done some XAML development (be it WPF or Silverlight) you are almost there in becoming a good Win8 developer. So you’ll be more productive much sooner than when you have to learn Objective C or Java. Even if you’re a HTML / Javascript developer (I say developer here, not designer) you’ll be up to speed on Win8 development pretty soon. Yes, you, that funky Web Developer who lives and breathes HTML5, CSS3 and JavaScript / Node.Js / JQuery: you too can be a Win8 developer. A first class Win8 developer! So.. Download the stuff you need from http://dev.windows.com install Windows 8 and Visual Studio 12 and by the time you’re ready I’ll be working on the next article: how to do all this? Happy coding!

    Read the article

  • What is Happening vs. What is Interesting

    - by Geertjan
    Devoxx 2011 was yet another confirmation that all development everywhere is either on the web or on mobile phones. Whether you looked at the conference schedule or attended sessions or talked to speakers at any point at all, it was very clear that no development whatsoever is done anymore on the desktop. In fact, that's something Tim Bray himself told me to my face at the speakers dinner. No new developments of any kind are happening on the desktop. Everyone who is currently on the desktop is working overtime to move all of their applications to the web. They're probably also creating a small subset of their application on an Android tablet, with an even smaller subset on their Android phone. Then you scratch that monolithic surface and find some interesting results. Without naming any names, I asked one of these prominent "ah, forget about the desktop" people at the Devoxx speakers dinner (and I have a witness): "Yes, the desktop is dead, but what about air traffic control, stock trading, oil analysis, risk management applications? In fact, what about any back office application that needs to be usable across all operating systems? Here there is no concern whatsoever with 100% accessibility which is, after all, the only thing that the web has over the desktop, (except when there's a network failure, of course, or when you find yourself in the 3/4 of the world where there's bandwidth problems)? There are 1000's of hidden applications out there that have processing requirements, security requirements, and the requirement that they'll be available even when the network is down or even completely unavailable. Isn't that a valid use case and aren't there 1000's of applications that fall into this so-called niche category? Are you not, in fact, confusing consumer applications, which are increasingly web-based and mobile-based, with high-end corporate applications, which typically need to do massive processing, of one kind or another, for which the web and mobile worlds are completely unsuited?" And you will not believe what the reply to the above question was. (Again, I have a witness to this discussion.) But here it is: "Yes. But those applications are not interesting. I do not want to spend any of my time or work in any way on those applications. They are boring." I'm sad to say that the leaders of the software development community, including those in the Java world, either share the above opinion or are led by it. Because they find something that is not new to be boring, they move on to what is interesting and start talking like the supposedly-boring developments don't even exist. (Kind of like a rapper pretending classical music doesn't exist.) Time and time again I find myself giving Java desktop development courses (at companies, i.e., not hobbyists, or students, but companies, i.e., the places where dollars are earned), where developers say to me: "The course you're giving about creating cross-platform, loosely coupled, and highly cohesive applications is really useful to us. Why do we never find information about this topic at conferences? Why can we never attend a session at a conference where the story about pluggable cross-platform Java is told? Why do we get the impression that we are uncool because we're not on the web and because we're not on a mobile phone, while the reason for that is because we're creating $1000,000 simulation software which has nothing to gain from being on the web or on the mobile phone?" And then I say: "Because nobody knows you exist. 
Because you're not submitting abstracts to conferences about your very interesting use cases. And because conferences tend to focus on what is new, which tends to be web related (especially HTML 5) or mobile related (especially Android). Because you're not taking the responsibility on yourself to tell the real stories about the real applications being developed all the time and every day. Because you yourself think your work is boring, while in fact it is fascinating. Because desktop developers are working from 9 to 5 on the desktop, in secure environments, such as banks and defense, where you can't spend time, nor have the interest in, blogging your latest tip or trick, as opposed to web developers, who tend to spend a lot of time on the web anyway and are therefore much more inclined to create buzz about the kind of work they're doing." So, next time you look at a conference program and wonder why there's no stories about large desktop development projects in the program, here's the short answer: "No one is going to put those items on the program until you start submitting those kinds of sessions. And until you start blogging. Until you start creating the buzz that the web developers have been creating around their work for the past 10 years or so. And, yes, indeed, programmers get the conference they deserve." And what about Tim Bray? Ask yourself, as Google's lead web technology evangelist, how many desktop developers do you think he talks to and, more generally, what his frame of reference is and what, clearly, he considers to be most interesting.

    Read the article

  • Trouble with Samba Domain

    - by Arkevius
    I'm having a bit of trouble setting up this Samba domain correctly. I'm getting an Access Denied error when trying to add a Windows XP machine to the domain. I'll go through my scenario in detail, but for those of you wanting a TLDR summary it'll be at the bottom of this post. I have HP Proliant server with Ubuntu 12.04 LTS installed. For this particular environment, I need this server to act as a PDC, file server, and print server. I began by updating and upgrading the packages (of course). Then went to install samba, gnome-desktop, wine, and cpanm. Samba was, of course, for the PDC and file/print services. The GUI was needed because a certain software has to be installed on there that needs a GUI. Wine was needed because the software is Windows-native. And cpanm was for a perl script I have running. For Samba, I went into the smb.conf file and enabled domain logons, changed the workgroup/domain name, the logon script for a per-group basis (netlogon/%g), enabled the netlogon and profiles share, and setup a couple of custom shares for the file service. The printer was added later, and seems to be working just fine. I then restarted the services, and used the net groupmap command to ensure my unix groups were mapped correctly to the Windows groups. After this, I went to a Windows box, and was able to successfully join the domain without a problem. After some fidgeting with the software to get it running on the win boxes from the server (it's a records management system program, which stores it's database files on the server), I went to add another computer to the domain. But now it's saying Access Denied. Before when I had this trouble it was because I forgot to add the group "machines" so Samba could create machine accounts. Thinking this was the case, I manually created the machine account to test this theory. However, it would still give me an Access Denied error. That must mean it has something to do with permissions now, correct? I've been fighting with this server for the past two weeks. If it's not one thing that;s wrong, then it's something else completely different. This would be the third time I've actually reinstalled everything to start over. I'll post snippets of my system settings below. If anything else is needed, just say the word and I'll gather up the info. The unix group 'domadmin' is the Domain Admins group. Samba Administrator account administrator:x:1000:1000:Administrator,,,:/home/administrator:/bin/bash Adminstrator's groups administrator adm cdrom sudo dip plugdev lpadmin sambashare domadmin crimestar Samba's Configuration FIle (a snippet anyways) [global] workgroup = CITYPD server string = BPDServer dns proxy = no log file = /var/log/samba/log.%m max log size = 1000 syslog = 0 panic action = /usr/share/samba/panic-action %d security = user encrypt passwords = true passdb backend = tdbsam obey pam restrictions = yes unix password sync = yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . 
pam password change = yes map to guest = bad user domain logons = yes logon path = \\%L\srv\samba\profiles\%U logon script = logon.bat add machine script = /usr/sbin/useradd -g machines -c "%u machine account" -d /var/lib/samba -s /bin/false %u domain master = yes usershare allow guests = yes [netlogon] comment = Network Logon Service path = /srv/samba/netlogon/%g guest ok = yes read only = yes browseable = no [profiles] comment = All Printers browseable = no path = /var/spool/samba printable = yes guest ok = no read only = yes create mask = 0700 [print$] comment = Printer Drivers path = /var/lib/samba/printers browseable = yes read only = yes guest ok = no write list = root, @lpadmin [crimestar] comment = "Crimestar DB" path = /srv/crimestar/db valid users = @domadmin, @crimestar admin users = administrator writeable = yes guest ok = no browseable = no create mask = 0666 directory mask = 0777 [crimestarfiles] path = /home/administrator/.wine/drive_c/crimestar admin users = administrator browseable = yes ls -la on /srv/samba/profiles drwxrwxrwx 2 root machines 4096 Nov 21 15:27 . drwxr-xr-x 4 root root 4096 Nov 21 15:28 .. ls -la on /srv/samba/netlogon drwxr-xr-x 6 root root 4096 Nov 21 15:30 . drwxr-xr-x 4 root root 4096 Nov 21 15:28 .. drwxr-xr-x 2 root root 4096 Nov 21 15:30 crimestar drwxr-xr-x 2 root root 4096 Nov 21 18:13 domadmin drwxr-xr-x 3 root root 4096 Nov 21 15:30 guests drwxr-xr-x 2 root root 4096 Nov 21 15:29 users GrouMap list Domain Users (S-1-5-21-2978508755-2341913247-928297747-513) -> users Domain Admins (S-1-5-21-2978508755-2341913247-928297747-512) -> domadmin Domain Guests (S-1-5-21-2978508755-2341913247-928297747-514) -> nogroup TLDR I'm getting an Access Denied error message while trying to join a windows box to a samba domain, even after I successfully joined another computer without a problem. System settings / files are quoted above. Anyone have any ideas or suggestions?

    Read the article

  • Cloud Computing Forces Better Design Practices

    - by Herve Roggero
    Is cloud computing simply different from on-premise development, or is cloud computing actually forcing you to create better applications than you normally would? In other words, is cloud computing merely imposing different design principles, or forcing better design principles?  A little while back I got into a discussion with a developer in which I was arguing that cloud computing, and specifically Windows Azure in his case, was forcing developers to adopt better design principles. His opinion was that cloud computing was not yielding better systems; just different systems. In this blog, I will argue that cloud computing does force developers to use better design practices, and hence build better applications. So the first thing to define, of course, is the word “better”, in the context of application development. Looking at a few definitions online, better means “superior quality”. As it relates to this discussion then, I stipulate that cloud computing can yield higher quality applications in terms of scalability, everything else being equal. Before going further I need to also outline the difference between performance and scalability. Performance and scalability are two related concepts, but they don’t mean the same thing. Scalability is the measure of system performance given various loads. So when developers design for performance, they usually give higher priority to a given load and tend to optimize for that load. When developers design for scalability, the actual performance at a given load is not as important; the ability to ensure reasonable performance regardless of the load becomes the objective. This can lead to very different design choices. For example, if your objective is to obtain the fastest response time possible for a service you are building, you may choose to implement a TCP connection that never closes until the client chooses to close the connection (in other words, a tightly coupled service from a connectivity standpoint), and on which a connection session is established for faster processing on the next request (like SQL Server or other database systems for example). If your objective is to scale, you may implement a service that answers requests without keeping session state, so that server resources are released as quickly as possible, like a REST service for example. This alternate design would likely have a slower response time than the TCP service for any given load, but would continue to function at very large loads because of its inherently loosely coupled design. An example of a REST service is the NO-SQL implementation in the Microsoft cloud called Azure Tables. Now, back to cloud computing… Cloud computing is designed to help you scale your applications, specifically when you use Platform as a Service (PaaS) offerings. However it’s not automatic. You can design a tightly-coupled TCP service as discussed above, and as you can imagine, it probably won’t scale even if you place the service in the cloud because it isn’t using a connection pattern that will allow it to scale [note: I am not implying that all TCP systems do not scale; I am just illustrating the scalability concepts with an imaginary TCP service that isn’t designed to scale for the purpose of this discussion].
The other service, using REST, will have a better chance to scale because, by design, it minimizes resource consumption for individual requests and doesn’t tie a client connection to a specific endpoint (which means you can easily deploy this service to hundreds of machines without much trouble, as long as your pockets are deep enough). The TCP and REST services discussed above are both valid designs; the TCP service is faster and the REST service scales better. So is it fair to say that one service is fundamentally better than the other? No; not unless you need to scale. And if you don’t need to scale, then you don’t need the cloud in the first place. However, it is interesting to note that if you do need to scale, then a loosely coupled system becomes a better design because it can almost always scale better than a tightly-coupled system. And because most applications grow overtime, with an increasing user base, new functional requirements, increased data and so forth, most applications eventually do need to scale. So in my humble opinion, I conclude that a loosely coupled system is not just different than a tightly coupled system; it is a better design, because it will stand the test of time. And in my book, if a system stands the test of time better than another, it is of superior quality. Because cloud computing demands loosely coupled systems so that its underlying service architecture can be leveraged, developers ultimately have no choice but to design loosely coupled systems for the cloud. And because loosely coupled systems are better… … the cloud forces better design practices. My 2 cents.
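To make the loosely coupled pattern in the article concrete, here is a minimal sketch of my own (not the author's) of a stateless HTTP endpoint: every request carries everything needed to answer it and nothing is remembered between calls, so more instances can be added behind a load balancer without re-binding any client. The URL prefix is just a placeholder.
```csharp
// Sketch: a stateless HTTP service. No per-client session is held, so any
// instance of this process can answer any request.
using System;
using System.Net;
using System.Text;

class StatelessEchoService
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/echo/"); // placeholder address
        listener.Start();
        Console.WriteLine("Listening on http://localhost:8080/echo/ ...");

        while (true)
        {
            HttpListenerContext ctx = listener.GetContext();
            // Everything needed arrives with the request; no state survives the call.
            string name = ctx.Request.QueryString["name"] ?? "world";
            byte[] body = Encoding.UTF8.GetBytes("hello, " + name);
            ctx.Response.ContentType = "text/plain";
            ctx.Response.OutputStream.Write(body, 0, body.Length);
            ctx.Response.Close();
        }
    }
}
```
Contrast this with the imaginary long-lived TCP service described earlier: per-request the TCP design is faster, but the stateless version is the one that keeps working as the load, and the number of servers, grows.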

    Read the article

  • Why do I get an exception when playing multiple sound instances?

    - by Boreal
    Right now, I'm adding a rudimentary sound engine to my game. So far, I am able to load in a WAV file and play it once, then free up the memory when I close the game. However, the game crashes with a nice ArgumentOutOfBoundsException when I try to play another sound instance. Specified argument was out of the range of valid values. Parameter name: readLength I'm following this tutorial pretty much exactly, but I still keep getting the aforementioned error. Here's my sound-related code. /// <summary> /// Manages all sound instances. /// </summary> public static class Audio { static XAudio2 device; static MasteringVoice master; static List<SoundInstance> instances; /// <summary> /// The XAudio2 device. /// </summary> internal static XAudio2 Device { get { return device; } } /// <summary> /// Initializes the audio device and master track. /// </summary> internal static void Initialize() { device = new XAudio2(); master = new MasteringVoice(device); instances = new List<SoundInstance>(); } /// <summary> /// Releases all XA2 resources. /// </summary> internal static void Shutdown() { foreach(SoundInstance i in instances) i.Dispose(); master.Dispose(); device.Dispose(); } /// <summary> /// Registers a sound instance with the system. /// </summary> /// <param name="instance">Sound instance</param> internal static void AddInstance(SoundInstance instance) { instances.Add(instance); } /// <summary> /// Disposes any sound instance that has stopped playing. /// </summary> internal static void Update() { List<SoundInstance> temp = new List<SoundInstance>(instances); foreach(SoundInstance i in temp) if(!i.Playing) { i.Dispose(); instances.Remove(i); } } } /// <summary> /// Loads sounds from various files. /// </summary> internal class SoundLoader { /// <summary> /// Loads a .wav sound file. /// </summary> /// <param name="format">The decoded format will be sent here</param> /// <param name="buffer">The data will be sent here</param> /// <param name="soundName">The path to the WAV file</param> internal static void LoadWAV(out WaveFormat format, out AudioBuffer buffer, string soundName) { WaveStream wave = new WaveStream(soundName); format = wave.Format; buffer = new AudioBuffer(); buffer.AudioData = wave; buffer.AudioBytes = (int)wave.Length; buffer.Flags = BufferFlags.EndOfStream; } } /// <summary> /// Manages the data for a single sound. /// </summary> public class Sound : IAsset { WaveFormat format; AudioBuffer buffer; /// <summary> /// Loads a sound from a file. /// </summary> /// <param name="soundName">The path to the sound file</param> /// <returns>Whether the sound loaded successfully</returns> public bool Load(string soundName) { if(soundName.EndsWith(".wav")) SoundLoader.LoadWAV(out format, out buffer, soundName); else return false; return true; } /// <summary> /// Plays the sound. /// </summary> public void Play() { Audio.AddInstance(new SoundInstance(format, buffer)); } /// <summary> /// Unloads the sound from memory. /// </summary> public void Unload() { buffer.Dispose(); } } /// <summary> /// Manages a single sound instance. /// </summary> public class SoundInstance { SourceVoice source; bool playing; /// <summary> /// Whether the sound is currently playing. /// </summary> public bool Playing { get { return playing; } } /// <summary> /// Starts a new instance of a sound. 
/// </summary> /// <param name="format">Format of the sound</param> /// <param name="buffer">Buffer holding sound data</param> internal SoundInstance(WaveFormat format, AudioBuffer buffer) { source = new SourceVoice(Audio.Device, format); source.BufferEnd += (s, e) => playing = false; source.Start(); source.SubmitSourceBuffer(buffer); // THIS IS WHERE THE EXCEPTION IS THROWN playing = true; } /// <summary> /// Releases memory used by the instance. /// </summary> internal void Dispose() { source.Dispose(); } } The exception occurs on line 156 when I am playing the sound: source.SubmitSourceBuffer(buffer);
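The post leaves the question open, so the following is only a guess at one common cause: the single AudioBuffer wraps a WaveStream whose read position is left at the end of the data after the first playback, so the next SubmitSourceBuffer call has nothing left to read. Assuming SlimDX's AudioData behaves like a normal seekable Stream (the loading code above suggests it does), a minimal guard in Sound.Play() would look like this:
```csharp
/// <summary>
/// Plays the sound.
/// </summary>
public void Play()
{
    // Hypothetical fix: rewind the shared stream so a second playback
    // does not start at (or past) the end of the audio data.
    if (buffer.AudioData != null && buffer.AudioData.CanSeek)
        buffer.AudioData.Position = 0;

    Audio.AddInstance(new SoundInstance(format, buffer));
}
```
A more robust variant would keep the decoded bytes in the Sound object and give each SoundInstance its own buffer, so overlapping instances never read from one shared stream; that is left as a design note rather than code, since it depends on SlimDX types not shown in the post.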

    Read the article

  • JDeveloper does not recognize existing subversion working directory

    - by Bob Webster
    Just a quick note about an issue where JDeveloper no longer recognized an existing subversion working directory. Symptom:  JDeveloper Versioning menu offers to Version an Application that is already versioned in svn. Cause: The repository url contained in the hidden .svn folders of the working directory is no longer valid. Solution: Determine the correct url for the Subversion repository and update the .svn working directory.Fix the url contained in the svn folders of the working directory using the svn switch command. Example:           In a shell change directory to the Application folder.           Run the svn info command to confirm the current settings.                $ svn info                   Path: .                   URL: http://192.168.1.128/repos/jdeveloperrepo/AsyncExamples/BPELCallAsync/trunk                   Repository Root: http://192.168.1.128/repos/jdeveloperrepo                   Repository UUID: 3dc5eb88-3001-0010-8d6e-fd6f73825647                   Revision: 145                   Node Kind: directory                   Schedule: normal                   Last Changed Rev: 145                   Last Changed Date: 2012-06-07 07:15:56 -0700 (Thu, 07 Jun 2012)            In this case, the IP address in the repository URL is incorrect,           the svn server is located at 192.168.56.1           Note: The IP Address currently set is displayed after the Project Name in the            Application Navigator.  See the screen snapshot above.            Run the svn switch command with the --relocate option            Provide as much of the urls as necessary to correctly rewrite the url from current to new.            For example,            to change the repository server address from 192.168.1.128   to   192.168.56.1                     $  svn switch --relocate  http://192.168.1.128   http://192.168.56.1  .                               (Note the trailing period in the above command)           When the url is correct, JDeveloper should recognize the Subversion Working Directory.

    Read the article

  • www.domain redirecting to google?

    - by aayush
    Note: A while back i had no place to host my domain, then via namecheap i set it to forward my domain to google I bought webhosting again today and everything was working fine. I set up vhosts and set up www.domain as the server alias. Both worked. Then i tried to set up a alternate subdomain test.domain, but failed (I did it by creating a alternate vhost right below the current one) as it kept redirecting to google. As a test, i replaced the www with test in serveralias, it still redirected to google but now even www redirects to google. I am using cloudflare, and i am really confused how to go about this. I tried listing www as a cname and as a A record, still redirecting to google. I tried checking via proxies e.t.c, its universal and hence not a problem of my PC. Please help, i am really distressed by this. I am running a ubuntu 13.10 x32 stack with LAMP. Here is what my domain.com.conf file looks like <VirtualHost *:80> # The ServerName directive sets the request scheme, hostname and port that # the server uses to identify itself. This is used when creating # redirection URLs. In the context of virtual hosts, the ServerName # specifies what hostname must appear in the request's Host: header to # match this virtual host. For the default virtual host (this file) this # value is not decisive as it is used as a last resort host regardless. # However, you must set it for any further virtual host explicitly. ServerName domain.com ServerAlias www.domain.com ServerAdmin webmaster@localhost DocumentRoot /var/www/domain.com/public_html # Available loglevels: trace8, ..., trace1, debug, info, notice, warn, # error, crit, alert, emerg. # It is also possible to configure the loglevel for particular # modules, e.g. #LogLevel info ssl:warn ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined # For most configuration files from conf-available/, which are # enabled or disabled at a global level, it is possible to # include a line for only one particular virtual host. For example the # following line enables the CGI configuration for this host only # after it has been globally disabled with "a2disconf". #Include conf-available/serve-cgi-bin.conf </VirtualHost> There is a valid index.php file at the end of the documentroot aswell. The website in question is aayushagra.com Edit: On cloudflare i tried removing the www entirely, and it still sent me to google Edit: Zone file ;; Domain: aayushagra.com ;; Exported: 2013-11-03 07:37:52 ;; ;; This file is intended for use for informational and archival ;; purposes ONLY and MUST be edited before use on a production ;; DNS server. In particular, you must: ;; -- update the SOA record with the correct authoritative name server ;; -- update the SOA record with the contact e-mail address information ;; -- update the NS record(s) with the authoritative name servers for this domain. ;; ;; For further information, please consult the BIND documentation ;; located on the following website: ;; ;; http://www.isc.org/ ;; ;; And RFC 1035: ;; ;; http://www.ietf.org/rfc/rfc1035.txt ;; ;; Please note that we do NOT offer technical support for any use ;; of this zone data, the BIND name server, or any other third-party ;; DNS software. ;; ;; Use at your own risk. ;; $ORIGIN aayushagra.com. @ 3600 IN SOA aayushagra.com. root.aayushagra.com. ( 2013110301 ; serial 7200 ; refresh 3600 ; retry 86400 ; expire 3600) ; minimum ;; MX Records aayushagra.com. 300 IN MX aayushagra.com. ;; CNAME Records direct.aayushagra.com. 300 IN CNAME aayushagra.com. 
;; A Records (IPv4 addresses) www.aayushagra.com. 300 IN A 146.185.140.31 aayushagra.com. 300 IN A 146.185.140.31

    Read the article

  • Data Source Security Part 1

    - by Steve Felts
    I’ve written a couple of articles on how to store data source security credentials using the Oracle wallet.  I plan to write a few articles on the various types of security available to WebLogic Server (WLS) data sources.  There are more options than you might think! There have been several enhancements in this area in WLS 10.3.6.  There are a couple of more enhancements planned for release WLS 12.1.2 that I will include here for completeness.  This isn’t intended as a teaser.  If you call your Oracle support person, you can get them now as minor patches to WLS 10.3.6.   The current security documentation is scattered in a few places, has a few incorrect statements, and is missing a few topics.  It also seems that the knowledge of how to apply some of these features isn’t written down.  The goal of these articles is to talk about WLS data source security in a unified way and to introduce some approaches to using the available features.  Introduction to WebLogic Data Source Security Options By default, you define a single database user and password for a data source.  You can store it in the data source descriptor or make use of the Oracle wallet.  This is a very simple and efficient approach to security.  All of the connections in the connection pool are owned by this user and there is no special processing when a connection is given out.  That is, it’s a homogeneous connection pool and any request can get any connection from a security perspective (there are other aspects like affinity).  Regardless of the end user of the application, all connections in the pool use the same security credentials to access the DBMS.   No additional information is needed when you get a connection because it’s all available from the data source descriptor (or wallet). java.sql.Connection conn =  mydatasource.getConnection(); Note: You can enter the password as a name-value pair in the Properties field (this not permitted for production environments) or you can enter it in the Password field of the data source descriptor. The value in the Password field overrides any password value defined in the Properties passed to the JDBC Driver when creating physical database connections. It is recommended that you use the Password attribute in place of the password property in the properties string because the Password value is encrypted in the configuration file (stored as the password-encrypted attribute in the jdbc-driver-params tag in the module file) and is hidden in the administration console.  The Properties and Password fields are located on the administration console Data Source creation wizard or Data Source Configuration tab. The JDBC API can also be used to programmatically specify a database user name and password as in the following.  java.sql.Connection conn = mydatasource.getConnection(“user”, “password”); According to the JDBC specification, it’s supposed to take a database user and associated password but different vendors implement this differently.  WLS, by default, treats this as an application server user and password.  The pair is authenticated to see if it’s a valid user and that user is used for WLS security permission checks.  By default, the user is then mapped to a database user and password using the data source credential mapper, so this API sort of follows the specification but database credentials are one-step removed from the application code.  More details and the rationale are described later. 
While the default approach is simple, it does mean that only one database user is doing all of the work. You can’t figure out who actually did the update and you can’t restrict SQL operations by who is running the operation, at least at the database level. Any type of per-user logic will need to be in the application code instead of having the database do it. There are various WLS data source features that can be configured to provide some per-user information about the operations to the database. WebLogic Data Source Security Options The following table describes the features available for WebLogic data sources to configure database security credentials, gives a brief description of each, and captures the compatibility of these features with one another.
- User authentication (default). Description: default getConnection(user, password) behavior – validate the input and use the user/password in the descriptor. Can be used with: Set client identifier. Can’t be used with: Proxy Session, Identity pooling, Use database credentials.
- Use database credentials. Description: instead of using the credential mapper, use the supplied user and password directly. Can be used with: Set client identifier, Proxy session, Identity pooling. Can’t be used with: User authentication, Multi Data Source.
- Set Client Identifier. Description: set a client identifier property associated with the connection (Oracle and DB2 only). Can be used with: everything (no listed restrictions).
- Proxy Session. Description: set a light-weight proxy user associated with the connection (Oracle-only). Can be used with: Set client identifier, Use database credentials. Can’t be used with: Identity pooling, User authentication.
- Identity pooling. Description: heterogeneous pool of connections owned by specified users. Can be used with: Set client identifier, Use database credentials. Can’t be used with: Proxy session, User authentication, Labeling, Multi-datasource, Active GridLink.
Note that all of these features are available with both XA and non-XA drivers. Currently, the Proxy Session and Use Database Credentials options are on the Oracle tab of the Data Source Configuration tab of the administration console (even though the Use Database Credentials feature is not just for Oracle databases – oops). The rest of the features are on the Identity tab of the Data Source Configuration tab in the administration console (plan on seeing them all in one place in the future). The subsequent articles will describe these features in more detail. Keep referring back to this table to see the big picture.

    Read the article

< Previous Page | 207 208 209 210 211 212 213 214 215 216 217 218  | Next Page >