Search Results

Search found 14531 results on 582 pages for 'proxy pass'.

  • Why is there no service-oriented language?

    - by Wolfgang
    Edit: To avoid further confusion: I am not talking about web services and the like. I am talking about structuring applications internally; it's not about how computers communicate. It's about programming languages, compilers, and how the imperative programming paradigm is extended.

    Original: In the imperative programming field, we have seen two paradigms in the past 20 years (or more): object-oriented (OO) and service-oriented (SO), a.k.a. component-based (CB). Both paradigms extend the imperative programming paradigm by introducing their own notion of modules. OO calls them objects (and classes) and lets them encapsulate both data (fields) and procedures (methods) together. SO, in contrast, separates data (records, beans, ...) from code (components, services).

    However, only OO has programming languages which natively support its paradigm: Smalltalk, C++, Java and all other JVM-compatible languages, C# and all other .NET-compatible languages, Python, etc. SO has no such native language. It only comes into existence on top of procedural languages or OO languages: COM/DCOM (binary, C, C++), CORBA, EJB, Spring, Guice (all Java), ... These SO frameworks clearly suffer from the missing native language support for their concepts:

    - They use OO classes to represent services and records. This leads to designs with a clear distinction between classes that have methods only (services) and classes that have fields only (records). Inheritance between services or records is then simulated by inheritance of classes. Technically, it's not kept so strictly, but in general programmers are advised to make classes play only one of the two roles.
    - They use additional, external languages to represent the missing parts: IDLs, XML configurations, annotations in Java code, or even embedded DSLs as in Guice. This is especially necessary, but not limited to, because the composition of services is not part of the service code itself. In OO, objects create other objects, so there is no need for such facilities; in SO there is, because services don't instantiate or configure other services.
    - They establish an inner-platform effect on top of OO (early EJB, CORBA), where the programmer has to write all the code that is needed to "drive" SO. Classes represent only a part of the nature of a service, and lots of classes have to be written to form a service together. All that boilerplate is necessary because there is no SO compiler which would produce it for the programmer. This is just like what some people did in C for OO before there was C++: you pass the record which holds the data of the object as a first parameter to the procedure which is the method (see the sketch below). In an OO language this parameter is implicit, and the compiler produces all the code that we need for virtual functions etc. For SO, this is clearly missing.
    - Especially the newer frameworks extensively use AOP or introspection to add the missing parts to an OO language. This doesn't bring the necessary language expressiveness, but it avoids the boilerplate code described in the previous point.
    - Some frameworks use code generation to produce the boilerplate code. Configuration files in XML or annotations in OO code are the source of information for this.

    Not all of the phenomena I mentioned above can be attributed to SO, but I hope it clearly shows that there is a need for an SO language. Since this paradigm is so popular: why isn't there one? Or maybe there are some academic ones, but at least the industry doesn't use one.
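    For readers who haven't seen the C-for-OO pattern mentioned above, here is a minimal sketch of it; the account example and all names are mine, purely illustrative:

    ```c
    #include <stdio.h>

    /* The "record" carries the object's data; the "method" takes it as an
       explicit first parameter -- exactly what an OO compiler makes implicit. */
    typedef struct {
        double balance;
    } Account;

    void account_deposit(Account *self, double amount) {
        self->balance += amount;   /* 'self' plays the role of the implicit 'this' */
    }

    int main(void) {
        Account a = { 0.0 };
        account_deposit(&a, 100.0);
        printf("%.2f\n", a.balance);
        return 0;
    }
    ```

    An SO language would presumably generate the analogous wiring (lookup, configuration, composition) that today's frameworks make the programmer write by hand.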

  • Method flags as arguments or as member variables?

    - by Martin
    I think the title "Method flags as arguments or as member variables?" may be suboptimal, but as I'm missing any better terminology at the moment, here goes: I'm currently trying to get my head around the problem of whether flags for a given class (private) method should be passed as function arguments or via a member variable, and/or whether there is some pattern or name that covers this aspect, and/or whether this hints at some other design problem.

    By example (the language could be C++, Java, C#, it doesn't really matter IMHO):

    ```
    class Thingamajig {
        private ResultType DoInternalStuff(FlagType calcSelect) {
            ResultType res;
            for (... some loop condition ...) {
                ...
                if (calcSelect == typeA) { ... }
                else if (calcSelect == typeX) { ... }
                else if ...
            }
            ...
            return res;
        }

        private void InternalStuffInvoker(FlagType calcSelect) {
            ...
            DoInternalStuff(calcSelect);
            ...
        }

        public void DoThisStuff() {
            ... some code ...
            InternalStuffInvoker(typeA);
            ... some more code ...
        }

        public ResultType DoThatStuff() {
            ... some code ...
            ResultType x = DoInternalStuff(typeX);
            ... some more code ... further process x ...
            return x;
        }
    }
    ```

    What we see above is that the method InternalStuffInvoker takes an argument that is not used inside this function at all but is only forwarded to the other private method DoInternalStuff. (DoInternalStuff is also used privately at other places in this class, e.g. in the public DoThatStuff method.)

    An alternative solution would be to add a member variable that carries this information:

    ```
    class Thingamajig {
        private ResultType DoInternalStuff() {
            ResultType res;
            for (... some loop condition ...) {
                ...
                if (m_calcSelect == typeA) { ... }
                ...
            }
            ...
            return res;
        }

        private void InternalStuffInvoker() {
            ...
            DoInternalStuff();
            ...
        }

        public void DoThisStuff() {
            ... some code ...
            m_calcSelect = typeA;
            InternalStuffInvoker();
            ... some more code ...
        }

        public ResultType DoThatStuff() {
            ... some code ...
            m_calcSelect = typeX;
            ResultType x = DoInternalStuff();
            ... some more code ... further process x ...
            return x;
        }
    }
    ```

    Especially for deep call chains where the selector flag for the inner method is chosen outside, using a member variable can make the intermediate functions cleaner, as they don't need to carry a pass-through parameter. On the other hand, this member variable isn't really representing any object state (it's neither set nor available outside), but is really a hidden additional argument for the "inner" private method.

    What are the pros and cons of each approach?

  • Cloud hosted CI for .NET projects

    - by Scott Dorman
    Originally posted on: http://geekswithblogs.net/sdorman/archive/2014/06/02/cloud-hosted-ci-for-.net-projects.aspx

    Continuous integration (CI) is important. If you don't have it set up... you should. There are a lot of different options available for hosting your own CI server, but they all require you to maintain your own infrastructure. If you're a business, that generally isn't a problem. However, if you have some open source projects hosted, for example on GitHub, there haven't really been any options. That has changed with the latest release of AppVeyor, which bills itself as "Continuous integration for busy developers."

    What's different about AppVeyor is that it's a hosted solution. Why is that important? Being a hosted solution means that I don't have to maintain my own infrastructure for a build server.

    How does that help if you're hosting an open source project? AppVeyor has a really competitive pricing plan: for an unlimited number of public repositories, it's free. That gives you a cloud-hosted CI system for all of your GitHub projects for the cost of some time to set them up, which actually isn't hard to do at all.

    I have several open source projects (hosted at https://github.com/scottdorman), so I signed up using my GitHub credentials. AppVeyor fully supported my two-factor authentication with GitHub, so I never once had to enter my GitHub password into AppVeyor. Once it was done, I authorized GitHub and it instantly found all of the repositories I have (both the ones I created and the ones I cloned from elsewhere). You can even add "build badges" to your markdown files in GitHub, so anyone who visits your project can see the status of the latest build.

    Out of the box, you can simply select a repository, add the build project, click New Build, and wait for the build to complete. You now have a complete CI server running for your project. The best part of this, besides the fact that it "just worked" with almost zero configuration, is that you can configure it either through a web-based interface which is very streamlined, clean and easy to use, or through an appveyor.yml file (a sketch follows below). This means that you can define your CI build process (including any scripts that might need to be run, etc.) in a standard file format (YAML) and store it in your repository. The benefits of that are huge: the file becomes a versioned artifact in your source control system, so it can be branched, merged, and is completely transparent to anyone working on the project.

    By the way, AppVeyor isn't limited to just GitHub. It currently supports GitHub, BitBucket, Visual Studio Online, and Kiln.

    I did have a few issues getting one of my projects to build, but the same day I posted the problem to the support forum a fix was deployed, and I had a functioning CI build about 5 minutes after that. Since then, I've provided some additional feature requests and had a few other questions, all of which have seen responses within a 24-hour period. I have to say that it's easily been one of the best customer support experiences I've seen in a long time.

    AppVeyor is still young, so it doesn't yet have full feature parity with some of the older (more established) CI systems available, but it's getting better all the time and I have no doubt that it will quickly catch up to those other CI systems and then pass them.

    The bottom line: if you're looking for a good cloud-hosted CI system for your .NET-based projects, look at AppVeyor.
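    For reference, a minimal appveyor.yml might look like the sketch below. The key names follow AppVeyor's documented format, but the solution name is a placeholder and the exact steps vary per project; treat this as an illustrative assumption, not a verified configuration:

    ```yaml
    # Minimal illustrative appveyor.yml for a .NET solution
    version: 1.0.{build}

    before_build:
      - nuget restore MySolution.sln   # assumed solution name

    build:
      project: MySolution.sln
      verbosity: minimal
    ```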

  • How to deploy global managed beans

    - by frank.nimphius
    "Global managed beans" is the term I use in this post to describe beans that are used across applications. Global managed beans contain helper or utility methods, like those in JSFUtils and ADFUtils. The difference between global managed beans and static helper classes like JSFUtils and ADFUtils is that they are EL accessible, providing reusable functionality that is ready to use on UI components and, if accessed from Java, in other managed beans.

    For example, the ADF Faces page template (af:pageTemplate) allows you to define attributes for the consuming page to pass object references or strings into it. It does not have method attributes that allow command components contained in a template to invoke listeners in managed beans and the ADF binding layer, or to execute actions. To create templates that provide global button or menu functionality, like logon, logout, print, etc., an option for developers is to deploy managed beans with the ADF Faces page templates.

    To deploy a managed bean with a page template, create an ADF library from the project containing the template definition and import this ADF library into the target project using the Resource palette. When importing an ADF library, all its content is added to the project, including page template definitions, managed bean sources and configurations.

    More about page templates: http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e12419/tagdoc/af_pageTemplate.html

    Another use case for globally configured managed beans is creating helper methods to be used in many applications. Instead of creating a base managed bean class that is then extended by all managed beans used in applications, you can deploy a managed bean in a JAR file and add the faces-config.xml file with the managed bean configuration to the JAR's META-INF folder, as shown below. Using a globally configured managed bean allows you to use Expression Language in the UI to access common functionality, and also to use Java in application-specific managed beans. Storing the faces-config.xml file in the JAR file's META-INF directory automatically makes it available when the JAR file is found in the class path of an application.
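    The original post illustrated this with a screenshot; as a stand-in, a faces-config.xml packaged under META-INF might look like the following sketch (bean name and class are illustrative assumptions):

    ```xml
    <!-- META-INF/faces-config.xml inside the shared JAR -->
    <faces-config xmlns="http://java.sun.com/xml/ns/javaee" version="1.2">
      <managed-bean>
        <managed-bean-name>globalHelper</managed-bean-name>
        <managed-bean-class>com.example.view.GlobalHelperBean</managed-bean-class>
        <managed-bean-scope>request</managed-bean-scope>
      </managed-bean>
    </faces-config>
    ```

    Any application with the JAR on its class path could then reference the bean in EL, e.g. #{globalHelper.someMethod}.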

  • Project Doneness and Einstein's Razor

    - by Malcolm Anderson
    I've started working on a series of articles about the value of having testers involved in requirements gathering. Today I was reminded of a useful tool that has provided value to me for at least 20 years. To those of you who already use this tool, I'm interested in your stories where it has made a difference for you; and to those of you who have never heard of it, I hope sharing it will make a difference in your careers.

    I was reminded of it because I just finished a 3-month set of personal projects and was reviewing the success of those projects while putting together my next set of 3-month projects. During this review, I noticed that a good number of my projects did not have the level of success that I wanted. The results were good, but they could have been better. Then it hit me: I didn't have clear enough doneness criteria. As a Scrum practitioner, I wouldn't think of running a sprint without reviewing the backlog with Einstein's Razor, so why wouldn't I do the same for my own projects?

    I can hear a few of you asking "What's Einstein's Razor?" I'm glad you asked. I was once told that Einstein told an audience, "If you can't explain what you do to a relatively bright six year old, you probably don't understand it yourself."

    This quote had an impact on me, especially early in my career as a solo developer. At the time, I was mostly doing end-to-end software development. I found that I saved myself a lot of pain and trouble by turning that quote around to: "If you can't explain your project's doneness criteria in such a way that a relatively bright six year old can't competently determine your project's success or failure, then you have not broken it down to a fine enough level." There are more negatives in that quote than I'm happy with, but it still gives me tons of value to this day.

    In your opinion, in your current projects, could a six year old competently pass or fail your next sprint? What risks are you running if your answer is "No"?

  • disks not ready in array causes mdadm to force initramfs shell

    - by RaidPinata
    Okay, this is starting to get pretty frustrating. I've read most of the other answers on this site that have anything to do with this issue, but I'm still not getting anywhere. I have a RAID 6 array with 10 devices and 1 spare. The OS is on a completely separate device. At boot, only three of the 10 devices in the RAID are available; the others become available later in the boot process. Currently, unless I go through initramfs I can't get the system to boot - it just hangs with a blank screen. When I do boot through recovery (initramfs), I get a message asking if I want to assemble the degraded array. If I say no and then exit initramfs, the system boots fine and my array is mounted exactly where I intend it to be.

    Here are the pertinent files, as near as I can tell. Ask me if you want to see anything else.

    mdadm.conf:

    ```
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #

    # by default (built-in), scan all partitions (/proc/partitions) and all
    # containers for MD superblocks. alternatively, specify devices to scan, using
    # wildcards if desired.
    #DEVICE partitions containers

    # auto-create devices with Debian standard permissions
    # CREATE owner=root group=disk mode=0660 auto=yes

    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>

    # instruct the monitoring daemon where to send mail alerts
    MAILADDR root

    # definitions of existing MD arrays

    # This file was auto-generated on Tue, 13 Nov 2012 13:50:41 -0700
    # by mkconf $Id$
    ARRAY /dev/md0 level=raid6 num-devices=10 metadata=1.2 spares=1 name=Craggenmore:data UUID=37eea980:24df7b7a:f11a1226:afaf53ae
    ```

    Here is fstab:

    ```
    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    # / was on /dev/sdc2 during installation
    UUID=3fa1e73f-3d83-4afe-9415-6285d432c133  /     ext4  errors=remount-ro                 0  1
    # swap was on /dev/sdc3 during installation
    UUID=c4988662-67f3-4069-a16e-db740e054727  none  swap  sw                                0  0
    # mount large raid device on /data
    /dev/md0                                   /data ext4  defaults,nofail,noatime,nobootwait 0  0
    ```

    Output of cat /proc/mdstat:

    ```
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid6 sda[0] sdd[10](S) sdl[9] sdk[8] sdj[7] sdi[6] sdh[5] sdg[4] sdf[3] sde[2] sdb[1]
          23441080320 blocks super 1.2 level 6, 512k chunk, algorithm 2 [10/10] [UUUUUUUUUU]

    unused devices: <none>
    ```

    Here is the output of mdadm --detail --scan --verbose:

    ```
    ARRAY /dev/md0 level=raid6 num-devices=10 metadata=1.2 spares=1 name=Craggenmore:data UUID=37eea980:24df7b7a:f11a1226:afaf53ae
       devices=/dev/sda,/dev/sdb,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdi,/dev/sdj,/dev/sdk,/dev/sdl,/dev/sdd
    ```

    Please let me know if there is anything else you think might be useful in troubleshooting this... I just can't seem to figure out how to change the boot process so that mdadm waits until the drives are ready to build the array. Everything works just fine if the drives are given enough time to come online.

    Edit: changed title to properly reflect the situation.

  • Using Solaris zfs + iscsi targets with Oracle VM

    - by wim.coekaerts
    I was playing with my Oracle VM setup and needed some shared storage that was block based. I did not have a storage array available, but I did have a Solaris box (the one I use for Oracle VDI) available. I set up a few iSCSI targets on this Solaris server and exported them to my 2 Oracle VM servers. Here's how I did it:

    (1) On the Solaris side:

    ```
    # zpool list
    NAME    SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
    rpool   544G  129G   415G  23%  ONLINE  -
    ```

    I just have a simple zpool, called rpool, on this box. It has plenty of space available for my needs, so I will use rpool and create 5 50 GB volumes:

    ```
    zfs create -V 50G rpool/ovm1
    zfs create -V 50G rpool/ovm2
    zfs create -V 50G rpool/ovm3
    zfs create -V 50G rpool/ovm4
    zfs create -V 50G rpool/ovm5
    ```

    I want to use these volumes for iSCSI, so I have to enable them as shared iSCSI devices:

    ```
    zfs set shareiscsi=on rpool/ovm1
    zfs set shareiscsi=on rpool/ovm2
    zfs set shareiscsi=on rpool/ovm3
    zfs set shareiscsi=on rpool/ovm4
    zfs set shareiscsi=on rpool/ovm5
    ```

    The command iscsitadm list target should list these devices, so make sure they show up:

    ```
    # iscsitadm list target
    Target: rpool/ovm1
        iSCSI Name: iqn.1986-03.com.sun:02:896c766c-0943-4da5-d47e-9575b5a0be36
        Connections: 2
    Target: rpool/ovm2
        iSCSI Name: iqn.1986-03.com.sun:02:a3116b46-73e0-e8c2-e80c-9a4f71aff069
        Connections: 2
    Target: rpool/ovm3
        iSCSI Name: iqn.1986-03.com.sun:02:a838c400-2730-c0d6-f2c2-ee186a0261c1
        Connections: 2
    Target: rpool/ovm4
        iSCSI Name: iqn.1986-03.com.sun:02:2e046afb-d66d-4f3f-c5de-8115e0ddd931
        Connections: 2
    Target: rpool/ovm5
        iSCSI Name: iqn.1986-03.com.sun:02:66109fbe-81ac-ef05-f85e-ab8c1f34cb43
        Connections: 2
    ```

    At this point I want to make sure that I have some access control on these devices. To make it easier, I will create an alias for my 2 servers and use the alias for the ACL. Get the iqn from the 2 OVM servers (wcoekaer-srv1, wcoekaer-srv2): on each server, get the content of /etc/iscsi/initiatorname.iscsi:

    ```
    InitiatorName=iqn.1986-03.com.sun:01:2a7526f0ffff
    ```

    On the Solaris side, create the aliases:

    ```
    iscsitadm create initiator -n iqn.1986-03.com.sun:01:2a7526f0ffff wcoekaer-srv1
    iscsitadm create initiator -n iqn.1986-03.com.sun:01:e31b08110f1 wcoekaer-srv5
    ```

    Add the ACL to the targets:

    ```
    iscsitadm modify target -l wcoekaer-srv1 rpool/ovm1
    iscsitadm modify target -l wcoekaer-srv1 rpool/ovm2
    iscsitadm modify target -l wcoekaer-srv1 rpool/ovm3
    iscsitadm modify target -l wcoekaer-srv1 rpool/ovm4
    iscsitadm modify target -l wcoekaer-srv1 rpool/ovm5
    iscsitadm modify target -l wcoekaer-srv5 rpool/ovm1
    iscsitadm modify target -l wcoekaer-srv5 rpool/ovm2
    iscsitadm modify target -l wcoekaer-srv5 rpool/ovm3
    iscsitadm modify target -l wcoekaer-srv5 rpool/ovm4
    iscsitadm modify target -l wcoekaer-srv5 rpool/ovm5
    ```

    (2) On the Oracle VM side, on each server, just do 2 simple things:

    ```
    # iscsiadm -m discovery -t sendtargets -p ca-vdi1
    # service iscsi restart
    ```

    where ca-vdi1 is my Solaris server name.

    When I do cat /proc/partitions on my servers, I see the devices show up:

    ```
    # cat /proc/partitions
    major minor  #blocks  name
       8     0  160836480 sda
       8     1     104391 sda1
       8     2    3148740 sda2
       8     3    1052257 sda3
     253     0    6377804 dm-0
     253     1    6377804 dm-1
     253     2    6377804 dm-2
       8    16   52428800 sdb
       8    32   52428800 sdc
       8    48   52428800 sdd
       8    80   52428800 sdf
       8    64   52428800 sde
    ```

    These 5 new devices sd[b..f] are shared storage for Oracle VM and can be used to pass through to the VMs as phy: devices, or you can put OCFS2 on them and use them as shared filesystem storage for dom0 repositories.

    I am setting up an 11gR2 RAC template (the cool stuff Saar did), so I am using my devices to create a 2-node RAC cluster with phy: devices.
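    For completeness, handing one of these LUNs to a guest as a phy: device is done in the guest's vm.cfg; a minimal illustrative line (device names assumed) looks like:

    ```
    # vm.cfg fragment: pass /dev/sdb through to the guest as xvdb, writable
    disk = [ 'phy:/dev/sdb,xvdb,w' ]
    ```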

  • SQL Server Optimizer Malfunction?

    - by Tony Davis
    There was a sharp intake of breath from the audience when Adam Machanic declared the SQL Server optimizer to be essentially "stuck in 1997". It was during his fascinating "Query Tuning Mastery: Manhandling Parallelism" session at the recent PASS SQL Summit. Paraphrasing somewhat, Adam (blog | @AdamMachanic) offered a convincing argument that the optimizer often delivers flawed plans based on assumptions that are no longer valid with today's hardware.

    In 1997, when Microsoft engineers re-designed the database engine for SQL Server 7.0, SQL Server got its initial implementation of a cost-based optimizer. Up to SQL Server 2000, the developer often had to deploy a steady stream of hints in SQL statements to combat the occasionally wilful plan choices made by the optimizer. However, with each successive release, the optimizer has evolved and improved in its decision-making. It is still prone to the occasional stumble when we tackle difficult problems, join large numbers of tables, perform complex aggregations, and so on, but for most of us, most of the time, the optimizer purrs along efficiently in the background.

    Adam, however, challenged further any assumption that the current optimizer is competent at providing the most efficient plans for our more complex analytical queries, and in particular at offering up correctly parallelized plans. He painted a picture of a present where complex analytical queries have become ever more prevalent; where disk IO is ever faster, so that reads from disk come into buffer cache faster than ever; where the improving RAM-to-data ratio means we have a better chance of finding our data in cache. Most importantly, we have more CPUs at our disposal than ever before. To get these queries to perform, we not only need to have the right indexes, but also to be able to split the data up into subsets and spread its processing evenly across all these available CPUs. Improvements such as support for ColumnStore indexes are taking things in the right direction but, unfortunately, deficiencies in the current optimizer mean that SQL Server is yet to be able to properly exploit all those extra CPUs.

    Adam's contention was that the current optimizer uses essentially the same costing model for many of its core operations as it did back in the days of SQL Server 7, based on assumptions that are no longer valid. One example he gave was a "slow disk" bias that may have been valid back in 1997 but certainly is not on modern disk systems. Essentially, the optimizer assesses the relative cost of serial versus parallel plans based on the assumption that there is no IO cost benefit from parallelization, only CPU. It assumes that a single request will saturate the IO channel, and so a query would not run any faster if we parallelized IO, because the disk system simply wouldn't be able to handle the extra pressure. As such, the optimizer often decides that a serial plan is lower cost, often in cases where a parallel plan would improve performance dramatically.

    It was challenging and thought-provoking stuff, as were his techniques for driving parallelism through query logic based on subsets of rows that define the "grain" of the query. I highly recommend you catch the session if you missed it.

    I'm interested to hear, though, when and how often people feel the force of the optimizer's shortcomings. Barring mistakes, such as stale statistics, how often do you feel the optimizer fails to find the plan you think it should, and what are the most common causes? Is it fighting to induce it toward parallelism? Combating unexpected plans arising from table partitioning? Something altogether more prosaic?

    Cheers, Tony.
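    As a concrete (and hedged) illustration of nudging that serial-versus-parallel cost decision: the only documented per-query knob is capping parallelism with MAXDOP; going the other way, people commonly resort to undocumented means (Adam's talks discuss trace flag 8649 as an unsupported way to force a parallel plan). A sketch with an invented schema:

    ```sql
    -- Illustrative query; table and column names are assumptions.
    SELECT od.ProductID, SUM(od.Quantity) AS TotalQty
    FROM dbo.OrderDetails AS od
    GROUP BY od.ProductID
    OPTION (MAXDOP 8);  -- documented hint: allow at most 8 schedulers for this query
    ```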

  • Microsoft Certifications – how to prep? and why?

    - by Kelly Jones
    I often get asked by my colleagues, "How do you prepare for Microsoft exams?" Well, the answer for me is a little complicated, so I thought I'd write up here what I do.

    The first thing I do is go to Microsoft's website to find the exam that I need to take. If you're looking to get a particular certification, their site lists the exam or exams that you'll need to pass. If you've already taken an exam, you can log onto the MCP website and use their certification planner. This little tool tells you what tests you need, based on the exams you've already passed. It is very helpful with the certifications that consist of multiple tests, and especially ones that have electives.

    Once you've identified the test, you can use Microsoft's website to see the topics that it covers. This is a good outline to follow when you study. I'll keep it handy to refer back to throughout my studying, to make sure that I'm covering all the topics I need to know.

    The next step is probably where I am a little different from others. IF the exam outline covers material that I've already been working with, then I'll skip a lot of the studying and go directly to the practice tests. However, if I'm looking at the outline and wondering "how in the world do you do that?", then it's time to hit the books.

    So, where to find study materials? Try typing the exam number into any search engine. You'll typically find a ton of resources. If you're lucky, you'll find books that others recommend based on their studying and exam experience. As a Sogeti employee, I have access to three really good resources: an internal company list of all of the consultants who have passed particular tests (on our Connex website), Books 24x7, and Transcender practice exams.

    Once my studying is done (either through books or experience), I'll go through the practice exams. I find them really helpful in getting my knowledge lined up with the thinking process that the exam writers use. If I'm relying on my experience, this really helps me identify gaps in my knowledge that I'll need to fill.

    That's about it. If I'm doing OK on the practice exams, then I'll take the real thing. I've found that the practice exams are usually more difficult than the real thing.

    Oh - one other thing I do related to Microsoft exams - I try to take any beta exams that Microsoft makes available that fall into my skill set. Microsoft has started a blog to announce these, and the seats usually fill up really quickly. The blog is at http://blogs.technet.com/betaexams/. You don't get your results instantly, like a normal exam; instead you have to wait for everyone to finish taking the beta exams and for Microsoft to determine which questions they are using and which they are dropping. So, be prepared to wait six to eight weeks for your results.

  • Incorrect lighting results with deferred rendering

    - by Lasse
    I am trying to render a light pass to a texture which I will later apply to the scene, but I seem to calculate the light position wrongly. I am working in view space.

    In the image above, I am outputting the attenuation of a point light which is currently covering the whole screen. The light is at position (0, 10, 0), and I transform it to view space first:

    ```csharp
    Vector4 pos;
    Vector4 tmp = new Vector4(light.Position, 1);
    // Transform light position for shader
    Vector4.Transform(ref tmp, ref Camera.ViewMatrix, out pos);
    shader.SendUniform("LightViewPosition", ref pos);
    ```

    Now to me that does not look as it should. What I think it should look like is that the white area should be at the center of the scene. The camera is at the corner of the scene, and it seems as if the light moves along with the camera.

    Here's the fragment shader code:

    ```glsl
    void main() {
        // default black color
        vec3 color = vec3(0);

        // Pixel coordinates on screen without depth
        vec2 PixelCoordinates = gl_FragCoord.xy / ScreenSize;

        // Get pixel position using depth from texture
        vec4 depthtexel = texture(DepthTexture, PixelCoordinates);
        float depthSample = unpack_depth(depthtexel);

        // Get pixel coordinates in camera space by multiplying the
        // screen-space coordinate by the inverse projection matrix
        vec4 world = (ImP * RemapMatrix * vec4(PixelCoordinates, depthSample, 1.0));

        // Undo the perspective calculations
        vec3 pixelPosition = (world.xyz / world.w) * 3;

        // How far the light should reach from its point of origin
        float lightReach = LightColor.a / 2;

        // Vector between light and pixel
        vec3 lightDir = (LightViewPosition.xyz - pixelPosition);
        float lightDistance = length(lightDir);
        vec3 lightDirN = normalize(lightDir);

        // Discard pixels too far from light source
        //if(lightReach < lightDistance) discard;

        // Get normal from texture
        vec3 normal = normalize((texture(NormalTexture, PixelCoordinates).xyz * 2) - 1);

        // Half vector between the light direction and eye, used for specular component
        vec3 halfVector = normalize(lightDirN + normalize(-pixelPosition));

        // Dot product of normal and light direction
        float NdotL = dot(normal, lightDirN);
        float attenuation = pow(lightReach / lightDistance, LightFalloff);

        // If pixel is lit by the light
        if(NdotL > 0) {
            // I have moved stuff from here to above so I can debug them.
            // Diffuse light color
            color += LightColor.rgb * NdotL * attenuation;
            // Specular light color
            color += LightColor.xyz * pow(max(dot(halfVector, normal), 0.0), 4.0) * attenuation;
        }

        RT0 = vec4(color, 1);
        //RT0 = vec4(pixelPosition, 1);
        //RT0 = vec4(depthSample, depthSample, depthSample, 1);
        //RT0 = vec4(NdotL, NdotL, NdotL, 1);
        RT0 = vec4(attenuation, attenuation, attenuation, 1);
        //RT0 = vec4(lightReach, lightReach, lightReach, 1);
        //RT0 = depthtexel;
        //RT0 = 100 / vec4(lightDistance, lightDistance, lightDistance, 1);
        //RT0 = vec4(lightDirN, 1);
        //RT0 = vec4(halfVector, 1);
        //RT0 = vec4(LightColor.xyz, 1);
        //RT0 = vec4(LightViewPosition.xyz / 100, 1);
        //RT0 = vec4(LightPosition.xyz, 1);
        //RT0 = vec4(normal, 1);
    }
    ```

    What am I doing wrong here?

  • Error X3650 when compiling shader in XNA

    - by Saikai
    I'm attempting to convert the XBDEV.NET Mosaic Shader for use in my XNA project and having trouble. The compiler errors out because of the half globals. At first I tried removing the globals and writing the variables explicitly in the code, but that garbles the output. Next I tried replacing all the half types with float variables, but that still garbles the resulting image. I call the effect file from SpriteBatch.Begin() (see the snippet below). Is there a way to convert this shader to the new pixel shader conventions? Are there any good tutorials on this topic?

    Here is the shader file for reference:

    ```hlsl
    /*****************************************************************************/
    /*
    File: tiles.fx
    Details: Modified version of the NVIDIA Composer FX Demo Program 2004.
             Produces a tiled mosaic effect on the output.
    Requires: Vertex Shader 1.1
              Pixel Shader 2.0
    Modified by: [email protected] (www.xbdev.net)
    */
    /*****************************************************************************/

    float4 ClearColor : DIFFUSE = { 0.0f, 0.0f, 0.0f, 1.0f };
    float ClearDepth = 1.0f;

    /******************************** TWEAKABLES ********************************/
    half NumTiles = 40.0;
    half Threshhold = 0.15;
    half3 EdgeColor = { 0.7f, 0.7f, 0.7f };

    /*****************************************************************************/
    texture SceneMap : RENDERCOLORTARGET
    <
        float2 ViewportRatio = { 1.0f, 1.0f };
        int MIPLEVELS = 1;
        string format = "X8R8G8B8";
        string UIWidget = "None";
    >;

    sampler SceneSampler = sampler_state
    {
        texture = <SceneMap>;
        AddressU = CLAMP;
        AddressV = CLAMP;
        MIPFILTER = NONE;
        MINFILTER = LINEAR;
        MAGFILTER = LINEAR;
    };

    /***************************** DATA STRUCTS *********************************/
    struct vertexInput
    {
        half3 Position : POSITION;
        half3 TexCoord : TEXCOORD0;
    };

    /* data passed from vertex shader to pixel shader */
    struct vertexOutput
    {
        half4 HPosition : POSITION;
        half2 UV : TEXCOORD0;
    };

    /******************************* Vertex shader ******************************/
    vertexOutput VS_Quad(vertexInput IN)
    {
        vertexOutput OUT = (vertexOutput)0;
        OUT.HPosition = half4(IN.Position, 1);
        OUT.UV = IN.TexCoord.xy;
        return OUT;
    }

    /********************************** Pixel shader ****************************/
    half4 tilesPS(vertexOutput IN) : COLOR
    {
        half size = 1.0 / NumTiles;
        half2 Pbase = IN.UV - fmod(IN.UV, size.xx);
        half2 PCenter = Pbase + (size / 2.0).xx;
        half2 st = (IN.UV - Pbase) / size;
        half4 c1 = (half4)0;
        half4 c2 = (half4)0;
        half4 invOff = half4((1 - EdgeColor), 1);
        if (st.x > st.y) { c1 = invOff; }
        half threshholdB = 1.0 - Threshhold;
        if (st.x > threshholdB) { c2 = c1; }
        if (st.y > threshholdB) { c2 = c1; }
        half4 cBottom = c2;
        c1 = (half4)0;
        c2 = (half4)0;
        if (st.x > st.y) { c1 = invOff; }
        if (st.x < Threshhold) { c2 = c1; }
        if (st.y < Threshhold) { c2 = c1; }
        half4 cTop = c2;
        half4 tileColor = tex2D(SceneSampler, PCenter);
        half4 result = tileColor + cTop - cBottom;
        return result;
    }

    /*****************************************************************************/
    technique tiles
    {
        pass p0
        {
            VertexShader = compile vs_1_1 VS_Quad();
            ZEnable = false;
            ZWriteEnable = false;
            CullMode = None;
            PixelShader = compile ps_2_0 tilesPS();
        }
    }
    ```
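    For context, the XNA 4 SpriteBatch overload that takes a custom effect looks like the sketch below (the asset and texture names are my assumptions). Note also that XNA 4 no longer accepts shader model 1.x compile targets, so the vs_1_1 target above likely needs bumping to vs_2_0; treat that as an assumption about the error source, not a confirmed diagnosis:

    ```csharp
    // Hooking a custom effect into SpriteBatch (XNA 4).
    Effect tilesEffect = Content.Load<Effect>("tiles");  // assumed asset name
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                      null, null, null, tilesEffect);
    spriteBatch.Draw(sceneTexture, Vector2.Zero, Color.White);  // assumed texture
    spriteBatch.End();
    ```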

  • OPN Specialized Latest News (15th November)

    - by swalker
    HELPING YOU TO SPECIALIZE

    WebCenter Implementation Specialist Exam Preparation Webcasts: WebCenter Content and WebCenter Portal
    Oracle Partner Network would like to invite you to refresher courses for WebCenter Content and WebCenter Portal, to help partners prepare for the WebCenter Implementation Specialist exams. This is a 3-hour intensive refresher, partner-only training session, providing attendees with an overview of WebCenter Content and WebCenter Portal functions and related topics. After the refresher part you will be able to take the relevant Implementation Specialist exam, depending on your personal focus. NOTE: This is only suitable for experienced WebCenter Content or WebCenter Portal practitioners. Who should attend? Partner consultants who want to become an Oracle WebCenter Content or a WebCenter Portal Certified Implementation Specialist (or both), which will help them differentiate themselves in front of customers and support their companies in becoming Specialized. Webcast details: Click here to read more...

    Specialized Partners Only! New Service to Promote Your Events
    The Partner Event Publisher has just been made available to all specialized partners in EMEA. Partners now have the opportunity to publish their events to the Oracle.com/events site and spread the word on their upcoming live in-person and/or live webcast events. Click here to read more information and watch a short video demo.

    VADs Get Specialized
    Effective November 1, 2011, VADs with a valid Value Added Distributor Agreement will no longer be required to meet the customer reference requirements outlined in the business criteria section in order to become specialized. VADs must continue to meet all other business and competency criteria set forth in the applicable Knowledge Zone prior to specialization approval.

    New Certification: Pillar Axiom 600 Storage System
    Your opportunity to take the Pillar Axiom 600 Storage System Essentials (1Z0-581) exam is available now in beta. Pass the exam so you can become a Pillar Axiom 600 Storage Systems Implementation Specialist! Free vouchers are available for Oracle partners! If you would like to receive a free beta exam voucher, please send your request to [email protected] and include your name, business email address, company, and the exam name: Pillar Axiom 600 Storage System Essentials beta exam.

    New Certification Available: Oracle Utilities Customer Care and Billing
    Oracle Utilities Customer Care and Billing 2 Essentials (1Z0-562) covers a solution designed to help you meet market windows and regulatory deadlines while enjoying a low total cost of ownership and a high return on investment. Take the exam now to become an Oracle Utilities Customer Care and Billing 2 Essentials Implementation Specialist.

    MEASURING YOUR SUCCESS
    We had 1674 Specialized Partners covering 5364 Specializations. Please note that, due to OPN contract renewals, at any given point in time there are valid Specialized Partners and Specializations which are temporarily not captured in the total statistics. An incremental 1961 individuals were accredited as Implementation Specialists, giving an EMEA cumulative total of 9598 Implementation Specialists. 26 ISVs obtained one or more Ready's, for a total of 53 Ready's.

    Don't forget! You can submit your own press releases to Oracle! Every time you achieve specialization we'd like to support you in getting the message out! Press guidelines and a submission link can be found on the OPN Portal here.

  • How to define template directives (from an API perspective)?

    - by Ralph
    Preface: I'm writing a template language (don't bother trying to talk me out of it), and in it there are two kinds of user-extensible nodes: TemplateTags and TemplateDirectives.

    A TemplateTag closely relates to an HTML tag - it might look something like

    ```
    div(class="green") { "content" }
    ```

    and it'll be rendered as

    ```
    <div class="green">content</div>
    ```

    i.e., it takes a bunch of attributes, plus some content, and spits out some HTML.

    TemplateDirectives are a little more complicated. They can be things like for loops, ifs, includes, and other such things. They look a lot like a TemplateTag, but they need to be processed differently. For example,

    ```
    @for($i in $items) {
        div(class="green") { $i }
    }
    ```

    would loop over $items and output the content with the variable $i substituted in each time.

    So... I'm trying to decide on a way to define these directives now.

    Template Tags

    The TemplateTags are pretty easy to write. They look something like this:

    ```csharp
    [TemplateTag]
    static string div(string content = null, object attrs = null)
    {
        return HtmlTag("div", content, attrs);
    }
    ```

    Where content gets the stuff between the curly braces (pre-rendered if there are variables in it and such), and attrs is either a Dictionary<string,object> of attributes or an anonymous type used like a dictionary. It just returns the HTML, which gets plunked into its place. Simple! You can write tags in basically one line.

    Template Directives

    The way I've defined them now looks like this:

    ```csharp
    [TemplateDirective]
    static string @for(string @params, string content)
    {
        var tokens = Regex.Split(@params, @"\sin\s").Select(s => s.Trim()).ToArray();
        string itemName = tokens[0].Substring(1);
        string enumName = tokens[1].Substring(1);
        var enumerable = data[enumName] as IEnumerable;
        var sb = new StringBuilder();
        var template = new Template(content);
        foreach (var item in enumerable)
        {
            var templateVars = new Dictionary<string, object>(data) { { itemName, item } };
            sb.Append(template.Render(templateVars));
        }
        return sb.ToString();
    }
    ```

    (Working example.) Basically, the stuff between the ( and ) is not split into arguments automatically (like the template tags do), and the content isn't pre-rendered either. The reason it isn't pre-rendered is that you might want to add or remove some template variables or something first. In this case, we add the $i variable to the template variables,

    ```csharp
    var templateVars = new Dictionary<string, object>(data) { { itemName, item } };
    ```

    and then render the content manually,

    ```csharp
    sb.Append(template.Render(templateVars));
    ```

    Question

    I'm wondering if this is the best approach to defining custom template directives. I want to make it as easy as possible. What if the user doesn't know how to render templates, or doesn't know that he's supposed to? Maybe I should pass in a Template instance pre-filled with the content instead? Or maybe only let him tamper with the template variables, and then automatically render the content at the end? On the other hand, for things like "if", if the condition fails, then the template wouldn't need to be rendered at all. So there's a lot of flexibility I need to allow here. Thoughts?
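    To make the musing about a pre-filled Template concrete, here is a hypothetical sketch of that alternative signature; EvaluateCondition and the exact parameter set are invented purely for illustration:

    ```csharp
    // Hypothetical alternative: the engine hands the directive a pre-built
    // Template plus the current variables; the directive decides if/how to render.
    [TemplateDirective]
    static string @if(string @params, Template content, IDictionary<string, object> vars)
    {
        bool condition = EvaluateCondition(@params, vars); // assumed helper
        return condition ? content.Render(vars) : string.Empty;
    }
    ```

    This keeps rendering knowledge inside the engine while still letting directives skip rendering entirely, as the "if" case requires.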

  • Building Tag Cloud Declarative ADF Component

    - by Arunkumar Ramamoorthy
    When building a website, there could be a requirement to add a tag cloud to let the users know the popular tags (or terms) used on the site. In this blog, we will build a simple declarative component to be used as a tag cloud on a page.

    To start with, we first create the declarative component that will display the tag cloud. We do that by creating a new custom application from the new gallery. Give a name for the application and the project, and from the new gallery create a new ADF Declarative Component. We need to specify the name for the declarative component, the attributes in it, and so on.

    For displaying the tags as a cloud, we need to pass the content to this component, so we create an attribute to hold the values for the tags. Let us name it "value" and make it of java.lang.String type. After this, to hold the component, we need to create a tag library; this can be done by clicking on the Add Tag Library button. Clicking OK in all the open dialogs creates the declarative component for us.

    Now we need to display the tag cloud based on the value passed to the component. To do that, we assume that the value is a tree binding and has two attributes in it, say "Name" and "Weight". To make a tag cloud, we put the "Name"s together in a loop and set each one's font size based on its "Weight". Attributes added to declarative components can be retrieved by using #{attrs.<attribute_name>} (a sketch of what the internal markup could look like follows below).

    Next, we deploy this project as an ADF Library JAR file, so that it can be distributed to consuming applications. We select ADF Library JAR as the type and create the profile; we get the JAR file after deployment. To test the functionality, we can create a simple Fusion Web Application. To add our custom component to the consuming application, we can create a file system connection pointing to the location of the JAR file and add it from there, or add it through the project properties of the ViewController project.

    Now our custom component has been added to the consuming application. We can test it by creating a VO in the model project with a query like:

    ```sql
    select 'Faces' as Name, 25 as Weight from dual union all
    select 'ADF',   15           from dual union all
    select 'ADFdi', 30           from dual union all
    select 'BC4J',  20           from dual union all
    select 'EJB',   40           from dual union all
    select 'WS',    35           from dual
    ```

    Add this VO to the AppModule, so that it is exposed to the data control. Then create a jspx page and add a tree binding to the VO we created. Our Tag Cloud declarative component is now available in the component palette. It can be inserted from the component palette into our page, with its value property set to the CollectionModel of the tree binding.

    Now that we've created the declarative component and added it to our page successfully, we can run the page to see how it looks. As per the query, the tags are displayed in different fonts, based on their weight.
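    The post doesn't show the component's internal source, but based on the described logic (loop over Name, font size from Weight), it could plausibly look like this sketch; the af:iterator usage and inline styling are my assumptions:

    ```xml
    <!-- Inside the declarative component definition -->
    <af:iterator value="#{attrs.value}" var="tag">
      <af:outputText value="#{tag.Name} "
                     inlineStyle="font-size:#{tag.Weight}px;"/>
    </af:iterator>
    ```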

  • Controlling soft errors and false alarms in SSIS

    - by Jim Giercyk
    If you are like me, you dread the 3 AM wake-up call. I would say that the majority of the pages I get are false alarms. The alerts that require action often require me to open an SSIS package, see where the trouble is, and try to identify the offending data. That can be very time-consuming and can take quite a chunk out of my beauty sleep. For those reasons, I have developed a simple error handling scenario for SSIS which allows me to rest a little easier. Let me first say, this is a high-level discussion; getting into the nuts and bolts of creating each shape is outside the scope of this document, but if you have an average understanding of SSIS, you should have no problem following along.

    In the data flow above you can see that there is a caution triangle. For the purpose of this discussion I am creating a truncation error to demonstrate the process, so this is to be expected. The first thing we need to do is to redirect the error output. Double-clicking on the Query shape presents us with the properties window for the input. Simply set the columns that you want to redirect to Redirect Row in the dropdown box and hit Apply. Without going into a dissertation on error handling, I will just note that you can decide which errors you want to redirect on Error and on Truncation. Therefore, to override this process for a column or condition, simply do not redirect that column or condition.

    The next thing we want to do is to add some information about the error; specifically, the name of the package which encountered the error and which step in the package wrote the record to the error table. REMEMBER: if you redirect the error output, your package will not fail, so you will not know where the error record was created without some additional information. I added 3 columns to my error record: Severity, Package Name and Step Name (the expressions are sketched below). Severity is just a free-form column that you can use to note whether an error is fatal, whether the package is part of a test job and should be ignored, etc. Package Name and Step Name are system variables.

    In my package I have created a truncation situation, where the firstname column is 50 characters in the input, but only 4 characters in the output. Some records will pass without truncation; others will be sent to the error output. However, the package will not fail. We can see that of the 14 input rows, 8 were redirected to the error table.

    This information can be used by another step or another scheduled process, or triggered, to determine whether an alert should be sent. It can also be used as a historical record of the errors that are encountered over time. There are other system variables that might make more sense in your infrastructure, so try different things. Date and time seem like something you would want in your output, for example.

    In summary, we have redirected the error output from an input, added derived columns with information about the errors, and inserted the information and the offending data into an error table. The error table information can be used by another step or process to determine, based on the error information, what level of alert must be sent. This will eliminate false alarms and give you a head start when a genuine error occurs.
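    In a Derived Column transformation, those three columns would be fed by expressions along these lines; System::PackageName and System::TaskName are standard SSIS system variables, while the Severity literal is whatever convention you choose:

    ```
    Severity      "ERROR"                   -- free-form literal, your convention
    PackageName   @[System::PackageName]
    StepName      @[System::TaskName]
    ```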

  • How should I plan the inheritance structure for my game?

    - by Eric Thoma
    I am trying to write a platform shooter in C++ with a really good class structure for robustness. The game itself is secondary; it is the learning process of writing it that is primary. I am implementing an inheritance tree for all of the objects in my game, but I find myself unsure about some decisions.

    One specific issue that is bugging me is this: I have an Actor that is simply defined as anything in the game world. Under Actor is Character. Both of these classes are abstract. Under Character is the Philosopher, who is the main character that the user commands. Also under Character is NPC, which uses an AI module with stock routines for friendly, enemy and (maybe) neutral alignments. So under NPC I want to have three subclasses: FriendlyNPC, EnemyNPC and NeutralNPC. These classes are not abstract, and will often be subclassed in order to make different types of NPCs, like Engineer, Scientist and the most evil Programmer. (A minimal sketch of this hierarchy appears below.)

    Still, if I want to implement a generic NPC named Kevin, it would be nice to be able to put him in without making a new class for him. I could just instantiate a FriendlyNPC and pass some values for the AI machine and for the dialogue; that would be ideal. But what if Kevin is the one benevolent Programmer in the whole world? Now we must make a class for him (but what should it be called?). Now we have a character that should inherit from Programmer (as Kevin has all the same abilities but just uses the friendly AI functions) but also should inherit from FriendlyNPC. Programmer and FriendlyNPC branched away from each other on the inheritance tree, so inheriting from both of them would have conflicts, because some of the same functions have been implemented in different ways on the two of them.

    1) Is there a better way to order these classes to avoid these conflicts? Having three subclasses (Friendly, Enemy and Neutral) for each type of NPC (Engineer, Scientist, and Programmer) would amount to a huge number of classes. I would share specific implementation details, but I am writing the game slowly, piece by piece, and so I haven't implemented past Character yet.

    2) Is there a place where I can learn these programming paradigms? I am already trying to take advantage of some good design patterns, like MVC architecture and Mediator objects. The whole point of this project is to write something in good style. It is difficult to tell what should become a subclass and what should become a state (i.e. a Friendly boolean vs. a Friendly class). Having many states slows down code with if statements and makes classes long and unwieldy. On the other hand, having a class for everything isn't practical.

    3) Are there good rules of thumb or resources to learn more about this?

    4) Finally, where does templating come into this? How should I coordinate templates into my class structure? I have never actually taken advantage of templating, honestly, but I hear that it increases modularity, which means good code.
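    A minimal C++ transcription of the hierarchy described above (class names come from the post; the bodies are placeholders, and hanging Programmer off EnemyNPC is my reading of "the most evil Programmer", not something the post states):

    ```cpp
    class Actor {                         // anything in the game world (abstract)
    public:
        virtual ~Actor() = default;
        virtual void update() = 0;
    };

    class Character : public Actor {};    // abstract

    class Philosopher : public Character {    // the player's character
    public:
        void update() override {}
    };

    class NPC : public Character {        // owns the AI module
    public:
        void update() override {}
    };

    class FriendlyNPC : public NPC {};
    class EnemyNPC    : public NPC {};
    class NeutralNPC  : public NPC {};

    class Programmer : public EnemyNPC {};    // "the most evil Programmer"

    // Kevin, the benevolent Programmer, is exactly where Programmer and
    // FriendlyNPC collide: he would need to inherit from both branches.
    ```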

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high-cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now "xVelocity in-memory analytics engine (VertiPaq)", but using xVelocity or VertiPaq when we talk about Analysis Services has the same meaning. In this post I'll show how to investigate the column sizes of an existing Tabular database, so that you can find the most important columns to optimize.

    A first approach can be looking in the DataDir of Analysis Services for the folder containing the database, then looking for the biggest files in all subfolders; you will find the name of a file that contains the name of the most expensive column. However, this heuristic process is not very efficient. A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in, and execute it) you will see all database objects sorted by used size in descending order:

    ```sql
    SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
    ORDER BY used_size DESC
    ```

    You can look at the first rows in order to understand what the most expensive columns in your tabular model are. The interesting data provided are:

    - TABLE_ID: the name of the object (it can also be a dictionary or an index)
    - COLUMN_ID: the name of the column the object belongs to (you may also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes)
    - RECORDS_COUNT: the number of rows in the column
    - USED_SIZE: the memory used by the object

    By looking at the ratio between USED_SIZE and RECORDS_COUNT, you can understand what you can do in order to optimize your tabular model. Your options are:

    - Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model.
    - Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (e.g. a temperature), round the number to the nearest decimal that is relevant to you.
    - Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article (a sketch follows below).
    - Sort the table by that column. When you read the data source, you might consider sorting the data by this column, so that the compression will be more efficient. However, this technique works better on columns that don't have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower-density columns (those with a small number of distinct values) and going to the higher-density columns (those with high cardinality) is the technique that provides the best compression ratio.

    After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq, and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session "VertiPaq Under the Hood" on March 21 at 08:00 GMT.
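    As a hedged illustration of the "split the column" option: at the data source you might divide a high-cardinality integer into a low-cardinality part and a bounded remainder. Table and column names here are invented:

    ```sql
    -- Source query sketch: split Amount into two lower-cardinality columns.
    SELECT
        Amount / 100 AS AmountHigh,   -- few distinct values, compresses well
        Amount % 100 AS AmountLow     -- bounded remainder (0-99)
    FROM Fact.Sales;
    -- A calculated column can reconstruct it: AmountHigh * 100 + AmountLow
    ```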

  • Starting over and new to Ubuntu

    - by 2funnyyone
    We have been having repeated problems with our internet service, and with Windows XP & SP3 (users and permissions) I see no need for them. I started with computers long before Windows. Ever since SP3 came out in 2009 I have had nothing but problems. I have lost so many computers to viruses and trojans; we just stack them up. We are with Qwest/CenturyLink, which is using advertising servers, which I think is causing the problem. All the computers are networked together, which is not how I set them up. I believe CenturyLink is networking them through the assignment of a domain for our home. This has caused all the computers to crash, twice. This is getting expensive. We tried buying new hard drives, but they reinfect within hours of connecting to the internet. I also believe the modem, router and all computers are infected. I put ComboFix on this one, and that is the only reason we are still online with this laptop. I am afraid to install new equipment because my partner and I are on SSDI and this costs a lot. I go to school at UOP and have had to run off a flash drive and reboot this laptop to recovery every other day or so this past month.

    New plan: we are getting ready to install new equipment but are afraid of reinfecting again. We need help to install the new equipment. The plan is to use our current internet service from Qwest (now CenturyLink).

    The list of new equipment, in order:

    - CenturyLink wireless modem: ZyXEL PK5000Z with 4 direct-connect Ethernet ports
    - Dell Optiplex 210L (used auction purchase), 2 GB RAM, 80 GB hard drive, Ubuntu 11.10 operating system
    - Wireless D-Link router WBR-1310 with 4 direct-connect Ethernet ports
    - Purchased Dell OEM disk for repairing or reinstalling the Windows XP Professional operating system (2 roommates as well); all infected computers are Dell desktops or laptops with XP Pro
    - Also purchasing Ubuntu 12.04 for 3 computers. We like the way it runs but are still learning it.

    Questions:

    1) How do we fdisk the infected computers without infecting the new system? We have DOS disks, but none of the machines have a floppy disk drive. We do have a new floppy disk drive and USB adapter we purchased from Amazon.

    2) We are thinking of Avast internet security, because of the boot scan. We want all software loaded before reconnecting. We can manually load our internet provider information. We purchased StopZilla ($100 for 5 computers), but are not sure that is what we need. We also need to know how to set up port security and which services we will need. We are really lost at this part, and want to be safe when we go back on the internet.

    3) We want to connect the reloaded, fdisked systems to the router as a public connection with no sharing. We do not want to network all the computers.

    4) We want parental/ownership control from the Ubuntu system for the internet connection (children and friends). Do we restrict at the modem and/or the router?

    Any help would be a blessing. I do not want to go it alone on this anymore.

    Read the article

  • Passed: Exam 70-480: Programming in HTML5 with JavaScript and CSS3

    First off: Mission accomplished. And it was fun! Using the resources listed in my previous article about Learning Content, I'd like to thank Microsoft Technical Evangelists Jeremy Foster and Michael Palermo for their excellent jump start videos on Channel 9, and the various authors at Pluralsight. Local Prometric testing centre: Back in November I chose a local testing centre which was the easiest to access from my office, despite the horrible traffic you might experience here on the island. Actually, it was not the closest one, but due to their website, their awards as a Microsoft Learning Center, and my general curiosity about the premises, I gave FRCI my preference. Boy, how I would regret this decision this morning... The official Prometric exam guide asks any attendee to show up at least 30 minutes prior to the scheduled time of the test. Well, this should have been the easy part, but unfortunately, due to heavier traffic than usual, I arrived only 20 minutes early. Not too bad, but there was more to come. The building, called 'le Hub', is nicely renovated and provides the right environment for an IT group of companies like FRCI; I think they currently have 5 independent IT departments over there. Even the handling at the reception was straightforward and welcoming, and put me at ease. But then... the first shock: "We don't have any exam registration for today." - Hm, that's nice... Here's my mail confirmation from Prometric. First attack successfully handled, and the lady went off again to check their records. The next shock: a couple of minutes later, another guy tried to explain to me that "the staff of the testing centre is already on vacation and the centre is officially closed." - Are you kidding me? Here's the official confirmation by Prometric, and I don't find it funny that I took a day off only to hear this kind of blubbering nonsense. I thought that I would be on the safe side choosing a company with a good reputation here on the island. Another 40 (!) minutes later, they finally came back to the waiting area with a pre-filled form about the test appointment. And finally, after an hour of waiting, discussing, restarting the testing PC, and lots of talk, I was allowed to sit down and take the exam. Exam details: Well, you know the rules. Signing an NDA doesn't allow me to provide you with any details about the questions or topics that were covered. Please check out the official exam description, and you're on the right way. Sorry, guys... ;-) The result: "Congratulations! You have passed this Microsoft Certification exam." - In general, I have to admit that the parts on HTML5 and CSS3 were the easiest, and that I have to get myself a little more familiar with certain JavaScript features like class definitions, inheritance and data security. Anyway, exam passed - who cares about the details? Next goal: Of course, the journey to Microsoft Certifications continues, and my next goals are to pass exam 70-481 - Essentials of Developing Windows Store Apps using HTML5 and JavaScript - and exam 70-482 - Advanced Windows Store App Development using HTML5 and JavaScript. This would allow me to achieve the certification MCSD: Windows Store Apps using HTML5. I guess during 2013 I'll be busy with various learning and teaching lessons.

    Read the article

  • Projective texture and deferred lighting

    - by Vodácek
    In my previous question, I asked whether it is possible to do projective texturing with deferred lighting. Now (more than half a year later) I have a problem with my implementation of the same thing. I am trying to apply this technique in the light pass (my projector doesn't affect the albedo). I have this projector view and projection matrix: Matrix projection = Matrix.CreateOrthographicOffCenter(-halfWidth * Scale, halfWidth * Scale, -halfHeight * Scale, halfHeight * Scale, 1, 100000); Matrix view = Matrix.CreateLookAt(Position, Target, Vector3.Up); where halfWidth and halfHeight are half of the texture's width and height, Position is the projector's position, and Target is the projector's target. This seems to be OK. I am drawing a full-screen quad with this shader: float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; texture2D ProjectorTexture; float4x4 ProjectorViewProjection; sampler2D depthSampler = sampler_state { texture = <DepthTexture>; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = <NormalTexture>; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D projectorSampler = sampler_state { texture = <ProjectorTexture>; AddressU = Clamp; AddressV = Clamp; }; float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = input.Position; output.PositionCopy = output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); //return float4(depth.r,0,0,1); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; //compute projection float3 projection = tex2D(projectorSampler, postProjToScreen(mul(position, ProjectorViewProjection)) + halfPixel()); return float4(projection,1); } In the first part of the pixel shader, the position is recovered from the G-buffer (I am using this code in other shaders without any problem) and is then transformed into the projector's view-projection space. The problem is that the projection doesn't appear. Here is an image of my situation: the green lines are the rendered projector frustum. Where is my mistake hidden? I am using XNA 4. Thanks for any advice, and sorry for my English. EDIT: The shader above is working, but the projection was too small. When I changed the Scale property to a large value (e.g. 100), the projection appeared. But when the camera moves toward the projection, the projection expands, as can be seen in this YouTube video.

    Read the article

  • 3 Incredibly Useful Projects to jump-start your Kinect Development.

    - by mbcrump
    I’ve been playing with the Kinect SDK Beta for the past few days and have noticed a few projects on CodePlex worth checking out. I decided to blog about them to help spread awareness. If you want to learn more about the Kinect SDK, then check out my “Busy Developer’s Guide to the Kinect SDK Beta”. Let’s get started: KinectContrib is a set of VS2010 templates that will help you get started building a Kinect project very quickly. Once you have it installed, you will have the option to select one of the following templates: KinectDepth KinectSkeleton KinectVideo Please note that KinectContrib requires the Kinect for Windows SDK beta to be installed. Kinect templates after installing the template pack. The reference to Microsoft.Research.Kinect is added automatically. Here is a sample of the code for MainWindow.xaml in the “Video” template: <Window x:Class="KinectVideoApplication1.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="MainWindow" Height="480" Width="640"> <Grid> <Image Name="videoImage"/> </Grid> </Window> and MainWindow.xaml.cs using System; using System.Windows; using System.Windows.Media; using System.Windows.Media.Imaging; using Microsoft.Research.Kinect.Nui; namespace KinectVideoApplication1 { public partial class MainWindow : Window { //Instantiate the Kinect runtime. Required to initialize the device. //IMPORTANT NOTE: You can pass the device ID here, in case more than one Kinect device is connected. Runtime runtime = new Runtime(); public MainWindow() { InitializeComponent(); //Runtime initialization is handled when the window is opened. When the window //is closed, the runtime MUST be uninitialized. this.Loaded += new RoutedEventHandler(MainWindow_Loaded); this.Unloaded += new RoutedEventHandler(MainWindow_Unloaded); //Handle the content obtained from the video camera, once received. runtime.VideoFrameReady += new EventHandler<Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs>(runtime_VideoFrameReady); } void MainWindow_Unloaded(object sender, RoutedEventArgs e) { runtime.Uninitialize(); } void MainWindow_Loaded(object sender, RoutedEventArgs e) { //Since only a color video stream is needed, RuntimeOptions.UseColor is used. runtime.Initialize(Microsoft.Research.Kinect.Nui.RuntimeOptions.UseColor); //You can adjust the resolution here. runtime.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color); } void runtime_VideoFrameReady(object sender, Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs e) { PlanarImage image = e.ImageFrame.Image; BitmapSource source = BitmapSource.Create(image.Width, image.Height, 96, 96, PixelFormats.Bgr32, null, image.Bits, image.Width * image.BytesPerPixel); videoImage.Source = source; } } } You will find this template pack very handy, especially if you are new to Kinect development. Next up is the Coding4Fun Kinect Toolkit, which contains extension methods and a WPF control to help you develop with the Kinect SDK. After downloading the package, simply add a reference to the .dll for either the WPF or WinForms version. You will then have access to several extension methods that can help you, for example, save an image (see the sketch below). For a full list of extension methods and properties, please visit the site at http://c4fkinect.codeplex.com/. Kinductor – This is a great application for just learning how to use the Kinect SDK. The project uses MVVM Light and is a great start for anyone looking at how to structure their first Kinect application.
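
    Here is a rough sketch of what such an extension-method call can look like. This is my own illustration, not the toolkit's official sample: it assumes the KinectVideo template above, a project reference to Coding4Fun.Kinect.Wpf, and that the toolkit's ToBitmapSource() and Save() extensions behave as in its published examples; the file name is a placeholder.

    // Hedged sketch: save the current video frame to disk using the
    // Coding4Fun extension methods (requires: using Coding4Fun.Kinect.Wpf;).
    void runtime_VideoFrameReady(object sender, Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs e)
    {
        // Convert the raw planar image into a WPF BitmapSource...
        BitmapSource frame = e.ImageFrame.ToBitmapSource();
        // ...and write it straight to disk as a JPEG.
        frame.Save("snapshot.jpg", ImageFormat.Jpeg);
    }
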
Conclusion: Things are already getting easier for those working with the Kinect SDK. I imagine that after a few more months we will see the SDK go out of beta and allow commercial applications to run using it. I am very excited and hope that you continue reading my blog for more Kinect, WPF and Silverlight news.  Subscribe to my feed

    Read the article

  • How to change LOD in geometry?

    - by ChaosDev
    I'm looking for a simple LOD algorithm to change the geometry's vertices and decrease frame time. I've created an octree, but now I want a vertex-modification algorithm for models or terrain, not to increase detail (I'll look at tessellation later) but to decrease it. I want something like this Questions: Can the same algorithm be applied correctly to both models and terrain? Do the indices need to be modified? Must I use the octree, or is a simple distance check between the camera and the object enough for the desired effect? Is a new value of indexCount needed for the DrawIndexed function? Code: //m_LOD == 10 in the beginning //m_RawVerts - array of 3d vectors filled with values from the vertex buffer. void DecreaseLOD() { m_LOD--; if(m_LOD<1)m_LOD=1; RebuildGeometry(); } void IncreaseLOD() { m_LOD++; if(m_LOD>10)m_LOD=10; RebuildGeometry(); } void RebuildGeometry() { void* vertexRawData = new byte[m_VertexBufferSize]; void* indexRawData = new DWORD[m_IndexCount]; auto context = mp_D3D->mp_Context; D3D11_MAPPED_SUBRESOURCE data; ZeroMemory(&data,sizeof(D3D11_MAPPED_SUBRESOURCE)); context->Map(mp_VertexBuffer->mp_buffer,0,D3D11_MAP_READ,0,&data); memcpy(vertexRawData,data.pData,m_VertexBufferSize); context->Unmap(mp_VertexBuffer->mp_buffer,0); context->Map(mp_IndexBuffer->mp_buffer,0,D3D11_MAP_READ,0,&data); memcpy(indexRawData,data.pData,m_IndexBufferSize); context->Unmap(mp_IndexBuffer->mp_buffer,0); DWORD* dwI = (DWORD*)indexRawData; int sz = (m_VertexStride/sizeof(float));//size of vertex element //algorithm must be here. std::vector<Vector3d> vertices; int i = 0; for(int j = 0; j < m_VertexCount; j++) { float x1 = (((float*)vertexRawData)[0+i]); float y1 = (((float*)vertexRawData)[1+i]); float z1 = (((float*)vertexRawData)[2+i]); Vector3d lv = Vector3d(x1,y1,z1); //my useless attempts if(j+m_LOD+1<m_RawVerts.size()) { float v1 = VECTORHELPER::Distance(m_RawVerts[dwI[j]],m_RawVerts[dwI[j+m_LOD]]); float v2 = VECTORHELPER::Distance(m_RawVerts[dwI[j]],m_RawVerts[dwI[j+m_LOD+1]]); if(v1>v2) lv = m_RawVerts[dwI[j+1]]; else if(v2<v1) lv = m_RawVerts[dwI[j+2]]; } (((float*)vertexRawData)[0+i]) = lv.x; (((float*)vertexRawData)[1+i]) = lv.y; (((float*)vertexRawData)[2+i]) = lv.z; i+=sz;//pass others vertex format values without change } for(int j = 0; j < m_IndexCount; j++) { //indices ? } //set vertexes to device UpdateVertexes(vertexRawData,mp_VertexBuffer->getSize()); delete[] vertexRawData; delete[] indexRawData; }

    Read the article

  • Omni-directional light shadow mapping with cubemaps in WebGL

    - by Winged
    First of all, I must say that I have read a lot of posts describing the usage of cubemaps, but I'm still confused about how to use them. My goal is to achieve simple omni-directional (point) light shading in my WebGL application. I know that there are many more techniques (like using Two-Hemispheres or Camera Space Shadow Mapping) which are way more efficient, but for educational purposes cubemaps are my primary goal. So far, I have adapted a simple shadow mapping which works with spotlights (with one exception: I don't know how to cut off the glitchy part beyond the reach of a single shadow map texture): glitchy shadow mapping So for now, this is how I understand the usage of cubemaps in shadow mapping: Set up a framebuffer (in the case of cubemaps, 6 framebuffers; 6 instead of 1 because every usage of framebufferTexture2D slows down execution, which is nicely described here) and a cubemap texture. Also, depth components are not well supported in WebGL, so I need to render the depth to RGBA first. this.texture = gl.createTexture(); gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.texture); gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR); gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR); for (var face = 0; face < 6; face++) gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA, this.size, this.size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null); gl.bindTexture(gl.TEXTURE_CUBE_MAP, null); this.framebuffer = []; for (face = 0; face < 6; face++) { this.framebuffer[face] = gl.createFramebuffer(); gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer[face]); gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, this.texture, 0); gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, this.depthbuffer); var e = gl.checkFramebufferStatus(gl.FRAMEBUFFER); // Check for errors if (e !== gl.FRAMEBUFFER_COMPLETE) throw "Cubemap framebuffer object is incomplete: " + e.toString(); } Set up the light and the camera (I'm not sure whether I should store all 6 view matrices and send them to the shaders later, or whether there is a way to do it with just one view matrix). Render the scene 6 times from the light's position, each time in a different direction (X, -X, Y, -Y, Z, -Z) for (var face = 0; face < 6; face++) { gl.bindFramebuffer(gl.FRAMEBUFFER, shadow.buffer.framebuffer[face]); gl.viewport(0, 0, shadow.buffer.size, shadow.buffer.size); gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); camera.lookAt( light.position.add( cubeMapDirections[face] ) ); scene.draw(shadow.program); } In a second pass, calculate the projection of the current vertex using the light's projection and view matrices. I don't know whether I should calculate 6 of them, one for each face of the cubemap. ScaleMatrix pushes the projected vertex into the 0.0 - 1.0 region. vDepthPosition = ScaleMatrix * uPMatrixFromLight * uVMatrixFromLight * vWorldVertex; In the fragment shader, calculate the distance between the current vertex and the light position and check whether it's deeper than the depth information read from the earlier rendered shadow map. I know how to do this with a 2D texture, but I have no idea how I should use the cubemap texture here. I have read that texture lookups into cubemaps are performed with a direction vector instead of a UV coordinate. What vector should I use? Just a normalized vector pointing to the current vertex?
For now, my code for this part looks like this (not working yet): float shadow = 1.0; vec3 depth = vDepthPosition.xyz / vDepthPosition.w; depth.z = length(vWorldVertex.xyz - uLightPosition) * linearDepthConstant; float shadowDepth = unpack(textureCube(uDepthMapSampler, vWorldVertex.xyz)); if (depth.z > shadowDepth) shadow = 0.5; Could you give me some hints or examples (preferably in WebGL code) of how I should build it?

    Read the article

  • PowerShell and SMO – be careful how you iterate

    - by Fatherjack
    I’ve yet to have a totally smooth experience with PowerShell, and it was late on Friday when I crashed into this problem. I haven’t investigated whether this is a generally well understood circumstance; if it is, then I apologise for repeating everything. Scenario: I wanted to scan a number of servers for many properties, including existing logins, and to identify which accounts are bestowed with sysadmin privileges. A great task to pass to PowerShell, so with a heavy heart I started up the PowerShell ISE and started typing. Scripting doesn’t come easily to me, but I follow the logic of SMO and the properties and methods available in the language, so it seemed something I should be able to master. Version #1 of my script, and the results it returns when executed against my home laptop server: These results looked good, and for a long time I was concerned with other parts of the script, for all intents and purposes quite happy that this was an accurate assessment of the server. Let’s just review my logic for each step of the code at the top. Lines 1 to 7 just set up our variables and write out the header message. Line 8 is our first loop, to go through each login on the server. Line 10 is an inner loop that will assess each role name that each login has been assigned. Line 11 is a test to see if each role has the name ‘sysadmin’. Line 13 writes out the login name with a bright format, as it is a sysadmin login. Line 17 writes out the login name with no formatting. It is quite possible that someone with more PowerShell experience than me will be shouting at their screen, pointing at the error I made, but to me this made total sense. Until I altered the code. I altered lines 6 and 7 of the code above to be: $c = $Svr.Logins.Count write-host “There are $c Logins on the server” This changed my output to look like this: This started alarm bells ringing – there are clearly not 13 logins listed. So, let’s see where things are going wrong; I edited the script so it looks like this (I’ve highlighted the changes). Running this code shows me these results: Our $n variable should count up by one for each login returned, and we are clearly missing some logins. I referenced this list back to Management Studio for my server and saw the logins as below, where there are clearly 13 logins. We see a login called Annette in SSMS but not in the script results, so I opened it up and looked at its properties, and its server roles in particular. The account has only public access to the server. Inspection of the other logins that the PowerShell script misses shows that they too are only members of the public role. Right now I can’t work out whether there is a good reason for this and whether it should be expected behaviour or not. Please spend a few minutes to leave a comment if you have an opinion or theory about this. How to get the full list of logins: clearly I needed to get a full list of the logins, so I set about reviewing my code to see if there was a better way to iterate through the roles for each login. This is the code that I came up with, and I think it is doing everything that I need it to. It gives me the expected results, like this: So it seems that the ListMembers() method is the troublemaker in my first versions of the code. I would have expected ListMembers to return logins that are only members of the public role; certainly TechNet makes no reference to them being left out in its Login.ListMembers details. Suffice to say, it’s a lesson learned and I will approach using it with caution in future circumstances (a sketch of a per-login membership test follows below).
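
    Since SMO is the same .NET library underneath, the equivalent check can also be written in C#. The following is only a rough sketch of the per-login approach, not the script shown above: it relies on SMO's Login.IsMember, which asks each login directly about its role membership, and the instance name is a placeholder.

    // Hedged sketch: enumerate every login and test sysadmin membership per login,
    // so logins that belong only to the public role are still listed.
    using System;
    using Microsoft.SqlServer.Management.Smo;

    class SysadminAudit
    {
        static void Main()
        {
            var server = new Server("localhost"); // placeholder instance name
            foreach (Login login in server.Logins)
            {
                bool isSysadmin = login.IsMember("sysadmin");
                Console.WriteLine("{0}{1}", login.Name, isSysadmin ? "  <-- sysadmin" : "");
            }
        }
    }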

    Read the article
