Search Results

Search found 4931 results on 198 pages for 'burnt hand'.


  • Too Clever for My Own Good

    - by AjarnMark
    Yesterday I caught myself being a little too clever for my own good with some ASP.NET code.  It seems that I have forgotten some of my good old classic HTML and JavaScript skills, and become too dependent on the .NET Framework and WebControls to do the work for me.  Here’s the scenario… In order to improve the User Interface and better communicate to the user when something is happening that they need to wait for, we have started to modify some of our larger (slower) pages to display messages like Processing… or Reloading… while they are cycling through a postback.  (Yes, I understand this could be improved by using AJAX / Callbacks and so on, but even then, you need to let your user know that they need to wait for that section to be re-rendered, so for the moment these pages will continue to use good ol’ Postbacks.)  It’s a very simple trick, really.  All I want to do is when some control triggers a postback, first run a little client-side JavaScript to hide the main contents of the page (such as a GridView) and display the appropriate message.  This lets the user know, “Hey, we’re doing something, don’t click another link or scroll and try to take action right now.” The first places I hooked this up were easy.  Most common cause of a postback:  Buttons.  And when you’re writing the markup or declarative code for an ASP:Button control, there is the handy OnClientClick property which is designed for just this purpose…to run client-side JavaScript before the postback occurs.  This is distinguished from the OnClick property which tells the control what Server-side code to run.  Great!  Done!  Easy! But then there are other controls like DropDownLists and CheckBoxes that we use on our pages with the AutoPostback=True setting which cause postbacks.  And these don’t have OnClientClick or OnClientSelectedIndexChanged events.  So I started getting creative, using an ASP:CustomValidator control in conjunction with setting the CausesValidation and ValidationGroup settings on these controls, which basically caused the action on the control to fire the Custom Validator, which was defined with a Client Side validation function which then did the hide content/show message code (and return a meaningless IsValid setting).  This also caused me to define a different ValidationGroup setting for my real data entry validator controls so that I could control them separately and only have them fire when I really wanted validation, and not just my show/hide trick. For a little while I was pretty proud of myself for coming up with this clever approach to get around what I considered to be a serious oversight on the DropDownList and CheckBox controls declarative syntax.  Then, in the midst of my smugness, just as I was about to commit my changes to the source code repository, it dawned on me that there is a much simpler and much more appropriate way to accomplish this.  All that I really needed to do was to put in my server-side code (I used the Page_Init section) a call to MyControl.Attributes.Add(“onClick”, “myJavaScriptFunctionName()”) for the checkboxes, and for the DropDownLists (which become select tags) use “onChange” instead of “onClick”.  This is exactly the type of thing that the Attributes collection is there for…so you can add attributes to be rendered with the control that you would have otherwise stuck right into the HTML markup if you had been doing this by hand in the first place. Ugh!  
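    As a rough sketch of that Attributes.Add approach (the control names and the showWaitMessage JavaScript function are made up for illustration, not taken from the original code):

        protected void Page_Init(object sender, EventArgs e)
        {
            // Buttons already expose OnClientClick declaratively; for AutoPostBack
            // controls we attach the client-side script through the Attributes collection.
            chkActiveOnly.Attributes.Add("onclick", "showWaitMessage();");

            // A DropDownList renders as a <select> tag, so hook "onchange" instead of "onclick".
            ddlDepartment.Attributes.Add("onchange", "showWaitMessage();");
        }

    Here showWaitMessage() stands in for whatever client-side function hides the GridView and displays the Processing… message before the postback fires.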
A few hours wasted on clever tricks that I ended up completely removing, but I did learn a lot more about custom validators and validation groups in the process.  And I got a good reminder that all that stuff (HTML, JavaScript, and CSS) I learned back when I wrote classic ASP pages is still valuable today.  Oh, and one more thing…don’t get lulled into too much reliance on the whiz-bang tool to do it for you.  After all, WebControls are just another layer of abstraction, and sometimes you need to dig down through the layers and get a little closer to the native language.

    Read the article

  • C# 5 Async, Part 2: Asynchrony Today

    - by Reed
    The .NET Framework has always supported asynchronous operations. However, different mechanisms for supporting them exist throughout the framework. While there are at least three separate asynchronous patterns used throughout the framework, only the latest is directly usable with the new Visual Studio Async CTP. Before delving into details on the new features, I will talk about existing asynchronous code, and demonstrate how to adapt it for use with the new pattern. The first asynchronous pattern used in the .NET framework was the Asynchronous Programming Model (APM). This pattern was based around callbacks. A method is used to start the operation. It is typically named BeginSomeOperation. This method is passed a callback defined as an AsyncCallback, and returns an object that implements IAsyncResult. Later, the IAsyncResult is used in a call to a method named EndSomeOperation, which blocks until completion and returns the value normally returned directly from the synchronous version of the operation. Often, EndSomeOperation would be called from the callback function passed in, which allows you to write code that never blocks. While this pattern works perfectly to prevent blocking, it can make for quite confusing code, and be difficult to implement. For example, the sample code provided for FileStream’s BeginRead/EndRead methods is not simple to understand. In addition, implementing your own asynchronous methods requires creating an entire class just to implement IAsyncResult. Given the complexity of the APM, other options have been introduced in later versions of the framework. The next major pattern introduced was the Event-based Asynchronous Pattern (EAP). This provides a simpler pattern for asynchronous operations. It works by providing a method typically named SomeOperationAsync, which signals its completion via an event typically named SomeOperationCompleted. The EAP provides a simpler model for asynchronous programming. It is much easier to understand and use, and far simpler to implement. Instead of requiring a custom class and callbacks, the standard event mechanism in C# is used directly. For example, the WebClient class uses this extensively. A method is used, such as DownloadDataAsync, and the results are returned via the DownloadDataCompleted event. While the EAP is far simpler to understand and use than the APM, it is still not ideal. By separating your code into method calls and event handlers, the logic of your program gets more complex. It also typically loses the ability to block until the result is received, which is often useful. Blocking often requires writing the code to block by hand, which is error prone and adds complexity. As a result, .NET 4 introduced a third major pattern for asynchronous programming. The Task<T> class introduced a new, simpler concept for asynchrony. Task and Task<T> effectively represent an operation that will complete at some point in the future. This is a perfect model for thinking about asynchronous code, and is the preferred model for all new code going forward. Task and Task<T> provide all of the advantages of both the APM and the EAP models – you have the ability to block on results (via Task.Wait() or Task<T>.Result), and you can stay completely asynchronous via the use of Task Continuations. In addition, the Task class provides a new model for task composition and error and cancellation handling. This is a far superior option to the previous asynchronous patterns.
The Visual Studio Async CTP extends the Task based asynchronous model, allowing it to be used in a much simpler manner.  However, it requires the use of Task and Task<T> for all operations.
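    To give a feel for how the older patterns map onto Task, here is a minimal sketch (assuming an open FileStream named stream, plus using System.Net and System.Threading.Tasks); it illustrates the general technique rather than reproducing code from the article:

        // APM -> Task: wrap a Begin/End pair with TaskFactory.FromAsync.
        byte[] buffer = new byte[4096];
        Task<int> readTask = Task<int>.Factory.FromAsync(
            stream.BeginRead, stream.EndRead, buffer, 0, buffer.Length, null);

        // EAP -> Task: wrap an Async/Completed pair with TaskCompletionSource.
        var client = new WebClient();
        var tcs = new TaskCompletionSource<byte[]>();
        client.DownloadDataCompleted += (s, e) =>
        {
            if (e.Error != null) tcs.SetException(e.Error);
            else if (e.Cancelled) tcs.SetCanceled();
            else tcs.SetResult(e.Result);
        };
        client.DownloadDataAsync(new Uri("http://example.com/data"));
        Task<byte[]> downloadTask = tcs.Task;

    Once everything is surfaced as a Task, the same blocking, continuation and composition options apply regardless of which legacy pattern produced it, which is what makes Task the natural bridge to the Async CTP.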

    Read the article

  • Conference networking for the socially awkward

    - by Melanie Townsend
    Do you approach a room full of strangers with excitement at all the new people you’re going to chat to over coffee and a muffin as you swap tales of how you convinced your manager to give you the day “off”? Or, do you find rooms full of strangers intimidating and begin by scouting out a place you can stand quietly and not be in someone’s way until the next session begins? If you’re on the train to extrovert city, that’s great, well done, move along. If, on the other hand, a room full of strangers who all seem to inexplicably know each other already is more challenge than opportunity, then making those connections with other professionals can be more difficult. So, here’s some advice, some gleaned from other things I’ve read online when trying to overcome my own discomfort in large groups (hopefully minus the infuriating condescension), others are just things I’ve found helpful over the years. Start small Smaller groups are less intimidating, and, now that you’ve taken the plunge to show up, it’s harder to remain inconspicuous. I find it’s easier to speak to new people once the option NOT to has been taken away. You’re there now, smile through the awkward and you’ll be forever grateful when the three people you’ve met and gotten to know here are also at that gigantic conference later on (ideally, introducing you to other people). Smile, or at the very least, stop scowling You probably don’t even know you’re doing it. If your resting face doesn’t come across as manically happy, tinge that with some social anxiety and you become one great ball of unapproachable. Normally, I wouldn’t suggest this as a problem that needs fixing, I have personally honed this face to use while travelling alone all the time. However, if you are indeed hoping to meet some useful people and get the most out of this conference, you may need to remind yourself to smile. Prepare some ice breakers This is going to sound stupid, like “no one does this right?” stupid, but, just, trust me a minute. It’s okay to prepare. You don’t need to write word-for-word questions to ask people and practice them in a mirror – that would be strange. I’m suggesting to just have an arsenal of questions to ask people if you get stuck, what session has been your favorite, which ones are you most looking forward to, have you heard X presenter speak before, what did you think of them? Even just thinking about these things in advance can help, and, as a bonus, while the other person is answering it gives you a moment to tamp down that panic, I mean breathe, I mean get to know them. You’re not alone (in the least creepy way possible) See that person in the corner clutching their phone with a mild deer in the headlights look?  That is potentially your new conference buddy. Starting with something along the lines of: I don’t know about you, the sessions here are great but I find the crowds a little tough to deal with. Mind if I park here for a second? is a decent opener. Just walking around and looking at exhibitors (if applicable) is fine, but it’s a little too easy to wander about and not actually speak to anyone if that’s all you’re doing. If joining a group of people talking is too much to start with, one-on-one can be easier. Have goals Are there people in particular you wanted to speak to? Did you have a personal goal of speaking to at least “x” new people? Are you trying to get a contact in a specific company because you want to work with them on something? 
Does the business have vague goals as well that you may or may not be judged on later? Making specific goals you can accomplish lets you know whether you’ve actually succeeded in your “networking pursuits” or what you need to work on more for next time. Everyone’s got their own coping technique. Some people are able to remind themselves that “humans are fundamentally social creatures” and somehow that helps them, others drink which is not really something I recommend for professional conferences but to each their own, and some focus on the fact that networking can play a big role in their career path. Just do what works for you, and if there’re any tricks you’ve found helpful over the years, please share em.

    Read the article

  • How to run RCU from the command line

    - by Kevin Smith
    When I was trying to figure out how to run RCU on 64-bit Linux I found this post. It shows how to run RCU from the command line. It didn't actually work for me, so you can see my post on how to run RCU on 64-bit Linux. But, seeing how to run RCU from the command line got me started thinking about running RCU from the command line to create the schema for WebCenter Content. That post got me part of the way there since it shows how to run RCU silently from the command line, but to do this you need to know the name of the RCU component for WebCenter Content. I poked around in the RCU files and found the component name for WCC is CONTENTSERVER11. There is a contentserver11 directory in rcuHome/rcu/integration and when you look at the contentserver11.xml file you will see <RepositoryConfig COMP_ID="CONTENTSERVER11"> With the component name for WCC in hand I was able to use this command line to run RCU and create the schema for WCC.

        .../rcuHome/bin/rcu -silent -createRepository -databaseType ORACLE -connectString localhost:1521:orcl1 -dbUser sys -dbRole sysdba -schemaPrefix TEST -component CONTENTSERVER11 -f <rcu_passwords.txt

    To make the silent part work and not have it prompt you for the passwords needed (sys password and password for each schema) you use the -f option and specify a file containing the passwords, one per line, in the order the components are listed on the -component argument. Here is the output from rcu when I ran the above command.

        Processing command line ....
        Repository Creation Utility - Checking Prerequisites
        Checking Global Prerequisites
        Repository Creation Utility - Checking Prerequisites
        Checking Component Prerequisites
        Repository Creation Utility - Creating Tablespaces
        Validating and Creating Tablespaces
        Repository Creation Utility - Create
        Repository Create in progress.
        Percent Complete: 0
        ...
        Percent Complete: 100
        Repository Creation Utility: Create - Completion Summary
        Database details:
        Host Name              : localhost
        Port                   : 1521
        Service Name           : ORCL1
        Connected As           : sys
        Prefix for (prefixable) Schema Owners : TEST
        RCU Logfile            : /u01/app/oracle/logdir.2012-09-26_07-53/rcu.log
        Component schemas created:
        Component                            Status  Logfile
        Oracle Content Server 11g - Complete Success /u01/app/oracle/logdir.2012-09-26_07-53/contentserver11.log
        Repository Creation Utility - Create : Operation Completed

    This works fine if you want to use the default tablespace sizes and options, but there does not seem to be a way to specify the tablespace options on the command line. You can specify the name of the tablespace and temp tablespace, but they must already exist in the database before running RCU. I guess you can always create the tablespaces first using your desired sizes and options and then run RCU and specify the tablespaces you created. When looking up the command line options in the RCU doc I found it has the list of components for each product that it supports. See Appendix B in the RCU User's Guide.

    Read the article

  • Computing a normal matrix in conjunction with gluLookAt

    - by Chris Smith
    I have a hand-rolled camera class that converts yaw, pitch, and roll angles into a forward, side, and up vector suitable for calling gluLookAt. Using this camera class I can modify the model-view matrix to move about the 3D world just fine. However, I am having trouble when using this camera class (and associated model-view matrix) when trying to perform directional lighting in my vertex shader. The problem is that the light direction, (0, 1, 0) for example, is relative to where the 'camera is looking' and not the actual world coordinates. (Or is this eye coordinates vs. model coordinates?) I would like the light direction to be unaffected by the camera's viewing direction. For example, when the camera is looking down the Z axis the ground is lit correctly. However, if I point the camera straight at the ground, then it goes dark. This is (I think) because the light direction is parallel with the camera's 'up' vector which is perpendicular with the ground's normal vector. I tried computing the normal matrix without taking the camera's model view into account, but then none of my objects were rotated correctly. Sorry if this sounds vague. I suspect there is a straight forward answer, but I'm not 100% clear on how the normal matrix should be used for transforming vertex normals in my vertex shader. For reference, here is pseudo code for my rendering loop: pMatrix = new Matrix(); pMatrix = makePerspective(...) mvMatrix = new Matrix() camera.apply(mvMatrix); // Calls gluLookAt // Move the object into position. mvMatrix.translatev(position); mvMatrix.rotatef(rotation.x, 1, 0, 0); mvMatrix.rotatef(rotation.y, 0, 1, 0); mvMatrix.rotatef(rotation.z, 0, 0, 1); var nMatrix = new Matrix(); nMatrix.set(mvMatrix.get().getInverse().getTranspose()); // Set vertex shader uniforms. gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false, new Float32Array(pMatrix.getFlattened())); gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false, new Float32Array(mvMatrix.getFlattened())); gl.uniformMatrix4fv(shaderProgram.nMatrixUniform, false, new Float32Array(nMatrix.getFlattened())); // ... gl.drawElements(gl.TRIANGLES, this.vertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0); And the corresponding vertex shader: // Attributes attribute vec3 aVertexPosition; attribute vec4 aVertexColor; attribute vec3 aVertexNormal; // Uniforms uniform mat4 uMVMatrix; uniform mat4 uNMatrix; uniform mat4 uPMatrix; // Varyings varying vec4 vColor; // Constants const vec3 LIGHT_DIRECTION = vec3(0, 1, 0); // Opposite direction of photons. const vec4 AMBIENT_COLOR = vec4 (0.2, 0.2, 0.2, 1.0); float ComputeLighting() { vec4 transformedNormal = vec4(aVertexNormal.xyz, 1.0); transformedNormal = uNMatrix * transformedNormal; float base = dot(normalize(transformedNormal.xyz), normalize(LIGHT_DIRECTION)); return max(base, 0.0); } void main(void) { gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0); float lightWeight = ComputeLighting(); vColor = vec4(aVertexColor.xyz * lightWeight, 1.0) + AMBIENT_COLOR; } Note that I am using WebGL, so if the anser is use glFixThisProblem(...) any pointers on how to re-implement that on WebGL if missing would be appreciated.

    Read the article

  • wifi hardware switch doesn't work on a Dell 1018

    - by user42566
    I have a problem with my Dell 1018 Inspiron. I can't switch the wifi on, through the key on the keyboard. I think it's a driver problem since Ubuntu 11.10. This are the versions i tried: Ubuntu 10.04 / 10.10 It's possible to install the driver by hand: sudo add-apt-repository ppa:lexical/hwe-wireless sudo apt-get update sudo apt-get install rtl8192ce-dkms Ubuntu 11.04 It works out of the box Ubuntu 11.10 / 12.04 I haven’t found any solution for these versions. The "ppa:lexical/hwe-wireless" doesn't work for these versions. It says Can not find package rtl8192ce-dkms. The window of additional drivers is empty. So I can't install the driver. The wired network works good. Here is some information: 0: dell-wifi: Wireless LAN Soft blocked: no Hard blocked: no 1: phy0: Wireless LAN Soft blocked: no Hard blocked: yes sudo lshw -class network *-network description: Ethernet interface product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:05:00.0 logical name: eth0 version: 05 serial: 5c:26:0a:0d:20:10 size: 10Mbit/s capacity: 100Mbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=rtl_nic/rtl8105e-1.fw latency=0 link=no multicast=yes port=MII speed=10Mbit/s resources: irq:43 ioport:2000(size=256) memory:f0f2c000-f0f2cfff memory:f0f18000-f0f1bfff *-network description: Wireless interface product: RTL8188CE 802.11b/g/n WiFi Adapter vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:07:00.0 logical name: wlan0 version: 01 serial: 70:f1:a1:fe:15:bd width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=rtl8192ce driverversion=3.2.0-22-generic-pae firmware=N/A ip=192.168.1.76 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn resources: irq:17 ioport:3000(size=256) memory:f0100000-f0103fff mark@mark-Inspiron-1018:~$ mark@mark-Inspiron-1018:~$ sudo lspci -nn 00:00.0 Host bridge [0600]: Intel Corporation N10 Family DMI Bridge [8086:a010] 00:02.0 VGA compatible controller [0300]: Intel Corporation N10 Family Integrated Graphics Controller [8086:a011] 00:02.1 Display controller [0380]: Intel Corporation N10 Family Integrated Graphics Controller [8086:a012] 00:1b.0 Audio device [0403]: Intel Corporation N10/ICH 7 Family High Definition Audio Controller [8086:27d8] (rev 02) 00:1c.0 PCI bridge [0604]: Intel Corporation N10/ICH 7 Family PCI Express Port 1 [8086:27d0] (rev 02) 00:1c.1 PCI bridge [0604]: Intel Corporation N10/ICH 7 Family PCI Express Port 2 [8086:27d2] (rev 02) 00:1d.0 USB controller [0c03]: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 [8086:27c8] (rev 02) 00:1d.1 USB controller [0c03]: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 [8086:27c9] (rev 02) 00:1d.2 USB controller [0c03]: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 [8086:27ca] (rev 02) 00:1d.3 USB controller [0c03]: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 [8086:27cb] (rev 02) 00:1d.7 USB controller [0c03]: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller [8086:27cc] (rev 02) 00:1e.0 PCI bridge [0604]: Intel Corporation 82801 Mobile PCI Bridge [8086:2448] (rev e2) 00:1f.0 ISA bridge [0601]: Intel Corporation NM10 Family LPC Controller [8086:27bc] (rev 02) 
00:1f.2 SATA controller [0106]: Intel Corporation N10/ICH7 Family SATA Controller [AHCI mode] [8086:27c1] (rev 02) 00:1f.3 SMBus [0c05]: Intel Corporation N10/ICH 7 Family SMBus Controller [8086:27da] (rev 02) 05:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 05) 07:00.0 Network controller [0280]: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter [10ec:8176] (rev 01) mark@mark-Inspiron-1018:~$ mark@mark-Inspiron-1018:~$ lsusb Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 002: ID 174f:1127 Syntek mark@mark-Inspiron-1018:~$

    Read the article

  • JavaOne Latin America 2012 Trip Report

    - by reza_rahman
    JavaOne Latin America 2012 was held at the Transamerica Expo Center in Sao Paulo, Brazil on December 4-6. The conference was a resounding success with a great vibe, excellent technical content and numerous world class speakers. Some notable local and international speakers included Bruno Souza, Yara Senger, Mattias Karlsson, Vinicius Senger, Heather Vancura, Tori Wieldt, Arun Gupta, Jim Weaver, Stephen Chin, Simon Ritter and Henrik Stahl. Topics covered included the JCP/JUGs, Java SE 7, HTML 5/WebSocket, CDI, Java EE 6, Java EE 7, JSF 2.2, JMS 2, JAX-RS 2, Arquillian and JavaFX. Bruno Borges and I manned the GlassFish booth at the Java Pavilion on Tuesday and Wednesday. The booth traffic was decent and not too hectic. We met a number of GlassFish adopters including perhaps one of the largest GlassFish deployments in Brazil as well as some folks migrating to Java EE from Spring. We invited them to share their stories with us. We also talked with some key members of the local Java community. Tuesday evening we had the GlassFish party at the Tribeca Pub. The party was definitely a hit and we could have used a larger venue (this was the first time we had the GlassFish party in Brazil). Along with GlassFish enthusiasts, a number of Java community leaders were there. We met some of the same folks again at the JUG leaders' party on Wednesday evening. On Thursday Arun Gupta, Bruno Borges and I ran a hands-on lab on JAX-RS, WebSocket and Server-Sent Events (SSE) titled "Developing JAX-RS Web Applications Utilizing Server-Sent Events and WebSocket". This is the same Java EE 7 lab run at JavaOne San Francisco. The lab provides developers a first-hand glimpse of what an HTML 5 powered Java EE application might look like. We had an overflow crowd for the lab (at one point we had about twenty people standing) and the lab went very well. The slides for the lab are here: Developing JAX-RS Web Applications Utilizing Server-Sent Events and WebSocket from Reza Rahman The actual contents for the lab are available here. Give me a shout if you need help getting it up and running. I gave two solo talks following the lab. The first was on JMS 2 titled "What’s New in Java Message Service 2". This was essentially the same talk given by JMS 2 specification lead Nigel Deakin at JavaOne San Francisco. I talked about the JMS 2 simplified API, JMSContext injection, delivery delays, asynchronous send, JMS resource definition in Java EE 7, standardized configuration for JMS MDBs in EJB 3.2, mandatory JCA pluggability and the like. The session went very well, there was good Q & A and someone even told me this was the best session of the conference! The slides for the talk are here: What’s New in Java Message Service 2 from Reza Rahman My last talk for the conference was on JAX-RS 2 in the keynote hall. Titled "JAX-RS 2: New and Noteworthy in the RESTful Web Services API" this was basically the same talk given by the specification leads Santiago Pericas-Geertsen and Marek Potociar at JavaOne San Francisco. I talked about the JAX-RS 2 client API, asynchronous processing, filters/interceptors, hypermedia support, server-side content negotiation and the like. The talk went very well and I got a few very kind compliments afterwards.
The slides for the talk are here: JAX-RS 2: New and Noteworthy in the RESTful Web Services API from Reza Rahman On a more personal note, Sao Paulo has always had a special place in my heart as the incubating city for Sepultura and Soulfly -- two of my favorite heavy metal musical groups of all time! Consequently, the city has a perpetually alive and kicking metal scene pretty much any given day of the week. This time I got to check out a solid performance by the local metal band Republica at the legendary Manifesto Bar. I also wanted to see a Dio Tribute at the Blackmore but ran out of time and energy... Overall I enjoyed the conference/Sao Paulo and look forward to going to Brazil again next year!

    Read the article

  • Lease Accounting Closed for Comment

    - by Theresa Hickman
    December 15, 2010 marked the last day to send public comments to FASB and IASB on lease accounting. June 2011 is the deadline for the final consideration of the Leases Exposure Draft that will be given to standard setters in order to create a new lease accounting standard. Landlords, lessees, retailers, airlines industry, etc. are all worried right now about the changes to lease accounting. They feel the changes will be too costly and complex without adding significant improvement to the quality and relevance of financial statements. In a nutshell, IASB and FASB want to abolish operating leases where the lessee records the periodic payments as an expense over time. The proposed changes will mean that the accounting for leases will move from the P&L and hit both the lessee's and lessor's balance sheets. For companies that occupy a lot of property, this could significantly increase their liabilities not to mention front-load much of the costs that they were able to spread out over time before. Why are IASB and FASB doing this? Their goal is to have consistent accounting for both the lessees and lessors with higher quality financial statements. Leasing is one of four major projects being undertaken by the IASB and FASB in order to complete convergence between US GAAP and IFRS. I spoke to our resident accounting expert Seamus Moran about this to better understand how this might impact accounting software. He reminded me that the proposed changes to both US GAAP and IFRS in respect to leases are "proposed." It is still inappropriate to account for leases the way they are being proposed and we still need to account for them in accordance to the current regulations, which is what current accounting software programs, such as E-Business Suite Release 12.1 and prior and PeopleSoft Enterprise support. The FASB (US GAAP) and IASB (IFRS) exposure drafts (EDs) that outline the proposal were published. The FASB edition was published on August 17th, with comments due by December 15th. The IASB edition was published on the same date, and comments were due in London on the same date. Exposure drafts are the method both the FASB and the IASB use to solicit General Acceptance, the "GA" in GAAP. Both Boards will consider the input they have received, and perhaps revise the proposal. The proposal has come in for some criticism, both from the finance houses and the uses of the leased assets. There is, given the opposition to it, an excellent chance that the Leasing proposal will be modified or rewritten. We will know this in about six months, the usual time it takes for the FASB and IASB to digest the comments they receive. If they feel the proposal has General Acceptance, they will issue the final Standard at that time; if not, they will issue a revised proposal with another year of comment of drafting. Oracle participates in the standard setting process and is fully aware of the leasing proposal. We have designs that would reflect the proposal in hand. These designs will be finalized when the proposal is finalized. It is likely that customers will develop new financial arrangements if the proposal is finalized, and we are working with customers and partners to stay in touch with people's business responses to the proposal. The IASB and FASB are aware that ERP companies will have to revise their software, and that the companies filing results under IFRS or under US GAAP will have to implement such software. 
The form and timing of the release of the updated software will depend on the schedule of the take up of the new standard, the complexity of the standard, and the releases supported at the time the standard becomes effective.

    Read the article

  • CSM shadow errors when models are split

    - by KaiserJohaan
    I'm getting closer to fixing CSM, but there seems to be one more issue at hand. At certain angles, the models will be caught/split between two shadow map cascades, like below. first depth split second depth split - here you can see the model is caught between the splits How does one fix this? Increase the overlapping boundaries between the splits? Or is the frustrum erronous? CameraFrustrum CalculateCameraFrustrum(const float fovDegrees, const float aspectRatio, const float minDist, const float maxDist, const Mat4& cameraViewMatrix, Mat4& outFrustrumMat) { CameraFrustrum ret = { Vec4(1.0f, -1.0f, 0.0f, 1.0f), Vec4(1.0f, 1.0f, 0.0f, 1.0f), Vec4(-1.0f, 1.0f, 0.0f, 1.0f), Vec4(-1.0f, -1.0f, 0.0f, 1.0f), Vec4(1.0f, -1.0f, 1.0f, 1.0f), Vec4(1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, -1.0f, 1.0f, 1.0f), }; const Mat4 perspectiveMatrix = PerspectiveMatrixFov(fovDegrees, aspectRatio, minDist, maxDist); const Mat4 invMVP = glm::inverse(perspectiveMatrix * cameraViewMatrix); outFrustrumMat = invMVP; for (Vec4& corner : ret) { corner = invMVP * corner; corner /= corner.w; } return ret; } Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir) { Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), -glm::normalize(lightDir), Vec3(0.0f, -1.0f, 0.0f)); Vec4 transf = lightViewMatrix * cameraFrustrum[0]; float maxZ = transf.z, minZ = transf.z; float maxX = transf.x, minX = transf.x; float maxY = transf.y, minY = transf.y; for (uint32_t i = 1; i < 8; i++) { transf = lightViewMatrix * cameraFrustrum[i]; if (transf.z > maxZ) maxZ = transf.z; if (transf.z < minZ) minZ = transf.z; if (transf.x > maxX) maxX = transf.x; if (transf.x < minX) minX = transf.x; if (transf.y > maxY) maxY = transf.y; if (transf.y < minY) minY = transf.y; } Mat4 viewMatrix(lightViewMatrix); viewMatrix[3][0] = -(minX + maxX) * 0.5f; viewMatrix[3][1] = -(minY + maxY) * 0.5f; viewMatrix[3][2] = -(minZ + maxZ) * 0.5f; viewMatrix[0][3] = 0.0f; viewMatrix[1][3] = 0.0f; viewMatrix[2][3] = 0.0f; viewMatrix[3][3] = 1.0f; Vec3 halfExtents((maxX - minX) * 0.5, (maxY - minY) * 0.5, (maxZ - minZ) * 0.5); return OrthographicMatrix(-halfExtents.x, halfExtents.x, halfExtents.y, -halfExtents.y, halfExtents.z, -halfExtents.z) * viewMatrix; }

    Read the article

  • Critical Threads Optimization

    - by Rafael Vanoni
    Background: One of the more common issues we've been seeing in the field is the growing difficulty in optimizing performance of multi-threaded applications. A good portion of this difficulty is due to the increasing complexity of modern processors that present various degrees of sharing relationships between hardware components. Take any current CMT processor and you'll find any number of CPUs sharing execution pipelines, floating point units, caches, etc. Consequently, applying the traditional recipe of one software thread for each CPU will have varying degrees of success, according to the layout of the underlying hardware. On top of this increasing complexity we've also seen processors with features that aim at dynamically resourcing software threads according to their utilization. Intel's Turbo Boost allows processors to increase their operating frequency if there is enough thermal headroom available and the processor isn't fully utilized. More recently, the SPARC T4 processor introduced dynamic threading, allowing each core to dynamically allocate more resources to its active CPUs. Both cases are in essence recognizing that current processors will be running a wide mix of workloads, some designed for throughput, others for low latency. The hardware is providing mechanisms to dynamically resource threads according to their runtime behavior. We're very aware of these challenges in Solaris, and have been working to provide the best out of box performance while providing mechanisms to further optimize applications when necessary. The Critical Threads Optimization was introduced in Solaris 10 8/11 and Solaris 11 as one such mechanism that allows customers to both address issues caused by contention over shared hardware resources and explicitly take advantage of features such as T4's dynamic threading. What it is: The basic idea is to allow performance critical threads to execute with more exclusive access to hardware resources. For example, when deploying an application that implements a producer/consumer model, it'll likely be advantageous to give the producer more exclusive access to the hardware instead of having it competing for resources with all the consumers. In the case of a T4 based system, we may want to have a producer running by itself on a single core and create one consumer for each of the remaining CPUs. With the Critical Threads Optimization we're extending the semantics of scheduling priorities (which thread should run first) to include priority over shared resources (which thread should have more "space"). Now the scheduler will not only run higher priority threads first: it will also provide them with more exclusive access to hardware resources if they are available. How does it work? Using the previous example in Solaris 11, all you'd have to do would be to place the producer in the Fixed Priority (FX) scheduling class at priority 60, or in the Real Time (RT) class at any priority, and Solaris will try to give it more "hardware space". On both Solaris 10 8/11 and Solaris 11 this can be achieved through the existing priocntl(1,2) and priocntlset(2) interfaces. If your application already assigns these priorities to performance critical threads, there's no additional step you need to take. One important aspect of this optimization is that it requires some level of idleness in the system, either as a result of sizing the application beforehand or through periods of transient idleness during runtime.
If the system is fully committed, the scheduler will put all the available CPUs to work. Best practices: If you're an application developer, we encourage you to look into assigning the right priorities to the different threads in your application. Solaris provides different scheduling classes (Time Share, Interactive, Fair Share, Fixed Priority and Real Time) that offer different policies and behaviors. It is not always simple to figure out which set of threads is critical to the performance of a workload, and it may not always be feasible to take advantage of this optimization, but we believe that this can be correctly (and safely) done during development. Overall, the out of box performance in Solaris should meet your workload's requirements. If you are looking for that extra bit of performance, then the Critical Threads Optimization may be what you're looking for.

    Read the article

  • How to Waste Your Marketing Budget

    - by Mike Stiles
    Philosophers have long said if you find out where a man’s money is, you’ll know where his heart is. Find out where money in a marketing budget is allocated, and you’ll know how adaptive and ready that company is for the near future. Marketing spends are an investment. Not unlike buying stock, the money is placed in areas the marketer feels will yield the highest return. Good stock pickers know the lay of the land, the sectors, the companies, and trends. Likewise, good marketers should know the media available to them, their audience, what they like & want, what they want their marketing to achieve…and trends. So what are they doing? And how are they doing? A recent eTail report shows nearly half of retailers planned on focusing on SEO, SEM, and site research technologies in the coming months. On the surface, that’s smart. You want people to find you. And you’re willing to let the SEO tail wag the dog and dictate the quality (or lack thereof) of your content such as blogs to make that happen. So search is prioritized well ahead of social, multi-channel initiatives, email, even mobile - despite the undisputed explosive growth and adoption of it by the public. 13% of retailers plan to focus on online video in the next 3 months. 29% said they’d look at it in 6 months. Buying SEO trickery is easy. Attracting and holding an audience with wanted, relevant content…that’s the hard part. So marketers continue to kick the content can down the road. Pretty risky since content can draw and bind customers to you. Asked to look a year ahead, retailers started thinking about CRM systems, customer segmentation, and loyalty, (again well ahead of online video, social and site personalization). What these investors are missing is social is spreading across every function of the enterprise and will be a part of CRM, personalization, loyalty programs, etc. They’re using social for engagement but not for PR, customer service, and sales. Mistake. Allocations are being made seemingly blind to the trends. Even more peculiar are the results of an analysis Mary Meeker of Kleiner Perkins made. She looked at how much time people spend with media types and how marketers are investing in those media. 26% of media consumption is online, marketers spend 22% of their ad budgets there. 10% of media time is spent with mobile, but marketers are spending 1% of their ad budgets there. 7% of media time is spent with print, but (get this) marketers spend 25% of their ad budgets there. It’s like being on Superman’s Bizarro World. Mary adds that of the online spending, most goes to search while spends on content, even ad content, stayed flat. Stock pickers know to buy low and sell high. It means peering with info in hand into the likely future of a stock and making the investment in it before it peaks. Either marketers aren’t believing the data and trends they’re seeing, or they can’t convince higher-ups to acknowledge change and adjust their portfolios accordingly. Follow @mikestilesImage via stock.xchng

    Read the article

  • Two Candidates + One Job = Two Different Outcomes

    - by david.talamelli
    Recruiters have always headhunted (sidenote: I do not like this word; in general I think the type of people who use the phrase “headhunting” are the ones who are trying to sound more important than they likely are). Any serious Recruiter engages in direct recruiting activity; it is part and parcel of the business, not something unique. With the uptake in Social Media over the past 4-5 years, we have seen an increase in the number of Recruiters proactively reaching out to people about job opportunities. We have also seen this activity increase across all levels of hire, from help desk roles to C-Level Executives. While getting approached about a role can be a nice boost to a person’s ego, do not let it give you an inflated sense of entitlement. The way that people handle themselves during these calls and subsequent interviews will have a large impact on their potential to land that job. Last week I spoke to two very different candidates, both about the same position and both with very different outcomes. On paper, Candidate #1 looked fantastic; they ticked many of the boxes that we were looking for. The person is working at a global IT company in a similar role to the one we were hiring for, but not as senior as the role we had. This role would have been the perfect step to getting involved in more complex work for the person. Candidate #2 had less polished IT experience, ticked some of the boxes we were looking for and, on paper, in comparison to Candidate #1 was not as close a fit as Candidate #1 was. It seemed like I was comparing apples and oranges. After speaking to both candidates it turns out I was comparing apples and oranges, except the person better suited for our role was not the one I was expecting it would be. The first candidate on paper looked great – they had the experience we were looking for and appeared to be just right for the role, but after talking to them, they gave me the impression that they thought the world owed them. The impression I was left with was that they did not equate success with hard work; they seemed more interested in “what is in it for me”. Rather than having a proper conversation with me, I was often cut off and asked to hurry it up when explaining our business, what we are doing, etc. This person seemed more interested in the job title and money than in thinking about ways to make the role successful. Candidate #2, who had limited experience, made up for any perceived lack of experience and then some with a demonstrated motivation to succeed and do the things needed to make that happen. Candidate #2 made a great first impression; they did not seem afraid of hard work and demonstrated a “team player” attitude. In talking to them they kept me engaged, listened and asked thoughtful questions that made me think this is the type of person who creates their own luck and who would thrive in a place like Oracle. Skills, capabilities, experience and a good resume can certainly get your foot in the door, but the wrong attitude or approach to work can close those opportunities just as easily. On the other hand, hard work, effort and a genuine work ethic may help open doors that would otherwise be closed to you. A resume with all the credentials gets you in the front door, but that is just the beginning of the process. It is not how we start the race that is important, it’s how things end that matters most.

    Read the article

  • Things I've noticed with DVCS

    - by Wes McClure
    Things I encourage: Frequent local commits This way you don't have to be bothered by changes others are making to the central repository while working on a handful of related tasks.  It's a good idea to try to work on one task at a time and commit all changes at partitioned stopping points.  A local commit doesn't have to build, just FYI, so a stopping point doesn't mean a build point nor a point that you can push centrally.  There should be several of these in any given day.  2 hours is a good indicator that you might not be leveraging the power of frequent local commits.  Once you have verified a set of changes works, save them away, otherwise run the risk of introducing bugs into it when working on the next task.  The notion of a task By task I mean a related set of changes that can be completed in a few hours or less.  In the same token don’t make your tasks so small that critically related changes aren’t grouped together.  Use your intuition and the rest of these principles and I think you will find what is comfortable for you. Partial commits Sometimes one task explodes or unknowingly encompasses other tasks, at this point, try to get to a stopping point on part of the work you are doing and commit it so you can get that out of the way to focus on the remainder.  This will often entail committing part of the work and continuing on the rest. Outstanding changes as a guide If you don't commit often it might mean you are not leveraging your version control history to help guide your work.  It's a great way to see what has changed and might be causing problems.  The longer you wait, the more that has changed and the harder it is to test/debug what your changes are doing! This is a reason why I am so picky about my VCS tools on the client side and why I talk a lot about the quality of a diff tool and the ability to integrate that with a simple view of everything that has changed.  This is why I love using TortoiseHg and SmartGit: they show changed files, a diff (or two way diff with SmartGit) of the current selected file and a commit message all in one window that I keep maximized on one monitor at all times. Throw away / stash commits There is extreme value in being able to throw away a commit (or stash it) that is getting out of hand.  If you do not commit often you will have to isolate the work you want to commit from the work you want to throw away, which is wasted productivity and highly prone to errors.  I find myself doing this about once a week, especially when doing exploratory re-factoring.  It's much easier if I can just revert all outstanding changes. Sync with the central repository daily The rest of us depend on your changes.  Don't let them sit on your computer longer than they have to.  Waiting increases the chances of merge conflict which just decreases productivity.  It also prohibits us from doing deploys when people say they are done but have not merged centrally.  This should be done daily!  Find a way to partition the work you are doing so that you can sync at least once daily. Things I discourage: Lots of partial commits right at the end of a series of changes If you notice lots of partial commits at the end of a set of changes, it's likely because you weren't frequently committing, nor were you watching for the size of the task expanding beyond a single commit.  Chances are this cost you productivity if you use your outstanding changes as a guide, since you would have an ever growing list of changes. 
Committing single files Committing single files means you waited too long and no longer understand all the changes involved.  It may mean there were overlapping changes in single files that cannot be isolated.  In either case, go back to the suggestions above to avoid this.  Committing frequently does not mean committing frequently right at the end of a day's work. It should be spaced out over the course of several tasks, not all at the end in a 5 minute window.

    Read the article

  • Green (Screen) Computing

    - by onefloridacoder
    I recently was given an assignment to create a UX where a user could use the up and down arrow keys, as well as the tab and enter keys, to move through a Silverlight datagrid that is going to be used as part of a high throughput data entry UI. And to be honest, I’ve not trapped key codes since I coded JavaScript a few years ago.  Although the frameworks I’m using made it easy, it wasn’t without some trial and error.  The other thing that bothered me was that the customer tossed this into the use case as they were delivering the use case.  Fine.  I’ll take a whack at anything and beat up myself and beg (I’m not beyond begging for help) the community for help to get something done if I have to. It wasn’t as bad as I thought, and I thought I would hopefully save someone a few keystrokes if you wanted to build a green screen for your customer.  Here’s the ValueConverter to handle changing the strings to decimals and then back again.  The value is a nullable valuetype so there are a few extra steps to take.  Usually the “ConvertBack()” method doesn’t get addressed but in this case we have two-way binding and the converter needs to ensure that if the user doesn’t enter a value it will remain null when the value is reapplied to the model object’s setter.

        using System;
        using System.Windows.Data;
        using System.Globalization;

        public class NullableDecimalToStringConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                if (!(((decimal?)value).HasValue))
                {
                    return (decimal?)null;
                }
                if (!(value is decimal))
                {
                    throw new ArgumentException("The value must be of type decimal");
                }

                NumberFormatInfo nfi = culture.NumberFormat;
                nfi.NumberDecimalDigits = 4;

                return ((decimal)value).ToString("N", nfi);
            }

            public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                decimal nullableDecimal;
                decimal.TryParse(value.ToString(), out nullableDecimal);

                return nullableDecimal == 0 ? null : nullableDecimal.ToString();
            }
        }

    The ConvertBack() method uses TryParse to create a value from the incoming string, so if the parse fails, we get a null value back, which is what we would expect.  But while I was testing I realized that if the user types something like “2..4” instead of “2.4”, TryParse will fail and still return a null.  The user is getting “puuu-lenty” of eye-candy to ensure they know how many values are affected in this particular view. Here’s the XAML code.  This is the simple part, we just have a DataGrid with one column here that’s bound to the appropriate ViewModel property with the Converter referenced as well.

        <data:DataGridTextColumn
            Header="On-Hand"
            Binding="{Binding Quantity,
                Mode=TwoWay,
                Converter={StaticResource DecimalToStringConverter}}"
            IsReadOnly="False" />

    Nothing too magical here.  Just some XAML to hook things up.  Here’s the code-behind that handles the DataGrid KeyUp event.  These are wired to a local/private method but could be converted to something the ViewModel could use, but I just need to get this working for now.

        // Wire up happens in the constructor.
        this.PicDataGrid.KeyUp += (s, e) => this.HandleKeyUp(e);

        // DataGrid.BeginEdit fires when DataGrid.KeyUp fires.
        private void HandleKeyUp(KeyEventArgs args)
        {
            if (args.Key == Key.Down ||
                args.Key == Key.Up ||
                args.Key == Key.Tab ||
                args.Key == Key.Enter)
            {
                this.PicDataGrid.BeginEdit();
            }
        }

    And that’s it.  The ValueConverter was the biggest problem starting out because I was using an existing converter that didn’t take nullable value types into account.  Once the converter was passing back the appropriate value (null, “#.####”) the grid cell(s) and the model objects started working as I needed them to. HTH.

    Read the article

  • Orchestrating the Virtual Enterprise, Part I

    - by Kathryn Perry
    A guest post by Jon Chorley, Oracle's Chief Sustainability Officer & Vice President, SCM Product Strategy During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model because its much harder to be best in class across such a wide a range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise." Case in point: For Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise.  What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise.  Supply Chain Management Systems in a Virtual Enterprise Historically enterprise software was constructed to automate the ERP - and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise - most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources.  Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems. 
In my next post, I will share examples of companies that have made that shift and talk more about the distributed orchestration process.

    Read the article

  • What are some good questions (and good/bad answers) to ask at an interview to gauge the competency of the company/team?

    - by Wayne M
    I'm already familiar with the Joel Test, but it's been my experience that some of the questions there have the answers "massaged" to make the company seem better than it is. I've had several jobs in the past that, for instance, claimed they had a QA process and did unit testing, and what they really meant is "The programmers test the app, and test with the debugger and via trial-and-error."; they said they used SVN but they just lumped everything into one giant repository and had no concept of branching/merging or anything more complicated than updating and committing; said they can build in one step and what they really mean is it's "one step" to copy dozens of files by hand from the programmer's PC to the live server. How do you go about properly gauging a company's environment to make sure that it's a well-evolved company and not stuck on doing things a certain way because they've done it for years and they're ignorant of change? You can almost never ask to see their source code, so you're stuck trying to figure out if the interviewer's answer is accurate or BS to make the company seem good. Besides the Joel Test, what are some other good questions to get the proper feel for a company, and more importantly what are some good and bad answers that could indicate a good or bad company? I mean something like (take at face value, please, it's all I could think of at short notice): Question: How does the software team apply the SOLID principles and Inversion of Control to their code? Good Answer: We adhere to SOLID wherever possible; we use TDD so it kind of forces us to write abstract, testable code. We use Ninject for our IoC container because it's fairly easy to configure - it was that or StructureMap but I find Ninject a bit more intuitive, and who doesn't like ninjas? You're not a pirate, are you? Bad Answer: Our code is pretty secure, yeah. And what's this Inversion of Control thing? I've never heard of it before. You see what I did there. The "good" answer uses facts to back it up and has a bit of "in crowd" humor; the bad answer shows complete ignorance of the question - not necessarily a bad thing if you are interviewing for a manager/director position, but a terrible answer and a huge red flag if you're interviewing as a developer and talking to a senior developer or manager! My biggest problem at the moment is being able to take a generic response and gauge whether it's the good or bad answer; more often than not it's the bad kind and I find myself frustrated almost from day one at the new job. I suppose I could name drop if I ask about specific things (e.g. "Do you write unit tests?" and if the answer is yes, ask if they use NUnit, MbUnit or something else; if they mention data access ask if they use a clean ORM like NHibernate or something more coupled like EF or Linq) but is there another way short of being resolute enough to actually call the interviewer on things (which will almost certainly result in not getting the job, but if they are skirting the question it's probably not a job I want).
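    For reference, the kind of minimal sketch that backs up the "good answer" above might look like this (the IEmailSender and SmtpEmailSender types are invented purely for illustration):

        using Ninject;

        // Composition root: bind abstractions to implementations once, at startup.
        var kernel = new StandardKernel();
        kernel.Bind<IEmailSender>().To<SmtpEmailSender>();

        // Consumers ask for the abstraction; the container supplies the implementation.
        var mailer = kernel.Get<IEmailSender>();

    A candidate who can produce something like this off the cuff, and explain why the bindings live in one place, is probably telling the truth about how their team uses IoC.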

    Read the article

  • How to Mentor a Junior Developer

    - by Josh Johnson
    This title is a little broad, but I may need to give a little background before I can ask my question properly. I know that similar questions have been asked here already, but in my case I'm not asking whether I should be mentoring someone or whether the person is a good fit for being a software developer. That is not my place to judge. I have not been asked outright, but it is apparent that my fellow senior developers and I are to mentor the new developers who start here. I have no problem with this whatsoever and, in many cases, it lends me a fresh perspective on things and I end up learning in the process. Also, I remember how beneficial it was at the beginning of my career when someone would take some time to teach me something. When I say "new developer," they could be anywhere from fresh out of college to having a year or two of experience. Recently and in the past we've had people start here who seem to have an attitude toward development/programming which is different from mine and hard for me to reconcile; they seem to extract just enough information to get the task done but not really learn from it. I find myself going over and over the same issues with them. I understand that part of this could be a personality thing, but I feel it's my job to do my best and sort of push them out of the nest while they're under my wing, so to speak. How can I impart just enough information so that they will learn, but not give so much as to solve the problem for them? Or perhaps: What's the proper response to questions that are designed to take the path of least resistance - a response that, in essence, forces them to learn instead of taking the easy way out? These are probably more general teaching questions and don't have that much to do specifically with software development. Note: I do not get a say in what tasks they are working on. Management doles the tasks out, and they could be anything from a very simple bug fix to starting an entire application by themselves. While this is not ideal by any means and obviously presents its own gauntlet of challenges, I feel it's a topic best left for another question. So the best I can do is help them with the problem at hand, try to help them break it down into simpler problems, and also check their commit logs and point out mistakes that they made. My main objectives are to:

    - Help them out and give them the tools they need to start becoming more self-reliant.
    - Steer them in the right direction and break bad development habits early on.
    - Lessen the amount of time I spend with them (the personality type described above seems to need much more one-on-one time and does not do well over IM or email. While that's generally fine, I can't always stop what I'm working on, break my stride, and help them debug an error on a moment's notice; I have my own projects that need to get done).

    Read the article

  • Growing Talent

    The subtitle of Daniel Coyle's intriguing book The Talent Code is "Greatness Isn't Born. It's Grown. Here's How." The Talent Code proceeds to lay out a theory of how expertise can be cultivated through specific practices that encourage the growth of myelin in the brain. Myelin is a material that is produced and wraps around heavily used circuits in the brain, making them more efficient. Coyle uses an analogy that geeks will appreciate. When a circuit in the brain is used a lot (i.e. a specific action is repeated), the myelin insulates that circuit, increasing its bandwidth from telephone over copper to high-speed broadband. This leads to the funny phenomenon of effortless expertise. Although highly skilled, the best players make it look easy. Coyle provides some biological backing for the long-held theory that it takes 10,000 hours of practice to achieve mastery over a given subject. 10,000 hours or 10 years, as in "Teach Yourself Programming in Ten Years" and others. However, it is not just that more hours equal more mastery. The other factors that Coyle identifies include deep practice: practice which crucially involves drills that are challenging without being impossible. Another way to put it is that every day you spend doing only tasks you find monotonous and automatic, you are literally stagnating your brain's development! Perhaps Coyle's subtitle needs one more phrase: "Greatness Isn't Born. It's Grown. Here's How. And oh yeah, it's not easy." Challenging yourself, continuing to persist in the face of repeated failures, and practicing every day is not easy. As consultants, we sell our expertise, so it makes sense that we plan projects so that people can play to their strengths. At the same time, an important part of our culture is constant improvement, challenging yourself to be better. And the balancing contest ensues. I just finished working on a proof of concept (POC) we did for a project we are bidding on. It was completely time-boxed, so our team naturally split responsibilities amongst ourselves according to who was better at what. I must have been pretty bad at the other components, as I found myself working on the user interface, not my usual strength. The POC had a website frontend, and one thing I do know is HTML. After starting out in pure ASP.NET WebForms, I got frustrated: time was ticking, I knew what I wanted in HTML, but I couldn't coax the right output out of the ASP.NET controls. I needed two or three elements on the screen that were identical in layout, with different content. With a backup plan of writing the HTML into the response by hand, I decided to challenge myself a bit and see what I could do in an hour or two using the Microsoft-submitted jQuery micro-templating JavaScript library. This risk paid off. I was able to quickly get the user interface up and running, responsive to the JSON data we were working with. I felt energized by the double win of getting the POC ready and learning something new. Opportunities specifically like this POC don't come around often, but the takeaway is that while it won't be easy, there are ways to generate your own opportunities to grow towards greatness.

    Read the article

  • SQL Authority News – Secret Tool Box of Successful Bloggers: 52 Tips to Build a High Traffic Top Ranking Blog

    - by Pinal Dave
    When I started this blog, it was meant as a bookmark for myself for helpful tips and tricks.  Gradually, it grew into a blog that others were reading and commenting on.  While SQL and databases are my first love and the reason I started this blog, the side effect was that I discovered I loved writing.  I discovered a secret goal I didn’t even know I wanted – I wanted to become an author.  For a long time, writing this blog satisfied that urge.  Gradually, though, I wanted to see my name in print.

    12th Book

    Over the past few years I have authored and co-authored a number of books – they are all based on my knowledge of SQL Server, and were meant to spread my years of experience into the world, to share what I have learned with my community.  I currently have eleven of these “manuals” available for sale.  As exciting as it was to see my name in print, I still felt that there was more I could do as an author. That is when I realized that I am more than just a SQL expert.  I have been writing this blog now for more than 10 years, and it grew from a personal bookmark to a thriving website with over 2 million views per month.  I thought to myself, “I could write a book about how to create a successful blog!”  And that is exactly what I did.  I am extremely excited to share with all of you my new book – “Secret Toolbox of Successful Bloggers.”

    A Labor of Love

    This project has been a labor of love for me.  It started out as a series for this blog – I would post one article a week until I felt the topic had been covered.  I found that as I wrote, new topics kept popping up in my mind, and eventually this small blog series grew into a full book.  The blog series was large enough to last a whole year, so I definitely thought that it could be a full book.  Ideas on how to become a successful blogger were so frequent that, I will admit, I feel like there is so much I left out of this book.  I had a lot more to say than I originally thought! I am so excited to be sharing this book with all of you.  I am so passionate about this topic, and I feel like there are so many people who can benefit from this book.  I know that when I started this blog, I did not know what I was doing, and I would have loved a “helping hand” to tell me what to do and what not to do.  If this book can act as that helping hand for any of my readers, I feel it is a success.

    Rules of Thumb

    If you are interested in becoming a blogger, keep in mind as you read this book that it offers suggestions only.  Blogging is so new to the world that while there are “rules of thumb” about what to do and what not to do, a map of steps (“first, do x, then do y”) is not going to work for every single blogger.  This book is meant to encourage new bloggers to put their content out there in the world, to be brave and create a community like the one I have here at SQL Authority.  I have gained so much from this community, I wanted to give something back, and this book is just one small part. I hope that everyone who reads this book finds at least one helpful tip, and that everyone can experience the joy of blogging.  That is the whole reason I wrote this book, and what I hope everyone takes away from it.

    Where Can You Get It?

    You can get the book from the following URL: Kindle eBook | Print Book

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL

    Read the article

  • More Free Apps Bound for the Marketplace

    - by Scott Kuhl
    Microsoft has announced they are raising the limit on free applications a developer can submit from 5 to 100.  But what does that really mean? First, let's look at the reason for the limitation.  The iTunes Store and the Android Market both have a lot more applications available than the Windows Phone Marketplace, but that says nothing about the quality of those applications.  I attended a couple of pre-launch events, and Microsoft representatives were clearly told to send a message: we don't want a bunch of junky applications that do nothing but spam the marketplace.  That was the reason for the 5 free application limit. Okay, so what has the result been?  Well, there are still fart apps, but there is no sign of a developer flooding the market with 1500 wallpaper applications or 1000 copies of the same application all pointed at different RSS feeds.  On the other hand, there are developers who want to release real free apps but are constrained by the 5 app limit. So why did Microsoft change its mind?  Is it to get the count of applications up, or is it to make developers happy?  The Windows Phone Marketplace is growing fast, but it's a long way behind the other guys.  I don't think Microsoft wants 100,000 apps to show up in the next 3 months if they are loaded with copycat apps.  Those numbers will get picked apart quickly, and the press will start complaining about the same problems the Android Market has.  I do think the bump was at developer request.  Microsoft is usually good about listening to developer feedback, but has been pretty slow about it at times.  And from a financial perspective, there will be more apps that Microsoft has to review but will see no profit on.  At least not until they bake in an advertising model connected to Bing. Ultimately, what does this mean for the future? Well, there are developers out there looking to release more than 5 simple free apps, so I think we will see more hobby apps.  And there are developers out there trying to make money from advertising instead of sales, so I think we will see more of those also.  But the category that I think will grow the fastest is free versions of paid applications that are the same as the trial version of the application.  While technically that makes no sense, it's purely a marketing move.  Free apps get downloaded a lot more than paid apps, even ones with a trial mode.  It always surprises me how little consumers are willing to spend on mobile apps.  How many reviews of applications have you seen that say something like “a bit pricey at $1.99”?  Really?  Have you looked at how much you spend on your phone and plan?  I always thought the trial mode baked into the Windows Marketplace was a good idea, so I'm not sure how the more open free market will play out. In the long run, though, I won't be surprised to see a Bing mobile ad model show up so Microsoft can capitalize on the more open and free Windows Marketplace. Bonus: The Oatmeal on How I Feel About Buying Apps
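    (As a side note on the trial mode mentioned above: on Windows Phone 7, detecting a trial install is a single call on LicenseInformation. The sketch below is a rough illustration only; the message box and any feature gating are placeholders, not a prescribed pattern.)

        using System.Windows;
        using Microsoft.Phone.Marketplace;

        public static class TrialHelper
        {
            private static readonly LicenseInformation License = new LicenseInformation();

            // True when the user installed the app in trial mode rather than buying it.
            public static bool IsTrial()
            {
                return License.IsTrial();
            }

            public static void GateFullFeature()
            {
                if (IsTrial())
                {
                    // Placeholder: prompt the user to upgrade, or disable the feature.
                    MessageBox.Show("This feature is available in the full version.");
                }
            }
        }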

    Read the article

  • What is the difference between Workcenters, Dashboards, and the Interaction Hub?

    - by Matthew Haavisto
    Oracle Open World has just concluded.  Over the course of the conference, we presented several sessions covering different aspects of the PeopleSoft user experience, including Workcenters, Dashboards, and the PeopleSoft Interaction Hub (formerly known as the PeopleSoft Applications Portal).  Although we've produced collateral on these features and covered them in sessions, it became apparent at the conference that customers still have many questions about these products, including how they are licensed, how they are installed, what their various purposes are, and how they can be used together synergistically.

    Let's Start with Licensing and Installation

    As you may know, we've extended the restricted use license (RUL) for the Interaction Hub.  This grants customers with PeopleTools 8.52 licenses the right to install the Interaction Hub for free for use as specified in the Tools license notes.  Note that this means customers receive a restricted use license for the Interaction Hub that doesn't cost them an additional license fee, but it is a separate product, not part of PeopleTools or PeopleSoft applications, and is a separate installation.  This means customers must provide the infrastructure to install and run the Hub, just like any other application.  The benefits of using the Hub to unify your PeopleSoft user experience can be great.  PeopleSoft applications have not yet delivered instances of the Hub with their products, though they may in the future. Workcenters and Dashboards, on the other hand, are frameworks provided by PeopleTools.  No other license is required, and no additional installation of a separate product is needed (apart from PeopleTools and PeopleSoft applications).  PeopleSoft applications are delivering instances of the workcenters and dashboards with their products.  Some are available now, and more are coming in future releases.  These delivered workcenter and dashboard instances require no additional licenses and no additional installations beyond Tools and the applications that provide them.  In addition, the workcenter and dashboard frameworks provided by PeopleTools can be used by customers to build their own workcenters and dashboards, and it's quite easy and simple to do so.

    What are Their Differences?  What Purposes do they Serve?

    Workcenters, Dashboards and the Interaction Hub appear somewhat similar.  They all contain pagelets, and have some visual characteristics in common.  However, their strengths and purposes are very different, and they were designed to provide different benefits to your PeopleSoft ecosystem. Workcenters and Dashboards have the following characteristics:

    - Designed for specific roles
    - Focus on the daily tasks of those roles
    - Help to streamline the work performed most often
    - Personal view of my work world
    - Make navigation and search easier and quicker, particularly for transactions and decision support
    - Reports and data needed for day-to-day work
    - Personalizable, but minimal
    - Delivered by PS Apps, but can be altered by customers for their requirements
    - Customers can create their own
    - Workcenters can be used for guided processes

    The Interaction Hub is designed to aggregate content from multiple applications and is used to unify the user experience of those applications.  It offers a rich, web site-based user experience, and is often used to provide access to infrequently performed activities like benefits enrollment, payroll inquiries, life event changes, onboarding, and so on. Its characteristics are:

    - Full-featured and robust
    - Centrally administered
    - Pushed to a large audience
    - Broad info like Company News
    - Infrequent activities like benefits, not day-to-day tasks
    - Self-service, access to employer info
    - Central launch point for other activities; can navigate to workcenters and dashboards
    - Deployed by customers or consultants, instances not delivered by PeopleSoft (at this time)
    - Content management
    - Unified PS application navigation

    Although these products are quite different and serve different purposes in your PeopleSoft environment, they can be used together to provide a richer, more efficient and engaging user experience for all your user communities.

    Read the article

  • What differentiates a computer scientist/software engineer from regular people who learn programming languages and APIs?

    - by Amumu
    In university, we learn and reinvent the wheel a lot to truly learn the programming concepts. For example, we may learn assembly language to understand what happens inside the box and how the system operates when we execute our code. This helps us understand higher-level concepts more deeply. For example, memory management like in C is just an abstraction of manually managed memory contents and addresses. The problem is that when we go to work, productivity usually matters more. I could program my own containers, or string class, or date/time handling (using POSIX with C system calls) to do the job, but that would take much longer than using the existing STL or Boost libraries, which abstract all of those things and are very easy to use. This leads to an issue: a regular person who learns only one programming language and its related APIs doesn't need to get through all the low-level, under-the-hood stuff. These people may eventually compete with mainstream graduates from computer science or software engineering and call themselves programmers. At first, I didn't think it was valid to call them programmers. I used to think a real programmer needs to understand the computer deeply (though not at the electronic level), but then I changed my mind. After all, they get the job done and satisfy all the test criteria (logic, performance, security...), and in a business environment, who cares whether you're an expert who understands how the computer works or not? You may fall behind the "amateurs" if you spend too much time learning how things work inside. It is totally valid for those people to call themselves programmers. This leaves me confused. So, after all, should programming be considered a universal skill? Do programming languages and concepts matter, or do the problems we solve matter? For example, in the many debates of C/C++ vs. Java and other high-level languages, one of the main arguments is that C/C++ offers performance as well as access to low-level facilities. Another reason (in my opinion) is that coding in C/C++ seems complex, so people feel good about it (not trolling anyone, just my observation and my experience as well; try googling "C hacker syndrome"). Java, on the other hand, was made to simplify programming tasks and help developers concentrate on solving their problems. Following Java's rationale, if programming languages keep evolving, one day everyone will be able to map their logic directly into natural language. Everyone will be able to program. On that day, maybe the real programmers will be mathematicians, who can express the most complex logic (including business logic and academic logic) without worrying about installing and configuring compilers and IDEs. What's our job as computer scientists/software engineers? To solve computer-specific problems, or to solve problems in general? For example, take a look at this exam: http://cm.baylor.edu/ICPCWiki/attach/Problem%20Resources/2010WorldFinalProblemSet.pdf . The problems require only basic knowledge of the programming language, but focus more on problem solving with the language. In sum, what differentiates a computer scientist/software engineer from regular people who learn a programming language and APIs? A mathematician can be considered a programmer if he is good enough to use a programming language to implement his formulas. Can we programmers do the reverse? Probably not, for most of us, since we specialize in computers, not math. An electronics engineer who learns how to use C to program his devices can be considered a programmer.
If programming languages keep being simplified, might software engineers, who implement business logic and create software, one day become obsolete? (Not computer scientists, though, since many CS topics are scientific, and science won't change, but technology will.)

    Read the article

  • Synchronous Actions

    - by Dan Krasinski-Oracle
    Since the introduction of SMF, svcadm(1M) has had the ability to enable or disable a service instance and wait for that service instance to reach a final state.  With Oracle Solaris 11.2, we’ve expanded the set of administrative actions which can be invoked synchronously. Now all subcommands of svcadm(1M) have synchronous behavior. Let’s take a look at the new usage:

        Usage: svcadm [-v] [cmd [args ... ]]

        svcadm enable [-rt] [-s [-T timeout]] <service> ...         enable and online service(s)
        svcadm disable [-t] [-s [-T timeout]] <service> ...         disable and offline service(s)
        svcadm restart [-s [-T timeout]] <service> ...              restart specified service(s)
        svcadm refresh [-s [-T timeout]] <service> ...              re-read service configuration
        svcadm mark [-It] [-s [-T timeout]] <state> <service> ...   set maintenance state
        svcadm clear [-s [-T timeout]] <service> ...                clear maintenance state
        svcadm milestone [-d] [-s [-T timeout]] <milestone>         advance to a service milestone
        svcadm delegate [-s] <restarter> <svc> ...                  delegate service to a restarter

    As you can see, each subcommand now has a ‘-s’ flag. That flag tells svcadm(1M) to wait for the subcommand to complete before returning. For enables, that means waiting until the instance is either ‘online’ or in the ‘maintenance’ state. For disable, the instance must reach the ‘disabled’ state. Other subcommands complete when:

    restart - A restart is considered complete once the instance has gone offline after running the ‘stop’ method, and then has either returned to the ‘online’ state or has entered the ‘maintenance’ state.

    refresh - If an instance is in the ‘online’ state, a refresh is considered complete once the ‘refresh’ method for the instance has finished.

    mark maintenance - Marking an instance for maintenance completes when the instance has reached the ‘maintenance’ state.

    mark degraded - Marking an instance as degraded completes when the instance has reached the ‘degraded’ state from the ‘online’ state.

    milestone - A milestone transition can occur in one of two directions. Either the transition moves from a lower milestone to a higher one, or from a higher one to a lower one. When moving to a higher milestone, the transition is considered complete when the instance representing that milestone reaches the ‘online’ state. The transition to a lower milestone, on the other hand, completes only when all instances which are part of higher milestones have reached the ‘disabled’ state.

    That’s not the whole story. svcadm(1M) will also try to determine if the actions initiated by a particular subcommand cannot complete. Trying to enable an instance which does not have its dependencies satisfied, for example, will cause svcadm(1M) to terminate before that instance reaches the ‘online’ state. You’ll also notice the optional ‘-T’ flag which can be used in conjunction with the ‘-s’ flag. This flag sets a timeout, in seconds, after which svcadm gives up on waiting for the subcommand to complete and terminates. This is useful in many cases, but in particular when the start method for an instance has an infinite timeout but might get stuck waiting for some resource that may never become available. For the C-oriented, each of these administrative actions has a corresponding function in libscf(3SCF), with names like smf_enable_instance_synchronous(3SCF) and smf_restart_instance_synchronous(3SCF).  Take a look at smf_enable_instance_synchronous(3SCF) for details.

    Read the article

  • Method flags as arguments or as member variables?

    - by Martin
    I think the title "Method flags as arguments or as member variables?" may be suboptimal, but as I'm missing any better terminology at the moment, here goes: I'm currently trying to get my head around the problem of whether flags for a given class's (private) method should be passed as function arguments or via a member variable, and/or whether there is some pattern or name that covers this aspect, and/or whether this hints at some other design problem. By example (the language could be C++, Java, C#; it doesn't really matter IMHO):

        class Thingamajig {
            private ResultType DoInternalStuff(FlagType calcSelect) {
                ResultType res;
                for (... some loop condition ...) {
                    ...
                    if (calcSelect == typeA) {
                        ...
                    } else if (calcSelect == typeX) {
                        ...
                    } else if ...
                }
                ...
                return res;
            }

            private void InternalStuffInvoker(FlagType calcSelect) {
                ...
                DoInternalStuff(calcSelect);
                ...
            }

            public void DoThisStuff() {
                ... some code ...
                InternalStuffInvoker(typeA);
                ... some more code ...
            }

            public ResultType DoThatStuff() {
                ... some code ...
                ResultType x = DoInternalStuff(typeX);
                ... some more code ...
                ... further process x ...
                return x;
            }
        }

    What we see above is that the method InternalStuffInvoker takes an argument that is not used inside this function at all but is only forwarded to the other private method DoInternalStuff. (DoInternalStuff is also used privately at other places in this class, e.g. in the DoThatStuff (public) method.) An alternative solution would be to add a member variable that carries this information:

        class Thingamajig {
            private ResultType DoInternalStuff() {
                ResultType res;
                for (... some loop condition ...) {
                    ...
                    if (m_calcSelect == typeA) {
                        ...
                    }
                    ...
                }
                ...
                return res;
            }

            private void InternalStuffInvoker() {
                ...
                DoInternalStuff();
                ...
            }

            public void DoThisStuff() {
                ... some code ...
                m_calcSelect = typeA;
                InternalStuffInvoker();
                ... some more code ...
            }

            public ResultType DoThatStuff() {
                ... some code ...
                m_calcSelect = typeX;
                ResultType x = DoInternalStuff();
                ... some more code ...
                ... further process x ...
                return x;
            }
        }

    Especially for deep call chains where the selector flag for the inner method is chosen outside, using a member variable can make the intermediate functions cleaner, as they don't need to carry a pass-through parameter. On the other hand, this member variable isn't really representing any object state (as it's neither set nor available outside), but is really a hidden additional argument for the "inner" private method. What are the pros and cons of each approach?

    Read the article

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn and so little time. Since we have little time, we need to be selective about what we learn. I believe I know quite a lot about SQL, but I still do not know what is around SQL. I have started to learn about NewSQL recently. If you wonder what NewSQL is, I encourage all of you to read my blog post about NewSQL over here: Big Data – Buzz Words: What is NewSQL – Day 10 of 21. NewSQL databases are quickly becoming popular – providing the scale of NoSQL with SQL features and transactions. As a part of learning about NewSQL databases, I have recently started to learn about ClustrixDB. ClustrixDB has been the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), previously available only in warehousing databases, to the transactional database. The reason I am more intrigued about learning ClustrixDB is their recent announcement on Oct 31. ClustrixDB was only available as an appliance, but with their software release on Oct 31, everyone can use it. It is now available as forever free for up to 12 cores with community support, and there is a 45-day trial for unlimited cluster sizes. With this forever-free offering, I am indeed interested in ClustrixDB now. I know that a few of the leading eCommerce sites in the world use it for their transactional database. Here are a few of the details I have quickly noted about ClustrixDB. ClustrixDB allows users to:

    - Scale by simply adding nodes to the cluster with a single command
    - Run billions of transactions a day
    - Run fast real-time analytics
    - Achieve high availability with recovery from node failure
    - Let the database manage itself
    - Easily migrate from MySQL, as it is nearly plug-and-play compatible – use MySQL drivers, tools and replication

    While I was going through the documentation, I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution. Since ClustrixDB sounds like a very promising solution, I decided to dig a bit deeper to understand who the current customers of Clustrix are, as they have existed in the industry for quite a few years. Their client list is indeed very interesting, and here is my quick research about them:

    - Twoo.com – Europe’s largest social discovery (dating) site runs 4.4 billion transactions a day, with table sizes over a terabyte, on a 168-core cluster.
    - EngageBDR – Top 3 in the online advertising category, uses ClustrixDB to serve 6.9 billion ads a day through a real-time bidding platform. Their reports went from 4 hours to 15 seconds.
    - NoMoreRack – Top 2 fastest growing e-commerce company in the US, used ClustrixDB for high availability and fast growth through the Amazon cloud.
    - MakeMyTrip – India’s leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore.

    Many enterprises such as AOL, CSC, Rakuten and Symantec use ClustrixDB when their applications need scale. I must admit that I am impressed with the information I have learned so far, and now is the time to get some hands-on experience with their product. I want to learn this technology so that in the future, when the conversation is about NewSQL, I know what I am talking about.
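    (As a quick, illustrative taste of that MySQL compatibility claim, the hedged sketch below points the standard MySQL Connector/Net at a cluster node from C#. The host name, port, credentials and database are placeholders, not details of any real ClustrixDB deployment.)

        using System;
        using MySql.Data.MySqlClient;   // standard MySQL Connector/Net

        public static class ClustrixSmokeTest
        {
            public static void Main()
            {
                // Placeholder connection details; ClustrixDB speaks the MySQL wire
                // protocol, so the usual MySQL connection string format applies.
                const string connectionString =
                    "Server=clustrix-node1;Port=3306;Database=demo;Uid=app_user;Pwd=app_password;";

                using (var connection = new MySqlConnection(connectionString))
                {
                    connection.Open();

                    // A trivial round trip to confirm connectivity end to end.
                    using (var command = new MySqlCommand("SELECT NOW()", connection))
                    {
                        Console.WriteLine("Cluster time: " + command.ExecuteScalar());
                    }
                }
            }
        }

    If the same code runs unchanged against a plain MySQL server and a ClustrixDB cluster, that is exactly the plug-and-play compatibility described above.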
Read more about why Clustrix believes ClustrixDB might be the right database for you. Download ClustrixDB with me today and install it on your machine, so that when we discuss the technical aspects of it in the future, we are all on the same page. The software can be downloaded here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Big Data, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Clustrix

    Read the article

< Previous Page | 139 140 141 142 143 144 145 146 147 148 149 150  | Next Page >