Search Results

Search found 2417 results on 97 pages for 'states'.


  • Off center projection

    - by N0xus
    I'm trying to implement the code that was freely given by a very kind developer at the following link: http://forum.unity3d.com/threads/142383-Code-sample-Off-Center-Projection-Code-for-VR-CAVE-or-just-for-fun Right now, all I'm trying to do is bring it in on one camera, but I have a few issues. My class looks as follows:

        using UnityEngine;
        using System.Collections;

        public class PerspectiveOffCenter : MonoBehaviour
        {
            // Use this for initialization
            void Start () { }

            // Update is called once per frame
            void Update () { }

            public static Matrix4x4 GeneralizedPerspectiveProjection(Vector3 pa, Vector3 pb, Vector3 pc, Vector3 pe, float near, float far)
            {
                Vector3 va, vb, vc;
                Vector3 vr, vu, vn;
                float left, right, bottom, top, eyedistance;
                Matrix4x4 transformMatrix;
                Matrix4x4 projectionM;
                Matrix4x4 eyeTranslateM;
                Matrix4x4 finalProjection;

                // Calculate the orthonormal basis for the screen (the screen coordinate system)
                vr = pb - pa;
                vr.Normalize();
                vu = pc - pa;
                vu.Normalize();
                vn = Vector3.Cross(vr, vu);
                vn.Normalize();

                // Calculate the vectors from the eye (pe) to the screen corners (pa, pb, pc)
                va = pa - pe;
                vb = pb - pe;
                vc = pc - pe;

                // Get the distance from the eye to the screen plane
                eyedistance = -(Vector3.Dot(va, vn));

                // Get the variables for the off-center projection
                left = (Vector3.Dot(vr, va) * near) / eyedistance;
                right = (Vector3.Dot(vr, vb) * near) / eyedistance;
                bottom = (Vector3.Dot(vu, va) * near) / eyedistance;
                top = (Vector3.Dot(vu, vc) * near) / eyedistance;

                // Get the projection
                projectionM = PerspectiveOffCenter(left, right, bottom, top, near, far);

                // Fill in the transform matrix
                transformMatrix = new Matrix4x4();
                transformMatrix[0, 0] = vr.x; transformMatrix[0, 1] = vr.y; transformMatrix[0, 2] = vr.z; transformMatrix[0, 3] = 0;
                transformMatrix[1, 0] = vu.x; transformMatrix[1, 1] = vu.y; transformMatrix[1, 2] = vu.z; transformMatrix[1, 3] = 0;
                transformMatrix[2, 0] = vn.x; transformMatrix[2, 1] = vn.y; transformMatrix[2, 2] = vn.z; transformMatrix[2, 3] = 0;
                transformMatrix[3, 0] = 0; transformMatrix[3, 1] = 0; transformMatrix[3, 2] = 0; transformMatrix[3, 3] = 1;

                // Now for the eye translation
                eyeTranslateM = new Matrix4x4();
                eyeTranslateM[0, 0] = 1; eyeTranslateM[0, 1] = 0; eyeTranslateM[0, 2] = 0; eyeTranslateM[0, 3] = -pe.x;
                eyeTranslateM[1, 0] = 0; eyeTranslateM[1, 1] = 1; eyeTranslateM[1, 2] = 0; eyeTranslateM[1, 3] = -pe.y;
                eyeTranslateM[2, 0] = 0; eyeTranslateM[2, 1] = 0; eyeTranslateM[2, 2] = 1; eyeTranslateM[2, 3] = -pe.z;
                eyeTranslateM[3, 0] = 0; eyeTranslateM[3, 1] = 0; eyeTranslateM[3, 2] = 0; eyeTranslateM[3, 3] = 1f;

                // Multiply all together
                finalProjection = Matrix4x4.identity * projectionM * transformMatrix * eyeTranslateM;

                // Finally return
                return finalProjection;
            }

            public void FixedUpdate ()
            {
                Camera cam = camera;
                // Calculate the projection:
                // (BottomLeftCorner, BottomRightCorner, TopLeftCorner, trackerPosition, cam.nearClipPlane, cam.farClipPlane)
                Matrix4x4 genProjection = GeneralizedPerspectiveProjection(
                    new Vector3(0, 1, 0), new Vector3(1, 1, 0), new Vector3(0, 0, 0),
                    new Vector3(0, 0, 0), cam.nearClipPlane, cam.farClipPlane);
                cam.projectionMatrix = genProjection;
            }
        }

    My error lies in: projectionM = PerspectiveOffCenter(left, right, bottom, top, near, far); The debugger states: Expression denotes a `type', where a 'variable', 'value' or 'method group' was expected. Thus, I changed the line to read: projectionM = new PerspectiveOffCenter(left, right, bottom, top, near, far); But then the error changed to: The type 'PerspectiveOffCenter' does not contain a constructor that takes '6' arguments. For reasons that are obvious. So, finally, I changed the line to read: projectionM = new GeneralizedPerspectiveProjection(left, right, bottom, top, near, far); And the error I get is: is a 'method' but a 'type' was expected. With this last error, I'm not sure what I should do or what I'm missing. Can anyone see what it is that I'm missing to fix this error?
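    For context on the compiler errors: inside a class named PerspectiveOffCenter, the identifier PerspectiveOffCenter always refers to the type (C# reserves a member of that name for constructors), so the frustum-building helper the original forum sample relies on is missing here. Below is a minimal sketch of that helper, adapted from the off-center projection example in Unity's Camera.projectionMatrix documentation (assuming that is the helper the sample used); it is renamed BuildPerspectiveOffCenter to avoid the name clash, so the call site in GeneralizedPerspectiveProjection must be updated to match:

        // Sketch of the missing frustum helper (adapted from Unity's
        // Camera.projectionMatrix docs). Renamed, because a C# method may
        // not share its enclosing class's name.
        static Matrix4x4 BuildPerspectiveOffCenter(float left, float right, float bottom, float top, float near, float far)
        {
            float x = 2.0f * near / (right - left);
            float y = 2.0f * near / (top - bottom);
            float a = (right + left) / (right - left);
            float b = (top + bottom) / (top - bottom);
            float c = -(far + near) / (far - near);
            float d = -(2.0f * far * near) / (far - near);
            float e = -1.0f;

            Matrix4x4 m = new Matrix4x4();
            m[0, 0] = x; m[0, 1] = 0; m[0, 2] = a; m[0, 3] = 0;
            m[1, 0] = 0; m[1, 1] = y; m[1, 2] = b; m[1, 3] = 0;
            m[2, 0] = 0; m[2, 1] = 0; m[2, 2] = c; m[2, 3] = d;
            m[3, 0] = 0; m[3, 1] = 0; m[3, 2] = e; m[3, 3] = 0;
            return m;
        }

    With the helper added, projectionM = BuildPerspectiveOffCenter(left, right, bottom, top, near, far); compiles; alternatively, renaming the enclosing class would let the helper keep its original name.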

    Read the article

  • 5 Things I Learned About the IT Labor Shortage

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein | Sr. Principal Product Marketing Director | Oracle Midsize Programs | @JimLein

    A gentle autumn breeze is nudging the last golden leaves off the aspen trees. It’s time to wrap up the series that I started back in April, “The Growing IT Labor Shortage: Are You Feeling It?” Even in a time of relatively high unemployment, labor shortages exist depending on many factors, including location, industry, IT requirements, and company size. According to Manpower Group’s 2013 Talent Shortage Survey, 35% of hiring managers globally are having difficulty filling jobs. Their top three challenges in filling jobs are:

    1. Lack of technical competencies (hard skills)
    2. Lack of available applicants
    3. Lack of experience

    The same report listed Technicians as the most difficult position to fill in the United States. For most companies, Human Capital and Talent Management have never been more strategic, and they are striving for ways to streamline processes, reduce turnover, and lower costs (see this Oracle whitepaper, “Simplify Workforce Management and Increase Global Agility”). Everyone I spoke to—partner, customer, and Oracle experts—agreed that it can be extremely challenging to hire and retain IT talent in today’s labor market. And they generally agreed on the causes:

    a. IT is so pervasive that there are myriad moving parts requiring support and expertise,
    b. thus, it’s hard for university graduates to step in and contribute immediately without experience and specialization,
    c. big IT companies generally aren’t the talent incubators that they were in the freewheeling 90’s, due to bottom line pressures that require hiring talent that can hit the ground running, and
    d. it’s often too expensive for resource-strapped midsize companies to invest the time and money required to get graduates up to speed.

    Here are my top lessons learned from my conversations with the experts.

    1. A Better Title Would Have Been, “The Challenges of Finding and Retaining IT Talent That Matches Your Requirements”
    There are more applicants than jobs, but it’s getting tougher and tougher to find individuals that perfectly fit each and every role. Top performing companies are increasingly looking to hire the “almost ready”, striving to keep their existing talent more engaged, and leveraging their employees’ social and professional networks to quickly narrow down candidate searches (here’s another whitepaper, “A Strategic Approach to Talent Management”).

    2. Size Matters—But So Does Location
    Midsize companies must strive to build cultures that compete favorably with what large enterprises can offer, especially when they aren’t within commuting distance of IT talent strongholds. They can’t always match the compensation and benefits offered by large enterprises, so it’s paramount to offer candidates high quality of life and opportunities to build their resumes in alignment with their long term career aspirations.

    3. Get By With a Little Help From Your Friends
    It doesn’t always make sense to invest time and money in training an employee on a task they will not perform frequently, or to get into a bidding war for talent with skills that are rare and in high demand. Many midsize companies are finding that it makes good economic sense to contract with partners for remote support rather than trying to divvy up each and every role amongst their lean staff. Internal staff can be assigned to roles that will have the highest positive impact on achieving organizational goals.

    4. It’s Actually Both “What You Know” AND “Who You Know”
    If I were hiring someone today I would absolutely leverage the social and professional networks of my co-workers. Period. Most research shows that hiring in this manner is less expensive and time consuming AND produces better results. There is also some evidence that suggests new hires from employees’ networks have higher job performance and retention rates.

    5. I Have New Respect for Recruiters and Hiring Managers
    My hat’s off to them—it’s not easy hiring and retaining top talent with today’s challenges.

    Check out the infographic, “A New Day: Taking HR from Chaos to Control”, on Oracle’s Human Capital Management solutions home page. You can also explore all of Oracle’s HCM solutions from that page based on your role. You can read all the posts in this series by clicking on the links in the right sidebar. Stay tuned…we’ll continue to post thought leadership on HCM and Talent Management topics.

    Read the article

  • Productivity vs Security [closed]

    - by nerijus
    I really do not know if this is the right place to ask such a question, but it is about programming in a different light. So, I'm currently contracting with a company which pretends to be a big corporation. Everyone is so important that all small issues like developers are ignored. Let me give you a sample: the company VPN is configured so that if you have VPN then HTTP traffic is banned. Bearing this in mind, can you imagine my workflow:

    Morning. OK, time to get the latest source. Oops, no VPN. Let’s connect. Click-click. 3 sec. wait time. OK, getting source. Do I have emails? Oops. VPN is on, can’t check my emails. Need to wait for the source to come down. Finally, here it is! OK, click-click, VPN is gone. What is in my email? Someone reported a bug. Good, let’s track it down. It is in TFS already. Oh, damn, I need VPN. Click-click. OK, there is the description. Yeah, I have seen this issue on stackoverflow.com. Let’s go there. Oops, no internet. Click-click. No internet. What? ipconfig… the DHCP server kicked me out. Damn. Renew IP. 1..2..3. OK, internet is back. Google: site:stackoverflow.com. 3 min. I have a solution. Great, I love stackoverflow.com. I don’t want to remember the days when there was no stackoverflow.com. OK. Copy-paste this into the studio. Damn, the studio is stalled, can’t reach files on TFS. Click-click. VPN is back. Get the source out, paste my code. Grand. Let’s see what the other comments about the issue on stackoverflow.com say. Hmm.. there is a link. Click. Dammit! No internet. Click-click. No internet. DHCP kicked me out. Dammit.

    Now it is even worse: this happens 3-4 times a day. After a certain number of VPN connections opened/closed, my internet goes down solid. The only way to get internet back is to reboot. All my browser tabs/SQL windows/studio will be gone. This happened just now while I was typing this. Back to the issue I am solving right now: I am getting frustrated - I do not care about a better solution for this issue. Let’s do it somehow and forget. This click-click barrier between the internet and TFS kills me… Sounds familiar? You could say there are VPN settings to change. No! This is a company laptop; I am not allowed to make changes. I am very, very lucky to have admin privileges on my machine. Most developers don’t. So I just learned to live with this frustration. It takes away 40-60 minutes daily. I tried to email company support and the admins. They are too important and too busy with something, so they just ignored my little man’s problem. Politely ignored. The question is: Is this normal in the corporate world? (I have been in the States, Canada, and Germany. I have never seen this.)

    Read the article

  • A little primer on using TFS with a small team

    - by johndoucette
    The scenario: a small team of 3 developers, mostly in maintenance mode, with traditional ASP.NET, classic ASP, .NET integration services and utilities for the company’s third party packages, and a bunch of Java-based ColdFusion web applications, all under Visual SourceSafe (VSS). They are about to embark on a huge SharePoint 2010 new construction project and wanted to use Subversion instead of VSS. TFS was a foreign word and smelled of “high cost” and of an “over complicated process”. Since they had no preconditions about the old TFS versions (‘05 & ‘08), it was fun explaining how simple it was to install a TFS server and get the ball rolling, with or without all the heavy stuff one sometimes associates with such a huge and powerful application lifecycle management product. So, how does a small team begin using TFS?

    1. Start by using source control and migrate current VSS source trees into TFS. You can take the latest version or migrate the entire version history. It’s up to you whether you want a clean start or need quick access to all the version notes and history of the bits.
    2. Since most shops are mainly in maintenance mode with existing applications, begin using bug workitems for everything. When you receive an issue/bug from your current tracking system, manually enter the workitem in TFS right through Visual Studio. You can automate the integration with the current tracking system later or replace it entirely. Believe me, this thing is powerful and can handle even the largest of help desks.
    3. With new construction, begin work with requirement and task workitems and follow the traditional sprint-based development lifecycle. Obviously, some minor training will be needed, but don’t fear, this is very intuitive and MSDN has a ton of lesson-based labs and videos.
    4. For the Java developers, use the new Team Explorer Everywhere 2010 plugin (formerly known as Teamprise). There is a seamless interface in Eclipse, but also a good command-line utility for other environments such as Dreamweaver.
    5. Wait to fully integrate the whole workitem/project management/testing process until your team is familiar with the integrated workitems for bugs and code. After a while, you will see the team wanting more transparency into the work they are all doing and, naturally, everyone will want workitems to help them organize the chaos!
    6. Management will be limited in the value of the reports until you have a fully blown implementation of project planning, construction, build, deployment and testing. However, there are some basic “bug rate” reports and current backlog listings that can provide good information.

    Some notable explanations of TFS:

    Work Item Tracking and Project Management - A workitem represents the unit of work within the system, which enables tracking of all activities produced by a user, whether it is a developer, business user, project manager or tester. The properties of a workitem, such as linked changesets (checked-in code), who updated the data and when, and the states and reasons for change, are all transitioned to a data warehouse within TFS for reporting purposes. A workitem can be defined as a "bug", "requirement", "test case", or a "change request". Workitems drive the work effort by the individual assigned to them and also play a key role in defining what needs to be done. Workitems are the things the team needs to do to accomplish a goal.

    Test Case Management - Starting with a workitem known as a "test case", a tester (or developer) can now author and manage test cases within a formal test plan subsystem. Although TFS supports the test case workitem type, there is a new product known as VS Test Professional 2010 which allows a tester to facilitate manual tests, including fast forwarding steps in the process to arrive at the assertion point quickly. This repeatable process provides quick regression tests and can be conducted by the business user to ensure completeness during UAT. In addition, developers can no longer respond to a bug with the line "cannot reproduce". With every test run, attachments are available, including the recorded session, captured environment configurations and settings, screen shots, IntelliTrace (debugging history), and, in some cases if the lab manager is being used, a snapshot of the tested environment.

    Version Control - A modern system allowing shared check-in/check-out, excellent merge conflict resolution, shelvesets (personal check-ins), branching/merging visualization, public workspaces, gated check-ins, security hierarchy capabilities, and changeset/workitem tracking. Knowing what was done with the code by any developer has become much easier to picture, which makes issues easier to resolve.

    Team Build - Automate the compilation process, whether you need it to run whenever a developer checks in code, periodically (such as nightly builds for testers in the morning), or as manual builds to be deployed into production. Each build can run through pre-determined tests and perform code analysis to see if the developer conforms to the team standards, and reject the build if either fails.

    Project Portal & Reporting - Provide management with a dashboard with insight into the project(s): "where are we" in each step of the way, including past iterations and the current burndown rate. Enabling this feature is easy, as it seamlessly interfaces with existing SharePoint implementations.

    Read the article

  • USB external drive is not recognized by any OS, how to troubleshoot in Ubuntu?

    - by Breno
    First of all, I would like to inform you that I saw a question similar to mine, but the error was different, so here's my problem. I have a Samsung S2 500GB external HDD, and one day it just stopped working. I tried it on other systems (Windows and Mac), but it is not recognized there either. In the Windows device manager, when I insert the USB drive, it states that the device in question is not working properly. In the logs of my Ubuntu 12.04 I see the following messages when I insert my USB device:

    [ 2967.560216] usb 7-2: new full-speed USB device number 2 using uhci_hcd
    [ 2967.680182] usb 7-2: device descriptor read/64, error -71
    [ 2967.904176] usb 7-2: device descriptor read/64, error -71
    [ 2968.120227] usb 7-2: new full-speed USB device number 3 using uhci_hcd
    [ 2968.240207] usb 7-2: device descriptor read/64, error -71
    [ 2968.464063] usb 7-2: device descriptor read/64, error -71
    [ 2968.680087] usb 7-2: new full-speed USB device number 4 using uhci_hcd
    [ 2969.092085] usb 7-2: device not accepting address 4, error -71
    [ 2969.208155] usb 7-2: new full-speed USB device number 5 using uhci_hcd
    [ 2969.624076] usb 7-2: device not accepting address 5, error -71
    [ 2969.624118] hub 7-0:1.0: unable to enumerate USB device on port 2
    [ 4520.240340] usb 7-1: new full-speed USB device number 6 using uhci_hcd
    [ 4520.364079] usb 7-1: device descriptor read/64, error -71
    [ 4520.588109] usb 7-1: device descriptor read/64, error -71
    [ 4520.804140] usb 7-1: new full-speed USB device number 7 using uhci_hcd
    [ 4520.924136] usb 7-1: device descriptor read/64, error -71
    [ 4521.148083] usb 7-1: device descriptor read/64, error -71
    [ 4521.364105] usb 7-1: new full-speed USB device number 8 using uhci_hcd
    [ 4521.776237] usb 7-1: device not accepting address 8, error -71
    [ 4521.888206] usb 7-1: new full-speed USB device number 9 using uhci_hcd
    [ 4522.296102] usb 7-1: device not accepting address 9, error -71
    [ 4522.296150] hub 7-0:1.0: unable to enumerate USB device on port 1
    [ 4749.036104] usb 7-2: new full-speed USB device number 10 using uhci_hcd
    [ 4749.156209] usb 7-2: device descriptor read/64, error -71
    [ 4749.380215] usb 7-2: device descriptor read/64, error -71
    [ 4749.596206] usb 7-2: new full-speed USB device number 11 using uhci_hcd
    [ 4749.716409] usb 7-2: device descriptor read/64, error -71
    [ 4749.940110] usb 7-2: device descriptor read/64, error -71
    [ 4750.156257] usb 7-2: new full-speed USB device number 12 using uhci_hcd
    [ 4750.572150] usb 7-2: device not accepting address 12, error -71
    [ 4750.684215] usb 7-2: new full-speed USB device number 13 using uhci_hcd
    [ 4751.100182] usb 7-2: device not accepting address 13, error -71
    [ 4751.100224] hub 7-0:1.0: unable to enumerate USB device on port 2

    Here is my system (lsusb output):

    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 005 Device 002: ID 08ff:2810 AuthenTec, Inc. AES2810

    And the lspci output:

    00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07)
    00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
    00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
    00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02)
    00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02)
    00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02)
    00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02)
    00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02)
    00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02)
    00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 02)
    00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 02)
    00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02)
    00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02)
    00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 02)
    00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02)
    00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 92)
    00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 02)
    00:1f.2 IDE interface: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 2 port SATA Controller [IDE mode] (rev 02)
    00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02)
    00:1f.5 IDE interface: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 2 port SATA Controller [IDE mode] (rev 02)
    02:01.0 CardBus bridge: Ricoh Co Ltd RL5c476 II (rev ba)
    02:01.1 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller (rev 04)
    02:01.2 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 21)
    09:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5756ME Gigabit Ethernet PCI Express
    0c:00.0 Network controller: Broadcom Corporation BCM4312 802.11b/g LP-PHY (rev 01)

    Does anyone have any clue what the problem might be?

    Read the article

  • How to get paid and figure out if I want to keep this client [migrated]

    - by Heiner Fawkes
    I have a client who is not paying on time, but it looks like the specifics don't match similar questions on this SE site. I got a call from a client I did website work for years ago. I had not done this kind of work for many years and frankly I'm not sure I want to now, but nevertheless, about a month ago I agreed to bring his website, SEO, social media, and overall marketing for his small business up to speed. Why? He has told me many times how I'm the most honest, most well-informed contractor he's had experience with. And I personally kind of like him too. So I started working on an hourly basis. I sent one very small invoice and got paid. Then we talked a whole lot about all sorts of features he would like me to implement. I started that work and sent a second invoice on the first of the month (one of my two stated billing days). I didn't get paid. Every invoice states that I charge a whopping ten percent per week late. I sent many voicemails and emails asking him to please let me know what's going on with payment, and didn't get replies. Then the 15th of the month rolled around (which I had stated initially as one of my invoicing dates). Since I hadn't been paid for the last invoice, I simply didn't send him an invoice at that time but emailed him and said that I would combine it with the next scheduled invoice for this reason (probably a bad idea, I realize). Eventually he sent a portion of the invoice payment. I emailed back to let him know that he was three weeks late and what the remaining balance was. Finally we got in touch via phone. He basically told me that he thought I hadn't done all of the work I said I did. He looked at the page source code and it didn't look complete to him. I explained why his perception would be different and what work I had done as specified. He accepted this and said that part of the reason he didn't pay in full was that he had been swamped with personal family stuff, and part of the reason was that he didn't think I did all the work. That struck me as pretty weird. He also expressed concern that he has no idea now how much all the changes he has asked for are going to cost. And once again, he told me how honest and high-quality my services are compared to others he has dealt with. He also said he would pay me more (but not all) of the now three weeks overdue invoice that day. I didn't receive any payment.

    Basically this is how the client relationship strikes me: He's not good at communication. He's very busy and English isn't his first language. He almost never replies to emails, but phone calls are fine. He's asked me to avoid email for communication, and I've asked him to please use email. He might not have enough money to afford all the things he has asked for. But so far I have been working for an hourly fee (which is quite high). He has also started paying monthly for hosting and social media services from me.

    What seems very abnormal is for a client to be so overdue on payments and to actually withhold payment of an invoice, without any communication, because he didn't think the work was done. I told him that I will send dollar estimates of each module of remaining work so that we can decide which ones are the highest priority if he cannot afford them all. I also reiterated that in the future, if he has doubts about the work or an inability to pay, he must contact me immediately to say so. I basically plan to state the following to him:

    1. I would like to work for him and help his business. I also have sympathy for his recent family difficulties.
    2. I am happy to figure out payment plans that would work better for him, but first I need to be paid in full for all outstanding invoices, especially given that I skipped one of them just to be nice.
    3. The most crucial thing I need is communication about any problems with my work or his ability to pay.
    4. Once again, he needs to pay in full immediately before we negotiate anything else.

    Does the above seem like an appropriate communication? Is anything missing from it? Is anything I'm doing here really abnormal?

    Read the article

  • Reference Data Management and Master Data: How Are They Related?

    - by Mala Narasimharajan
    Submitted by: Rahul Kamath

    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise Master Data Management (MDM) solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or restructuring sales territories to enable equitable distribution of leads to sales teams following the acquisition of new products, or adding additional cost centers to enable fine grain control over expenses. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

    What is reference data? How does it relate to master data?
    Reference data is a close cousin of master data. While master data is challenged with problems of unique identification, may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler and more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and gives them contextual value. In fact, the creation of a new master data element may require new reference data to be created. For example, when a European company acquires a US business, chances are that they will now need to adapt their product line taxonomy to include a new category to describe the newly acquired US product line. Further, the cross-border transaction will also result in a revised geo hierarchy. The addition of new products represents changes to master data, while changes to product categories and geo hierarchy are examples of reference data changes.[1]

    The following is an illustrative list of examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships & cross-domain mappings, or standards.

    Types & Codes: Transaction Codes; Lookup Tables (e.g., Gender, Marital Status, etc.); Status Codes; Role Codes; Domain Values
    Taxonomies: Industry Classification Categories and Codes, e.g., the North American Industry Classification System (NAICS); Product Categories; Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense); Market Segments; Universal Standard Products and Services Classification (UNSPSC), eCl@ss
    Relationships / Mappings: Product / Segment; Product / Geo; City → State → Postal Codes; Customer / Market Segment; Business Unit / Channel; Country Codes / Currency Codes / Financial Accounts; International Classification of Diseases (ICD), e.g., ICD-9 → ICD-10 mappings
    Standards: Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601); Currency Codes (e.g., ISO); Country Codes (e.g., ISO 3166, UN); Date/Time, Time Zones (e.g., ISO 8601); Tax Rates

    Why manage reference data?
    Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior, or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.

    Sample Use Cases of Reference Data Management

    Healthcare: Diagnostic Codes
    The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the two code sets, ICD-9 and ICD-10, offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as does master data management. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to “cross-walk” mappings – either forward to ICD-10 or backwards to ICD-9, depending upon the reporting time horizon (a minimal code sketch of such a cross-walk appears at the end of this article).

    Spend Management: Product, Service & Supplier Codes
    Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager’s time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value added tasks.

    Change Management: Point of Sale Transaction Codes and Product Codes
    In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes with the appropriate product and GL codes to comply with the changing regulations.

    Multi-National Companies: Industry Classification Schemes
    As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications were impacted by a given change.

    References
    [1] Master Data versus Reference Data, Malcolm Chisholm, April 1, 2006.
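    To make the cross-walk idea concrete, here is a minimal C# sketch of a forward ICD-9 → ICD-10 mapping table. The diabetes pair 250.00 → E11.9 is a commonly cited example of such a mapping; the one-to-many entry is hypothetical, and a real cross-walk (e.g., the CMS General Equivalence Mappings) is far larger and curated by human reviewers:

        using System;
        using System.Collections.Generic;

        class DiagnosisCrossWalk
        {
            // Illustrative forward cross-walk only; real mappings are
            // one-to-many far more often, which is why they need review.
            static readonly Dictionary<string, List<string>> Icd9ToIcd10 =
                new Dictionary<string, List<string>>
                {
                    { "250.00", new List<string> { "E11.9" } },         // type 2 diabetes, uncomplicated
                    { "999.99", new List<string> { "X00.1", "X00.2" } } // hypothetical one-to-many case
                };

            static void Main()
            {
                foreach (string icd10 in Icd9ToIcd10["250.00"])
                    Console.WriteLine("250.00 maps forward to " + icd10);
            }
        }

    A backward (ICD-10 → ICD-9) dictionary built the same way would support the reverse reporting horizon described above.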

    Read the article

  • Organization & Architecture UNISA Studies – Chap 5

    - by MarkPearl
    Learning Outcomes
    - Describe the operation of a memory cell
    - Explain the difference between DRAM and SRAM
    - Discuss the different types of ROM
    - Explain the concepts of a hard failure and a soft error respectively
    - Describe SDRAM organization

    Semiconductor Main Memory
    The two traditional forms of RAM used in computers are DRAM and SRAM.

    DRAM (Dynamic RAM)
    Dynamic RAM is made with cells that store data as charge on capacitors. The presence or absence of charge in a capacitor is interpreted as a binary 1 or 0. Because capacitors have a natural tendency to discharge, dynamic RAM requires periodic charge refreshing to maintain data storage. The term dynamic refers to the tendency of the stored charge to leak away, even with power continuously applied. Although the DRAM cell is used to store a single bit (0 or 1), it is essentially an analogue device. The capacitor can store any charge value within a range; a threshold value determines whether the charge is interpreted as a 1 or 0.

    SRAM (Static RAM)
    SRAM is a digital device that uses the same logic elements used in the processor. In SRAM, binary values are stored using traditional flip-flop logic configurations. SRAM will hold its data as long as power is supplied to it. Unlike DRAM, no refresh is required to retain data.

    SRAM vs. DRAM
    DRAM is simpler and smaller than SRAM. Thus it is more dense and less expensive than SRAM. The cost of the refreshing circuitry for DRAM needs to be considered, but if the machine requires a large amount of memory, DRAM turns out to be cheaper than SRAM. SRAMs are somewhat faster than DRAM, thus SRAM is generally used for cache memory and DRAM is used for main memory.

    Types of ROM
    Read Only Memory (ROM) contains a permanent pattern of data that cannot be changed. ROM is non-volatile, meaning no power source is required to maintain the bit values in memory. While it is possible to read a ROM, it is not possible to write new data into it. An important application of ROM is microprogramming; other applications include library subroutines for frequently wanted functions, system programs, and function tables. A ROM is created like any other integrated circuit chip, with the data actually wired into the chip as part of the fabrication process. To reduce the costs of fabrication, we have PROMs. PROMs are:
    - Written only once
    - Non-volatile
    - Written after fabrication
    Another variation of ROM is the read-mostly memory, which is useful for applications in which read operations are far more frequent than write operations, but for which non-volatile storage is required. There are three common forms of read-mostly memory, namely:
    - EPROM
    - EEPROM
    - Flash memory

    Error Correction
    Semiconductor memory is subject to errors, which can be classed into two categories:
    - Hard failure – a permanent physical defect, such that the memory cell or cells cannot reliably store data
    - Soft error – a random error that alters the contents of one or more memory cells without damaging the memory (common causes include power supply issues, etc.)
    Most modern main memory systems include logic for both detecting and correcting errors.

    Error detection works as follows:
    - When data is to be written into memory, a calculation is performed on the data to produce a code
    - Both the code and the data are stored
    - When the previously stored word is read out, the code is used to detect and possibly correct errors
    The error checking provides one of 3 possible results:
    - No errors are detected – the fetched data bits are sent out
    - An error is detected, and it is possible to correct the error. The data bits plus error correction bits are fed into a corrector, which produces a corrected set of bits to be sent out
    - An error is detected, but it is not possible to correct it. This condition is reported

    Hamming Code
    See wiki for a detailed explanation. We will probably need to know how to do a Hamming code – refer to the textbook (pg. 188–189). A small worked sketch follows at the end of this section.

    Advanced DRAM Organization
    One of the most critical system bottlenecks when using high-performance processors is the interface to main memory. This interface is the most important pathway in the entire computer system. The basic building block of main memory remains the DRAM chip. In recent years a number of enhancements to the basic DRAM architecture have been explored, and some of these are now on the market, including:
    - SDRAM (Synchronous DRAM)
    - DDR-SDRAM
    - RDRAM

    SDRAM (Synchronous DRAM)
    SDRAM exchanges data with the processor synchronized to an external clock signal and running at the full speed of the processor/memory bus without imposing wait states. SDRAM employs a burst mode to eliminate the address setup time and row and column line precharge time after the first access. In burst mode, a series of data bits can be clocked out rapidly after the first bit has been accessed. SDRAM has a multiple-bank internal architecture that improves opportunities for on-chip parallelism. SDRAM performs best when it is transferring large blocks of data serially. There is now an enhanced version of SDRAM known as double data rate SDRAM, or DDR-SDRAM, that overcomes the once-per-cycle limitation of SDRAM.
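    To make the Hamming idea concrete, below is a minimal C# sketch of a Hamming(7,4) encoder and single-bit corrector (an illustrative example, not the textbook's exact presentation). Four data bits get three parity bits; recomputing the parity checks on a read yields a syndrome that is either zero (no error) or the 1-based position of the flipped bit:

        using System;

        class Hamming74
        {
            // Encode 4 data bits into a 7-bit codeword laid out as
            // p1 p2 d1 p3 d2 d3 d4 (positions 1..7).
            static int[] Encode(int d1, int d2, int d3, int d4)
            {
                int p1 = d1 ^ d2 ^ d4;   // checks positions 1,3,5,7
                int p2 = d1 ^ d3 ^ d4;   // checks positions 2,3,6,7
                int p3 = d2 ^ d3 ^ d4;   // checks positions 4,5,6,7
                return new[] { p1, p2, d1, p3, d2, d3, d4 };
            }

            // Recompute the checks; a non-zero syndrome is the 1-based
            // position of a single-bit error, which is flipped to correct it.
            static void CheckAndCorrect(int[] c)
            {
                int s1 = c[0] ^ c[2] ^ c[4] ^ c[6];
                int s2 = c[1] ^ c[2] ^ c[5] ^ c[6];
                int s3 = c[3] ^ c[4] ^ c[5] ^ c[6];
                int syndrome = s1 + 2 * s2 + 4 * s3;
                if (syndrome == 0)
                {
                    Console.WriteLine("No error detected");
                }
                else
                {
                    c[syndrome - 1] ^= 1;   // repair the faulty bit
                    Console.WriteLine("Corrected single-bit error at position " + syndrome);
                }
            }

            static void Main()
            {
                int[] word = Encode(1, 0, 1, 1);   // codeword 0110011
                word[4] ^= 1;                      // simulate a soft error at position 5
                CheckAndCorrect(word);             // detects position 5 and repairs it
                Console.WriteLine(string.Join("", word));
            }
        }

    This covers the "detected and corrected" path above; the "detected but not correctable" case (e.g., a double-bit error) requires an extra overall parity bit, the SECDED arrangement typically used in ECC memory.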

    Read the article

  • To Bit or Not To Bit

    - by Johnm
    'Twas a long day of troubleshooting and firefighting and now, with most of the office vacant, you face a blank scripting window to create a new table in your database. Many questions circle your mind like dirty water gurgling down the bathtub drain: "How normalized should this table be?", "Should I use an identity column?", "NVarchar or Varchar?", "Should this column be NULLABLE?", "I wonder what apple blue cheese bacon cheesecake tastes like?" Well, there are times when the mind goes its own direction.

    A Bit About Bit
    At some point during your table creation efforts you will encounter the decision of whether to use the bit data type for a column. The bit data type is an integer data type that recognizes only the values 1, 0 and NULL as valid. This data type is often utilized to store yes/no or true/false values. An example of its use would be a column called [IsGasoline], which would be intended to contain the value 1 if the row's subject (a car) had a gasoline engine and 0 if the subject did not have a gasoline engine. The bit data type can even be found in some of the system tables of SQL Server. For example, the sysssispackages table in the msdb database, which contains SQL Server Integration Services package information for the packages stored in SQL Server, contains a column called [IsEncrypted]. A value of 1 indicates that the package has been encrypted while a value of 0 indicates that it is not. I have learned that the most effective way to disperse the crowd that surrounds the office coffee machine is to engage in SQL Server debates. The bit data type has been one of the most reoccurring, as well as the most enjoyable, of these topics. It has a practical side and a philosophical side.

    Practical Consideration
    This data type certainly has its place and is a valuable option for database design; but it is often used in situations where the answer is really not a pure true/false response. In addition, true/false values are not very informative or scalable. Let's use the previously noted [IsGasoline] column for illustration. On the surface it appears to be a rather simple question when evaluating a car: "Does the car have a gasoline engine?" If the person entering data is entering a row for a Jeep Liberty, the response would be 1 since it has a gasoline engine. If the person is entering a row for a Chevrolet Volt, the response would be 0 since it has an electric engine. What happens when a person is entering a row for the gasoline/electric hybrid Toyota Prius? Would one person's conclusion be consistent with another person's conclusion? The argument could be made that the current intent is for the database to be used only for pure gasoline and pure electric engines; but this is where the scalability issue comes into play. With the use of the bit data type, a database modification and data conversion would be required if the business decided to take on hybrid engines. Whereas, alternatively, if the int data type were used as a foreign key to a reference table containing the engine type options, the change to include the hybrid option would only require an entry in the reference table.

    Philosophical Consideration
    Since the bit data type is often used for true/false or yes/no data (also called Boolean) it presents a philosophical conundrum of what to do about the allowance of the NULL value. The inclusion of NULL in a true/false or yes/no response simply violates the logical principle of bivalence, which states that "every proposition is either true or false". If NULL is not true, then it must be false. The mathematical laws of Boolean logic support this concept by stating that the only valid values in this scenario are 1 and 0. There is another way to look at this conundrum: NULL is also considered to be the absence of a response. In other words, it is the equivalent of "undecided". Anyone who watches the news can tell you that polls always include an "undecided" option. This could be considered a valid option in the world of yes/no/dunno. Throughout all of these considerations I have discovered one absolute certainty: when you have found a person, or group of persons, who are willing to entertain a philosophical debate of the bit data type, you have found some true friends.
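    As a side note, C# developers live with the same trivalence whenever a bit column is nullable: a NULLABLE bit surfaces as bool?, and the "undecided" state silently skips both branches of a two-way test. A small illustrative sketch (the names are made up), along with the reference-table alternative modeled as an enum:

        using System;

        // The scalable alternative from the practical discussion: engine type
        // as a key into a reference table rather than a yes/no bit.
        enum EngineType { Gasoline = 1, Electric = 2, Hybrid = 3 }

        class NullableBoolDemo
        {
            static void Main()
            {
                bool? isGasoline = null;    // 1, 0, or NULL: yes, no, or undecided

                // Lifted comparisons: NULL is neither true nor false,
                // so both branches below are skipped.
                if (isGasoline == true) Console.WriteLine("gasoline");
                if (isGasoline == false) Console.WriteLine("not gasoline");

                // The three-state check has to be explicit.
                Console.WriteLine(isGasoline.HasValue ? isGasoline.Value.ToString() : "undecided");

                // Adding Hybrid is one new enum member (or reference table row),
                // not a schema change plus a data conversion.
                EngineType prius = EngineType.Hybrid;
                Console.WriteLine(prius);   // prints "Hybrid"
            }
        }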

    Read the article

  • Adventures in Lab Management Configuration: Part 2 of 3

    - by Enrique Lima
    The first post was the high level overview. Now it is time for the details on what was done to the existing CMMI project based on CMMI v4.2. The first step was to go into Visual Studio, then to the Team Project Collection Settings, and then to the Process Template Manager. Once there, it was a matter of selecting the appropriate template (MSF for CMMI Process Improvement v5.0) and downloading it to a point I could reference later (for example C:\Templates). Then on to using the steps from the guidance post. Since I was using an x64 deployment, I will make reference to the path as <toolpath>; however, the actual path to reference in a 64-bit environment is "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE". As I mentioned in the previous post, make sure to first perform a backup of the Configuration, Collection and Warehouse DBs. If you did not apply any changes to the names and such, then you will find those as tfs_Configuration, tfs_DefaultCollection and tfs_Warehouse.

    Now, the work needed with the witadmin tool. That includes the uploading of the structures that differ from v4.2 to v5.0. There is likely going to be an issue with the naming of some fields. For example, TFS 2010 likes something along the lines of "Area ID", whereas TFS 2008 would have had it as "AreaID". So, this will need to be corrected. Some posts will have you go through this after the errors pop up. I would recommend doing this process prior to executing the importwitd process.

    witadmin listfields /collection:<path to collection> > c:\ListFields.txt

    Review the following fields: for AreaID, review the Name property and validate whether it states "AreaID"; if so, you will need to rename the Name field to reflect "Area ID". ExternalLinkCount, RelatedLinkCount, HyperLinkCount, AttachedFileCount and IterationID would be the other fields to check. To correct the issue, execute the following:

    witadmin changefield /collection:<path to collection> /n:"System.ExternalLinkCount" /name:"External Link Count"

    Repeat for Area ID, Related Link Count, Hyperlink Count, Attached File Count and Iteration ID. Once this is done, proceed with the commands below.

    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\TestCase.xml"
    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\SharedStep.xml"
    witadmin importcategories /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\categories.xml"

    Modifications to the Bug definition: the first step is to export the existing definition.

    witadmin exportwitd /collection:<path to collection> /p:<project> /n:bug /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"

    Make modifications to the recently exported MyBug.xml file. Details for the modification are here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask. Once the changes are done, proceed with the import command:

    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"

    Repeat the process for the Scenario or Requirement type definition:

    witadmin exportwitd /collection:<path to collection> /p:<project> /n:requirement /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"

    Make modifications to the recently exported MyRequirement.xml file. Details for the modification are here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask. Once the changes are done, proceed with the import command:

    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"

    Provide the Bug Field Mapping definition, after creating the file as specified here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#TCMBugFieldMapping

    tcm bugfieldmapping /import /mappingfile:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\bugfieldmappings.xml" /collection:<path to collection> /teamproject:<project name>

    Read the article

  • Implement Tree/Details With Taskflow Regions Using EJB

    - by Deepak Siddappa
    This article describes how to display a tree/details layout using task flow regions.

    Use Case Description
    Let us take a scenario where we need to display tree/details: the left region contains a category hierarchy with items listed in a tree structure (e.g. Region-Countries-Locations-Departments in tree format), and the right region contains the Employees list. In detail, the user may drill down through categories using a tree until Employees are listed. Clicking the tree node name displays the Employee list in the adjacent pane related to that particular tree node.

    Implementation Steps
    The script for creating the tables and inserting the data required for this application: CreateSchema.sql

    1. Create a Java EE web application with entities based on the Regions, Countries, Locations, Departments and Employees tables.
    2. Create a stateless session bean and a data control for the stateless session bean.
    3. Add the below code to the session bean and expose the method in the local/remote interface, then generate a data control for it. Note: here in the below code "em" is an EntityManager.

        public List<Employees> empFilteredByTreeNode(String treeNodeType, String paramValue) {
            String queryString = null;
            try {
                if (treeNodeType == "null") {
                    queryString = "select * from Employees emp ORDER BY emp.employee_id ASC";
                } else if (Pattern.matches("[a-zA-Z]+[_]+[a-zA-Z]+[_]+[[0-9]+]+", treeNodeType)) {
                    queryString = "select * from employees emp INNER JOIN departments dept\n" +
                        "ON emp.department_id = dept.department_id JOIN locations loc\n" +
                        "ON dept.location_id = loc.location_id JOIN countries cont\n" +
                        "ON loc.country_id = cont.country_id JOIN regions reg\n" +
                        "ON cont.region_id = reg.region_id and reg.region_name = '" + paramValue +
                        "' ORDER BY emp.employee_id ASC";
                } else if (treeNodeType.contains("regionsFindAll_bc_countriesList_1")) {
                    queryString = "select * from employees emp INNER JOIN departments dept \n" +
                        "ON emp.department_id = dept.department_id JOIN locations loc \n" +
                        "ON dept.location_id = loc.location_id JOIN countries cont \n" +
                        "ON loc.country_id = cont.country_id and cont.country_name = '" + paramValue +
                        "' ORDER BY emp.employee_id ASC";
                } else if (treeNodeType.contains("regionsFindAll_bc_locationsList_1")) {
                    queryString = "select * from employees emp INNER JOIN departments dept " +
                        "ON emp.department_id = dept.department_id JOIN locations loc " +
                        "ON dept.location_id = loc.location_id and loc.city = '" + paramValue +
                        "' ORDER BY emp.employee_id ASC";
                } else if (treeNodeType.trim().contains("regionsFindAll_bc_departmentsList_1")) {
                    queryString = "select * from Employees emp INNER JOIN Departments dept " +
                        "ON emp.DEPARTMENT_ID = dept.DEPARTMENT_ID and dept.DEPARTMENT_NAME = '" + paramValue + "'";
                }
            } catch (NullPointerException e) {
                System.out.println(e.getMessage());
            }
            return em.createNativeQuery(queryString, Employees.class).getResultList();
        }

    4. In the ViewController project, create two ADF task flows with page fragments and name them FirstTaskflow and SecondTaskflow respectively.
    5. Open FirstTaskflow; from the component palette drop a view (page fragment) and name it TreeList.jsff.
    6. Open SecondTaskflow; from the component palette drop a view (page fragment), name it EmpList.jsff, and create two parameters in its overview parameters tab as shown in the image below.
    7. Open TreeList.jsff; from the data control palette drop regionsFindAll->Tree as an ADF Tree.
    8. In the Edit Tree Binding dialog, for the tree level rules select the display attributes as follows: model.Regions - regionName; model.Countries - countryName; model.Locations - city; model.Departments - departmentName.
    9. In the structure panel, click on af:Tree - t1 and select selectionListener with the edit property.
    10. Create a "TreeBean" managed bean with scope "session" as shown in the image below.
    11. Create a new method getTreeNodeSelectedValue and click OK.
    12. Open the TreeBean managed bean and add the below code:

        private String treeNodeType;
        private String paramValue;

        public void getTreeNodeSelectedValue(SelectionEvent selectionEvent) {
            RichTree tree = (RichTree) selectionEvent.getSource();
            RowKeySet addedSet = selectionEvent.getAddedSet();
            Iterator i = addedSet.iterator();
            TreeModel model = (TreeModel) tree.getValue();
            model.setRowKey(i.next());
            JUCtrlHierNodeBinding node = (JUCtrlHierNodeBinding) tree.getRowData();
            // oracle.jbo.Row rw = node.getRow();
            Object selectedTreeNode = node.getAttribute(0);
            Object treeListType = node.getBindings();
            String treeNodeType = treeListType.toString();
            this.setParamValue(selectedTreeNode.toString());
            this.setTreeNodeType(treeNodeType);
        }

        public void setTreeNodeType(String treeNodeType) { this.treeNodeType = treeNodeType; }
        public String getTreeNodeType() { return treeNodeType; }
        public void setParamValue(String paramValue) { this.paramValue = paramValue; }
        public String getParamValue() { return paramValue; }

    13. Open EmpList.jsff; from the data control palette drop empFilteredByTreeNode->Employees->Table as an ADF read-only table.
    14. After selecting the Employees result set, in the Edit Action Binding dialog window pass the pageFlowScope parameters as shown in the image below.
    15. In the EmpList.jsff page, click the Bindings tab, click Create Executable Binding, select Invoke Action, and follow as shown in the image below.
    16. Edit the executeEmpFiltered invoke action properties and set Refresh to ifNeeded, so the method will be executed whenever the page needs it.
    17. Create a Main.jspx page with the Oracle Three Column Layout page template.
    18. Drop FirstTaskflow as a region in the start facet and SecondTaskflow as a region in the center facet; in the Edit Task Flow Binding dialog window pass the input parameters as shown in the image below.
    19. Run Main.jspx. The tree will be displayed in the left region and the employee details will be displayed in the right region. Click on Americas in the tree, and all employees related to the Americas will be displayed. Click on Americas->United States of America->South San Francisco->Accounting, and only employees belonging to the Accounting department will be displayed.

    Read the article

  • About Solaris 11 and UltraSPARC II/III/IV/IV+

    - by nospam(at)example.com (Joerg Moellenkamp)
    I know that I will get the usual amount of comments like "Oh, Jörg ? you can't be negative about Oracle" for this article. However as usual I want to explain the logic behind my reasoning. Yes ? I know that there is a lot of UltraSPARC III, IV and IV+ gear out there. But there are some very basic questions: Does your application you are currently running on this gear stops running just because you can't run Solaris 11 on it? What is the need to upgrade a system already in production to Solaris 11? I have the impression, that some people think that the systems get useless in the moment Oracle releases Solaris 11. I know that Sun sold UltraSPARC IV+ systems until 2009. The Sun SF490 introduced 2004 for example, that was a Sun SF480 with UltraSPARC IV and later with UltraSPARC IV+. And yes, Sun made some speedbumps. At that time the systems of the UltraSPARC III to IV+ generations were supported on Solaris 8, on Solaris 9 and on Solaris 10. However from my perspective we sold them to customers, which weren't able to migrate to Solaris 10 because they used applications not supported on Solaris 9 or who just didn't wanted to migrate to Solaris 10. Believe it or not ? I personally know two customers that migrated core systems to Solaris 10 in ? well 2008/9. This was especially true when the M3000 was announced in 2008 when it closed the darned single socket gap. It may be different at you site, however that's what I remember about that time when talking with customers. At first: Just because there is no Solaris 11 for UltraSPARC III, IV and IV+, it doesn't mean that Solaris 10 will go away anytime soon. I just want to point you to "Expect Lifetime Support - Hardware and Operating Systems". It states about Premier Support:Maintenance and software upgrades are included for Oracle operating systems and Oracle VM for a minimum of eight years from the general availability date.GA for Solaris 10 was in 2005. Plus 8 years ? 2013 ? at minimum. Then you can still opt for 3 years of "Extended Support" ? 2016 ? at minimum. 2016 your systems purchased in 2009 are 7 years old. Even on systems purchased at the very end of the lifetime of that system generation. That are the rules as written in the linked document. I said minimum The actual dates are even further in the future: Premier Support for Solaris 10 ends in 2015, Extended support ends 2018. Sustaining support ? indefinite. You will find this in the document "Oracle Lifetime Support Policy: Oracle Hardware and Operating Systems".So I don't understand when some people write, that Oracle is less protective about hardware investments than Sun. And for hardware it's the same as with Sun: Service 5 years after EOL as part of Premier Support. I would like to write about a different perspective as well: I have to be a little cautious here, because this is going in the roadmap area, so I will mention the public sources here: John Fowler told last year that we have to expect at at least 3x the single thread performance of T3 for T4. We have 8 cores in T4, as stated by Rick Hetherington. Let's assume for a moment that a T4 core will have the performance of a UltraSPARC core (just to simplify math and not to disclosing anything about the performance, all existing SPARC cores are considered equal). So given this pieces of information, you could consolidate 8 V215, 4 or 8 V245, 2 full blown V445,2 full blown 490, 2 full blown M3000 on a single T4 SPARC processor. The Fowler roadmap prezo talked about 4-socket systems with T4. 
So: 32 V215, 16 or 32 V245, 8 full-blown V445, 8 full-blown V490, or 8 full-blown M3000 in a single system image. I think you get the idea. That said, most of the systems we are talking about have already been amortized, and perhaps it's just time to invest in new systems to gain other advantages: reduced space consumption, reduced power consumption, some of the neat features sun4v gives you, and, yes, a reduced number of processor licenses for Oracle and less money for Oracle HW/SW support. As much as I dislike it myself that my own UltraSPARC III and UltraSPARC II based systems won't run Solaris 11 (and I have quite a few of them in my personal lab), I really think that the impact on production environments will be much smaller than most people expect right now. By the way: the reason for this move is a quite significant new feature. I will tell you that it was this feature when it's out. I assume that telling you even a word more could lead to much more time to blog.

    Read the article

  • Microsoft Forcing Dev/Partners Hands on Win 8 Through Certification

    - by D'Arcy Lussier
I remember 2.5 years ago when Microsoft dropped a bomb on the Microsoft Partner community: all Gold competencies would require .NET 4 based premier certifications (MCPD). Problem was, this gave a window of about 6 months for partners to update their employees’ certifications. At the place I was working, I put together an aggressive plan and we were able to attain the certs needed. Microsoft has always been open that certification requirements will change as the industry changes. .NET 1.0 certifications are useless here in 2012, and rightfully so they’ve been retired for a long time now. But now we’re seeing a new tactic by Microsoft: shifting gears away from certifications that speak to what the industry needs and towards the Windows 8 agenda. Consider that currently the premier development certification is the Microsoft Certified Professional Developer, which comes in three flavours: Web, Windows, and Azure. All require WCF and Data Access exams, as well as one that deals with the associated base technologies (ASP.NET, WinForms/WPF, Azure), and one that ties all three together in a solution-based exam. For Microsoft-based organizations, these skills aren’t just valid but necessary in building Microsoft applications. But the MCPD is being replaced with our old friend Microsoft Certified Solutions Developer (MCSD). So far, Microsoft has only released two types of MCSD: Web and Windows Store Apps. Windows Store Apps?! In a push to move developers to create WinRT-based applications, desktop development is now considered a second-class citizen in the eyes of Redmond. Also interesting are the language options for the exams: HTML5 and C#. Sorry VB folks, it’s time to embrace curly braces, whether they be JavaScript or C#. Consider too the skills being assessed for the Windows Store Apps: Get your MCSD: Windows Store Apps Using HTML5; Get your MCSD: Windows Store Apps Using C#. *Image Source: http://www.microsoft.com/learning/en/us/certification/mcsd-windows-store-apps.aspx Nov 21/2012 If you look at the skills being tested in each exam, you’ll find that skills like WCF and Data Access are downplayed compared to things like integrating Charms, facilitating Search, programming for the microphone and camera - all very Windows 8 focussed items. Where this becomes maddening is that Microsoft is still pushing Windows 7 with enterprise clients. According to a ZDNet article, Microsoft wants to see Windows 7 on 70% of enterprise desktops by mid 2013. Assuming they somehow meet that (it’s a pretty lofty goal), there are years of traditional desktop-based development that will still be required at some level. For those thinking they’ll just earn the MCPD certification and stick with it, note that most exams that go towards that certification will be retired at the end of July 2013! (Read the small print.) And while details haven’t been finalized, it’s a safe bet that MCPD certifications eventually won’t count towards Gold-level competencies in the Microsoft Partner program. What this means for Microsoft Partners and Developers is that certification for desktop development is going to be limited to Windows Store Apps unless Microsoft re-introduces a traditional desktop (WPF) based MCSD cert. Web Application Development - It’s Not All Bad: there are big changes on the web side of certification, but I actually see these changes as being for the good!
Check out the new exam requirements for MCSD - Web Applications: Get your MCSD: Web Applications certification. *Image Source: http://www.microsoft.com/learning/en/us/certification/cert-mcsd-web-applications.aspx Nov 21, 2012 We now *start* with HTML5, JavaScript, and CSS3! Now I’m sure that these will be slanted towards web development in IE, and I can hear designers everywhere bemoaning the CSS/IE combination. Still, I applaud Microsoft for adopting HTML5 as the go-to web technology and requiring certified developers to prove they have skills in the basics of web dev. The fact that the second exam clearly states “MVC Web Applications” shows that Web Forms is truly legacy and deprecated. That’s not to say there aren’t those out there that are still supporting or (for whatever reason) doing new dev with Web Forms, but this move by Microsoft is telling the community they had better get on the MVC bandwagon if they want to stay current. Fantastic! And of course Azure needs to be here as well, and this is where the Microsoft agenda fits in. It’s no secret that there’s been a huge push to get developers on to Azure. I don’t see this as a bad thing either, as cloud computing (whether Azure, private, or 3rd party) is a necessary skill for developers to have here in 2012. The cynic in me realizes that the HTML5/JavaScript/CSS push wouldn’t be as prominent, though, if not for the Windows 8 Store App play, where HTML5 is a first-class citizen (and an available language for the MCSD Windows Store App cert). In this case, the desktop developer’s loss is the web developer’s gain. Get Ready for Changes: in addition to the changes in certifications, the Microsoft Partner competencies are going through changes as well. Web and Software Development are being merged into a single competency, meaning that the licenses you would have received from having both at Gold are reduced. Other competencies are either being removed or changed, as are the exam requirements. In the same way that we’re seeing faster release cycles from Microsoft, so too will we see the Microsoft Partner Program and MS Certifications evolve faster than ever before. Many of us got caught in the last wave of changes, but this time we can see the wave coming - and it looks pretty big!

    Read the article

  • What Can We Learn About Software Security by Going to the Gym

    - by Nick Harrison
There was a recent rash of car break-ins at the gym. Not an epidemic by any stretch, probably 4 or 5, but still... My gym used to allow you to hang your keys from a peg board at the front desk. This way you could come to the gym dressed to work out, lock your valuables in your car, and not have anything to worry about. Ignorance is bliss. The problem was that anyone who wanted to could go pick up your car keys, click the unlock button and find your car. Once there, they could rummage through your stuff and then walk back in and finish their workout as if nothing had happened. The people doing this were a little smarter than the average thief and would swipe some but not all of your cash, leaving everything else in place. Most thieves would steal the whole car and be busted more quickly. The victims were unaware that anything had happened for several days. Fortunately, once the victims realized what had happened, the gym was still able to pull security tapes and find out who was misbehaving. All of the bad guys were busted, and everyone can now breathe a sigh of relief. It is once again safe to go to the gym. Except there was still a fundamental problem. Putting your keys on a peg board by the front door is just asking for bad things to happen. One person got busted exploiting this security flaw. Others could still be exploiting it. In fact, others may well have been exploiting it and simply never got caught. How long would it take you to realize that $10 was missing from your wallet, if everything else was there? How would you even know when it went missing? Would you go to the front desk and even bother to ask them to review security tapes if you were only missing a small amount? Once highlighted, it is easy to see how commonly such a vulnerability may have been exploited. So the gym took the very reasonable precaution of removing the peg board. To me the most shocking part of this story is the resulting uproar from gym members losing the convenient key peg. How dare they remove the trusted peg board? How can I work out now that I have to carry my keys from machine to machine? How can I enjoy my workout with this added inconvenience? This all happened a couple of weeks ago, and some people are still complaining. In light of the recent high-profile hacking, there are a couple of parallels that can be drawn. Many web sites are riddled with vulnerabilities as crazy and easily exploitable as leaving your car keys by the front door while you work out. No one ever considered thanking the people who were swiping these keys for pointing out the vulnerability. Without hesitation, they had their gym memberships revoked and are awaiting prosecution. The gym did recognize the vulnerability for what it is, and closed up that attack vector. What can we learn from this? Monitoring and logging will not prevent a crime, but they will allow us to identify that a crime took place and may help track down who did it. Once we find a security weakness, we need to eliminate it. We may never identify and eliminate all security weaknesses, but we cannot allow well-known vulnerabilities to persist in our systems. In our case, we are not likely to meet resistance from end users. We are more likely to meet resistance from stakeholders, product owners, keepers of schedules and budgets. We may meet resistance from integration partners, coworkers, and third-party vendors. Regardless of the source, we will see resistance, but the weakness needs to be dealt with.
There is no need to glorify a cracker for bringing a security weakness to light. Regardless of their claimed motives, they are not heroes. There is also no point in wasting time defending weaknesses once they are identified. Deal with the weakness and move on. It may be embarrassing to find security weaknesses in our systems, but it is even more embarrassing to continue ignoring them. Even if it is unpopular, we need to seek out security weaknesses and eliminate them when we find them. The Common Weakness Enumeration (http://cwe.mitre.org/), maintained by MITRE with input from the security community including SANS (http://www.sans.org), lists out common weaknesses. The site navigation takes a little getting used to, but there is a treasure trove here. Here is the detail page for SQL Injection. It clearly states how this can be exploited, in case anyone doubts that the weakness should be taken seriously, and more importantly how to mitigate the risk.
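As a concrete illustration of that mitigation advice, here is a minimal parameterized-query sketch in C#; the Members table, the connection string, and the userInput variable are hypothetical stand-ins, not something taken from the article:

using System;
using System.Data.SqlClient;

class SafeQueryExample
{
    static void PrintMembers(string connectionString, string userInput)
    {
        // Vulnerable pattern (don't do this): user input concatenated into SQL text:
        //   "SELECT Id, Name FROM Members WHERE Name = '" + userInput + "'"
        // Mitigated pattern: user input travels as a typed parameter, never as SQL.
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT Id, Name FROM Members WHERE Name = @name", conn))
        {
            cmd.Parameters.AddWithValue("@name", userInput);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
                }
            }
        }
    }
}

Because the parameter value is sent separately from the SQL text, a crafted input like ' OR 1=1 -- is matched literally instead of being executed.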

    Read the article

  • Taking a Flying Leap

    - by Lance Shaw
Yesterday, I went skydiving with three of my children. It was thrilling, scary, invigorating and exciting. While there is obvious risk involved, the reward and feeling of success was well worth it. You might already be wondering what skydiving would have to do with WebCenter, so let me explain. Implementing a skydiving program and becoming an instructor does not happen overnight. It does not happen with the purchase of the needed technology. Not one of us would go out, buy a parachute, the harnesses, helmet and all the gear and be able to convince anyone that we are now ready to be a skydiving instructor. The fact is that obtaining the technology is merely a small piece of the overall process, and so is the case with managing content in your company. You don't just buy the right software (Oracle WebCenter Content) and go to your boss and declare information management success. There is planning, research and effort that goes into deploying software of any kind, especially when it is as mission-critical to the success of your business as Enterprise Content Management. To become a certified skydiving instructor takes at least 3 years of commitment and often longer. In the United States, candidates must complete over 500 solo jumps of their own over a minimum of 36 months and then must complete additional rigorous training under observation. When you consider the amount of time and effort involved, it's not unlike getting a college degree, and anyone who has trusted their life to one of these instructors will no doubt appreciate their dedication to the curriculum. Implementing an ECM system won't take that long, but it certainly requires commitment, analysis and consideration. But guess what? Humans are involved, and that means that mistakes can happen and that rules change. This struck me while reading an excellent post on darkreading.com by Glenn S. Phillips entitled "Mission Impossible: 4 Reasons Compliance is Impossible". His over-arching point was that with information management and security, environments change and people are involved, meaning the work is never done. He stated that you can never claim your compliance efforts are complete for the following reasons. People are involved, and let's face it, some are more trustworthy than others. Change is constant: there is always some new technology coming along that is disruptive; consumer-grade cloud file sharing and sync tools come to mind here. Compliance is interpreted, not defined: laws and the judges that read them are always on the move. Technology is a tool, not a complete solution: there is no magic pill. The skydiving analogy holds true here as well. Ultimately, a single person packs your parachute. For obvious reasons, you prefer that this person be trustworthy, but there are no absolute guarantees of a 100% error-free scenario. Weather and wind conditions are never constant, and the best-laid plans for a great day of skydiving are easily disrupted by forces outside of your control. Rules and regulations vary by location and may be updated at any time, and as I mentioned early on, even the best technology on its own will only get you started. The good news is that, like skydiving, with the right technology, the right planning, the right team and a proper understanding of the rules and regulations that govern your industry, your ECM deployment can be a great success.
Failure to plan for any of the 4 factors that Glenn outlined in his article will certainly put your deployment and maybe even your company at risk, so consider them carefully. As a final aside, for those of you who consider skydiving an incredibly dangerous and risky pastime, consider this comparative statistic.  In 2012, the U.S. Parachute Association recorded 19 fatal skydiving accidents in the U.S. out of roughly 3.1 million jumps.  That’s 0.006 fatalities per 1,000 jumps. By comparison, the U.S. National Highway Traffic Safety Administration reports that there were 34,080 deaths due to car accidents in 2012.  Based on the percentages, one could argue that it is safer to jump out of a plane than to drive to the airport where the skydiving will take place. While the way you manage, secure, classify, control, retain and dispose of company files may not carry as much risk as driving or skydiving, it certainly carries risk for the organization when not planned and deployed appropriately.  Consider all the factors involved in your organization as you make your content management plans.  For additional areas of consideration, be sure to download our free whitepaper on the topic entitled "The Top 10 Criteria for Choosing an ECM System" which is available for download here.

    Read the article

  • Generically correcting data before save with Entity Framework

    - by koevoeter
Been working with Entity Framework (.NET 4.0) for a week now on a data migration job and needed some code that generically corrects string values in the database. You have probably also seen things like empty strings instead of NULL, or non-trimmed texts ("United States       "), in "old" databases, and you don't want to apply a correcting function to every column you migrate. Here's how I've done this (extending the partial class of my ObjectContext):

// Namespaces needed for ObjectContext, EntityState, EdmScalarPropertyAttribute,
// reflection and LINQ.
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.Objects;
using System.Data.Objects.DataClasses;
using System.Linq;
using System.Reflection;

public partial class MyDatacontext
{
    partial void OnContextCreated()
    {
        // Hook the context's SavingChanges event once, when the context is created.
        SavingChanges += OnSavingChanges;
    }

    private void OnSavingChanges(object sender, EventArgs e)
    {
        foreach (var entity in GetPersistingEntities(sender))
        {
            foreach (var propertyInfo in GetStringProperties(entity))
            {
                var value = (string)propertyInfo.GetValue(entity, null);

                if (value == null)
                {
                    continue;
                }

                if (value.Trim().Length == 0 && IsNullable(propertyInfo))
                {
                    // An empty string in a nullable column becomes NULL.
                    propertyInfo.SetValue(entity, null, null);
                }
                else if (value != value.Trim())
                {
                    // Non-trimmed text ("United States       ") gets trimmed.
                    propertyInfo.SetValue(entity, value.Trim(), null);
                }
            }
        }
    }

    private IEnumerable<object> GetPersistingEntities(object sender)
    {
        // Only entities that are about to be inserted or updated.
        return ((ObjectContext)sender).ObjectStateManager
            .GetObjectStateEntries(EntityState.Added | EntityState.Modified)
            .Select(e => e.Entity);
    }

    private IEnumerable<PropertyInfo> GetStringProperties(object entity)
    {
        return entity.GetType().GetProperties()
            .Where(pi => pi.PropertyType == typeof(string));
    }

    private bool IsNullable(PropertyInfo propertyInfo)
    {
        return ((EdmScalarPropertyAttribute)propertyInfo
            .GetCustomAttributes(typeof(EdmScalarPropertyAttribute), false)
            .Single()).IsNullable;
    }
}

Obviously you can use similar code for other generic corrections.
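To make the behavior concrete, here is a minimal, hypothetical usage sketch; the Customer entity and its Customers set are illustrative names standing in for your own generated model:

using (var context = new MyDatacontext())
{
    var customer = context.Customers.First();
    customer.Country = "United States       "; // trailing spaces
    customer.Remarks = "";                     // empty string in a nullable column

    // SavingChanges fires inside SaveChanges(): Country is trimmed and
    // Remarks is replaced with NULL before the UPDATE reaches the database.
    context.SaveChanges();
}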

    Read the article

  • WEBLOGIC 12C HANDS-ON BOOTCAMP

    - by agallego
JOIN THE ORACLE WEBLOGIC PARTNER COMMUNITY AND ATTEND A WEBLOGIC 12C HANDS-ON BOOTCAMP

Dear partner,

As a valued partner we would like to invite you to the WebLogic Partner Community and our WebLogic 12c hands-on bootcamps - free of charge! Please first log in at http://partner.oracle.com and then visit: WebLogic Partner Community. (If you need support with your account, please contact the Oracle Partner Business Center.) The goal of the WebLogic Partner Community is to provide you with the latest information on Oracle's offerings and to facilitate the exchange of experience within the community. Register now for free assessment vouchers to become certified and for WebLogic Server 12c (200 new features and training), and connect and network via the WebLogic Blogs, WebLogic on Facebook, WebLogic on LinkedIn, WebLogic on Twitter and WebLogic on Oracle Mix.

WebLogic 12c hands-on workshops: we offer free 3-day hands-on WebLogic 12c workshops for Oracle partners who want to become Application Grid Specialized. Register here!
- Germany: 3-5 April 2012, Oracle Düsseldorf
- France: 24-26 April 2012, Oracle Colombes
- Spain: 08-10 May 2012, Oracle Madrid
- Netherlands: 22-24 May 2012, Oracle Amsterdam
- United Kingdom: 06-08 June 2012, Oracle Reading
- Italy: 19-21 June 2012, Oracle Cinisello Balsamo
- Portugal: 10-12 July 2012, Oracle Lisbon

Skill requirements. Attendees need the following skills, as required by the product set and to make sure they get the most out of the training:
- Basic knowledge of Java and Java EE
- Understanding of the application server concept
- Basic knowledge of older releases of WebLogic Server (beneficial)
- Membership in the WebLogic Partner Community (to register, please visit http://www.oracle.com/partners/goto/wls-emea)

Hardware requirements. Every participant works on his or her own notebook. The minimum hardware requirements are:
- 4 GB physical RAM (we will boot the image with 2 GB RAM)
- Dual-core CPU
- 15 GB of free disk space

Software requirements. Please install Oracle VM VirtualBox 4.1.8.

Follow-up and certification. With the workshop registration you agree to the following next steps: attend the follow-up training and pass the Oracle Application Grid Certified Implementation Specialist assessment. For details and registration please visit: Register Here.

Free WebLogic certification (free assessment voucher to become certified). For all WebLogic experts, we offer free vouchers worth $195 for the Oracle Application Grid Certified Implementation Specialist assessment. To demonstrate your WebLogic knowledge you first have to pass the free online assessment Oracle Application Grid PreSales Specialist. For free vouchers, please send an e-mail with a screenshot of your Oracle Application Grid PreSales certificate to [email protected], including your name, company, e-mail and country. Note: this offer is limited to partners from Europe, the Middle East and Africa. Partners from other countries, please contact your Oracle partner manager.

WebLogic Specialization. To become specialized in Application Grid, please make sure that you access the Application Grid Specialization Guide and the Application Grid Specialization Checklist. If you have any questions, please contact the Oracle Partner Business Center.
Oracle WebLogic Server 12c key new capabilities:
- Java EE 6 and developer productivity
- Simplified deployment and management with virtualization
- Integrated traffic management
- Enhanced high availability and disaster recovery
- Much higher performance

For more information please visit the presentation from the WebLogic 12c launch, the technical presentation from the WebLogic 12c launch, the WebLogic OTN website, and the WebLogic 12c Virtual Conference Environment. For regular information, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

Best regards,
Jürgen Kress
WebLogic Partner Adoption EMEA
Tel. +49 89 1430 1479
E-Mail: [email protected]

    Read the article

  • Rules for Naming

    - by PointsToShare
© 2011 By: Dov Trietsch. All rights reserved.

Naming Documents (or is it "Document, Naming"?)

Tis but thy name that is my enemy;
Thou art thyself, though not a Montague.
What's Montague? It is nor hand, nor foot,
Nor arm, nor face, nor any other part
Belonging to a man. O, be some other name!
What's in a name? That which we call a rose
By any other name would smell as sweet;
So Romeo would, were he not Romeo call'd,
Retain that dear perfection which he owes
Without that title. Romeo, doff thy name
And for that name which is no part of thee
Take all myself.
Shakespeare - Romeo and Juliet, Act II, Scene 2

We normally only use the bold portion of the famous Shakespearean quote above, but it is really out of context. As the play unfolds, we learn that a name is all too powerful. Indeed, it is because of their names that the doomed lovers die. There might be life and death in a name. (BTW, when I wrote this monograph, I was in Hatfield, PA. Remember the Hatfields and the McCoys?) This is a bit extreme, but in the field of Knowledge Management (KM) names are of the utmost importance as well. When I write an article about managing SharePoint sites, how should I name it? "Managing a site" or "Site, managing"? Nine times out of ten I'd opt for the latter. Almost everything we do is "managing", so to make life easier for a person looking for meaningful content, we title our articles starting with the differentiator rather than the common factor. As a rule of thumb, we start the name with the noun rather than the verb. It is not what we do that is the primary key; it is what we do it to. So, answer this: is it a "rule of thumb" or a "thumb rule"? This is tough. A lot of what we do when naming is a judgment call. Both thumb and rule are nouns, albeit one concrete and one abstract (more about this later), but to most people "thumb rule" is meaningless while "rule of thumb" is an idiom. The difference between knowledge and information is that knowledge is meaningful information placed in context. Thus I elect "rule of thumb". It is the more meaningful title. Abstract and concrete are relative terms. Many nouns (and verbs) that are abstract to a commoner are concrete to a practitioner of one profession or another, and may even have different concrete meanings in different professional jargons. Think about "running". To an executive it means running a business; to a marathoner its meaning is much more literal. Generally speaking, we store and disseminate knowledge within a practice more than we do in general. Even dictionaries and encyclopedias define terms as they apply to different audiences. The rule of thumb is to put the more concrete term first, but within the audience's jargon. Even the title of this monograph is a question. Do I name it "Naming Documents" or "Documents, Naming"? Well, my own rule of thumb ("Here he goes again!?") states that the latter is better because it starts with a noun, but this is a document about naming more than it is about documents. The rules of naming also apply to graphs and charts, Excel spreadsheets, and so on. Thus, I vote for the former. A better title could have been "Naming Objects", only the word "Object" is a bit too abstract. How about just "Naming" or "Naming, rules of"? You get the drift. One way to resolve all of this is to store the documents in knowledge bases, which may become the subject of a future punditry. Knowledge bases use keywords to describe their content. Use a metadata store for the keywords to at least attempt some common ground.
Here is another general rule (rule of thumb?!!): put at least one keyword in the title, and use subtitles. Here is an example: "Migrating documents - Screening, cleaning, and organizing our knowledge." The main keyword is "documents"; next is "migrating". Other keywords appear in the subtitle: "screening", "cleaning", and "organizing". Any questions? Send me an amply named document by email: [email protected]

    Read the article

  • With Its MySQL Database-as-a-Service CERN Empowers Scientists

    - by Bertrand Matthelié
The European Organization for Nuclear Research (CERN) is one of the world’s largest and most respected centers for scientific research. Founded in 1954 and located near Geneva on the Franco-Swiss border, CERN was one of Europe’s first joint ventures. Today, it has 20 member states. The organization uses the world’s largest and most complex scientific instruments to study fundamental particles and the origin of the universe.

Challenges
- Better support the scientists associated with a CERN research program who selected MySQL as their database.
- Empower users, enabling them to be as self-reliant as possible.
- Minimize complexity and costs for the CERN IT department to support the growing number of MySQL deployments.

Solution
- Delivered a MySQL Database-as-a-Service offering to the CERN employees and the scientists associated with the organization.
- Allowed researchers selecting MySQL for their project to get access to a database instance hosted by the CERN IT department, either from the start or once their application has become critical.
- Implemented the service using Oracle’s server virtualization software, Oracle VM, for increased flexibility and reduced costs.
- Empowered users with a self-service approach, providing them with tools to manage MySQL themselves while handling backups and other basic database administration tasks for them.
- Enabled scientists to rely on MySQL with increased reliability, security and manageability while reducing complexity and minimizing costs.

"The Cloud model has allowed us to deliver a self-service platform to our MySQL users, empowering them while minimizing costs for CERN." Tony Cass, Database Services Group Leader, IT department, CERN.

    Read the article

  • Different fan behaviour in my laptop after upgrade, what to do now?

    - by student
After upgrading from Lubuntu 13.10 to 14.04 the fan of my laptop seems to run much more often than in 13.10. When it runs, it doesn't run continuously but starts and stops every second. fwts fan results in:

Results generated by fwts: Version V14.03.01 (2014-03-27 02:14:17).
Some of this work - Copyright (c) 1999 - 2014, Intel Corp. All rights reserved.
Some of this work - Copyright (c) 2010 - 2014, Canonical.
This test run on 12/05/14 at 21:40:13 on host Linux einstein 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64.
Command: "fwts fan".
Running tests: fan.

fan: Simple fan tests.
--------------------------------------------------------------------------------
Test 1 of 2: Test fan status.
Test how many fans there are in the system. Check for the current status of the fan(s).
PASSED: Test 1, Fan cooling_device0 of type Processor has max cooling state 10 and current cooling state 0.
PASSED: Test 1, Fan cooling_device1 of type Processor has max cooling state 10 and current cooling state 0.
PASSED: Test 1, Fan cooling_device2 of type LCD has max cooling state 15 and current cooling state 10.

Test 2 of 2: Load system, check CPU fan status.
Test how many fans there are in the system. Check for the current status of the fan(s). Loading CPUs for 20 seconds to try and get fan speeds to change.
Fan cooling_device0 current state did not change from value 0 while CPUs were busy.
Fan cooling_device1 current state did not change from value 0 while CPUs were busy.
ADVICE: Did not detect any change in the CPU related thermal cooling device states. It could be that the devices are returning static information back to the driver and/or the fan speed is automatically being controlled by firmware using System Management Mode in which case the kernel interfaces being examined may not work anyway.
================================================================================
3 passed, 0 failed, 0 warning, 0 aborted, 0 skipped, 0 info only.
================================================================================
3 passed, 0 failed, 0 warning, 0 aborted, 0 skipped, 0 info only.
Test Failure Summary
================================================================================
Critical failures: NONE
High failures: NONE
Medium failures: NONE
Low failures: NONE
Other failures: NONE

Test           |Pass |Fail |Abort|Warn |Skip |Info |
---------------+-----+-----+-----+-----+-----+-----+
fan            |    3|     |     |     |     |     |
---------------+-----+-----+-----+-----+-----+-----+
Total:         |    3|    0|    0|    0|    0|    0|
---------------+-----+-----+-----+-----+-----+-----+

Here is the output of lsmod:

Module                  Size  Used by
i8k                    14421  0
zram                   18478  2
dm_crypt               23177  0
gpio_ich               13476  0
dell_wmi               12761  0
sparse_keymap          13948  1 dell_wmi
snd_hda_codec_hdmi     46207  1
snd_hda_codec_idt      54645  1
rfcomm                 69160  0
arc4                   12608  2
dell_laptop            18168  0
bnep                   19624  2
dcdbas                 14928  1 dell_laptop
bluetooth             395423  10 bnep,rfcomm
iwldvm                232285  0
mac80211              626511  1 iwldvm
snd_hda_intel          52355  3
snd_hda_codec         192906  3 snd_hda_codec_hdmi,snd_hda_codec_idt,snd_hda_intel
snd_hwdep              13602  1 snd_hda_codec
snd_pcm               102099  3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel
snd_page_alloc         18710  2 snd_pcm,snd_hda_intel
snd_seq_midi           13324  0
snd_seq_midi_event     14899  1 snd_seq_midi
snd_rawmidi            30144  1 snd_seq_midi
coretemp               13435  0
kvm_intel             143060  0
kvm                   451511  1 kvm_intel
snd_seq                61560  2 snd_seq_midi_event,snd_seq_midi
joydev                 17381  0
serio_raw              13462  0
iwlwifi               169932  1 iwldvm
pcmcia                 62299  0
snd_seq_device         14497  3 snd_seq,snd_rawmidi,snd_seq_midi
snd_timer              29482  2 snd_pcm,snd_seq
lpc_ich                21080  0
cfg80211              484040  3 iwlwifi,mac80211,iwldvm
yenta_socket           41027  0
pcmcia_rsrc            18407  1 yenta_socket
pcmcia_core            23592  3 pcmcia,pcmcia_rsrc,yenta_socket
binfmt_misc            17468  1
snd                    69238  17 snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_hda_codec_idt,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device,snd_seq_midi
soundcore              12680  1 snd
parport_pc             32701  0
mac_hid                13205  0
ppdev                  17671  0
lp                     17759  0
parport                42348  3 lp,ppdev,parport_pc
firewire_ohci          40409  0
psmouse               102222  0
sdhci_pci              23172  0
sdhci                  43015  1 sdhci_pci
firewire_core          68769  1 firewire_ohci
crc_itu_t              12707  1 firewire_core
ahci                   25819  2
libahci                32168  1 ahci
i915                  783485  2
wmi                    19177  1 dell_wmi
i2c_algo_bit           13413  1 i915
drm_kms_helper         52758  1 i915
e1000e                254433  0
drm                   302817  3 i915,drm_kms_helper
ptp                    18933  1 e1000e
pps_core               19382  1 ptp
video                  19476  1 i915

I tried one answer to the similar question "loud fan on Ubuntu 14.04" and created an /etc/i8kmon.conf like the following:

# Run as daemon, override with --daemon option
set config(daemon) 1
# Automatic fan control, override with --auto option
set config(auto) 1
# Status check timeout (seconds), override with --timeout option
set config(timeout) 2
# Report status on stdout, override with --verbose option
set config(verbose) 1
# Temperature thresholds: {fan_speeds low_ac high_ac low_batt high_batt}
set config(0) {{0 0} -1 55 -1 55}
set config(1) {{0 1} 50 60 55 65}
set config(2) {{1 1} 55 80 60 85}
set config(3) {{2 2} 70 128 75 128}

With this setup the fan goes on even if the temperature is below 50 degrees Celsius (I don't see a pattern). However, I get the impression that the CPU gets hotter on average than without this file. What changes from 13.10 to 14.04 may be responsible for this? If this is a bug, against which package should I report it?

    Read the article

  • MVVM load data during or after ViewModel construction?

    - by mkmurray
    My generic question is as the title states, is it best to load data during ViewModel construction or afterward through some Loaded event handling? I'm guessing the answer is after construction via some Loaded event handling, but I'm wondering how that is most cleanly coordinated between ViewModel and View? Here's more details about my situation and the particular problem I'm trying to solve: I am using the MVVM Light framework as well as Unity for DI. I have some nested Views, each bound to a corresponding ViewModel. The ViewModels are bound to each View's root control DataContext via the ViewModelLocator idea that Laurent Bugnion has put into MVVM Light. This allows for finding ViewModels via a static resource and for controlling the lifetime of ViewModels via a Dependency Injection framework, in this case Unity. It also allows for Expression Blend to see everything in regard to ViewModels and how to bind them. So anyway, I've got a parent View that has a ComboBox databound to an ObservableCollection in its ViewModel. The ComboBox's SelectedItem is also bound (two-way) to a property on the ViewModel. When the selection of the ComboBox changes, this is to trigger updates in other views and subviews. Currently I am accomplishing this via the Messaging system that is found in MVVM Light. This is all working great and as expected when you choose different items in the ComboBox. However, the ViewModel is getting its data during construction time via a series of initializing method calls. This seems to only be a problem if I want to control what the initial SelectedItem of the ComboBox is. Using MVVM Light's messaging system, I currently have it set up where the setter of the ViewModel's SelectedItem property is the one broadcasting the update and the other interested ViewModels register for the message in their constructors. It appears I am currently trying to set the SelectedItem via the ViewModel at construction time, which hasn't allowed sub-ViewModels to be constructed and register yet. What would be the cleanest way to coordinate the data load and initial setting of SelectedItem within the ViewModel? I really want to stick with putting as little in the View's code-behind as is reasonable. I think I just need a way for the ViewModel to know when stuff has Loaded and that it can then continue to load the data and finalize the setup phase. Thanks in advance for your responses.
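Since this is exactly the kind of question a short sketch can clarify, here is a minimal, hypothetical MVVM Light example of deferring the load until the View raises its Loaded event; IDataService, Item, and the property names are illustrative, not taken from the question:

using System.Collections.ObjectModel;
using System.Linq;
using GalaSoft.MvvmLight;
using GalaSoft.MvvmLight.Command;

public class ParentViewModel : ViewModelBase
{
    private readonly IDataService _dataService; // hypothetical data source
    private Item _selectedItem;

    public ObservableCollection<Item> Items { get; private set; }
    public RelayCommand LoadedCommand { get; private set; }

    public Item SelectedItem
    {
        get { return _selectedItem; }
        set
        {
            _selectedItem = value;
            RaisePropertyChanged("SelectedItem");
            MessengerInstance.Send(value); // sub-ViewModels registered in their ctors
        }
    }

    public ParentViewModel(IDataService dataService)
    {
        _dataService = dataService;
        // Nothing is loaded in the constructor; the View invokes this command
        // from its Loaded event, after the whole visual tree (and therefore
        // every sub-ViewModel) has been constructed and registered.
        LoadedCommand = new RelayCommand(() =>
        {
            Items = new ObservableCollection<Item>(_dataService.GetItems());
            RaisePropertyChanged("Items");
            SelectedItem = Items.FirstOrDefault(); // now safe to broadcast
        });
    }
}

The View side stays nearly code-behind free: a single line such as Loaded += (s, e) => ((ParentViewModel)DataContext).LoadedCommand.Execute(null);, or the EventToCommand behavior that ships with the MVVM Light Extras library, is enough to trigger the command.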

    Read the article

  • Sunrise / set calculations

    - by dassouki
I'm trying to calculate the sunset / sunrise times using Python, based on the link provided below. My results, done through Excel and Python, do not match the real values. Any ideas on what I could be doing wrong? My Excel sheet can be found at http://transpotools.com/sun_time.xls

# Created on 2010-03-28
# @author: dassouki
# @source: [http://williams.best.vwh.net/sunrise_sunset_algorithm.htm][2]
# @summary: this is based on the Nautical Almanac Office, United States
#   Naval Observatory.

import math, sys


class TimeOfDay(object):

    def calculate_time(self, in_day, in_month, in_year, lat, long,
                       is_rise, utc_time_zone):
        # is_rise is a bool: when it's True it indicates rise,
        # and when it's False it indicates setting time.

        # set zenith
        zenith = 96
        # official     = 90 degrees 50'
        # civil        = 96 degrees
        # nautical     = 102 degrees
        # astronomical = 108 degrees

        # 1- calculate the day of year
        n1 = math.floor(275 * in_month / 9)
        n2 = math.floor((in_month + 9) / 12)
        n3 = (1 + math.floor(in_year - 4 * math.floor(in_year / 4) + 2) / 3)
        new_day = n1 - (n2 * n3) + in_day - 30
        print "new_day ", new_day

        # 2- calculate rising / setting time
        if is_rise:
            rise_or_set_time = new_day + ((6 - (long / 15)) / 24)
        else:
            rise_or_set_time = new_day + ((18 - (long / 15)) / 24)
        print "rise / set", rise_or_set_time

        # 3- calculate sun mean anomaly
        sun_mean_anomaly = (0.9856 * rise_or_set_time) - 3.289
        print "sun mean anomaly", sun_mean_anomaly

        # 4- calculate true longitude
        true_long = (sun_mean_anomaly
                     + (1.916 * math.sin(math.radians(sun_mean_anomaly)))
                     + (0.020 * math.sin(2 * math.radians(sun_mean_anomaly)))
                     + 282.634)
        print "true long ", true_long

        # make sure true_long is within 0, 360
        if true_long < 0:
            true_long = true_long + 360
        elif true_long > 360:
            true_long = true_long - 360
        print "true long (360 if) ", true_long

        # 5- calculate s_r_a (sun right ascension)
        s_r_a = math.degrees(math.atan(0.91764 * math.tan(math.radians(true_long))))
        print "s_r_a is ", s_r_a

        # make sure it's between 0 and 360
        # (note: the second branch tests true_long, not s_r_a)
        if s_r_a < 0:
            s_r_a = s_r_a + 360
        elif true_long > 360:
            s_r_a = s_r_a - 360
        print "s_r_a (modified) is ", s_r_a

        # s_r_a has to be in the same quadrant as true_long
        true_long_quad = (math.floor(true_long / 90)) * 90
        s_r_a_quad = (math.floor(s_r_a / 90)) * 90
        s_r_a = s_r_a + (true_long_quad - s_r_a_quad)
        print "s_r_a (quadrant) is ", s_r_a

        # convert s_r_a to hours
        s_r_a = s_r_a / 15
        print "s_r_a (to hours) is ", s_r_a

        # 6- calculate sun declination in terms of cos and sin
        sin_declanation = 0.39782 * math.sin(math.radians(true_long))
        cos_declanation = math.cos(math.asin(sin_declanation))
        print " sin/cos declanations ", sin_declanation, ", ", cos_declanation

        # sun local hour
        # (note: check this parenthesization against step 7a of the source
        # algorithm; see the formula quoted after this listing)
        cos_hour = (math.cos(math.radians(zenith))
                    - (sin_declanation * math.sin(math.radians(lat)))
                    / (cos_declanation * math.cos(math.radians(lat))))
        print "cos_hour ", cos_hour

        # extreme north / south
        if cos_hour > 1:
            print "Sun Never Rises at this location on this date, exiting"
            # sys.exit()
        elif cos_hour < -1:
            print "Sun Never Sets at this location on this date, exiting"
            # sys.exit()
        print "cos_hour (2)", cos_hour

        # 7- sunrise / sunset local time calculations
        if is_rise:
            sun_local_hour = (360 - math.degrees(math.acos(cos_hour))) / 15
        else:
            sun_local_hour = math.degrees(math.acos(cos_hour)) / 15
        print "sun local hour ", sun_local_hour

        sun_event_time = (sun_local_hour + s_r_a
                          - (0.06571 * rise_or_set_time) - 6.622)
        print "sun event time ", sun_event_time

        # final result
        time_in_utc = sun_event_time - (long / 15) + utc_time_zone
        return time_in_utc


# test through main
def main():
    print "Time of day App "
    # test: Fredericton, NB; expected answer: 7:34 am
    # (Fredericton is at roughly lat 45.9 N, long -66.6 W; the values
    # below appear to have the signs swapped)
    long = 66.6
    lat = -45.9
    utc_time = -4
    d = 3
    m = 3
    y = 2010
    is_rise = True
    tod = TimeOfDay()
    print "TOD is ", tod.calculate_time(d, m, y, lat, long, is_rise, utc_time)

if __name__ == "__main__":
    main()
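For reference, the hour-angle step (7a) of the linked algorithm is usually written with the whole difference in the numerator:

cosH = (cos(zenith) - sin(declination) * sin(latitude)) / (cos(declination) * cos(latitude))

Comparing that against the cos_hour expression above (where only the sin term is divided), and double-checking the latitude/longitude signs in main(), are good places to start debugging.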

    Read the article

  • How to get dropdown value using jsp:useBean and jsp:setProperty?

    - by littlevahn
I have a rather simple form in JSP that looks like this:

<form action="response.jsp" method="POST">
  <label>First Name:</label><input type="text" name="firstName" /><br>
  <label>Last Name:</label><input type="text" name="lastName" /><br>
  <label>Email:</label><input type="text" name="email" /><br>
  <label>Re-enter Email:</label><input type="text" name="emailRe" /><br>
  <label>Address:</label><input type="text" name="address" /><br>
  <label>Address 2:</label><input type="text" name="address2" /><br>
  <label>City:</label><input type="text" name="city" /><br>
  <label>Country:</label>
  <select name="country">
    <option value="0">--Country--</option>
    <option value="1">United States</option>
    <option value="2">Canada</option>
    <option value="3">Mexico</option>
  </select><br>
  <label>Phone:</label><input type="text" name="phone" /><br>
  <label>Alt Phone:</label><input type="text" name="phoneAlt" /><br>
  <input type="submit" value="submit" />
</form>

But when I try to access the value of the select box in my Java class I get null. I've tried reading it in as a String and as an array of Strings; neither, though, seems to grab the right value. The response.jsp looks like this:

<%@ page language="java" %>
<%@ page import="java.util.*" %>
<%@ page contentType="text/html" pageEncoding="UTF-8" %>
<%! %>
<jsp:useBean id="formHandler" class="validation.RegHandler" scope="request">
  <jsp:setProperty name="formHandler" property="*" />
</jsp:useBean>
<% if (formHandler.validate()) { %>
  <jsp:forward page="success.jsp"/>
<% } else { %>
  <jsp:forward page="retryReg.jsp"/>
<% } %>

I already have JavaScript validation in place, but I wanted to make sure I covered validation and checking for non-JS users. The RegHandler just uses the name field to refer to the value in the form. Any idea how I could access the select box's value?

    Read the article

  • getDate with Jquery Datepicker

    - by matt
I am trying to get the date from my implementation of the jQuery date picker, add it to a string, and display the resulting image in my div. Something, however, is just not working. Can anyone check out the code and have a look at it? The code is supposed to take the date from the date picker and combine it into a string that holds the markup for an image tag; images are located in /images and named in the format aYY-MM-DD.png. I'm new to this date picker and can't quite get it down yet.

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
<link type="text/css" href="css/ui-lightness/jquery-ui-1.8.custom.css" rel="stylesheet" />
<script type="text/javascript" src="js/jquery-1.4.2.min.js"></script>
<script type="text/javascript" src="js/jquery-ui-1.8.custom.min.js"></script>
<script type="text/javascript">
$(function(){
    // Datepicker
    $('#datepicker').datepicker({
        dateFormat: 'yy-mm-dd',
        inline: true,
        minDate: new Date(2010, 1 - 1, 1),
        maxDate: new Date(2010, 12 - 1, 31),
        altField: '#datepicker_value',
    });

    // hover states on the static widgets
    $('#dialog_link, ul#icons li').hover(
        function() { $(this).addClass('ui-state-hover'); },
        function() { $(this).removeClass('ui-state-hover'); }
    );
});

// var img_date = .datepicker('getDate');
var day1 = $("#datepicker").datepicker('getDate').getDate();
var month1 = $("#datepicker").datepicker('getDate').getMonth() + 1;
var year1 = $("#datepicker").datepicker('getDate').getFullYear();
var fullDate = year1 + "-" + month1 + "-" + day1;
var str_output = "<h1><center><img src=\"/images/a" + fullDate + ".png\"></center></h1><br/><br>";
page_output.innerHTML = str_output; // writing the results to the div element (page_out)
</script>
</head>
<body style="background-color:#000;color:#fff;margin: auto auto;">
<!-- Datepicker -->
<div id="datepicker"></div>
<!-- Highlight / Error -->
<p>Date Value: <input type="text" id="datepicker_value" /></p>
<div id="page_output" style="text-align:center; margin-top:80px; margin-bottom:20px; "></div>
</body>

    Read the article

  • Android CheckBox -- Restoring State After Screen Rotation

    - by Jared M
I have come across some very unexpected (and incredibly frustrating) functionality while trying to restore the state of a list of CheckBoxes after a screen rotation. I figured I would first try to give a textual explanation without the code, in case someone is able to determine a solution without all the gory details. If anyone needs more details I can post the code. I have a scrolling list of complex Views that contain CheckBoxes. I have been unsuccessful in restoring the state of these check boxes after a screen rotation. I have implemented onSaveInstanceState and have successfully transferred the list of selected check boxes to the onCreate method. This is handled by passing a long[] of database ids to the Bundle. In onCreate() I check the Bundle for the array of ids. If the array is there, I use it to determine which check boxes to check when the list is being built. I have created a number of test methods and have confirmed that the check boxes are being set correctly, based on the id array. As a last check, I am checking the states of all check boxes at the very end of onCreate(). Everything looks good... unless I rotate the screen. When I rotate the screen, one of two things happens: 1) If any number of the check boxes are selected, except for the last one, all check boxes are off after a rotation. 2) If the last check box is checked before rotation, then all check boxes are checked after rotation. Like I said, I check the state of the boxes at the very end of my onCreate(). The thing is, the state of the boxes at the end of onCreate is correct based on what I selected before the rotation. However, the state of the boxes on the screen does not reflect this. In addition, I have implemented each check box's setOnCheckChangedListener() and I have confirmed that my check boxes' states are being altered after my onCreate method returns. Anyone have an idea of what is going on? Why would the state of my check boxes change after my onCreate method returns? Thanks in advance for your help. I have been trying to debug this for a couple of days now. After I found that my check boxes were apparently changing somewhere outside my own code, I figured it was time to ask around.

    Read the article

< Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >