Search Results


  • Macbook (non-pro, late 2010) Power Management Issues relating to Nvidia-Current drivers

    - by gbvitaobscura
    I have tried many potential solutions, but to no avail (this is going to be a long post). Essentially, when I first install Ubuntu (I have tested 11.10 through the 12.04 beta), I can change the brightness of the MacBook backlight. One of the first things I usually do once my machine finishes installing is enter "sudo apt-get install nvidia-current" into the terminal to get my Nvidia graphics card set up. Before I install nvidia-current, power management works flawlessly. After installing nvidia-current, my graphics driver works mostly properly (more on that later). When I press F1 or F2, Notify OSD pops up and shows an icon for the backlight along with a meter which changes according to how much I press F1 or F2, but the backlight does not change brightness at all. Two supplementary facts: when I am running off battery power the backlight never dims, and changing the backlight brightness through the terminal does not work (it recognizes the command but changes nothing).

    Things which did not work:
    1. editing xorg.conf
    2. a Python script/hack
    3. editing GRUB

    Things which did work:
    1. removing nvidia-current

    Final piece of information: although my graphics card works in every way except power management, it cannot be found through System Settings/Details (Graphics: Unknown). I'm using the NVIDIA GeForce 320M. The output of lspci is:

        00:00.0 Host bridge: NVIDIA Corporation MCP89 HOST Bridge (rev a1)
        00:00.1 RAM memory: NVIDIA Corporation MCP89 Memory Controller (rev a1)
        00:01.0 RAM memory: NVIDIA Corporation Device 0d6d (rev a1)
        00:01.1 RAM memory: NVIDIA Corporation Device 0d6e (rev a1)
        00:01.2 RAM memory: NVIDIA Corporation Device 0d6f (rev a1)
        00:01.3 RAM memory: NVIDIA Corporation Device 0d70 (rev a1)
        00:02.0 RAM memory: NVIDIA Corporation Device 0d71 (rev a1)
        00:02.1 RAM memory: NVIDIA Corporation Device 0d72 (rev a1)
        00:03.0 ISA bridge: NVIDIA Corporation MCP89 LPC Bridge (rev a2)
        00:03.1 RAM memory: NVIDIA Corporation MCP89 Memory Controller (rev a1)
        00:03.2 SMBus: NVIDIA Corporation MCP89 SMBus (rev a1)
        00:03.3 RAM memory: NVIDIA Corporation MCP89 Memory Controller (rev a1)
        00:03.4 Co-processor: NVIDIA Corporation MCP89 Co-Processor (rev a1)
        00:04.0 USB controller: NVIDIA Corporation MCP89 OHCI USB 1.1 Controller (rev a1)
        00:04.1 USB controller: NVIDIA Corporation MCP89 EHCI USB 2.0 Controller (rev a2)
        00:06.0 USB controller: NVIDIA Corporation MCP89 OHCI USB 1.1 Controller (rev a1)
        00:06.1 USB controller: NVIDIA Corporation MCP89 EHCI USB 2.0 Controller (rev a2)
        00:08.0 Audio device: NVIDIA Corporation MCP89 High Definition Audio (rev a2)
        00:09.0 Ethernet controller: NVIDIA Corporation MCP89 Ethernet (rev a1)
        00:0a.0 IDE interface: NVIDIA Corporation MCP89 SATA Controller (rev a2)
        00:0b.0 RAM memory: NVIDIA Corporation Device 0d75 (rev a1)
        00:15.0 PCI bridge: NVIDIA Corporation Device 0d9b (rev a1)
        00:17.0 PCI bridge: NVIDIA Corporation MCP89 PCI Express Bridge (rev a1)
        01:00.0 Network controller: Broadcom Corporation BCM43224 802.11a/b/g/n (rev 01)
        02:00.0 VGA compatible controller: NVIDIA Corporation Device 08a0 (rev a2)

    There is a very similar bug on Launchpad, which I have posted below.
    https://bugs.launchpad.net/ubuntu/+source/nvidia-graphics-drivers/+bug/764195
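    For reference, here is a minimal shell sketch of driving the backlight directly through sysfs, assuming the driver registers a device under /sys/class/backlight (the directory name, such as acpi_video0 or nvidia_backlight, varies by driver and is an assumption here):

        #!/bin/bash
        # Sketch: list any registered backlight interfaces and their ranges.
        # Assumption: the driver exposes a device under /sys/class/backlight.
        for dev in /sys/class/backlight/*; do
            echo "$dev: max=$(cat "$dev/max_brightness") current=$(cat "$dev/brightness")"
        done

        # Set the first interface found to half of its maximum (needs root).
        dev=$(ls -d /sys/class/backlight/* 2>/dev/null | head -n 1)
        max=$(cat "$dev/max_brightness")
        echo $((max / 2)) | sudo tee "$dev/brightness"

    If the on-screen meter moves but nothing under /sys/class/backlight changes (or no interface is listed at all), the driver is not wiring the brightness keys to a real backlight device, which matches the symptom described above.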

  • ZTE USB Modem AC2736, connection not possible in Ubuntu 12.04.1 LTS

    - by Fredo
    It's a long post, but it covers nearly all my experiments and the changes I made to my Network Manager. I hope the information is complete; if there are still questions, more can be provided.

    I have a ZTE AC2736 USB modem (a CDMA modem) which worked fine in Ubuntu 10.04/11.10. I recently switched to 12.04.1 (Precise Pangolin). After the switch, the first issue I faced was connecting to the internet using my USB modem (ISP: Reliance, brand: Netconnect). I tried to run the drivers provided by Reliance, but they are old and don't support kernels above 2.6.30. Since the source code was not included with the 12.04 ISO image, I couldn't compile the files provided with that driver.

    lsusb does detect it as a modem, with output similar to 19d2:fff1 ZTE CDMA Technologies Inc. (or similar, as I didn't note it down). If it is detected as USB storage it shows 19d2:fff5 (according to a few online forums; I may be wrong here).

    I used Network Manager and configured the modem to dial #777 (the default) with the ISP-provided username/password combination. It tries to connect to the internet (3-4 times automatically) but fails to get online. Once, in the morning hours, I was able to connect, and the message 'registered on CDMA home network' flashed. I was able to run an update; the kernel was updated to 3.0.2-pae or something similar (I can find out if required). I surfed the net for about 2 hours before restarting. After the restart, the modem was again unable to get me online. I kept trying many times and tried changing the settings in Network Manager. One evening after dark I was able to connect to the network, with the same message flash, 'registered on CDMA home network' (the message was similar; I'm not being precise here, sorry). I was able to surf the net for nearly 3 hours before I switched off my laptop. I have not been able to get online since that day; it's been 3 days now. I'll test the observed late/early-hours theory again soon, as mentioned below.

    Laptop configuration:
    - Make: Lenovo
    - Model: B480
    - Processor: Intel B950
    - RAM: 4G DDR3
    - HDD: 500G
    - Broadcom Wireless/Bluetooth/Ethernet LAN
    - OS: FreeDOS / Ubuntu 12.04 LTS (dual boot), kernel 3.0.2-pae

    Observation: I'm able to connect to the internet in those hours when the speed is generally high (low usage by other wireless network users), such as early mornings or late nights. This is strange, as the connection should not depend on bandwidth usage.

    Any help fixing this issue would be appreciated, before I decide to change the OS or ISP.
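    As a debugging aid, here is a sketch of the usual command-line path for CDMA sticks of this kind: force the device out of storage mode (19d2:fff5) into modem mode (19d2:fff1) with usb_modeswitch, then dial #777 with wvdial. The device IDs come from the post above; the standard-eject switching method and the /dev/ttyUSB0 port name are assumptions that may need adjusting for this exact model:

        #!/bin/bash
        # Sketch: switch the stick from USB-storage mode to modem mode.
        # 19d2:fff5 (storage) and 19d2:fff1 (modem) are the IDs reported above;
        # the standard-eject method (-K) is an assumption for this model.
        sudo usb_modeswitch -v 0x19d2 -p 0xfff5 -K

        # Minimal wvdial configuration for a Reliance CDMA dial-up (#777).
        # /dev/ttyUSB0 is an assumption; check dmesg for the actual port.
        sudo tee /etc/wvdial.conf <<'EOF'
        [Dialer Defaults]
        Modem = /dev/ttyUSB0
        Baud = 460800
        Init1 = ATZ
        Phone = #777
        Username = <your Reliance number>
        Password = <your password>
        Stupid Mode = yes
        EOF

        sudo wvdial

    If wvdial connects reliably while Network Manager does not, the problem is in the Network Manager configuration rather than in the modem or the signal.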

  • A Slice of Raspberry Pi

    - by Phil Factor
    Guest editorial for the ITPro/SysAdmin newsletter.

    The Raspberry Pi Foundation has done a superb design job on their new $35 network-enabled Linux computer. This tiny machine, incorporating an ARM processor on a Broadcom BCM2835 multimedia chip, aims to put the fun back into learning computing. The public response has been overwhelmingly positive.

    Note that aim: "…to put the fun back". Education in Information Technology is in dire straits. It always has been, but it seems to have deteriorated further still, even in the face of improved provision of equipment.

    In many countries, the government controls the curriculum. It predicted a shortage in office-based IT skills, and so geared the ICT curriculum toward mind-numbing training in word-processing and spreadsheet skills. Instead, the shortage has turned out to be in people with an engineering mindset, who can solve problems with whatever technologies are available and learn new techniques quickly in a rapidly changing field.

    In retrospect, the assumption that specific training was required, rather than an education, was an idiotic response to the arrival of mainstream information technology. As a result, ICT became a disaster area, which discouraged a generation of youngsters from a career in IT, and thereby led directly to the shortage of people with the skills that are required to exploit the potential of Information Technology.

    Raspberry Pi aims to reverse the trend. This is a rig that is geared to fast graphics in high resolution. It is no toy. It should be a superb games machine. However, the use of Fedora, Debian, or Arch Linux ARM shows the more serious educational intent behind the Foundation's work. It looks like it will even do some office work too!

    So, get hold of any power supply that provides a 5VDC source at the required 700mA; an old BlackBerry charger will do or, alternatively, it will run off four AA cells. You'll need a USB hub to support the mouse and keyboard, and maybe a hard drive. You'll want a DVI monitor (with audio out) or a TV (sound and video). You'll also need to be able to cope with wired 10/100 Ethernet if you want networking.

    With this lot assembled, stick the paraphernalia on the back of the HDTV with Blu-Tack, get a nice keyboard, and you have a classy Linux-based home computer. The major cost is in the TV and the keyboard. If you're not already writing software for this platform then maybe, at a time when some countries are talking of orders in the millions, you should consider it.

  • A Community Cure for a String Splitting Headache

    - by Tony Davis
    A heartwarming tale of dogged perseverance and community collaboration to solve some SQL Server string-related headaches.

    Michael J Swart posted a blog this week that had me smiling in recognition and agreement, describing how an inquisitive developer or DBA deals with a problem. It's a three-step process, starting with discomfort and anxiety: a feeling that one doesn't know as much about one's chosen specialized subject as previously thought. It progresses through a phase of intense research and learning until finally one achieves breakthrough, blessed relief and renewed optimism. In this case, the discomfort was provoked by the mystery of massively high CPU when searching Unicode strings in SQL Server. Michael explored the problem via Stack Overflow, Google and Twitter #sqlhelp, finally arriving at a resolution and a blog post that shared what he learned.

    Perfect; except that sometimes you have to be prepared to share what you've learned so far while still mired in the phase of nagging discomfort. A good recent example can be found on our own blogs. Despite being a loud advocate of the lightning-fast T-SQL-based string-splitting techniques, honed to near perfection over many years by Jeff Moden and others, Phil Factor retained a dogged conviction that, in theory, shredding element-based XML using XQuery ought to be even more efficient for splitting a string to create a table. After some careful testing, he found instead that the XML way performed and scaled miserably by comparison. Somewhat subdued, and with a nagging feeling that perhaps he was still missing "something", he posted his findings. What happened next was a joy to behold: the community jumped in to suggest subtle changes in approach, using an attribute-based rather than element-based XML list, and tweaking the XQuery shredding. The result was performance and scalability that surpassed all other techniques.

    I asked Phil how quickly he would have arrived at the real breakthrough on his own. His candid answer was "never". Both are great examples of the power of community learning, and the latter in particular of the importance of being brave enough to parade one's ignorance. Perhaps Jeff Moden will accept the string-splitting gauntlet one more time. To quote the great man: you've just got to love this community! If you've an interesting tale to tell about being helped to a significant breakthrough by the community, I'd love to hear about it.

    Cheers,
    Tony.

  • How far is too far?

    - by David Dorf
    Previously I've talked about Safeway's personalized pricing as well as Target's use of analytics to learn about customers. Then last week I read about Orbitz tailoring their hotel offers based on the browser used. (Orbitz claims that Mac users are 40% more likely than PC users to book four- or five-star hotels.) So just how far is too far when tailoring the retail experience?

    When most consumers read about these types of tactics, they tend to feel violated, as if someone were reading their personal diary. Nobody wants to be tricked into buying things. Walking into a grocery store and seeing crates of apples stacked high looks enticing, but the crates are just for display and the apples may be over a year old. Even though it's much cheaper to print markdown tags, many retailers manually write the price tags because consumers think the deal is better if the price is hand-written.

    The technology already exists to personalize prices and experiences for consumers. People get upset thinking they paid more for something than a neighbor, but it already happens all the time with cars, flights, and the use of loyalty programs and coupons. There are many variables at play for any purchase. The only difference is that the customer segments are getting smaller, sometimes reaching a size of one.

    There are two ways to look at this: retailers have always manipulated the environment to get consumers to buy more, or retailers are getting better at tuning the shopping experience for consumers. I choose the latter, and so do most consumers, by spending their money in the stores they like. Consumers like to see fresh flowers at the entrance to the grocery store, and they like to see specials scrawled on chalkboards.

    The key is making sure that consumers benefit from the experience as well. I'm willing to give up some personal information in exchange for discounts and more relevant marketing, and the next generation of shoppers is even less concerned about privacy. Retailers need to use all the tools available to differentiate their offers and connect with their customers. So if Orbitz wants to put three-star hotels at the top of the list for me because I'm using a PC, that's fine by me.

  • Breaking up the Workday – Overcoming the Workaholic Syndrome

    - by dwahlin
    Hi, my name's Dan Wahlin, and I'm a workaholic – I admit it. It's good in that I get a lot done, but it also has a lot of cons that I'm not proud of. I literally can't watch TV without feeling like I should be doing something more productive (although I have no problem going to see movies at a theater or watching sporting events – that's my escape, I guess). On vacation it's sometimes difficult the first few days to just "let go" of work and enjoy the time with my family. I always feel like I should be checking email and following up with different business projects. Fortunately, my wife knows me really well after 17 years of marriage and "gently" restricts my usage of laptops and other gadgets while we're out. She also reminds me that constantly burying my face in gadgets just isn't cool and shows a distinct lack of self-control.

    On a given day I typically put in between 12 (at a minimum) and 16-18 hours working on projects. My company does .NET consulting (ASP.NET/jQuery, SharePoint and Silverlight), but we also do a lot in the training space, so there's always a client project, some new courseware or some other deliverable that has to be worked on. My normal process for handling that is to just work my butt off and see how much I can get done. That process has worked well for a long time, but when you start realizing that your happiness comes from how much work you accomplished that day, then you have a problem. That's especially true if you have kids (which I do… two awesome boys). It's almost as if working more hours feels like being more successful, which is of course ridiculous. It may actually mean that I'm too distracted or disorganized.

    Lately I've realized that while I'm still productive and always meet my deadlines, I'm really burnt out by the afternoon and have lost some of the excitement I used to have. Part of that's normal, I think, given that I've been doing this for close to 15 years now, but in thinking it through more I realized that I just need to get away from the desk and take a break.

    By far the happiest time of my life was my childhood. Part of that was due to having awesome parents, having far less responsibility (a big factor, I suspect), being able to hang out with friends at school, playing sports, games, etc., but I think a big part of the overall happiness came from being outside a lot. I lived on my bike as a little kid, and as I grew up I shared time between riding an ATV all over the place, shooting hoops on the basketball court, playing golf, and working on a golf course (all outside work, of course). Being a software developer and trainer, I generally spend 95% or more of my day indoors and only see the sun when driving from place to place or by looking out the window (that's sad, because I live in a suburb of Phoenix, AZ, where it's nearly always sunny). I haven't looked into any scientific studies on the matter, but I'd be willing to bet there's a direct correlation between overall productivity/happiness and being outside some throughout the day (sunny or not). But I wasn't sure what to do about it, since I do have a lot of deadlines I need to meet, after all.

    While talking with my wife last night, I mentioned how I feel like I'm in a rut and want to get the "fun" back that I used to have. She immediately said that I need to start making time for breaks (a quick fact: she's a lot smarter than me and nearly always right). Of course my first thought was that I'd be less productive taking breaks. If I spend 2 hours just relaxing, then I'm losing 2 hours of work. But I thought about it more and realized that I'm probably less productive when I work 10+ hours and take less than 30 minutes of lunch break to relax a little. I bet my brain is screaming, "Please let me relax a little so I can figure out these problems you're trying to resolve!"

    So, starting today, I'm going to try to break the workaholic habit and spend time outside of the office. That could mean sitting around outside, working out, golfing, or whatever. I've decided that no gadgets are allowed during that time and that I shouldn't work for more than 4 hours straight without taking a break. I have no idea how my little "break the workaholic syndrome" experiment will go or how long it will last, but I'd be very interested in hearing from others on how they keep fresh and focused without working themselves to death. If you have any specific ideas, techniques or practices you follow, please share them. There's a lot more to life than work, and some of us (I'm thinking of myself specifically) need to take a long, hard look at what kind of balance we currently have. I'd hate to look back at my life when I'm 80 years old and say, "The only thing I did was work – I missed out on life!"

  • Should I be an algorithm developer, or a Java web frameworks type developer?

    - by Derek
    So, as I see it, there are really two kinds of developers: those that do frameworks, web services, pretty-making front ends, etc., and then there are developers that write the algorithms that solve the problem. That is, unless the problem is "display this raw data in some meaningful way," in which case the framework/web developer guy might be doing both jobs.

    So my basic problem is this. I have been an algorithms kind of software developer for a few years now. I double majored in Math and Computer Science, and I have a master's in systems engineering. I have never done any web-dev work, with the exception of a couple minor jobs and some hobby-level stuff. I have been job interviewing lately, and this is what happens:

    1. The job is listed as "programmer - 5 years of experience with the following: C/C++, Java, Perl, Ruby, ant, blah blah blah"
    2. A recruiter calls me and says they want me to come in for an interview.
    3. In the interview, I find out they have some web services development, blah blah blah.
    4. When asked in the interview, I talk about my experience doing algorithms, optimization, blah blah… but say I'm very willing to learn new languages, frameworks, etc.
    5. I get a call back saying "we didn't think you were a fit for the job you interviewed with, but our algorithm team got wind of you and wants to bring you on."

    This has happened to me a couple times now: see a vague-ish job description looking for a "programmer", go in, find out they are doing some sort of web-based tool, maybe with some hardcore algorithms running in the background, interview with people for the web-based tool, but get an offer from the algorithms people.

    So the question is: which job is the better job? I basically just want to get a wide breadth of experience at this level of my career, but are algorithm developers so much in demand? Even more so than all these supposedly hot, in-demand web developer guys? Will I be OK in the long run if I go into the niche of math-based algorithm development, with just little to no, or hobby-level, web-dev experience? I basically just don't want to pigeonhole myself this early. My salary is already starting to get pretty high, and I can see a company later on saying "we really need a web developer, but we'll hire this 50k/year college guy instead of this 100k/year experienced algorithm guy."

    Cliffs notes: I have been doing algorithm development. I consider myself to be a "good programmer." I would have no problem picking up web technologies and those sorts of frameworks. During job interviews, I keep getting "we think you've got a good skillset - talk to our algorithm team" instead of companies wanting me to learn new skills on the job to do their web services or whatever other new technology they are using.

    Edit: Whenever I am talking about algorithm development here, I am talking about the code that produces the answer. Typically I think of more math-based algorithms: solving a financial problem, solving a finite element method, image processing, etc.

  • What are the software design essentials? [closed]

    - by Craig Schwarze
    I've decided to create a one-page "cheat sheet" of essential software design principles for my programmers. It doesn't explain the principles in any great depth, but is simply there as a reference and a reminder. Here's what I've come up with, and I would welcome your comments. What have I left out? What have I explained poorly? What is there that shouldn't be?

    Basic Design Principles
    - The Principle of Least Surprise – your solution should be obvious, predictable and consistent.
    - Keep It Simple Stupid (KISS) – the simplest solution is usually the best one.
    - You Ain't Gonna Need It (YAGNI) – create a solution for the current problem rather than what might happen in the future.
    - Don't Repeat Yourself (DRY) – rigorously remove duplication from your design and code.

    Advanced Design Principles
    - Program to an interface, not an implementation – Don't declare variables to be of a particular concrete class. Rather, declare them to an interface, and instantiate them using a creational pattern.
    - Favour composition over inheritance – Don't overuse inheritance. In most cases, rich behaviour is best added by instantiating objects, rather than inheriting from classes.
    - Strive for loosely coupled designs – Minimise the interdependencies between objects. They should be able to interact with minimal knowledge of each other via small, tightly defined interfaces.
    - Principle of Least Knowledge – Also called the "Law of Demeter", and colloquially summarised as "Only talk to your friends". Specifically, a method in an object should only invoke methods on the object itself, objects passed as a parameter to the method, any object the method creates, or any component of the object.

    SOLID Design Principles
    - Single Responsibility Principle – Each class should have one well-defined purpose, and only one reason to change. This reduces the fragility of your code, and makes it much more maintainable.
    - Open/Closed Principle – A class should be open to extension, but closed to modification. In practice, this means extracting the code that is most likely to change to another class, and then injecting it as required via an appropriate pattern.
    - Liskov Substitution Principle – Subtypes must be substitutable for their base types. Essentially, get your inheritance right. In the classic example, type square should not inherit from type rectangle, as they have different properties (you can independently set the sides of a rectangle). Instead, both should inherit from type shape.
    - Interface Segregation Principle – Clients should not be forced to depend upon methods they do not use. Don't have fat interfaces; rather, split them up into smaller, behaviour-centric interfaces.
    - Dependency Inversion Principle – There are two parts to this principle: high-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details; details should depend on abstractions. In modern development, this is often handled by an IoC (Inversion of Control) container.

  • Oracle Leader in Transportation Management

    - by John Murphy
    Oracle Named a Leader in the Transportation Management Systems Market by Leading Analyst Firm

    Redwood Shores, Calif. – October 15, 2012

    News Facts
    Gartner, Inc. has placed Oracle Transportation Management in the Leaders Quadrant of its 2012 report, "Magic Quadrant for Transportation Management Systems (TMS)." (1) Gartner Magic Quadrants position vendors within a particular market segment based on their completeness of vision and ability to execute on that vision. According to the report, "Multiple subcomponents make up a comprehensive TMS across planning (for example, load consolidation, routing, mode selection and carrier selection) and execution (for example, tendering loads to carriers, shipment track and trace, and freight audit and payment)."

    Built on a modern, flexible, Internet-based architecture, Oracle Transportation Management is a global transportation and logistics operations system that allows companies to minimize cost, optimize service levels, support sustainability initiatives, and create flexible business process automation within their transportation and logistics networks. With a 26% share of worldwide software revenue for 2011, Oracle is also number one in TMS vendor share according to Gartner's report, "Market Trends: A Golden Opportunity in the Transportation Management System Market, 2012 – 2016." (2)

    Supporting Quote
    "Shippers and logistics service providers face increasingly complex challenges as they try to reduce costs, secure capacity and improve overall freight efficiency," said Derek Gittoes, vice president, logistics product strategy, Oracle. "We believe our high standing in both Gartner reports is a reflection of Oracle's commitment to addressing these challenges by delivering the industry's broadest and deepest transportation management platform. With a flexible and modern platform, we are able to support customers with both basic transportation needs, as well as those with highly complex logistics requirements."

    Supporting Resources
    - Magic Quadrant for Transportation Management Systems
    - Market Trends: A Golden Opportunity in the Transportation Management System Market, 2012 – 2016
    - Oracle Transportation Management

    (1) Gartner, Inc., "Magic Quadrant for Transportation Management Systems," by C. Dwight Klappich, August 23, 2012
    (2) Gartner, Inc., "Market Trends: A Golden Opportunity in the Transportation Management System Market, 2012 – 2016," by Chad Eschinger and C. Dwight Klappich, September 24, 2012.

    About Oracle Applications
    Over 65,000 customers worldwide rely on Oracle's complete, open and integrated enterprise applications to achieve superior results. Oracle provides a secure path for customers to benefit from the latest technology advances that improve the customer software experience and drive better business performance. Oracle Applications Unlimited is Oracle's commitment to customer choice through continuous investment and innovation in current applications offerings. Oracle's next-generation Fusion Applications build upon that commitment, and are designed to work with and evolve Oracle's Applications Unlimited offerings. Oracle's lifetime support policy helps ensure customers will continue to have a choice in upgrade paths, based on their enterprise needs. For more information on the latest Oracle Applications releases go to www.oracle.com/applications

    About Oracle
    Oracle engineers hardware and software to work together in the cloud and in your data center. For more information about Oracle (NASDAQ: ORCL), visit www.oracle.com.
    Trademarks
    Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

    ###

    Karen [email protected]
    Simon Jones, Blanc & [email protected]

  • WebCenter Customer Spotlight: Los Angeles Department of Water and Power

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential and commercial customers in Southern California. The goal of the project was to implement a newly designed web portal to increase customer self-service while reducing transactions via IVR, and to automate many paper-based processes into web-based workflows for its 1.6 million customers. LADWP implemented a self-service portal using Oracle WebCenter Portal and Oracle WebCenter Content, with Oracle SOA Suite for the integration of its complex back-end systems infrastructure. The new portal has received extremely positive feedback, not only from the customers and users of the portal, but also from other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for its innovative solution.

    Company Overview
    Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential and commercial customers in Southern California. LADWP also bills most of these customers for sanitation services provided by another department in the city of Los Angeles.

    Business Challenges
    The goal of the project was to implement a newly designed web portal that is easy to navigate from a web browser and mobile devices, and that serves as the platform for surfacing internet and intranet applications at LADWP. The primary objective of the new portal was to increase customer self-service while reducing transactions via IVR and walk-up, and to automate many paper-based processes into web-based workflows for customers. This includes automation of:
    - Self service, implemented through My Account (bill pay, payment history, bill history, usage analysis, service request management)
    - Financial assistance programs
    - Customer rebate programs
    - Turn off/turn on/transfer of services
    - Outage reporting
    - eNotification (SMS, email)

    Solution Deployed
    LADWP implemented a self-service portal using Oracle WebCenter Portal and Oracle WebCenter Content. Using Oracle SOA Suite, it integrated various back-end systems, including:
    - Oracle Siebel CRM
    - IBM mainframe-based CIS
    - FileNet for document management
    - EBP electronic bill payment system
    - HP Imprint system for BillXML data
    - Other systems, including outage reporting systems, SMS service, etc.

    The new portal's features include:
    - Complete graphical redesign based on best practices in UI design for high usability
    - Customer self-service implemented through My Account (bill pay, payment history, bill history, usage analysis, service request management)
    - Financial assistance programs (CRM, WebCenter)
    - Customer rebate programs (CRM, WebCenter)
    - Turn on/off/transfer of services (commercial and residential)
    - Outage reporting
    - eNotification (SMS, email)
    - Multilingual (English and Spanish), using WebCenter multi-language support
    - Section 508 (ADA) compliance
    - Search, using WebCenter SES (Secured Enterprise Search)
    - Distributed authorship in WebCenter Content
    - Mobile access (any mobile browser)

    Business Results
    The new portal has received extremely positive feedback, not only from customers and users of the portal, but also from other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for its innovative solution.

    Additional Information
    - LADWP OpenWorld presentation
    - Oracle WebCenter Portal
    - Oracle WebCenter Content
    - Oracle SOA Suite

  • Windows Azure Recipe: Big Data

    - by Clint Edmonson
    As the name implies, what we're talking about here is the explosion of electronic data that comes from the huge volumes of transactions, devices, and sensors being captured by businesses today. This data often comes in unstructured formats and/or arrives too fast for us to effectively process in real time. Collectively, we call these the 4 big data V's: Volume, Velocity, Variety, and Variability. These qualities make this type of data best managed by NoSQL systems like Hadoop, rather than by a conventional Relational Database Management System (RDBMS).

    We know that there are patterns hidden inside this data that might provide competitive insight into market trends. The key is knowing when and how to leverage these "NoSQL" tools combined with traditional business tools such as SQL-based relational databases, warehouses, and other business intelligence tools.

    Drivers
    - Petabyte-scale data collection and storage
    - Business intelligence and insight

    Solution
    The sketch below shows one of many big data solutions, using Hadoop's unique highly scalable storage and parallel processing capabilities combined with Microsoft Office's Business Intelligence components to access the data in the cluster.

    Ingredients
    Hadoop – this big data industry heavyweight provides both large-scale data storage infrastructure and a highly parallelized map-reduce processing engine to crunch through the data efficiently. Here are the key pieces of the environment:
    - Pig – a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs.
    - Mahout – a machine learning library with algorithms for clustering, classification and batch-based collaborative filtering that are implemented on top of Apache Hadoop using the map/reduce paradigm.
    - Hive – data warehouse software built on top of Apache Hadoop that facilitates querying and managing large datasets residing in distributed storage. Directly accessible to Microsoft Office and other consumers via add-ins and the Hive ODBC data driver.
    - Pegasus – a peta-scale graph mining system that runs in a parallel, distributed manner on top of Hadoop and that provides algorithms for important graph mining tasks such as Degree, PageRank, Random Walk with Restart (RWR), Radius, and Connected Components.
    - Sqoop – a tool designed for efficiently transferring bulk data between Apache Hadoop and structured data stores such as relational databases.
    - Flume – a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data to HDFS.

    Database – directly accessible to Hadoop via the Sqoop-based Microsoft SQL Server Connector for Apache Hadoop, so data can be efficiently transferred to traditional relational data stores for replication, reporting, or other needs.

    Reporting – provides easily consumable reporting when combined with a database being fed from the Hadoop environment.

    Training
    These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above.
    - Hadoop Learning Resources (20+ tutorials and labs): a huge collection of resources for learning about all aspects of Apache Hadoop-based development on Windows Azure and the Hadoop and Windows Azure ecosystems.
    - SQL Azure (7 labs): Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
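    To make two of the ingredients above concrete, here is a sketch of a typical command-line flow: Sqoop bulk-copying a relational table into HDFS, then Hive putting a SQL-style query over the imported files. The JDBC URL, credentials, table name, and paths are hypothetical placeholders:

        #!/bin/bash
        # Sketch: pull a table out of a relational store into HDFS with Sqoop.
        # Connection string, user, table, and directory are placeholders;
        # -P prompts for the password at runtime.
        sqoop import \
          --connect "jdbc:sqlserver://dbhost:1433;databaseName=sales" \
          --username loader -P \
          --table orders \
          --target-dir /data/raw/orders

        # Sketch: expose the imported files to ad hoc queries via Hive.
        hive -e "
          CREATE EXTERNAL TABLE IF NOT EXISTS orders (id INT, total DOUBLE)
          ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
          LOCATION '/data/raw/orders';
          SELECT COUNT(*) FROM orders;
        "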
    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.

  • Schedule compliance while keeping up technical support and resolving issues

    - by imays
    I am the founder of a small software development company. I developed the flagship product myself, and the company has grown to 14 people. One point of pride is that we have never needed outside investment or loans. The core development team is 5 people: 3 seniors and 2 juniors.

    After the first release, we received many issues from our customers. Most of them are bug reports, customization requests, usage questions and upgrade requests. Issues come in from customers many times every day, each taking anywhere from a little to a lot of our developers' time. Because our product is a software development kit (SDK), most questions can only be answered by our developers, and developers must be involved in resolving bug issues. Estimating the time needed to fix a bug is hard; I fully understand that. However, our developers insist they cannot set a due date for any project, because they are busy every day with technical support and bug fixes driven by customer issues. Of course, they never work overtime.

    I suggested dividing the team into two parts: one focused on development by milestones, the other handling technical support and bug fixes without fixed due dates. Then we could announce our release plan officially. After each release, the two parts would swap roles for the next milestone. However, they said no, because it would be impossible to fully share knowledge and design documents. They still say they cannot set a release date, and they ask me to keep due dates flexible; they will not fix the due date of any milestone.

    Fortunately, our company is not leveraged or investor-funded, so we are not being choked. But I think it is a bad idea to keep this situation. I know the story of the ant and the grasshopper. Our customers are tired of waiting forever for our release dates, and companies have only limited time and money. If a flexible due date without limit were acceptable, would they accept a flexible salary day?

    What is the root cause of our problem? All I want is to set and precisely hit the due date of each milestone without losing our frequent technical support. I think there must be a solution for this situation. Please answer me. Thanks in advance.

    PS. Our project management tools and methods are Trello, a Mantis-like issue tracker, shared calendar software, and Scrum (cards collected into a series of 'small and high completeness' projects).

  • Building Private IaaS with SPARC and Oracle Solaris

    - by ferhat
    A superior enterprise cloud infrastructure with high-performing systems using built-in virtualization!

    We are happy to announce the expansion of the Oracle Optimized Solution for Enterprise Cloud Infrastructure with Oracle's SPARC T-Series servers and Oracle Solaris. Designed, tuned, tested and fully documented, the Oracle Optimized Solution for Enterprise Cloud Infrastructure now offers customers looking to upgrade, consolidate and virtualize their existing SPARC-based infrastructure a proven foundation for private cloud-based services, which can lower TCO by up to 81 percent(1). It delivers faster time to service, reduces deployment time from weeks to days, and can increase system utilization to 80 percent. The Oracle Optimized Solution for Enterprise Cloud Infrastructure can also be deployed at up to 50 percent lower cost over five years than comparable alternatives(2).

    The expanded solution announced today combines Oracle's latest SPARC T-Series servers; Oracle Solaris 11, the first cloud OS; Oracle VM Server for SPARC; Oracle's Sun ZFS Storage Appliance; and Oracle Enterprise Manager Ops Center 12c, which manages all Oracle system technologies, streamlining cloud infrastructure management.

    Thank you to all who stopped by the Oracle booth at the CloudExpo Conference in New York. We were also at the Cloud Boot Camp: Building Private IaaS with Oracle Solaris and SPARC, discussing how this solution can maximize return on investment and help organizations manage costs for their existing infrastructures or for new enterprise cloud infrastructure designs.

    Designed, tuned, and tested, the Oracle Optimized Solution for Enterprise Cloud Infrastructure is a complete architecture for cloud infrastructure, or any virtualized environment, built on proven, documented best practices for deployment and optimization. The solution addresses each layer of the infrastructure stack using Oracle's powerful SPARC T-Series as well as x86 servers, with storage, network, virtualization, and management configurations that provide a robust, flexible, and balanced foundation for your enterprise applications and databases.

    For more information, visit Oracle Optimized Solution for Enterprise Cloud Infrastructure.
    - Solution Brief: Accelerating Enterprise Cloud Infrastructure Deployments
    - White Paper: Reduce Complexity and Accelerate Enterprise Cloud Infrastructure Deployments
    - Technical White Paper: Enterprise Cloud Infrastructure on SPARC

    (1) Comparison based on current SPARC server customers consolidating existing installations, including Sun Fire E4900, Sun Fire V440 and SPARC Enterprise T5240 servers, to latest-generation SPARC T4 servers. Actual deployments and configurations will vary.
    (2) Comparison based on a solution with SPARC T4-2 servers with Oracle Solaris and Oracle VM Server for SPARC versus HP ProLiant DL380 G7 with VMware and Red Hat Enterprise Linux, and IBM Power 720 Express - Power 730 Express with IBM AIX Enterprise Edition and PowerVM.

  • Tap into MySQL's Amazing Performance Results with the Performance Tuning Course

    - by Antoinette O'Sullivan
    Want to leverage the high-speed load utilities, distinctive memory caches, full-text indexes, and other performance-enhancing mechanisms that MySQL offers to fuel today's critical business systems? The authentic MySQL Performance Tuning course, in 4 days, teaches you to evaluate the MySQL architecture, learn to use the tools, configure the database for performance, tune application and SQL code, tune the server, examine the storage engines, assess the application architecture, and learn general tuning concepts.

    You can take this course in one of the following three ways:
    - Training-on-Demand: Access streaming-video instructor delivery of this course from your own desk, at your own pace. Book time for hands-on practice when it suits you.
    - Live-Virtual Class: Take this instructor-led class live from your own desk. With 700 events on the schedule, you are sure to find a time and date to suit you!
    - In-Class: Travel to a classroom to take this class. A sample of events on the schedule follows.

    Location - Date - Delivery Language
    Hamburg, Germany - 22 October 2012 - German
    Prague, Czech Republic - 1 October 2012 - Czech
    Warsaw, Poland - 3 December 2012 - Polish
    London, England - 19 November 2012 - English
    Rome, Italy - 23 October 2012 - Italian
    Lisbon, Portugal - 6 November 2012 - European Portuguese
    Aix-en-Provence, France - 4 September 2012 - French
    Strasbourg, France - 16 October 2012 - French
    Nieuwegein, Netherlands - 26 November 2012 - Dutch
    Madrid, Spain - 17 December 2012 - Spanish
    Mechelen, Belgium - 1 October 2012 - English
    Riga, Latvia - 10 December 2012 - Latvian
    Petaling Jaya, Malaysia - 10 September 2012 - English
    Edmonton, Canada - 10 December 2012 - English
    Vancouver, Canada - 10 December 2012 - English
    Ottawa, Canada - 26 November 2012 - English
    Toronto, Canada - 26 November 2012 - English
    Montreal, Canada - 26 November 2012 - English
    Mexico City, Mexico - 10 September 2012 - Spanish
    Sao Paulo, Brazil - 26 November 2012 - Brazilian Portuguese
    Tokyo, Japan - 19 November 2012 - Japanese

    For further information on this class, or to register your interest in additional events, go to the Oracle University Portal: http://oracle.com/education/mysql

  • Goals for 2010 Retrospective

    - by Brian Jackett
    As we approach the end of 2010, I'd like to take a few minutes to reflect back on this past year and revisit the goals that I set for myself at the beginning of the year (click here to see those goals). I feel it is important to track your goals, not only to see if you accomplished them but also to see what new directions in life you pursued. Once we enter 2011, I'll follow up with a new post on goals for the new year.

    Professional
    Blog – This year I intended to write at least 2 posts a month. Looking back, I far surpassed that goal by writing 47 posts (this one being my 48th). As with many things in life, quantity does not mean quality. A good example is a number of my posts announcing upcoming speaking engagements and providing links to presentation slides and scripts. That aside, I like to at least keep content relatively fresh on this blog, which I was able to accomplish. At the same time, I've gotten much more comfortable in my blogging style and it has become much easier to write.

    Speaking – I didn't define a clear goal for speaking engagements, but had a rough idea of wanting to speak at 2-3 events. Once again I far exceeded that number by speaking at 10 separate events and delivering 12+ presentations. I'm very thankful for all of the opportunities that I was given and all of the wonderful people I have met as a result.

    Volunteering – This year I intended to help out with the COSPUG (now Buckeye SPUG) steering committee and the Stir Trek conference. I fulfilled both goals, as well as taking on lead organizer duties for the first-ever SharePoint Saturday Columbus. Each of these events and groups turned out to be successful, and I was glad to be a part of them all. I look forward to continuing to volunteer with each next year in some capacity.

    Android Development – My goal for getting into Android development was a late addition, and one I didn't necessarily fulfill. I spent a couple nights downloading the tools, configuring my environment, and going through some "simple" tutorials. I say "simple" because in my opinion the tutorials were not laid out very well, took a long time to get running properly, and confused me more than helped. After about a week I was frustrated with the process and didn't think it was a good use of my time. On a side note, I've dabbled in Windows Phone 7 development over the past few months and have been very excited by how easy and intuitive it was to get started and develop some proofs of concept.

    Personal
    Getting in Shape – I had intended to play on recreational sports leagues and work out on a semi-regular basis. For the most part I fulfilled this goal by playing on various softball and volleyball leagues as well as using the gym. At the same time I had some major setbacks. In the spring I badly sprained my ankle and got hit in the knee with a softball, which kept me inactive for almost 2 months. More recently I broke my knuckle (click here to read about it), which I am still recovering from.

    Volunteering – On the volunteering front I kept my commitments at my parish's high school youth group. As for other volunteering opportunities, I got involved with a great organization called Columbus Gives Back (website). I've volunteered with them a few times and really enjoy their goal to provide opportunities to people with busy schedules. They offer a variety of events, typically after work hours and spread out around Columbus, with no set commitment on the time you need to put in. If you have the time or motivation, I highly recommend them.

    House/Condo – I had been thinking of buying a house or condo this past summer, but decided to extend my apartment lease for another year instead. I have begun the search for a place in the past few weeks and am excited to begin the process of owning a home.

    Conclusion
    This year I was able to set and achieve many of my goals. For next year I'll try to put more specific numbers to all of my goals. If any of you readers set goals for 2011, feel free to send me a link as I'd love to see what you are aiming to accomplish. Have a great end of 2010 and best wishes for the start of 2011!

    -Frog Out

  • Windows Desktop Virtualization Gets Easier

    - by andrewbrust
    This past Thursday, Microsoft announced that Windows (7) Virtual PC (WVPC) and its XP Mode feature would no longer require hardware-assisted virtualization (HAV). That means any PC running Windows 7 Pro, or higher, can now run this software. And that's a great thing because, as I noted in a post almost five months ago, determining whether a given PC you might be planning to buy actually offers HAV can be extremely difficult. That meant even dedicated, sophisticated PC users, with a budget for new hardware, might be blocked from using this technology. And that was just plain silly.

    One of the features offered by WVPC, and utilized heavily by XP Mode, is the concept of virtual applications: apps within a guest VM that can actually run within the host's desktop environment. I find this feature so powerful that my February Redmond Review column entertained the notion of a future version of Windows that runs all applications in this manner.

    The elimination of the HAV requirement for XP Mode and WVPC was just one of many virtualization-related announcements Microsoft made on Thursday. And, interestingly, most of the others were also desktop-related rather than server-related. This is a welcome change from the multi-year period in which Microsoft enhanced its server virtualization lineup (in Hyper-V) and let the desktop platform fester. Microsoft now seems to understand that desktop virtualization is in high demand and strengthens the Windows franchise. As I explained in the column, even cloud computing can have a desktop spin if desktop virtualization is part of the equation.

    One company that knows this well is Citrix, and a closer alliance between Microsoft and Citrix was one of the many announcements from Thursday. In fact, there's a whole Web site dedicated to the alliance at http://www.citrixandmicrosoft.com/.

    I'd love to see virtual applications and entire virtual desktops offered as Azure-branded services. This could allow me to run, for example, the full Office client on a variety of desktops I might use, and for large organizations it could easily reduce the expense, burden and duration of the deployment cycle for new versions of Office. Business Intelligence providers, including my own firm, twentysix New York, would find great relief in enabling their customers to run the newest version of Excel, with the latest BI capabilities, instead of having to wait the requisite two to three years it takes for many Fortune 500 customers to upgrade.

    Microsoft should do more, and faster. WVPC still does not support 64-bit guest images, even on 64-bit hosts. That needs to be fixed. File access from the guest to the host needs to be improved (right now, it's done through Terminal Services/Remote Desktop file sharing, and it's slow), and VM load times need to be significantly reduced before virtualized apps can become the norm. (I suppose the advance of solid-state drive technology will help there.) I do think these improvements will come, because Microsoft is focused on the virtual desktop now. And that's a smart focus to have.

  • A big flat text file or an HTML site for language documentation?

    - by Bad Sector
    A project of mine is a small embeddable Tcl-like scripting language, LIL. While I'm mostly making it for my own use, I think it is interesting enough for others to use, so I want it to have nice (but not very "wordy") documentation.

    So far I'm using a single flat readme.txt file. It explains the language's syntax, features, standard functions, how to use the C API, etc. It is also easy to scan and read in almost every environment out there, from basic text-only terminals to full-fledged high-end graphical desktop environments. However, while I have tried to keep things nicely formatted (as much as is possible in plain text), I still think that, being a big (and growing) wall of text, it isn't as easy on the eyes as it could be. I also feel that sometimes I'm not writing as much as I want, in order to avoid expanding the text too much.

    So I thought I could use another project of mine, QuHelp, which is basically a help-site generator for sites like this one, with a sidebar that provides a tree of topics/subtopics and offline full-text search. With this I can use HTML to format the documentation, and if I use QuHelp for some other project that uses LIL, I can import LIL's documentation as part of the other project's documentation. However, converting the existing documentation to QuHelp/HTML isn't a small task, especially when it comes to functions (I'll need to put more detail into them than currently exists in the readme.txt file). It also loses the wide availability that the documentation currently has (even if QuHelp's generated code degrades gracefully down to console-only web browsers, plain text is readable from everywhere, including from popular editors such as Vim and Emacs; someone once told me that he likes LIL's documentation because it is readable without leaving his editor).

    So, my question is simply this: should I keep the documentation as it is now, in the form of a single readme.txt file, or should I convert it to something like the site I mentioned above? There is also the option to do both, but I'm not sure if I'll be able to always keep them in sync, or if it is worth the effort.

    After asking around in IRC I've got mixed answers: some liked the wide availability of the single text file, others said that it looks as bad as a man page (personally I don't mind that, as I can read man pages just fine, but other people might have issues reading them). What do you think?

  • Friday Fun: Snowball

    - by Asian Angel
    It is Christmas Eve, and hopefully you are enjoying the start of an early weekend away from work. This week we have a snowball-throwing game for you to try out, so bundle up and get ready to let those snowballs fly!

    Snowball
    The object of the game is to use your snowball ammo to harass the drunk businessman and send him flying as far as you can. Simply use your mouse to aim and click the left button to throw snowballs. You can monitor your stats on the silver bar towards the top of the window. The sound can also be disabled if the music is bothering you, but keep in mind that all sound will be disabled if you use that option. Time to get those snowballs flying through the air!

    Keep hitting the businessman with your snowballs as you chase after him. Make certain that your aim is good or you will quickly run out of snowballs! You can really get him moving along at a good rate, and he can even go high enough in the air to disappear off the screen for a few moments. There is also a chance that your aim will be so wicked with the snowballs that you will literally knock the drunk businessman's head off! Weird, but possible…

    The game ends when one of these two events occurs: 1.) you run out of snowballs, or 2.) the businessman literally bounces back at you and then drops behind you, as seen in the screenshot here. The moment either happens, your score will pop up and you will have the opportunity to try again. Have fun!

    Note: The bounce-back event can happen when encountering cars.

    Play Snowball

  • New hidden parameters in Oracle 11.2

    - by Mike Dietrich
    We really welcome every external review of our slides, and also recommendations from customers visiting our workshops. So it happened to me more than a week ago that Marco Patzwahl, the owner of MuniqSoft GmbH, had a very lengthy train ride in Germany (as the engine drivers go on strike this week, it could have become even worse) and nothing better to do than review our slide set. And he had plenty of recommendations. Besides that, he pointed us to something that at least I was not aware of, and added it to the slides:

    In patch set 11.2.0.2 a new behaviour for datafile write errors has been implemented. With this release, ANY write error to a datafile will cause the instance to abort. Before 11.2.0.2, those errors usually led to an offline datafile if the database operates in archivelog mode (your production databases do, don't they?!) and the datafile does not belong to the SYSTEM tablespace. Internal discussion found this behaviour neither up to date nor aligned with RAC systems and modern storage. Therefore it has been changed, and a new underscore parameter was introduced:

    _DATAFILE_WRITE_ERRORS_CRASH_INSTANCE=TRUE

    This is the default setting and the new behaviour beginning with Oracle 11.2.0.2. If you would like to revert to the pre-11.2.0.2 behaviour, you'll have to set this parameter to FALSE in your init.ora/spfile. But keep in mind that there's a reason why this has been changed. You'll find more info in MOS Note 7691270.8, and this topic is covered in the current version of the slides on slide 255. Thanks to Marco for the review!!

    And then I received an email from Kurt Van Meerbeeck today. Kurt is pretty well known in the Oracle community, and he's the owner of jDUL/DUDE, a database unloading tool which bypasses the Oracle database engine and accesses data directly from the blocks. Kurt visited the upgrade workshop two weeks ago in Belgium and highlighted to me that since Oracle 11.2.0.1, even though you have set neither SGA_TARGET nor MEMORY_TARGET, the database might still do resize operations. The reason why this behaviour was changed: prevention of ORA-4031 errors. But on databases with extremely high loads this can cause trouble. Further information can be found in MOS Note 1269139.1. The parameter, set to TRUE by default, is called:

    _MEMORY_IMM_MODE_WITHOUT_AUTOSGA=TRUE

    This can now be found in the slide set as well, on slide 240. And thanks to Kurt for this information!!
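    For anyone who wants to flip either switch back, here is a minimal sketch of how an underscore parameter is typically changed from the shell via SQL*Plus. Hidden parameters should only be touched under guidance from Oracle Support, and the restart step assumes the parameter cannot be changed dynamically:

        #!/bin/bash
        # Sketch: revert to the pre-11.2.0.2 datafile write error behaviour.
        # Underscore (hidden) parameters must be double-quoted in ALTER SYSTEM.
        sqlplus -s / as sysdba <<'EOF'
        ALTER SYSTEM SET "_datafile_write_errors_crash_instance" = FALSE SCOPE = SPFILE;
        SHUTDOWN IMMEDIATE
        STARTUP
        EOF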

    Read the article

  • Removing hard-coded values and defensive design vs YAGNI

    - by Ben Scott
    First, a bit of background. I'm coding a lookup from age to rate. There are 7 age brackets, so the lookup table is 3 columns (From|To|Rate) with 7 rows. The values rarely change - they are legislated rates (first and third columns) that have stayed the same for 3 years. I figured that the easiest way to store this table without hard-coding it is in the database, in a global configuration table, as a single text value containing a CSV (so "65,69,0.05,70,74,0.06" is how the 65-69 and 70-74 tiers would be stored). Relatively easy to parse and then use. Then I realised that to implement this I would have to create a new table, a repository to wrap around it, data-layer tests for the repo, unit tests around the code that unflattens the CSV into the table, and tests around the lookup itself. The only benefit of all this work is avoiding hard-coding the lookup table. When talking to the users (who currently use the lookup table directly, by looking at a hard copy), the opinion is pretty much that "the rates never change." Obviously that isn't actually correct - the rates were only created three years ago, and in the past things that "never change" have had a habit of changing - so for me to program this defensively I definitely shouldn't store the lookup table in the application. Except when I think YAGNI. The feature I am implementing doesn't specify that the rates will change. If the rates do change, they will still change so rarely that maintenance isn't even a consideration, and the feature isn't actually critical enough that anything would be affected if there was a delay between the rate change and the updated application. I've pretty much decided that nothing of value will be lost if I hard-code the lookup, and I'm not too concerned about my approach to this particular feature. My question is: as a professional, have I properly justified that decision? Hard-coding values is bad design, but going to the trouble of removing the values from the application seems to violate the YAGNI principle. EDIT To clarify the question, I'm not concerned about the actual implementation. I'm concerned that I can either do a quick, bad thing and justify it by saying YAGNI, or I can take a more defensive, high-effort approach that even in the best case ultimately has low benefits. As a professional programmer, does my decision to implement a design that I know is flawed simply come down to a cost/benefit analysis?
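    For what it's worth, the hard-coded option under discussion can be kept small and easy to change later. The following is a rough sketch only; the brackets and rates shown are illustrative placeholders, not the legislated figures from the question:

```java
// Sketch of a hard-coded age-to-rate lookup. The two brackets shown are
// illustrative; the real table would list all seven legislated rows.
import java.util.NavigableMap;
import java.util.TreeMap;

public final class AgeRateTable {

    // Keyed by the lower bound of each bracket; floorEntry() resolves
    // any age to the bracket it falls into.
    private static final NavigableMap<Integer, Double> RATES =
            new TreeMap<Integer, Double>();
    static {
        RATES.put(65, 0.05);  // 65-69
        RATES.put(70, 0.06);  // 70-74
        // ... the remaining brackets would follow the same pattern
    }

    private AgeRateTable() { }

    public static double rateFor(int age) {
        java.util.Map.Entry<Integer, Double> bracket = RATES.floorEntry(age);
        if (bracket == null) {
            throw new IllegalArgumentException("No rate bracket for age " + age);
        }
        return bracket.getValue();
    }
}
```

    If the rates ever do change, the edit is confined to this one class, which is exactly the cost/benefit trade-off being weighed above.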

    Read the article

  • Brightness Crash and Fan issues in 12.04.1

    - by S.A. McIntosh
    I would just like to state beforehand that I am a total novice in using Ubuntu when it comes to the more complex issues. So I thought it would be best to finally come here and ask for help before being re-directed or closed out for a solution. I have already looked high and low on this board for one, but nothing came up for my particular case, so I might as well take a shot asking for the first time here. This is what I have at the moment:
    -Dell Inspiron 1764 w/ 64-bit Intel i5 Core
    -Dual Boot: Windows 7/Ubuntu 12.04.1 32-bit (from 12.04 install)
    -Unity shell
    -Linux kernel: 3.2.0-32 generic-pae
    ...and this is my fglrxinfo:
    OpenGL vendor string: Advanced Micro Devices, Inc.
    OpenGL renderer string: ATI Mobility Radeon HD 5000 Series
    OpenGL version string: 4.2.11627 Compatibility Profile Context
    The one issue I have with using Ubuntu is brightness. With the driver installed, every time I use the slider in the Brightness and Lock settings or use the keyboard function, the screen freezes, goes black, and comes up with a page of scrambled colors like the one in the video. So I have looked all over this board and the web for answers. This is what I have done so far to try to fix it:
    -First Solution: Looking around, I found this small fix using the terminal: sudo gedit /etc/rc.local followed by adding this into "rc.local": echo # > /sys/class/backlight/acpi_video0/brightness With the graphics driver still installed this works only rarely; I often get lucky, say during a restart, but a reboot just snaps the brightness back to max.
    -Second Solution: Simply remove the graphics driver while keeping the first solution in place. This solves the issue, but results in the monitor flickering and flashing at startup, which in itself is not a problem for me but is maybe not so good for monitor health. It also causes the fan to speed up throughout the session and renders any program that needs the driver useless.
    -Third Solution: This is the most obvious: just use the brightness control in the AMD Catalyst Control Center software that came with the driver. I can say that its idea of brightness is HORRIBLE compared to the actual settings.
    Which leads up to where I am now: back on the driver to stop the fan speed-up. It seems that the only way around the brightness crash is to use the keyboard-controlled brightness at the login screen, NOT the desktop, if I want the desired effect, and it will still snap back to max brightness if I restart. The fan speed problem is dealt with, but I now run the risk of crashing my computer if I so much as touch the brightness settings. Speaking of which, I found this on Launchpad, and it seems that the issue has been going on since June of 2012. Any help, redirect link or reference would be greatly appreciated. Thank you.

    Read the article

  • June IOUG events

    - by Mandy Ho
    Independent Oracle User Group (IOUG) Regional Events:
    June 11-12, 2012 – Broomfield, CO: 2-Day Seminar – “High Performance PL/SQL & Oracle Database 11g New Features”. Steven Feuerstein, generally considered the world’s leading PL/SQL expert, will be presenting his all-new, 2-day “Higher Performance PL/SQL and Oracle 11g PL/SQL New Features” seminar on June 11 & 12 at Level 3 Communications in Broomfield, Colorado. This will be Steven’s first Denver seminar in almost 4 years. Who knows when he will offer another? http://www.rmoug.org/
    June 14, 2012 – Ottawa, Ontario: Pythian’s Gwen Shapira puts on 3 great presentations focused on NoSQL, making OLTP run fast, and Big Data. http://www.oug-ottawa.org/pls/htmldb/f?p=327:27:1317735724699447::NO
    June 21, 2012 – Calgary, Alberta: Big Data and Extreme Analytics Summit. http://coug.ab.ca/
    June 22, 2012 – Westborough, MA: “10 Things You Probably Did Not Know” with Tom Kyte. PL/SQL turns 23 years old this year; it was first introduced in 1988 with Oracle6 Database. This session looks at five technical things about PL/SQL you probably did not know: under-the-covers features that make PL/SQL quite simply the most efficient language with which to process data in the database. http://noug.com/
    June 28-29, 2012 – Plano, Texas: Jonathan Lewis Oracle Performance Seminars. The DOUG (Dallas Oracle Users Group) has invited SpeakTech to return to Dallas, and they’re bringing Jonathan Lewis! Topics are “Beating the Oracle Optimizer” (June 28, 2012) and “Trouble Shooting & Tuning” (June 29, 2012). http://www.eventbrite.com/event/3082448687

    Read the article

  • Interview with Koen Aben, Supply Chain Director of WE Fashion

    - by user801960
    We recently spoke to Koen Aben, the Supply Chain Director of WE Fashion, who gave us some insight into how Oracle supported the international fashion retailer through the completion of a large-scale integration project across its 340 European stores. Koen explains the reasoning behind the project, which was to create a common retail foundation and to integrate and align working processes to drive insight and enable continued growth. It is always good to hear from someone of Koen’s experience who can articulate the benefits of partnering with the right company for such an extensive project as this. Koen explains that a crucial element of such a project is to unify business applications into a common platform, adding that for successful growth, retailers really need to achieve enterprise-wide alignment. At the start of the three-year project, WE Fashion’s application platform was fragmented, impacting the company’s ability to support sustained growth. In light of this, WE Fashion invested in its processes, systems, teams and partnerships to build the needed retail foundation. Now, having successfully completed the project, the basis is in place to ensure that growth is unimpeded. In the video, Koen Aben highlights some of the factors necessary for the success of the project:
    Having an understanding that the process of creating a growth platform for a company is a long journey.
    Accepting that during a lengthy project such as this, there will be high and low points experienced within the project team and the business, but that the relationship with your partners is crucial to the success of the project.
    Having the correct team in place, which will prove to be the linchpin of any successful project.
    Oracle supported Koen and his team in implementing this project, and is recognised for the role it played during this development in partnership with the company. On his experience of working with the Oracle team, Koen points out that in the critical situations, Oracle was there to ensure that the right people were in place whenever needed, and this was key to ensuring the project’s success. Since Oracle is one of the few providers that can offer an enterprise-wide retail platform, our best-practice approach is key to connecting interactions throughout the business to enable insight and optimise operations. This is a great example of a large-scale international retail project, where the true success of its completion is reflected in how proud the company is of what has been achieved, and in the fact that results are already being seen.

    Read the article

  • Bay Area Coherence Special Interest Group Next Meeting July 21, 2011

    - by csoto
    Date: Thursday, July 21, 2011
    Time: 4:30pm - 8:15pm ET (note that parking at 475 Sansome closes at 8:30pm)
    Where: Oracle Office, 475 Sansome Street, San Francisco, CA (Google Map)
    We will be providing snacks and beverages. Register! Registration is required for building security.
    Presentation Line-Up:
    5:10pm - Batch Processing Using Coherence in Oracle Group Policy Administration - Paul Cleary, Oracle. Oracle Insurance Policy Administration (OIPA) is a flexible, rules-based policy administration solution that provides full record keeping for all policy lifecycle transactions. One component of OIPA is Cycle processing, which is the batch processing of pending insurance transactions. This presentation introduces OIPA and Cycle processing, describing the unique challenges of processing a high volume of transactions within strict time windows. It then reviews how OIPA uses Oracle Coherence and the Processing Pattern to meet these challenges, describing implementation specifics that highlight the simplicity and robustness of the Processing Pattern.
    6:10pm - Secure, Optimize, and Load Balance Coherence with F5 - Chris Akker, F5. F5 Networks, Inc., the global leader in Application Delivery Networking, helps the world’s largest enterprises and service providers realize the full value of virtualization, cloud computing, and on-demand IT. Recently, F5 and Oracle partnered to deliver a novel solution that integrates Oracle Coherence 3.7 with F5 BIG-IP Local Traffic Manager (LTM). This session will introduce F5 and show how you can leverage BIG-IP LTM to secure, optimize, and load balance application traffic generated by Coherence*Extend clients across any number of servers in a cluster, and to hardware-accelerate CPU-intensive SSL encryption.
    7:10pm - Using Oracle Coherence to Enable Database Partitioning and DC-Level Fault Tolerance - Alexei Ragozin, Independent Consultant, and Brian Oliver, Oracle. Partitioning is a very powerful technique for scaling database-centric applications. One tricky part of a partitioned architecture is routing requests to the right database. The routing layer (routing table) should know the right database instance for each attribute that may be used for routing (e.g. account id, login, email, etc.): it should be fast, it should be fault tolerant, and it should scale. All of the above makes Oracle Coherence a natural choice for implementing such routing tables in partitioned architectures. This presentation will cover synchronization of the grid with multiple databases, conflict resolution, cross-cluster replication, and other aspects of implementing a robust partitioned architecture.
    Additional Info:
    - Download Past Presentations: The presentations from the previous meetings of the BACSIG are available for download here. Click on the presentation titles to download the PDF files.
    - Join the Coherence online community on our Oracle Coherence Users Group on LinkedIn.
    - Contact BACSIG with any comments, questions, presentation proposals and content suggestions.
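    As a taste of the routing-table idea in the 7:10pm talk, here is a hedged sketch using the Coherence Java API. It is an illustration only, not code from the presentation; the cache name and key type are invented, and it assumes a configured Coherence 3.7 cluster on the classpath:

```java
// Illustrative only: a partitioned Coherence cache used as a routing table
// that maps an account id to the database shard holding that account.
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class AccountRoutingTable {

    // "account-routes" is an invented cache name; it would be defined in
    // the cluster's cache configuration as a partitioned (distributed) cache.
    private final NamedCache routes = CacheFactory.getCache("account-routes");

    // Record which database instance owns a given account.
    public void assign(long accountId, String dbInstance) {
        routes.put(Long.valueOf(accountId), dbInstance);
    }

    // Resolve the database for a request. The lookup is fast and fault
    // tolerant because entries are partitioned and backed up across nodes.
    public String databaseFor(long accountId) {
        return (String) routes.get(Long.valueOf(accountId));
    }
}
```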

    Read the article

  • Devoxx 2011: Java EE 6 Hands-on Lab Delivered

    - by arungupta
    With Alexis's help, I delivered a Java EE 6 hands-on lab to a packed room of 40+ attendees at Devoxx 2011. The lab was derived from the OTN Developer Days 2012 version but added a lot more content to showcase several Java EE 6 technologies. The problem statement from the lab document states: This hands-on lab builds a typical 3-tier Java EE 6 Web application that retrieves customer information from a database and displays it in a Web page. The application also allows new customers to be added to the database. String-based and type-safe queries are used to query and add rows to the database. Each row in the database table is published as a RESTful resource and is then accessed programmatically. Typical design patterns required by a Web application, like validation, caching, observer, and partial page rendering, and cross-cutting concerns like logging, are explained and implemented using different Java EE 6 technologies. The lab covered Java Persistence API 2, Servlet 3, Enterprise JavaBeans 3.1, JavaServer Faces 2, Java API for RESTful Web Services 1.1, Contexts and Dependency Injection 1.0, and Bean Validation 1.0 over 47 pages of detailed self-paced instructions. The lab document includes a complete table of contents. The lab can be downloaded from here and requires only the NetBeans IDE "All" or "Java EE" version, which includes GlassFish anyway. All the feedback received from the lab has been incorporated into the instructions and bugs have been filed (Updated 49559, 205232, 205248, 205256). 80% of the attendees could easily complete the lab, and some even finished in much less than 3 hours. That indicates that either more content needs to be added to the lab or the intellectual level of the attendees at the conference was pretty high. I think the lab has enough content for 3 hours, but we moved at a much faster pace, so I conclude the latter. Truly a joy to conduct a lab for 40 Devoxxians! Another related lab that might be handy for folks is "Develop, Deploy, and Monitor your Java EE 6 applications using GlassFish 3.1 Cluster". It explains how to: create a 2-instance GlassFish cluster; front-end it with a Web server and a load balancer; demonstrate session replication and fail-over; and monitor the application using JavaScript. The complete lab instructions and source code are available and you can try them. I plan to continue evolving the contents of the Java EE 6 hands-on lab to cover more technologies and features, and will announce updates on this blog. Let me know what else you would like to see in future versions.
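    To give a flavor of what the lab builds, here is a hedged sketch of a JAX-RS 1.1 resource backed by JPA 2 and EJB 3.1, in the style the problem statement describes. The Customer entity, paths, and names are illustrative placeholders, not code from the lab materials:

```java
// Illustrative sketch only; names and paths are invented, not from the lab.
// In practice these classes would live in separate source files.
import java.util.List;
import javax.ejb.Stateless;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

@Entity
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
class Customer {
    @Id
    long id;
    String name;
}

@Stateless            // EJB 3.1: container-managed transactions, no interface
@Path("customers")    // each row becomes a RESTful resource under this path
public class CustomerResource {

    @PersistenceContext
    private EntityManager em;   // JPA 2 entity manager injected by the container

    @GET
    @Produces("application/xml")
    public List<Customer> all() {
        // A type-safe alternative would use the JPA 2 Criteria API.
        return em.createQuery("SELECT c FROM Customer c", Customer.class)
                 .getResultList();
    }

    @GET
    @Path("{id}")
    @Produces("application/xml")
    public Customer byId(@PathParam("id") long id) {
        return em.find(Customer.class, id);
    }
}
```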

    Read the article

< Previous Page | 269 270 271 272 273 274 275 276 277 278 279 280  | Next Page >