Search Results

Search found 68147 results on 2726 pages for 'context sensitive help'.


  • Production Access Denied! Who caused this rule anyways?

    - by Matt Watson
    One of the biggest challenges for most developers is getting access to production servers. In smaller dev teams of fewer than about 5 people, everyone usually has access. Then you hire developer #6, he messes something up in production... and now nobody has access. That is how it always starts in small dev teams. I think just about every rule of life gets created this way: one person messes it up for the rest of us, and rules are then put in place to try to prevent it from happening again.

    Breaking the rules is in our nature. In this example it is for a good cause and a necessity to support our applications and troubleshoot problems as they arise. So how do developers typically break the rules? Some create their own methods to collect log files off servers so they can see them. Expensive log management programs can collect log files, but log files alone are not enough. Centralizing where important errors are logged is common. Some lucky developers are given production server access by the IT operations team out of necessity. Wait. That's not fair to all developers and knowingly breaks the company rule! When customers complain or the system is down, the rules go out the window.

    Commonly, lead developers get production access because they are ultimately responsible for supporting the application and may be the only people who know how to fix it. The problem with only giving lead developers production access is that it doesn't scale from a support standpoint. Those key employees become the go-to people for solving application problems, but they also become a bottleneck. They end up spending up to half of their time every day helping resolve application defects, performance problems, or whatever the fire of the day is. This is actually the last thing you want your lead developers doing. They should be working on something more strategic, like major enhancements to the product. Having production access can actually be a curse if you are the guy stuck hunting down log files all day.

    Application defects are good tasks for junior developers. They can usually handle figuring out simple application problems. But nothing is worse than being a junior developer who can't figure out those problems while the backlog of them grows and grows. Some of those tasks require production server access to verify a deployment was done correctly, verify config settings, view log files, or maybe just restart an application. Since the junior developers don't have access, they end up bugging the developers who do have access, or they track down a system admin to help. It can take hours or days to see server information that would take seconds or minutes if they had access of their own. It is very frustrating to the developer trying to solve the problem, to the system admin being forced to help, and most importantly to your customers, who are not happy about the situation. This process is terribly inefficient.

    Production database access is also important for solving application problems, but it presents a lot of risk if developers are given access. They could see data they shouldn't. They could write queries by accident that update data, delete data, or merely select every record from every table and bring your database to its knees. Since most of the applications we create are data driven, it can be very difficult to track down application bugs without access to the production databases.

    Besides it being against the rules, why don't all developers have access? Most of the time it comes down to security, change control, lack of training, and other valid reasons. Developers have been known to tinker with different settings to try to solve a problem, and in the process forget what they changed and make the problem worse. So it is a double-edged sword: don't give them access and fixing bugs is more difficult, or give them access and risk having more bugs or major outages created!

    Matt Watson
    Founder, CEO, Stackify
    Agile Support for Agile Developers

    Read the article

  • Does your analytic solution tell you what questions to ask?

    - by Manan Goel
    Analytic solutions exist to answer business questions. Conventional wisdom holds that if you can answer business questions quickly and accurately, you can make better business decisions and therefore achieve better business results and outperform the competition. Most business questions are well understood (read: structured), so they are relatively easy to ask and answer. Questions like what the revenues, cost of goods sold, and margins were, and which regions and products outperformed or underperformed, are relatively well understood, and as a result most analytics solutions are well equipped to answer such questions.

    Things get really interesting when you are looking for answers but you don't know what questions to ask in the first place. That's like an explorer looking to make new discoveries by exploration. An example of this scenario is the Centers for Disease Control (CDC) in the United States trying to find the vaccine for the latest strain of the swine flu virus. The researchers at the CDC may try hundreds of options before finally discovering the vaccine. The exploration process is inherently messy and complex. It is fraught with false starts, one question or a hunch leading to another, and the final result may look entirely different from what was envisioned in the beginning. Speed and flexibility are key: speed so the hundreds of possible options can be explored quickly, and flexibility because almost everything about the problem, the solutions, and the process is unknown.

    Come to think of it, most organizations operate in an increasingly unknown or uncertain environment. Business leaders have to make decisions based on a largely unknown view of the future. And since the value proposition of analytic solutions is to help business leaders make better business decisions, for best results, consider adding information exploration and discovery capabilities to your analytic solution. Such exploratory analysis capabilities will help business leaders perform even better by empowering them to refine their hunches, ask better questions, and make better decisions. That's your analytic system not only answering the questions but also suggesting what questions to ask in the first place.

    Today, most leading analytic software vendors offer exploratory analysis products as part of their analytic solutions offerings. So, what characteristics should be top of mind while evaluating the various solutions? The answer is quite simply the same characteristics that are essential for exploration and analysis: speed and flexibility. Speed is required because the system inherently has to be agile to handle hundreds of different scenarios with large volumes of data across large user populations. Exploration happens at the speed of thought, so make sure that your system is capable of operating at the speed of thought. Flexibility is required because the exploration process from start to finish is full of unknowns: unknown questions, answers, and hunches. So, make sure that the system is capable of managing and exploring all relevant data, structured or unstructured (databases, enterprise applications, tweets, social media updates, documents, texts, emails, etc.), and provides a flexible, Google-like user interface to quickly explore all relevant data.

    Getting Started
    You can help business leaders become "Decision Masters" by augmenting your analytic solution with information discovery capabilities. For best results, make sure that the solution you choose is enterprise class and allows advanced, yet intuitive, exploration and analysis of complex and varied data, including structured, semi-structured, and unstructured data. You can learn more about Oracle's exploratory analysis solutions by clicking here.

    Read the article

  • Removing Barriers to Create Effective Data Models

    After years of creating and maintaining data models, I have started to notice common barriers that decrease the accuracy and usefulness of models. In my opinion, the main causes of these barriers are lack of knowledge and lack of communication within a company. The lack of knowledge in regards to data models or data modeling can take many forms.

    Company Culture Knowledge
    Whether documented or undocumented, the existing business rules of a company affect how data is modeled. For example, if a company only allows one assigned person per customer to manipulate that customer's record, then a data model that includes an associative table joining customers and employees would be wrong, because the resulting many-to-many relationship between Customers and Employees would allow multiple employees to handle a customer (see the T-SQL sketch at the end of this post).

    Technical Knowledge
    Depending on the data modeler's proficiency in modeling data, they can inadvertently cause issues and/or complications with a design without even noticing. It is important that companies share data modeling responsibilities so that models are developed from multiple perspectives of a system, the company, and the original problem. In addition, the tools that a company selects to create data models can also affect the accuracy of a model if the designers are not familiar with the tools or the tools are too complex for them to use.

    Existing System Knowledge
    In order for a data modeler to model data for an existing system so that new changes can be applied to it, they need to know at least the basic concepts of the system so that they can work within it. This promotes reusability of data and prevents the chance of duplicating data.

    Project Knowledge
    This should be pretty obvious, but it is very hard to create an accurate data model without knowing what data needs to be modeled. I have always found it strange that I have been asked to start modeling data before a client has formalized any requirements. Usually when this happens I have to make several iterations to a model, and the client still does not know exactly what they want. Additional issues can arise when certain stakeholders of a project are not consulted before the design or after the project is over, because that can cause misunderstandings and confusion for the end user, as well as possibly failing to solve the original problem the project was intended to solve.

    One common thread between each type of knowledge is that all of these barriers can be avoided through good communication. For example, if a modeler is new to a company, then they should ask longer-tenured employees about any business-specific rules, documented or undocumented, that must be applied to projects in general. Furthermore, if a modeler is not really familiar with a specific data modeling tool, then they need to speak up and ask for help from other employees or their manager. This will not only help the modeler in the current project, but also help them in future projects that they do for the company. Additionally, if a project is not clearly defined before a data modeler is assigned to it, then it is their responsibility to communicate with the other stakeholders to clarify any part of the project that is unclear, so that the data model that is created accurately aligns with the project.
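    To make the Company Culture Knowledge example concrete, here is a minimal, hypothetical T-SQL sketch (table and column names are illustrative, not from the original post). The "one assigned person per customer" rule is modeled as a foreign key column on the customer, while the associative table shown after it would permit the very many-to-many relationship the rule forbids:

        -- hypothetical schema: the rule is "one assigned employee per customer"
        create table Employee (
            EmployeeId int primary key
        );

        create table Customer (
            CustomerId         int primary key,
            -- exactly one assigned employee per customer, enforced by the schema
            AssignedEmployeeId int not null references Employee (EmployeeId)
        );

        -- the associative table to avoid under this business rule: it would let
        -- many employees handle one customer (a many-to-many relationship)
        create table CustomerEmployee (
            CustomerId int not null references Customer (CustomerId),
            EmployeeId int not null references Employee (EmployeeId),
            primary key (CustomerId, EmployeeId)
        );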

    Read the article

  • Cloud Apps News @#OOW12

    - by Natalia Rachelson
    All eyes were on Oracle this past week and the news cycle was in full swing. What better time to make some key announcements that were guaranteed to create buzz ... and so we did. The name of the game was Cloud! Here are the key Cloud announcements for Apps, which include Fusion Tap, which enables mobility across all Cloud Apps; HCM customer momentum in the Cloud; and our very first ERP Cloud Services customer.

    Oracle Unveils Oracle Fusion Tap for the iPad
    Oracle Fusion Tap - Productivity Amplified Anywhere, Anytime

    "Both the enterprise and technology providers must recognize the need to innovate and adapt for the increasing mobility of the workforce - not just for sales teams, but across the organization," said Carter Lusher, Research Fellow and Chief Analyst of Enterprise Applications Ecosystem, Ovum. "A mobile application that quickly and powerfully allows employees to make connections, analyze data, and complete activities at any time and wherever they may be located drives new levels of business value and enhances efficiency. Frankly, mobile access is no longer a 'nice to have' but a 'must have.'"

    "The mobile workforce is a business reality, and Oracle Fusion Tap is an example of how Oracle delivers mobile and cloud innovations that fundamentally improve productivity and how we work," said Chris Leone, Senior Vice President of Application Development, Oracle. "With Oracle Fusion Tap users will have an all-in-one, easily extensible app that puts mission-critical data and colleague connection at their fingertips."

    The entire release is available here: http://www.oracle.com/us/corporate/press/1855392

    Customers Live on Oracle Fusion Human Capital Management
    Oracle HCM Cloud Service Helps Power HR's Contribution to the Business

    "More than 25 of the 100-plus customers who have selected Oracle Fusion Human Capital Management (HCM) are already live. Ardent Leisure, Peach Aviation, Toshiba Medical Systems and Zillow have deployed Oracle HCM Cloud Service and are using it to transform their HR operations. They join companies such as Principal Financial Group and Elizabeth Arden, who are already using Oracle HCM Cloud Service to help manage international growth and deliver pervasive, role-based, configurable solutions to their employees. With these recent go-lives, Oracle takes a leading position in successfully bringing HCM customers live in the cloud."

    "As a technology company, Zillow looked for a partner who could scale with us. Zillow has gone live on Oracle HCM Cloud Service, which will give us the ability to automate and streamline HR operations for our employees in the near future," said Sarah Bilton, Senior Director HR, Zillow.

    Read the entire release here: http://www.oracle.com/us/corporate/press/1859573

    Lending Club Selects Oracle ERP Cloud Service to Help Increase Insight and Efficiencies
    Oracle ERP Cloud Service Provides an Open Architecture, Best-of-Breed Decision-Making, and Scalability in the Cloud

    "Lending Club, the leading platform for investing in and obtaining personal loans, has selected Oracle ERP Cloud Service to help improve decision-making and workflow, implement robust reporting, and take advantage of the inherent scalability and cost savings provided by the cloud. With more than 76,000 borrowers and 90,000 investors, Lending Club utilizes technology and innovation to reduce the cost of traditional banking and offer borrowers better rates and investors better returns.
After an extensive search, Lending Club selected Oracle ERP Cloud Service due to the breadth and depth of capabilities and ongoing innovation of Oracle ERP Cloud Service, as well as Oracle's open architecture, industry leadership and commitment to partners." "Lending Club is an innovative, data-intensive, high-growth company and we needed a solution and partner that could match us," said Carrie Dolan, CFO, Lending Club. "We conducted a thorough review of our options, and Oracle ERP Cloud Service was the clear winner in terms of capabilities and business value as well as commitment to us as a customer." Read the entire release here http://www.oracle.com/us/corporate/press/1859020

    Read the article

  • Administer, manage, monitor, and fine tune the performance of your Oracle SOA Suite 11g Service Infrastructure and SOA composite applications.

    - by JuergenKress
    Key Features of the book
    If you are an Oracle SOA Suite administrator, then this book is your bible. It gives you everything you need to know about all your tasks and helps you apply what you learn in your everyday life, right from the first chapter. The book walks through promoting code across environments, performance tuning the service infrastructure, monitoring the environment, configuring security policies, managing the dehydration store, backing up and restoring environments, and so on. Packed with real-world examples from the authors' own experiences, this book offers unique insight into Oracle SOA Suite administration.

    Detailed description
    The book begins with an introduction to SOA and quickly moves on to management of SOA composite applications. Readers will learn how to manage composite applications, their deployments, and their lifecycles. Equipped with this knowledge, readers will be introduced to monitoring and performance tuning SOA Suite, monitoring instances, messages, and composite applications, managing faults and exceptions, and configuring audit levels of composite applications to include end-to-end monitoring through the use of extended logging, as well as administering and configuring all SOA Suite components.

    A very important aspect of administration is tuning and optimizing the infrastructure for performance, and the book offers real-world recommendations to monitor and performance tune service engines, the underlying WebLogic server, threads and timeouts, file systems, and composite applications. It also covers detailed administration of individual service components, configuring the infrastructure MBeans using both Oracle Enterprise Manager Fusion Middleware Control and WLST-based scripts, migrating worklist preferences and BAM data across environments, and setting up Email, LDAP, and custom XPath.

    An administrator is always trusted with troubleshooting and root-causing problems in the infrastructure, and this book will walk you through troubleshooting approaches, such as how to identify faults and exceptions through extended logging and thread dumps, and how to find solutions to common startup problems and deployment issues. The advanced content of this book explains the OWSM security framework and how to secure components deployed to the infrastructure, along with the details of all the groundwork needed to ready the environment. The last few chapters help you understand and deal with managing the metadata services repository and dehydration store, and backup and recovery, concluding with advanced topics such as silent/scripted installations, cloning, upgrading, patching, and high availability installations.

    Packed with real-world examples and tips straight from the trenches, this book offers insights into SOA Suite administration that you will not find elsewhere. Part of our writing style in this book draws heavily on the philosophy of reuse, and as such the book provides ample executable SQL queries and WLST scripts that administrators can reuse and extend to perform most administration tasks, such as monitoring instances, processing times, and instance states, and performing automatic deployments, tuning, migration, and installation. These scripts are spread over the chapters of the book and can also be downloaded from here.

    The book is available in different formats at the following websites: Paperback and eBook versions & Kindle version. It is available for order, and signed copies are available through our web site.

    SOA & BPM Partner Community
    For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA book,SOA Suite Administration,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • If not now, then when?

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2013/10/25/if-not-now-then-when.aspx

    The time has been flying by this year. It seems like only yesterday that I mentioned the gorillagator, a simple construct of confusion to try to draw attention to my message. In reality, that message was sent over a month ago. During that time, the hours slipped to days and days to weeks. Many exciting things have happened to me; I'm sure many exciting things have happened to you. I'm also sure that many terrifying things have happened to children and their families. 62 children enter treatment at a Children's Miracle Network Hospital every minute. That's nearly 60,000 children since I sent the last email. To put that number in perspective, that is more than the population of Greenland. If we expand that to the past year, nearly 550,000 children have been treated. That is almost the population of Huntsville, Decatur, and all their suburbs combined.

    Over the past 4 years, I have raised a little more than $3,000 for Children's Hospital of Alabama. As a result, I received a call from the organizers of Extra Life thanking me for my dedicated work and informing me that I was the top supporter for Children's Hospital of Alabama ... with my measly three grand. We can do much better than that.

    It may sound like I'm trying to have fun by playing games for 24 hours. It is more than that. It is me using my time and body as a catalyst. It is me putting my passion to work for a cause. It is me turning my love into something tangible. I have been campaigning and fighting to give these children a chance for years. I have been asking you to help me support these children and families. I've been putting in countless hours of talking to people, impassioned emails, and carefully constructed tweets. I have been fighting with cutting edge, and sometimes expensive, technology to try to provide live streams of my marathons. I yearly put my body through 24 (and, this year, 25) hours of no sleep. I do this to represent the countless hours these families sit awake at their children's side. All I ask is a few minutes on a website and a few dollars. These few minutes and few dollars go a long way to help people that are experiencing circumstances that only occur in our nightmares.

    I also ask that you take one extra step. Forward this plea to those that you know. I can only reach a small fraction of a percentage of the people that may be able to help. Together, we can reach the world. I raise money for Children's Hospital of Alabama. As this message branches out, people may wish to support a hospital closer to their area. I have included a link to the list of people that have dedicated their time and have received no donations. Find someone on the list supporting your local hospital and give them a donation. Let them know that their time and effort are appreciated.

    Together, we can do something great. Together, we can make a difference. Together, we all stand tall. Thank you.

    You can get more information at http://www.extra-life.org and http://childrensmiraclenetworkhospitals.org/
    My donation page is http://www.extra-life.org/participant/cgardner
    The list of participants without donations is http://www.extra-life.org/index.cfm?fuseaction=donorDrive.eventParticipantList&page=629&eventID=512

    Read the article

  • SQL SERVER – Importance of User Without Login

    - by pinaldave
    Some questions are very open ended and it is very hard to come up with exact requirements. Here is one question I was asked at a recent User Group meeting.

    Question: "In recent versions of SQL Server we can create a user without a login. What is the use of it?"

    Great question indeed. Let me first attempt to answer this question, but after reading my answer I need your help. I want you to help him as well by adding more value to it.

    Answer: Let us visualize a scenario. An application has lots of different operations, and many of them are very sensitive operations. The common practice was to give the application a specific role with more permissions and a higher access level. When a regular user logs in (not a system admin), he/she might have very restrictive permissions. The application itself had a username and password, which means the application could log in directly to the database and perform the operation. Developers were well aware of the username and password as it was embedded in the application. When a developer left the organization or when the password was changed, the parts of the application where that username and password were used had to be changed. Additionally, developers were able to use the same username and password to log in directly to the database. In earlier versions of SQL Server there were application roles; these were later replaced by the "user without login".

    Now let us recreate the above scenario using this new "user without login". In this case, users will have to log in to SQL Server using their own credentials. This means that the user who is logged in will have his/her own username and password. Once logged in to SQL Server, the user will be able to use the application. The database should have another user, without a login, which has all the necessary permissions and rights to execute the various operations. The application will then be able to execute scripts by impersonating the "user without login" that has more permissions. The assumption here is that the user's own login does not have enough permissions and the user without login has more rights.

    If a user knows how the application is using the database and its various operations, he could switch context to the user without login, enabling him to make further modifications. Make sure to explicitly DENY VIEW DEFINITION permission on the database. This will make things more difficult for the user, as he will have to know exact details to get additional permissions.

    If a user is a system admin, all the details I just mentioned in the above three paragraphs do not apply, as an admin always has access to everything. Additionally, the method described above is just one possible architecture, and if someone is attempting to damage the system, they will still be able to figure out a workaround. You will have to put further auditing and policy-based management in place to prevent such incidents and accidents.

    I guess this is my answer. I have read it multiple times, but I still feel that I am missing something. There should be more to this concept than what I have just described. I have merely described one scenario, but there will be many more scenarios where this situation will be useful. Now it is your turn to help: please leave a comment with additional suggestions about where exactly "user without login" will be useful, as well as anything I missed in the scenario described above.
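    For readers who want to see the moving parts, here is a rough, hypothetical T-SQL sketch of the pattern described above (object and user names are illustrative only, not from the article):

        -- a database user with no server login; elevated rights are granted
        -- to this principal instead of to people or to an embedded app login
        create user AppWorker without login;
        grant select, update on dbo.Orders to AppWorker;
        go

        -- callers connect with their own restrictive logins; the procedure
        -- body runs under AppWorker's permissions via impersonation
        create procedure dbo.UpdateOrderStatus
            @OrderId int,
            @Status  varchar(20)
        with execute as 'AppWorker'
        as
        begin
            update dbo.Orders set Status = @Status where OrderId = @OrderId;
        end
        go

        -- as suggested above, make the wiring harder to discover
        deny view definition to RegularAppUser;

    In this sketch, a regular user who can execute dbo.UpdateOrderStatus can update an order through the procedure, yet has no direct rights on dbo.Orders and, with VIEW DEFINITION denied, cannot easily inspect how the impersonation is set up.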
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Attending a Career Fair: “Don’t be shy – Be prepared”

    - by jessica.ebbelaar
    There are a large number of ways to interact with companies nowadays. The career fair is a very effective and personal way to interact with a number of different companies in a very short period of time. Here are some simple tips to help you perform during a career fair.

    Do research
    The key to being successful at a career fair is to do research before you go. Make a first selection of the companies you feel could be interesting for you, and include many types of employers. Once you have decided on the list of companies you want to visit, go to their career portals. Inform yourself about what each company does: what roles they have available, how the company culture is described, and what impression the testimonials give you. The questions that you still have after reviewing this information are the ones you can discuss with the company at the fair.

    Sell yourself
    Visit the companies you have on your top 5 list first, so you will be at your highest energy level to make that first impression. Think in advance about what you are going to tell the recruiter. Prepare a 30-second introduction (including degree, strengths, skills & experience). Be confident when you talk about your experience. Remember to start the conversation with a smile, make good eye contact, and give a firm handshake. You could be speaking to your next manager, so be professional! If you already know what jobs you are interested in, relate your skills and experience to the roles that the company has available. If you are not yet sure, gather as much information as you can about employment and/or hiring procedures, specific skills necessary for different jobs, training, and career paths.

    Stand out
    As career fairs are very crowded and the attending companies meet with a lot of potential candidates in one day, you have to make sure you are noticed in a positive way. Good preparation and asking questions that show you have a good understanding of the industry, organization, and roles will help you. Be aware of the time demands on employers; do not monopolize an employer's time. Dress appropriately to make a good first impression.

    Bring your resume
    Do not forget to bring your resume, in print or on a USB stick, to the fair. If you are searching for different types of jobs, bring different versions of your resume. Your resume should be short and professional, on white paper that is free of graphics or fancy print styles, and contain larger margins for interviewer notes.

    Follow up
    After each conversation, ask whom you can contact for follow-up discussions about the specific roles. Use the back of a business card to record notes that help you remember important details and follow-up instructions. If no card is available, record the contact information and your comments in your notepad or phone. Last but not least, thank everyone you talk to for their time. Follow up as soon as possible with thank-you notes that address the companies' hiring needs, your qualifications, and your desire for a second interview.

    What not to do...
    Do not visit a company with a group of friends; interact with the companies on your own to make your own positive impression. Do not walk up to a recruiter and interrupt a current conversation; wait your turn and be polite. What you should absolutely avoid is a grab-and-run on freebies! Take the time to speak to the company, and ask for a freebie at the end of the conversation in case one is not offered to you.

    Good luck with the preparations for the career fair you will attend. Oracle recruiters look forward to meeting you! They will be present at a large number of fairs in the region. For an overview of the fairs, go to the Events & Calendar page on http://campus.oracle.com. If you have any questions related to this article, feel free to contact [email protected].

    Read the article

  • OPN Specialized Latest News (15th November)

    - by swalker
    HELPING YOU TO SPECIALIZE

    WebCenter Implementation Specialist Exam Preparation Webcasts: WebCenter Content and WebCenter Portal
    Oracle Partner Network would like to invite you to refresher courses for WebCenter Content and WebCenter Portal, to help partners prepare for the WebCenter Implementation Specialist exams. This is a 3-hour intensive refresher, partner-only training session, providing attendees with an overview of WebCenter Content and WebCenter Portal functions and related topics. After the refresher part you will be able to take the relevant Implementation Specialist exam, depending on your personal focus. NOTE: This is only suitable for experienced WebCenter Content or WebCenter Portal practitioners.
    Who should attend? Partner consultants who want to become an Oracle WebCenter Content or WebCenter Portal Certified Implementation Specialist, or both, which will help them differentiate themselves in front of customers and support their companies in becoming Specialized. Webcast details: Click here to read more...

    Specialized Partners Only! New Service to Promote Your Events
    The Partner Event Publisher has just been made available to all specialized partners in EMEA. Partners now have the opportunity to publish their events to the Oracle.com/events site and spread the word on their upcoming live in-person and/or live webcast events. Click here to read more information and watch a short video demo.

    VADs Get Specialized
    Effective November 1, 2011, VADs with a valid Value Added Distributor Agreement will no longer be required to meet the customer reference requirements outlined in the business criteria section in order to become specialized. VADs must continue to meet all other business and competency criteria set forth in the applicable Knowledge Zone prior to specialization approval.

    New Certification: Pillar Axiom 600 Storage System
    Your opportunity to take the Pillar Axiom 600 Storage System Essentials (1Z0-581) exam is available now in beta. Pass the exam so you can become a Pillar Axiom 600 Storage Systems Implementation Specialist! Free vouchers are available for Oracle partners! If you would like to receive a free beta exam voucher, please send your request to [email protected] and include your name, business email address, company, and the exam name (Pillar Axiom 600 Storage System Essentials beta exam).

    New Certification Available: Oracle Utilities Customer Care and Billing
    Oracle Utilities Customer Care and Billing is a solution designed to help you meet market windows and regulatory deadlines while enjoying a low total cost of ownership and a high return on investment. Take the Oracle Utilities Customer Care and Billing 2 Essentials (1Z0-562) exam now to become an Oracle Utilities Customer Care and Billing 2 Essentials Implementation Specialist.

    MEASURING YOUR SUCCESS

    We had 1674 Specialized Partners covering 5364 Specializations. Please note that due to OPN contract renewals, at any given point in time there are valid Specialized Partners and Specializations which are temporarily not captured in the total statistics. An incremental 1961 individuals were accredited as Implementation Specialists, giving an EMEA cumulative total of 9598 Implementation Specialists. 26 ISVs obtained one or more Ready's, for a total of 53 Ready's.

    Don't forget! You can submit your own press releases to Oracle! Every time you achieve specialization we'd like to support you in getting the message out! Press guidelines and a submission link can be found on the OPN Portal here.

    Read the article

  • Nokia Lumia 920 Windows Phone 8 Announcement

    - by Tim Murphy
    Today Nokia and Microsoft had an event to officially introduce the Lumia 920. Below is a rundown of some of the things I found interesting.

    As a person who likes photography, there was a lot to drool over. The main feature that caught my attention was PureView with its optical stabilization. This alone should improve the majority of your pictures. Add to that the SmartShoot object remover, which uses multiple images to remove unwanted people or objects that move through your picture, and you never have to accept reality again.

    For the most part, the lenses concept introduced in Windows Phone 8 simply makes the camera easier to leverage. Of course that is Microsoft's selling point. One lens that caught my attention was the Bing lens. I have to say it is about time that we can take pictures and use them to search for answers using Bing. There were a couple of features shown that involved augmented reality. One was similar to the yapf application that is already in the market, which overlays restaurants and other destinations over live camera views. The other was using the navigation directions with a live view.

    Then you get down to some of the physical features of the Lumia 920. The one that got the most stage time is its great 2000 mAh battery, which can be charged wirelessly. They also pointed out the improved glare reduction of the 4.5 in. curved glass screen. This hardware improvement is taken further with software that detects glare conditions and adjusts the display attributes to enhance viewing ease. Adding to the wireless cool factor of the Lumia 920 are the general NFC capabilities. This was demonstrated with NFC docking stations as well as JBL speakers and headphones.

    There was one more hardware feature that I applauded. The super sensitive touch screen did away with one of my pet peeves with capacitive touch screens: you will never have to remove your gloves to operate your phone again. The mittens that they did the demo with looked more like boxing gloves.

    I was disappointed when Joe Belfiore said that they were only going to show a couple of new features of Windows Phone 8 and that we would hear more at future events. One of the things he did show is the ability to customize which buttons you prefer as defaults in IE10. For example, you could have the folders button where the refresh button normally is. He also showed that, at long last, you can natively take screenshots on your phone. Hopefully he will be back quickly to give us the rest of the features.

    The most disappointing part of the event was that we never found out when the phones would be released or how much they would cost. Let's hope this comes soon. Even with these couple of items still left on my wish list, I can't wait to get my hands on a Lumia 920.

    del.icio.us Tags: Windows Phone,Windows Phone 8,Nokia,Lumia,Lumia 920,Microsoft

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (statistics based on past existence), but this is not true if you have fewer than 6 updates to the temporary table. This can lead to poor performance of queries which are sensitive to the contents of temporary tables.

    I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue, but rather a temporary table statistics caching issue, which is part of the temporary object caching feature that was introduced in SQL Server 2005 and is also present in SQL Server 2008 and SQL Server 2012. In this customer's case we implemented a workaround to avoid the issue (see below for example workarounds).

    When temporary tables are cached, the statistics are not newly created but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal.

    We can work around this issue by disabling temporary table caching by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around this is to create an index.

    I think there might be many customers in such a situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution would be a more aggressive statistics update when the temporary table has a small number of rows and temporary table caching is used. I will open a Connect item to report this issue. Meanwhile you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter: SQL Server: General Statistics -> Active Temp Tables.
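    If you prefer T-SQL over Performance Monitor, the same counter can also be read from the standard performance-counter DMV; a quick sketch (for a default instance the object name is 'SQLServer:General Statistics'; named instances use a different prefix):

        -- read the Active Temp Tables counter without opening Performance Monitor
        select rtrim(object_name)  as object_name,
               rtrim(counter_name) as counter_name,
               cntr_value          as active_temp_tables
        from sys.dm_os_performance_counters
        where counter_name = 'Active Temp Tables';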
    The script to understand the issue and the workaround is listed below:

    set nocount on
    set statistics time off
    set statistics io off
    drop table tab7
    go
    create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
    go
    create index test on tab7(c2, c1, c3)
    go
    begin tran
    declare @i int
    set @i = 1
    while @i <= 50000
    begin
    insert into tab7 values (@i, 1, 'a')
    set @i = @i + 1
    end
    commit tran
    go
    insert into tab7 values (50001, 1, 'a')
    go
    checkpoint
    go
    drop proc test_slow
    go
    create proc test_slow @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_slow 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    exec test_slow 2
    go
    drop proc test_with_recompile
    go
    create proc test_with_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    --low reads on 3rd execution as expected for parameter '2'
    exec test_with_recompile 2
    go
    drop proc test_with_alter_table_recompile
    go
    create proc test_with_alter_table_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    --to avoid caching of temporary tables one can create a constraint
    --but this might lead to duplicate constraint name error on concurrent usage
    alter table #temp1 add constraint test123 unique(c1)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_alter_table_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_alter_table_recompile 2
    go
    drop proc test_with_index_recompile
    go
    create proc test_with_index_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    --to avoid caching of temporary tables one can create an index
    create index test on #temp1(c1)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    set statistics time on
    set statistics io on
    dbcc dropcleanbuffers
    go
    --high reads as expected for parameter '1'
    exec test_with_index_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_index_recompile 2
    go

    Read the article

  • Blink-Data vs Instinct?

    - by Samantha.Y. Ma
    In his landmark bestseller Blink, well-known author and journalist Malcolm Gladwell explores how human beings make seemingly instantaneous choices every day, in the blink of an eye, and how we "think without thinking." These situations actually aren't as simple as they seem, he postulates; and throughout the book, Gladwell seeks answers to questions such as:

    1. What makes some people good at thinking on their feet and making quick, spontaneous decisions?
    2. Why do some people follow their instincts and win, while others consistently seem to stumble into error?
    3. Why are some of the best decisions often those that are difficult to explain to others?

    In Blink, Gladwell introduces us to the psychologist who has learned to predict whether a marriage will last based on a few minutes of observing a couple; the tennis coach who knows when a player will double-fault before the racket even makes contact with the ball; the antiquities experts who recognize a fake at a glance. Ultimately, Blink reveals that great decision makers aren't those who spend the most time deliberating or analyzing information, but those who focus on key factors among an overwhelming number of variables, i.e., those who have perfected the art of "thin-slicing."

    In Data vs. Instinct: Perfecting Global Sales Performance, a new report sponsored by Oracle, the Economist Intelligence Unit (EIU) explores the roles data and instinct play in decision-making by sales managers and discusses how sales executives can increase sales performance through more effective territory planning and incentive/compensation strategies.

    If you are a sales executive, ask yourself this: "Do you rely on knowledge (data) when you plan out your sales strategy? If you rely on data, how do you ensure that your data sources are reliable, up-to-date, and complete? With the emergence of social media and the proliferation of both structured and unstructured data, how do you know that you are applying your information/data correctly and in context?"

    Three key findings in the report are:
    • Six out of ten executives say they rely more on data than instinct to drive decisions.
    • Nearly one half (48 percent) of incentive compensation plans do not achieve the desired results.
    • Senior sales executives rely more on current and historical data than on forecast data.

    Strikingly similar to what Gladwell concludes in Blink, the report's authors succinctly sum up their findings: "The best outcome is a combination of timely information, insightful predictions, and support data." Applying this insight is crucial to creating a sound sales plan that drives alignment and results. In the area of sales performance management, "territory programs and incentive compensation continue to present particularly complex challenges in an increasingly globalized market," say the report's authors. "It behooves companies to get a better handle on translating that data into actionable and effective plans."

    To help solve this challenge, Oracle Fusion CRM integrates forecasting, quotas, compensation, and territories into a single system. For example, Oracle Fusion CRM provides a natural integration between territories, which define the sales targets (e.g., collections of accounts) for the sales force, and quotas, which quantify those sales targets. In fact, territory hierarchy is a core analytic dimension for slicing and dicing sales results, using sales analytics and alerts to help you identify where problems are occurring.

    Start tapping into both data and instinct effectively today with Oracle Fusion CRM. Here is a short video to provide you with a snapshot of how it can help you optimize your sales performance.

    Read the article

  • The Connected Company: WebCenter Portal - Feedback - Analytics and Polls

    - by Michael Snow
    Guest Post by: Mitchell Palski, Staff Sales Consultant

    The importance of connecting peers has been widely recognized and socialized as a critical component of employee intranets. Organizations are striving to provide mediums for sharing knowledge and improving awareness across their enterprise. Indirectly, the socialization of your enterprise should lead to cost savings and improved product/service quality. However, the direct effects of connecting an organization's leadership with its employees are often overlooked. Oracle WebCenter Portal can help you bridge that gap by gathering implicit and explicit feedback.

    Implicit Feedback Through Usage Analytics

    Analytics allows administrators to track and analyze WebCenter Portal traffic and usage. Analytics provides the following basic functionality:
    • Usage Tracking Metrics: Analytics collects and reports metrics for common WebCenter Portal functions, including community and portlet traffic.
    • Behavior Tracking: Analytics can be used to analyze WebCenter Portal metrics to determine usage patterns, such as page visit duration and usage over time.
    • User Profile Correlation: Analytics can be used to correlate metric information with user profile information. Usage tracking reports can be viewed and filtered by user profile data such as country, company, or title.

    Usage analytics help measure how users interact with website content, allowing your IT staff and business analysts to make informed decisions when planning development for your next intranet enhancement. For example:
    • If users are not accessing your Announcements page and missing critical information that they need to be aware of, you may elect to use graphical links on the home page to direct more users to that page. As a result, the number of employee help requests to HR decreases.
    • If users are not accessing your News page to read recent articles, you may elect to stop spending as much time updating the page with new stories and cut costs in your communications department.
    • If you notice that there is a high volume of users accessing the Employee Dashboard page, your organization may decide to continue making personalization enhancements to the page and investing in the Portal tool that most users are accessing.

    Usage analytics aren't necessarily a new concept in the IT industry. What sets WebCenter Portal Analytics apart is:
    • Reports are tailored for WebCenter-specific tools
    • Reports can be added to a page as easily as a drag-and-drop

    Explicit Feedback Through Polls

    WebCenter Portal users can create, edit, take, and analyze online polls. With polls, you can survey your audience (such as their opinions and their experience level), check whether they can recall important information, and gather feedback and metrics.

    How many times have you been involved in a requirements discussion where someone has asked a question similar to "Well, how do you know that no one likes our home page?" and the response is "Everyone says they hate it! That's all anyone complains about."? No one has any measurable, quantifiable metric to gauge user satisfaction. Analytics measure usage, but your organization also needs to measure the quality of your portal as defined by the actual people that use it. With that information, your leadership can make informed decisions that will not only match usage patterns but also relate to employees on a personal level.

    The end result is a connection between employees and leadership that gives everyone in the organization a sense of ownership of their Portal, rather than the feeling that development decisions are segregated to leadership only. Polls can be created and edited through the Poll Manager. Polls and View Poll Results can easily be added to a page through drag-and-drop.

    What did we learn? Being a "connected" company doesn't just mean helping employees connect with each other horizontally across your enterprise. It also means connecting those employees to the decisions that affect their everyday activities. Through WebCenter Portal Usage Analytics and Polls, any decision that is made to remove a Portal page, update a Portal page, or develop new Portal functionality can be justified by quantifiable metrics. Instead of fielding complaints and hearing that your employees don't have a voice, give those employees a voice and listen!

    Read the article

  • Introducing the Oracle Linux Playground yum repo

    - by wcoekaer
    We just introduced a new yum repository/channel on http://public-yum.oracle.com called the playground channel. What we started doing is the following: when a new stable mainline kernel is released by Linus or GregKH, we internally build RPMs, test them, and do some QA work around them to keep track of what's going on with the latest development kernels. It helps us understand how performance moves up or down and, if there are issues, we try to help look into them and of course send that stuff back upstream.

    Many Linux users out there are interested in trying out the latest features, but there are some potential barriers to doing this. (1) In general, you are looking at an upstream development distribution, which means that everything changes, both in userspace (random applications) and in the kernel. Projects like Fedora are very useful, and for someone that wants to just see how the entire distribution evolves with all the changes, this is a great way to stay current. A drawback here, though, is that if you have applications that are not part of the distribution, there's a lot of manual work involved, or they might just not work because the changes are too drastic. The introduction of systemd is a good example. (2) When you look at many of our customers that are interested in our database products or applications, starting from a supported/certified userspace/distribution, like Oracle Linux, is a much easier way to get your feet wet in seeing what new/future Linux kernel enhancements could do.

    This is where the playground channel comes into play. When you install Oracle Linux 6 (which anyone can download and use from http://edelivery.oracle.com/linux), grab the latest public yum repository file http://public-yum.oracle.com/public-yum-ol6.repo, put it in /etc/yum.repos.d and enable the playground repo:

    [ol6_playground_latest]
    name=Latest mainline stable kernel for Oracle Linux 6 ($basearch) - Unsupported
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/playground/latest/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=1

    Now, all you need to do is type yum update and you will be downloading the latest stable kernel, which will install cleanly on Oracle Linux 6. Thus you end up with a stable Linux distribution where you can install all your software, and then download the latest stable kernel (at the time of writing this is 3.6.7) without having to recompile a kernel and without having to jump through hoops.

    There is of course a big, very important disclaimer: this is NOT for PRODUCTION use. We want to try and make it easy for people that are interested, from a user perspective, in where the Linux kernel is going, and make it easy to install and play around with new features, without having to learn how to compile a kernel and without necessarily having to install a complete new distribution with all the changes top to bottom.

    So we don't and won't introduce any new userspace changes; this project really is about making it easy to try out the latest upstream Linux kernels in a very easy way on an environment that's stable and that you can keep current, since all the latest errata for Oracle Linux 6 are published on the public yum repo as well. So: one repository location for all your current changes and the upstream kernels. We hope that this will get more users to try out the latest kernel and report their findings. We are always interested in understanding stability and performance characteristics.

    As new features go into the mainline kernel that could potentially be interesting or useful for various products, we will try to point them out on our blogs and give an example of how something can be used so you can try it out for yourselves. Anyway, I hope people will find this useful and that it will help increase interest in upstream development beyond reading lkml by some of the more non-kernel-developer types.

    Read the article

  • Separation of project responsibilities in new project

    - by dreza
    We have very recently started a new project (MVC 3.0) and some of our early discussion has been around how the work and development will be split amongst the team members to ensure we get the least amount of overlap of work and so help make it a bit easier for each developer to get on and do their work. The project is expected to take about 6 months - 1 year (although not all developers are likely to be on and might filter off towards the end), Our team is going to be small so this will help out a bit I believe. The team will essentially consist of: 3 x developers (1 a slightly more experienced and will be the lead) 1 x project manager / product owner / tester An external company responsbile for doing our design work General project/development decisions so far have included: Develop in an Agile way using SCRUM techniques (We are still very much learning this approach as a company) Use MVVM archectecture Use Ninject and DI where possible Attempt to use as TDD as much as possible to drive development. Keep our controllers as skinny as possible Keep our views as simple as possible During our discussions two approaches have been broached as too how to seperate the workload given our objectives outlined above. OPTION 1: A framework seperation where each person is responsible for conceptual areas with overlap and discussion primarily in the integration areas. The integration areas would the responsibily of both developers as required. View prototypes (**Graphic designer**) | - Mockups | Views (Razor and view helpers etc) & Javascript (**Developer 1**) | - View models (Integration point) | Controllers and Application logic (**Developer 2**) | - Models (Integration point) | Domain model and persistence (**Developer 3**) PROS: Integration points are quite clear and so developers can work without dependencies on others fairly easily Code practices such as naming conventions and style is more easily managed in regards to consistancy as primarily only one developer will be handling an area CONS: Completion of an entire feature becomes a bit grey as no single person is responsible for an entire feature (story?) A person might not have a full appreciation for all areas of the project and so code overlap might be lacking if suddenly that person left. OPTION 2: A more task orientated approach where each person is responsible for the completion of the entire task from view - controller - model. PROS: A person is responsible for one entire feature so it's "complete" state can be clearly defined Code overlap into different areas will occur so each individual has good coverage over the entire application CONS: Overlap of development will occur in all the modules and developers can develop/extend without a true understanding of what the original code owner was intending. This could potentially lead more easily to code bloat? Following a convention might be harder as developers are adding to all areas of the project If a developer sets up a way of doing things would it be harder to enforce the other developers to follow that convention or even build on it (or even discuss it?). Dunno.. Bugs could more easily be introduced into areas not thought about by the developer It's easier to possibly to carry a team member in so far as one member just hacks code together to complete a task whilst another takes time to build a foundation that could be used by others and so help make future tasks easier i.e. starts building a framework? 
    QUESTION: As is probably apparent, I'm more in favor of option 1; however, I'm interested to see how others have approached this, or what the standard or preferred way of undertaking such a project is. Or indeed any different approach to handling this?

    Read the article

  • Get Ready for Anytime, Anywhere Engagement

    - by Christie Flanagan
    Are you ready for 2015? According to IDC, 2015 is the year when more users are projected to access the internet using mobile devices than with PCs or other wired devices. There's no doubt that mobile devices are a critical means of communication today, and they are on track to become increasingly more important in the coming years. However, device formats are so varied that delivering a mobile web experience that will engage site visitors and enhance your brand can be a daunting task. Solutions that empower organizations to easily extend their web presence to the mobile channel, while saving significant time and effort in managing mobile sites, are now essential in our ever connected mobile world. So what are some of the things organizations should look for in such a solution?

    Mobile device form factors, networks, protocols, and browsers vary widely, and reformatting web content for thousands of different device and software combinations is a prohibitive task. An effective mobile solution can make this process seamless by automatically formatting designated web content for mobile delivery. By automatically detecting a site visitor's device configuration, the selected web content can be sized and formatted for optimal display on that particular device. This can save tremendous time in building, formatting, and maintaining individual websites or mobile applications for different mobile devices.

    It's not enough to simply support the thousands of different mobile device types that are out there. It's also critical to make it easy for marketers and other business users to manage mobile sites and mobile content. Those responsible for maintaining an organization's web and mobile experiences need the ability to edit content using rich text editor tools and then preview that content directly in the context of the mobile website and the traditional website, ideally from the same business user interface. Powerful capabilities such as these make managing the web experience for mobile devices easy, even with frequently changing content, across a multitude of different devices. When content or business needs change, the business user needs only to change site content once, and it is seamlessly deployed to the web and all mobile channels.

    Geo-location is another critical input to making the online experience engaging and relevant for web visitors who are increasingly mobile. A mobile solution should enable use of device GPS data to deliver location-based content and services to mobile website visitors. Organizations can provide mobile site visitors with location-sensitive search results, location-based offers and recommendations, integration of maps and directions into site content, and much more, all critical for meeting the needs of those on the go.

    To hear more about how mobile is changing the game, check out our recent webcast with Ted Schadler, Vice President, Principal Analyst, Forrester, where he discussed why mobile is the new face of engagement, or learn more about how to extend your web presence to the mobile channel with Oracle WebCenter Sites and Oracle WebCenter Sites Mobility Server.

    Read the article

  • I am not able to delete a corrupt NTFS partition on my pen drive. How can I force its deletion?

    - by yesuraj
    I formatted my 16GB pen drive with the NTFS file system in Windows Vista. After that I started copying some files; however, only a few files were copied before the copy operation hung, so I cancelled it. Now I am unable to use the pen drive. I don't need any of the files that I copied to it; I just want to be able to use the pen drive again.

    I have tried using Ubuntu to format the pen drive, but when I use fdisk to delete the partition, it looks like it is working when in fact it does not delete the partition. I am also unable to format it with any other file system. When I tried to use GParted, it threw the following error:

        Error mounting: mount exited with exit code 14:
        The disk contains an unclean file system (0, 0).
        The file system wasn't safely closed on Windows. Fixing.
        ntfs_attr_pread_i: ntfs_pread failed: Input/output error
        Failed to read NTFS $Bitmap: Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a
        SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows
        then reboot into Windows twice. The usage of the /f parameter is very
        important! If the device is a SoftRAID/FakeRAID then first activate
        it and mount a different device under the /dev/mapper directory
        (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the dmraid
        documentation for more details.

    When I searched the Internet I found help on how to recover the files, but I don't want to recover anything; I want to format the drive again. When I pressed w after deleting the partition, it took more time than previously. After that I removed the pen drive and re-inserted it, but the partition I had deleted was still present. If I type the command fdisk /dev/sdb without removing the pen drive after the partition is deleted, it returns the error message "Unable to open /dev/sdb". Here are the steps that I followed:

        root@yesuraj-ubuntu:~# fdisk /dev/sdb
        Command (m for help): d
        Selected partition 1
        Command (m for help): w
        The partition table has been altered!
        Calling ioctl() to re-read partition table.
        Syncing disks.

    The dmesg output is as follows:

        [ 6139.774753] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
        [ 6154.816941] usb 2-1.3: device descriptor read/64, error -110
        [ 6169.968908] usb 2-1.3: device descriptor read/64, error -110
        [ 6170.158427] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
        [ 6185.200638] usb 2-1.3: device descriptor read/64, error -110
        [ 6200.352572] usb 2-1.3: device descriptor read/64, error -110
        [ 6200.542093] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
        [ 6205.559460] usb 2-1.3: device descriptor read/8, error -110

    I used the dd command and it erased the partition table, but now when I connect the pen drive, dmesg contains this error message:

        [88143.437001] sdb: unknown partition table

    I am not able to create a partition using fdisk /dev/sdb; the error message says that it is unable to find the node. Other messages from dmesg follow below:

        [87100.531596] usb 2-1.3: new high speed USB device number 39 using ehci_hcd
        [87130.915257] usb 2-1.3: new high speed USB device number 40 using ehci_hcd
        [87135.932647] usb 2-1.3: device descriptor read/8, error -110
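
    For what it's worth, the repeated "device descriptor read ... error -110" lines usually point to a failing or badly seated USB device rather than to a partition-table problem, so re-partitioning may never succeed on this stick. Assuming the hardware is still usable, a typical wipe-and-rebuild sequence from Ubuntu looks like the sketch below (double-check with dmesg that /dev/sdb really is the pen drive before running it):

        sudo dd if=/dev/zero of=/dev/sdb bs=512 count=1   # zero the MBR, including the partition table
        sudo partprobe /dev/sdb                           # ask the kernel to re-read the now-empty table
        sudo fdisk /dev/sdb                               # n, p, 1, accept the defaults, then w to write
        sudo mkfs.vfat -F 32 /dev/sdb1                    # create a fresh FAT32 file system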

    Read the article

  • Why won't my Broadcom BCM4312 LP-PHY work with the STA driver?

    - by Jackson Taylor
    I tried the steps here for a 4312: https://help.ubuntu.com/community/WifiDocs/Driver/bcm43xx

    Both of these:

        sudo modprobe -r b43 ssb wl
        sudo modprobe wl

    return:

        FATAL: Module wl not found.
        FATAL: Error running install command for wl

    (the second message actually comes only from the second command). I tried broadcom-sta; it didn't work. What's confusing is that down below, in the next steps for STA with internet access, it says to use the bcmwl one. So I install that, and it succeeds, but with some errors:

        sudo apt-get install bcmwl-kernel-source
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following package was automatically installed and is no longer required:
          module-assistant
        Use 'apt-get autoremove' to remove it.
        The following NEW packages will be installed:
          bcmwl-kernel-source
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B/1,181 kB of archives.
        After this operation, 3,609 kB of additional disk space will be used.
        Selecting previously unselected package bcmwl-kernel-source.
        (Reading database ... 168005 files and directories currently installed.)
        Unpacking bcmwl-kernel-source (from .../bcmwl-kernel-source_5.100.82.112+bdcom-0ubuntu3_amd64.deb) ...
        Setting up bcmwl-kernel-source (5.100.82.112+bdcom-0ubuntu3) ...
        Loading new bcmwl-5.100.82.112+bdcom DKMS files...
        Building only for 3.5.0-21-generic
        Building for architecture x86_64
        Module build for the currently running kernel was skipped since the
        kernel source for this kernel does not seem to be installed.
        ERROR: Module b43 does not exist in /proc/modules
        ERROR: Module b43legacy does not exist in /proc/modules
        ERROR: Module ssb does not exist in /proc/modules
        ERROR: Module bcm43xx does not exist in /proc/modules
        ERROR: Module brcm80211 does not exist in /proc/modules
        ERROR: Module brcmfmac does not exist in /proc/modules
        ERROR: Module brcmsmac does not exist in /proc/modules
        ERROR: Module bcma does not exist in /proc/modules
        FATAL: Module wl not found.
        FATAL: Error running install command for wl
        update-initramfs: deferring update (trigger activated)
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.5.0-21-generic

        jtaylor991@jtaylor991-whiteHP:~$ sudo modprobe wl
        FATAL: Module wl not found.
        FATAL: Error running install command for wl

    Then I run the modprobe wl commands listed above, and they give the errors shown. It didn't work with the broadcom-sta driver either. I installed the b43 packages but nothing happened, and I don't know why, so those are still installed: firmware-b43legacy-installer, b43-fwcutter, and firmware-b43-lpphy-installer (yes, it is an LP-PHY) are currently installed.

    If I go into System Settings > Software Sources > Additional Drivers, it says "Using Broadcom 802.11 Linux STA wireless driver source from bcmwl-kernel-source (proprietary)", but bcmwl-kernel-source isn't installed. I could try again, but I remember rebooting and it still said this.

    What's funny is that it found wireless networks during the Ubuntu setup/installation. I don't remember if I got it to connect or not; I think it kept asking for a password when I put it in (yes, it was right; I showed the password and looked at it), so I just ignored it. But right now the Enable Wireless option in the top right is just gone; there's only Enable Networking, and I'm on ethernet on this HP Pavilion dv4-1435dx right here.

    If I run rfkill list, it shows:

        0: hp-wifi: Wireless LAN
            Soft blocked: no
            Hard blocked: no

    It was hard blocked at the beginning, but unblocking it makes no change. Also, the wireless key is a touch-sensitive button, and it appears to be always orange whether wireless is enabled or not, because when I touch it, "Hard blocked" toggles between yes and no in rfkill list. I think it was blue for a minute at one point. What is going on?!?! Help me! Lol, thanks for any and all of your time, guys. Oh yeah, this is a fresh install of Ubuntu 12.10.
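
    The DKMS output above contains the key line: "Module build for the currently running kernel was skipped since the kernel source for this kernel does not seem to be installed." In other words, the wl module was never compiled, which is why modprobe cannot find it. A sketch of the usual fix, assuming stock 12.10 package names (the log shows the 3.5.0-21-generic kernel):

        sudo apt-get install linux-headers-$(uname -r)          # headers DKMS needs to build wl
        sudo apt-get install --reinstall bcmwl-kernel-source    # re-runs the DKMS build against them
        sudo modprobe wl                                        # load the freshly built module

    Removing the b43 firmware packages first is probably also wise, since b43 and wl compete for the same hardware.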

    Read the article

  • At what point does "constructive" criticism of your code become unhelpful?

    - by user15859
    I recently started as a junior developer. As well as being one of the least experienced people on the team, I'm also a woman, which comes with all sorts of its own challenges working in a male-dominated environment. I've been having problems lately because I feel like I am getting too much unwarranted, pedantic criticism on my work. Let me give you an example of what happened recently.

    The team lead was too busy to push in some branches I made, so he didn't get to them until the weekend. I checked my mail, not really meaning to do any work, and found that my two branches had been rejected on the basis of variable names, making error messages more descriptive, and moving some values to the config file. I don't feel that rejecting my branch on this basis is useful. Lots of people were working over the weekend, and I had never said that I would be working. Effectively, some people were probably blocked because I didn't have time to make the changes and resubmit. We are working on a project that is very time-sensitive, and it seems to me that it's not helpful to outright reject code based on things that are transparent to the client. I may be wrong, but it seems like these kinds of things should be handled in patch-type commits when I have time.

    Now, I can see that in some environments this would be the norm. However, the criticism doesn't seem equally distributed, which is what leads to my next problem. The basis of most of these problems was the fact that I was in a codebase that someone else had written and was trying to be minimally invasive. I was mimicking the variable names used elsewhere in the file. When I stated this, I was bluntly told, "Don't mimic others, just do what's right." This is perhaps the least useful thing I could have been told. If the code that is already checked in is unacceptable, how am I supposed to tell what is right and what is wrong? If the basis of the confusion was coming from the underlying code, I don't think it's my responsibility to spend hours refactoring a whole file that someone else wrote (and which works perfectly well), potentially introducing new bugs, etc.

    I'm feeling really singled out and frustrated in this situation. I've gotten a lot better about following the standards that are expected, and I feel frustrated that, for example, when I refactor a piece of code to ADD error checking that was previously missing, I'm only told that I didn't make the errors verbose enough (and the branch was rejected on this basis). What if I had never added it to begin with? How did it get into the code in the first place if it was so wrong? This is why I feel so singled out: I constantly run into this existing problematic code, which I either mimic or refactor. When I mimic it, it's "wrong", and if I refactor it, I'm chided for not doing enough (and if I go all the way, I risk introducing bugs, etc.). Again, if this is such a problem, I don't understand how any code gets into the codebase, and why it becomes my responsibility when it was written by someone else, who apparently didn't have their code reviewed.

    Anyway, how do I deal with this? Please remember that I said at the top that I'm a woman, and I'm sure these guys don't usually have to worry about decorum when they're reviewing other guys' code, but honestly that doesn't work for me, and it's causing me to be less productive. I'm worried that if I talk to my manager about it, he'll think I can't handle the environment, etc.

    Read the article

  • Drawing random smooth lines contained in a square [migrated]

    - by Doug Mercer
    I'm trying to write a MATLAB function that creates random, smooth trajectories in a square of finite side length. Here is my current attempt at such a procedure:

        function [] = drawroutes(SideLength, v, t)
        %DRAWROUTES Summary of this function goes here
        %   Detailed explanation goes here

        %Some parameters intended to help keep the particles in the box
        RandAccel = .01;
        ConservAccel = 0;
        speedlimit = .1;
        G = 10^(-8);

        %Initialize matrices
        Ax = zeros(v, 10*t);
        Ay = Ax; vx = Ax; vy = Ax; x = Ax; y = Ax;
        sx = zeros(v, 1);
        sy = zeros(v, 1);

        %Define initial position in square
        x(:,1) = SideLength*.15*ones(v,1) + (SideLength*.7)*rand(v,1);
        y(:,1) = SideLength*.15*ones(v,1) + (SideLength*.7)*rand(v,1);

        for i = 2:10*t
            %Measure minimum particle distance, component-wise, from the
            %boundary for each vehicle
            BorderGravX = [abs(SideLength*ones(v,1) - x(:,i-1)), abs(x(:,i-1))]';
            BorderGravY = [abs(SideLength*ones(v,1) - y(:,i-1)), abs(y(:,i-1))]';
            rx = min(BorderGravX)';
            ry = min(BorderGravY)';

            %Set the sign of the repulsive force
            for k = 1:v
                if x(k,i) < .5*SideLength
                    sx(k) = 1;
                else
                    sx(k) = -1;
                end
                if y(k,i) < .5*SideLength
                    sy(k) = 1;
                else
                    sy(k) = -1;
                end
            end

            %Calculate acceleration with a random "nudge" and the repulsive force
            Ax(:,i) = ConservAccel*Ax(:,i-1) + RandAccel*(rand(v,1) - .5*ones(v,1)) + sx*G./rx.^2;
            Ay(:,i) = ConservAccel*Ay(:,i-1) + RandAccel*(rand(v,1) - .5*ones(v,1)) + sy*G./ry.^2;

            %Ad hoc method of trying to stop particles from jumping
            %outside of the feasible region
            for h = 1:v
                if abs(vx(h,i-1) + Ax(h,i)) < speedlimit
                    vx(h,i) = vx(h,i-1) + Ax(h,i);
                elseif (vx(h,i-1) + Ax(h,i)) < -speedlimit
                    vx(h,i) = -speedlimit;
                else
                    vx(h,i) = speedlimit;
                end
            end
            for h = 1:v
                if abs(vy(h,i-1) + Ay(h,i)) < speedlimit
                    vy(h,i) = vy(h,i-1) + Ay(h,i);
                elseif (vy(h,i-1) + Ay(h,i)) < -speedlimit
                    vy(h,i) = -speedlimit;
                else
                    vy(h,i) = speedlimit;
                end
            end

            %Update position
            x(:,i) = x(:,i-1) + (vx(:,i-1) + vx(:,i))/2;
            y(:,i) = y(:,i-1) + (vy(:,i-1) + vy(:,1))/2;
        end

        %Plot positions
        clf; hold on;
        axis([-100, SideLength+100, -100, SideLength+100]);
        cc = hsv(v);
        for j = 1:v
            plot(x(j,1), y(j,1), 'ko')
            plot(x(j,:), y(j,:), 'color', cc(j,:))
        end
        hold off;
        end

    My original plan was to place particles within a square and move them around by allowing their acceleration in the x and y directions to be governed by a uniformly distributed random variable. To keep the particles within the square, I tried to create a repulsive force that would push the particles away from the boundaries of the square. In practice, the particles tend to leave the desired "feasible" region after a relatively small number of time steps (say, 1000).

    I'd love to hear your suggestions on either modifying my existing code or considering the problem from another perspective. When reading the code, please don't feel the need to get hung up on any of the ad hoc parameters at the very beginning of the script. They seem to help, but I don't believe any besides the "G" constant should truly be necessary to make this system work. Here is an example of the current output: many of the vehicles have found their way outside of the desired square region, [0,400] x [0,400].
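
    Two observations for anyone reading along (guesses from inspection, not tested fixes): in the sign-setting loop, the test reads x(k,i) < .5*SideLength, but column i of x has not been computed yet at that point (it is still zero), so sx is presumably always +1; x(k,i-1) looks like what was intended, and likewise for y. Similarly, the y-position update averages vy(:,i-1) with vy(:,1), where the symmetric x-update uses vx(:,i); that stray 1 looks like a typo for i. Either slip could help explain particles drifting out of the box. A minimal call to reproduce the described output, matching the [0,400] x [0,400] square and the roughly 1000 steps mentioned above (argument values are only an example):

        drawroutes(400, 5, 100);   % 400-unit square, 5 vehicles; the loop runs 10*t = 1000 steps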

    Read the article

  • Get More From Your Service Request

    - by Get Proactive Customer Adoption Team
    Leveraging Service Request Best Practices

    Use best practices to get there faster. In the daily conversations I have with customers, they sometimes express frustration over their Service Requests. They often feel powerless to make needed changes, so their sense of frustration grows. To help you avoid some of that frustration in dealing with your Service Requests (SRs), here are a few pointers that come from our best practice discussions.

    Be proactive. If you can anticipate some of the questions that Support will ask, or the information they may need, try to provide this up front, when you log the SR. This could be output from the Remote Diagnostic Agent (RDA) if this is a database issue, or the output from another diagnostic tool if you're an EBS customer. Any information you can supply that helps us understand the situation better helps us resolve the issue sooner. As you use some of these tools proactively, you might even find the solution to the problem before you log an SR!

    Be right. Make sure you have the correct severity level. Since you select the initial severity level, it's easy to accept the default without considering how significant this may be. Business impact is the driving factor, so take a moment to select the severity level that is appropriate to the situation. Also, ask us to change the severity level should the situation dictate.

    Be responsive! If this is an important issue to you, quickly follow up on any action plan submitted to you by Oracle Support. The support engineer assigned to your Service Request can move the issue forward more aggressively when they have the needed information. This is crucial in resolving your issues in a timely manner.

    Be thorough. If there are five questions in the action plan, provide an answer to all five questions in one response, rather than trickling them in one at a time. This allows the engineer to look at all of the information as a whole, avoids multiple trips to your SR, saves valuable time, and gets you a resolution sooner.

    Be your own advocate! You know your situation best; make sure Oracle Support understands both how and why this issue is important to you and your company. Use the escalation process if you're concerned that your SR isn't going in the right direction, at the right pace, or through the right person. Don't wait until you're frustrated and angry. An escalation is as simple as a quick conversation on the phone and can be amazingly effective in getting your issues back on track. The support manager you speak with is empowered to make any needed changes.

    Be our partner. You can make your support experience better. When your SR has been resolved, you may receive a survey request. This is intended to get your feedback about how your SR went and what we can do to improve your overall support experience.

    Oracle Support is here to help you. Our goal with any Service Request is to provide the best possible solution as quickly as possible. With your help, we'll be able to do this with your Service Request too.

    Read the article

  • Zero to Cloud: One-stop shop for resources to accelerate your transformation to enterprise private cloud

    - by Anand Akela
    During Oracle OpenWorld 2012 last week, Oracle introduced the "Zero to Cloud" resource center to help you accelerate your transformation to enterprise private cloud. To help organizations deploy a fully operational, enterprise-grade private cloud environment in as little as half a day, Oracle has brought key content together into this single, user-friendly resource center.

    The resource center launches just as the Oracle Cloud Builder Summit series moves into full swing. Designed for executives, cloud architects, and IT operations professionals, the day-long event series will eventually reach nearly 100 cities around the globe. During this event, an interactive "Zero to Cloud" session will showcase the transformational journey of a fictitious enterprise to the private cloud using the latest solutions from Oracle, including Oracle Database, Oracle Fusion Middleware, Oracle VM and Oracle Enterprise Manager, as well as Oracle's full range of engineered systems.

    The online "Zero to Cloud" resource center includes best practices from Oracle experts and early adopter customers, as well as interviews with the Oracle development executives responsible for Oracle's private cloud solutions and roadmap. It also includes a new self-assessment quiz that can help determine readiness for a successful private cloud deployment. Once you've determined organizational readiness, explore early adopter tips, demos, guides, exclusive white papers and more at the "Zero to Cloud" resource center.

    Read the article

  • Dual monitor not working completely in 12.10 after upgrade

    - by Mark Baldridge
    At 12.04, dual monitors worked perfectly. After upgrading to 12.10, the primary monitor works, but the second monitor only partly works. I am sure there is some difference between the releases that I have missed setting properly.

    System Settings > Displays shows both monitors correctly as Acer 22" monitors at 1680x1050 (16:10). An icon on monitor 2 is present but elongated, almost an artifact; the other icons from the primary screen are absent, but this one icon is there on the second monitor. Icons on both screens can be selected. Painting is weird on monitor 2. The launcher exists and works on both screens, but even with sticky edges off, the cursor stops at the left edge of monitor 2. Clicking Text Editor in the screen 2 launcher will launch gedit there, but if I drag the window, it leaves a trail of after-images, as if repainting is failing. If I move the cursor along the launcher, help tags like "LibreOffice Writer" appear but stay on screen unless I drag the active gedit window over them; then part of the help bubbles are overwritten, leaving after-images of the gedit window on screen.

    What is really fascinating is that System Settings > Displays is now ignoring monitor selection, after allowing it earlier. Just before this, the help popup which said "Select a monitor to change its properties; drag to rearrange its placement" actually let me do that. Maybe it is a trick of where I grab the edge of the monitor in the Displays setting; I just found a working handle. When I drag monitor 1 to the right of monitor 2, hit "Apply" and confirm, both monitors work normally (although the right monitor lets the cursor slide off its right edge onto the left edge of monitor 1, which sounds correct). Painting of windows does not leave an after-image.

    However, the success is only temporary. The setting survives a reboot, but painting on the left monitor, now monitor 2, replicates the issues from before. The after-image of the gedit window and the small window for "Are you sure you want to close all programs and restart the computer?" are still on monitor 2 (on the left now), even though they are not real windows, nor do they have processes behind them. Curiously, in Displays, the "green" monitor on the left in the display window is matched by the right monitor's color in its upper left corner, which probably makes sense, as the one on the right is now monitor 1. If I repeat the "drag the left monitor to the right of the right monitor" step in the Displays window, things are oriented properly, with no display artifacts as I drag windows around either screen. The description bubbles that pop up are also overwritten properly on both screens, so none of those artifacts either. This goodness does not survive a reboot, however. I have not tried logging out and back in.

    All of this came after positing that the motherboard VGA and HDMI ports could be the issue, so I installed an e-GeForce 7600 GT Dual DVI card (I know the web thinks it is not DVI but VGA; the connectors, however, are DVI). No change to the weird behavior: the good parts continue to work, the weirdness also persists, and swapping monitor positions seems to cure the issue.

    So, is there a setting I have missed? Given that "swapping" monitor 1 and 2 in System Settings > Displays makes it work, just not across a boot, I suspect so.
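
    One workaround worth noting: the Displays panel is only a front end for RandR, so the working "swapped" arrangement can also be applied from a terminal with xrandr, and re-applied at each login via Startup Applications. A sketch with assumed output names; run xrandr with no arguments first to see the real ones (likely DVI outputs on this card):

        xrandr                                          # list connected outputs and their current modes
        xrandr --output DVI-0 --auto --right-of DVI-1   # recreate the arrangement that paints correctly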

    Read the article

  • Setting up your project

    - by ssoolsma
    Before any coding, we first make sure that the project is set up correctly. (Please note that this blog is all about how I do it, so in case I forget, I can return here and read how I used to do it. Maybe you'll come up with some ideas for yourself too.) In this series we will create a minigolf scoring card. Please note that we will eventually create a fully functional application which you cannot use unless you pay me a lot of money! (And I mean a lot!)

    1. Download and install the appropriate tools.

    Download the following:
    - TestDriven.Net (free version on the bottom of the download page)
    - nUnit

    TestDriven.Net is a Visual Studio plugin that supports many unit test frameworks and allows you to run or test code very easily with a right click -> Run Test. nUnit is the test framework of choice; it works seamlessly with TestDriven.Net.

    2. Create your project.

    Fire up Visual Studio and create your data access project: MidgetWidget.DataAccess is its name. (I chose MidgetWidget as the name for the solution.) Also, make sure that the MidgetWidget.DataAccess project is a C# Class Library. Hit OK to create the solution. (In the above example the checkbox "Create directory for solution" is checked, because I'm pointing the location to the root of c:\development, where I want MidgetWidget to be created.)

    3. Set up the database.

    You should have thought about a database when you reach this point. Let's assume that you've created a database as follows:

    Table name: LoginKey
    Fields: Id (PK), KeyName (uniqueidentifier), StartDate (datetime), EndDate (datetime)

    Table name: Party
    Fields: Id (PK), Key (uniqueidentifier), Created (datetime)

    Table name: Person
    Fields: Id (PK), PartyId (int), Name (varchar)

    Table name: Score
    Fields: Id (PK), TrackId (int), PersonId (int), Strokes (int)

    Table name: Track
    Fields: Id (PK), Name (varchar)

    One thing to take note of about the database setup: I've singularized all table names (not "Persons" but "Person"). This is because in a few minutes, when this is in our code, we refer to the database objects as single rows. We retrieve a single Person, not a single "Persons", from the database.

    4. Create the entity framework model.

    In your solution tree, create a new folder and call it "DataModel". Inside this folder: Add New Item -> choose ADO.NET Entity Data Model. Name it "Entities.edmx" and hit "Add". Once the edmx is added, open it (double click), right click the white area, and choose "Update Model from Database...". Now point it to your database (I include sensitive data in the connection string) and select all the tables. After that, hit "Finish" and let the Entity Framework do its code generation. Et voilà: after a few seconds you have set up your entity model.

    Next post we will start building the data access! I'm off to the beach.
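
    Since TestDriven.Net and nUnit are already installed, a quick smoke test is a nice way to prove that the generated model and connection string actually work before we build the real data access layer. A minimal sketch, assuming the generated ObjectContext is named Entities (after Entities.edmx) and that the designer pluralized the entity sets (Tracks and so on; check the generated designer file if in doubt):

        using System.Linq;
        using NUnit.Framework;

        namespace MidgetWidget.DataAccess.Tests
        {
            [TestFixture]
            public class EntityModelSmokeTest
            {
                [Test]
                public void CanReadTracksFromTheModel()
                {
                    // Open the generated context and run one trivial query;
                    // reaching the assert proves the edmx and connection string work.
                    using (var context = new Entities())
                    {
                        var tracks = context.Tracks.OrderBy(t => t.Name).ToList();
                        Assert.IsNotNull(tracks);
                    }
                }
            }
        }

    Run it with TestDriven.Net's right click -> Run Test on the method.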

    Read the article

  • Oracle Applications Cloud Release 8 Customization: Your User Interface, Your Text

    - by ultan o'broin
    Introducing the User Interface Text Editor

    In Oracle Applications Cloud Release 8, there's an addition to the customization tool set, called the User Interface Text Editor (UITE). When signed in with an application administrator role, users launch this new editing feature from the Navigator's Tools > Customization > User Interface Text menu option. See how the editor is in there with the other customization tools?

    [Figure: User Interface Text Editor is launched from the Navigator Customization menu]

    Applications customers need a way to make changes to the text that appears in the UI without having to initiate an IT project. Business users can now easily change labels on fields, for example. Using a composer and an activated sandbox, these users can take advantage of Oracle Metadata Services (MDS), add a key to a text resource bundle, and then type in their preferred label and its description (as a best practice for further work, I'd recommend always completing that description).

    [Figure: Changing a simplified UI field label using Oracle Composer]

    In Release 8, the UITE enables business users to easily change UI text on a much wider basis. As with composers, the UITE requires an activated sandbox where users can make their changes safely before committing them for others to see. The UITE is used for editing UI text that comes from Oracle ADF resource bundles or from the Message Dictionary (or FND_MESSAGE_% tables, if you're old enough to remember such things). Functionally, the Message Dictionary is used for the text that appears in business rule-type error, warning, or information messages, or as a text source when ADF resource bundles cannot be used. In the UITE, these Message Dictionary texts are referred to as Multi-part Validation Messages.

    If the text comes from ADF resource bundles, then it's categorized as User Interface Text in the UITE. This category refers to the text that appears in embedded help in the UI or in simple error, warning, confirmation, or information messages. The embedded help types used in the application are explained in an Oracle Fusion Applications User Experience (UX) design pattern set. The message types have a UX design pattern set too.

    Using the UITE

    The UITE enables users to search and replace text in UI strings, with case-sensitive options as well as filtering by type. Users select singular and plural options for text changes, should they apply.

    [Figure: Searching and replacing text in the UITE]

    The UITE also provides users with a way to preview and manage changes on an exclusion basis before committing to the final result. There might, for example, be situations where a phrase or word needs to remain different from how it's generally used in the application, depending on the context.

    [Figure: Previewing replacement text changes. Changes can be excluded where required.]

    Multi-Part Messages

    The Message Dictionary table architecture has been inherited from Oracle E-Business Suite days. However, there are important differences in the Oracle Applications Cloud version, notably the additional message text components, as explained in the UX design patterns. Message Dictionary text has a broad range of uses, as indicated, and it can also be reserved for internal application use, for use by PL/SQL and C programs, and so on. Message Dictionary text may even be concatenated together at run time, where required. The UITE handles the flexibility of such text architecture by enabling users to drill down on each message and see how it's constructed in total. That way, users can ensure that any text changes being made are consistent throughout the different message parts.

    [Figure: Multi-part (Message Dictionary) message components in the UITE]

    Message Dictionary messages may also use supportability-related numbers, the ones that appear appended to the message text in the application's UI. However, should you have the requirement to remove these numbers from users' view, the UITE is not the tool for the job. Instead, see my blog about using the Manage Messages UI.

    Read the article
