Search Results

Search found 71953 results on 2879 pages for 'work environment'.

Page 309/2879

  • SharePoint Content Type Cheat Sheet

    - by Bil Simser
    Principle
    Any application or solution built in SharePoint must use a custom content type rather than adding columns to lists. The only exception to this is one-off solutions that have no life-cycle, proof-of-concepts, etc.
    Creating Content Types
    - Web UI. Not portable, POC only.
    - C# or Declarative (XML). Must deploy these as Features (see the C# sketch below).
    Rule
    Do not change the base XML for a Content Type after deploying. The only exception to this rule is that you can re-deploy a modified Content Type definition only after completely removing it from the environment (either programmatically or by hand).
    Updating Content Types (update and push down to child types)
    - Web UI. Manual for each environment. Document the steps required for repeatability.
    - Feature Upgrade. Preferred solution.
    - C#. If you created the content type through code you might want to go this route.
    - Create new modified Content Types and hide the old one. Not recommended but useful for legacy.
    References
    - Create Custom Content Types in SharePoint 2010 (C#)
    - Content Type Definitions (XML)
    - Creating Content Types (XML and C#)
    - Updating Approaches
    - Updating Child Content Types
    Agree or disagree?
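    For the C# route, a minimal sketch of creating and later updating a custom content type through the server object model is shown below; the "Invoice" content type, the "InvoiceAmount" site column and the surrounding feature-receiver context are illustrative assumptions, not part of the original cheat sheet.

        using Microsoft.SharePoint;

        public static class InvoiceContentTypeProvisioner
        {
            // Typically called from a Feature activation event receiver.
            public static void Provision(SPWeb web)
            {
                // Derive from an out-of-the-box parent (Document) instead of adding columns to a list.
                SPContentType parent = web.AvailableContentTypes[SPBuiltInContentTypeId.Document];
                var invoice = new SPContentType(parent, web.ContentTypes, "Invoice");

                // Link an existing site column (assumed to be provisioned already) to the content type.
                invoice.FieldLinks.Add(new SPFieldLink(web.Fields.GetFieldByInternalName("InvoiceAmount")));

                web.ContentTypes.Add(invoice);
            }

            // Later changes also go through the object model; passing true pushes the
            // update down to child content types, per the "update and push down" approach.
            public static void Update(SPWeb web)
            {
                SPContentType invoice = web.ContentTypes["Invoice"];
                invoice.Group = "Finance Content Types";
                invoice.Update(true);
            }
        }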

    Read the article

  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
    While it’s no longer headline news that governments have carried out large-scale data-mining programmes aimed at terrorism detection and at identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics of and justification for this action will clearly continue for some time to come. What is becoming clear is that these programmes are a framework for the collation and aggregation of massive amounts of unstructured data and, from this, the creation of actionable intelligence from analyses that allowed the analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections. Although Governance, Risk and Compliance (GRC) professionals are not looking at the implementation of such programmes, there are many similar GRC “Big Data” challenges to be faced and potential lessons to be learned from these high-profile government programmes that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence, covering hidden signs and patterns of criminal activity, the early or retrospective violation of regulations/laws/corporate policies and procedures, emerging risks, weakening controls, etc.? Not exactly the stuff of James Bond to be sure, but it is certainly more applicable to most GRC professionals’ day-to-day challenges. So what is Big Data and how can it benefit the GRC process? Although it often varies, the definition of Big Data largely refers to the following types of data:
    - Traditional Enterprise Data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data.
    - Machine-Generated/Sensor Data – includes Call Detail Records (“CDR”), weblogs and trading systems data.
    - Social Data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook.
    The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it’s often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester there are four key characteristics that define big data:
    - Volume. Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by IT systems that power the enterprise. It includes live data from packaged and custom applications – for example, app servers, Web servers, databases, networks, virtual machines, telecom equipment, and much more.
    - Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships valuable to customer relationship management, as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data means that large volumes (over 8 TB per day) need to be managed.
    - Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change. Without question, all GRC professionals work in a dynamic environment, and as new services, new products or new business lines are added, or new marketing campaigns are executed, new data types are needed to capture the resultant information.
    - Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action. For example, customer service calls and emails have millions of useful data points and have long been a source of information to GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the number of customer complaints. Now, on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross-referenced against what is being said about the organization, or a similar peer organization, on social media. The organization can then take positive actions, communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policy and procedures and completely minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team(s) has demonstrated real and tangible business value.
    Big Challenges - Big Opportunities
    As pointed out by recent Forrester research, high-performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in Big Data. "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012) As pointed out earlier, with the ever-increasing volume of regulatory demands and fines for getting it wrong, limited resource availability and out-of-date or inadequate GRC systems all contributing to a higher cost of compliance and/or a higher risk profile than desired, a big data investment in GRC clearly falls into this category. However, to make the most of big data, organizations must evolve both their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and be able to integrate them with the pre-existing company data to be analyzed. GRC big data clearly allows the organization access to, and management over, a huge amount of often very sensitive information that, although it can help create a more risk-intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands for better information security and data protection, the sheer amount of information organizations deal with, and the need to quickly access, classify, protect and manage that information, can quickly become a key issue from a legal as well as a technical or operational standpoint.
    However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected.
    The Right GRC & Big Data Partnership Becomes Key
    The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping kick-start, or even overseeing a big data project. To make a big data GRC initiative work and get the desired value, partnerships with companies who have a long history of delivering successful GRC solutions, as well as being at the very forefront of technology innovation, become key. Clearly, solutions can be built in-house more cheaply than through a vendor, but as has been proven time and time again when it comes to self-built solutions covering AML and Fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations or challenges that the GRC teams face on a daily basis. This has led to the creation of the GRC silos that are causing so many headaches today. The solutions that stand out and should be explored are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, utilizing Big Data technologies to generate new GRC insights right across the enterprise. Ultimately, Big Data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones that are well positioned to make the most of it.
    A Blueprint and Roadmap Service for Big Data
    Big data adoption is first and foremost a business decision. As such, it is essential that your partner can align your strategies, goals, and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish practical, effective governance that will maintain a well-managed environment going forward. Key Activities: While your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:
    - Clearly define your drivers, strategies, goals, objectives and requirements as they relate to big data
    - Conduct a big data readiness and Information Architecture maturity assessment
    - Develop a future-state big data architecture, including views across all relevant architecture domains: business, applications, information, and technology
    - Provide initial guidance on big data candidate selection for migrations or implementation
    - Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business
    - Provide recommendations for practical, effective Data Governance, Data Quality Management, and Information Lifecycle Management to maintain a well-managed environment
    - Conduct an executive workshop with recommendations and next steps
    There is little debate that managing risk and data are the two biggest obstacles encountered by financial institutions.
Big data is here to stay and risk management certainly is not going anywhere, and ultimately financial services industry organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be best positioned to make the most of it. Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

    Read the article

  • How to decide on a price for the project as a freelancer

    - by Shekhar_Pro
    I have seen similar questions on this SE site, but none comes close to a sure-shot answer and many are rather subjective. So I am taking a website as an example, to be more objective, for you to decide the development price I should quote for the complete work. I would like specific figures. In the past I developed many projects for my classmates (computer science and a few .NET) when I was in college, and there I just arbitrarily quoted the price I would take depending on my mood and the customer's ability to pay, usually ranging from Rs. 500 (about $10 USD) to Rs. 1500 (about $30 USD). I have also developed a few websites, but those were open-source and free. But this time, impressed by my work, a client wants to get a website developed similar to this: [ http://www.jeetle.in/ ]. So taking this website as an example, tell me how much I should charge for the complete work, from design to payment gateway implementation (excluding the charge the payment gateway provider will take). Some information you might like to consider: I am the only developer on this project, if that makes any difference. I would be using ASP.NET and MSSQL Express for server-side processing and jQuery on the client. The time period offered for development is about 4 to 6 weeks. It's like I know my work but not how much I'm worth.

    Read the article

  • Does this BSD-like license achieve what I want it to?

    - by Joseph Szymborski
    I was wondering if this license is:
    - self-defeating
    - just a clone of an existing, better-established license
    - practical
    - any more "corporate-friendly" than the GPL
    - too vague/open-ended
    and finally, if there is a better license that achieves a similar effect? I wanted a license that would (in simple terms):
    - be as flexible/simple as the "Simplified BSD" license (which is essentially the MIT license)
    - allow anyone to make modifications as long as I'm attributed
    - require that I get a notification that such a derived work exists
    - require that I have access to the source code and be given license to use the code
    - not oblige the author of the derivative work to have to release the source code to the general public
    - not oblige the author of the derivative work to license the derivative work under a specific license
    Here is the proposed license, which is just the simplified BSD with a couple of additional clauses (all of which are bolded). Copyright (c) (year), (author) (email) All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
    - Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
    - Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
    - The copyright holder(s) must be notified of any redistributions of source code.
    - The copyright holder(s) must be notified of any redistributions in binary form.
    - The copyright holder(s) must be granted access to the source code and/or the binary form of any redistribution upon the copyright holder's request.
    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

    Read the article

  • How do I cut and paste commands from your blog?

    - by Maria Colgan
    At the recent ODTUG  Kscope 12 conference several people told me that they really enjoyed our blog on the Optimizer but were frustrated because they couldn’t cut and paste the commands used in the blog posts straight into their environment. Typically I use screen shots in the blog posts to make the commands clear but it does mean that it is impossible to cut and paste the commands into your environment. In order to get around this I have created a downloadable .sql script for each of our blog posts. You should now see the sentence “You can get a copy of the script I used to generate this post here”, appearing at the bottom of each blog post. Clicking on the link will open the .sql script that contains all of the commands used in the post. You can either save the entire script or just cut and paste the particular command you are interested in! I have added scripts for all of this year’s blog posts and am slowly making my way through our old posts until we have a script for everything we have posted to date. Hopefully this will help! +Maria Colgan

    Read the article

  • Unable to boot Windows after installing Ubuntu 12.04 - error: invalid efi file path

    - by user113350
    I have a laptop (ASUS X310A). I installed Ubuntu 12.04 to sit side by side with Windows 7, but I seem to have gotten a problem with booting Windows 7. I used Boot Repair twice with no results. Boot-Repair info: http://paste.ubuntu.com/1417623/ The error I get when starting Windows 7 from GRUB is: error: invalid efi file path In the Boot Manager/Menu, I have 3 options now: 2x for Ubuntu (maybe because I did boot-repair twice) and 1x Windows Boot Manager (if I boot this it opens the "ASUS Preload Wizard", which gives me the option to re-install Windows, losing all previous data). When I was making the partition before installing Ubuntu, I made the new partition by making sda4 smaller, adding an ext4 partition mounted at "/" and adding a swap area. I installed it and it didn't work; nothing worked. So I booted Ubuntu from the USB again, deleted the partitions I had made and decided to make sda3 smaller and create the partitions there, but this time it gave me the option to mount sda3 on "/windows" or "/dos". I ignored it and chose neither, because I know it doesn't need to be mounted, and proceeded to create what is now sda7 (ext4) and sda8 (swap area). It still didn't work, so I booted from USB and did the first boot-repair; I was then able to boot Ubuntu, but not Windows. When I did it through my USB I was not able to update boot-repair, so I decided to redo the boot-repair from Ubuntu running on the hard disk (fully updated) and it still didn't work. In GRUB this is what I see (when booting using Ubuntu as the first option in the Boot Menu): Ubuntu, with Linux 3.2.0-29-generic; Ubuntu, with Linux 3.2.0-29-generic (recovery mode); Windows UEFI loader; Windows Boot UEFI bootx64.efi.bkp; Windows 7 (loader) (on /dev/sda3); Windows Recovery Environment (loader) (on /dev/sda5). I tried all the ones starting with "Windows" and none of them work. Please help, many thanks.

    Read the article

  • Agile development challenges

    - by Bob
    With Scrum / user story / agile development, how does one handle scheduling out-of-sync tasks that are part of a user story? We are a small gaming company working with a few remote consultants who do graphics and audio work. Typically, graphics work should be done at least a week (sometimes 2 weeks) in advance of the code so that it's ready for integration. However, since Scrum is supposed to focus on user stories, how should I split the stories across iterations so that they still follow the user story model? Ideally, a user story should be completed by all the team members in the same iteration; I feel that splitting them in any way violates the core principle of user-story-driven development. Also, one front-end developer can work at twice the pace of the backend developers. However, that throws the scheduling out of sync as well, because he is either constantly ahead of them, or what we have done is to have him work on tasks that are not specific to this iteration just to keep busy. Either way, it's the same issue as above: splitting up user story tasks. If someone can recommend an active Google agile development group that discusses these and other issues, that'll be great. Also, if you know of a free alternative to Pivotal Labs, let me know as well. I'm looking now at Agilo.

    Read the article

  • Using OData to get Mix10 files

    - by Jon Dalberg
    There has been a lot of talk around OData lately (go to odata.org for more information) and I wanted to get all the videos from Mix ‘10: two great tastes that taste great together. Luckily, Mix has exposed the ‘10 sessions via OData at http://api.visitmix.com/OData.svc; now all I have to do is slap together a bit of code to fetch the videos.
    Step 1 (cut a hole in the box): Create a new console application and add a new service reference.
    Step 2 (put your junk in the box): Write a smidgen of code:

        static void Main(string[] args)
        {
            var mix = new Mix.EventEntities(new Uri("http://api.visitmix.com/OData.svc"));

            var files = from f in mix.Files
                        where f.TypeName == "WMV"
                        select f;

            var web = new WebClient();

            var myVideos = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyVideos), "Mix10");

            Directory.CreateDirectory(myVideos);

            files.ToList().ForEach(f =>
            {
                var fileName = new Uri(f.Url).Segments.Last();
                Console.WriteLine(f.Url);
                web.DownloadFile(f.Url, Path.Combine(myVideos, fileName));
            });
        }

    Step 3 (have her open the box): Compile and run. As you can see, the client reference created for the OData service handles almost everything for me. Yeah, I know there is some batch file to download the files, but it relies on cUrl being on the machine – and I wanted an excuse to work with an OData service. Enjoy!

    Read the article

  • NDC Oslo

    - by Alan Smith
    Originally posted on: http://geekswithblogs.net/asmith/archive/2013/06/14/153136.aspx
    2013 has been a hectic year for conference presentations so far. NDC in Oslo has been the 6th conference I have attended, and my session there was my 11th conference presentation this year. I have been meaning to make the short trip over from Stockholm to NDC for a few years, and this was the first time I made it. I have heard a lot of great things about the event, and was impressed with the location, the sessions, and most of all the atmosphere around the event booths and during the party on Thursday evening. The session I was delivering was my “Grid Computing with 256 Windows Azure Worker Roles & Kinect” demo, which I have delivered at many events over the past 12 months. The demo went fine. I’m always a little nervous when I try to scale out the application to 256 worker roles; it almost always works well and the application will scale in minutes, but very occasionally there can be a longer delay due to the provisioning process in the Windows Azure data centers. This would not be an issue for many scenarios, but when standing on stage in front of a room full of developers you really want things to run smoothly. A number of people have suggested that I should pre-provision an environment so that it is guaranteed to be there when I run the demo during a session. For me the aim has always been to show the rapid scalability of cloud-based platforms live on stage. Pre-provisioning an environment may make for a more reliable demo, but to me that would be cheating, and not half as much fun!

    Read the article

  • How to sync Ubuntu/software/configurations between N computers with free software and/or without a cloud?

    - by skanatek
    Note: this question is not about syncing data in a Dropbox-like way (files, folders); it is more about syncing configurations. I would like to have exactly the same version of Ubuntu, with all the software installed and configured, both on my Desktop PC and on my Laptop PC (and maybe on my small netbook PC), without using Ubuntu Sync and with minimal maintenance effort (set up once, run for a long time). The use case is the following:
    1. I work on my Laptop PC and make some changes to software configuration, for example: configure vim to have a new plugin; update the Search Tracker / Recoll file search index; configure Thunderbird to have an additional IMAP account ('remember password'); add some new bookmarks in Firefox/Chrome; change the desktop background image; install new software with apt-get install; build and install new software with checkinstall; etc.
    2. I do some 'sync' operation.
    3. I switch to my Desktop PC and get all the changes from (1) working on the Desktop PC.
    4. I work on my Desktop PC and make some changes to software configuration, for example: add a new directory to the list of directories to be backed up by DejaDup; add a new spell-checking dictionary to LibreOffice Writer; configure the Terminator software to have colored fonts; install a new font into the Ubuntu system; configure Ekiga to make phone calls; etc.
    5. I do some 'sync' operation.
    6. I switch to my Laptop PC and get all the changes from (1) and (4) working on the Laptop PC.
    Question: What free/open-source software can I use to sync both machines' Ubuntu systems, installed software and configurations? Is it possible to do that without any cloud services? Complementary question: It is obvious that the Desktop PC and the Laptop PC have different hardware configurations. How does the 'sync software' in question deal with video drivers, wlan drivers and their configurations? Note: I do not need all the PCs to be synced at the same time, because I work with only one single machine at once. Note: I considered using Chef to solve the problem, but it seems that it might be really cumbersome to maintain such a setup. Note: I also considered using a bootable USB with Ubuntu installed (portable Linux), but I am not sure that the video drivers would work then.

    Read the article

  • Join Domain and Dos App

    - by Austin Lamb
    OK, so first off, yes, I have read all the related topics and those fixes are either out of date or don't work. I am running Ubuntu 12.04 and I would like to add it to the Win2008 Server network. After I get that done, I would like to mount the F:\ drive of the server somewhere on my Linux machine where it can be identified as drive F:\ by Wine or DOSEMU. If I can achieve all of that, I need to find out how to run an MS-DOS 16-bit point-of-sale graphics program in Ubuntu, whether that be through Wine, DOSEMU, or DOSBox. It does not matter which; it just has to be able to read and write to the server's F: drive, operate the DOS app, and support LPT1 (I think) for printing receipts and loading tickets. I am a decently knowledgeable Windows tech, at least that's what my job description says, but this is my first encounter with Linux in a work environment. It could prove to be very experience-changing if I can just prove it as a practical theory and a reasonable solution, and get it to work. The first step is to get it joined to the domain. I have likewise-open (CLI and GUI versions), Samba, and GADMIN-SAMBA installed in attempts to get any of them to work. Any help in any area is greatly appreciated, especially with the domain joining, since it is the first step and what I thought would be the easiest step.

    Read the article

  • The Fantastic New WebLogic on Oracle Database Appliance 2.9 Release is Here!

    - by JuergenKress
    Last week was a big day in virtualised ODA-land as it saw the launch of WebLogic on ODA 2.9. Admittedly it doesn't sound like a very exciting release but it is one that we at O-box have been looking forward to for quite some time. Let me explain why, then we'll look into the details... The ODA X4-2 has 48 Intel Xeon cores. That is a lot of compute power. Whilst the largest O-box SOA Appliance single environment configuration can in theory use all those cores (currently with 40 vCPU of SOA!) the vast majority of O-box users will want smaller configurations. Prior to 2.9 the Oracle WebLogic implementation only supported one domain per ODA, so the conundrum O-box development faced last year was either: offer customers only one SOA environment on their O-box for now (but have the benefit of a standard, easily supportable WebLogic installation), or build our own WebLogic/OTD OVM templates from scratch. One of our driving goals with O-box is to give the best possible experience and make the appliance as supportable as possible. Therefore we took the gamble that we would stick with the Oracle's one-domain WebLogic configuration initially, and just hope that it would deliver multi-domain support for us in a timely manner (note: this is probably not a strategy that business textbooks would recommend!). Anyway, we've been working closely with Oracle Product Management for a few months now and I'm delighted to see 2.9 as the fruits of their labour. This also neatly ties in with several recent requests for O-box to include OSB as well as SOA/BPEL (which we have always wanted to have in separate domains). The diagram below is the neatest way to summarise what the new 2.9 release will allow us to deliver, i.e. previously only one 3D box was possible: Read the complete article here. WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: oBox,WebLogic on ODA,ODA,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • What can you do to decrease the number of live issues with applications?

    - by User Smith
    First off, I have seen this post, which is slightly similar to my question: What can you do to decrease the number of deployment bugs of a live website? Let me lay out the situation for you. The team of programmers that I belong to has metrics associated with our code. Over the last several months the errors in our live system have increased by a large amount. We require that our updates to applications be tested by at least one other programmer prior to going live. I personally am completely against this, as I think that applications should be tested by end users, since end users are much better testers than programmers. I am not against programmers testing; obviously programmers need to test code, but they are most of the time too close to the code. The reason I specify that I think end users should test in our scenario is due to the fact that we don't have business analysts, we just have programmers. I come from a background where BAs took care of all the testing once programmers checked off that it was ready to go live. We do have a staging environment in place that is a clone of the live environment, which we use to ensure that we don't have issues between the development and live environments; this does catch some bugs. We don't really do end user testing at all; I should say we don't really have anyone testing our code except programmers, which I think gets us into this mess (ideally, we would have BAs or QA or professional testers test). We don't have a QA team or anything of that nature. We don't have test cases for our projects that are fully laid out. OK, I am just a peon programmer at the bottom of the rung, but I am probably more tired of these issues than the managers complaining about them. So, I don't have the ability to tell them "you are doing it all wrong"... I have tried gentle pushes in the correct direction. Any advice or suggestions on how to alleviate this issue are greatly appreciated. Thanks.

    Read the article

  • New Oracle Solaris 11 Administration book

    - by glynn
    During the development of Oracle Solaris 11, one of the main goals was to modernize the operating system and remove some of the existing frustrations that our administrative audience had in deploying and using the platform within data centers around the world. That meant a comprehensive clean-out of some existing technologies to provision the operating system (replacing Jumpstart with Automated Installer) and manage system software (replacing SVR4 with IPS packaging), consolidate the vast spectrum of networking configuration, and enhance the user environment to provide familiarity for those who were used to administering Linux environments, among many other things. While some considered the changes to Oracle Solaris 11 a negative, most will be impressed at how far we've come - the deeper integration of key technologies, presented in a consolidated and consistent form. It is easier to administer the Oracle Solaris platform than ever before, and I have no doubt that administrators coming from other platforms will be hugely impressed with what they see, especially if they're judging based on past experiences of Solaris 8 and Solaris 9. In fact I'd go further and say that Oracle Solaris 11 is a more powerful, integrated and usable platform than most Linux platforms I've seen. But as with anything, there's always an initial learning curve to get through. We've provided a significant selection of learning materials out on the Oracle Solaris 11 pages on Oracle Technology Network and some great training and certification options. One more option is now available in the form of a book, the Oracle Solaris 11 System Administration The Complete Reference. This provides an exceptional reference to help administrators learn about Oracle Solaris 11, especially those who have come from the Linux platform. As is quoted in the first chapter of the guide: Linux users and developers will find in Oracle Solaris 11 a familiar and quickly productive working environment; we point out similarities and differences between the Linux and Solaris kernels and system administration tools, and describe how typical open source Web development tasks are accomplished in this OS. So I would encourage you to take a read of it and start seriously considering Oracle Solaris 11 as a platform choice for your data center. Oracle Solaris 11 System Administration The Complete Reference - yours for only $32.50 (if you successfully use the promotion code - otherwise worth shopping around to pick up a good deal).

    Read the article

  • Where do I start in regards to making a Gnome/Unity Form Application

    - by JMK
    OK, so I am familiar with developing Form and Console applications on Windows using Visual Studio .NET with C#, but where do I start when it comes to Linux distros like Ubuntu? Is there an equivalent? How would one go about matching what they can do in a Windows environment with .NET and C# in a Linux environment without .NET, coding in something like Java or C/C++? I am aware of Eclipse; does Eclipse have a form designer, or do you have to code the design of any Gnome/Unity forms manually? Can I use Eclipse to write the Linux equivalent of a console application that you just double-click to run? I also know about Mono, but the idea is that I want to learn how to develop software without using anything in the Microsoft stack, and I am not sure where to start. What is the standard language/framework used to develop these types of applications on Linux? As I become more proficient with Visual Studio, C# and .NET, it has struck me that without these Microsoft tools, I am nothing. I am only capable of developing for the Microsoft OS and this scares me. This isn't some anti-Microsoft thing; Microsoft makes some incredible software/hardware/operating systems/IDEs, but it is generally a bad idea to put all of your eggs in one basket, so if I want to learn how to develop terminal and Gnome/Unity form applications, where in the world do I start? I have used Linux on and off for years, but Windows has been my primary OS. However, I have watched Linux get better and better, and as much as I love Windows 7, I am dubious about Windows 8 (I for one will sorely miss my start menu)! Obviously MS aren't going anywhere anytime soon and I could spend the next couple of decades developing for .NET without any issues, but just because you can get away with something doesn't always mean it's a good idea. Thanks

    Read the article

  • How do I pick which agency to go through?

    - by RoboShop
    I work in a town where the majority of work comes from the government. As a contractor, I generally have to apply for work through agencies which are on the government's preferred vendor list. Most jobs are publicly listed, and to apply for them you generally need an agency to represent you by submitting your application with a rate, which is usually your rate plus their commission. I've been trying to figure out what the agencies do, and it seems a large part of what they do is 1) get on that preferred vendor list and 2) forward resumes. So right now, my policy is that since their commission affects how expensive I am, one, I don't work with companies that do not disclose their margin, and two, I go for the agency that takes the least commission for the job I want to apply for. Is that the best approach? I would think applying for a job with the most competitive rate is the best approach, but I also wonder whether which agency you're applying through actually matters. I know some agencies actually build personal relationships with senior managers, but how do I know which ones? How do I know whether that actually affects my job prospects? What criteria should I use to decide which agency I go through for the job?

    Read the article

  • Which platform to choose, Java or .NET?

    - by salman
    I am working in a private bank, a leading mid-size bank in the local market. We are going to create our core banking solution. The existing solution was developed in Java using IBM VisualAge 4.0. It is very important to discuss architecture first: we currently have more than 350 branches working in standalone mode, which means they are working in a self-contained environment. They have their own database server (IBM DB2 9.7) and they communicate with other branches via sockets to send and receive data. Having more than 5 years of experience with .NET, I am trying to convince my superiors to choose the .NET platform, but they are reluctant and unwilling. It is my job to encourage them to choose the best available platform to create a large-scale enterprise application. In simple words, we are going to create a very large-scale enterprise financial application, a centralized and integrated one which connects all branch networks, with a solid, scalable architecture that can easily evolve over time. I want professional people to comment on the above scenario. Which platform should we choose, .NET or Java? All of our resources currently work in Java, and we have a homogeneous environment (no Linux, no Mac and no UNIX). Any ideas, thoughts, or points, technical or non-technical, i.e. from an administrative or management point of view, will be really appreciated.

    Read the article

  • What can be done to decrease the number of live issues with applications?

    - by User Smith
    First off, I have seen this post, which is slightly similar to my question: What can you do to decrease the number of deployment bugs of a live website? Let me lay out the situation for you. The team of programmers that I belong to has metrics associated with our code. Over the last several months the errors in our live system have increased by a large amount. We require that our updates to applications be tested by at least one other programmer prior to going live. I personally am completely against this, as I think that applications should be tested by end users, since end users are much better testers than programmers. I am not against programmers testing; obviously programmers need to test code, but they are most of the time too close to the code. The reason I specify that I think end users should test in our scenario is due to the fact that we don't have business analysts, we just have programmers. I come from a background where BAs took care of all the testing once programmers checked off that it was ready to go live. We do have a staging environment in place that is a clone of the live environment, which we use to ensure that we don't have issues between the development and live environments; this does catch some bugs. We don't really do end user testing at all; I should say we don't really have anyone testing our code except programmers, which I think gets us into this mess (ideally, we would have BAs or QA or professional testers test). We don't have a QA team or anything of that nature. We don't have test cases for our projects that are fully laid out. OK, I am just a peon programmer at the bottom of the rung, but I am probably more tired of these issues than the managers complaining about them. So, I don't have the ability to tell them "you are doing it all wrong"... I have tried gentle pushes in the correct direction. Any advice or suggestions on how to alleviate this issue?

    Read the article

  • Collision Detection for a 2D RPG

    - by PHMitrious
    First of all, I have done some research on this topic before asking, and I'm asking this question as a means to get some opinions on it, so I don't make a decision only on my own but take other people's experience into account as well. I'm starting a 2D online RPG project. I am using SFML for graphics and input, and I'm creating a basic game structure and all for the game, creating modules for each part of the game. Well, let me get to the point; I just wanted to give you guys some context. I want to decide on how I'm going to work with collision detection. I'm going to work on maps with a tile map divided in layers (as usual) and add an extra 2 layers - not exactly in the map - for objects. So I'll have collisions between objects and agents (players - NPCs - monsters - spells etc.) and between agents and tiles. The second one can be easily solved; the first one needs a little bit of work. I considered creating a basic collision test engine using polygons and a quadtree to diminish tests, since I'm going to be working with big maps with lots of objects - creating both a physical and graphical world representation. And I also considered using a physics engine like Box2D for collision tests. I think the first approach would take more work on my part, but the second one would have the overhead of using a whole physics engine for just collision detection and no physics. What do you guys think?
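    To make the trade-off concrete, here is a minimal sketch of the "build it yourself" option: a simple axis-aligned bounding box (AABB) overlap test with a brute-force broad phase that a quadtree could later replace. The Rect and CollisionWorld names are illustrative only (they are not from the post), and the shapes are simplified to boxes rather than polygons.

        using System.Collections.Generic;

        public struct Rect
        {
            public float X, Y, Width, Height;

            public Rect(float x, float y, float w, float h)
            {
                X = x; Y = y; Width = w; Height = h;
            }

            // Two axis-aligned boxes overlap only if they overlap on both the X and Y axes.
            public bool Intersects(Rect other)
            {
                return X < other.X + other.Width && other.X < X + Width &&
                       Y < other.Y + other.Height && other.Y < Y + Height;
            }
        }

        public class CollisionWorld
        {
            private readonly List<Rect> objects = new List<Rect>();

            public void Add(Rect obj)
            {
                objects.Add(obj);
            }

            // Brute-force broad phase: test the agent against every object.
            // For big maps with lots of objects, a quadtree or uniform grid would
            // replace this loop so that only nearby objects are tested.
            public IEnumerable<Rect> Query(Rect agent)
            {
                foreach (var obj in objects)
                {
                    if (agent.Intersects(obj))
                        yield return obj;
                }
            }
        }

    Agent-versus-tile checks stay cheap either way, since the tile indices under an agent can be computed directly from its position and the tile size.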

    Read the article

  • Preventing possible burnout in a junior dev, or perhaps I'm not doing enough?

    - by m.edmondson
    I'm a software developer with 5 years' experience across 3 companies. Within the last year a junior (brand new to the industry) has started at my current employer. I believe he is an excellent developer who always delivers and is skilled at solving complex problems. However, I'm slightly concerned that he is possibly applying himself too much, for the following reasons:
    - He begins work approximately 2 hours before most (and before he is expected to).
    - In his free time he has developed an application, clearly months' worth of work, that is specific to our employer.
    The team and I are completely grateful for all he is doing, and he is clearly an asset to our team. However, I'm worried that this is not sustainable. I can almost see that he has the same enthusiasm that I had when I began coding for work; however, over the years I've realised that extracurricular work not only doesn't progress your career, but eats into your all-important free time. The question I'm asking is: should I advise him to take things a bit more slowly? Or perhaps I need to learn from him and do more for my employer out of hours?

    Read the article

  • Installing both lxml 3.1.2 and lxml2 on ubuntu 12.04

    - by wgw
    I asked this on SO: http://stackoverflow.com/questions/19852911/lxml-3-1-2-and-lxml2-both-on-ubuntu/19856674#19856674 But it is perhaps more appropriate for Ask Ubuntu, so here it is again, reformulated. On the lxml site they suggest that it is possible to have both lxml2 and the newest version of lxml on Ubuntu:
    "Using lxml with python-libxml2: If you want to use lxml together with the official libxml2 Python bindings (maybe because one of your dependencies uses it), you must build lxml statically. Otherwise, the two packages will interfere in places where the libxml2 library requires global configuration, which can have any kind of effect from disappearing functionality to crashes in either of the two. To get a static build, either pass the --static-deps option to the setup.py script, or run pip with the STATIC_DEPS or STATICBUILD environment variable set to true, i.e. STATIC_DEPS=true pip install lxml. The STATICBUILD environment variable is handled equivalently to the STATIC_DEPS variable, but is used by some other extension packages, too."
    I am generally confused about how pip packages and Ubuntu packages get along, so I hesitate to run STATIC_DEPS=true pip install lxml. Will it damage/confuse my installed lxml2 package? The suggestion on SO was to install the new lxml in a virtualenv. That looks like the best way to go, but the lxml site is suggesting that a dual installation will work also. In general: what happens if I use pip (to get a newer install) for a package that is already installed by apt-get?

    Read the article

  • problems with installation of ubuntu 12.04 (64 and 32 bits)

    - by user76104
    I am Latin American, so sorry for my English. I am trying to install Ubuntu on my new PC but I am having serious problems. When I put in the installation CD and it runs, everything goes fine until that "window" where the user has to decide whether to try Ubuntu or install it on the machine; well, that "window" appears blank, and my mouse and keyboard respond really slowly and I can't do anything, so I have to shut the machine down by pressing the power button. The specifications of my PC are: Gigabyte EX58-UD7 motherboard, i7 950, Western Digital Caviar Black, 6 GB Corsair memory, EVGA GTX 580. Please, I really need to install Ubuntu or another Linux distribution; I am using the Seismic Unix program on my laptop and, well, it burns when I run a script! Help, please :) What can I do? What do I have to download? I am not a pro in Linux, so please be patient with me. When it starts to load, I can't reach the Ubuntu environment or enter a menu with a graphical environment, but I managed to get something to load and chose to try Ubuntu; it started to load but freezes on the Ubuntu logo with red spots and only shows me the pointer, nothing more... Any other option? PS: I had previously tried to install the alternate version of Ubuntu without success :S

    Read the article

  • Software development process for a part time University project for 1 developer?

    - by Pricey
    I will be doing a part-time University project soon. The time frame for it is around 8 months, with approximately 10-15 hours a week spent working on it and a review by a tutor each quarter. My question is: what software development process would you recommend using when the course requires you to work on your own, in order to manage yourself as well as the project? I wanted to use a weekly or bi-weekly iterative approach to my work, but a lot of the processes seem tailored to teams of people. I am looking at XP (Extreme Programming) or Scrum as something that is less than the norm for University work, but again, Scrum I don't know a lot about yet, and a question I have is: can you say you are doing XP without pair programming? I ask because my tutor seems to think that I have to stick to all the practices, otherwise I can't do it (never mind that I am working alone). We can have external user input as well, but due to the small timescales with part-time work it may be more beneficial for me to be the user as well, which is not what I prefer, considering how I can get lost in the design.

    Read the article

  • Attempting to install netgear N300 Wireless USB Adapter on Ubuntu without a present internet connection

    - by Liz
    Hello Linux/Ubuntu world out there. I don't presently have internet on the desktop I am trying to install the USB wireless adapter on. This seems to be the problem, which, if the hardware worked, would theoretically fix itself. I can NOT access the internet via anything but wireless. I am presently on my laptop searching for answers while trying to install this little device, so any advice will have to take that into account. So far I have tried using WINE, which does not want to work; I have tried Windows Wireless Drivers, which doesn't want to work; and I have tried Software Sources, Other Software, but it will not acknowledge the CD-ROM as a repository, stating errors like E:Unable to stat the mount point /cdrom/ -stat (2: No such file or directory). However, I can open the CD icon on my computer and access and browse the files. The computer can read the CD. I can read the CD. I've tried just plugging it in and seeing if the computer will automatically recognize the hardware and go from there. That does not work either. I have tested the USB port just to verify that it works. It does. My laptop recognizes the hardware, and would easily install the software if I prompted it to. The difference is that my laptop is Vista, and I HATE Vista. Any tips, tricks?

    Read the article

  • Can't complete dropbox installation from behind proxy in Ubuntu 11.10

    - by Mark Jones
    Problem: My PC on campus sits behind a proxy (requiring authentication) and I can't set up Dropbox. I am convinced that this is a proxy issue, as I can't set up Ubuntu One either (but I don't use Ubuntu One so that is not a problem). I have looked at the Ubuntu One fix, but it seems to be to modify settings explicitly related to Ubuntu One. I can install the nautilus-dropbox package (compiled from source, from the .deb package from the website, and from the Software Centre), but once I click OK in the "Dropbox Installation" dialog box (prompting me to download the proprietary daemon) the installation just freezes with the OK button pressed. When I look at its process in System Monitor, its waiting channel is inet_wait_for_connect. I have set the following proxy directives thus far:
    - Added mj22:**@proxy.waikato.ac.nz:80 information to the network proxy settings under Network in Settings.
    - Added http_host and http_port variables under gconf-editor-system-proxy.
    - Added 'host', 'authentication_password', 'authentication_user' and ticked 'user authentication' and 'use_http_proxy' under gconf-editor-system-http_proxy.
    - Added export http_proxy="http://mj22:**@proxy.waikato.ac.nz:80/" to /etc/bash.bashrc.
    - Added Acquire::http::proxy "http://mj22:**@proxy.waikato.ac.nz:80/"; to /etc/apt/apt.conf (which is what I imagine is letting Software Center retrieve packages).
    (where ** is my password) I have also added the equivalent ftp and https lines for the above entries. I get the internet fine and the Software Centre can download packages, but that's it. Related issues: the Software Centre can't fetch reviews (but can download packages). When trying to add an online account in GNOME 3, a dialog pop-up appears with "Error getting a Request Token: Cannot connect to proxy (proxy.waikato.ac.nz)". Updates: After some time (10 mins ish) Dropbox shows an error dialog box that reads: "Trouble connecting to Dropbox servers. Maybe your internet connection is down, or you need to set your http_proxy environment variable." Is there a way I can see what environment variables are currently set?

    Read the article
