Search Results

Search found 35340 results on 1414 pages for 'policy based management'.

Page 491/1414

  • Running Multiple sites with multiple domains apache

    - by PsychoData
    I am having a rough time running Apache with multiple domain names; here is a snippet of my config file. I keep getting an error saying that NameVirtualHost has no VirtualHosts. I want them both running on the same IP and I'm not sure why this doesn't work. I've been digging through the documentation for VirtualHost, NameVirtualHost, and Apache's page about name-based virtual hosting. The example on the name-based page is almost exactly my config! What am I doing wrong?
        Listen *:80
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName www.sample1.net
            DocumentRoot /var/www/sample1-net
        </VirtualHost>
        <VirtualHost *:80>
            ServerName www.example2.net
            DocumentRoot /var/www/example2-net
        </VirtualHost>

    Read the article

  • iwconfig, iw not displaying wireless information

    - by Srivatsa Kanchi
    After a fresh install of 12.10, wireless information is not shown by either iwconfig or iw. The wireless connection sets up successfully and is able to connect. This is what I get:
        $ iwconfig wlan0
        wlan0     IEEE 802.11abgn  ESSID:off/any
                  Mode:Managed  Access Point: Not-Associated   Tx-Power=14 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
        $ iw wlan0 link
        Not connected.
        $ uname -a
        Linux srivatsa-ThinkPad-T61 3.5.0-18-generic #29-Ubuntu SMP Fri Oct 19 10:26:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    Read the article

  • Set up a root server using Ubuntu and Virtualization

    - by Daniel Völkerts
    Hello, I'd like to set up a fresh root server and install Linux-based virtualization on it. My thoughts are: Intel VT hardware, Ubuntu 9.10, KVM-based virtualization. Access to the root server will be SSH only, for administration. Has anybody done this before, and what did you discover in daily use? My requirements are: very secure, so the root server only exposes SSH to the dom-0 and minimal ports for the guests (e.g. HTTP/S); good monitoring of host and guests (my idea is to use Zabbix for it); easy and fast administration (how are the command-line tools working for you? Cryptic? High learning curve?). I'm pleased to learn from your suggestions. Regards, Daniel Völkerts

    Read the article

  • Google I/O Sandbox Case Study: Box

    We interviewed Box at the Google I/O Sandbox on May 11, 2011. They explained to us the benefits of integrating with Chrome OS. Box offers cloud-based content management for businesses, and they recently unveiled a streamlined content upload process on Chrome OS. For more information about developing on Chrome, visit code.google.com. For more information on Box, visit www.box.net. From: GoogleDevelopers

    Read the article

  • Multiple Denial of Service (DoS) vulnerabilities in Apache Tomcat

    - by chandan
    CVE-2011-4858: Resource Management Errors vulnerability, CVSSv2 Base Score 5.0, component Apache Tomcat. Product and Resolution: Solaris 11 11/11 SRU 4; Solaris 10 SPARC: 122911-29, X86: 122912-29; Solaris 9: Contact Support.
    CVE-2012-0022: Numeric Errors vulnerability, CVSSv2 Base Score 5.0.
    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Business Analytics Oracle Partner Advisory Council 2013

    - by Mike.Hallett(at)Oracle-BI&EPM
    WHEN: Friday the 20th of September 2013 (just before OpenWorld). Register by: 1st of August 2013. WHERE: Sofitel San Francisco Bay, 223 Twin Dolphin Drive, 94065 Redwood City, CA, USA. Don’t miss this once-a-year opportunity to meet and drive a closer engagement with Oracle’s global Product Management team, which will host this workshop. Target group: ACE Directors, lead consultants and key architects from our partners. Topics: product development roadmap, partner project experience, your feedback, Q&A session. For any inquiries please contact [email protected]. As part of registering for the event, please note that you will need to complete the BI/EPM PAC survey; thank you for taking the time to provide your valuable input. The PAC (one for BI, and a separate track for Hyperion) is free of charge, but is only for OPN Specialised member partners, and is subject to availability. (Please do not attend unless you have received a confirmation from Oracle to do so.) Registration link coming soon. We do hope you will be able to join us, and I look forward to welcoming you in San Francisco.

    Read the article

  • Rules of Holes #4 - Do You Have the BIG Picture?

    - by ArnieRowland
    Some folks decry the concept of being in a 'Hole'. For them, there is no such thing as 'Technical Debt', no such thing as maintaining weak and wobbly legacy code, no such thing as bad designs, no such thing as under-skilled or poorly performing co-workers, no such thing as 'fighting fires', or no such thing as management that doesn't share the corporate vision. They just go to work and do their job, keep their head down, and do whatever is required. Mostly. Until the day they are swallowed by the...(read more)

    Read the article

  • Oracle Transportation User Conference Agenda Released

    - by John Murphy
    The Oracle Transportation Management (OTM) User Conference agenda is now available. The event brings together users, implementers and prospective customers of OTM. The event is held annually in Philadelphia, with this year's event taking place August 12 - 15. Follow one of the links to see the complete agenda and to register to attend: http://otmconference.com/agenda.aspx

    Read the article

  • Belgrade Open Source Software Development Center

    - by Tori Wieldt
    A new Open Source Software Development Center is open at the University of Belgrade, Serbia. It centers around using Java & NetBeans as open source projects to learn from and contribute to. Assistant Professor Zoran Sevarac says that not only does the center allow him to teach software development using open source projects, but also "we are improving our University courses based on the experience we get from working on open source code." Some of the projects underway are a NetBeans UML plugin; Neuroph (a Java neural network framework, with a NetBeans Platform-based UI); a NetBeans DOAP plugin; WorkieTalkie (a NetBeans chat plugin); and 2D and 3D visualization plugins for NetBeans. There is also a video describing the NetBeans UML plugin. The University of Belgrade also has an official university course about open source development, where students learn to use development tools, work in teams, participate in open source projects and learn from real-world software development projects. Students, teachers, and researchers at the University of Belgrade, and any member of the open source community, are welcome to come and learn software development from successful open source projects. For more information, you can contact Zoran Sevarac (@neuroph on Twitter).

    Read the article

  • Sync Banshee library data.

    - by Dom
    I use Banshee to organise my music; I particularly like its scoring system, and I have smart playlists based on it. However, I have two versions of my music library, one on each of my computers. As one of the computers is small, I only have a favourite set of songs on that computer rather than my whole collection. The computers are not on a local network, but I do use Ubuntu One for file sharing between them. Is there any way I can synchronise song data (play count, score, skip count ...) and playlist data (including smart playlists that include songs based on this data) between the two computers? This would only be relevant, of course, for the songs that exist on both computers; the songs that exist on only one would need to be ignored. I did consider putting the library data file (I think it is .xml but I'm not sure) into the shared folder and creating a symbolic link to it, but then I wouldn't be able to have a different set of songs on each computer. Thank you.

    Read the article

  • Some Problems Can't Be Outsourced

    - by mikef
    More and more companies are becoming attracted to the idea of Infrastructure as a Service (or IaaS). It would seem that you can outsource the provisioning and management of your services, encompassing everything from email through to your servers, workstations and software, all the way down to your LAN and internet services. This type of outsourcing can be a very attractive option for companies that have tight budgets, are short of technical skills, or don't have the means to provide long-term IT support. Essentially, they can outsource their services at low short-term costs that are knowable and controllable, are quickly and easily scalable, and generate a minimum of hassle for their internal staff. If you want to get a sophisticated IT infrastructure set up in a hurry without the usual high buy-in costs, or the task of finding and hiring the right specialists, it would seem the way to go, particularly when their salesmen are hypnotizing you with oleaginous phrases such as "we are closely aligned with our client organization's core business requirements, providing agile services". It sounds too good to be true, and so it is. Whereas the costs will have initially been calculated on the annual renewal fees and service fees for ongoing support, there are other charges too which aren't so obvious. It can end up costing far more than the conventional solution once you take into account the extra costs and the fees for customization and upgrades. The Total Cost of Ownership (TCO) only becomes apparent when it is too late to extract the company easily from the arrangement. After a few years, these annual fees can add up to more than the initial cost of implementing a traditional in-house system. Worse than that, you can then lose the power to determine your own priorities: when you become reliant on this company, with its own schedule of priorities, to implement every change, however simple, you have effectively lost control of your technical infrastructure. This will make senior management very nervous. There is definitely a requirement for this sort of service. If you urgently need an exceptionally high class of service or more expertise than you currently possess, then outsourcing is probably for you. You and your IT colleagues will always have something to do, be it user assistance, smoothing out integrations with an external provider, or working on something entirely new. Heck, if you outsource to IBM, the SysAdmins can go along for the ride and polish their expertise. What you need to figure out is how much your time is worth, because time is ultimately all that outsourcing will buy you and your organization. Now you just need to convince your nervous CEO. Cheers, Michael

    Read the article

  • What is the value of checking in broken unit tests?

    - by Adam W.
    While there are ways of keeping unit tests from being executed, what is the value of checking in broken unit tests? I will use a simple example: case sensitivity. The current code is case sensitive. A valid input into the method is "Cat" and it returns an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive, so if the method were passed "cat" it could return something like Animal.Null instead of Animal.Cat and the unit test would fail. Though a simple code change would fix this particular case, a more complex issue may take weeks to fix, yet identifying the bug with a unit test can be a far less complex task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some just need explicit implementation documentation (e.g. case sensitive or not), or are cases where the code does not trigger the bug given how it is currently called. But unit tests can be created that execute specific, valid-input scenarios which make the bug visible. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should such a unit test be flagged with ignore, priority, category, etc., to determine whether a build was successful based on the tests executed? Eventually the unit test should run against the code once someone fixes it. On the one hand, checking it in shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding out the ones that are expected to fail from failures caused by a new code check-in would be difficult.
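    One common compromise (not from the question itself, just a sketch of the idea) is to check the test in but mark it as a known, expected failure so it documents the bug without breaking the build. Illustrated here with Python and pytest, using hypothetical names (Animal, parse_animal, the bug number); the project in the question is not necessarily Python:
        import enum
        import pytest

        class Animal(enum.Enum):
            Null = 0
            Cat = 1

        def parse_animal(name):
            # Current behaviour: case sensitive, so "cat" is not recognised.
            return Animal.Cat if name == "Cat" else Animal.Null

        @pytest.mark.xfail(reason="BUG-123 (hypothetical): lookup should be case insensitive")
        def test_parse_animal_ignores_case():
            assert parse_animal("cat") == Animal.Cat
    An expected-failure marker like this keeps the failing test visible in reports without failing the build; once the bug is fixed the test starts passing and the marker can be removed. Most test frameworks have an equivalent (Ignore attributes, skip categories, and so on).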

    Read the article

  • Frame Independent Movement

    - by ShrimpCrackers
    I've read two other threads here on movement, "Time based movement vs. frame rate based movement?" and "Fixed time step vs. variable time step", but I think I'm lacking a basic understanding of frame-independent movement because I don't understand what either of those threads is talking about. I'm following along with lazyfoo's SDL tutorials and came upon the frame independent lesson. http://lazyfoo.net/SDL_tutorials/lesson32/index.php I'm not sure what the movement part of the code is trying to say but I think it's this (please correct me if I'm wrong): In order to have frame-independent movement, we need to find out how far an object (ex. sprite) moves within a certain time frame, for example 1 second. If the dot moves at 200 pixels per second, then I need to calculate how much it moves within that second by multiplying 200 pps by 1/1000 of a second. Is that right? The lesson says: "velocity in pixels per second * time since last frame in seconds. So if the program runs at 200 frames per second: 200 pps * 1/200 seconds = 1 pixel" But...I thought we were multiplying 200 pps by 1/1000th of a second. What is this business with frames per second? I'd appreciate it if someone could give me a little bit more detailed explanation as to how frame-independent movement works. Thank you.
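    For what it's worth, the usual pattern behind that lesson is: measure how long the previous frame actually took, and multiply the velocity by that elapsed time, so the 1/200 s in the quote only applies because the example frame happened to take 1/200 of a second. A minimal sketch of the arithmetic in Python (the tutorial itself is C++/SDL; the names here are illustrative):
        import time

        VELOCITY = 200.0            # pixels per second
        position = 0.0
        last = time.perf_counter()

        for _ in range(5):              # stand-in for the game loop
            now = time.perf_counter()
            dt = now - last             # seconds elapsed since the previous frame
            last = now
            position += VELOCITY * dt   # move by speed * elapsed time, whatever the frame rate
            time.sleep(0.005)           # pretend this frame's work took about 5 ms

        print(position)
    If the loop happens to run at 200 frames per second, dt is about 1/200 s and each frame moves 200 * 1/200 = 1 pixel, which is exactly the lesson's example; at 1000 fps the multiplier would indeed be 1/1000 s.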

    Read the article

  • How to use uTouch on multitouch-enabled touchpads?

    - by Freddi
    I currently have a Synaptics touchpad with only a few classic multitouch features (two-finger scroll, right click). By installing the uTouch testing suite, I saw that it doesn't accept my touchpad as an input device. I want to buy a newer notebook and would like to benefit from uTouch features (window management, swipe, pinch, rotate). Does uTouch only work on touchscreens, or also on touchpads? What requirements should I take into account when choosing a new notebook?

    Read the article

  • Capitalize on Engineering and Information Assets throughout the Enterprise

    To facilitate information exchange, drive performance and improve corporate governance, organizations are investing in Oracle Universal Content Management to store, track and manage their digital information assets. Combined with Oracle's AutoVue visualization solutions and CADTop, Sword Group's CAD integration for UCM, engineering centric organizations can now access, view and collaborate on engineering and CAD documents throughout the enterprise for improved visibility and more informed decision making.

    Read the article

  • Career planning advice [closed]

    - by JDB
    Possible Duplicate: Are certifications worth it? I am at the point in my career where people start to veer off into either management-type roles or they focus on solidifying their technical skills to stay in the development game for the long haul. Here's my story: I've got a degree in economics, an MA in Political Science and an MBA in Finance and Management. In addition, I've done coursework in advanced math and software development (although no degree in math or software). All in all, I've got 13 years of post-secondary education under my belt. I, however, currently work as a software developer using C# for desktop, Silverlight, Flex and JavaScript for web, and Objective-C for mobile. I've been in software development for the past 3.3 years, and it seems like it comes pretty easy to me. I work in a field called "geospatial information systems," which just involves customization and manipulation of geospatial data. Right now I am looking at one of several certifications. Given this background, which of these certifications has the highest ceiling: the CFA, the PMP, various development/technology certifications from Microsoft and the like, or something else? My academic and work experience are all heavy on the analytical/development side, especially so given the MBA and the B.S. in Econ. The political science degree was really a lot of stats. So it seems that I would be well suited to pursuing more of the CFA/analytical role. This is a difficult path, however, because I have no work experience in the financial sector, and the developers in finance are all "quants," which again, I am OK with, but I haven't done much statistical modeling in the past 3.3 years. The PMP would require knowledge of best practices as it pertains explicitly to software development. I also don't enjoy a lot of business travel, a common theme for most PMP jobs I've seen. If certification is the route, which would you recommend? Anything else? I've thought about going back to try to knock out a B.S. in C.S., but I wasn't sure how long that would take, or what would be involved. Thoughts or recommendations? Thanks in advance! I turn 32 this weekend, which is what has forced me to think about these issues.

    Read the article

  • How are routing tables populated?

    - by Robbie Mckennie
    I've been reading "TCP/IP Illustrated" and I started reading about IP forwarding: all about how you can receive a datagram and work out where to send it next based on the destination IP and your routing table. But what confused me is how (in a home network setting) the table itself is populated. Is there a lower-layer protocol at work here? Does it come along with DHCP? Or is it simply based on the IP address and netmask of each interface? I do know (from other books) that in the early days of Ethernet one had to set up routing tables by hand, but I know I didn't do that.
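    Broadly (as background, not taken from the question): on a typical home host the table comes from two sources. Each interface contributes a directly connected route computed from its own address and netmask, and DHCP supplies the default gateway, which becomes the 0.0.0.0/0 entry; routing protocols such as RIP or OSPF normally run only on routers, not on home machines. A small illustration of the resulting lookup in Python, with example addresses:
        import ipaddress

        # Each interface contributes a "directly connected" route: its own network,
        # computed from the interface address and netmask.
        iface = ipaddress.ip_interface("192.168.1.23/24")
        connected = iface.network                      # 192.168.1.0/24 -> deliver directly on this link
        default = ipaddress.ip_network("0.0.0.0/0")    # default route; the gateway address comes from DHCP

        dest = ipaddress.ip_address("192.168.1.50")
        if dest in connected:
            print("deliver directly on the LAN")
        elif dest in default:
            print("forward to the DHCP-supplied default gateway")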

    Read the article

  • Advantages of EPOS Tills

    Making business operations visible to management so that it can act immediately on day-to-day changes is a tough task. Another important thing is that if a business firm has several departme... [Author: Alan Wisdom - Computers and Internet - April 05, 2010]

    Read the article

  • Which architecture should I choose for this project?

    - by Jichao
    I have a project. The server, which is based on a telephony PCI board, is responsible for receiving phone calls from customers and then redirecting those calls to operators. I have decided to code the server using the C++ programming language and the Qt framework, because the PCI board SDK's interface is C/C++ and for the sake of portability. The server needs to send the customer's information to the operator while ringing the operator, and the UI of the operator client should be browser-based. Now the key problem is how the server could notify the operator that there is a phone call for him or her. One architecture I have considered is like this: the operator's browser client uses AJAX polling of the web server to check whether there is a call for the client; the web server polls the database server to check whether there is a call; the desktop server (C++) waits for the phone calls and writes the information into the database. The other operations, such as hanging up the call from the client or transferring the call to another operator, also use this architecture. Then, is there any way other than polling the server (JS code setInterval('getDail', 1000)) to decide whether there is a call for the operator? Is this architecture feasible, or should I use some terrific techniques that I do know, such as web services, XML-RPC, SOAP?
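    The web tier is not specified in the question, so purely as an illustration of the polling half of this design, here is a minimal sketch in Python (a Flask-style web layer is assumed, with an in-memory dict standing in for the database table the C++ server would update):
        from flask import Flask, jsonify

        app = Flask(__name__)

        # Stand-in for the table the desktop (C++) server would update when a call arrives.
        pending_calls = {"operator-7": {"caller": "+1-555-0100", "queued_at": "12:01:33"}}

        @app.route("/calls/<operator_id>")
        def poll_calls(operator_id):
            """Endpoint the browser polls (e.g. once a second) to ask whether a call is waiting."""
            call = pending_calls.get(operator_id)
            return jsonify({"has_call": call is not None, "call": call})

        if __name__ == "__main__":
            app.run(port=5000)
    The alternatives usually suggested instead of a fixed-interval poll are long polling (the request blocks on the server until a call arrives or a timeout expires) and server-push techniques; both remove the up-to-one-second delay and the constant query load on the database.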

    Read the article

  • Good, simple reasons for having multiple environments

    - by smp7d
    Throughout my career I have worked at companies that had a collection of different environments for different purposes. We always had more or less our desktop environment, a test environment, a QA environment, a staging environment and a production environment. This went for both servers/applications and any data sources we were using. When I started at my current company I found that 90% of the apps were either developed on a desktop environment against production data sources or developed directly on the production server, depending on the platform. I wasn't fazed because I was hired in part to make changes to improve the way the development team functioned, which was clear from my interview process. We slowly started to turn the philosophy around, and pretty soon most of the apps could be run in either a desktop, test or production environment. Not too long after that, staging came around as well. Now most of our developers see the benefit of this methodology and defend it vigilantly. However, we have a number of legacy apps that never got migrated. We also have a number of legacy programmers who think of this as a waste of time. Unfortunately, we got lip service but never full buy-in from management. We got what we thought was a commitment to invest substantially in this about a year ago, but nothing materialized despite the considerable planning that we put into it. Now we are finding that we need more and more environments. We need help from the server/network administration teams for setup and we need participation from the business stakeholders to support the release cycle. We are at a place now where a project can function in what I consider a "normal" way only if you have the right people on the project and the time to set up the proper environments. I'd love to present a complete argument, but management really has no time or interest in hearing me out until there is a critical issue. I can't really articulate the benefits simply, as it always just seemed second nature to me. I was wondering if there are any good, simple, irrefutable reasons for the separation of environments that would get managers with no development experience to get behind this idea. Are there any good resources/literature on the topic?

    Read the article

  • Oracle Magazine, November/December 2007

    Oracle Magazine November/December features articles on Oracle Magazine Editors' Choice Awards 2007, SOA, Oracle Universal Content Management, Oracle Application Development Framework 11g, Oracle BPEL Test Framework, Oracle SQL Developer, Oracle Application Express, and much more.

    Read the article

  • How can I connect to a Windows (7) running in VirtualBox from the network with RDP?

    - by Andre
    I have a Windows 7 guest running in VirtualBox, hosted by Precise (Ubuntu 12.04 LTS). In the settings of the virtual machine I have activated 'Remote Display' (Enable Server) with default settings (server port, authentication method). The network of the guest is attached to NAT. I try to use Remmina to connect from the host to the guest. I tried 127.0.0.1, 127.0.0.2, and the internal IP of Win7 (10. ...). None of them worked so far. Do I have to activate something on Win7 (Home Premium), and if so, what? I would like to connect to the Win7 remote desktop from the local network. Can I stay in NAT mode (preferred due to our network policy), or do I have to go for 'bridged'? Thanks!

    Read the article

  • Best way to block a country by IP address?

    - by George Edison
    I have a website that needs to block a particular country based on IP address. I am more than aware that IP-based blocking is not a foolproof method for blocking visitors, but it is a necessary step in the right direction. Since I'm using PHP, what I would do is use a GeoIP database like geoplugin.net. However, I'm curious to know if there's a better way of doing this. The website is on a shared web server (I don't have root access) and it is running Apache on CentOS. I guess my question is: can an .htaccess file be configured to block by IP, using an external source to look up IP addresses?
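    For the .htaccess side, Apache can deny literal address ranges without any external lookup (Deny from <CIDR> on Apache 2.2, Require not ip on 2.4), so the usual approach is to export the country's CIDR list from a GeoIP database into the file. The matching logic itself is simple; here is a sketch in Python of the same check as it might run in application code (the CIDR blocks below are documentation placeholders, not a real country allocation):
        import ipaddress

        # Placeholder CIDR blocks; in practice these would come from a GeoIP / country-to-IP database.
        BLOCKED_RANGES = [ipaddress.ip_network(c) for c in ("203.0.113.0/24", "198.51.100.0/24")]

        def is_blocked(remote_addr: str) -> bool:
            """Return True if the visitor's IP address falls inside any blocked range."""
            ip = ipaddress.ip_address(remote_addr)
            return any(ip in net for net in BLOCKED_RANGES)

        print(is_blocked("203.0.113.42"))   # True
        print(is_blocked("192.0.2.10"))     # False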

    Read the article
