Search Results

Search found 2205 results on 89 pages for 'devel cover'.


  • ATG Live Webcast Dec. 6th: Minimizing EBS Maintenance Downtimes

    - by Bill Sawyer
    This webcast provides an overview of the plans and decisions you can make, and the actions you can take, to help you minimize maintenance downtimes for your E-Business Suite instances. It is targeted at system administrators, DBAs, developers, and implementers. This session, led by Elke Phelps, Senior Principal Product Manager, and Santiago Bastidas, Principal Product Manager, will cover best practices, tools, utilities, and tasks to minimize your maintenance downtimes during the four key maintenance phases. Topics will include:

      - Pre-Patching: Reviewing the list of patches and analyzing their impact
      - Patching Trials: Testing the patch prior to actual production deployment
      - Patch Deployment: Applying patches to your system
      - Post-Patching Analysis: Validating the patch application

    Date: Thursday, December 6, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenters: Elke Phelps, Senior Principal Product Manager; Santiago Bastidas, Principal Product Manager

    Webcast Registration Link (preregistration is optional but encouraged)

    To hear the audio feed:
      Domestic Participant Dial-In Number: 877-697-8128
      International Participant Dial-In Number: 706-634-9568
      Additional International Dial-In Numbers Link
      Dial-In Passcode: 103200

    To see the presentation, the Direct Access Web Conference details are:
      Website URL: https://ouweb.webex.com
      Meeting Number: 595757500

    If you miss this webcast -- or any of our webcasts -- don't worry: we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay, and you can find our archive of past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • Oracle BPM and Open Data integration development

    - by drrwebber
    Rapidly developing Oracle BPM application solutions with data-source integration previously required significant Java and JDeveloper skills. Now, using open source tools for open data development significantly reduces the coding needed. Key tasks can be performed with visual drag-and-drop design combined with menu-driven entry and automatic form generation directly from XSD schema definitions. The architecture used is lightweight, portable, open, and scalable, allowing integration with a variety of Oracle and non-Oracle data sources and systems.

    Two videos available on YouTube walk through the process, first at an introductory conceptual level and then as a deep dive into the programming needed, using JDeveloper, Oracle BPM Composer, and Oracle WLS (WebLogic Server) along with the CAM editor and Open-XDX open source tools. Also available are coding samples and resources from the GitHub project page, along with working online demonstration resources on the VerifyXML site.

    Combining Oracle BPM with these open source tools provides a comprehensive, simple, and elegant solution set. Development times are slashed and rapid prototyping is enabled. Existing data sources can be integrated using open data formats with either XML or JSON, along with CRUD access via the Open-XDX Java component. Open-XDX is a code-free approach in which data mapping is configured as templates using visual drag and drop in the CAM Editor open source tool. XML or JSON is then automatically generated or processed (output or input), and the appropriate SQL statements are created to support the data access. Also included is the ability to integrate with fillable PDF forms via the XML templates and the Java PDF form-filling library. Again, minimal Java coding is needed to associate the XML source content with the PDF named fields. The Oracle BPM forms can be automatically generated from XSD schema definitions that are built from the data mapping templates. This dramatically simplifies development work, as all the integration artifacts needed are created by the open source editor toolset.

    The developer-level video is designed as a tutorial with segments, hands-on demonstrations, and reviews. This allows developers to learn the techniques and approaches used in incremental steps. The intended audience ranges from data analysts to developers and assumes only entry-level Java skills and knowledge. Most actions are menu driven, while Java coding is limited to configuring values and parameters and performing builds and deployments from JDeveloper and Oracle WLS. Additional existing Oracle online training resources on Oracle BPM and WLS can be referenced to cover other normal delivery aspects, such as user management and application deployment.

    Read the article

  • Oracle Brings Java to iOS Devices (and Android too)

    - by Shay Shmeltzer
    Java developers, did you ever wish that you could take your Java skills and apply them to building applications for iOS mobile devices? Well, now you can! With the new Oracle ADF Mobile solution, Oracle has created a unique technology that allows developers to use the Java language to develop applications that install and run on both iOS and Android mobile devices.

    The solution is based on a thin native container that installs as part of your application. The container is able to run the same application you develop, unchanged, on both Android and iOS devices. One part of the container is a headless, lightweight JVM based on Java ME CDC technology. This allows the execution of Java code on your mobile device. Java is used for building business logic, accessing the local SQLite encrypted database, and invoking and interacting with remote services.

    Java concepts on the UI, too. To further help transition Java developers into mobile developers, ADF Mobile borrows familiar concepts from the world of JSF to make the UI development experience simpler. The user interface layer of Oracle ADF Mobile is rendered with HTML5, which delivers a native user experience on the devices, including animations and gesture support. Using a set of rich components, developers can create mobile pages without needing to write low-level HTML5 and JavaScript code. The components cover everything from simple controls such as text fields, date pickers, buttons, and links, to advanced data visualization components such as graphs, gauges, and maps, to unique mobile UI patterns such as lists and toggle selectors. Want to see the components in action? Access this demo instance from your mobile device. Need to further customize the look and feel? You can use CSS3 to achieve this.

    A controller layer -- similar in functionality to the JSF controller -- allows developers to simplify the way they build navigation between pages. The logic behind the pages is written in managed beans with various scopes, again similar to the JSF approach. Need to interact with device features like the camera, SMS, or contacts? Oracle has conveniently packaged access to these features in a set of services that you can drag and drop into your pages as buttons and links, or activate from your managed beans with Java calls. Underneath the covers, this layer is implemented using the open source PhoneGap solution.

    With the new Oracle ADF Mobile solution, transferring your Java skills to the mobile world has become much easier. Check out this development experience demo, then go and download JDeveloper and the ADF Mobile extension and try it out on your own. For more on ADF Mobile, see the ADF Mobile OTN page.

    Read the article

  • How to be successful at BDD Specifications Workshops?

    - by sigo
    Today we tried to introduce BDD into our software development process by holding a specification workshop. For this workshop we had two developers, one tester, and one business analyst. The workshop lasted 1h30, and by the end of it we had managed to figure out some BDD scenarios for our new feature. We tried to focus on finding the scenarios we might otherwise miss, and the difficult ones.

    At the end of the workshop, some people were actually unhappy with it. One developer felt he had wasted his time, as he was used to being given the scenarios directly by the business analyst and reviewing them with her. The business analyst didn't feel confident in our scenario coverage (she had a feeling we could have missed other important cases) but, more importantly, felt that the workshop was also a waste of time, as she could have figured out all these scenarios by herself, in a shorter period of time.

    So my question is: how can that kind of workshop actually work? In theory, given a new feature to develop, you put the three 'amigos' (dev/tester/BA) in the same room so that they can collaborate on writing the different requirements for the new feature using examples. I can see all the benefits of that, especially in terms of knowledge sharing and a common vision of the product, the end goal, and the definition of done.

    But in practice, we still think it is more cost effective to first have the BA work on her own on the examples, and only then have the scenarios reviewed/reworked by the three 'amigos'. With the BA working on her own, we feel more confident that we will miss less, and we still get to review the scenarios afterward to double-check. We don't think that simple brainstorming/deliberate discovery is enough to seriously cover all the requirements for a feature. The business analyst is actually the best person for that kind of work. What we do is review what she wrote and see whether we then have a common understanding (which may lead to rewriting some of her scenarios or adding new ones she missed).

    This workshop lasted 1h30, and by the end of it we didn't feel confident enough about what we had done... sure, we could have spent more time on it, but honestly most people are exhausted after 1h30 of brainstorming. So how can you get this to work effectively in practice?
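
    For context, the scenarios produced in such a workshop look roughly like this -- a minimal sketch in Gherkin syntax, with an entirely hypothetical feature and values:

      Feature: Account withdrawal
        Scenario: Withdrawal is refused when funds are insufficient
          Given an account with a balance of 20 EUR
          When the customer asks to withdraw 50 EUR
          Then the withdrawal is refused
          And the balance remains 20 EUR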

    Read the article

  • ATG Live Webcast Dec. 13th: EBS Future Directions: Deployment and System Administration

    - by Bill Sawyer
    This webcast provides an overview of the improvements to Oracle E-Business Suite deployment and system administration that are planned for the upcoming EBS 12.2 release. It is targeted at system administrators, DBAs, developers, and implementers. This webcast, led by Max Arderius, Manager, Applications Technology Group, compares existing deployment and system administration tools for EBS 12.0 and 12.1 with the upcoming functionality planned for EBS 12.2. This was a very popular session at OpenWorld 2012, and I am pleased to bring it to the ATG Live Webcast series. This session will cover:

      - Understanding the Oracle E-Business Suite 12.2 Architecture
      - Installing & Upgrading EBS 12.2
      - Online Patching in EBS 12.2
      - Cloning in EBS 12.2

    Date: Thursday, December 13, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenter: Max Arderius, Manager, Applications Technology Group

    Webcast Registration Link (preregistration is optional but encouraged)

    To hear the audio feed:
      Domestic Participant Dial-In Number: 877-697-8128
      International Participant Dial-In Number: 706-634-9568
      Additional International Dial-In Numbers Link
      Dial-In Passcode: 103194

    To see the presentation, the Direct Access Web Conference details are:
      Website URL: https://ouweb.webex.com
      Meeting Number: 593672805

    If you miss this webcast -- or any of our webcasts -- don't worry: we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay, and you can find our archive of past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • PBCS Hyperion Planning in the Cloud Implementation Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    Oracle Planning and Budgeting Cloud Service (PBCS) opens up opportunities for organizations of all sizes to streamline planning and forecasting, accelerate deployment, and reduce costs. This one-day in-person workshop is delivered by Oracle Development (free to OPN member partners) and will cover the handoff from selling to implementing PBCS.

    Although the basic building blocks are the same as with on-premises Planning, there is a paradigm shift when it comes to selling and implementing a Cloud Service solution. The value proposition behind Oracle Planning and Budgeting Cloud Service is all about the deployment model, how it's sold, and how it gets implemented -- simplicity, fast adoption, and flexible deployment, without sacrificing first-class functionality. To be successful, the entire cycle from sales to implementation should consistently support this value proposition to your clients.

    This training event is for OPN member partners whose business roles involve presales, implementation consulting, and support. The workshop briefly reviews the sales approach as background, with emphasis on partner sales support. The main objective is to learn what is needed to successfully implement Oracle Planning and Budgeting Cloud Service once the sales handoff is made -- how to leverage your current Hyperion Planning knowledge and use the features designed specifically to build out a Cloud Service solution.

    This workshop is being offered at three locations for partners from all countries in EMEA:
      June 24, 2014: Kista, Sweden
      June 26, 2014: Reading, United Kingdom
      June 29-30, 2014 (split days): Dubai, United Arab Emirates

    To get more information, to check prerequisites, and to register, click here.

    Read the article

  • What is hiberfil.sys and How Do I Delete It?

    - by The Geek
    You're no doubt reading this article because there's a gigantic hiberfil.sys file sitting in the root of your drive, and you want to get rid of it to free up some space... but you can't! Luckily, you actually can delete it, and today we'll show you how. Note that the more memory you have in your PC, the bigger the file will be.

    So What Is hiberfil.sys, Anyway?

    Windows has two power management modes that you can choose from. One is Sleep Mode, which keeps the PC running in a low-power state so you can almost instantly get back to what you were working on. The other is Hibernate mode, which writes the memory out to the hard drive in its entirety and then powers the PC down completely, so you can even take the battery out, put it back in, start back up, and be right back where you were. Hibernate mode uses the hiberfil.sys file to store the current state (memory) of the PC, and since it's managed by Windows, you can't delete the file. So if you never use it and want to disable Hibernate mode, keep reading. Personally, I stick with Sleep Mode the vast majority of the time, but I do use Hibernate quite often.

    Disable Hibernate (and Delete hiberfil.sys) in Windows 7 or Vista

    You'll need to open an administrator-mode command prompt by right-clicking on Command Prompt in the Start menu and choosing Run as Administrator. Once you're there, type in the following command: powercfg -h off. You should immediately notice that the Hibernate option is gone from the Shut down menu. You'll also notice that the file is magically gone! For more about dealing with Hibernate, like setting how long it takes to head into Hibernate mode, you can check out our article on How to Manage Hibernate Mode in Windows 7.

    Disabling Hibernate Mode in Windows XP

    It's a lot easier in Windows XP to get rid of Hibernate mode... in fact, we've already covered it before, but we'll cover it again. Just head into Control Panel -> Power Options and find the Hibernate tab. Uncheck the box, reboot your PC, and then you can delete the hiberfil.sys file.
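
    A minimal sketch of the Windows 7/Vista steps from an elevated command prompt (the dir check is just a way to confirm the file is gone):

      powercfg -h off
      dir /a C:\hiberfil.sys
      REM expected: "File Not Found" once hibernation is disabled
      REM to re-enable hibernation later: powercfg -h on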

    Read the article

  • Building Enterprise Smartphone App - Part 3: Key Concerns

    - by Tim Murphy
    This is part 3 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    Key Concerns of Smartphones in the Enterprise

    These are the factors that you need to be aware of and address in order to build successful enterprise smartphone applications. Most of them have nothing to do with the application itself, as you will see here.

    Managing Devices

    Managing devices is a factor that is going to affect how much your company will have to spend outside of developing the applications. How will you track the devices within the corporation? How often will you have to replace phones and, as a consequence, have to upgrade your applications to support new phones? The devices can represent a significant investment of capital. If these questions are not addressed, you will find a number of hidden costs throughout the life of your solution.

    Purchase or BYOD

    We have seen the trend of Bring Your Own Device (BYOD) lately within the enterprise. How many meetings have you been in where someone is on their personal iPad, iPhone, Android phone, or Windows Phone? The issue is whether you can afford to support everyone's choice of device. That is a lot to take on, even if you only support the current release of each platform. Do you go with the most popular device, or do you pick a platform that best matches your current ecosystem and distribute company-owned devices? There is no easy answer here, but you should be able to give some dollar value to both the hardware and development costs related to platform coverage.

    Asset Tracking/Insurance

    Smartphones are devices that are easier to lose or have stolen than laptops and desktops. Not only do you have your normal asset management concerns, but also the assignment of financial responsibility. You will also need to insure them against damage and theft, and add legal documents that spell out the responsibilities of the employees who use these devices.

    Personal vs. Corporate Data

    What happens when you terminate an employee? How do you recover the device? What happens when they have put personal data on the device? These are all situations that can cause possible loss of corporate intellectual property, or legal repercussions from reclaiming a device with personal data on it. Policies need to be put in place that protect the company from being exposed to this type of loss. This can mean significant legal and procedural costs that you need to consider.

    Coming Up

    In the last installment of this series, I will cover application development considerations.

    Read the article

  • How did Google Analytics kill my site?

    - by user1813359
    Yesterday I created a Google Analytics profile for one of my sites and included the JS block in the layout template. What happened next was very strange: within about two minutes, the site had become unreachable.

    I had been checking the AWStats page for the site when I thought to set up GA. After that had been done, I clicked on the link for 404 stats, which opens in a new tab. It churned for a long while and then showed a nearly blank page, similar to the one Firefox shows when it chokes on a badly formatted XML page, except there was no error message. But I was logged into the server and could see that the page has an HTML 4.01 Transitional DTD. Strange! I tried viewing source, but it just churned endlessly. I then tried "inspect element" and was able to see an error message having to do with some internal Firefox library. Unfortunately, I neglected to copy that. :-(

    All further attempts to load anything on the site would time out. Firebug's Net panel showed no request being made. Chrome would time out. So I deleted the GA profile, removed the JS block, and cleared the server cache. No joy. I then removed all Google cookies and disabled JS. Still nothing. No luck in any other browser. And now my client couldn't access the site. Terrific.

    I was able to use wget while logged into another server. The retrieved page was fine and did not contain the GA JS block. However, the two servers are on the same network. (Perhaps a clue.) The server itself was fine. Ping and traceroute looked great. I could SSH in. I tailed the access log and tried a browser request. Nothing. But I forgot to quit, and a minute or so later I saw a request from someone else being logged. Later, I could see that requests had been served all day to some people.

    Now, 24 hours later, the site works once again, but it is still unreachable by the client (who is in another city). So, does anyone have some insight into what's going on? Does this have something to do with Google's CDN? I don't know very much about how GA works, but what I'm seeing reminds me of DNS propagation issues. And why the initial XML error? And why on earth was the site just plain unreachable? What did Google do to my site?! Sorry for the length, but I wanted to cover everything.
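
    When a site is reachable from one network but not another like this, a quick triage that separates the DNS, TCP, and HTTP layers can help narrow things down (the hostname is hypothetical; substitute your own, and adjust the log path for your web server):

      dig +short www.example.com        # DNS: does the name still resolve to the IP you expect?
      curl -sv --connect-timeout 10 -o /dev/null http://www.example.com/
      # -sv shows whether it is the TCP connect or the HTTP response that hangs
      tail -f /var/log/apache2/access.log   # does the request ever reach the server at all?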

    Read the article

  • Strangling the life out of Software Testing

    - by MarkPearl
    I recently did a course at the local university on software engineering. At the beginning of the course I looked over the outline of the subject, and there seemed to be some really good content. It covered traditional and agile project methodologies, some general communication and modelling chapters, and finished off with testing. I was particularly excited to see the section on testing, as this was something I learnt on my own and see great value in.

    The course has now ended, and I am very disappointed. I now know one of the reasons why so few people (in my region, at least) do Test Driven Development or apply even basic testing methodologies: the topic was too academic! Yes, you might be able to list four different types of black-box test approaches vs. white-box test approaches, and describe the characteristics of smoke tests, but never during the course did we see an example of an actual test or how it might be implemented! In fact, if I did not have personal experience of applying testing in actual projects, I wouldn't even know what a unit test looked like.

    Now, what worries me is the following. It took us six months to cover the course material, and other students more than likely came out of that course with little appreciation of the subject -- in fact, they now have such a complex view of what a test is that I think most of them will never attempt one again on their own. Secondly, imagine studying to be a dentist without ever actually seeing a tooth. Yes, you might be able to describe a tooth and know what it is made of, but nobody would want a dentist who has never seen a tooth to operate on them. Yet somehow we expect people studying software engineering to do exactly that? This is not right.

    Now, before I finish my rant, let me say that I know this is not the same everywhere in the world, and that there needs to be a balance between practical implementation and academic understanding. I am just disappointed that this does not seem to be happening at the institution where I am currently studying ;-(

    Please, if you happen to be a lecturer or teacher reading this post: a combination of theory and practicals goes a long way. We need to raise the quality of the software being produced, and that starts at learner level!
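
    To make the point concrete: a unit test need not be complicated at all. Here is a complete, runnable example in plain shell -- the function under test is hypothetical, and a real project would use a test framework, but the arrange/act/assert shape is the whole idea:

      #!/bin/sh
      # function under test (hypothetical)
      add() { echo $(( $1 + $2 )); }

      # the unit test: call the function, then assert on the result
      test_add_returns_sum() {
        result=$(add 2 3)
        if [ "$result" -eq 5 ]; then echo "PASS test_add_returns_sum"
        else echo "FAIL test_add_returns_sum: expected 5, got $result"; fi
      }
      test_add_returns_sum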

    Read the article

  • iPhone App IDs and Provisioning... Does App ID get used instead of provisioning ID if I decide to us

    - by Jann
    This is a question that has been bugging me for a while. I started my app (now submitted -- not yet approved) not wishing to get into the mess that is APNS (Push). In the iPhone Developer Center, under Provisioning Portal > Provisioning, I created a Development and a Distribution provisioning profile. I installed both in Xcode. Everything hunky-dory. The Development profile scares me a bit by expiring so soon (90 days), but I can remove it from the iPhone(s) and sign with a new one later. I tested using the Development profile and later submitted the app by signing it with the Distribution profile. I then uploaded the Distribution-signed app to iTunes Connect (the App Store).

    Okay, I understand that much. Now, what I don't understand is this: now that I understand the theories and methods behind how Push works, I wish to add it to my app. I have already gone to the iPhone Developer Center, under Provisioning Portal > App IDs, and created a Development provisioning profile and a Distribution provisioning profile there (push and in-app purchase enabled). Here is where it gets confusing to me. All the books and docs I have read say that I have to sign the app with this "App ID" provisioning profile (push-enabled) from now on. Does that mean I no longer ever use the previously created provisioning profiles? If I import these "App ID" provisioning profiles into Xcode, they will exist alongside my previously generated "non-push" profiles. ~/Library/Mobile Devices/Provisioning Profiles now has two files, one Development and one Distribution; it will then have four, even though for this app I will not use the "non-push" ones anymore, right? (Actually, since they are locked to bundle IDs and App IDs, I will never use them again if all further versions of this app use Push?)

    Confused. Can anyone enlighten me? Why not use the "App ID" profiles in the first place for everyone, even those who are not going to use push? That would keep it simpler. Should I only generate push-enabled profiles from now on, even if I am not sure I am going to use push (or, for that matter, in-app purchase)? Please give me some insight. I do not want to do this wrong. Thanks! Jann

    Read the article

  • What do you expect from a package manager for Emacs

    - by tarsius
    Although several hundred Emacs Lisp libraries exist, GNU Emacs does not have an (internal) package manager. I guess that most users would agree that it is currently rather inconvenient to find, install, and especially keep up to date Emacs Lisp libraries.

    These pages make life a bit easier:
      Emacs Lisp List - Problem: I see dead people (links).
      Emacswiki - Problem: May contain traces of nuts (malicious code).

    These are some package managers:
      XEmacs package manager
      package.el - ELPA
      pases
      install.el
      install-elisp.el
      plugin.el
      use-package.el
      jem-pkg.el
      epkg/elm - the one I am working on.

    And these are some packages that provide functionality that might be useful in a package manager:
      ell.el - Browse the Emacs Lisp List
      genauto.el - helps generate autoloads for your elisp packages
      date-calc.el - date calculation and parsing routines
      strptime.el - partial implementation of POSIX date and time parsing
      wikirel.el - Visit relevant pages on the Emacs Wiki
      loadhist.el, lib-requires.el, elisp-depend.el - Commands to list Emacs Lisp library dependencies
      project-root.el - Define a project root and take actions based upon it

    So I would like to know from you what you consider important, unimportant, supplementary, and so on in a package manager for Emacs. Some ideas:
      Many packages (incorporate the Emacs Lisp List and other lists of libraries).
      Only packages that have been tested.
      Support for more than one package archive (so people can choose between many vs. tested packages).
      Dependencies calculated based on required features only.
      Dependencies take particular versions into account.
      Only use versions that have been released upstream.
      Use versions from version control systems if available.
      Packages are categorized.
      Packages can be uninstalled and updated, not only installed.
      Support creating forks of upstream versions of packages.
      Support publishing these forks.
      Support choosing a fork.
      After installation, packages are activated.
      Generate autoloads.
      Integration with Emacswiki (see wikirel.el).
      Users can tag, comment on, and otherwise annotate packages and share that information.
      Only FSF-assigned/GPL/FOSS software, or don't care about licenses.
      The package manager should be integrated into Emacs.
      Support contacting the author.
      Lots of metadata.
      Suggest alternatives before installing a particular package.

    Some discussions about the subject at hand:
      emacs-devel 20080801
      comp.emacs 20021121
      RationalElispPackaging

    I am hoping for these kinds of answers:
      Pointers to more implementations, discussions, etc.
      Lengthy descriptions of a set of features that make up your ideal package manager.
      Descriptions of one particular desired/undesired feature. This has the advantage that the regular voting mechanism lets us see which features are most welcomed. Feel free to elaborate on my ideas from above.
      Surprise me.

    Read the article

  • Can't build gem -- native extension build fails -- can you see why?

    - by marfarma
    I can't figure out what is going wrong here -- any ideas? I'm running Ubuntu 8.04 LTS and have installed libxml2 and libxslt from these instructions:

      http://www.techsww.com/tutorials/libraries/libxml/installation/installing_libxml_on_ubuntu_linux.php
      http://www.techsww.com/tutorials/libraries/libxslt/installation/installing_libxslt_on_ubuntu_linux.php

    However, I installed the latest versions: libxslt-1.1.24 and libxml2-2.7.3. The install was uneventful, and I set LD_LIBRARY_PATH:

      $ echo $LD_LIBRARY_PATH
      /usr/local/libxslt/lib:

    It seems the function is present, at least based on the output of strings:

      /usr/local/libxslt/lib$ strings * | grep ParseStylesheetDoc
      xsltParseStylesheetDoc
      xsltParseStylesheetDoc
      xsltParseStylesheetDoc
      xsltParseStylesheetDoc
      xsltParseStylesheetDoc
      xsltParseStylesheetDoc
      xsltParseStylesheetDoc

    But the compile still fails:

      $ sudo gem install webrat
      Building native extensions.  This could take a while...
      ERROR:  Error installing webrat:
      ERROR:  Failed to build gem native extension.
      /usr/local/bin/ruby extconf.rb install webrat
      checking for iconv.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2... yes
      checking for libxml/parser.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2... yes
      checking for libxslt/xslt.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2... yes
      checking for libexslt/exslt.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2... yes
      checking for xmlParseDoc() in -lxml2... yes
      checking for xsltParseStylesheetDoc() in -lxslt... no
      libxslt is missing.  try 'port install libxslt' or 'yum install libxslt-devel'
      *** extconf.rb failed ***
      Could not create Makefile due to some reason, probably lack of necessary
      libraries and/or headers.  Check the mkmf.log file for more details.  You may
      need configuration options.

      Provided configuration options:
        --with-opt-dir
        --without-opt-dir
        --with-opt-include
        --without-opt-include=${opt-dir}/include
        --with-opt-lib
        --without-opt-lib=${opt-dir}/lib
        --with-make-prog
        --without-make-prog
        --srcdir=.
        --curdir
        --ruby=/usr/local/bin/ruby
        --with-iconv-dir
        --without-iconv-dir
        --with-iconv-include
        --without-iconv-include=${iconv-dir}/include
        --with-iconv-lib
        --without-iconv-lib=${iconv-dir}/lib
        --with-xml2-dir
        --without-xml2-dir
        --with-xml2-include
        --without-xml2-include=${xml2-dir}/include
        --with-xml2-lib
        --without-xml2-lib=${xml2-dir}/lib
        --with-xslt-dir
        --without-xslt-dir
        --with-xslt-include
        --without-xslt-include=${xslt-dir}/include
        --with-xslt-lib
        --without-xslt-lib=${xslt-dir}/lib
        --with-xml2lib
        --without-xml2lib
        --with-xsltlib
        --without-xsltlib

      Gem files will remain installed in /usr/local/lib/ruby/gems/1.8/gems/nokogiri-1.3.3 for inspection.
      Results logged to /usr/local/lib/ruby/gems/1.8/gems/nokogiri-1.3.3/ext/nokogiri/gem_make.out
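
    Since the configure step finds the libxslt headers but not the xsltParseStylesheetDoc() symbol at link time, one thing worth trying is pointing the gem build at the custom prefix explicitly -- a sketch, not a guaranteed fix, with paths assuming the /usr/local/libxslt install described above:

      export LD_LIBRARY_PATH=/usr/local/libxslt/lib:$LD_LIBRARY_PATH
      sudo gem install webrat -- --with-xslt-dir=/usr/local/libxslt
      # if it still fails, the actual link error is recorded here:
      less /usr/local/lib/ruby/gems/1.8/gems/nokogiri-1.3.3/ext/nokogiri/mkmf.log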

    Read the article

  • Unable to have nokogiri obey custom path parameters during install

    - by Christopher
    I am trying to install nokogiri locally on DreamHost using the commands:

      $ wget ftp://xmlsoft.org/libxml2/libxml2-2.7.6.tar.gz
      $ wget ftp://xmlsoft.org/libxml2/libxslt-1.1.26.tar.gz
      $ tar zxvf libxml2-2.7.6.tar.gz
      $ cd libxml2-2.7.6
      $ ./configure --prefix=$HOME/local/ --exec-prefix=$HOME/local
      $ make && make install
      $ cd ..
      $ tar zxvf libxslt-1.1.26.tar.gz
      $ cd libxslt-1.1.26
      $ ./configure --prefix=$HOME/local/ --with-libxml-prefix=$HOME/local/
      $ make && make install
      $ export LD_LIBRARY_PATH=$HOME/local/lib
      $ gem install nokogiri -- --with-xslt-dir=$HOME/local \
          --with-xml2-include=$HOME/local/include/libxml2 \
          --with-xml2-lib=$HOME/local/lib

    but it still gives the error:

      Building native extensions.  This could take a while...
      ERROR:  Error installing nokogiri:
      ERROR:  Failed to build gem native extension.
      /usr/bin/ruby1.8 extconf.rb
      checking for iconv.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
      checking for libxml/parser.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
      checking for libxslt/xslt.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... no
      libxslt is missing. try 'port install libxslt' or 'yum install libxslt-devel'
      *** extconf.rb failed ***
      Could not create Makefile due to some reason, probably lack of necessary
      libraries and/or headers.  Check the mkmf.log file for more details.  You may
      need configuration options.

      Provided configuration options:
        --with-opt-dir
        --without-opt-dir
        --with-opt-include
        --without-opt-include=${opt-dir}/include
        --with-opt-lib
        --without-opt-lib=${opt-dir}/lib
        --with-make-prog
        --without-make-prog
        --srcdir=.
        --curdir
        --ruby=/usr/bin/ruby1.8
        --with-iconv-dir
        --without-iconv-dir
        --with-iconv-include
        --without-iconv-include=${iconv-dir}/include
        --with-iconv-lib
        --without-iconv-lib=${iconv-dir}/lib
        --with-xml2-dir
        --without-xml2-dir
        --with-xml2-include
        --without-xml2-include=${xml2-dir}/include
        --with-xml2-lib
        --without-xml2-lib=${xml2-dir}/lib
        --with-xslt-dir
        --without-xslt-dir
        --with-xslt-include
        --without-xslt-include=${xslt-dir}/include
        --with-xslt-lib
        --without-xslt-lib=${xslt-dir}/lib

      Gem files will remain installed in /home/myusername/.gems/gems/nokogiri-1.4.1 for inspection.
      Results logged to /home/myusername/.gems/gems/nokogiri-1.4.1/ext/nokogiri/gem_make.out

    It doesn't seem to be looking in the paths I have specified for the libraries. Is there something wrong with my installation method?
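
    Notice that the search paths printed by extconf.rb never include $HOME/local, which suggests the extra flags are not reaching the build. Two things worth checking, assuming the $HOME/local install above -- a sketch of a workaround, not a guaranteed fix:

      ls $HOME/local/include/libxslt/xslt.h          # confirm the header really did install
      gem install nokogiri -- --with-xslt-include=$HOME/local/include \
                              --with-xslt-lib=$HOME/local/lib \
                              --with-xml2-include=$HOME/local/include/libxml2 \
                              --with-xml2-lib=$HOME/local/lib

    Passing the include and lib halves separately, rather than the -dir form, sometimes makes a difference with older extconf scripts.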

    Read the article

  • How to make my SanDisk Cruzer Blade 4GB disk back to normal?

    - by Jack
    Currently, my SanDisk Cruzer Blade 4GB has become a 64MB "Firebird" RAW flash drive (thumb drive). I don't know why it became like this, but when I plug it into a PC it suddenly turns into a 64MB flash drive. (From 4 GB to 64 MB -- that is a huge change!) I read the following articles:

      http://forums.sandisk.com/t5/All-SanDisk-USB-Flash-Drives/Cruzer-Blade-will-not-format/td-p/214932
      http://forums.whirlpool.net.au/archive/1691847
      http://forum.hddguru.com/sandiskfirebird-64mb-t23539.html

    and noticed that their solutions do not work at all. Some of their solutions are as follows:

      1. Using the HP USB Disk Storage Format Tool (http://files.extremeoverclocking.com/file.php?f=1970), which did not work for me, as it could not format.
      2. Using h2testw to see if it is genuine, which did not work for me, because it is a RAW partition.
      3. Returning it to the vendor, which I doubt will work, since the product only has a one-year warranty and it has expired.

    I also noticed that the flash drive is very fragile: after a few uses, it has ended up looking like this (the upper cover of the USB connector was broken or torn). So, I am wondering if someone has a good solution to fix it, so that at least I can retrieve my data from the fragile thumb drive.
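
    The 4GB-to-64MB symptom usually points at the drive's controller rather than the filesystem, so format tools may simply never work. If the data matters, try a recovery tool such as PhotoRec against the raw device first. Only once you have given up on the data is something like the following worth a shot from an elevated Windows prompt -- it erases everything, and the disk number here is an assumption you must verify against the "list disk" output:

      diskpart
      DISKPART> list disk
      DISKPART> select disk 1
      DISKPART> clean
      DISKPART> create partition primary
      DISKPART> format fs=fat32 quick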

    Read the article

  • Office365 SPF record has too many lookups

    - by Sammitch
    For some utterly ridiculous administrative reasons we've got a split domain, with one mailbox on Office365, which requires us to add include:outlook.com to our SPF record. The problem is that that rule alone requires nine DNS lookups of the maximum of ten. Seriously, it's horrible. Just look at it:

      v=spf1 include:spf-a.outlook.com include:spf-b.outlook.com ip4:157.55.9.128/25
        include:spfa.bigfish.com include:spfb.bigfish.com include:spfc.bigfish.com
        include:spf-a.hotmail.com include:_spf-ssg-b.microsoft.com
        include:_spf-ssg-c.microsoft.com ~all

    Given that we have our own large-ish mail system, we need rules for a, mx, include:_spf1.mydomain.com, and include:_spf2.mydomain.com, which puts us at 13 DNS lookups -- causing PERMERRORs with strict SPF validators, and completely unreliable/unpredictable validation with non-strict or badly implemented validators. Is it possible to somehow eliminate three of those include: rules from the bloated outlook.com record, but still cover the servers used by O365?

    Edit: Commenters have mentioned that we should simply use the shorter spf.protection.outlook.com record. While that is news to me, and it is shorter, it is only one record shorter. spf.protection.outlook.com expands to:

      include:spf-a.outlook.com include:spf-b.outlook.com include:spf-c.outlook.com
        include:spf.messaging.microsoft.com include:spfa.frontbridge.com
        include:spfb.frontbridge.com include:spfc.frontbridge.com

    Edit 2: I suppose we can technically pare this down to:

      v=spf1 a mx include:_spf1.mydomain.com include:_spf2.mydomain.com
        include:spf-a.outlook.com include:spf-b.outlook.com include:spf-c.outlook.com
        include:spfa.frontbridge.com include:spfb.frontbridge.com
        include:spfc.frontbridge.com ~all

    but the potential issues I see with this are:

      1. We need to keep abreast of any changes to the parent spf.protection.outlook.com and spf.messaging.microsoft.com records. If anything is changed or, god forbid, added, we would have to manually update ours to reflect that.
      2. With our actual domain name, the record's length is 260 characters, which would require two strings for the TXT record, and I honestly don't trust that all of the DNS clients and SPF resolvers out there will properly accept a TXT record longer than 255 bytes.
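
    You can audit the lookup count yourself straight from DNS. Each include:, a, mx, ptr, exists, and redirect mechanism costs one of the ten lookups RFC 7208 allows, while ip4:/ip6: terms are free:

      dig +short txt outlook.com | grep -i spf
      dig +short txt spf.protection.outlook.com
      # follow each include: these return, and keep a running tally against the limit of 10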

    Read the article

  • Event ID: 861 - The Windows Firewall has detected an application listening for incoming traffic

    - by Chris Marisic
    Firstly, my machines aren't compromised; any person suggesting such will be downvoted. The Security logs on some of my network's client machines (all Windows XP SP3) get filled with these useless error messages:

      Security - Failure Audit - Detailed Tracking - Event ID: 861
      User: NT AUTHORITY\NETWORK SERVICE
      The Windows Firewall has detected an application listening for incoming traffic.
      Name: -
      Path: C:\WINDOWS\system32\svchost.exe
      Process identifier: 976
      User account: NETWORK SERVICE
      User domain: NT AUTHORITY
      Service: Yes
      RPC server: No
      IP version: IPv4
      IP protocol: UDP
      Port number: 55035
      Allowed: No
      User notified: No

    It's always on various random UDP ports, so setting up a port exception isn't really an option. It's always from svchost or lsass, both of which run services from DLLs. One of the most frequent offenders seems to be the DnsCache service.

    In my global policy, under Administrative Templates > Network > Network Connections > Windows Firewall > Domain Profile (I haven't changed any Standard Profile options -- do both need to be configured?), I allow the remote administration and remote desktop exceptions, and I have a custom program exception list containing:

      %SystemRoot%\system32\svchost.exe:*:enabled:svchost (Windows won't let you add this exception on a local machine, but it let me have it here in the global policy; it just doesn't seem to do anything)
      %SystemRoot%\system32\lsass.exe:*:enabled:lsass (I think this one ended all of my LSASS messages)
      %SystemRoot%\system32\dnsrslvr.dll:*:enabled:dnscache (I tried adding the DLL itself to the exception list; this didn't seem to do anything)

    Are there really any options left other than disabling the Windows Firewall entirely, disabling auditing entirely, or just setting the Event Viewer to auto-overwrite as needed? I'd much rather fix the problem and stop these entries from ever being created, instead of just covering up the problem.

    Read the article

  • Windows Server 2008 / SQL 2008 Licensing for Authenticated Web Application

    - by MikeM
    Hello, I'm trying to crunch some numbers to see what the software costs are for hosting an application we are developing. Users will not be anonymous -- they will need to log in.

    SQL Server 2008: SQL Server licensing is easy -- it will be licensed per processor. No real fuss there. The cost of CALs would be much higher for this number of users than the processor licenses.

    Windows Server 2008: This is where it gets trickier. We need to license the OS for both the web servers (there will be a couple) and the database servers (also a couple). The web servers could run on the Web Edition without a need for CALs but, if you continue reading, you will see that may not matter much, because I will likely need user CALs for each user anyway. We can't use the External Connector license for any of the Windows licenses, because that doesn't cover customers who are paying to access a hosted application. We can't use the Web Edition for the SQL Servers, because that license only allows a database running on Web Edition to host data for the local web application (i.e. other web servers can't connect to it). So that leaves us with the "full" editions of Windows Server for the database server OS.

    I find this a little ridiculous, and I feel as though I must be missing something, but it looks to me like I will actually need to buy a CAL for every user who signs up to use our service. I feel like I'm missing something because that means that, for every user, I have to shell out $40 for a CAL. That could be one or two years' worth of revenue from each user for an inexpensive service! Is there any way to serve a web application to authenticated users without paying for individual Windows Server CALs, if the web servers and SQL servers are separate boxes?

    Read the article

  • Cannot get libcurl-devl on OpenSUSE 11.3

    - by Dai
    I have a server running openSUSE 11.3 that I can't really upgrade to a newer version of openSUSE (it's a managed appliance). I have some PHP shell scripts that need to run on the server and that depend on both cURL and OpenSSL. I discovered that the PHP 5.3.3 binaries on the server did not include OpenSSL, but did include cURL. I downloaded the latest PHP sources, extracted them, and ran:

      ./configure --with-openssl --with-zlib --with-bcmath --with-curl --with-readline --with-libxml --enable-sockets

    This failed: the configure script complained that it couldn't find cURL:

      checking for cURL support... yes
      checking for cURL in default path... not found
      configure: error: Please reinstall the libcurl distribution - easy.h should be in <curl-dir>/include/curl/

    I tried to install libcurl by running:

      zypper install libcurl-devl

    This failed too:

      doom:~/phpworksite/php-5.5.15 # zypper install libcurl-devl
      Loading repository data...
      Warning: Repository 'Updates for openSUSE 11.3 11.3-1.82' appears to be outdated. Consider using a different mirror or server.
      Warning: Repository 'openSUSE_11.3_Updates' appears to be outdated. Consider using a different mirror or server.
      Reading installed packages...
      'libcurl-devl' not found in package names. Trying capabilities.
      No provider of 'libcurl-devl' found.
      Resolving package dependencies...
      Nothing to do.

    However, libcurl-devl is listed when I run zypper search curl:

      doom:~/phpworksite/php-5.5.15 # zypper search curl
      Loading repository data...
      Warning: Repository 'Updates for openSUSE 11.3 11.3-1.82' appears to be outdated. Consider using a different mirror or server.
      Warning: Repository 'openSUSE_11.3_Updates' appears to be outdated. Consider using a different mirror or server.
      Reading installed packages...
      S | Name                        | Summary                                                  | Type
      --+-----------------------------+----------------------------------------------------------+--------
      i | curl                        | A Tool for Transferring Data from URLs                   | package
        | curlftpfs                   | Filesystem for mounting FTP hosts using FUSE and libcurl | package
        | libcurl-devel               | A Tool for Transferring Data from URLs                   | package
      i | libcurl4                    | cURL shared library version 4                            | package
      i | perl-WWW-Curl               | Perl extension interface for libcurl                     | package
      i | php5-curl                   | PHP5 Extension Module                                    | package
        | python-curl                 | Python module interface to the cURL library              | package
        | python-curl-doc             | Documentation for python-curl                            | package
        | xmms2-plugin-curl           | Curl Support for xmms2                                   | package
        | xmms2-plugin-curl-debuginfo | Debug information for package xmms2-plugin-curl          | package

    Here are the current repositories:

      doom:~/phpworksite/php-5.5.15 # zypper repos
      #  | Alias                                        | Name                                         | Enabled | Refresh
      ---+----------------------------------------------+----------------------------------------------+---------+--------
      1  | PHP_extensions_(openSUSE_11.3)               | PHP_extensions_(openSUSE_11.3)               | No      | Yes
      2  | Packman_11.3                                 | Packman_11.3                                 | Yes     | Yes
      3  | Updates for openSUSE 11.3 11.3-1.82          | Updates for openSUSE 11.3 11.3-1.82          | Yes     | Yes
      4  | openSUSE_11.3_OSS                            | openSUSE_11.3_OSS                            | Yes     | Yes
      5  | openSUSE_11.3_Updates                        | openSUSE_11.3_Updates                        | Yes     | Yes
      6  | openSUSE_BuildService_-_devel:languages:perl | openSUSE_BuildService_-_devel:languages:perl | No      | Yes
      7  | repo-debug                                   | openSUSE-11.3-Debug                          | No      | Yes
      8  | repo-non-oss                                 | openSUSE-11.3-Non-Oss                        | Yes     | Yes
      9  | repo-oss                                     | openSUSE-11.3-Oss                            | Yes     | Yes
      10 | repo-source                                  | openSUSE-11.3-Source                         | No      | Yes

    BTW, I did try building PHP without cURL; however, it broke a lot of things, so apparently I really need cURL.

    My question: how can I install libcurl-devl (or just install cURL) so that I can build PHP?
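
    Worth noting before anything else: the zypper search output above lists the development package as libcurl-devel, not libcurl-devl, so the first thing to try (assuming the OSS repo is still reachable) is simply:

      zypper install libcurl-devel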

    Read the article

  • Home Networking Questions

    - by Eddie Parker
    Hello: I'm looking to wire my home with CAT-X (where X is probably going to be CAT-6, unless someone can convince me otherwise ;) ). I'm seeking advice on what equipment I'll need for the job, and anything I should watch out for. It's a two-story half-duplex I'll be wiring, roughly 1800 sq ft.

    Here's what I believe I need so far:

      Bulk CAT-6 Ethernet cabling
      CM-rated gigabit switch(es?)
      Patch panel
      Equipment for cutting and terminating wire, fishing it through walls, etc.
      Wall outlet covers, etc.

    Questions I have:

      1. Does the MHz rating on the Ethernet cable matter? If so, why?
      2. I currently have two gigabit switches, an 8-port and a 5-port. Should I buy one massive switch to cover all the connections I need, or should I just chain the two together and buy a switch for however many other connections I need?
      3. Do I really need a patch panel? I understand it keeps the cables looking cleaner than coming out of a hole in the wall, but is there some other product I can use, perhaps one combining a switch with a patch panel or some such? Ideally I'll have all this running out of a relatively small closet, so the fewer (or smaller) the components, the better.

    Any advice, links, or suggested products to use/avoid would be appreciated!

    Read the article

  • Self-hosting vs. Budget hosting - What are the economics?

    - by cdonner
    My current hosting provider (shared Linux, unlimited domains, < $10 per month, with about 20 sites) has been giving me a lot of grief lately. I am contemplating just ditching them and repurposing the old Sun V20z that is sitting in my basement rack -- moving the hosting in-house, literally.

    My math goes as follows:

      My company pays up to $80 a month for my home internet service, which would cover the upgrade from the current FiOS service to Comcast business internet with 5 static IPs. So this comes free.
      Running the server will cost me about $180/year at the current rate of approx. $0.20/kWh.
      My time is free.

    So it seems that my net cost of doing this would be about $80 annually, plus the work that goes into setup and maintenance. I will have to get email hosting somewhere, which I do not want to do myself. On the other side of the balance sheet, I'd likely get better uptime than my provider (based on recent stats), will not get suspended, and won't have to spend hours with customer support.

    Overall, I am not convinced. Has anybody actually done this? What was your experience, and did it pay off?
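
    A quick sanity check on the electricity line item: $180/year at $0.20/kWh works out to roughly a 100 W continuous draw.

      $ echo 'scale=4; 180 / 0.20 / 8760 * 1000' | bc
      102.7000

    Dual-Opteron boxes like the V20z are often reported to idle well above 100 W, so it is worth putting a meter on the machine before trusting that figure.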

    Read the article

  • AsteriskNow Migration / Shared Extension Space

    - by Aaron C. de Bruyn
    I am testing the possibility of migrating from an old Avaya phone system to AsteriskNow. The migration would cover several hundred phones, but spread out over several years. (Management wants to move buildings to the new phone system one by one, as cables get cut or time permits.) Two other directives are that extensions must not change and that we need a GUI that other admins (non-Linux geeks) can manage. They currently use 9XXX for all extensions.

    We linked the Avaya and Asterisk boxes via a PRI card, and they are both communicating. From the Avaya side, if we move (for example) extension 9001 to Asterisk, we forward the call over the PRI to the AsteriskNow box and the SIP phone rings. In AsteriskNow we have an outgoing rule '_9XXX' that routes all four-digit extensions starting with 9 back to Avaya.

    Here's the trouble. Dialing 9001 (the extension moved over to AsteriskNow) causes the call to be routed out over the PRI to the Avaya box, then the Avaya box routes the call back to Asterisk, and Asterisk routes it to the SIP phone. As we switch over more and more users, this will use up more and more channels on the PRI card.

    Is there a way I can ask Asterisk to check its local extensions first, and only then forward calls matching '_9XXX' to the Avaya system? (I know how to do it by editing the raw config files; I'm just looking for a way to do it in the GUI so other admins can manage it if necessary.) As a last-ditch plan, I know I can add a specific '_9001' outgoing call rule that sends calls directly to extension 9001 -- but I'd really hate to do that for several hundred phones.
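
    For reference, the raw-dialplan fallback alluded to above might look roughly like this in extensions.conf -- the context and channel names here are assumptions, not AsteriskNow defaults:

      ; try the local dialplan first; only send the call out the PRI if the extension isn't ours
      exten => _9XXX,1,GotoIf($[${DIALPLAN_EXISTS(from-internal,${EXTEN},1)}]?islocal:topri)
      exten => _9XXX,n(islocal),Goto(from-internal,${EXTEN},1)
      exten => _9XXX,n(topri),Dial(DAHDI/g0/${EXTEN})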

    Read the article

  • Embed album art in OGG through the command line in Linux

    - by teratomata
    I want to convert my music from FLAC to Ogg, and currently oggenc does that perfectly, except for album art. metaflac can export album art, but there seems to be no command-line tool to embed album art into an Ogg file. MP3Tag and EasyTAG are able to do it, and there is a specification for it here, which calls for the image to be base64 encoded. However, so far I have been unsuccessful in taking an image file, converting it to base64, and embedding it into an Ogg file.

    If I take a base64-encoded image from an Ogg file that already has the image embedded, I can easily embed it into another file using vorbiscomment:

      vorbiscomment -l withimage.ogg > textfile
      vorbiscomment -c textfile noimage.ogg

    My problem is taking something like a JPEG and converting it to base64. Currently I have:

      base64 --wrap=0 ./image.jpg

    which gives me the image file converted to base64. Using vorbiscomment and following the tagging rules, I can embed that into an Ogg file like so:

      echo "METADATA_BLOCK_PICTURE=$(base64 --wrap=0 ./image.jpg)" > textfile
      vorbiscomment -c textfile noimage.ogg

    However, this gives me an Ogg whose image does not work properly. Comparing the base64 strings, I noticed that all properly embedded pictures begin with a header, but the base64 strings I generate lack this header. Further analysis of the header:

      od -c header.txt
      0000000  \0  \0  \0 003  \0  \0  \0  \n   i   m   a   g   e   /   j   p
      0000020   e   g  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
      0000040  \0  \0  \0  \0  \0  \0  \0  \0 035 332
      0000052

    which follows the spec given above. Notice that 003 corresponds to "front cover" and image/jpeg is the MIME type. So finally, my question is: how can I base64 encode a file and generate this header along with it, for embedding into an Ogg file?
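
    The header is just the binary METADATA_BLOCK_PICTURE structure from the spec, so it can be built with printf before base64 encoding the whole block. A sketch under these assumptions: a JPEG front cover, no description, and the image dimension fields left as zero (as in the working header dumped above), with GNU coreutils and xxd available:

      img=image.jpg
      {
        printf '\x00\x00\x00\x03'                  # picture type 3 = front cover
        printf '\x00\x00\x00\x0a'                  # MIME string length: 10
        printf 'image/jpeg'                        # MIME type
        printf '\x00\x00\x00\x00'                  # description length: 0
        printf '\x00\x00\x00\x00\x00\x00\x00\x00'  # width, height: 0
        printf '\x00\x00\x00\x00\x00\x00\x00\x00'  # colour depth, colour count: 0
        printf '%08x' "$(wc -c < "$img")" | xxd -r -p   # picture data length, big-endian
        cat "$img"                                 # the picture data itself
      } > block.bin
      echo "METADATA_BLOCK_PICTURE=$(base64 --wrap=0 block.bin)" > textfile
      vorbiscomment -a -c textfile noimage.ogg     # -a appends, keeping existing tags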

    Read the article

  • Using VLANs/subnetting to separate management from services?

    - by YouAreTheHat
    Background: I recently purchased a server and a managed switch for my home, in the hope of getting more experience and some fun toys to play with. The devices and appliances I either have or plan to have cover a broad spectrum: router, DD-WRT AP, Dell switch, OpenLDAP server, FreeRADIUS server, OpenVPN gateway, home PCs, gaming consoles, etc.

    Setup: I intend to segment my network with VLANs and associated subnets (e.g., VID10 is populated by devices on 192.168.10.0/24). The idea is to secure the more sensitive appliances by forcing traffic through my router/firewall. After thinking and planning for some time, I have tentatively decided on four VLANs: one for the WAN connection, one for servers, one for home/personal devices, and one for management. In theory, the home VLAN will have limited access to the servers, and the management VLAN will be totally isolated, for security.

    Question: Since I want to restrict access to management interfaces, but some appliances have to be accessible to other devices, is it possible/wise to have only management (SSH, HTTP, RDP) available on one VLAN/IP and only services (LDAP, DHCP, RADIUS, VPN) available on another? Is this a thing that is done? Does it gain me the security I think it does, or does it hurt me in some way?
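
    This split is commonly achieved by giving each host one interface (or VLAN subinterface) per role and binding each daemon to the matching address. A small sketch on a Linux server -- the management VLAN ID and both addresses are assumptions, though 192.168.10.0/24 follows the VID10 scheme above:

      # /etc/ssh/sshd_config -- management plane only
      ListenAddress 192.168.99.10        # assumed management VLAN (e.g. VID99) address

      # slapd -- LDAP offered only on the server VLAN address
      slapd -h "ldap://192.168.10.5/"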

    Read the article
