Search Results

Search found 1288 results on 52 pages for 'prime factors'.

Page 31/52

  • On Windows 7, how can I tell if a recording is multi-channel without third-party tools? [migrated]

    - by engineerchuan
    A customer has an audio recording that is confidential, so he can't send it to me. He would also prefer not to install other tools; he has a basic Windows 7 install. Is there any way to tell whether the recording is one channel or two? Normally, I would just get the audio and soxi it, or I would tell him to install Audacity or an equivalent sound editor and open it up. I also thought that if you right-clicked and looked at the size, bit rate, and length, you could derive the number of channels, but bit rate already factors in the number of channels. Sorry I'm not giving you a lot to work with.
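
    For comparison, a minimal sketch of what the soxi/Audacity check boils down to, using only Python's standard library; it assumes the recording is a WAV file, and the path is a hypothetical placeholder:

        import wave

        # Print the channel count of a WAV file.
        # "recording.wav" is a placeholder path.
        f = wave.open("recording.wav", "rb")
        print(f.getnchannels())  # 1 = mono, 2 = stereo
        f.close()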

    Read the article

  • Storing Cards and PCI Compliance

    - by Nimbuz
    I'm developing a SaaS service and will be managing payments as a merchant for customers. Since we'll be using multiple payment processors depending on the user's location, the amount, and other factors, it's important for us to store card details. I did some research, and from what I understood, all you need is a PCI-compliant host (VPS, dedicated, or private cloud), validated and certified through some provider like Trustwave. Is that correct, or am I missing something? Also, it would be great if you could suggest a few (not necessarily cheap, but affordable) PCI-compliant hosts. Many thanks!

    Read the article

  • Is it possible to create an SFTP drop box?

    - by Jordan Reiter
    I have a Windows server with folders accessible via SFTP (the server is running OpenSSH); scp is blocked. I would like to copy files from a Linux server to the Windows server, and SFTP seems like a good option. Ideally I'd like something similar to an FTP drop box, so that the Linux box could just copy files directly over to the Windows box. I'm also open to any solution that would let me copy the files with the least amount of hassle. The language I'd be using on the Linux box is Python; not sure if that factors in or not.
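
    Since Python is on the table, a minimal sketch of the Linux-side push using the paramiko library (a widely used Python SSH/SFTP client); the hostname, credentials, and paths below are hypothetical placeholders:

        import paramiko

        # Connect to the Windows OpenSSH server and drop a file via SFTP.
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect("windows-box.example.com", username="dropuser", password="secret")
        sftp = client.open_sftp()
        sftp.put("/srv/outgoing/report.csv", "dropbox/report.csv")  # local -> remote
        sftp.close()
        client.close()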

    Read the article

  • FreePBX: Asterisk in the Cloud (EC2) Audio Problems

    - by neezer
    Please pardon the newbie question, but I can't seem to figure this out. I followed Voxilla's tutorial to a tee: http://voxilla.com/2009/10/15/voxill...p-by-step-1457 But when making calls, my softphones connect, yet there is no audio in either direction. I know from poking around the forums that this is generally caused by one of two factors: NAT or audio codecs. Being new to the arena, however, I don't know which. I believe I have Asterisk and the clients restricted to just ulaw, and I also believe I have the correct ports open and my externip set correctly (I think the Voxilla AMI does this automatically, since it's in the cloud). I'm a bit lost. I'd be happy to post whatever configuration files might help, provided you tell me where they are on the filesystem. But as I said before, this is effectively a vanilla install of Voxilla's own FreePBX AMI. I'd appreciate any help or guidance here. Thanks!
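
    For reference, a sketch of the sip.conf [general] settings most often checked for one-way or missing audio behind NAT; the addresses below are illustrative placeholders, and the Voxilla AMI may well manage these values for you:

        [general]
        nat=yes                      ; reply to the address traffic actually came from
        externip=203.0.113.10        ; the instance's public (elastic) IP - placeholder
        localnet=10.0.0.0/255.0.0.0  ; ranges considered "inside" the NAT
        disallow=all                 ; restrict codecs...
        allow=ulaw                   ; ...to ulaw only

    Note also that Asterisk's RTP port range (10000-20000 by default, set in rtp.conf) must be open in the EC2 security group along with UDP 5060, or calls will set up but carry no audio.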

    Read the article

  • When RDPing as a Domain User, a Smart Card Is Requested

    - by Paul
    My W8 machine is connected to domain zen. If I RDP to the W8 machine, I can log in as a local user without problems. If I try to log in as a domain user, I am prompted for a smart card instead of a password. Any ideas why? Note that "Interactive logon: Require smart card" is disabled in Group Policy (the policy screenshot and the rsop.msc output are not reproduced here). Some additional information on this one: if my connecting machine is on the same domain/network as the W8 machine, then I am prompted for a password as usual. If the machine is remote, on a different domain, then I am prompted for a smart card. In addition, the machine I am connecting from that gets the smart card prompt is an XP box. I haven't isolated exactly which of these factors triggers the different response.

    Read the article

  • Dangers of the Python eval() function

    - by LukeP
    I am creating a game; specifically, it is a Pokemon battle simulator. I have an SQLite database of moves in which a row looks something like: name | type | Power | Accuracy | PP | Description. However, there are some special moves. For these special moves, their damage (and other attributes not shown above, like status effects) may be dependent on certain factors. Rather than create a huge if/else in one of my classes covering the formulas for every one of these moves, I'd rather include another column in the DB that contains a formula in string form, like 'self.health/2' (simplified example). I could then just plug that into eval(). I always see people saying to stay away from eval, but from what I can tell, this would be considered an acceptable use, as the dangers of eval only come into play when accepting user input. Am I correct in this assumption, or is there something I'm not seeing?
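
    To illustrate the pattern under discussion, a sketch that evaluates a stored formula string against an explicit namespace; the formula and stats are made-up examples, and stripping __builtins__ merely reduces the attack surface rather than providing a real sandbox:

        # A formula string as it might be stored in the moves table.
        formula = "health / 2 + level * 2"

        # Evaluate against a whitelisted namespace; the empty __builtins__
        # blocks accidental access to open(), __import__(), and friends.
        context = {"health": 120, "level": 30}
        damage = eval(formula, {"__builtins__": {}}, context)
        print(damage)  # -> 120.0 under Python 3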

    Read the article

  • Set an Excel cell's color based on multiple other cells' colors

    - by Lord Torgamus
    I have an Excel 2007 spreadsheet for a list of products and a bunch of factors to rate each one on, and I'm using Conditional Formatting to set the color of the cells in the individual attribute columns. It looks something like this (screenshot not reproduced here). I want to fill in the rating column for each item with a color, based on the color ratings of its individual attributes. Examples of ways to determine this:

    • the color of the category in which the item scored worst
    • the statistical mode of the category colors
    • the average of the category ratings, where each color is assigned a numerical value

    How can I implement any or all of the above rules? (I'm really just asking for a quick overview of the relevant Excel feature; I don't need step-by-step instructions for each rule.)
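
    As a sketch of the third rule only (colors mapped to numbers, averaged, then mapped back), expressed in Python rather than Excel syntax; the color names and scores are made up for illustration:

        # Map each category color to a score, average one item's row,
        # and pick the color whose score is closest to that average.
        scores = {"red": 1, "yellow": 2, "green": 3}
        row = ["green", "yellow", "green", "red"]  # one item's category colors
        avg = sum(scores[c] for c in row) / float(len(row))
        overall = min(scores, key=lambda c: abs(scores[c] - avg))
        print(overall)  # "yellow", since the average is 2.25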

    Read the article

  • SQL Cluster install on Hyper V options

    - by Chris W
    I've been reading up on running a SQL cluster in a Hyper-V environment, and there seem to be a couple of options:

    1. Install a guest cluster on 2 VMs that are themselves part of a failover cluster.
    2. Install a SQL cluster on 2 VMs where the VMs themselves are not part of an underlying cluster.

    With option 1, it's a little more complex, as there are effectively two clusters in play, but this adds some flexibility in the sense that I'm free to migrate the VMs between the physical blades in their cluster for physical maintenance without affecting the status of the SQL guest cluster running within them. With option 2, the set-up is a bit simpler, as there's only 1 cluster in the mix, but my VMs are anchored to the physical blades they're set up on (I'll ignore the fact that I could manually move the VHDs for the purposes of this question). Are there any other factors that I should consider here when deciding which option to go for? I'm free to test out both options and probably will do, but if anyone has working experience of these set-ups and can offer some input, that would be great.

    Read the article

  • What is the cost of running a desktop machine at home?

    - by vinc
    How much does it (roughly) cost to leave my personal home computer running 24/7 for a year? I'm not doing anything unusual; I run a webserver for myself, surf the web, and write some code. I don't have any specific specs. So, for example, there might be the cost of electricity, the internet connection, and possibly some other factors that I've overlooked. I'm trying to decide whether it would be a good idea to turn off my computer when I'm asleep and not using it. Is the cost negligible, maybe 1 USD per day, something in between, or more?
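
    As a rough back-of-the-envelope sketch, the electricity part is just wattage times hours times your tariff; the draw and rate below are assumptions, not measurements:

        # Yearly electricity cost of a machine left on 24/7.
        watts = 100          # assumed average draw of a mostly idle desktop
        usd_per_kwh = 0.12   # assumed electricity rate; varies widely by region
        kwh_per_year = watts / 1000.0 * 24 * 365
        print(kwh_per_year * usd_per_kwh)  # ~105 USD/year, i.e. ~0.29 USD/day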

    Read the article

  • Utility to take daily screenshots of a webpage

    - by Kevin L.
    I would like to have a visual history of my Tomato bandwidth graphs, so that I can roughly/manually correlate them with some other factors. Tomato can squirrel away the actual data points, but I'd rather not deal with importing them into some visualization tool. For sheer simplicity, a single image per day would be preferable. I'd like a program that can wake up at, say, midnight, take a screenshot of a given webpage (the URL will always be the same), and save that image to a folder, maybe named after the date/time. I'd prefer OS X, but Windows and Linux are fair game too; I use all three. Any suggestions?
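
    For illustration, one way to script the capture itself in Python with Selenium (assuming Firefox and its driver are installed), leaving the midnight wake-up to cron, launchd, or Task Scheduler; the router URL and output folder are placeholders:

        from datetime import date
        from selenium import webdriver

        # Load the page and save it under a date-stamped filename.
        driver = webdriver.Firefox()
        driver.get("http://router.local/bwm-daily.asp")
        driver.save_screenshot("/screenshots/tomato-%s.png" % date.today())
        driver.quit()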

    Read the article

  • Why are the prices for broadband bandwidth at data centers much higher than consumer/small business offerings?

    - by odemarken
    The prices for broadband bandwidth at data centers are sometimes as much as 10x higher than for a typical small business/consumer connection, at least where I live. Now, I understand those are two different kinds of products, but what exactly are the differences? Is it mainly because the bandwidth you get at a data center is guaranteed (CIR), while a consumer offer lists maximal bandwidth (EIR/MIR)? Or are there other factors as well? (Note: my previous, much more specific question on the same general topic was closed as not constructive. I tried to extract the core issue and present it in a way that can be answered objectively. If you feel that this question is still bad and should be closed, please comment and explain why.)

    Read the article

  • Why would an ext3 filesystem be rolled back on a Debian VM running in VirtualBox after loss of power to the host?

    - by Sevas
    A Debian virtual machine runs as a VirtualBox guest VM; its filesystem is ext3. The host system loses power, and after booting up the host system and guest VM, I find that the VM's filesystem has been rolled back to a previous state, losing changes made to the filesystem some time before the power loss. The operations that were rolled back had been fully completed before the loss of power (files fully copied, file handles closed, etc.), but it's possible and even likely that other write operations were occurring on the VM at the point of the crash. So I am trying to figure out whether it's the filesystem recovery process that rolls back filesystem operations after encountering corruption post power loss, or whether it's possibly related to VirtualBox and the way it ignores flush requests for performance gains by default (discussed here). Are there any other factors that would result in the filesystem being rolled back after losing power?

    Read the article

  • Should I never put a transactional replication distributor on a subscriber server?

    - by Stuart Branham
    What factors into choosing a distribution server for transactional replication? In our topology, we've always had the distributor reside on the publishing server. We rarely generate snapshots and performance is good enough, so this is okay for us today. One of our instances is moving to a cluster, so we need to move the distributor off for resilience/symmetry. Right now our two choices are to use a server physically close to the publishers, or our single subscription server. Our publisher is in our main office, and our subscriber is in a colocation facility off-site which our ISP runs. We have a pretty good line to it. The reason we're even considering the latter is to save work and licensing costs.

    Read the article

  • What monitor specifications should be taken into consideration to avoid eye problems? [closed]

    - by coding crow
    I spend 8 to 10 hours a day programming on my 13.3" laptop. I was planning to buy a good monitor, but now that I have developed CVS (computer vision syndrome), buying a monitor has become an immediate priority. I have spent some time trying to understand what I should buy and why, but I could only zero in on the size (20") and LED backlighting. So I'm looking for advice on many other factors, like resolution, pixel density, panel technology, and so forth. What should I look for in a computer monitor to avoid further eye problems? Why?

    Read the article

  • FAQ and best tips regarding learning databases

    - by AdityaGameProgrammer
    For a programmer with no prior exposure to databases, what would be a good database to learn: Oracle vs. SQL Server vs. MySQL vs. PostgreSQL? I have come across a lot of discussion of MySQL and PostgreSQL, and frankly I am confused about which to start with. Are these very different, in the sense that if one had to switch, would the exposure to one be counter-productive to learning the other? Is working with databases heavily platform dependent? What exactly do people mean by database programming vs. administration? Do people choose databases based on the programming language used for the application being developed? In general, when working with databases, is it implicit that we work with some server? Does the choice of database differ when it comes to game development? If so, what factors does it differ by? What are the best tips that you have found to be useful when learning databases?

    Edit: Some FAQs I had, for which I found the same questions on SO:

    • What should every developer know about databases?
    • Which database if learning from scratch in 2010?
    • For a beginner, is there much difference between MySQL and PostgreSQL?
    • What RDBMS should I learn/use? (MySQL/SQL Server/Oracle etc.)
    • To what extent should a developer learn databases?
    • How are database programmers different from other programmers?
    • What kind of databases are used in games?

    Read the article

  • How many different servers are needed to keep a website running with no downtime? [closed]

    - by Mason Wheeler
    Machines go down. It's a fact of life. They may need to be rebooted for some reason, or they may have a hardware failure or a power outage. So if I wanted to deploy a website with a server backed by a SQL database, putting the whole thing on one server wouldn't be good enough. It obviously needs at least two servers, so that if one goes down, the other can pick up the slack until the first comes back up. Of course, if I have the server software on two machines, either one of which could go down, I can't place the database on either of those two machines, because it could go down. So the database needs its own server. But that server can go down, so I need a backup database server and some sort of replication system to keep it in sync, so the main can fail over to it. So far, that's a bare minimum of 4 machines to keep one website running with a reasonable chance of no downtime (assuming no catastrophic events take place that take down both front-end servers at once or both DB servers at once, and no hacks, DDoS attacks, etc.). Am I missing any other factors, or should I consider 4 servers to be the minimum for running a website with a goal of continuing operation without downtime even when a server goes down?

    Read the article

  • Oracle’s AutoVue Enables Visual Decision Making

    - by Pam Petropoulos
    That old saying about a picture being worth a thousand words has never been truer. Check out the latest reports from IDC Manufacturing Insights, which highlight the importance of incorporating visual information in all facets of decision making and the role that Oracle's AutoVue Enterprise Visualization solutions can play. Take a look at the excerpts below and be sure to click on the titles to read the full reports.

    Technology Spotlight: Optimizing the Product Life Cycle Through Visual Decision Making, August 2012

    Manufacturers find it increasingly challenging to make effective product-related decisions as the result of expanded technical complexities, elongated supply chains, and a shortage of experienced workers. These factors challenge the traditional methodologies companies use to make critical decisions. However, companies can improve decision making by the use of visual decision making, which synthesizes information from multiple sources into highly usable visual context and integrates it with existing enterprise applications such as PLM and ERP systems. Product-related information presented in a visual form and shared across communities of practice with diverse roles, backgrounds, and job skills helps level the playing field for collaboration across business functions, technologies, and enterprises. Visual decision making can contribute to manufacturers making more effective product-related decisions throughout the complete product life cycle. This Technology Spotlight examines these trends and the role that Oracle's AutoVue and its Augmented Business Visualization (ABV) solution play in this strategic market.

    Analyst Connection: Using Visual Decision Making to Optimize Manufacturing Design and Development, September 2012

    In today's environments, global manufacturers are managing a broad range of information. Data is often scattered across countless files throughout the product life cycle, generated by different applications and platforms. Organizations are struggling to utilize these multidisciplinary sources in an optimal way. Visual decision making is a strategy and technology that can address this challenge by integrating and widening access to digital information assets. Integrating with PLM and ERP tools across engineering, manufacturing, sales, and marketing, visual decision making makes digital content more accessible to employees and partners in the supply chain. The use of visual decision-making information rendered in the appropriate business context and shared across functional teams contributes to more effective product-related decision making and positively impacts business performance.

    Read the article

  • Software Engineering Practices – Different Projects Should Have Different Maturity Levels

    - by Dylan Smith
    I've had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, whether what we are doing is appropriate, and what factors should be considered when determining which practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go:

    If you compare any two software projects (specifically comparing their codebases) you'll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I'm specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access database, an Excel spreadsheet, or maybe some quick "drag-and-drop coding". For this blog post I'm going to refer to this as the Software Engineering Maturity Spectrum (SEMS).

    I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over, so I won't bother going into them again here, but there are also (unnecessary) costs to over-engineering a solution. Sometimes putting in multiple layers, and IoC containers, and abstracting out the persistence, etc. is complete overkill when a one-time-use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, and then you're stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can't always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn't mean you can't manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point... maybe even multiple times).

    My thoughts are that we should be making a conscious decision around the start of each project about approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors:

    1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working?
    2. Complexity - How complex is the application functionality?
    3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time-use application, does it fill a short-term need, or is it more strategic and expected to be in use for many years to come?

    Of course this isn't an exact science. You can't say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at and then translate that into some prescriptive set of practices and techniques you should be using.
    Rather, my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum; indeed a large number of projects should probably fall somewhere within the middle, and different projects should adopt a different level of software engineering practices and maturity based on the needs of that project. To give an example of this way of thinking from my day job:

    Every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event, all these various pieces of data were tracked in separate spreadsheets, and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event's logistics.

    The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so, for example, if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS, meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining its place within the SEMS, we can see why:

    Importance – If this application were to stop working, the business doesn't grind to a halt, revenue doesn't stop, and in fact our customers wouldn't even notice, since it isn't a customer-facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don't have any data loss).

    Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data and link them together appropriately. Precisely the task that Access (and/or Excel) can do with minimal custom development required.

    Life-Expectancy – For this specific project we're only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below).

    Let's assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or Windows application. In this case the life-expectancy changed, but let's assume the importance and complexity didn't change all that much.
    We can still probably get away with not adopting a lot of the so-called "best practices". For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an Access database and continue using it; this is a decision that needs to be made on a case-by-case basis).

    At Anvil Digital we have aspirations to become a primarily product-based company. So let's say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also, as we hold focus groups and gather feedback from customers and potential customers, there's a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company.

    In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we'd probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch.

    The key point I'm trying to get across is not that you should be throwing out your code and starting from scratch all the time. But rather that you should be aware of when and how the context and objectives around a project change, and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs.

    Note: There is also the idea of "spectrum decay". Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won't be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don't touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed, since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay).

    Developer Skills

    Another important aspect to this whole discussion is the skill sets of your architects and lead developers. When talking about the progression of a developer's skills from junior->intermediate->senior->... they generally start by only being able to write code that belongs on the left side of the SEMS, and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS.
    We all realize that the learning never stops, but eventually you'll get to the point where you can comfortably develop at the right end of the SEMS (the exact practices and techniques that translates to are constantly changing, but that's not the point here). A critical skill that I'd love to see more evidence of in our industry is the most senior guys not only being able to work at the right end of the SEMS, but more importantly being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be the ability to make the conscious decision to move a project's code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch.

    An exercise that I'm planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don't match up (i.e. most of them) come up with a plan to improve the situation.

    Read the article

  • CRM@Oracle Series: CRM Analytics

    - by tony.berk
    What is the most important factor that leads to a successful CRM deployment? Is it the overall strategy, strong governance, defined processes, or good data quality? Well, it's definitely a combination of all of these, but the most important differentiator in our experience is Business Intelligence. Business Intelligence, or Analytics, is commonly mentioned as a key aspect of successful CRM and other enterprise deployments. The good news is that Oracle provides pre-built analytics dashboards, which provide real-time, actionable insight, and tools to build custom analyses. However, success with analytics, especially in a large enterprise, still requires a strong strategy, clean data for analysis, and performance. Today's CRM@Oracle slidecast covers Oracle's strategy, architecture, and key success factors for deploying CRM Analytics internally at Oracle.

    CRM@Oracle: CRM Analytics

    Click here to learn more about Oracle CRM products and here to learn about Oracle Business Intelligence Applications. Have you read our other postings in the CRM@Oracle series? If there is a particular CRM area or function you'd like to hear how Oracle implemented internally, post a comment and we'll get it on our list.

    Read the article

  • Partner Webcast: Oracle SOA Governance - 4 October 2012

    - by Thanos
    Oracle is pleased to invite you to a webcast on "Oracle SOA Governance Strategy" intended for our partners. SOA Governance is the framework that enables you to define and enforce rules for communication, collaboration, service development, management, and usage across the enterprise and among the decision makers. It also allows you to define metrics to assess the quality of services and to measure their cost and benefits for your organization. Service Oriented Architecture comes with a promise! A promise to make your business more agile through the ability to create reusable services developed and deployed in cooperation between the business and IT. This promise can only be kept if all the involved parties in your enterprise, across departments, communicate and collaborate efficiently on establishing, maintaining, and developing the service-oriented assets. Such collaboration requires guidance and control. In this webcast you will hear about the key factors needed to establish successful SOA governance, from both an organizational and a technical point of view.

    Agenda:
    • Introduction to SOA
    • Challenges of SOA governance
    • SOA governance principles
    • Governing the service lifecycle
    • Rules for choosing a service
    • Q&A session

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now. Also make sure to check out the relevant SOA Governance Resource Kit. For any questions please contact us at [email protected].

    Read the article

  • What are the options for hosting a small Plone site?

    - by Tina Russell
    I've developed a portfolio website for myself using Plone 4, and I'm looking for someplace to host it. Most Plone hosting services seem to focus on large, corporate deployments, but I need something that I can afford on a very limited budget and that fits a small, single-admin website. My understanding is that my basic options are these:

    1. I can go with a hosting service that specifically provides Plone. I know of WebFaction, but what others exist? Also, I'd have two stipulations for a Plone hosting service: (a) it needs to use Plone 4, for which I've developed my site, and (b) it needs to allow me SSH access to a home directory (including the Plone configuration), so that I may use my custom development eggs and such.

    2. I could use a VPS hosting service. What are my options here? Again, I need something cheap and scaled to my level.

    3. I could use Amazon EC2 or a similar service (please tell me of any) and pay by the tiniest unit of data. I'm a little scared of this because I have no idea how to do a cost-benefit analysis between this and a regular VPS host. The advantage of this approach would be that I only pay for what I use, making it very scalable, but I don't know how the overall cost would compare to any VPS host under similar circumstances. What factors enter into the cost of Amazon EC2? What can I expect to pay under either option for regular traffic on a new website? Which one is more desirable when a rush of visitors drives up my bandwidth bill?

    One last note: I know Plone isn't common for individuals' websites, but please don't try to talk me out of it here; that's a completely different subject. For now, assume I'm sticking with Plone for good. Also, I have seen the Plone hosting services list on Plone.org; it's twenty pages long, and the first page was nothing but professional Plone consulting services that sometimes offer hosting for business clients. So, that wasn't much help. Thank you!

    Read the article

  • Slower site with the *same* configuration as its mirror copy…?

    - by Rosamunda Rosamunda
    I've got this Drupal site (ligadelconsorcista.org) that I have to move from one server to another. The reason is that my old host, even though it was pretty decent, started a couple of months ago to have many short downtimes, which drove me crazy. The thing is that I've made a sort of mirror copy of the site: I've copied all the files exactly as they were, and after that I've imported the database. The problem is that the new site connects much more slowly than on my old hosting! (The new one is Media Temple.) I've contacted their support, and they tell me that there are several factors that can contribute to that... but that it has nothing to do with their hosting service. The thing is that I don't even know where to start looking for the problem. Notes: The new configuration is the same as the one I had with the older hosting account. Today I've set up an account with Cloudflare's CDN to try to solve the problem. Even with the CDN configured correctly (I've asked their help desk), it doesn't add any performance improvement. Any clues as to what I can do about this? Thanks!!

    Read the article
