Search Results

Search found 2346 results on 94 pages for 'dedicated'.

Page 15/94 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Sharing swap space between Windows and Ubuntu

    - by Leftium
    This Linux Swap Space Mini-HOWTO describes how to share swap space between Windows and Linux. Do these instructions still apply to Ubuntu in 2011? How should I modify the steps for Ubuntu? Is there a better approach to sharing swap space? Based on the HOWTO, it seems best to create a dedicated NTFS swap partition: Dedicated, so the swap file will be contiguous and remain unfragmented. NTFS, so both Windows and Ubuntu can read/write to it. (Or is FAT32 better for this purpose?) Then, configure Ubuntu to prepare the swap space for use by Linux on startup and by Windows on shutdown. I want to dual boot Ubuntu and Windows 7 on my X301 laptop. However, my laptop only has a 64 GB SSD, so I would like to conserve as much disk space as possible. Update: There is an alternate method using a special driver for Windows that lets you use a Linux swap partition for temporary storage like a RAM-disk, but it doesn't seem to be as good...
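
    A rough sketch of the boot/shutdown halves of the HOWTO-style setup, assuming the shared partition is /dev/sda3 (a placeholder, not a value from the question) and that Windows is left to recreate its page file on the partition after Linux hands it back; the partition contents are wiped on every switch, which is the point of dedicating it:

        #!/bin/bash
        # Boot time: turn the shared partition into Linux swap and enable it.
        SWAPDEV=/dev/sda3            # placeholder - the partition both OSes share
        mkswap "$SWAPDEV"            # rewrite the swap signature Windows clobbered
        swapon "$SWAPDEV"

        # Shutdown time: release the partition and put a filesystem back so
        # Windows can use it for its page file again (mkfs.ntfs from ntfsprogs
        # would be the NTFS equivalent, if installed).
        # swapoff "$SWAPDEV"
        # mkfs.vfat "$SWAPDEV"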

    Read the article

  • Is ActionScript 3 used by Serious Indie Developers?

    - by Puedes
    This question is for dedicated independent game developers: My dream is to be a game developer. I am a senior in high school who has taken Computer Science for all four years. I have used Java the whole time, but last year I started using PHP and ActionScript 3 (with Flixel). I also used Game Maker for a brief period. I apologize for this, I wanted to get that out of the way and clarify the fact that I have experience of some kind with game development. I am stuck at the moment because I don't quite know what language to use to develop games at a professional level. I am seriously interested in becoming a dedicated game developer, but this issue is really bothering me. I would like to know what the best option would be for my case, based on your experiences. Any advice is appreciated. Things to consider: I am only interested in making 2D games (I am not worried about 3D support) It would be ideal to use something that can be ported to multiple platforms (so as not to run into this problem later) I can't seem to figure out what the industry likes to use So far, this is what I have: I can't decide if it would be wise to stick with ActionScript 3, or move to C++ I know Flash would be for browser games, but what if I want to make a downloadable game, like Plants Vs. Zombies or Super Crate Box? Would Flash be a smart choice for standalone games, or did they use something else? Thank you for reading this, as I would like to stop worrying about this and make some games! Also, I hope this wasn't all over the place :) tl;dr Should I move ahead with AS3 or use something else i.e. C++

    Read the article

  • SharePoint 2010 in the cloud a.k.a. SharePoint Online

    - by Sahil Malik
    There are 3 ways to run SharePoint: On premises, where you buy the servers and you run the servers. Hosted servers, where you don’t run the servers, but you let a hosting company run dedicated servers. Multi-tenant, like SharePoint Online – this is what I am talking about in this blog post. Also known as SaaS (Software as a Service). The advantages of a cloud solution are undeniable: Availability (SharePoint Online offers a 99.9% uptime SLA). Reliability. Cost, due to economies of scale and no need to hire specialized dedicated staff. Scalability. Security. Flexibility – grow or shrink as you need to. If you are seriously considering SharePoint 2010 in the cloud, there are some things you need to know about SharePoint Online. What will work – OOTB customization, collaboration features etc. will work. SharePoint Designer 2010 is supported, so no-code workflows will work. Visual Studio sandbox solutions and the client object model will work. What won’t work – the SharePoint 2010 Online cloud environment supports only sandbox solutions. BCS (Business Connectivity Services) is not supported in SharePoint Online; what you can do, however, is host your services in Azure and call them using Silverlight. Custom timer jobs will not work. Long story short, get used to sandbox solutions – and the new way of programming. Sandbox solutions are pretty damn good. Most of the complaints I have heard around sandbox solutions being too restrictive are uninformed mechanisms of doing things mired in the ways of 2002. .. or you could just live in 2002 too.

    Read the article

  • How can I get the best performance for playing games?

    - by Oli
    I've been playing a couple of Wine games today and decided to switch to metacity to see what the performance difference was like. If you've never done it before, you just run metacity --replace but don't do that if you use Unity! Anyway, surprise, surprise, it was like playing on a dedicated Windows gaming machine. Playing under metacity today was bliss. Much higher framerates and just a fluidity that you'd expect from a native game. I'm not sure I can go back. Switching to metacity is no hardship but I wonder if there's anything else in the WM landscape that I should try out. I'm essentially looking for suggestions for the best way to play games. Mix up WMs, dedicated X sessions, whatever... As long as it makes Wine games run faster. Small print: One process per answer (e.g. new X session + OpenBox). We should probably land on a benchmark so we can show percentage improvement over a stock Compiz desktop. I'm open to suggestions in the comments. If people could test it and submit how much it improves things for them in the comments, that would give others a good idea of whether it's worth the pain.
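
    One concrete form of the "dedicated X session" suggestion, sketched under assumptions (the script path, display number and game path are placeholders, not anything from the question):

        #!/bin/bash
        # game-session.sh - everything that should run inside the bare X session.
        openbox &                        # minimal window manager, no compositing
        exec wine "C:\\Games\\game.exe"  # when the game exits, the session ends

        # Launch it on a second X server (display :1 on VT8) from the normal desktop:
        #   xinit /path/to/game-session.sh -- :1 vt8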

    Read the article

  • Essbase Analytics Link (EAL) - Performance of some operation of EAL could be improved by tuning of EAL Data Synchronization Server (DSS) parameters

    - by Ahmed Awan
    Generally, the performance of some EAL (Essbase Analytics Link) operations can be improved by tuning the EAL Data Synchronization Server (DSS) parameters. a. It is expected that the DSS machine is a 64-bit machine with 4-8 cores and 5-8 GB of RAM dedicated to DSS. b. To change the DSS configuration, open the EAL Configuration Tool on the DSS machine, click Next, and define: "Job Units" as <Number of cores dedicated to DSS> * 1.5. "Max Memory Size" (if this is a 64-bit machine) as ~1 GB for each Job Unit; if the DSS machine is 32-bit, the max memory size is 2600 MB. "Data Store Size" depends on the number of bridges and the volume of the HFM applications, but in most cases 50000 MB is enough; this volume should be available on the drive defined as "Data Store Dir". Continue with the configuration and finish it. After that, DSS should be restarted to pick up the new definitions.
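
    A back-of-the-envelope sketch of that sizing rule for a hypothetical 8-core, 64-bit DSS machine (the core count is an assumption, not a figure from the article):

        #!/bin/bash
        # Rule of thumb above: Job Units = cores * 1.5, and on a 64-bit machine
        # Max Memory Size is roughly 1 GB per Job Unit.
        CORES=8                              # placeholder - cores dedicated to DSS
        JOB_UNITS=$(( CORES * 3 / 2 ))       # integer form of cores * 1.5 -> 12
        MAX_MEMORY_MB=$(( JOB_UNITS * 1024 ))
        echo "Job Units:       $JOB_UNITS"
        echo "Max Memory Size: $MAX_MEMORY_MB MB"   # -> 12288 MB, i.e. about 12 GB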

    Read the article

  • Is it worth moving from Microsoft tech to Linux, NodeJS & other open source frameworks to save money for a start-up?

    - by dormisher
    I am currently getting involved in a startup, I am the only developer involved at the moment, and the other guys are leaving all the tech decisions up to me at the moment. For my day job I work at a software house that uses Microsoft tech on a day to day basis, we utilise .NET, SqlServer, Windows Server etc. However, I realise that as a startup we need to keep costs down, and after having a brief look at the cost of hosting for Windows I was shocked to see some of the prices for a dedicated server. The cheapest I found was £100 a month. Also if the business needs to scale in the future and we end up needing multiple servers, we could end up shelling out £10's of £000's a year in SQL Server / Windows Server licenses etc. I then had a quick look at the price of Linux hosting for a dedicated server and saw the price was waaaaaay lower than windows hosting. One place was offering a machine with 2 cores for less than £20 a month. This got me thinking maybe the way to go is open source on Linux. As I write a lot of Javascript at work (I'm working on a single page backbone app at the moment), I thought maybe NodeJS and a web framework like Express would be cool to use. I then thought that instead of using SQL why not use an open source NoSQL database like MongoDB, which has great support on NodeJS? My only concern is that some of the work the application is going to do is going to be dynamically building images and various other image related stuff, i.e. stuff that is quite CPU heavy - so I'm thinking of maybe writing anything CPU heavy in C++ and consuming it as a module in Node. That's the background - but basically is Linux a good match for: Hosting a NodeJS/Express site? Compiling C++ node modules? Using a NoSQL DB like MongoDB? And is it a good idea to move to these unfamiliar technologies to save money?
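
    On the "write the CPU-heavy parts in C++ and consume them from Node" point, the Linux build side is routine; a hedged sketch of the usual node-gyp flow, assuming a compiler toolchain is already installed and with the addon directory and names as placeholders:

        #!/bin/bash
        # Build a native C++ addon for Node.js with node-gyp.
        npm install -g node-gyp
        cd my-addon                 # placeholder: contains binding.gyp and the C++ sources
        node-gyp configure          # generate the build files
        node-gyp build              # compile to build/Release/my_addon.node
        node -e 'console.log(require("./build/Release/my_addon.node"))'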

    Read the article

  • For a large website developed in PHP, is it necessary to have a framework?

    - by Martin
    I am wondering if it is necessary to have a framework, or if it is a must-have, if I plan to make a large website. Large website could mean a lot of things: in other words, multiple dynamic web pages (40-50 dynamic pages, MySQL content) and a lot of visitors (+/- a million hits per month). The site will be hosted in a dedicated server environment. I know that a framework could simplify coding for a developer team and that it includes libraries and a lot of other advantages. But I just feel that I don't need that. I think that learning how it works, managing it and installing it would take more time, and I could use that time to code. I write PHP the simplest way I can (with performance in mind), I try to reuse my code/functions/classes most of the time, and I make sure that if another developer joins the team, he won't be lost in the code. I am also planning to use Memcached or another cache for PHP. As I said, the site will be hosted in a dedicated server environment but will be entirely managed by the hosting company. I am pretty sure the control panel for me to control the basic stuff will be cPanel. For a developer like me that only knows PHP, JavaScript, HTML, CSS, MySQL and really basic server management, I feel that it seems too complicated to have a framework. Am I wrong? Is it worth the time to learn all about it? Thank you for your opinions and suggestions.

    Read the article

  • Book Review: Oracle ADF Real World Developer’s Guide

    - by Frank Nimphius
    Recently PACKT Publishing published "Oracle ADF Real World Developer’s Guide" by Jobinesh Purushothaman, a product manager in our team. Though it is already the sixth book dedicated to Oracle ADF, it has a lot of great information in it that none of the previous books covered, making it a safe buy even for those who own the other books published by Oracle Press (McGraw-Hill) and PACKT Publishing. More than half of the "Oracle ADF Real World Developer’s Guide" book is dedicated to Oracle ADF Business Components, with a depth and clarity that lets you feel the expertise Jobinesh has gained in this area. If you enjoy Jobinesh's blog (http://jobinesh.blogspot.co.uk/) about Oracle ADF then, no matter how much of an Oracle ADF expert you are, this book will make you happy, as it provides the detailed information you always wished you had. If you are new to Oracle ADF, then this book alone doesn't get you flying but, if you have some Java background, it accelerates your learning big, big, big time. Chapter 1 is an introduction to Oracle ADF and not only explains the layers but also how it compares to plain Java EE solutions (page 13). If you are new to Oracle JDeveloper and ADF, then at the end of this chapter you know how to start JDeveloper and begin your ADF development. Chapter 2 starts with what Jobinesh is really good at: ADF Business Components. In this chapter you learn about the architecture ingredients of ADF Business Components: View Objects, View Links, Associations, Entities, Row Sets, Query Collections and Application Modules. This chapter also provides an introduction to ADFBC SDO services, as well as sequence diagrams for what happens when you execute queries or commit updates. Chapter 3 is dedicated to entity objects and is one of many chapters in this book you will enjoy and never want to miss. Jobinesh explains the artifacts that make up an entity object, how to work with entities and resource bundles, and many advanced topics, including inheritance, change history tracking, custom properties, validation and cursor handling. Chapter 4 - you guessed it - is all about View objects. Comparable to entities, you learn about the XML files and classes that make up a view object, as well as how to define and work with queries. List-of-values, inheritance, polymorphism, bind variables and data filtering are interesting - and important - topics that follow. Again the chapter provides helpful sequence diagrams for you to understand what happens internally within a view object. Chapter 5 focuses on advanced view object and entity object topics, like lifecycle callback methods and when you want to override them. This chapter is a good digest of Jobinesh's blog entries (which most ADF developers have in their bookmark list). Really worth reading! Chapter 6 then is about Application Modules. Besides what application modules are, this chapter covers important topics like properties, passivation, activation, application module pooling, and how and where to write custom logic. In addition you learn about the AM lifecycle and request sequence. Chapter 7 is about the ADF binding layer. If you are new to Oracle ADF and got lost in the more advanced ADF Business Components chapters, then this chapter is where you get back into the game. In very easy terms, Jobinesh explains what the ADF binding is, how it fits into the JSF request lifecycle and what metadata files are involved. Chapter 8 then goes into building data-bound web user interfaces. In this chapter you get the basics of JavaServer Faces (e.g. managed beans) and learn about the interaction between the JSF UI and the ADF binding layer. Later this chapter provides advanced solutions for working with tree components and lists of values. Chapter 9 introduces bounded task flows and the ADF controller. This is a chapter you want to read if you are new to ADF or have just started. Experts won't find anything new here, which doesn't mean that it is not worth reading (I, for example, enjoyed the controller talk very much). Chapter 10 is an advanced coverage of bounded task flows and talks about contextual events. Chapter 11 is another highlight and explains error handling, trains, transactions and more. I can only recommend you read this chapter. I am aware of many documents that cover exception handling in Oracle ADF (and my Oracle Magazine article for January/February 2013 does the same), but none that covers it in such great depth. Chapter 12 covers ADF best practices, which is a great round-up of all the tips provided in this book (without Jobinesh repeating himself). It's all cool stuff that helps you with your ADF projects. In summary, "Oracle ADF Real World Developer’s Guide" by Jobinesh Purushothaman is a great book and a great addition for all Oracle ADF developers and those who want to become one. Frank

    Read the article

  • Identity Management Monday at Oracle OpenWorld

    - by Tanu Sood
    What a great start to Oracle OpenWorld! Did you catch Larry Ellison’s keynote last evening? As expected, it was a packed house and the keynote received a tremendous response both from the live audience as well as the online community, as evidenced by the frequent spontaneous applause in the house and the Twitter buzz. Here’s but a sampling of some of the tweets that flowed in: @paulvallee: I freaking love that #oracle has been born again in it's interest in core tech #oow (so good for #pythian) @rwang0: MyPOV: #oracle just leapfrogged the competition on the tech front across the board. All they need is the content delivery network #oow12 @roh1: LJE more astute & engaging this year. Nice announcements this year with 12c the MTDB sounding real good. #oow12 @brooke: Cool to see @larryellison interrupted multiple times by applause from the audience. Great speaker. #OOW And there’s lots more to come this week. Identity Management sessions kick off today. Here’s a quick preview of what’s in store for you today for Identity Management: CON9405: Trends in Identity Management 10:45 a.m. – 11:45 a.m., Moscone West 3003 Hear directly from subject matter experts from Kaiser Permanente and SuperValu, who will share the stage with Amit Jasuja, Senior Vice President, Oracle Identity Management and Security, to discuss how the latest advances in Identity Management that made it into Oracle Identity Management 11g Release 2 are helping customers address emerging requirements for securely enabling cloud, social and mobile environments. CON9492: Simplifying your Identity Management Implementation 3:15 p.m. – 4:15 p.m., Moscone West 3008 Implementation experts from British Telecom, Kaiser Permanente and UPMC participate in a panel to discuss best practices, key strategies and lessons learned based on their own experiences. Attendees will hear first-hand what they can do to streamline and simplify their identity management implementation framework for a quick return-on-investment and maximum efficiency. This session will also explore the architectural simplifications of Oracle Identity Governance 11gR2, focusing on how these enhancements simplify deployments. CON9444: Modernized and Complete Access Management 4:45 p.m. – 5:45 p.m., Moscone West 3008 We have come a long way from the days of web single sign-on addressing the core business requirements. Today, as technology and business evolve, organizations are seeking new capabilities like federation, token services, fine grained authorizations, web fraud prevention and strong authentication. This session will explore the emerging requirements for access management and what a complete solution is like, complemented with real-world customer case studies from ETS, Kaiser Permanente and TURKCELL and product demonstrations. HOL10478: Complete Access Management Monday, October 1, 1:45 p.m. – 2:45 p.m., Marriott Marquis - Salon 1/2 And get your hands on the technology today. Register and attend the Hands-On-Lab session that demonstrates Oracle’s complete and scalable access management solution, which includes single sign-on, authorization, federation, and integration with social identity providers. Further, the session shows how to securely extend identity services to mobile applications and devices—all while leveraging a common set of policies and a single instance. Product Demonstrations: The latest technology in Identity Management is also being showcased in the Exhibition Hall, so do find some time to visit our product demonstrations there.
    Experts will be at hand to answer any questions. Exhibition hall hours: Monday, October 1, 9:30 a.m.–6:00 p.m. (dedicated hours 9:30 a.m.–10:45 a.m.); Tuesday, October 2, 9:45 a.m.–6:00 p.m. (dedicated hours 2:15 p.m.–2:45 p.m.); Wednesday, October 3, 9:45 a.m.–4:00 p.m. (dedicated hours 2:15 p.m.–3:30 p.m.). Demos and locations:
    Access Management: Complete and Scalable Access Management – Moscone South, Right - S-218
    Access Management: Federating and Leveraging Social Identities – Moscone South, Right - S-220
    Access Management: Mobile Access Management – Moscone South, Right - S-219
    Access Management: Real-Time Authorizations – Moscone South, Right - S-217
    Access Management: Secure SOA and Web Services Security – Moscone South, Right - S-223
    Identity Governance: Modern Administration and Tooling – Moscone South, Right - S-210
    Identity Management Monitoring with Oracle Enterprise Manager – Moscone South, Right - S-212
    Oracle Directory Services Plus: Performant, Cloud-Ready – Moscone South, Right - S-222
    Oracle Identity Management: Closed-Loop Access Certification – Moscone South, Right - S-221
    We recommend you keep the Focus on Identity Management document handy. And don’t forget, if you are not on site, you can catch all the keynotes LIVE from the comfort of your desk on YouTube.com/Oracle. Keep the conversation going on @oracleidm. Use #OOW and #IDM and get engaged today. Photo Courtesy: @OracleOpenWorld

    Read the article

  • Migrating Virtual Iron guest to Oracle VM 3.x

    - by scoter
    As stated on the official site, in 2009 Oracle acquired a provider of server virtualization management software named Virtual Iron; you can find all the acquisition details at this link. In the FAQ on the official site you can also see that, for the future, Oracle plans to fully integrate Virtual Iron technology into Oracle VM products, and any enhancements will be delivered as part of the combined solution; this is what is going on with Oracle VM 3.x. So, customers started asking us to migrate Virtual Iron guests to Oracle VM. IMPORTANT: This procedure needs a dedicated OVM-Server with no guests running on top; be careful while executing this procedure on production environments. In these little steps you will find how to migrate, as fast as possible, your guests between VI (Virtual Iron) and Oracle VM; keep in mind that Oracle VM has a built-in P2V utility (Official Documentation) that you can use to migrate guests between VI and Oracle VM. Concepts: VI repositories. On VI we have the same "repository" concept as in Oracle VM; the difference between these two products is that VI uses a raw LUN as its repository (instead of using ocfs2 and its capabilities, like ref-links). The VI "raw-lun" repository, from a pure operating-system perspective, may be presented as in this picture: in fact, on this "raw-lun" VI creates an LVM2 volume group. The VI "raw-lun" repository, from a hypervisor perspective, may be presented as in this picture. So, the relationships are: LVM2-Volume-Group <-> VI Repository, LVM2-Logical-Volume <-> VI guest virtual-disk. The first step is to present the VI repository (raw LUN) to your dedicated OVM-Server.
    Prepare the dedicated OVM-Server. On the OVM-Server (OVS) you need to discover the new LUN and, after that, discover the volume group and logical volumes contained in the VI repository; due to the default OVS configuration you need to edit the LVM2 configuration file /etc/lvm/lvm.conf and comment out the line starting with "filter", as shown:
        # By default for OVS we restrict every block device:
        # filter = [ "r/.*/" ]
    Now you have to discover the presented raw LUN and, next, activate the volume group and logical volumes:
        #!/bin/bash
        # Rescan SCSI hosts and existing disks so the newly presented LUN appears,
        # re-read partition tables, then activate all LVM2 volume groups.
        for HOST in `ls /sys/class/scsi_host`; do echo '- - -' > /sys/class/scsi_host/$HOST/scan; done
        CPATH=`pwd`
        cd /dev
        for DEVICE in `ls sd[a-z] sd?[a-z]`; do echo '1' > /sys/block/$DEVICE/device/rescan; done
        cd $CPATH
        cd /dev/mapper
        for PARTITION in `ls *[a-z] *?[a-z]`; do partprobe /dev/mapper/$PARTITION; done
        cd $CPATH
        vgchange -a y
    After that you will see a new device:
        [root@ovs01 ~]# cd /dev/6000F4B00000000000210135bef64994
        [root@ovs01 6000F4B00000000000210135bef64994]# ls -l 6000F4B0000000000061013*
        lrwxrwxrwx 1 root root 77 Oct 29 10:50 6000F4B00000000000610135c3a0b8cb -> /dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb
    By your OVM-Manager create a guest server with the same definition as on VI: the same core number as the VI source guest, the same memory as the VI source guest, and the same number of disks as the VI source guest (you can create the OVS virtual disks with a small size of 1 GB because the "clone" will, eventually, extend the size of your new virtual disks). Summarizing:
    source-virtual-disk path (VI): /dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb
    dest-virtual-disk path (OVS): /OVS/Repositories/0004fb00000300006cfeb81c12f12f00/VirtualDisks/0004fb000012000055e0fc4c5c8a35ee.img **
    ** = to identify your virtual disk, verify its name in the "vm.cfg" file of your new guest.
    Clone the VI virtual disk to the OVS virtual disk:
        dd if=/dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb of=/OVS/Repositories/0004fb00000300006cfeb81c12f12f00/VirtualDisks/0004fb000012000055e0fc4c5c8a35ee.img
    Clean unsupported parameters and changes on OVS:
    1. Restore the original /etc/lvm/lvm.conf and uncomment the line starting with "filter", as shown:
        # By default for OVS we restrict every block device:
        filter = [ "r/.*/" ]
    2. Force-stop the lvm2-monitor service:
        # service lvm2-monitor force-stop
    3. Restore the original /etc/lvm directories (archive, backup and cache):
        # cd /etc/lvm
        # rm -fr archive backup cache; mkdir archive backup cache
    4. Reboot OVS.
    Refresh the OVS repository and start your guest: by OracleVM Manager refresh your repository, then by OracleVM Manager start your "migrated" guest. Comments and corrections are welcome. Simon COTER

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved. Background Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options: shares for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles), dedicated CPU allocation in which a number of CPUs are exclusively dedicated to an application's use. You can say that a zone or project "owns" 8 CPUs on a 32-CPU machine, for example. And, capped CPU in which you specify the upper bound, or cap, of how much CPU an application gets. For example, you can throttle an application to 0.125 of a CPU. (This isn't meant to be an exhaustive list of Solaris RM controls.) Workload management Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources, and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "it won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.) Prior art There are some operating environments that take a stab at workload management (rather than resource management) but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck? Actual workload management That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly.
If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or amount of other resources like RAM or I/O bandwidth) in order to meet the objective. Instrumenting non-instrumented applications There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like number of CPU seconds it received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit" so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust. Summary After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resource constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
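
    Not the patented workload manager itself, just a sketch of the kind of DTrace measurement the post refers to, assuming a Solaris box with DTrace available and a placeholder PID; the histogram shows how much latency read() calls are adding to that process, i.e. whether the delay factor is I/O rather than CPU:

        #!/bin/bash
        # Quantize read(2) latency for one process (PID 12345 is a placeholder).
        dtrace -p 12345 -n '
          syscall::read:entry  /pid == $target/ { self->ts = timestamp; }
          syscall::read:return /self->ts/ {
            @["read latency (ns)"] = quantize(timestamp - self->ts);
            self->ts = 0;
          }'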

    Read the article

  • In a multithreaded app, would a multi-core or multiprocessor arrangement be better?

    - by Michael
    I've read a lot on this topic already both here (e.g., stackoverflow.com/questions/1713554/threads-processes-vs-multithreading-multi-core-multiprocessor-how-they-are or http://stackoverflow.com/questions/680684/multi-cpu-multi-core-and-hyper-thread) and elsewhere (e.g., ixbtlabs.com/articles2/cpu/rmmt-l2-cache.html or software.intel.com/en-us/articles/multi-core-introduction/), but I still am not sure about a couple things that seem very straightforward. So I thought I'd just ask. (1) Is a multi-core processor in which each core has dedicated cache effectively the same as a multiprocessor system (balanced of course for processor speed, cache size, and so on)? (2) Let's say I have some images to analyze (i.e., computer vision), and I have these images loaded into RAM. My app spawns a thread for each image that needs to be analyzed. Will this app on a shared cache multi-core processor run slower than on a dedicated cache multi-core processor, and would the latter run at the same speed as on an equivalent single-core multiprocessor machine? Thank you for the help!
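
    Not an answer to (1) or (2), but a quick way to see how the cores on a given Linux box actually share their caches before benchmarking (lscpu ships with util-linux; lstopo comes from the hwloc package, if installed):

        #!/bin/bash
        # Sockets, cores and L1/L2/L3 cache sizes.
        lscpu
        # Which logical CPUs share each of CPU0's caches.
        grep . /sys/devices/system/cpu/cpu0/cache/index*/shared_cpu_list
        # Full topology diagram, if hwloc is installed:
        # lstopo --no-io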

    Read the article

  • fluent nhibernate m-to-m with column

    - by csetzkorn
    Hi, I am used to hbm files and started using Fluent NHibernate recently. Creating an m-to-m relationship between two entities A and B is quite simple. In A I create: public virtual IList<B> Bs { get; set; } and then I use: mapping.HasManyToMany(x => x.Bs); That’s it, and I can do: A a = new A(); a.Bs.Add(b); My problem is that I would like to have an additional column in my dedicated m-to-m database table which holds the two foreign keys. What is the simplest way to achieve this in FNH? Would I have to create a dedicated entity for the m-to-m relationship or is there a simpler solution? Thanks. Best wishes, Christian

    Read the article

  • How do I sync a list on a site from a list on a subsite in SharePoint?

    - by mandroid
    Our group has a main site and a subsite dedicated to our projects. Each project has a project lead and a completion % associated with it. We have a list on the project site that goes to each project's dedicated subsite within the project site. I want to duplicate the list from the project site on the main site, so that any changes or additions made to the project site list appear in the main site list as well, including the project lead and completion %. How do I accomplish this? Here is a simple example of the hierarchy:
    Main site - Project list derived from the Project list below
        Project Site - Project list
            Project 1 Site
            Project 2 Site
            Project 3 Site

    Read the article

  • What is the point in using a "real" database modeling tool?

    - by cdeszaq
    We currently have a 10 year old nasty, spaghetti-code-style SQL Server database that we are soon looking to pretty much re-write from scratch as part of a re-write to a large web application. (The existing application will serve as the functional requirements for the next incarnation of the app). Some have suggested we use Visio to do all the diagramming and to generate the DDL, but others have suggested we use a dedicated database design tool, rather than a diagramming tool that is able to export DDL. Is there any benefit to using "real" DB design tools, such as ModelRight, over general tools like Visio? If so, what are those specific benefits? Edit: In a nutshell, what can real/dedicated tools do that something like Visio can't, and how much do these capabilities matter (from a best-practices standpoint, for example)

    Read the article

  • Keyboard focus being stolen by Flash

    - by NewUser
    Performing a search, I noticed several questions dedicated to how to steal/trap the keyboard focus of the visitor. Considering this site is dedicated to programming, that's not surprising. I was wondering if anyone can advise me on how to prevent this type of behavior. Losing keyboard focus to Flash basically removes my browser's functionality until I use the mouse to click elsewhere (I use Mozilla Firefox). Anyone know of some kind of plugin or Greasemonkey script that will prevent my keyboard focus from being stolen? Normal browser "shortcuts" are rendered useless by having to use the mouse to return keyboard focus to the browser. Edit: Reply to the post below - I do have Flashblock / NoScript and some other things. My issue is Flash that I want to see/interact with stealing my focus. Basically I'm looking for something I can toggle that will prevent Flash from getting keyboard focus, or a way to force my Firefox keyboard commands to the browser.

    Read the article

  • Virtual Machine Performance - More RAM or More Processor?

    - by webworm
    When looking to improve Virtual Machine performance what would be better ... Increasing the available RAM or increasing the processor power? Here is my choice ... Core 2 Duo @ 2.4 GHz with 8 GB RAM and integrated graphics (Mac Book Pro 13") Core i7 @ 2.6 GHz with 4 GB RAM and 512 MB dedicated graphics (Mac Book Pro 15") I plan to run Windows x64 in the VM with SQL Server 2008, Visual Studio 2010, and SharePoint 2010. I am planning to run VMWare Fusion v3. I also didn't know if a dedicated graphics card makes a difference when using a Virtual Machine. Thank you.

    Read the article

  • How to Route URL from one domain to another..

    - by Magic
    Hello, I am a C# ASP.NET developer. I am trying to route a URL from one domain to another using a GoDaddy IIS virtual dedicated server or dedicated server. For example, I have a website application called A_Application on my server. An example URL: www.myserver.com/A_Application/product/bear/?productid=1 or, using a pretty URL, www.myserver.com/A_Application/product/bear/1 I would like to set this up so that my client can point to A_Application using his/her domain. My client's example URL will be: www.hisserver.com/product/bear/?productid=1 or, using a pretty URL, www.hisserver.com/product/bear/1 Thanks!

    Read the article

  • azure performance

    - by Dave K
    I've moved my app from a dedicated server to Azure (and SQL Azure), and have noticed substantial performance degradation. Obviously not having the database and web server on the same piece of hardware is much of it, but I'm curious what other people have found in migrating to Azure, and if there is anything any of you would suggest I do to improve it. Right now I'm considering moving back to my dedicated server... So in summary, are there any rules of thumb for this, existing research (wasn't able to find much) or other pieces of advice on improving the performance of the app? Has anyone else found the same to be true, and improved their site's performance in some way? It's built in C# on ASP.NET MVC 2. Thanks!

    Read the article

  • Keep local MS SQL 2008 DB table and remote SQL Azure DB table in sync

    - by Boomerangertanger
    Hi there, I have a dedicated server which hosts a Windows Service which does a lot of very heavy load stuff and populates a number of SQL Server database tables. However, of all the database tables it populates and works with, I want only one to be synchronised with a remote SQL Azure DB table. This is because this table holds what I called Resolved data, which is the end result of the Windows Service's work. I would like to keep a SQL Azure database table in sync with this database table. As far as I understand, my options are: Move everything onto Azure (but that involves a massive development overhead and risk) Have another Windows Service on the dedicated server which essentially looks at changed records since the last update and then manually update the SQL Azure table

    Read the article

  • .NET connecting to oracle problems with the connectionstring

    - by Oxymoron
    At the moment I'm trying to make a connection to a local server. Connecting via, say, TOAD works fine. When I try to connect using .NET I get ora-12154. Which puzzles me, since I'm using the connectionstring from my TNSNAMES.ora file: XE = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = myPC)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = XE) ) ) As follows: private string connectionString = "Data Source=(DESCRIPTION =" +" (ADDRESS_LIST=(ADDRESS = (PROTOCOL = TCP)(HOST = myPC)(PORT = 1521)))" +" (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = XE));" +"User Id=sys;Password=zsxyzabc;"; Any ideas?
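
    A quick way to check, outside of .NET, that the alias and the literal descriptor are accepted at all, assuming the Oracle client tools are installed on the same machine (the user/password below are placeholders, not the credentials from the question):

        #!/bin/bash
        # Does the alias resolve through sqlnet.ora/tnsnames.ora?
        tnsping XE
        # Bypass tnsnames entirely and connect with the literal descriptor:
        sqlplus "user/password@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=myPC)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=XE)))"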

    Read the article

  • Difference between Cloud and Virtualization

    - by Akash Kava
    Ops: This does not belong on ServerFault because it focuses on programming architecture. I have the following questions regarding the differences between cloud and virtualization. How is cloud different than virtualization? I tried to find out the pricing of Rackspace, Amazon and all similar cloud providers, and I found that our current 6 dedicated servers came out cheaper than their pricing. So how can one claim cloud is cheaper? Is it cheaper only in comparison to normal hosting? We reorganized our infrastructure in a virtual environment to reduce our configuration overhead at time of failure; we did not have to rewrite any piece of code that was already written for the earlier setup. So moving to virtualization does not require any reprogramming. But cloud is absolutely different and it will require entire reprogramming, right? Is it really worth it to recode when our current IT costs are 3-4 times lower than cloud hosting, including RAID backups and all sorts of clustering for high availability? A new programming architecture means new overheads of training staff, new methods of testing and new deployment schemes; does it justify the "on demand resource usage" promise of cloud? Our current development architecture is simple server-side ASP.NET web services with no local context and, on the client side, Flex/Silverlight, which offers a pretty good REST architecture and is highly scalable. How does cloud differ from a REST model of deployment? On storage, SQL Server or MySQL offers pretty good replication and high availability, so what is the advantage in cloud? Data guarantee: one of our vendors hosting some other customer's app on cloud (one of the most used) lost the entire hard disk (the virtual one) and an entire module in the first 6 months. A second provider said it's your duty to take backups; fine, I agree, but no provider gives an SLA for data guarantee, they give 99% uptime. However, in most business apps uptime is less important than data integrity. In our 10 years of dedicated hosting experience we had only one hard disk crash. This makes me a little skeptical about going for cloud and losing control over data. And I feel it's just a big marketing buzz to sell virtualization in a different form. Size of data: currently all providers charge very heavily for large data; if you are hosting only below 100GB, cloud can be a good alternative, but I think virtual servers and dedicated servers above 100GB to a few TBs are still cheaper. Why would one want to pay so much for cloud when there is no data guarantee and it doesn't say anything about redundancy? (I wish SO had something for spell check for Internet Explorer, sorry for wrong spellings in my post)

    Read the article

  • headers already sent displays after moving files from one server to another

    - by PERR0_HUNTER
    Hey there! I have a project running on DreamHost hosting and it's working fine, but since DH has been getting really slow I'm moving the project to my new dedicated server. The thing is that after I move all of my files over to the new dedicated server (Ubuntu 8.4) I see warnings all over the place telling me that the headers have already been sent. The first thing I tried was moving the files via FTP: download to my machine, upload to server - didn't work. Second try was to tar.gz the folder on the first server and untar it on the new one; didn't work either. I tried changing the encoding to ANSI and they start working; however, most of my files contain accents, so ANSI is not an option - I need UTF8 for all my files. Any ideas on how to fix this? I'm sure it must be some sort of config.
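
    For what it's worth, a frequent cause of "headers already sent" warnings that show up only after files have been re-saved or re-encoded is a UTF-8 byte-order mark at the start of PHP files (it counts as output before any header() call). A minimal sketch for finding and stripping BOMs, assuming GNU grep/sed and /var/www as a placeholder path - take a backup first:

        #!/bin/bash
        # List PHP files containing the UTF-8 BOM bytes (EF BB BF) and strip
        # the mark from the first line of each file, in place.
        grep -rl $'\xEF\xBB\xBF' --include='*.php' /var/www | while read -r f; do
            echo "BOM found: $f"
            sed -i '1s/^\xEF\xBB\xBF//' "$f"
        done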

    Read the article

  • Make backup of large site with 100,000+ files/images

    - by niggles
    I tried backing up our site today using the Unix 'cp' command and ended up getting our office blocked out by PLESK - it added my IP to /etc/hosts.deny as it thought I was flooding the server. After tech support fixed the issue, they suggested I go folder by folder to back it up, but there are about 10,000 folders on the site totaling 1/2 a terabyte, each with multiple sub-folders, so this isn't viable. Basically I want to be able to mirror the domain on another domain we've got set up on the same dedicated server so I can test with live images (the bulk of our content). Any suggestions? e.g. adding some rules to open_basedir and getting PHP to recursively copy the folders to the other domain (remember it's on the same dedicated box, so it just needs to traverse the directory, not FTP things).
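
    A throttled rsync is one way to mirror the tree on the same box without looking like a flood to PLESK; the paths and the bandwidth figure below are assumptions, not values from the question:

        #!/bin/bash
        # Mirror one docroot into another, gently. --bwlimit is in KB/s;
        # ionice/nice keep the copy out of the I/O and CPU fast lanes.
        SRC=/var/www/vhosts/domain1.com/httpdocs/     # placeholder source
        DST=/var/www/vhosts/domain2.com/httpdocs/     # placeholder destination
        ionice -c3 nice -n19 rsync -a --bwlimit=5000 "$SRC" "$DST"
        # Re-running the same command resumes/updates; rsync copies only changes.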

    Read the article

  • What would cause Memcached to Hang for 2+ seconds?

    - by Brad Dwyer
    I'm going nuts trying to scale memcached. From their site: Memcached operations are almost all O(1). Connecting to it and issuing a get or stat command should never lag. If connecting lags, you may be hitting the max connections limit. See ServerMaint for details on stats to monitor. If issuing commands lags, you can have a number of tuning problems. Most common are hardware problems, not enough RAM (swapping), network problems (bandwidth, dropped packets, half-duplex connections). On rare occasion OS bugs or memcached bugs can contribute. Well.. it is most certainly not performing like an O(1) operation for me. Under low to normal load on our site memcached response times for get and set ops are about 0.001 seconds. Not bad. But if we triple the load we get outliers that take 100x (or in rare cases 1000x!) that long. I even had one instance where it took 2.2442 seconds for memcached to store a value. Obviously this is killing our site. Here's the output of Memcached-getStats during one of the slow periods:
    [pid] => 18079
    [uptime] => 8903
    [threads] => 4
    [time] => 1332795759
    [pointer_size] => 32
    [rusage_user_seconds] => 26
    [rusage_user_microseconds] => 503872
    [rusage_system_seconds] => 125
    [rusage_system_microseconds] => 477008
    [curr_items] => 42099
    [total_items] => 422500
    [limit_maxbytes] => 943718400
    [curr_connections] => 84
    [total_connections] => 4946
    [connection_structures] => 178
    [bytes] => 7259957
    [cmd_get] => 1679091
    [cmd_set] => 351809
    [get_hits] => 1662048
    [get_misses] => 17043
    [evictions] => 0
    [bytes_read] => 109388476
    [bytes_written] => 3187646458
    [version] => 1.4.13
    So things that I have ruled out so far are: Hitting the max connections limit (curr_connections of 84 is well below the default of max of 1024) Swapping - the machine has 900M out of 1024M of memory dedicated to memcached on a dedicated machine. It only appears to be using about 7MB of data as per the bytes stat. How would I diagnose the other hardware problems? prstat doesn't really show a whole lot going on in terms of CPU or memory usage. Not sure how to figure out the network problems but as this is a dedicated server on the same private network as the web box I don't think it's a connectivity issue (ping is less than a millisecond between the boxes). Is there something else I'm missing here? It's driving me nuts. Edit: Also forgot to mention that I've tried both persistent and non-persistent connections with minimal-to-no impact.
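
    One way to see what the daemon itself reports during a slow spell is to poll its counters, assuming netcat is installed and the default port (the hostname is a placeholder); a sustained rise in listen_disabled_num points at the connection limit, and conn_yields at worker threads being starved of connection events:

        #!/bin/bash
        # Poll memcached's own stats every 2 seconds and keep the interesting counters.
        HOST=memcached.internal        # placeholder hostname
        while true; do
            date +%T
            printf 'stats\nquit\n' | nc "$HOST" 11211 | \
                grep -E 'curr_connections|listen_disabled_num|conn_yields|evictions|get_misses'
            sleep 2
        done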

    Read the article
