Search Results

Search found 5714 results on 229 pages for 'power mosfet'.


  • Free hosting solution for a very low-traffic website [duplicate]

    - by user966939
    This question already has an answer here: How to find web hosting that meets my requirements?

    I run a very low-traffic website (about 40 users, basically all of whom are active on the site daily). I don't see that changing anytime soon, as there is no way to sign up on the site right now. Until now I have just been using a sub-directory on a friend's shared host to host the web site. But in only a few weeks from now, his subscription will end, and he has no plans on renewing it. So of course this means I'll have to move on to something else. But I don't think I'll find someone who'd be willing to share a... shared host with me again. And besides, the software used on that server is ancient (PHP 4.4.9 + MySQL 4.1.22).

    There's one obvious solution that comes to mind, I guess: choose a better host and pay for it myself. The problem here is that I have no real fixed income, as I'm only a student. So even if the pricing is dirt cheap, I just can't be certain I will be able to afford it, every single month, for... at least 2 years maybe?

    So I've looked at free hosting solutions instead. The minimum requirement I had was that it be completely free of ads. But no matter where I look, I always find something in a corner or two ("what can you expect from a free host?" - yeah I know, but I guess it was worth a shot). For example, on Byethost (one of the free hosts I tried), if you trigger a PHP error while error reporting is set to E_ALL, you will spawn some hidden ad... Besides Byethost, I've tried 000Webhost, x10Hosting, 2Freehosting/1Freehosting, Wink.ws, and they are only worse.

    Okay, I'm running low on ideas. But! What if I just hosted the site myself, on my own computer? That could work. I actually do have my computer on practically 24/7. But not really. Sometimes I need to reboot it, and sometimes we even have power outages. And what if the hardware needs an upgrade? It's not such a big deal for me if the site goes down, because I know what's going on; but what about the users? If I do decide to host it myself, is there some way to show users an alternate page instead of them just seeing a generic "server not found" page in the browser when the site is not accessible?

    Or is there something I have been missing out on? Is there a different kind of "web hosting" solution out there that I haven't heard of?

    Here is what I'm really looking for:
    - Free (as in, no costs)
    - NO ads
    - Bandwidth enough for a low-traffic forum with roughly 40 users
    - (Semi-)up-to-date PHP and MySQL (at least not older than a year)
    - No standard (non-extension) PHP functions turned off - such as sleep()
    - The mbstring extension is enabled
    - Disk space: at least 5 MB
    - At least one MySQL database

    Some bonus points would be:
    - Max execution time of PHP scripts can be set
    - Remote access to MySQL database

    What would be the best solution for me? Is there one?

    Read the article

  • Move over DFS and Robocopy, here is SyncToy!

    - by andywe
    Ever since Windows 2000, I have always had the need to replicate data to multiple endpoints with the same content. Until DFS was introduced, the method of thinking was to either manually copy the data location by location, or to batch script it with xcopy and schedule a task. Even though this worked (and still does today), it was cumbersome and intensive on the network, especially when dealing with larger amounts of data.

    Then along came robocopy, an internal tool written by an enterprising programmer at Microsoft. We used it quite a bit, especially when we could not use DFS in the early days. It was received so well, it made it into the public realm. At least now we had the ability to determine what files had changed and only replicate those.

    Well, over time there has been an evolution of this ideal. DFS is obviously the Windows enterprise-class service to do this, along with BranchCache. However, you don't always need or want the power of DFS, especially when it comes to small datacenter installations or remote offices. I have specific data sets that are on closed or restricted networks, that either have a security need for this, or are in remote countries where bandwidth is at a premium. For this, I use the latest evolution for one-off replication, named SyncToy.

    SyncToy, from Microsoft and seemingly released in 2009, wraps a nice GUI around setting up a paired set of folders (remember the mobile Briefcase from Windows 98?) and allows you the choice of synchronization methods: one-way or two-way. Simply create a paired set of folders on the source and destination, choose your options for content, exclude any file types you don't want to replicate, and click run. Scheduling is even easier. Microsoft has included a wrapper for doing just this, so all you enter in your scheduled task is SyncToyCmd.exe, -R as an argument, and the time schedule. No more complicated command lines or scripts.

    I find this especially useful when I use MS backup to back up a system volume, but only want subsets of backup information of a data share, and ONLY when that dataset has changed - not relying on full and incremental backups. An example of this is my application installation master share. I back this up with SyncToy because I do not need multiple backup copies; one copy elsewhere suffices. At home, it is very useful for your pictures, videos, music, etc. - the backup is online and ready to access, not waiting for you to restore a backup file, and there is no need to institute a domain simply to have DFS.

    Do note there is a risk: if you accidentally delete a file and do not catch this before the next sync, then depending on your SyncToy settings, you can indeed lose that file as the destination updates - so due diligence applies. I make it a rule to sync mainly one way: I use my master share for making changes, and allow the schedule to follow suit. Any really important file I lock down as read-only through file permissions so it cannot be deleted unless I intervene.

    Check out the tool and have some fun! http://www.microsoft.com/en-us/download/details.aspx?DisplayLang=en&id=15155

    Read the article

  • The long road to bug-free software

    - by Tony Davis
    The past decade has seen a burgeoning interest in functional programming languages such as Haskell or, in the Microsoft world, F#. Though still on the periphery of mainstream programming, functional programming concepts are gradually seeping into the imperative C# language (for example, Lambda expressions have their root in functional programming).

    One of the more interesting concepts from functional programming languages is the use of formal methods, the lofty ideal behind which is bug-free software. The idea is that we write a specification that describes exactly how our function (say) should behave. We then prove that our function conforms to it, and in doing so have proved beyond any doubt that it is free from bugs.

    All programmers already use one form of specification, specifically their programming language's type system. If a value has a specific type then, in a type-safe language, the compiler guarantees that value cannot be an instance of a different type. Many extensions to existing type systems, such as generics in Java and .NET, extend the range of programs that can be type-checked.

    Unfortunately, type systems can only prevent some bugs. Take the classic problem of retrieving an indexed value from an array: since the type system doesn't specify the length of the array, the compiler has no way of knowing that a request for the "value of index 4" from an array of only two elements is "unsafe". We restore safety via exception handling, but the ideal type system will prevent us from doing anything that is unsafe in the first place, and this is where we start to borrow ideas from a language such as Haskell, with its concept of "dependent types". If the type of an array includes its length, we can ensure that any index accesses into the array are valid. The problem is that we now need to carry around the length of arrays and the values of indices throughout our code so that it can be type-checked.

    In general, writing the specification to prove a positive property, even for a problem very amenable to specification, such as a simple sorting algorithm, turns out to be very hard, and the specification will be different for every program. Extend this to writing a specification for, say, Microsoft Word and we can see that the specification would end up being no simpler, and therefore no less buggy, than the implementation.

    Fortunately, it is easier to write a specification that proves that a program doesn't have certain, specific and undesirable properties, such as infinite loops or accesses to the wrong bit of memory. If we can write the specifications to prove that a program is immune to such problems, we could reuse them in many places. The problem is the lack of specification "provers" that can do this without a lot of manual intervention (i.e. hints from the programmer).

    All this might feel a very long way off, but computing power and our understanding of the theory of "provers" advance quickly, and Microsoft is doing some of it already. Via their Terminator research project they have started to prove that their device drivers will always terminate, and in so doing have suddenly eliminated a vast range of possible bugs. This is a huge step forward from saying, "we've tested it lots and it seems fine".

    What do you think? What might be good targets for specification and verification? SQL could be one: the cost of a bug in SQL Server is quite high given how many important systems rely on it, so there's a good incentive to eliminate bugs, even at high initial cost.

    [Many thanks to Mike Williamson for guidance and useful conversations during the writing of this piece] Cheers, Tony.
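
    To make the idea concrete, here is a minimal Python sketch of an "executable specification" for a sorting function. Checking it against random inputs gives evidence rather than proof - the formal-methods ideal is a proof that holds for all inputs - but it shows the separation between the specification and the implementation it constrains.

        import random
        from collections import Counter

        def sort_spec(inp, out):
            # Specification: the output is ordered, and it is a permutation
            # of the input (nothing added, dropped, or duplicated).
            ordered = all(a <= b for a, b in zip(out, out[1:]))
            permutation = Counter(inp) == Counter(out)
            return ordered and permutation

        def my_sort(xs):
            return sorted(xs)  # the implementation under test

        # Randomized checking: evidence, not proof.
        for _ in range(1000):
            xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
            assert sort_spec(xs, my_sort(xs)), "specification violated for %r" % xs
        print("1000 random cases conform to the specification")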

    Read the article

  • Approach for packing 2D shapes while minimizing total enclosing area

    - by Dennis
    Not sure on my tags for this question, but in short: I need to solve a problem of packing industrial parts into crates while minimizing total containing area. These parts are motors, or pumps, or custom-made components, and they have quite unusual shapes. For some, it may be possible to assume that a part === rectangular cuboid, but some are not so simple, i.e. they assume a shape more like that of a hammer or the letter T. With those (assuming a 2D shape), by alternating the direction of top & bottom, one can pack more objects into the same space than if all tops were in the same direction. (The original question included a crude ASCII sketch of interlocking "T"-shaped parts, which did not survive formatting.)

    Right now we are solving the problem by something like this:
    1. Using CAD software, make actual models of how things fit in crate boxes.
    2. Make estimates of actual crate dimensions & write them into an Excel file.

    (1) is a crazy amount of work, and as a result we have just a limited number of possible entries in (2), the Excel file. The good thing is that programming this is relatively easy. Given a combination of products to go into crates, we do a lookup, and if an entry exists in the Excel file (or database), we bring it out. If it doesn't, we say "sorry, no data!". I don't necessarily want to go full force on making up some crazy algorithm that, given a geometrical part description, can align, rotate, and figure out the best part packing into a crate, given its shape - but maybe I do..

    Question
    Well, here is my question: assuming that I can represent my parts as 2D (to be determined how), and that some parts look like the letter T and some parts look like rectangles, which algorithm can I use to give me a good estimate on the dimensions of the encompassing area, while ensuring that the parts are packed in a minimal possible area, to minimize crating/shipping costs? Are there approximation algorithms? Seeing how this can get complex, is there an existing library I could use?

    My thought / Approach
    My naive approach would be to define a way to describe the position of parts, place the first part, and compute the total enclosing area & dimensions. Then place the 2nd part in 0-degree orientation, repeat, place it at 180-degree orientation, repeat (for my case I don't think 90-degree rotations will be meaningful due to the long lengths of parts). Proceed using brute force, "tacking on" other parts to the enclosing area until all parts are processed. I may have to shift some parts a tad (as in the interlocking-T example above). This adds a layer of 2D complexity rather than 1D. I am not sure how to approach this. One idea I have is genetic algorithms, but I think those will take up too much processing power and time.

    I will need to look out for shape collisions, as well as adding extra padding space, since we are talking about real parts with irregularities rather than perfect imaginary blocks. I'm afraid this can get geometrically messy fairly fast, and I'd rather keep things simple, if I can. But what if the best (practical) solution is to pack things into different crate boxes rather than just one? This can get a bit more tricky. There is a human element involved as well: like parts can go into the same box and are thus a constraint to be considered. Some parts that are not the same are sometimes grouped together for shipping and can be considered as a common grouped item. Sometimes customers want things shipped their way, which adds a human element to constraints, so there will have to be some customization.
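
    As a starting point for the rectangular approximation, here is a short Python sketch of a first-fit, decreasing-height "shelf" heuristic, a classic strip-packing approach with well-studied approximation guarantees. It assumes each part has already been reduced to an axis-aligned bounding rectangle and that the crate width is fixed; interlocking T shapes, collision checks, and padding would need a polygon-based packer, but this gives a quick upper-bound estimate of the enclosing area.

        def shelf_pack(parts, crate_width):
            # parts: list of (w, h) bounding rectangles for the 2D outlines.
            # Returns (placements, used_height); each placement is (x, y, w, h),
            # so crate_width * used_height bounds the enclosing area.
            # Each part is laid with its longer side along the shelf; drop this
            # normalization if rotation is not allowed.
            rects = sorted(((max(w, h), min(w, h)) for w, h in parts),
                           key=lambda r: r[1], reverse=True)   # tallest first
            shelves = []          # each shelf is [y, height, x_cursor]
            placements = []
            used_height = 0
            for w, h in rects:
                if w > crate_width:
                    raise ValueError("part %r is wider than the crate" % ((w, h),))
                for shelf in shelves:             # first existing shelf it fits on
                    if shelf[2] + w <= crate_width and h <= shelf[1]:
                        placements.append((shelf[2], shelf[0], w, h))
                        shelf[2] += w
                        break
                else:                             # open a new shelf on top
                    shelves.append([used_height, h, w])
                    placements.append((0, used_height, w, h))
                    used_height += h
            return placements, used_height

        # Example: three parts approximated by their bounding boxes.
        placements, height = shelf_pack([(50, 30), (30, 50), (20, 20)], crate_width=100)
        print(placements, "enclosing area <=", 100 * height)

    Sorting tallest-first is what keeps each shelf dense, and crate width times the returned height gives the area estimate to compare across candidate crates.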

    Read the article

  • SharePoint Content and Site Editing Tips

    - by Bil Simser
    A few content management and site editing tips for power users, on this bacon-flavoured unicorn morning. The theme here is: keep it clean!

    Write "friendly" email addresses
    Remember it's human beings reading your content. So seeing something like "If you have questions please send an email to [email protected]" breaks up the readability. Instead just do the simple steps of writing the content in plain English and going back, highlighting the name, and inserting a link (note: you might have to prefix the link with mailto:[email protected]). It makes for a friendlier looking page and hides the ugliness that is sometimes in email addresses.

    Use friendly column and list names
    This is a big pet peeve of mine. When you first create a column or list with spaces, the internal name is changed. The display name might be "My Amazing List of Animals with Large Testicles" but the internal (and link) name becomes "My_x0020_Amazing_x0020_List_x0020_of_x0020_Animals_x0020_with_x0020_Large_x0020_Testicles". What's worse is if you create a publishing page named "This Website is Fueled By a Dolphin's Spleen". Not only is it incorrect grammar, but the apostrophe wreaks havoc on both the internal name for the list (with lots of crazy hex codes) as well as the hyperlink (where everything is URL-encoded). Instead, create the list with a distinct and compact name, then go back and change it to whatever you want. The end result is a better-formed name that you can both script and access in code more easily.

    Keep your views clean
    When you add a column to a list or create a new list, the default is to add it to the default view. Do everyone a favour and don't check this box! The default view of a list should be something similar to the Title field and nothing else. Keep it clean. If you want to set a default view that's different, go back and create one with all the fields and filtering and sorting columns you want, and set it as default. It's a good idea to keep the original AllItems.aspx (note the lack of a space in the filename!) easy and unfiltered. It's also a good idea to keep your column count down in views. Don't let every column be added by default, and don't add every column just because you can. Create separate views for distinct responsibilities, and try to keep the number of columns down to a single screen to prevent horizontal scrolling.

    Simple navigation
    The Quick Launch is a great tool for navigating around your site, but don't use the default of adding all lists to it. Uncheck that box and keep navigation simple. Create custom groupings that make sense, so if you don't have a site with "Documents and Lists" but "Reports and Notices" makes more sense, then do it. Also hide internal lists from the Quick Launch. For example, if most users don't need to see all the lookup tables you might have on a site, don't show them. You can use audience filtering on the Quick Launch if you want to hide admin items from non-admin users, so consider that as an option.

    Enjoy!
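
    To see why compact-then-rename matters, here is a rough Python illustration of the kind of escaping SharePoint applies when deriving an internal name from a display name. It is a simplification for illustration only (the real rules cover more characters, length limits, and collisions), but it reproduces the _x0020_ escape that a space turns into.

        def approx_internal_name(display_name):
            # Approximation: ASCII letters and digits pass through; anything
            # else becomes a _xNNNN_ hex escape (a space is U+0020 -> _x0020_).
            out = []
            for ch in display_name:
                if ch.isascii() and ch.isalnum():
                    out.append(ch)
                else:
                    out.append("_x%04x_" % ord(ch))
            return "".join(out)

        print(approx_internal_name("My Amazing List"))  # My_x0020_Amazing_x0020_List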

    Read the article

  • The best computer ever

    - by Jeff
    (This is a repost from my personal blog… wow… I need to write more technical stuff!)

    About three years and three months ago, I bought a 17" MacBook Pro, and it turned out to be the best computer I've ever owned. You might think that every computer with better specs is automatically better than the last, but that hasn't been my experience.

    My first one was a Sony, back in the Pentium III days, and it cost an astonishing $2,500. That was even more ridiculous in 1999 dollars. It had a dial-up modem, and a CD-ROM, built-in! It may have even played DVDs. A few years later I bought an HP, and it ended up being a pile of shit. The power connector inside came loose from the board, and on occasion would even short. In 2005, I bought a Dell, and it wasn't bad. It had a really high resolution screen (complete with dead pixels, a problem in those days), and it was the first laptop I felt I could do real work on. When 2006 rolled around, Apple started making computers with Intel CPUs, and I bought the very first one the week it came out. I used Boot Camp to run Windows. I still have it in its box somewhere, and I used it for three years.

    The current 17" was new in 2009. The goodness was largely rooted in having a big screen with lots of dots. This computer has been the source of hundreds of blog posts, tens of thousands of lines of code, video and photo editing, and of course, a whole lot of Web surfing. It connected to corpnet at Microsoft, WiFi in Hawaii, and has presented many a deck. It has traveled with me tens of thousands of miles.

    Last year, I put a solid state drive in it, and it was like getting a new computer. I can boot up a Windows 7 VM in about 19 seconds. Having 8 gigs of RAM has always been fantastic. Everything about it has been fast and fun. When new, the battery (when not using VMs) could get as much as 10 hours. I can still do 7 without much trouble. After 460 charge cycles, the battery health is still between 85 and 90%.

    The only real negative has been the size and weight. It's only an inch thick, but naturally it's pretty big with a 17" screen. You don't get battery life like that without a huge battery, either, so it's heavy. It was never a deal breaker, but sometimes on a long haul across a large airport, you know you're carrying it.

    Today, Apple announced a new, thinner and lighter 15" laptop, with twice the RAM and CPU cores, and four times the screen resolution. It basically handles my size and weight issues while retaining the resolution, and it still costs less than my 17" did. So I ordered one. Three years is an excellent run, but I kind of budgeted for a new workhorse this year anyway.

    So if you're interested in a 17" MacBook Pro with a Core 2 Duo 2.66 GHz CPU, 8 gigs of RAM and a 320 gig hard drive (sorry, I'm keeping the SSD), I have one to sell. They've apparently discontinued the 17", which is going to piss off the video community. It's in excellent condition, with a few minor scratches, but I take care of my stuff.

    Read the article

  • Oracle Executive Strategy Brief: Enterprise-Grade Cloud Applications

    - by B Shashikumar
    Cloud computing has clearly evolved into one of the dominant secular trends in the industry. Organizations are looking to the cloud to change how they buy and consume IT. And it's no longer about just lower up-front costs. The cloud promises to deliver greater agility and free up resources to focus on innovation versus running and maintaining systems. But are organizations actually realizing these benefits?

    The full promise of cloud is not being realized by customers who entrust their business to multiple niche cloud providers. While almost 9 out of 10 companies expect more IT agility with cloud, only 47% are actually getting it (source: 2011 State of Cloud Survey by Symantec). These niche-cloud customers have also seen the promises of lower costs, efficiency gains, improved security, and compliance go unfulfilled. Having one cloud provider for customer relationship management (CRM) and another for human capital management (HCM), and then trying to glue these proprietary systems together while integrating with a back-office financial system, can add to complexity and long-term costs. Completing a business process or generating an integrated report is cumbersome, and leverages incomplete data.

    Why can't niche cloud providers deliver on the full promise of cloud? It's simple: you still need to complete business processes. You still need reporting that enables you to take action using data from multiple systems. You still have to comply with SOX and other industry regulations. These requirements don't go away just because you deploy in the cloud. Delivering lower up-front costs by enabling customers to buy software as a service (SaaS) is the easy part. To get real value that lasts longer than your quarterly report, it's important to realize the benefits of cloud without compromising on functionality, and while having the right level of control and flexibility. This is the true promise of cloud.

    Oracle's cloud strategy centers on delivering the benefits of cloud without compromise. We uniquely empower our customers with complete solutions and choice: from the richest functionality to integrated reporting and a great user experience. It's all available in the cloud. And it works not just with other Oracle cloud applications, but with your existing Oracle and third-party systems as well. This helps protect your current investments and extend their value as you journey to the cloud. We've made the necessary investments not only in our applications but also in the underlying technology that makes it all run, from the platform down to the hardware and operating system. We make it all. And we've engineered it to work together and be highly optimized for our customers, in the cloud.

    With Oracle enterprise-grade cloud applications, you get the benefits of cloud plus more power, more choice, and more confidence. Read more about how you can realize the true advantage of cloud with Oracle enterprise-grade cloud applications in the Oracle Executive Strategy Brief here. You can also attend an Oracle Cloud Conference event at a city near you. Register here.

    Read the article

  • What Counts For a DBA – Depth

    - by Louis Davidson
    SQL Server offers very simple interfaces to many of its features. Most people could open up SSMS, connect to a server, write a simple query and see the results. Even several of the core DBA tasks are deceptively straightforward. It doesn't take a rocket scientist to perform a basic database backup or run a trace (even using the newfangled Extended Events!). However, appearances can be deceptive, and oftentimes it is really important that a DBA understands not just the basics of how to perform a task, but why we do a task, and how that task works.

    As an analogy, consider a child walking into a darkened room. Most would know that they need to turn on the light, and how to do it, so they flick the switch. But what happens if light fails to shine forth? Most would immediately tell you that you need to consider changing the light bulb. So you hop in the car and take them to the local home store and instruct them to buy a replacement. Confronted with a 40-foot display of light bulbs, how will they decide which of the hundreds of bulbs, of different types, fittings, shapes, colors, power and efficiency ratings, is the right choice? Obviously the main lesson the child is going to learn this day is how to use their cell phone as a flashlight so they don't have to ask for help the next time.

    Likewise, when the metaphorical toddlers who use your database server have issues, they will instinctively know something is wrong, and may even have some idea what caused it, but will have no depth of knowledge to figure out the right solution. That is where the DBA comes in and attempts to save the day. However, when one looks beneath the shiny UI, SQL Server has its own "40-foot display of light bulbs", in the form of the tremendous number of tools and the often-bewildering amount of information they can present to the DBA to help us find issues. Without sufficient depth, we are left resorting to guesswork, trying different "bulbs" over and over, hoping to stumble on the answer. This is where the right depth of knowledge goes a long way.

    If we need to write a SELECT statement, then knowing the syntax and where to find the data is not enough. Knowledge of indexes and query plans is essential. Without it, we might hit on a query that "works", but we are basically still a user, not a programmer, because we have no real control over our platform. Is that level of knowledge deep enough? Probably not, since knowledge of the underlying metadata and structures would be very useful in helping us make sense of any query plan. Understanding the structure of an index makes the "key lookup" operator not sound like what you do when someone tapes your car key to the ceiling. So is even this level of understanding deep enough? Do we need to understand the memory architecture used to process the query? It might be a comforting level of knowledge, and will doubtless come in handy at some point, but is not strictly necessary in most cases. Beyond that lies (more or less) full knowledge of the SQL language and the intricacies of every step the SQL Server engine takes to process our query.

    My personal theory is that, as a professional, our knowledge of a given task should extend, at a minimum, one level deeper than is strictly necessary to perform the task. Anything deeper can be left to the ridiculously smart, or obsessive, or both. As an example: tasked with storing an integer value between 0 and 99999999, it's essential that I know that choosing an integer over decimal(8,0) will likely offer performance benefits. It is then useful that I also understand the value of adding a CHECK constraint, to make sure the values are valid for the desired range; and comforting that I know a little about the underlying processors, registers and computer math. Anything further, I leave to the likes of Joe Chang, whose recent blog post on the topic offers depth by the bucketful!

    Read the article

  • Customer Loyalty vs. Customer Engagement: Who Cares?

    - by Jeb Dasteel-Oracle
    Have you read the recent Forbes OracleVoice blog titled Customer Loyalty is Dead. Long Live Engagement!? If you haven't, take a look. This article prompted lots of conversation in the social realm. Many who read the article voiced their reactions to the headline, and now I'm jumping in to add my view.

    Customer loyalty is still key. It's the effect, and engagement is the cause. We at least know that to be true for our customers. We are in an age where customers are demanding to be heard. We need them to be actively involved - or engaged - as well. Greater levels of customer engagement, properly targeted, positively correlate with satisfaction. Our data has shown us this over and over. Satisfied customers are more loyal and more willing to vocalize their satisfaction through referencing, and are more likely to purchase again, all of which in turn drives incremental revenue - from the customer doing the referencing AND the customer on the receiving end of that reference. Turning this around completely, if we begin to see the level of a customer's engagement start to wane, this is an indicator that their satisfaction, loyalty, and future revenue are likely at risk.

    At Oracle, we've put in place many programs to target, encourage, and then track engagement, allowing us to measure engagement as a determinant of loyalty. Some of these programs include Key Accounts, solution design and architecture, Executive Sponsorship, as well as executive advisory boards. Specific programs allow us to engage specific contacts within specific customer organizations (based on role) and then systematically track their engagement activities over time, alongside tracking customer satisfaction, loyalty, referenceability, and incremental revenue contribution. Continuous measurement of engagement allows us to better understand customer views of what it means to partner with a provider, and to adjust program participation to better meet the needs of the partnership. We can also track across customer segments, and design new programs that are even more effective than the ones we have in place today.

    In case you missed any of my previous Forbes articles, I've included links below for easy access.
    - Award-Winning Companies Put Customers First
    - The Power of Peer Networks: 5 Reasons to Get (and Stay) Involved
    - Technology At Work: Traveling In Style
    - Customer Central: 8 Strategies for Putting Customers at the Core of Your Business
    - Technology at Work: Five Companies Doing IT Right

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. The three models provide increasing degrees of sharing:

    - Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely, since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.
    - Database: multiple database instances are deployed on a shared software/hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.
    - Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)

    Customer value: lower costs, better system utilization
    In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space and lower power and cooling costs. And the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.

    Customer value: higher productivity
    The OpEx benefits from Standardization are compounded: not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher-value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.

    Capabilities / Characteristics
    In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on the number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied.

    Activities / Tasks
    Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra.

    Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have the confidence in it. Data should be a foundation upon which a business is built.

    In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues, it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets.

    You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else.

    The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have.

    In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example, backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who don’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept.

    Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley

    Read the article

  • Faster Memory Allocation Using vmtasks

    - by Steve Sistare
    You may have noticed a new system process called "vmtasks" on Solaris 11 systems:

        % pgrep vmtasks
        8
        % prstat -p 8
           PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
             8 root        0K    0K sleep   99  -20   9:10:59 0.0% vmtasks/32

    What is vmtasks, and why should you care? In a nutshell, vmtasks accelerates creation, locking, and destruction of pages in shared memory segments. This is particularly helpful for locked memory, as creating a page of physical memory is much more expensive than creating a page of virtual memory. For example, an ISM segment (shmflg & SHM_SHARE_MMU) is locked in memory on the first shmat() call, and a DISM segment (shmflg & SHM_PAGEABLE) is locked using mlock() or memcntl().

    Segment operations such as creation and locking are typically single threaded, performed by the thread making the system call. In many applications, the size of a shared memory segment is a large fraction of total physical memory, and the single-threaded initialization is a scalability bottleneck which increases application startup time.

    To break the bottleneck, we apply parallel processing, harnessing the power of the additional CPUs that are always present on modern platforms. For sufficiently large segments, as many as 16 threads of vmtasks are employed to assist an application thread during creation, locking, and destruction operations. The segment is implicitly divided at page boundaries, and each thread is given a chunk of pages to process. The per-page processing time can vary, so for dynamic load balancing, the number of chunks is greater than the number of threads, and threads grab chunks dynamically as they finish their work. Because the threads modify a single application address space in a compressed time interval, contention on the locks protecting VM data structures was a problem, and we had to re-scale a number of VM locks to get good parallel efficiency. The vmtasks process has 1 thread per CPU and may accelerate multiple segment operations simultaneously, but each operation gets at most 16 helper threads to avoid monopolizing CPU resources. We may reconsider this limit in the future.

    Acceleration using vmtasks is enabled out of the box, with no tuning required, and works for all Solaris platform architectures (SPARC sun4u, SPARC sun4v, x86). The following tables show the time to create + lock + destroy a large segment, normalized as milliseconds per gigabyte, before and after the introduction of vmtasks:

        ISM:
        system  ncpu  before  after  speedup
        ------  ----  ------  -----  -------
        x4600     32    1386    245       6X
        X7560     64    1016    153       7X
        M9000    512    1196    206       6X
        T5240    128    2506    234      11X
        T4-2     128    1197    107      11X

        DISM:
        system  ncpu  before  after  speedup
        ------  ----  ------  -----  -------
        x4600     32    1582    265       6X
        X7560     64    1116    158       7X
        M9000    512    1165    152       8X
        T5240    128    2796    198      14X

    (I am missing the data for T4 DISM, for no good reason; it works fine.) The following table separates the creation and destruction times:

        ISM, T4-2:
                 before  after
                 ------  -----
        create      702     64
        destroy     495     43

    To put this in perspective, consider creating a 512 GB ISM segment on T4-2. Creating the segment would take 6 minutes with the old code, and only 33 seconds with the new. If this is your Oracle SGA, you save over 5 minutes when starting the database, and you also save when shutting it down prior to a restart. Those minutes go directly to your bottom line for service availability.
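
    The dynamic load-balancing scheme is easy to picture with a toy model. The Python below is only a sketch of the idea described above - more chunks than workers, with idle workers grabbing the next chunk - and bears no relation to the actual Solaris implementation.

        import queue
        import threading

        def parallel_pages(pages, do_page, nworkers=4, chunks_per_worker=4):
            # More chunks than workers, so one slow chunk doesn't idle the rest.
            nchunks = nworkers * chunks_per_worker
            step = max(1, len(pages) // nchunks)
            chunks = queue.Queue()
            for i in range(0, len(pages), step):
                chunks.put(pages[i:i + step])     # one chunk of pages

            def worker():
                while True:
                    try:
                        chunk = chunks.get_nowait()   # grab the next chunk
                    except queue.Empty:
                        return                        # no work left
                    for page in chunk:
                        do_page(page)                 # per-page cost may vary

            threads = [threading.Thread(target=worker) for _ in range(nworkers)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()

        # Example: "process" 1000 pages with a trivial per-page function.
        done = []
        parallel_pages(list(range(1000)), done.append)
        print(len(done), "pages processed")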

    Read the article

  • Oracle Excellence Award

    - by Hartmut Wiese
    CALL FOR NOMINATIONS - 2014 Oracle Excellence Award: Sustainability Innovation

    Is your organization using an Oracle product to help with a sustainability initiative while reducing costs? Saving energy? Saving gas? Saving paper? For example, you may use Oracle’s Agile Product Lifecycle Management to design more eco-friendly products, Oracle Transportation Management to reduce fleet emissions, Oracle Exadata Database Machine to decrease power and cooling needs while increasing database performance, Oracle Business Intelligence to measure environmental impacts, or one of many other Oracle products. Your organization may be eligible for the 2014 Oracle Excellence Award: Sustainability Innovation.

    Submit a nomination form located here by Friday, June 20 if your company is using any Oracle product to take an environmental lead, as well as to reduce costs and improve business efficiencies by using green business practices. These awards will be presented during Oracle OpenWorld 2014 (September 28 - October 2) in San Francisco.

    About the Award
    • Winners will be selected from the customer and/or partner nominations. Either a customer, their partner, or an Oracle representative can submit the nomination form on behalf of the customer.
    • There is a nomination form here to discuss your use of Oracle products and how they have helped your sustainability efforts and reduced costs.
    • Winners will be selected based on the extent of the environmental impact they have had, as well as the business efficiencies they have achieved through their combined use of Oracle products.

    Nomination Eligibility
    • Your company uses at least one component of Oracle products, whether it's the Oracle database, business applications, Fusion Middleware, or Sun servers/storage.
    • This solution should be in production or in active development.
    • Nomination deadline: Friday, June 20, 2014.

    Benefits to Award Winners
    • Award presented to winners during Oracle OpenWorld by Jeff Henley, Oracle Chairman of the Board
    • Free Oracle OpenWorld registration pass for each winning customer
    • 2014 Oracle Excellence Award: Sustainability Innovation award logo for inclusion on your own website and/or press release
    • Possible placement in Oracle Profit Magazine and/or Oracle Magazine
    • ‘Enable the Eco-Enterprise’ podcast opportunity

    See last year's winners here.

    Questions? Send an email to: [email protected]
    Follow Oracle’s Sustainability Solutions on Twitter, LinkedIn, YouTube, and the Sustainability Matters blog.
    Web page with award details: http://www.oracle.com/us/products/applications/green/call-for-nominations-185050.html

    Read the article

  • MySQL, An Ideal Choice for The Cloud

    - by Bertrand Matthelié
    As the world's most popular web database, MySQL has quickly become the leading database for the cloud, with most providers offering MySQL-based services.

    Access our Resource Kit to discover:
    - Why MySQL has become the leading database in the cloud, and how it addresses the critical attributes of cloud-based deployments
    - How ISVs rely on MySQL to power their SaaS offerings
    - Best practices to deploy the world’s most popular open source database in public and private clouds

    You will also find out how you can leverage MySQL together with Hadoop and other technologies to unlock the value of Big Data, either on-premise or in the cloud. Access white papers, webinars, case studies and other resources in our Resource Kit now!

    Read the article

  • installer can't find partitions, but fdisk can find them

    - by pxd
    I'm installing Ubuntu 12.04. My system already has two systems installed - Windows XP and Ubuntu 10.10 - and now I want to update Ubuntu to 12.04, using a USB disk as the installation medium. But the installer cannot find the partitions on my hard disk, while fdisk can find them. Can you help me? What should I do?

        ubuntu@ubuntu:~$ sudo lshw -short
        H/W path            Device      Class       Description
        =========================================================================
                                        system      HP 2230s (NN868PA#AB2)
        /0                              bus         3037
        /0/9                            memory      64KiB BIOS
        /0/0                            processor   Intel(R) Core(TM)2 Duo CPU T6570 @ 2.10GHz
        /0/0/1                          memory      2MiB L2 cache
        /0/0/3                          memory      32KiB L1 cache
        /0/0/0.1                        processor   Logical CPU
        /0/0/0.2                        processor   Logical CPU
        /0/2                            memory      32KiB L1 cache
        /0/4                            memory      2GiB System Memory
        /0/4/0                          memory      SODIMM [empty]
        /0/4/1                          memory      2GiB SODIMM DDR2 Synchronous 800 MHz (1.2 ns)
        /0/100                          bridge      Mobile 4 Series Chipset Memory Controller Hub
        /0/100/2                        display     Mobile 4 Series Chipset Integrated Graphics Controller
        /0/100/2.1                      display     Mobile 4 Series Chipset Integrated Graphics Controller
        /0/100/1a                       bus         82801I (ICH9 Family) USB UHCI Controller #4
        /0/100/1a.1                     bus         82801I (ICH9 Family) USB UHCI Controller #5
        /0/100/1a.2                     bus         82801I (ICH9 Family) USB UHCI Controller #6
        /0/100/1a.7                     bus         82801I (ICH9 Family) USB2 EHCI Controller #2
        /0/100/1b                       multimedia  82801I (ICH9 Family) HD Audio Controller
        /0/100/1c                       bridge      82801I (ICH9 Family) PCI Express Port 1
        /0/100/1c.1                     bridge      82801I (ICH9 Family) PCI Express Port 2
        /0/100/1c.1/0       wlan1       network     PRO/Wireless 5100 AGN [Shiloh] Network Connection
        /0/100/1c.2                     bridge      82801I (ICH9 Family) PCI Express Port 3
        /0/100/1c.4                     bridge      82801I (ICH9 Family) PCI Express Port 5
        /0/100/1c.5                     bridge      82801I (ICH9 Family) PCI Express Port 6
        /0/100/1c.5/0       eth1        network     88E8072 PCI-E Gigabit Ethernet Controller
        /0/100/1d                       bus         82801I (ICH9 Family) USB UHCI Controller #1
        /0/100/1d.1                     bus         82801I (ICH9 Family) USB UHCI Controller #2
        /0/100/1d.2                     bus         82801I (ICH9 Family) USB UHCI Controller #3
        /0/100/1d.7                     bus         82801I (ICH9 Family) USB2 EHCI Controller #1
        /0/100/1e                       bridge      82801 Mobile PCI Bridge
        /0/100/1f                       bridge      ICH9M LPC Interface Controller
        /0/100/1f.2         scsi0       storage     82801IBM/IEM (ICH9M/ICH9M-E) 4 port SATA Controller [AHCI mode]
        /0/100/1f.2/0       /dev/sda    disk        500GB WDC WD5000BEVT-0
        /0/100/1f.2/0/1     /dev/sda1   volume      48GiB Windows NTFS volume
        /0/100/1f.2/0/2     /dev/sda2   volume      416GiB Extended partition
        /0/100/1f.2/0/2/5   /dev/sda5   volume      97GiB HPFS/NTFS partition
        /0/100/1f.2/0/2/6   /dev/sda6   volume      198GiB HPFS/NTFS partition
        /0/100/1f.2/0/2/7   /dev/sda7   volume      27GiB Linux filesystem partition
        /0/100/1f.2/0/2/8   /dev/sda8   volume      93GiB Linux filesystem partition
        /0/100/1f.2/1       /dev/cdrom  disk        CDDVDW TS-L633M
        /0/1                scsi6       storage
        /0/1/0.0.0          /dev/sdb    disk        15GB STORAGE DEVICE
        /0/1/0.0.0/0        /dev/sdb    disk        15GB
        /0/1/0.0.0/0/1      /dev/sdb1   volume      14GiB Windows FAT volume
        /1                              power       HZ04037

        ubuntu@ubuntu:~$ sudo fdisk -l

        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x31263125

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *          63   102277727    51138832+   7  HPFS/NTFS/exFAT
        /dev/sda2       102277728   976784129   437253201    f  W95 Ext'd (LBA)
        /dev/sda5       102277791   307078127   102400168+   7  HPFS/NTFS/exFAT
        /dev/sda6       307078191   724141151   208531480+   7  HPFS/NTFS/exFAT
        /dev/sda7       724142080   781459455    28658688   83  Linux
        /dev/sda8       781461504   976771071    97654784   83  Linux

        Disk /dev/sdb: 15.9 GB, 15931539456 bytes
        64 heads, 32 sectors/track, 15193 cylinders, total 31116288 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0009eb92

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          32    31115263    15557616    c  W95 FAT32 (LBA)

    The Ubuntu 12.04 installer can't find the partitions on my hard disk; it only finds the device /dev/sda. (Sorry, I'm a new user, so I can't post an image.)

    Read the article

  • Top 10 things I Learned this October

    - by rbewtra
    Last week, I attended the second-largest IT conference, the Gartner Symposium/ITxpo, held in Orlando, Florida. Earlier this month, I also had the opportunity to be part of the largest IT conference - Oracle OpenWorld. Both were gatherings for senior IT professionals: CIOs, senior IT and line-of-business executives, and developers. At both events, I learned a great deal about how companies are innovating and leveraging technology. Here are my top 10 take-aways:

    #10. Everyone is talking about Social, Mobile and Cloud. Whether listening to Gartner discuss The Nexus of Forces or listening to Oracle's Executive Vice President Hasan Rizvi deliver the Oracle Fusion Middleware General Session, everyone is talking about Social, Mobile, Cloud, and Information: Gartner, Oracle, our customers, partners - everyone.
    #9. SOA is NOT dead; it is more important than ever before. It is an imperative!
    #8. The big question around IT security is not "what will you do IF?" but "what will you do WHEN?"
    #7. General Colin Powell is an IT guy! Aside from having served as National Security Advisor, Chairman of the Joint Chiefs of Staff, and U.S. Secretary of State, Gen. Colin Powell was an inspirational speaker at the Gartner Symposium, and it was clear he understands IT and the powerful impact it has on our society and our youth today.
    #6. Change will happen; we need to plan for it!
    #5. When everything is connected and just works, we have harnessed the power of technology. Middleware is at the heart of social, mobile and cloud.
    #4. Innovation is happening everywhere! Attending both IT events, I was able to hear from companies of all sizes and across industries - including Tesco, Nike, Electronic Arts, Nintendo, and International Speedway - about how they are transforming their companies and their industries.
    #3. A "one size fits all" strategy does not work; instead it alienates IT and the business. The PACE Layered Application Strategy is a framework that allows IT to have that Nexus of Forces conversation with the business.
    #2. To stay relevant, we need to hire the innovation workers and develop for that innovation layer.
    #1. My smartphone is the most valuable tool I own! Every day with it, I am able to communicate via phone, email, and text with family, friends, and colleagues. I am able to look up directions to my hotel, make reservations at restaurants, view my calendar, take pictures, record messages, check in for flights and so much more. I can never leave home without it.

    Look forward to catching up again soon!

    Additional Information
    - Product Information on Oracle.com: Oracle Fusion Middleware
    - Follow us on Twitter and Facebook
    - Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Hosted Monitoring

    - by Grant Fritchey
    The concept of using services to take the place of writing a lot of your own code goes way, way back in computing history. The fundamentals of the concept go back to the dawn of computing, with companies like IBM hosting time-shares of computing power that you could rent for short periods, but things really took off with the building of the Web. Now, with all the growth in virtual machines, hosted machines, and hosted services from vendors like Amazon and Microsoft, the need to keep all of your software locally on physical boxes is going the way of the dodo. There will likely always be some pieces of software that you keep on machines on your property or on your person, but the practice of keeping fundamental services local is going away. As someone once put it to me, if you were starting a business right now, would you bother setting up an Exchange server to manage your email, or would you just go to one of the external mail services for everything? For most of us (who are not Exchange admins) the answer is pretty easy.

    With all this momentum towards having external services manage more and more of the infrastructure that's not business-unique, why would you burn up a server and a license instance setting up monitoring for your SQL Servers? Of course, some of you are dealing with hyper-sensitive data that might require, through law or treaty, that you lock it down and never expose it to the intertubes, but most of us are not. So, what if someone else took on the basic hassle of setting up monitoring on your systems? That's what we're working on here at Red Gate. Right now it's a private test, but we're growing it and developing it and it'll be going to a public beta, probably (hopefully) this year. I'm running it on my machines right now.

    The concept is pretty simple. You put a relay on your server, poke a hole in your firewall for it, and we start monitoring your server using SQL Monitor. It's actually shocking how easy it is to get going. You still have to adjust your alerting thresholds, but that's a standard part of alerting: your pain threshold and my pain threshold for any given alert may be different. From there, we do all the heavy lifting: keeping your data online and available, providing you with access to the information about how your servers are behaving, everything.

    Maybe it's just me, but I'm really excited by this. I think we're getting to a place where we can really help small and medium-sized businesses get a monitoring solution in place, quickly and easily. All you crazy-busy, and possibly accidental, DBAs and system admins can finally set up monitoring without taking all the time to configure systems, run installs, and all the rest. You just have to tweak your alerts and you're ready to run. If you are interested in checking it out, you can apply for the closed beta through the Monitor web page.

    Read the article

  • A carrier-grade server with the SPARC T4 processor: Netra SPARC T4-1

    - by user13138700
    The Netra SPARC T4-1 and Netra SPARC T4-2, carrier-grade servers built around the SPARC T4 processor, were announced on January 10, 2012, and began shipping on March 15. The Netra SPARC T4-1 is available with a 4-core version of the T4 processor (the Netra SPARC T4-1 can be configured with either 4 or 8 cores).

    Below is prtdiag and pginfo output from a 4-core machine. With 8 threads per core, prtdiag on the 4-core configuration shows 4 cores x 8 threads = 32 virtual processors, while pginfo shows how those virtual processors are grouped by core.

        # ./prtdiag -v
        System Configuration: Oracle Corporation sun4v Netra SPARC T4-1
        Memory size: 130560 Megabytes

        ================================ Virtual CPUs ================================

        CPU ID Frequency Implementation         Status
        ------ --------- ---------------------- -------
        0      2848 MHz  SPARC-T4               on-line
        1      2848 MHz  SPARC-T4               on-line
        2      2848 MHz  SPARC-T4               on-line
        3      2848 MHz  SPARC-T4               on-line
        4      2848 MHz  SPARC-T4               on-line
        5      2848 MHz  SPARC-T4               on-line
        6      2848 MHz  SPARC-T4               on-line
        7      2848 MHz  SPARC-T4               on-line
        8      2848 MHz  SPARC-T4               on-line
        9      2848 MHz  SPARC-T4               on-line
        10     2848 MHz  SPARC-T4               on-line
        11     2848 MHz  SPARC-T4               on-line
        12     2848 MHz  SPARC-T4               on-line
        13     2848 MHz  SPARC-T4               on-line
        14     2848 MHz  SPARC-T4               on-line
        15     2848 MHz  SPARC-T4               on-line
        16     2848 MHz  SPARC-T4               on-line
        17     2848 MHz  SPARC-T4               on-line
        18     2848 MHz  SPARC-T4               on-line
        19     2848 MHz  SPARC-T4               on-line
        20     2848 MHz  SPARC-T4               on-line
        21     2848 MHz  SPARC-T4               on-line
        22     2848 MHz  SPARC-T4               on-line
        23     2848 MHz  SPARC-T4               on-line
        24     2848 MHz  SPARC-T4               on-line
        25     2848 MHz  SPARC-T4               on-line
        26     2848 MHz  SPARC-T4               on-line
        27     2848 MHz  SPARC-T4               on-line
        28     2848 MHz  SPARC-T4               on-line
        29     2848 MHz  SPARC-T4               on-line
        30     2848 MHz  SPARC-T4               on-line
        31     2848 MHz  SPARC-T4               on-line

        ======================= Physical Memory Configuration ========================
        (remainder omitted)

        # pginfo -p -T
        0 (System [system,chip]) CPUs: 0-31
        `-- 3 (Data_Pipe_to_memory [system,chip]) CPUs: 0-31
            |-- 2 (Floating_Point_Unit [core]) CPUs: 0-7
            |   `-- 1 (Integer_Pipeline [core]) CPUs: 0-7
            |-- 5 (Floating_Point_Unit [core]) CPUs: 8-15
            |   `-- 4 (Integer_Pipeline [core]) CPUs: 8-15
            |-- 7 (Floating_Point_Unit [core]) CPUs: 16-23
            |   `-- 6 (Integer_Pipeline [core]) CPUs: 16-23
            `-- 9 (Floating_Point_Unit [core]) CPUs: 24-31
                `-- 8 (Integer_Pipeline [core]) CPUs: 24-31

    The T4 core is a completely new design. Compared with the T3 core (the S2 core), the T4 core (the S3 core) delivers roughly five times better single-thread performance. The S2 core was a throughput-oriented design, so its per-thread performance was modest; the S3 core removes that weakness, which means a T4 machine copes well even with single-thread-sensitive workloads. That makes the 4-core Netra SPARC T4-1 a capable entry configuration, covering workloads that were difficult for the T3 generation.

    Here is an overview of the Netra SPARC T4-1:

    Computing
    - 1 x SPARC T4 processor, 2.85GHz, 4 cores / 32 threads or 8 cores / 64 threads (8 threads per core)
    - 16 x DDR3 DIMM slots (up to 256GB using 16GB DIMMs)

    I/O and Storage
    - 3 x Low Profile PCI-Express Gen2 slots (2 x 10Gb Ethernet XAUI capable)
    - 2 x Full-height Half-length PCI-Express Gen2 slots
    - 4 x 10/100/1000 Ethernet ports
    - 4 x 2.5" SAS2 HDD
    - 4 x USB ports (2 front, 2 rear)

    RAS and Management and Power Supply
    - Redundant, hot-swappable components (RAID support)
    - Remote management via ILOM
    - 2N (1+1) power supplies, AC or DC input

    Support OS
    - Oracle Solaris 10 10/09, 9/10, 8/11; Oracle Solaris 11 11/11
    - Oracle VM Server for SPARC 2.1 (LDoms)

    Other
    - NEBS Level 3 certified
    - Mounts in 21" and 19" (EIA-310D), 23", 24", and 600mm racks

    You can see the SPARC T4 servers, including this 4-core model, in person at Oracle OpenWorld Tokyo 2012, which runs for three days, April 4 (Wed) to April 6 (Fri). Please come and experience the power of SPARC T4 for yourself!
    Oracle OpenWorld Tokyo 2012: http://www.oracle.com/openworld/jp-ja/index.html
    Session: 4/6 (Fri) Develop D3-13 (14:00 - 14:45). When registering, invitation code 7264 can be used.

    Read the article

  • Problem with Android emulator

    - by benasio
    Projects do not run; the emulator screen shows only "ANDROID".

    Environment: WinXP Pro SP3 / Eclipse Galileo
    java version "1.6.0_20"
    Java(TM) SE Runtime Environment (build 1.6.0_20-b02)
    Java HotSpot(TM) Client VM (build 16.3-b01, mixed mode, sharing)

    My actions:
    1. Start the emulator (Platform: 2.1, API Level: 7) and wait until the device status in the DDMS window changes to ONLINE.
    2. Launch helloandroid from the examples - Run As > Android Application.

    Console:

        Android Launch!
        [2010-05-03 21:44:34 - HelloAndroid] adb is running normally.
        [2010-05-03 21:44:34 - HelloAndroid] Performing com.example.helloandroid.HelloAndroid activity launch
        [2010-05-03 21:44:34 - HelloAndroid] Automatic Target Mode: using existing emulator 'emulator-5554' running compatible AVD 'my_vm'
        [2010-05-03 21:44:34 - HelloAndroid] WARNING: Application does not specify an API level requirement!
        [2010-05-03 21:44:34 - HelloAndroid] Device API version is 7 (Android 2.1)
        [2010-05-03 21:44:34 - HelloAndroid] Uploading HelloAndroid.apk onto device 'emulator-5554'
        [2010-05-03 21:44:35 - HelloAndroid] Installing HelloAndroid.apk...
        [2010-05-03 21:45:07 - HelloAndroid] Success!
        [2010-05-03 21:45:08 - HelloAndroid] Starting activity com.example.helloandroid.HelloAndroid on device
        [2010-05-03 21:45:28 - HelloAndroid] ActivityManager: DDM dispatch reg wait timeout
        [2010-05-03 21:45:28 - HelloAndroid] ActivityManager: Can't dispatch DDM chunk 52454151: no handler defined
        [2010-05-03 21:45:28 - HelloAndroid] ActivityManager: Can't dispatch DDM chunk 48454c4f: no handler defined
        [2010-05-03 21:45:28 - HelloAndroid] ActivityManager: Can't dispatch DDM chunk 46454154: no handler defined
        [2010-05-03 21:45:28 - HelloAndroid] ActivityManager: Can't dispatch DDM chunk 4d505251: no handler defined
        [2010-05-03 21:45:52 - HelloAndroid] Device not ready. Waiting 3 seconds before next attempt.
        [2010-05-03 21:45:52 - HelloAndroid] ActivityManager: android.util.AndroidException: Can't connect to activity manager; is the system running?
        [2010-05-03 21:45:55 - HelloAndroid] Starting activity com.example.helloandroid.HelloAndroid on device
        [2010-05-03 21:46:11 - HelloAndroid] ActivityManager: DDM dispatch reg wait timeout
        ......

    DDMS console (only errors and warnings):

        05-03 17:43:52.437: ERROR/vold(26): Error opening switch name path '/sys/class/switch/test2' (No such file or directory)
        05-03 17:43:52.437: ERROR/vold(26): Error bootstrapping switch '/sys/class/switch/test2' (No such file or directory)
        05-03 17:43:52.437: ERROR/vold(26): Error opening switch name path '/sys/class/switch/test' (No such file or directory)
        05-03 17:43:52.437: ERROR/vold(26): Error bootstrapping switch '/sys/class/switch/test' (No such file or directory)
        05-03 17:48:34.036: WARN/Zygote(29): Preloaded drawable resource #0x1080093 (res/drawable-mdpi/sym_def_app_icon.png) that varies with configuration!!
        05-03 17:48:34.406: WARN/Zygote(29): Preloaded drawable resource #0x1080002 (res/drawable-mdpi/arrow_down_float.png) that varies with configuration!!
        05-03 17:48:35.836: WARN/Zygote(29): Preloaded drawable resource #0x10800b4 (res/drawable/btn_check.xml) that varies with configuration!!
        05-03 17:48:36.076: WARN/Zygote(29): Preloaded drawable resource #0x10800b7 (res/drawable-mdpi/btn_check_label_background.9.png) that varies with configuration!!
        05-03 17:48:36.106: WARN/Zygote(29): Preloaded drawable resource #0x10800b8 (res/drawable-mdpi/btn_check_off.png) that varies with configuration!!
        05-03 17:48:36.147: WARN/Zygote(29): Preloaded drawable resource #0x10800bd (res/drawable-mdpi/btn_check_on.png) that varies with configuration!!
        05-03 17:48:36.437: WARN/Zygote(29): Preloaded drawable resource #0x1080004 (res/drawable/btn_default.xml) that varies with configuration!!
        05-03 17:48:36.716: WARN/Zygote(29): Preloaded drawable resource #0x1080005 (res/drawable/btn_default_small.xml) that varies with configuration!!
        05-03 17:48:36.966: WARN/Zygote(29): Preloaded drawable resource #0x1080006 (res/drawable/btn_dropdown.xml) that varies with configuration!!
        05-03 17:48:37.326: WARN/Zygote(29): Preloaded drawable resource #0x1080008 (res/drawable/btn_plus.xml) that varies with configuration!!
        05-03 17:48:37.707: WARN/Zygote(29): Preloaded drawable resource #0x1080007 (res/drawable/btn_minus.xml) that varies with configuration!!
        05-03 17:48:38.057: WARN/Zygote(29): Preloaded drawable resource #0x1080009 (res/drawable/btn_radio.xml) that varies with configuration!!
        05-03 17:48:38.776: WARN/Zygote(29): Preloaded drawable resource #0x108000a (res/drawable/btn_star.xml) that varies with configuration!!
        05-03 17:48:39.327: WARN/Zygote(29): Preloaded drawable resource #0x1080125 (res/drawable/btn_toggle.xml) that varies with configuration!!
        05-03 17:48:39.416: WARN/Zygote(29): Preloaded drawable resource #0x1080187 (res/drawable-mdpi/ic_emergency.png) that varies with configuration!!
        05-03 17:48:39.506: WARN/Zygote(29): Preloaded drawable resource #0x1080012 (res/drawable-mdpi/divider_horizontal_bright.9.png) that varies with configuration!!
        05-03 17:48:39.576: WARN/Zygote(29): Preloaded drawable resource #0x1080014 (res/drawable-mdpi/divider_horizontal_dark.9.png) that varies with configuration!!
        05-03 17:48:40.126: WARN/Zygote(29): Preloaded drawable resource #0x1080016 (res/drawable/edit_text.xml) that varies with configuration!!
        05-03 17:48:40.507: WARN/Zygote(29): Preloaded drawable resource #0x1080161 (res/drawable/expander_group.xml) that varies with configuration!!
        05-03 17:48:41.036: WARN/Zygote(29): Preloaded drawable resource #0x1080062 (res/drawable/list_selector_background.xml) that varies with configuration!!
        05-03 17:48:41.177: WARN/Zygote(29): Preloaded drawable resource #0x1080217 (res/drawable-mdpi/menu_background.9.png) that varies with configuration!!
        05-03 17:48:41.256: WARN/Zygote(29): Preloaded drawable resource #0x1080218 (res/drawable-mdpi/menu_background_fill_parent_width.9.png) that varies with configuration!!
        05-03 17:48:41.567: WARN/Zygote(29): Preloaded drawable resource #0x1080219 (res/drawable/menu_selector.xml) that varies with configuration!!
        05-03 17:48:41.706: WARN/Zygote(29): Preloaded drawable resource #0x1080224 (res/drawable-mdpi/panel_background.9.png) that varies with configuration!!
        05-03 17:48:41.849: WARN/Zygote(29): Preloaded drawable resource #0x108022e (res/drawable-mdpi/popup_bottom_bright.9.png) that varies with configuration!!
        05-03 17:48:42.026: WARN/Zygote(29): Preloaded drawable resource #0x108022f (res/drawable-mdpi/popup_bottom_dark.9.png) that varies with configuration!!
        05-03 17:48:42.156: WARN/Zygote(29): Preloaded drawable resource #0x1080230 (res/drawable-mdpi/popup_bottom_medium.9.png) that varies with configuration!!
        05-03 17:48:42.276: WARN/Zygote(29): Preloaded drawable resource #0x1080231 (res/drawable-mdpi/popup_center_bright.9.png) that varies with configuration!!
        05-03 17:48:42.376: WARN/Zygote(29): Preloaded drawable resource #0x1080232 (res/drawable-mdpi/popup_center_dark.9.png) that varies with configuration!!
        05-03 17:48:42.507: WARN/Zygote(29): Preloaded drawable resource #0x1080235 (res/drawable-mdpi/popup_full_dark.9.png) that varies with configuration!!
        05-03 17:48:42.606: WARN/Zygote(29): Preloaded drawable resource #0x1080238 (res/drawable-mdpi/popup_top_bright.9.png) that varies with configuration!!
        05-03 17:48:42.696: WARN/Zygote(29): Preloaded drawable resource #0x1080239 (res/drawable-mdpi/popup_top_dark.9.png) that varies with configuration!!
        05-03 17:48:42.946: WARN/Zygote(29): Preloaded drawable resource #0x108006d (res/drawable/progress_indeterminate_horizontal.xml) that varies with configuration!!
        05-03 17:48:43.076: WARN/Zygote(29): Preloaded drawable resource #0x108023f (res/drawable/progress_small.xml) that varies with configuration!!
        05-03 17:48:43.456: WARN/Zygote(29): Preloaded drawable resource #0x1080240 (res/drawable/progress_small_titlebar.xml) that varies with configuration!!
        05-03 17:48:43.957: WARN/Zygote(29): Preloaded drawable resource #0x1080262 (res/drawable-mdpi/scrollbar_handle_horizontal.9.png) that varies with configuration!!
        05-03 17:48:44.036: WARN/Zygote(29): Preloaded drawable resource #0x1080263 (res/drawable-mdpi/scrollbar_handle_vertical.9.png) that varies with configuration!!
        05-03 17:48:44.176: WARN/Zygote(29): Preloaded drawable resource #0x1080071 (res/drawable/spinner_dropdown_background.xml) that varies with configuration!!
        05-03 17:48:44.317: WARN/Zygote(29): Preloaded drawable resource #0x1080326 (res/drawable-mdpi/title_bar_shadow.9.png) that varies with configuration!!
        05-03 17:48:44.496: WARN/Zygote(29): Preloaded drawable resource #0x10801c6 (res/drawable-mdpi/indicator_code_lock_drag_direction_green_up.png) that varies with configuration!!
        05-03 17:48:44.607: WARN/Zygote(29): Preloaded drawable resource #0x10801c7 (res/drawable-mdpi/indicator_code_lock_drag_direction_red_up.png) that varies with configuration!!
        05-03 17:48:45.956: WARN/Zygote(29): Preloaded drawable resource #0x10801c8 (res/drawable-mdpi/indicator_code_lock_point_area_default.png) that varies with configuration!!
        05-03 17:48:46.407: WARN/Zygote(29): Preloaded drawable resource #0x10801c9 (res/drawable-mdpi/indicator_code_lock_point_area_green.png) that varies with configuration!!
        05-03 17:48:46.696: WARN/Zygote(29): Preloaded drawable resource #0x10801ca (res/drawable-mdpi/indicator_code_lock_point_area_red.png) that varies with configuration!!
        05-03 17:48:56.307: ERROR/BatteryService(170): usbOnlinePath not found
        05-03 17:48:56.336: ERROR/BatteryService(170): batteryVoltagePath not found
        05-03 17:48:56.350: ERROR/BatteryService(170): batteryTemperaturePath not found
        05-03 17:48:56.696: ERROR/SurfaceFlinger(170): Couldn't open /sys/power/wait_for_fb_sleep or /sys/power/wait_for_fb_wake
        05-03 17:48:57.847: WARN/SurfaceFlinger(170): ro.sf.lcd_density not defined, using 160 dpi by default.
        05-03 17:49:02.116: WARN/UsageStats(170): Usage stats version changed; dropping
        05-03 17:49:05.036: WARN/zipro(182): Unable to open zip '/data/local/bootanimation.zip': No such file or directory
        05-03 17:49:06.297: WARN/zipro(182): Unable to open zip '/system/media/bootanimation.zip': No such file or directory
        05-03 17:49:50.637: WARN/PackageManager(170): Running ENG build: no pre-dexopt!
        05-03 17:53:59.196: WARN/PackageManager(170): Unknown permission com.google.android.providers.gmail.permission.WRITE_GMAIL in package com.android.settings
        05-03 17:53:59.238: WARN/PackageManager(170): Unknown permission com.google.android.providers.gmail.permission.READ_GMAIL in package com.android.settings
        05-03 17:53:59.286: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.settings
        05-03 17:53:59.517: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.providers.contacts
        05-03 17:53:59.656: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.cp in package com.android.providers.contacts
        05-03 17:53:59.717: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.mail in package com.android.contacts
        05-03 17:53:59.796: WARN/PackageManager(170): Unknown permission android.permission.ADD_SYSTEM_SERVICE in package com.android.phone
        05-03 17:54:00.126: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.development
        05-03 17:54:00.206: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.ALL_SERVICES in package com.android.development
        05-03 17:54:00.206: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.YouTubeUser in package com.android.development
        05-03 17:54:00.237: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.ACCESS_GOOGLE_PASSWORD in package com.android.development
        05-03 17:54:00.258: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.browser
        05-03 17:54:25.456: WARN/ResourceType(170): Resources don't contain package for resource number 0x7f0700e5
        05-03 17:54:25.486: WARN/ResourceType(170): Resources don't contain package for resource number 0x7f020031
        05-03 17:54:25.536: WARN/ResourceType(170): Resources don't contain package for resource number 0x7f020030
        05-03 17:54:25.576: WARN/ResourceType(170): Resources don't contain package for resource number 0x7f050000
        05-03 17:54:38.708: WARN/SharedBufferStack(182): waitForCondition(LockCondition) timed out (identity=0, status=0). CPU may be pegged. trying again.

    Read the article

  • How do I set up TFS PowerShell Snapin

    - by TheSean
    I have installed the TFS Power Tools and I am trying to use the PowerShell snap-in, but I can't figure out how to set it up. When I look in the install folder, I only see the following 5 DLLs:

        Microsoft.TeamFoundation.PowerToys.Client.dll
        Microsoft.TeamFoundation.PowerToys.Common.dll
        Microsoft.TeamFoundation.PowerToys.Controls.dll
        Microsoft.VisualStudio.TeamFoundation.PowerToys.Common.dll
        Microsoft.VisualStudio.TeamFoundation.PowerToys.dll

    I used installutil to install each one, and then I used the following PowerShell code to see which cmdlets were installed so I could add the snap-in. It looks like only a handful exist in those DLLs, and these commands are not useful to me right now.

        PS H:\> get-pssnapin -registered

        Name        : TfsBPAPowerShellSnapIn
        PSVersion   : 1.0
        Description : This is a PowerShell snap-in that includes Team Foundation Server cmdlets.

        PS H:\> get-command -pssnapin TfsBPAPowerShellSnapIn

        CommandType     Name                  Definition
        -----------     ----                  ----------
        Cmdlet          Get-MsiProductId      Get-MsiProductId [[-ProductIndex] <Int32>] [[-Mo...
        Cmdlet          Get-TfsDBServer       Get-TfsDBServer [[-DBPath] <String>] [-Verbose] ...
        Cmdlet          Get-TfsHealthPing     Get-TfsHealthPing [-Verbose] [-Debug] [-ErrorAct...
        Cmdlet          Get-TfsSqlData        Get-TfsSqlData [[-ConnectionBuilder] <SqlConnect...

    Thanks.

    Read the article

  • [SOLVED] BitmapFactory.decodeStream returning null when options are set.

    - by Robert Foss
    Hi! I'm having issues with BitmapFactory.decodeStream(inputStream). When I use it without options, it will return an image. But when I use it with options, as in .decodeStream(inputStream, null, options), it never returns a Bitmap. What I'm trying to do is downsample a Bitmap before I actually load it, to save memory. I've read some good guides, but none of them use .decodeStream:

        httpIM:NOT//nornalbion.SPAMcom/blog/?p=143
        httpIM:NOT//kfb-android.blogspot.SPAMcom/2009/04/image-processing-in-android.html

    WORKS JUST FINE:

        URL url = new URL(sUrl);
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        InputStream is = connection.getInputStream();
        Bitmap img = BitmapFactory.decodeStream(is);

    DOESNT WORK:

        InputStream is = connection.getInputStream();
        Bitmap img = BitmapFactory.decodeStream(is, null, options);

    The full code:

        InputStream is = connection.getInputStream();
        Options options = new BitmapFactory.Options();
        options.inJustDecodeBounds = true;
        BitmapFactory.decodeStream(is, null, options);

        Boolean scaleByHeight = Math.abs(options.outHeight - TARGET_HEIGHT) >= Math.abs(options.outWidth - TARGET_WIDTH);
        if (options.outHeight * options.outWidth * 2 >= 200*100*2) {
            // Load, scaling to smallest power of 2 that'll get it <= desired dimensions
            double sampleSize = scaleByHeight
                ? options.outHeight / TARGET_HEIGHT
                : options.outWidth / TARGET_WIDTH;
            options.inSampleSize = (int)Math.pow(2d, Math.floor(Math.log(sampleSize)/Math.log(2d)));
        }

        // Do the actual decoding
        options.inJustDecodeBounds = false;
        Bitmap img = BitmapFactory.decodeStream(is, null, options);
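    Since the title is marked [SOLVED], it is worth spelling out the usual cause: the bounds-only pass consumes the InputStream, so the second decodeStream call reads an already-exhausted stream and returns null. Below is a minimal sketch of one common fix, re-opening the connection for the real decode; the class and method names (BitmapFetch, fetchScaled) and the target-size parameters are illustrative, not from the original post.

        import java.io.IOException;
        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        public final class BitmapFetch {

            // Hypothetical helper: download an image downsampled by a power-of-two sample size.
            static Bitmap fetchScaled(String sUrl, int targetWidth, int targetHeight) throws IOException {
                // Pass 1: read only the header to learn the image dimensions.
                BitmapFactory.Options options = new BitmapFactory.Options();
                options.inJustDecodeBounds = true;
                HttpURLConnection boundsConn = (HttpURLConnection) new URL(sUrl).openConnection();
                InputStream boundsStream = boundsConn.getInputStream();
                BitmapFactory.decodeStream(boundsStream, null, options);
                boundsStream.close();
                boundsConn.disconnect();

                // Same power-of-two sample-size computation as in the question.
                boolean scaleByHeight = Math.abs(options.outHeight - targetHeight)
                        >= Math.abs(options.outWidth - targetWidth);
                double sampleSize = scaleByHeight
                        ? (double) options.outHeight / targetHeight
                        : (double) options.outWidth / targetWidth;
                options.inSampleSize = Math.max(1,
                        (int) Math.pow(2d, Math.floor(Math.log(sampleSize) / Math.log(2d))));

                // Pass 2: the first stream is already consumed, so open a fresh one.
                options.inJustDecodeBounds = false;
                HttpURLConnection imageConn = (HttpURLConnection) new URL(sUrl).openConnection();
                InputStream imageStream = imageConn.getInputStream();
                try {
                    return BitmapFactory.decodeStream(imageStream, null, options);
                } finally {
                    imageStream.close();
                    imageConn.disconnect();
                }
            }
        }

    Wrapping the stream in a BufferedInputStream and using mark()/reset() can also work, but only if the mark buffer covers everything the bounds pass reads, so opening a fresh stream is the more predictable fix.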

    Read the article

  • RIA IDE - Visual Studio 2010, FlashBuilder and ExtJS Designer

    - by Ronaldo Junior
    I've been playing around with Flex 4 - the trial version - and it's really quite good in terms of layout development. Since I'm starting a new RIA project and we're still in the "what platform" phase, I'd better listen to what you guys have to say. In Flash Builder I can do the layout and mix in the back-end script code really fast - version 4 even gives me the power to test the functionality of the server-side script, and since I can use Flash Builder as an Eclipse plugin, I end up with only one IDE. On the other hand, with Silverlight, it looks like if I want a "drag and drop" approach to building my interface I would need Expression Blend plus Visual Studio (full or Express version). Is there a way out of that? I mean, will Visual Studio 2010 come with some sort of Silverlight component palette so that you can easily drag and drop components onto your interface, like you would with WPF? I don't wanna use Blend - it's way too much for what we need - and hand-coding XAML is out of the question as well. A third approach is to use ExtJS - they have a new designer IDE that looks promising.

    Read the article

  • Slow XML-RPC in Windows 7 with XML-RPC.NET

    - by Emre Sahin
    I'm considering using XML-RPC.NET to communicate with a Linux XML-RPC server written in Python. I tried a sample application (MathApp) from Cook Computing's XML-RPC.NET, but it took 30 seconds for the app to add two numbers, on the same LAN as the server. I also tried running a simple client written in Python on Windows 7 to call the same server, and it responded in 5 seconds. The machine has 4 GB of RAM and comparable processing power, so that is not the issue. I then tried calling the server from a Windows XP system with Java and PHP. Both responses were pretty fast, almost instant. The server also responds quickly on localhost, so I don't think the latency arises from the server. My googling turned up some problems with Windows' use of IPv6, but our call to the server uses an IPv4 address (not a hostname) in the same subnet. Anyway, I turned off IPv6, but nothing changed. Are there any more ways to check for possible causes of this latency?

    Read the article

  • newline-ignoring diff / diff across multiple lines / reflow-ignoring diff

    - by Adam
    Does anybody know of a diff-like tool that can show me the changes between two text files, but ignore changes in whitespace, including newlines? Here's an example:

        the quick brown fox jumped over the lazy bear.
        the quick brown fox jumped over the lazy bear.
        the quick brown fox jumped over the lazy bear.
        the quick brown fox jumped over the lazy bear.

    becomes

        quick brown fox jumped over the lazy bear. the
        quick brown fox jumped over the lazy bear. the
        quick brown fox jumped over the lazy bear. the
        quick brown fox jumped over the lazy bear.

    All I did was delete one word and reflow the text, but "diff -b" detects a change on every line (as it should; I'm not saying this is a bug in diff). For large LaTeX files this is a major problem: change one word in a long paragraph and the diff you get back is basically useless. By the way, I'm aware that this requires way more computational power than the usual lines-are-atomic diff. I'm only doing this on small human-generated files and am happy to wait a long time if I have to.

    Read the article

< Previous Page | 197 198 199 200 201 202 203 204 205 206 207 208  | Next Page >