Search Results


  • eSTEP Newsletter November 2012

    - by mseika
    Dear Partners,

    We would like to inform you that the November '12 issue of our Newsletter is now available. The issue contains information on the following topics:

    News from Corp: Oracle Celebrates 25 Years of SPARC Innovation; IDC White Paper Finds Growing Customer Comfort with Oracle Solaris Operating System; Oracle Buys Instantis; Pillar Axiom OpenWorld Highlights; Announcement: Oracle Solaris 11.1 Availability (data sheet, new features, FAQs, corporate pages, internal blog, download links, Oracle shop); Announcing StorageTek VSM 6; Announcement: Oracle Solaris Cluster 4.1 Availability (new features, FAQs, cluster corporate page, download site, shop for media); Announcement: Oracle Database Appliance 2.4 patch update becomes available.

    Technical Section: Oracle white papers on SPARC SuperCluster; Understanding Parallel Execution; With LTFS, Tape is Gaining Storage Ground, with an additional link to How to Create Oracle Solaris 11 Zones with Oracle Enterprise Manager Ops Center; Provisioning Capabilities of Oracle Enterprise Manager Ops Center 12c; Maximizing Your SPARC T4 Oracle Solaris Application Performance, with the following articles: SPARC T4 Servers Set World Record on Siebel CRM 8.1.1.4 Benchmark, SPARC T4-Based Highly Scalable Solutions Post New World Record on SPECjEnterprise2010 Benchmark, SPARC T4 Server Delivers Outstanding Performance on Oracle Business Intelligence Enterprise Edition 11g; Oracle Sun ZFS Storage Appliance Reference Architecture for VMware vSphere 4; Why 4K? - George Wilson's ZFS Day Talk; Pillar Axiom 600, with connected subjects: Oracle Introduces Pillar Axiom Release 5 Storage System Software, Driving Down the High Cost of Storage, Thin Provisioning with Pillar Axiom 600, Pillar Axiom 600 - System Overview and Architecture; Migrate to Oracle's SPARC Systems; Top 5 Reasons to Migrate to Oracle's SPARC Systems.

    Learning & Events: Recently delivered Techcasts: Learning Paths; Oracle Database 11g: Database Administration (New) - Learning Path; Webcast: Drill Down on Disaster Recovery; What Are Oracle Users Doing to Improve Availability and Disaster Recovery; SAP NetWeaver and Oracle Exadata Database Machine.

    References: ARTstor Selects Oracle's Sun ZFS Storage 7420 Appliances to Support Rapidly Growing Digital Image Library; Scottish Widows Cuts Sales Administration 20%, Reduces Time to Prepare Reports by 75%, and Achieves Return on Investment in First Year; Oracle's CRM Cloud Service Powers Innovation: Applications on Demand, Technology on Demand.

    How To: How to Migrate Your Data to Oracle Solaris 11 Using Shadow Migration; Using svcbundle to Create SMF Manifests and Profiles in Oracle Solaris 11; How to Prepare a Sun ZFS Storage Appliance to Serve as a Storage Device with Oracle Enterprise Manager Ops Center 12c; Command Summary: Basic Operations with the Image Packaging System in Oracle Solaris 11; How to Update to Oracle Solaris 11.1 Using the Image Packaging System; How to Migrate Oracle Database from Oracle Solaris 8 to Oracle Solaris 11; Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster; Ease the Chaos with Automated Patching: Oracle Enterprise Manager Cloud Control 12c; Book excerpt: Oracle Exalogic Elastic Cloud Handbook.

    You will find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. The link to the portal is shown below.

    URL: http://launch.oracle.com/
    PIN: eSTEP_2011

    Previously published Newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore, and any feedback is appreciated to help us improve the service and information we deliver.

    Thanks and best regards,
    Partner HW Enablement EMEA

    Read the article

  • Security Access Control With Solaris Virtualization

    - by Thierry Manfe-Oracle
    Numerous Solaris customers consolidate multiple applications or servers on a single platform. The resulting configuration consists of many environments hosted on a single infrastructure, and security constraints sometimes exist between these environments. Recently, a customer consolidated many virtual machines belonging to both their Intranet and Extranet on a pair of SPARC Solaris servers interconnected through InfiniBand. Virtual machines were mapped to Solaris Zones, and one security constraint was to prevent SSH connections between the Intranet and the Extranet. This case study gives us the opportunity to understand how the Oracle Solaris Network Virtualization Technology (a.k.a. Project Crossbow) can be used to control outbound traffic from Solaris Zones.

    Solaris Zones from both the Intranet and Extranet use an InfiniBand network to access a ZFS Storage Appliance that exports NFS shares. Solaris global zones on both SPARC servers mount iSCSI LUs exported by the Storage Appliance, and the non-global zones are installed on these iSCSI LUs. With no security hardening, if an Extranet zone gets compromised, the attacker could try to use the Storage Appliance as a gateway to the Intranet zones or, even worse, to the global zones, as all the zones are reachable from this node.

    One solution consists of using Solaris Network Virtualization Technology to stop outbound SSH traffic from the Solaris Zones. The virtualized network stack provides per-network-link flows. A flow classifies network traffic on a specific link. As an example, on the network link used by a Solaris Zone to connect to the InfiniBand, a flow can be created for TCP traffic on port 22, in other words a flow for SSH traffic. A maximum bandwidth can be specified for that flow and, if set to zero, the traffic is blocked. Last but not least, flows are created from the global zone, which means that even with root privileges in a Solaris Zone an attacker cannot disable or delete a flow. With the flow approach, the outbound traffic of a Solaris Zone is controlled from outside the zone. Schema 1 describes the new network setting once the security has been put in place.

    Here are the instructions to create a Crossbow flow as used in Schema 1:

    (GZ)# zoneadm -z zonename halt
    ...halts the Solaris Zone.

    (GZ)# flowadm add-flow -l iblink -a transport=TCP,remote_port=22 -p maxbw=0 sshFilter
    ...creates a flow on the IB partition "iblink" used by the zone to connect to the InfiniBand. This IB partition can be identified by intersecting the output of the commands 'zonecfg -z zonename info net' and 'dladm show-part'. The flow is created on port 22, for TCP traffic, with a zero maximum bandwidth. The name given to the flow is "sshFilter".

    (GZ)# zoneadm -z zonename boot
    ...restarts the Solaris Zone now that the flow is in place.

    Solaris Zones and Solaris Network Virtualization enable SSH access control on InfiniBand (and on Ethernet) without the extra cost of a firewall. With this approach, no change is required on the InfiniBand switch. All the security enforcement is put in place at the Solaris level, minimizing the impact on the overall infrastructure. The Crossbow flows come in addition to many other security controls available with Oracle Solaris, such as IP Filter and Role-Based Access Control, that can be used to tackle security challenges.

    Read the article

  • Incentivizing Work with Development Teams

    - by MarkPearl
    Recently I saw someone on Twitter asking about incentives and whether anyone had past experience with incentivizing work. I promised to respond with some of the experiences I have had in the past, so here goes...

    **Disclaimer** - these are my experiences with incentives, generally in software development; in some other industries this may not be applicable. This is also my thinking at this point in time; with more experience my opinion may change.

    Incentivize at the level that you want people to group at

    If you want to promote a team mentality, incentivize teams. If you want to promote an individual mentality, incentivize individuals. There is nothing worse than mixing this up. Some organizations put a lot of effort into establishing teams and team mentalities but reward individuals. This counteracts the resources they have put towards establishing a team mentality. In the software projects that I work with, we want to promote cross-functional teams that collaborate. Personally, if I was on a team and knew that there was an opportunity to work on a critical component of the system, and that by doing so I would get a bigger bonus, then I would be hesitant to include other people in solving that problem. Thus, I would hinder the team's efforts to be cross-functional and reduce collaboration levels.

    Does that mean everyone in the team should get an even share of an incentive? In most situations I would say yes, even though this may feel counter-intuitive. I have heard arguments put forward that if “person X contributed more than person Y then they should be rewarded more”. This may sound controversial, but I would rather treat people according to how you would like them to perform, not where they currently are. To add to this approach, if someone is freeloading, you can bet your bottom dollar that the team is going to make this a lot more transparent if they feel that individual is going to be rewarded at the same level as everyone else.

    Bad incentives promote destructive work

    If you are going to incentivize people, pick your incentives very carefully. I had an experience once with a salesperson who was told they would get a bonus provided that they met an ordering target with a particular supplier. What did this person do? They sold everything at cost for the next month or so. They reached the goal, but the company didn't gain anything from it. It was a bad incentive. Expect the same with development teams: if you incentivize zero bug levels, you will get zero code committed to the solution. If you incentivize lines of code, you will get many, many lines of bad code.

    Is there such a thing as a good incentive?

    Money-wise, I am not sure there is. I would much rather encourage organizations to pay their people what they are worth upfront. I would also advise against paying money to teams as an incentive, or even as a bonus or reward for reaching a milestone. Rather have a breakaway for the team that promotes team building as a reward if they reach a milestone than pay them more money. I would also advise against making the incentive the reason for them to reach the milestone. If this becomes the norm, it encourages people to only do their job if there is an incentive at the end of the line. This is not a behaviour one wants to encourage. If the team or individual is in the right mind-set, they should not work any harder than they are right now with normal pay.

    Read the article

  • Extreme Makeover, Phone Edition: Comcast's xfinity

    Mobile Makeover

    For many companies the first foray into Windows Phone 7 (WP7) may be in porting their existing mobile apps. It is tempting to simply transfer existing functionality, avoiding the additional design costs. Readdressing business needs and taking advantage of the WP7 platform can reduce cost and is essential to a successful re-launch. To better understand the advantage of new development, let's examine a conceptual upgrade of Comcast's existing mobile app.

    Before

    Comcast has a great mobile app that provides several key features: the ability to browse the lineup using a guide, a client for Comcast email accounts, an On Demand gallery, and much more. We will leverage these and build on them using some of the incredible WP7 features.

    After

    With the proliferation of DVRs (Digital Video Recorders) and a variety of media devices (TV, PC, Mobile), content providers are challenged to find creative ways to build their brands. Every client touch point must provide both value-added services as well as opportunities for marketing and up-sell; WP7 makes it easy to focus on those opportunities. The new app is an excellent vehicle for presenting Comcast's newly rebranded TV, Voice, and Internet services. These services now fly under the banner of xfinity and have been expanded to provide the best experience for Comcast customers. The Windows Phone 7 app will increase the surface area of this service revolution.

    The home menu is simplified and highlights Comcast's Triple Play: Voice, TV, and Internet. The inbox has been replaced with a messages view, and message management is handled by a WP7 hub. The hub presents emails, tweets, and IMs from Comcast and other viewers the user follows on Twitter. The popular view orders shows based on the user's viewing history and current cable package. The first show, Glee, is both popular and participating in a conceptual co-marketing effort, so it receives prime positioning. The second spot goes to a hit show on a premium channel, in this example HBO's The Pacific, encouraging viewers to upgrade for this premium content. The remaining spots are ordered based on viewing history and popularity. Tapping the play button moves the user to the theatre where they can watch previews or full episodes streaming from Fancast. Tapping an extra presents the user with show details as well as interactive content that may be included as part of co-marketing efforts.

    Co-Marketing with Dynamic Content

    The success of Comcast's services is tied to the success of the networks and shows it purveys, making co-marketing efforts essential. In this concept FOX is co-marketing its popular show Glee. A customized panorama is updated with the latest gleeks' tweets, streaming HD episodes, and extras featuring photos and video of the cast. If WP7 apps can be dynamically extended with web-hosted .xap files, including sandboxed partner experiences would enable interactive features such as the Gleek Peek, in which a viewer can select a character from a panorama to view the actor's profile. This dynamic inline experience has a tailored appeal to aspiring creatives and is technically possible with Windows Phone 7.

    Summary

    The conceptual Comcast mobile app for Windows Phone 7 highlights just a few of the incredible experiences and business opportunities that can be unlocked with this latest mobile solution. It is critical that organizations recognize and take full advantage of these new capabilities.
Simply porting existing mobile applications does not leverage these powerful tools; re-examining existing applications and upgrading them to Windows Phone 7 will prove essential to the continued growth and success of your brand.

    Read the article

  • The best computer ever

    - by Jeff
    (This is a repost from my personal blog… wow… I need to write more technical stuff!) About three years and three months ago, I bought a 17" MacBook Pro, and it turned out to be the best computer I've ever owned. You might think that every computer with better specs is automatically better than the last, but that hasn't been my experience. My first one was a Sony, back in the Pentium III days, and it cost an astonishing $2,500. That was even more ridiculous in 1999 dollars. It had a dial-up modem, and a CD-ROM, built-in! It may have even played DVDs. A few years later I bought an HP, and it ended up being a pile of shit. The power connector inside came loose from the board, and on occasion would even short. In 2005, I bought a Dell, and it wasn't bad. It had a really high-resolution screen (complete with dead pixels, a problem in those days), and it was the first laptop I felt I could do real work on. When 2006 rolled around, Apple started making computers with Intel CPUs, and I bought the very first one the week it came out. I used Boot Camp to run Windows. I still have it in its box somewhere, and I used it for three years. The current 17" was new in 2009. The goodness was largely rooted in having a big screen with lots of dots. This computer has been the source of hundreds of blog posts, tens of thousands of lines of code, video and photo editing, and of course, a whole lot of Web surfing. It connected to corpnet at Microsoft, WiFi in Hawaii, and has presented many a deck. It has traveled with me tens of thousands of miles. Last year, I put a solid state drive in it, and it was like getting a new computer. I can boot up a Windows 7 VM in about 19 seconds. Having 8 gigs of RAM has always been fantastic. Everything about it has been fast and fun. When new, the battery (when not using VMs) could get as much as 10 hours. I can still do 7 without much trouble. After 460 charge cycles, the battery health is still between 85 and 90%. The only real negative has been the size and weight. It's only an inch thick, but naturally it's pretty big with a 17" screen. You don't get battery life like that without a huge battery, either, so it's heavy. It was never a deal breaker, but sometimes, on a long haul across a large airport, you know you're carrying it. Today, Apple announced a new, thinner and lighter 15" laptop, with twice the RAM and CPU cores, and four times the screen resolution. It basically handles my size and weight issues while retaining the resolution, and it still costs less than my 17" did. So I ordered one. Three years is an excellent run, but I kind of budgeted for a new workhorse this year anyway. So if you're interested in a 17" MacBook Pro with a Core 2 Duo 2.66 GHz CPU, 8 gigs of RAM and a 320 gig hard drive (sorry, I'm keeping the SSD), I have one to sell. They've apparently discontinued the 17", which is going to piss off the video community. It's in excellent condition, with a few minor scratches, but I take care of my stuff.

    Read the article

  • D'Arcy's Book Club - The New Strategic Selling

    - by D'Arcy Lussier
    The New Strategic Selling
    Miller and Heiman
    Amazon.ca | Amazon.com | Chapters

    Everybody is a salesman. Every day, without knowing it, we sell something to someone. Now, the typical vision people think of when they hear the word “sales” is the sleazy used car salesperson who does whatever they can to get you to buy the clunker on their lot. But selling is not an action tied to money and products. Selling is about convincing people to see your point of view and act on it. If you want your company to cover a trip to a conference, you may have to sell the idea to your boss. If you want to buy that new big screen TV, you have to sell the idea to your significant other. If you want to go on a weekend fishing trip with the boys, you might be called in to help sell the idea to your buddy's wife. We all sell, but we don't all sell very well.

    So enter The New Strategic Selling, a book based on the sales course put on by the Miller-Heiman group. In fact, this isn't really a “New” strategy to selling, as it's been around for a number of years. But the concepts they present, the ideas about selling, are still very radical based on what most of us have experienced. Gone is the high-pressure, win-at-all-cost, Glengarry Glen Ross style of sales… instead the book presents a framework to switch to need-based selling. It's the idea that instead of going in raving about a product or service, you build a relationship where the buyer expresses what their needs are and your response is to present a solution that best fits that need. Instead of focussing on the amount of money you can squeeze out of a client, you focus on whether everyone wins, that they receive win-results from the engagement, and that repeat business is developed over time, delivering value over and over again.

    The great thing about the book is that what it teaches… things like how to identify different buying influencers, how to prepare for meetings, techniques to solicit information about what the buyer is really thinking/feeling… these things are entirely applicable in *any* situation where you need to sell to someone. And remember: selling is convincing people to see your point of view and act on it. So that new big screen TV you want to buy but need to convince your wife on? This book can help you. That training opportunity you want your company to send you on? This book can help you. The upgrade to your community park that you want to lobby the local civic authorities for? This book can help you.

    The book is a bit wordy. I found that the length could have been reduced and the points still would have gotten across. That's really the only knock that I have, though; the insight it provides is so worthwhile that having to chew through extra words is well worth it. You definitely don't have to be a professional salesperson to benefit from this book.

    Rating: 4/5

    Read the article

  • When things go awry

    - by Phil Factor
    The moment the Entrepreneur opened his mouth on prime-time national TV, spelled out the URL and waxed big on how exciting ‘his’ new website was, I knew I was in for a busy night. I'd designed and built it. All at once, half a million people tried to log into the website. Although all my stress-testing paid off, I have to admit that the network locked up tight long before there was any danger of a database or website problem.

    Soon afterwards, the Entrepreneur and the Big Boss were there in the autopsy meeting. We picked through all our systems in detail to see how they'd borne the unexpected strain. Mercifully, in view of the sour mood of the Big Boss, it turned out that the only thing we could have done better was buy a bigger pipe to and from the internet. We'd specified that ‘big pipe’ when designing the system. The Big Boss had then railed at the cost and so we'd subsequently compromised. I felt that my design decisions were vindicated. The Big Boss brooded for a while. Then he made the significant comment: “What really ****** me off is the fact that, for ten minutes, we couldn't take people's money.” At that point I stopped feeling smug. Had the internet connection been better, the system would have reached its limit and failed rather precipitously, and that wasn't what he wanted. Then it occurred to me that what had gummed up the connection was all those images on the site that had made it so impressive for the visitors. If there had been a way to automatically pare down the site to the bare essentials under stress… Hmm.

    I began to consider disaster recovery in the broadest sense: maintaining a service in spite of unusual or unexpected events. What he said makes a lot of sense: sacrifice whatever isn't essential to keep the core service running when we approach the capacity limits. Maybe in IT we should borrow (or revive) the business concept of the ‘Skeleton service’, maintaining only the priority parts under stress, using a process that is well-prepared and carefully rehearsed. How might this work? Whatever the event we have to prepare for, it is all about understanding the priorities; knowing what one can dispense with when the going gets tough. In the event of database disaster, it's much faster to deploy a skeletal system with only the essential data than to restore the entire system, though there would have to be a reconciliation process to update the revived database retrospectively, once the emergency was over. It isn't just the database that could be designed for resilience. One could prepare for unusually high traffic on a website by designing a system that degraded gradually to a ‘skeletal’ site, one that maintained the commercial essentials without fat images, JavaScript libraries and razzmatazz. All of this is what the Big Boss scathingly called ‘a mere technicality’. It seems to me that what is needed first is a culture of application and database design which acknowledges that we live in a very imperfect world, and reacts accordingly when things go awry.

    Read the article

  • “Apparently, you signed a software services agreement without fully understanding it.”

    - by Dave Ballantyne
    I am not a lawyer. Let me say that again: I am not a lawyer. Today's Dilbert has prompted me to post about my recent experience with SQL Server licensing.

    I'm in the technical realm and rarely have much to do with purchasing and licensing. I say “I need”; budget realities dictate whether I actually get. However, I do keep my ear to the ground and, due to my community involvement, I know, or at least have an understanding of, some licensing restrictions.

    Due to a misunderstanding, Microsoft Licensing stated that we needed licenses for our standby servers. I knew that that was not the case, and a quick tweet confirmed this. So after composing an email stating exactly what the machines in question were used for, i.e. log shipped to and used in a disaster recovery scenario only, and posting several TechNet articles to back this up, we saved two Enterprise Edition licences, a not inconsiderable cost.

    However, during this discussion I was made aware of another ‘legalese’ document that could completely override the referenced articles, and anything I knew, or thought I knew, about SQL Server licensing. Personally, I had no knowledge of this. The “Product Use Rights” agreement would appear to be the volume licensing equivalent of the “End User License Agreement”, the click-throughs we all know and ignore. Here is a direct quote from Microsoft Licensing, when asked for clarification:

    “Thanks for your email. Just to give some background on the Product Use Rights (PUR), licenses acquired through volume licensing are bound by the most recent PUR at the time of license acquisition. The link for the current PUR and PUR archive is http://www.microsoft.com/licensing/about-licensing/product-licensing.aspx. Further to this, products acquired through boxed product or pre-installed on hardware (OEM) are bound by the End User License Agreement (EULA). The PUR will explain limitations, license requirements and rulings on areas like multiplexing, virtualization, processor licensing, etc. When an article will appear on a Microsoft site or blog describing the licensing of a product, it will be using the PUR as a base. Due to the writing style or language used by the person writing areas of the website or technical blogs, the PUR is what you should use as a rule and not any of the other media. The PUR is updated quarterly and will reference every product available at that time working on the latest version unless otherwise stated. The crux of this is that the PUR is written after extensive discussions between the different branches of Microsoft (legal, technical, etc) and the wording is then approved. This is not always the case for some pages explaining licensing as they are merely intended to advise and not subject to the intense scrutiny as the PUR.”

    So, exactly what does that mean? My take: this is a living document, “updated quarterly”, though presumably this could be done on a whim and a fancy. It could state that you are only licensed if, during install, you stand in a corner juggling, and that photographic evidence is required. A plainly ridiculous demand, but what else could it override, or what new requirements could it state, that change your existing understanding of the product or your legal usage of it?

    As I say, I'm not a lawyer, but are you checking the PUR prior to purchase?

    Read the article

  • HERMES Medical Solutions Helps Save Lives with MySQL

    - by Bertrand Matthelié
    HERMES Medical Solutions was established in 1976 in Stockholm, Sweden, and is a leading innovator in medical imaging hardware/software products for health care facilities worldwide. HERMES delivers a plethora of different medical imaging solutions to optimize hospital workflow. HERMES' advanced algorithms make it possible to detect the smallest changes under therapy, which is important and necessary for optimizing different therapeutic methods and doses.

    Challenges

    Fighting illness and disease requires state-of-the-art imaging modalities and software in order to diagnose accurately, stage disease appropriately and select the best treatment available. This meant selecting and implementing a new database platform that would deliver the performance, reliability, security and flexibility required by the high-end medical solutions offered by HERMES.

    Solution

    The decision was to migrate from an in-house database to an embedded SQL database powering the HERMES products, delivered either as software, as integrated hardware and software solutions, or via the cloud in a software-as-a-service configuration. After evaluating several databases, HERMES selected MySQL based on its high performance, ease of use and integration, and low Total Cost of Ownership. On average, between 4 and 12 terabytes of data are stored in the MySQL databases underpinning the HERMES solutions; the data generated by each medical study is stored for 10 years or more after the treatment was performed. MySQL-based HERMES systems also allow doctors worldwide to conduct new drug research projects leveraging the large amount of medical data collected. Hospitals and other HERMES customers worldwide highly value the “zero administration” capabilities and reliability of MySQL, enabling them to perform medical analysis without any downtime. Relying on MySQL as their embedded database, the HERMES team has been able to increase their focus on further developing their clinical applications. HERMES Medical Solutions could also leverage the Oracle Financing payment plan to spread its investment over time and make the MySQL choice even more valuable.

    “MySQL has proven to be an excellent database choice for us. We offer high-end medical solutions, and MySQL delivers the reliability, security and performance such solutions require.” - Jan Bertling, CEO

    Read the article

  • JTF Translation Festival 2011

    - by user13133135
    It's been a while since the last post... I have been working on machine translation (MT) and post-editing (PE) for Japanese. Last year was my first step in the MT+PE area, and I would take this year as an advanced step. I plan to talk on Post Editing 2011 (Advanced Step) at the JTF Translation Festival.

    Only 5 days before applications are due!

    21st JTF Translation Festival
    Date: Nov 29, 2011 (Tuesday), 9:30-20:30 (gates open at 9:00)
    Place: Arcadia Ichigaya, Tokyo
    http://www.jtf.jp/jp/festival/festival_top.html

    In this session, I would like to expand the thoughts on “how to best utilize MT and PE”, from the views of both client and translator. I will show some examples of post-editing as a guideline for the best and most effective ways to post-edit Japanese. Also, I will discuss what the best practice is for MT users (clients). The session lasts 90 minutes... sounds a little long for me, but I want to spend more time on discussion than last year. It would be great to exchange thoughts or experiences about MT and PE. What are your concerns or problems in your daily work with MT? If you have some, please bring them to my session at the JTF Translation Festival. Here are my session details (Japanese): http://www.jtf.jp/jp/festival/festival_program.html#koen_04

    Here is the outline of my session: What is the advantage of MT? Does it solve all the problems about cost, resources, and quality? Well, it is not magic, so you cannot expect everything at once. When you have a problem, there are 3 options... 1. Be patient and wait until everything is ready, 2. Run a workaround using anything available now, 3. Find out something completely new and spend time and money. This time, I will focus on Option 2: do something with what we already have. That is, I will discuss how we can best utilize MT in our daily business, from two points of view: the client's and the translator's.

    Looking forward to meeting many people and exchanging thoughts and information!

    Read the article

  • Get More Value From Your Oracle Premier Support Investment

    - by Get Proactive Customer Adoption Team
    The Return on Investment in Support Training

    I'm a typical software user. I've been using spreadsheets almost daily for the past 10 years or so. I know how to enter simple formulas, format cells, import files, and I can sort and filter. Sometimes I even use a pivot table. I never attended training. I learnt everything I know on the fly. Sometimes it was intuitive and easy, other times I had to spend minutes and even hours searching for a solution. Yet when I see what some other people can do with their spreadsheets, I know I'm utilizing maybe 15% of the functionality. Pity; one day I really have to sign up for training. Why haven't I done it yet? Ah, you know, I'm a busy person, I have work to do. And if I need to use a feature that I am unfamiliar with, I'll spend time on it only when I really need it.

    Now wait. When I recall how much time I spent trying to figure out how things work compared to the time I spent doing productive work, I realize it was not insignificant. I'm unable to sum up all the time I spent ‘learning’ on the fly, but I'm sure it's been days or even weeks. And after all this time, I've mastered 15% of its features. If only I had attended training years ago. That investment would have paid back 10 times!

    Working with My Oracle Support is no different. Our customers typically use simple search, create service requests, and download patches. They think they know how to use My Oracle Support. And they're right. They know something, but often they're utilizing only a fragment of My Oracle Support's potential. For the investment that has been made, using only a small subset of the capabilities offered in My Oracle Support leaves value on the table.

    There is much more available in My Oracle Support. Dozens of diagnostic tools and proactive health checks will keep verifying your Oracle environments against best practices that Oracle gathers every day thanks to our comprehensive knowledge management process. Automated patch recommendations will help prevent known issues, and upgrade planning and more are included in My Oracle Support. Why are you not utilizing all of these best practices, capabilities and tools? Is it because you don't have time to invest 2-3 hours of your time to learn about the features? Simply because you think you can learn on the fly like I thought I could? Does learning on the fly how to properly use the Service Request escalation process when you already have a critical issue sound like a good idea?

    My advice is: invest your time now to learn how My Oracle Support can help you prevent issues on your systems. Learn how to find answers faster and resolve problems more efficiently. Understand how to properly complete a service request. Invest in Support training, offered at no additional cost to Oracle Premier Support customers. It will pay back quicker than you think. It will bring you more value than you think. Discover your advantage with Oracle Premier Support's Proactive Portfolio.

    Read the article

  • ADO and Two Way Storage Tiering

    - by Andy-Oracle
    We get asked the following question about Automatic Data Optimization (ADO) storage tiering quite a bit: can you tier back to the original location if the data gets hot again? The answer is yes, but not with standard Automatic Data Optimization policies, at least not reliably. That's not how ADO is meant to operate. ADO is meant to mirror a traditional view of Information Lifecycle Management (ILM) where data will be very volatile when first created, will become less active or cool, and then will eventually cease to be accessed at all (i.e. cold). I think the reason this question gets asked is because customers realize that many of their business processes are cyclical, and the thinking goes that those segments that only get used during month-end or year-end cycles could sit on lower-cost storage when not being used. Unfortunately this doesn't fit very well with the ADO storage tiering model.

    ADO storage tiering is based on the amount of free and used space in the source tablespace. There are two parameters that control this behavior, TBS_PERCENT_USED and TBS_PERCENT_FREE. When the used space in the tablespace exceeds the TBS_PERCENT_USED value, segments specified in storage tiering clause(s) can be moved until the percent of free space reaches the TBS_PERCENT_FREE value. It is worth mentioning that no checks are made for available space in the target tablespace.

    Now, it is certainly possible to create custom functions to control storage tiering, but this can get complicated. The biggest problem is ensuring that there is enough space to move the segment back to tier 1 storage, assuming that that's the goal. This isn't as much of a problem when moving from tier 1 to tier 2 storage, because there is typically more tier 2 storage available. At least that's the premise, since it is supposed to be less costly, lower-performing and higher-capacity storage. In either case, though, if there isn't enough space then the operation fails. In the case of a customized function, the question becomes: do you attempt to free the space so the move can be made, or do you just stop and return false so that the move cannot take place? This is really the crux of the issue. Once you cross into this territory you're really going to have to implement two-way hierarchical storage, and the whole point of ADO was to provide automatic storage tiering. You're probably better off using Heat Map and/or business access requirements and building your own hierarchical storage management infrastructure if you really want two-way storage tiering.
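    For reference, here is a minimal sketch of the standard one-way tiering described above, using the documented ILM interfaces; the threshold values, table name and tablespace name are illustrative assumptions, not taken from any specific system:

        -- Optionally tighten the global tiering thresholds
        -- (the documented defaults are 85% used / 25% free).
        EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_USED, 80);
        EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_FREE, 20);

        -- One-way move: once the tablespace holding SALES crosses
        -- TBS_PERCENT_USED, eligible segments are moved to TIER2_TS
        -- until free space reaches TBS_PERCENT_FREE.
        ALTER TABLE sales ILM ADD POLICY TIER TO tier2_ts;

    Note that there is no corresponding policy clause to tier a segment back up when it becomes hot again, which is exactly the limitation discussed above.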

    Read the article

  • Profiling Startup Of VS2012 – JustTrace Profiler

    - by Alois Kraus
    JustTrace is made by Telerik, which is mainly known for its collection of UI controls. The current version (2012.3.1127.0) does include a performance and memory profiler, which costs 614€ and is currently on sale with a special offer for 306€. It does include one year of free upgrades. The uneven € figures are calculated from the 799€ price and the 50% discount. The UI is already in Metro style and simple to use. Multi-process profiling, attach, and method recording filters are not supported. It looks like JustTrace is, like Ants, a Just My Code profiler. For stuff where you do not have the pdbs, or where you want to dig deeper into the BCL code, you will not get far. After getting the profile data you get in the All Methods grid a plain list with hit count and own time. The method list for all methods is also suspiciously short, which is a clear sign that you will not get far during the analysis of foreign code.

    But at least there is also a memory profiler included. For this I have to choose “Memory Profiler” as the Profiling Type in the first window, to check the memory consumption of VS. There are some interesting numbers to see, but I do really miss the thread stack window from YourKit. How am I supposed to get a clue, when much memory is allocated and the CPU consumption is high, in which places I should look? The Snapshot summary gives a rough overview, which is OK for a first impression.

    Next is the Assemblies view. This gives you a list of all loaded assemblies. Not terribly useful.

    The By Type view gives you exactly what it is supposed to do. You have to keep in mind that this list is filtered by the types you did check in the Assemblies list. The By Type instance list does only show types from assemblies which do not originate from Microsoft. By default mscorlib and System are not checked. That is the reason why, the first time, my By Type window looked nearly empty. The idea behind this feature is to show only your instances, because you are ultimately responsible for the overall memory consumption. I am not sure if I like this feature, because by default it does hide too much. I do want to see at least how many strings and arrays are allocated. A simple namespace filter would also do it, in my opinion. Now you can examine all string instances and look at who in the object graph keeps a reference on them. That is nice, but YourKit has the big plus that you can also look into the string contents. I am also not sure how cycles in the graph are visualized, and what will happen if you have thousands of objects referencing you.

    That's pretty much it about JustTrace. It can help the average developer to pinpoint performance and memory issues by just looking at his own code and instances. Showing them more would not help them, because the sheer amount of information would overwhelm them. And you need to have a pretty good understanding of how the GC and the CLR work. When you have a performance issue at a customer machine it is sometimes very helpful to be able to bring a profiler onto the machine (no pdbs, …) and to get a full snapshot of all processes which are involved in the problematic use case. For these more advanced use cases JustTrace is certainly the wrong tool.

    Next: SpeedTrace

    Read the article

  • Migrating VB6 to HTML5 is not fiction - Customer success story

    - by Webgui
    All of you VB developers, present or past, would probably find it hard to believe that old VB code can be migrated and modernized into the latest .NET-based HTML5 without having to rewrite the application. But we have been working on such tools for the past couple of years and already have several real-world applications that were fully 'transposed' from VB6. The solution is called Instant CloudMove and its main tool is called the TranspositionStudio. It is a unique solution that relies on the concept of transposition. Transposition comes from mathematics and music and refers to exchanging elements while everything else remains the same, or moving an element as-is from one environment to another. This means that we are taking the source code and putting it in a modern technological environment with relatively few adjustments.

    The concept is based on a set of Mapping Expressions, which are basically links between an element in the source environment and one in the target environment that has the same functionality. About 95% of the code is usually mapped out-of-the-box and the rest is handled with easy-to-use mapping tools designed for Visual Studio developers, providing them with a familiar environment and concepts for completing the mapping and allowing them to extend and customize existing mapping expressions. The solution is also based on a circular workflow that enables developers to make any changes as required until the result is satisfying.

    As opposed to existing migration solutions that offer automation but are usually a “black box” to the user, the transposition concept enables full visibility, flexibility and control over the code and process at all times, allowing developers to also add or change functionality or upgrade the UI within the process and tools.

    This is exactly the case with our customer's aging VB6 PMS (Property Management System), which needed a technological update as well as a design refresh. The decision was to move the VB6 application, which had about 1 million lines of code, to the latest web technology. Since the application was initially written 13 years ago and has had many upgrades since, the code must be very patchy and include unused sections. As a result, the customer, Mihshuv Group, considered rewriting the entire application in Java, since it already had the knowledge. A rewrite would allow starting with a clean slate and designing functionality, database architecture and UI without any constraints. On the other hand, a rewrite entails long and detailed specification work as well as thorough QA, and this translates into a long project with high risk and costs.

    So the company looked for a migration solution as an alternative; the research led to Gizmox, and after examining the technology it was decided to perform a hybrid project: an automatic transposition of the core of the VB6 application (200,000 lines of code) guided by Gizmox migration experts, while redesigning the UI, adding new functionality, deleting unused code and rewriting about 140 reports with Crystal Reports would be done manually using Visual WebGui development tools.

    The migration part of the project was completed in 65 days by 3 developers from Mihshuv Group, while the rewrite and UI upgrade tasks took about the same. So in only a few months Mihshuv Group produced an up-to-date product, written in the latest web technology, with a modern, friendly UI and improved functionality.

    Guest selection screen of the original VB6 PMS
    Guest selection screen on the new web-based PMS

    Compared to the initial plan to rewrite the entire application in Java, the hybrid migration/rewrite approach taken by Mihshuv Group using Gizmox technology proved a great decision. In terms of time and cost there were substantial savings; from a project that was priced at a year at least (without taking into account the huge risk and uncertainty) it became a project of only a few months. More about this and other customer stories can be found here

    Read the article

  • Spotlight on Oracle Social Relationship Management. Social Enable Your Enterprise with Oracle SRM.

    - by Pat Ma
    Facebook is now the most popular site on the Internet. People are tweeting more than they send email. Because there are so many people on social media, companies and brands want to be there too. They want to be able to listen to social chatter, engage with customers on social, create great-looking Facebook pages, and roll out social-collaborative work environments within their organization. This is where Oracle Social Relationship Management (SRM) comes in. Oracle SRM is a product that allows companies to manage their presence with prospects and customers on social channels. Let's talk about two popular use cases with Oracle SRM.

    Easy Publishing - Companies now have an average of 178 social media accounts, with every product or geography or employee group creating their own social media channel. For example, if you work at an international hotel chain with every single hotel creating their own Facebook page for their location, that chain can have well over 1,000 social media accounts. Managing these channels is a mess: logging in and out of every account, making sure that all accounts are on brand, and preventing rogue posts from destroying the brand. This is where Oracle SRM comes in. With Oracle Social Relationship Management, you can log into one window and post messages to all 1,000+ social channels at once. You can set up approval flows and have each account generate its own content, but that content must be approved before publishing. The benefits of this are easy social media publishing, brand consistency across all channels, and protection of your brand from inappropriate posts.

    Monitoring and Listening - People are writing and talking about your company right now on social media. 75% of social media users have written a negative post about a brand after a poor customer service experience. Think about all the negative posts you see in your Facebook news feed about delayed flights or being on hold for 45 minutes. There is so much social chatter going on around your brand that it's almost impossible to keep up with or comprehend what's going on. That's where Oracle SRM comes in. With Social Relationship Management, a company can monitor and listen to what people are saying about them on social channels. They can drill down into individual posts or get a high-level view of trends and mentions. The benefits of this are comprehending what's being said about your brand and its competitors, understanding customers and their intent, and responding to negative posts before they become a PR crisis.

    Oracle SRM is part of Oracle Cloud. The benefits of cloud deployment for customers are faster deployments, less maintenance, and lower cost of ownership versus on-premise deployments. Oracle SRM also fits into Oracle's vision to social enable your enterprise. With Oracle SRM, social media is not just a marketing channel. Social media is also a mechanism for sales, customer support, recruiting, and employee collaboration. For more information about how Oracle SRM can social enable your enterprise, please visit oracle.com/social. For more information about Oracle Cloud, please visit cloud.oracle.com.

    Read the article

  • When should I use a Process Model versus a Use Case?

    - by Dave Burke
    This blog entry is a follow-on to https://blogs.oracle.com/oum/entry/oum_is_business_process_and and addresses a question I sometimes get asked: “When I am gathering requirements on a project, should I use a Process Modeling approach, or should I use a Use Case approach?” Not surprisingly, the short answer is “it depends”!

    Let's take a scenario where you are working on a Sales Force Automation project. We'll call the process that is being implemented “Lead-to-Order”. I would typically think of this type of project as being “Process Centric”. In other words, the focus will be on orchestrating a series of human and system related tasks that ultimately deliver value to the business in a cost effective way. Put in even simpler terms: implement an automated pre-sales system. For this type of (Process Centric) project, requirements would typically be gathered through a series of Workshops where the focal point will be on creating, or confirming, the Future-State (To-Be) business process. If pre-defined “best-practice” business process models exist, then of course they could and should be used during the Workshops, but even in their absence, the focus of the Workshops will be to define the optimum series of Tasks, their connections, sequence, and dependencies that will ultimately reflect a business process that meets the needs of the business.

    Now let's take another scenario. Assume you are working on a Content Management project that involves automating the creation and management of content for User Manuals, Web Sites, Social Media publications, etc. Would you call this type of project “Process Centric”? Well, you could, but it might also fall into the category of complex configuration, plus some custom extensions to a standard software application (COTS). For this type of project it would certainly be worth considering using a Use Case approach in order to 1) understand the requirements and 2) capture the functional requirements of the custom extensions.

    At this point you might be asking “why couldn't I use a Process Modeling approach for my Content Management project?” Well, of course you could, but you just need to think about which approach is the most effective. Start by analyzing the types of Tasks that will eventually be automated by the system, for example:

    Task Name                 Best Suited To    Notes
    Manage outbound calls     Process Model     A series of linked human and system tasks for calling and following up with prospects
    Manage content revision   Use Case          Updating the content on a website
    Update User Preferences   Use Case          Updating a user's display preferences
    Assign Lead               Process Model     Reviewing a lead, then assigning it to a sales person
    Convert Lead to Quote     Process Model     Updating the status of a lead, and then converting it to a sales order

    As you can see, it's not an exact science, and either approach is viable for the Tasks listed above. However, where you have a series of interconnected Tasks or Activities that, when combined, deliver value to the business, that would be a good indicator to lead with a Process Modeling approach. On the other hand, when the Tasks or Activities in question are more isolated and/or do not cross traditional departmental boundaries, then a Use Case approach might be worth considering.

    Now let's take one final scenario: as you captured the To-Be Process flows for the Sales Force Automation project, you discovered a “Gap” in terms of what the client requires and what the standard COTS application can provide. Let's assume that the only way forward is to develop a Custom Extension. This would now be a perfect opportunity to document the functional requirements (behind the Gap) using a Use Case approach. After all, we will be developing some new software, and one of the most effective ways to begin the Software Development Lifecycle is to follow a Use Case approach.

    As always, your comments are most welcome.

    Read the article

  • Concurrent Affairs

    - by Tony Davis
    I once wrote an editorial, multi-core mania, on the conundrum of ever-increasing numbers of processor cores, but without the concurrent programming techniques to get anywhere near exploiting their performance potential. I came to the (controversial) conclusion that, while the problem loomed for all procedural languages, it was not a big issue for the vast majority of programmers. Two years later, I still think most programmers don't concern themselves overly with this issue, but I do think that's a bigger problem than I originally implied.

    Firstly, is the performance boost from writing code that can fully exploit all available cores worth the cost of the additional programming complexity? Right now, with quad-core processors that, at best, can make our programs four times faster, the answer is still no for many applications. But what happens in a few years, as the number of cores grows to 100 or even 1000? At that point, it becomes very hard to ignore the potential gains from exploiting concurrency. Possibly, I was optimistic to assume that, by the time we have 100-core processors, and most applications really need to exploit them, some technology would be around to allow us to do so with relative ease. The ideal solution would be one that allows programmers to forget about the problem, in much the same way that garbage collection removed the need to worry too much about memory allocation. From all I can find on the topic, though, there is only a remote likelihood that we'll ever have a compiler that takes a program written in a single-threaded style and “auto-magically” converts it into an efficient, correct, multi-threaded program.

    At the same time, it seems clear that what is currently the most common solution, multi-threaded programming with shared memory, is unsustainable. As soon as a piece of state can be changed by a different thread of execution, the potential number of execution paths through your program grows exponentially with the number of threads. If you have two threads, each executing n instructions, then there are (2n)!/(n!·n!) possible interleavings of those instructions; more than 184,000 for just ten instructions per thread. Of course, many of those interleavings will have identical behavior, but several won't. Not only does this make understanding how a program works an order of magnitude harder, but it will also result in irreproducible, non-deterministic bugs. And of course, the problem will be many times worse when you have a hundred or a thousand threads.

    So what is the answer? All of the possible alternatives require a change in the way we write programs and, currently, seem to be plagued by performance issues. Software transactional memory (STM) applies the ideas of database transactions, and optimistic concurrency control, to memory. However, working out how to break down your program into sufficiently small transactions, so as to avoid contention issues, isn't easy. Another approach is concurrency with actors, where instead of having threads share memory, each thread runs in complete isolation, and communicates with others by passing messages. It simplifies concurrent programs but still has performance issues, if the threads need to operate on the same large piece of data.

    There are doubtless other possible solutions that I haven't mentioned, and I would love to know to what extent you, as a developer, are considering the problem of multi-core concurrency, what solution you currently favor, and why.

    Cheers, Tony.

    Read the article

  • Rethinking Oracle Optimizer Statistics for P6 Part 2

    - by Brian Diehl
    In the previous post (Part 1), I tried to draw some key insights about the relationship between P6 and Oracle Optimizer Statistics. The first is that average cardinality has the greatest impact on query optimization, and that the particular queries generated by P6 are more likely to use this average during calculations. The second is that these are statistics that are unlikely to change greatly over the life of the application. Ultimately, our goal is to get the best query optimization possible. Or is it?

    Stability

    No application administrator wants to get the call at 9am that their application users cannot get their work done because everything is running slow. This is a possibility with a regularly scheduled nightly collection of statistics. It may not just be slow performance, but a complete loss of service, because one or more queries are optimized poorly. Ideally, this should not be the case: the database optimizer should make better decisions with more up-to-date data, and better statistics may give an incremental performance benefit. However, this benefit must be balanced against the potential cost of system downtime. It is stability that we ultimately desire, not absolute optimal performance. We do want the benefit of more accurate statistics and better query plans, but not at the risk of an unusable system. As a result, I've developed the following methodology for managing database statistics for the P6 database.

    1. No Automatic Re-Gathering - A daily, weekly, or other fixed interval of statistics gathering is unlikely to be beneficial. Quite the opposite: it is more likely to cause problems.

    2. Smart Re-Gathering - The time to collect statistics is when things have changed significantly. For a new installation of P6, this happens more often because the data is growing from a few rows to thousands and more. But for a mature system, the data is not changing significantly from week to week. There are times to collect statistics:
    - New releases of the application.
    - Changes in the underlying hardware or software versions (e.g. a new Oracle RDBMS version).
    - When additional user groups are added; the new groups may use the software in significantly different ways.
    - After significant changes in the data; this may be monthly, quarterly, or yearly.

    3. Always Test - If you take away one thing from this post, it should be to always have a plan to test after changing statistics. In reality, statistics can be collected as often as you desire, provided there are tests in place to verify that performance is the same or better. These might be automated tests or simply a manual script of application functions.

    4. Have a Way Out - Never change the statistics without a way to return to the previous set. Think of the statistics as one part of the overall application code, alongside the source code, both application and RDBMS. It would be foolish to change to new code without a way to get back to the previous version.

    In the final post, I will talk about the actual script I created for P6 PMDB and a possible future direction for managing query performance.
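
    As a concrete illustration of point 4, here is a minimal sketch (mine, with placeholder credentials, schema name, and table name) of a "way out" using Oracle's DBMS_STATS export/import facility, driven from Python via the python-oracledb driver:

        import oracledb  # assumes the python-oracledb driver is installed

        # Placeholder connection details; "P6ADMIN" stands in for the P6 schema owner.
        conn = oracledb.connect(user="admin", password="secret", dsn="dbhost/orclpdb")
        cur = conn.cursor()

        # One-time setup: a table to hold saved statistics.
        cur.callproc("dbms_stats.create_stat_table", ["P6ADMIN", "STATS_BACKUP"])

        # Save the current statistics before touching anything.
        cur.callproc("dbms_stats.export_schema_stats", ["P6ADMIN", "STATS_BACKUP"])

        # Re-gather statistics at a deliberately chosen time.
        cur.callproc("dbms_stats.gather_schema_stats", ["P6ADMIN"])

        # If testing shows a regression, restore the saved set:
        # cur.callproc("dbms_stats.import_schema_stats", ["P6ADMIN", "STATS_BACKUP"])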

    Read the article

  • Multitenancy in SQL Azure

    - by cibrax
    If you are building a SaaS application in Windows Azure that relies on SQL Azure, it's probable that you will need to support multiple tenants at the database level. This is a short overview of the different approaches you can use to support that scenario.

    A different database per tenant: a new database is created and assigned when a tenant is provisioned.
    Pros:
    - Complete isolation between tenants. All the data for a tenant lives in a database only it can access.
    Cons:
    - It's not cost-effective. SQL Azure databases are not cheap, and the minimum size for a database is 1 GB, so you might be paying for storage that you don't really use.
    - A different connection pool is required per database.
    - Updates must be replicated across all the databases.
    - You need multiple backup strategies across all the databases.

    Multiple schemas in a database shared by all the tenants: a single database is shared among all the tenants, but every tenant is assigned a different schema and database user.
    Pros:
    - You only pay for a single database.
    - Data is isolated at the database level. If the credentials for one tenant are compromised, the data for the other tenants is not.
    Cons:
    - You need to replicate all the database objects in every schema, so the number of objects can grow indefinitely.
    - Updates must be replicated across all the schemas.
    - The connection pool for the database must maintain a different connection per tenant (or set of credentials).
    - A different user is required per tenant, which is stored at the server level; you have to back up those users independently.

    Centralizing database access with stored procedures in a database shared by all the tenants: a single database is shared among all the tenants, but nobody can read the data directly from the tables. All data operations are performed through stored procedures that centralize access to the tenant data; the stored procedures contain logic to map the database user to a specific tenant.
    Pros:
    - You only pay for a single database.
    - You have only one set of objects to maintain and back up.
    Cons:
    - There is no real isolation; all the data for the different tenants is shared in the same tables.
    - You cannot use a traditional ORM like EF Code First to consume the data.
    - A different user is required per tenant, which is stored at the server level; you have to back up those users independently.

    SQL Federations: a single database is shared among all the tenants, but a different federation is used per tenant. A federation, in a few words, is a mechanism for horizontal scaling in SQL Azure that uses logical partitions to distribute data based on certain criteria.
    Pros:
    - You have only a single database with multiple federations.
    - You can use filtering in the connections to pick the right federation, so any ORM can be used to consume the data.
    Cons:
    - There is no real isolation at the database level; the isolation is enforced programmatically with federations.
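
    To show how the "multiple schemas" model plays out in application code, here is a minimal sketch (tenant names, credentials, and driver string are all illustrative) of routing each request to its tenant's own credentials, so that the default schema, and therefore the isolation, is enforced by the database rather than by the application:

        import pyodbc  # assumes an ODBC driver for SQL Azure is installed

        TENANT_CREDENTIALS = {
            "contoso":  {"uid": "contoso_user",  "pwd": "placeholder"},
            "fabrikam": {"uid": "fabrikam_user", "pwd": "placeholder"},
        }

        def connect_for_tenant(tenant):
            # Each tenant's database user has its own schema as default.
            creds = TENANT_CREDENTIALS[tenant]
            return pyodbc.connect(
                "DRIVER={SQL Server Native Client 10.0};"
                "SERVER=myserver.database.windows.net;DATABASE=saasdb;"
                "UID=" + creds["uid"] + ";PWD=" + creds["pwd"]
            )

        # The same query text resolves to that tenant's own tables,
        # e.g. contoso.Orders on one connection, fabrikam.Orders on another.
        with connect_for_tenant("contoso") as conn:
            rows = conn.cursor().execute("SELECT TOP 10 * FROM Orders").fetchall()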

    Read the article

  • High Resolution Timeouts

    - by user12607257
    The default resolution of application timers and timeouts is now 1 msec in Solaris 11.1, down from 10 msec in previous releases. This improves the out-of-the-box performance of polling and event-based applications, such as ticker applications, and even the Oracle RDBMS log writer. More on that in a moment. As a simple example, the poll() system call takes a timeout argument in units of msec:

        int poll(struct pollfd fds[], nfds_t nfds, int timeout);

    In Solaris 11, a call to poll(NULL,0,1) returns in 10 msec, because even though a 1 msec interval is requested, the implementation rounds up to the system clock resolution of 10 msec. In Solaris 11.1, this call returns in 1 msec. In specification-lawyer terms, the resolution of CLOCK_REALTIME, introduced by the POSIX.1b real-time extensions, is now 1 msec. The function clock_getres(CLOCK_REALTIME,&res) returns 1 msec, and any library call whose man page explicitly mentions CLOCK_REALTIME, such as nanosleep(), is subject to the new resolution. Additionally, many legacy functions that pre-date POSIX.1b and do not explicitly mention a clock domain, such as poll(), are subject to the new resolution. Here is a fairly comprehensive list:

        nanosleep, pthread_mutex_timedlock, pthread_mutex_reltimedlock_np,
        pthread_rwlock_timedrdlock, pthread_rwlock_reltimedrdlock_np,
        pthread_rwlock_timedwrlock, pthread_rwlock_reltimedwrlock_np,
        mq_timedreceive, mq_reltimedreceive_np, mq_timedsend, mq_reltimedsend_np,
        sem_timedwait, sem_reltimedwait_np, poll, select, pselect,
        _lwp_cond_timedwait, _lwp_cond_reltimedwait, semtimedop, sigtimedwait,
        aiowait, aio_waitn, aio_suspend, port_get, port_getn,
        cond_timedwait, cond_reltimedwait, setitimer (ITIMER_REAL),
        and miscellaneous rpc and ldap calls

    This change in resolution was made feasible because we made the implementation of timeouts more efficient a few years back, when we re-architected the callout subsystem of Solaris. Previously, timeouts were tested and expired by the kernel's clock thread, which ran 100 times per second, yielding a resolution of 10 msec. This did not scale, as timeouts could be posted by every CPU but were expired by only a single thread. The resolution could be changed by setting hires_tick=1 in /etc/system, but this caused the clock thread to run at 1000 Hz, which made the potential scalability problem worse: given enough CPUs posting enough timeouts, the clock thread could become a performance bottleneck. We fixed that by re-implementing the timeout as a per-CPU timer interrupt (using the cyclic subsystem, for those familiar with Solaris internals). This decoupled the clock thread frequency from the timeout resolution, and allowed us to improve the default timeout resolution without adding CPU overhead in the clock thread. There are some exceptions for which the default resolution is still 10 msec:

    - The thread scheduler's time quantum is 10 msec by default, because preemption is driven by the clock thread (plus helper threads for scalability). See, for example, dispadmin, priocntl, fx_dptbl, rt_dptbl, and ts_dptbl. This may be changed using hires_tick.
    - The resolution of the clock_t data type, primarily used in DDI functions, is 10 msec. It may be changed using hires_tick. These functions are only used by developers writing kernel modules.
    - A few functions that pre-date POSIX CLOCK_REALTIME mention _SC_CLK_TCK, CLK_TCK, "system clock", or no clock domain. These functions are still driven by the clock thread, and their resolution is 10 msec. They include alarm, pcsample, times, clock, and setitimer for ITIMER_VIRTUAL and ITIMER_PROF. Their resolution may be changed using hires_tick.

    Now back to the database. How does this help the Oracle log writer? Foreground processes post a redo record to the log writer, which releases them after the redo has committed. When a large number of foregrounds are waiting, the release step can slow down the log writer, so under heavy load the foregrounds switch to a mode where they poll for completion. This scales better because every foreground can poll independently, but at the cost of waiting the minimum polling interval. That was 10 msec, but it is now 1 msec in Solaris 11.1, so the foregrounds process transactions faster under load. Pretty cool.
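
    A quick way to observe the effective resolution from user space is to ask poll() for a 1 msec timeout and measure how long the call actually takes. A small sketch in Python (my example, not from the post):

        import select
        import time

        poller = select.poll()   # no file descriptors registered; we only want the timeout
        start = time.monotonic()
        poller.poll(1)           # poll()'s timeout argument is in milliseconds
        elapsed_ms = (time.monotonic() - start) * 1000

        # Roughly 1 ms on a system with 1 msec timer resolution (Solaris 11.1),
        # but closer to 10 ms where timeouts round up to a 10 msec clock tick.
        print("poll(1 ms) returned after %.2f ms" % elapsed_ms)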

    Read the article

  • Obtaining MFC Feature Pack GUI elements in .NET WinForms

    - by Cody Gray
    The MFC Feature Pack (and VS 2010) adds out-of-the-box support for several "modern" GUI elements (such as MDI with tabbed documents, the ribbon, and a Visual Studio-style interface with docking panels). These are a boon to those of us who have to support legacy MFC-based applications and want to update their look and feel, and a sign that Microsoft has not completely abandoned unmanaged C++ development. However, with the push so strongly in favor of .NET, WinForms, and managed code (and for plenty of good reasons), there seems little reason to develop new applications in unmanaged C++/MFC. The question then becomes how one obtains these GUI elements in a WinForms application. Almost all of the add-ons and libraries I have found so far cost money and introduce additional dependencies. I don't have a budget to buy third-party libraries, and the controls provided by Microsoft in MFC for free seem sufficient for our needs. But I still have reservations about learning MFC to develop a new application. Not only does the investment in time seem significant (by all accounts, MFC seems particularly difficult to learn, even for experienced .NET developers, although I am willing to try), but the question of MFC's lifespan is raised as well. Certainly, given the millions of lines of code and existing apps written in native C++, it will be around for some time, but the handwriting seems to be on the wall, so to speak, that it is no longer Microsoft's touted development platform. It seems like these features should be available by now in WinForms without the need for third-party add-ons, or for devoting a lot of time and resources to custom-drawing EVERYTHING. Am I just missing something? I find very little online that compares these new features of MFC to what is available in WinForms, mainly because almost everything written about MFC pre-dates its most recent update, before which it looked admittedly "dated" and, with its other flaws, was hardly an appealing platform for new development. With the very recent release of VS 2010, we have a while to wait before WinForms gets updated again. What routes are you guys taking for applications whose customers demand a modern-looking UI on a budget?

    Read the article

  • Why were namespaces removed from ECMAScript consideration?

    - by Bob
    Namespaces were once a consideration for ECMAScript (the old ECMAScript 4) but were taken out. As Brendan Eich says in this message: One of the use-cases for namespaces in ES4 was early binding (use namespace intrinsic), both for performance and for programmer comprehension -- no chance of runtime name binding disagreeing with any earlier binding. But early binding in any dynamic code loading scenario like the web requires a prioritization or reservation mechanism to avoid early versus late binding conflicts. Plus, as some JS implementors have noted with concern, multiple open namespaces impose runtime cost unless an implementation works significantly harder. For these reasons, namespaces and early binding (like packages before them, this past April) must go. But I'm not sure I understand all of that. What exactly is a prioritization or reservation mechanism and why would either of those be needed? Also, must early binding and namespaces go hand-in-hand? For some reason I can't wrap my head around the issues involved. Can anyone attempt a more fleshed out explanation? Also, why would namespaces impose runtime costs? In my mind I can't help but see little difference in concept between a namespace and a function using closures. For instance, Yahoo and Google both have YAHOO and google objects that "act like" namespaces in that they contain all of their public and private variables, functions, and objects within a single access point. So why, then, would a namespace be so significantly different in implementation? Maybe I just have a misconception as to what a namespace is exactly.

    Read the article

  • iPhone: Software Development And Distribution

    - by xsl
    I have a few quick questions about iPhone software development. I did some research on the topic, but there are a few specific things I would like to ask here, because I will have to estimate the cost of the required hardware and software before I am allowed to buy anything. I have never done any Mac development, nor have I ever owned an iPhone, so needless to say this is quite hard for me. I will buy a Mac mini with 2 GB RAM for iPhone development. I will have to use it at the same time as my regular PC, but the majority of the time I won't use the Mac at all. Do I have to buy an additional monitor, a mouse, and a keyboard, or is there a better solution? I will have to port a C library to the iPhone platform and develop an iPhone application that uses the ported library. Do I need anything other than the iPhone SDK to do this? If I use an external library (see above), can I test the application with the integrated simulator, or is it recommended to buy the device? In a later phase of the project I will have to buy an iPhone, but I will have to wait until the iPhone 4 is released here in Europe, because the application requires a camera. In addition to this, I will have to send data to a remote web service. Aside from these two things, I don't require any other features. Can I just buy the iPhone online from another country (the iPhones here are SIM-locked), or should I buy one with a contract? When the application is ready, it will be installed on a few iPhones owned by our customer. For security reasons it is crucial that no third party is involved in this process (i.e. the application should not be distributed on the App Store). Is this possible?

    Read the article

  • How good is Dotfuscator Community Edition? What is "good enough obfuscator"?

    - by zendar
    I plan to release one small, low-priced utility. Since this is more hobby than business, I planned to use the Dotfuscator Community Edition that ships with VS2008. How good is it? I could also use a definition of a "good enough obfuscator": what features are missing from Dotfuscator Community Edition that would make it good enough? Edit: I checked the pricing on a number of commercial obfuscators and they cost a lot. Is it worth it? Are the commercial versions that much better at protecting against reverse engineering? I'm not very afraid of my application being cracked (it would be disappointing if the application were so bad that no one was interested in cracking it). It's not heavily protected anyway: a not overly complex serial key and licence checks in a few places in the code. It just bugs me that, without obfuscation, somebody can easily get the source code, rebrand it, and sell it as their own. Does this happen a lot? Edit 2: Can somebody recommend a commercial obfuscator? I found lots of them; all of them are expensive, and some don't even have a price listed on their web site. Feature-wise, all the products seem more or less similar. What is the minimal set of features an obfuscator should have?

    Read the article

  • Home automation using Arduino / XMPP client for Arduino

    - by Ashish
    I am trying to set up a system for automating certain tasks in my home. I am thinking of a solution wherein a server-side application would be able to send/receive commands/data to an Arduino (fitted with an Arduino Ethernet Shield) via the web. Here the Arduino may act either as a sensor interface for the server application or as a command-executor interface for it. E.g. (user story): The overhead water tank in my house has a water-level sensor attached to an Arduino (with an Arduino Ethernet Shield). Another Arduino (with an Arduino Ethernet Shield) is attached to a relay/latch, which is then connected to a water pump. Now the server-side application on the web is able to receive water-level information from the Arduino on the water tank. Depending on the water-level information received, the web application should send suitable signals/commands to the Arduino on the water pump to switch the pump 'ON' or 'OFF'. Now for such a system to work across the web, I am thinking of using one of the following types of solutions, in order of my preference:
    1. Using XMPP for communication between the server application and the Arduino.
    2. Using HTTP polling.
    3. Using an HTTP hanging GET.
    For solution number 1, I need to implement an XMPP client that would reside on the Arduino. Is it possible to write an XMPP client small enough to reside on an Arduino? If yes, what is the minimum XMPP client functionality I need to write for the Arduino so that it can contact XMPP server solutions like GTalk, etc.? For solutions number 2 and 3, I need guidance on implementation. Also, which solution would be cost-effective and easily extendable?
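
    For what the server side of option 2 (HTTP polling) could look like, here is a minimal sketch using Flask; the endpoint names, JSON shape, and the 40% threshold are illustrative only, not part of the question:

        from flask import Flask, jsonify, request

        app = Flask(__name__)
        state = {"level_percent": 100}  # last level reported by the tank Arduino

        @app.route("/tank/level", methods=["POST"])
        def report_level():
            # The tank-side Arduino POSTs e.g. {"level_percent": 35}.
            state["level_percent"] = request.get_json()["level_percent"]
            return "", 204

        @app.route("/pump/command", methods=["GET"])
        def pump_command():
            # The pump-side Arduino polls this URL every few seconds.
            command = "ON" if state["level_percent"] < 40 else "OFF"
            return jsonify({"command": command})

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=8080)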

    Read the article
