Search Results

Search found 4249 results on 170 pages for 'browsing history'.


  • How Big Data and Social Won the Election

    - by Mike Stiles
    The story of big data’s influence on the outcome of the US Presidential election is worth a good look, because a) it’s a harbinger of things to come, and b) it’s an example of similar successes available to any enterprise seriously resourcing integrated big data, modeling, and data-driven execution on all assets, including social. Obama campaign manager Jim Messina fielded a data and analytics brain trust 5 times larger than in 2008. At that time, there were numerous databases from various sources, few of them talking to each other. This time, the mission was to be metrics-centered and measure everything measurable, in context with all the other data. Big data showed them exactly what they needed to know and told them what to do about it. It showed them women 40-49 on the west coast would donate big money if they got to eat with George Clooney. Women on the east coast would pony up to hang out with Sarah Jessica Parker. Extensive daily modeling showed them what kinds of email appeals, from whom, and to whom, would prove most successful in raising cash, recruiting volunteers, and getting out the vote. Swing state voters were profiled and approached with more customized targeting than at any time in history. Ads were purchased on specific shows watched by the targets, increasing efficiency 14% over traditional media buys. For all the criticism of the candidate’s focus on appearing on comedy and entertainment shows, and local radio morning shows, that’s where the data sent them to reach the voters most likely to turn out for them. And then there was social. Again, more than in any other election, Facebook was used for virtual, highly efficient door-to-door canvassing. Facebook fans got pictures of friends in swing states and were asked to encourage them to act. Using that approach, 1 in 5 peer-to-peer appeals led to the desired action. Assumptions, gut, intuition, campaign experience, all took a backseat to strategy shifts solidly backed up by data. Zeroing in on demographics likely to back the President and tracking their mood daily literally changed the voter landscape. The Romney team watched Obama voters appear seemingly out of thin air. One Obama campaign aide said, “We ran the election 66,000 times every night.” Which brings us to your organization. If you’re starting to feel like the battle-cry of “but this is the way we’ve always done it” is putting you in an extremely vulnerable position, you’re right. Social has become a key communication tool of the 21st century. Failing to use it, or failing to invest in a deep understanding of who your customers and prospects are so the content you post there will achieve desired actions and results, will leave you waking up one morning wondering, “What happened?” @mikestiles Photo: stock.xchng

    Read the article

  • Swiss Re increases data warehouse performance and deploys in record time

    - by KLaker
    Great information on yet another data warehouse deployment on Exadata. A little background on Swiss Re: In 2002, Swiss Re established a data warehouse for its client markets and products to gather reinsurance information across all organizational units into an integrated structure. The data warehouse provided the basis for reporting at the group level with drill-down capability to individual contracts, while facilitating application integration and data exchange by using common data standards. Initially focusing on property and casualty reinsurance information only, it now includes life and health reinsurance, insurance, and nonlife insurance information. Key highlights of the benefits that Swiss Re achieved by using Exadata: reduced the time to feed the data warehouse and generate data marts by 58%; reduced average runtime by 24% for standard reports, comfortably loading two data warehouse refreshes per day with incremental feeds; and freed up technical experts by significantly minimizing time spent on tuning activities. Most importantly, this was one of the fastest project deployments in Swiss Re's history. They went from installation to production in just four months! What is truly surprising is that it took only two weeks between power-on and testing the machine with full data volumes! Business teams at Swiss Re are now able to fully exploit up-to-date analytics across property, casualty, life, health insurance, and reinsurance lines to identify successful products. These points are highlighted in the following quotes from Dr. Stephan Gutzwiller, Head of Data Warehouse Services at Swiss Re: "We were operating a complete Oracle stack, including servers, storage area network, operating systems, and databases that was well optimized and delivered very good performance over an extended period of time. When a hardware replacement was scheduled for 2012, Oracle Exadata was a natural choice—and the performance increase was impressive. It enabled us to deliver analytics to our internal customers faster, without hiring more IT staff." “The high quality data that is readily available with Oracle Exadata gives us the insight and agility we need to cater to client needs. We also can continue re-engineering to keep up with the increasing demand without having to grow the organization. This combination creates excellent business value.” Our full press release is available here: http://www.oracle.com/us/corporate/customers/customersearch/swiss-re-1-exadata-ss-2050409.html. If you want more information about how Exadata can increase the performance of your data warehouse, visit our home page: http://www.oracle.com/us/products/database/exadata-database-machine/overview/index.html

    Read the article

  • What do you do when practical problems get in the way of practical goals?

    - by P.Brian.Mackey
    UPDATE: Source control is good to use. Sometimes, real-world issues make it impractical to use. For example: If the team is not used to using source control, training problems can arise. If a team member directly modifies code on the server, various issues can arise: merge problems, lack of history, etc. Let's say there's a project that is way out of sync. The physical files on the server differ in unknown ways over ~100 files. Merging would not only require great knowledge of the project, but is also well beyond what could be completed in the given time. Other projects are falling out of sync. Developers continue to have a distrust of source control and therefore compound the issue by not using source control. Developers argue that using source control is wasteful because merging is error prone and difficult. This is a difficult point to argue, because when source control is being so badly misused and continually bypassed, it is error prone indeed. Therefore, the evidence "speaks for itself" in their view. Developers argue that directly modifying code on the server saves time. This is also difficult to argue, because the merge required to synchronize the code in the first place is time consuming, across ~10 projects. Permanent files are often stored in the same directory as the web project. So publishing (full publish) erases these files that are not in source control. This also drives distrust of source control, because "publishing breaks the project". Fixing this (moving stored files out of the solution subfolders) takes a great deal of time and debugging, as these locations are not set in web.config and often exist across multiple code points. So the culture perpetuates itself. Bad practice begets more bad practice. Bad solutions drive new hacks to "fix" much deeper, much more time-consuming problems. Servers and hard drive space are extremely difficult to come by. Yet user expectations are rising. What can be done in this situation?

    Read the article

  • How do I get long command lines to wrap to the next line?

    - by BrianH
    Edit: It was my .bashrc file. I've copied the same profile from machine to machine, and I used special characters in my $PS1 that are somehow throwing it off. I'm now sticking with the standard bash variables for my $PS1. Thanks to @ændrük for the tip on the .bashrc! ...End Edit... Something I have noticed in Ubuntu for a long time that has been frustrating to me is that when I am typing a command at the command line that gets longer (wider) than the terminal width, instead of wrapping to a new line, it goes back to column 1 on the same line and starts over-writing the beginning of my command line. (It doesn't actually overwrite the actual command, but visually, it is overwriting the text that was displayed.) It's hard to explain without seeing it, but let's say my terminal was 20 characters wide (mine is more like 120 characters - but for the sake of an example), and I want to echo the English alphabet. What I type is this: echo abcdefghijklmnopqrstuvwxyz But what my terminal looks like before I hit the Enter key is: pqrstuvwxyzghijklmno When I hit Enter, it echoes abcdefghijklmnopqrstuvwxyz so I know the command was received properly. It just wrapped my typing after the "o" and started over on the same line. What I would expect to happen, if I typed this command on a terminal that was only 20 characters wide, would be this: echo abcdefghijklmno pqrstuvwxyz Background: I am using bash as my shell, and I have this line in my ~/.bashrc: set -o vi to be able to navigate the command line with VI commands. I am currently using Ubuntu 10.10 server, and connecting to the server with PuTTY. In any other environment I have worked in, if I type a long command line, it will add a new line underneath the line I am working on when my command gets longer than the terminal width, and when I keep typing I can see my command on 2 different lines. But for as long as I can remember using Ubuntu, my long commands only occupy 1 line. This also happens when I am going back to previous commands in the history (I hit Esc, then 'K' to go back to previous commands) - when I get to a previous command that was longer than the terminal width, the command line gets mangled and I cannot tell where I am at in the command. The only work-around I have found to see the entire long command is to hit "Esc-V", which opens up the current command in a VI editor. I don't think I have anything odd in my .bashrc file. I commented out the "set -o vi" line, and I still had the problem. I downloaded a fresh copy of PuTTY and didn't make any changes to the configuration - I just typed in my host name to connect, and I still have the problem, so I don't think it's anything with PuTTY (unless I need to make some config changes). Has anyone else had this problem, and can anyone think of how to fix it? Thanks in advance! Brian

    Read the article

  • The Uganda .NET Usergroup meeting for January 2011 - a look back.

    - by Malisa L. Ncube
    We had a very interesting meeting on Friday 28th last week. We had 10 attendees and two speakers. The first topic presented was Cloud Computing, presented by Allan Rwakatungu @arwakatungu, who works with MTN Uganda. He gave a brilliant outline of how cloud computing and service-oriented applications have begun changing the platform for operating business, and the costs they save because of scalability and elasticity. He went on to demonstrate the steps you would take if you are beginning a new Windows Azure project. He explained the history and evolution of Windows Azure, SQL Azure, and the cloud services offered by Amazon and google.com. The attendees had many questions to ask (obviously), but they were all answered very well. We once again thank Allan for taking the time to prepare the presentation and demonstrate for us. We recorded a video of the entire presentation and, after doing some editing, we will publish it. One wish which was echoed by most members was that Microsoft should open up cloud services and development for Africa. Microsoft currently does not even have servers here in Africa, and so far that does not put African developers on the same platform as developers on other continents. Now is the time, considering the improvements in network speeds and the joining of the Seacom network and broadband. I presented on Parallelism and Multithreading using .NET 4.0. I also gave some details on the language changes in C# 5.0, the async keyword, and the TaskEx class. I explained the Task class and the scheduling of parallel tasks, and demonstrated problems that may arise from using parallelism inappropriately. I also demonstrated the performance improvements that may be achieved by taking advantage of multi-core processors. You may download the presentation on Parallelism and Multi-threading from here. The resolution of the meeting was that we should meet more than once a month and begin other activities which should be more fun, e.g. a Geek Dinner, Geek Beer, or CodeCamp. Based on that we all agreed we shall have a mid-month meeting starting from February. Cheers folks! del.icio.us Tags: .net,usergroup,cloud computing,parallelism,multi-threading

    Read the article

  • Dealing with coworkers when developing, need advice [closed]

    - by Yippie-Kai-Yay
    I developed our current project architecture and started developing it on my own (reaching something like revision 40). We're developing a simple subway routing framework, and my design seemed to be done extremely well - several main models, corresponding views, main logic and data structures were modeled "as they should be" and fully separated from rendering; the algorithmic part was also implemented apart from the main models and had a small number of intersection points. I would call that design scalable, customizable, easy to implement, interacting mostly based on "black box interaction" and, well, very nice. Now, what was done: I started some implementations of the corresponding interfaces, ported some convenient libraries and wrote implementation stubs for some application parts. I had the document describing coding style and examples of that coding style usage (my own written code). I forced the usage of more or less modern C++ development techniques, including no-delete code (wrapped via smart pointers), etc. I documented the purpose of concrete interface implementations and how they should be used. Unit tests (mostly integration tests, because there wasn't a lot of "actual" code) and a set of mocks for all the core abstractions. I was absent for 12 days. What do we have now (the project was developed by 4 other members of the team): 3 different coding styles all over the project (I guess two of them agreed to use the same style :)), and the same applies to the naming of our abstractions (e.g. CommonPathData.h, SubwaySchemeStructures.h), which are basically headers declaring some data structures. Absolute lack of documentation for the recently implemented parts. What I could recently call a single-purpose abstraction now handles at least 2 different types of events, has tight coupling with other parts, and so on. Half of the used interfaces now contain member variables (sic!). Raw pointer usage almost everywhere. Unit tests disabled, because "(Rev.57) They are unnecessary for this project". ... (that's probably not everything). Commit history shows that my design was interpreted as overkill, and people started combining it with their own homegrown solutions and reinvented wheels, and then had problems integrating code chunks. Now the project still does only a small amount of what it has to do, we have severe integration problems, and I assume some memory leaks. Is there anything possible to do in this case? I do realize that all my efforts didn't have any benefit, but the deadline is pretty soon and we have to do something. Did someone have a similar situation? Basically I thought that a good (well, I did everything that I could) start for the project would probably lead to something nice; however, I understand that I was wrong. Any advice would be appreciated; sorry for my bad English.

    Read the article

  • Ubuntu hangs on booting up after an update

    - by alFReD NSH
    I did a clean install yesterday, restarted for the first time, and everything went well. Then, after I updated packages and copied my old home directory over the new one, it hung while booting when I restarted. I tried reinstalling and doing the same thing, but the same thing happened again. Here's what I see before the Ubuntu logo with the five dots is shown: Then after that, 3 or 4 of the dots will load and it hangs there. If I press the up arrow before that, this will be shown. I started my laptop again today (the pictures are from the day before), then booted up with a live CD and got the logs. dmesg: http://pastebin.com/aVxV7BQF syslog: http://pastebin.com/4E2BrRUK And some info: alfred@alFitop:~$ uname -a Linux alFitop 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux lshw: http://pastebin.com/AZbKJmsT sources.list : http://pastebin.com/2HazmuyV My problem is a bit similar to here: http://ubuntuforums.org/showthread.php?t=1918271 Though I didn't change my x.org config; I only changed the home directory and updated packages. I've tried memtest and fsck; both passed. In the recovery mode boot option, I've also realized that the same thing happens in failsafe graphical mode. But when I go into network mode, I can boot up my system, though of course the graphics are just basic. Adding blacklist intel_ips to /etc/modprobe.d/blacklist.conf solves the first message, but I still get the broken pipe and CPU stack traces. The current kernel version is 3.2.0-25; I've tried booting with 3.2.0-23 (the one the installer came with), but got the same results. Also uninstalled apparmor; it didn't help. I've installed Ubuntu again, this time without copying the home directory, with the same result. --- UPDATE --- This problem was solved before by removing backports, but it's back again! I've updated my laptop last night and the problem came back. It's definitely one of these packages. My /var/log/apt/term.log and /var/log/apt/history.log. I'm almost having the same situation. --- UPDATE --- I realized this has also happened at times when I have updated (and haven't restarted afterwards) and my computer's power has been cut off and it shut down due to lack of power. And I realized that if I just do as I answered, but not from somewhere without the GUI (networking mode has the GUI), it wouldn't work.

    Read the article

  • Oracle Customer Success Forum - Batesville - Oracle Sales Cloud - June 24th, 5pm CET

    - by Richard Lefebvre
    Batesville uses Oracle Sales Cloud to create a common platform and standardize processes for business transformation across field sales and telesales. Using real-time KPI dashboards, they are measuring their business success with consistency across their sales reps. We are pleased to invite you to a discussion with Batesville on industry trends, why sales automation is important, reasons for choosing Oracle Sales Cloud, and the vendor evaluation process. Please click on the register button to confirm your attendance by 5:00 p.m. Pacific Time on June 23, 2014.
    Speakers: Diane Kinker, Director CRM Program, and Chris Haven, Senior Director Product Management, Oracle (Moderator)
    Organization Profile: Batesville (www.Batesville.com), a wholly owned subsidiary of Hillenbrand, Inc. (NYSE:HI), is the leader in the North American death care industry. For more than 125 years, Batesville has been dedicated to helping families honor the lives of those they love®. Batesville’s innovation has changed the face of funeral service, from advancements in manufacturing and quality to patented features and memorialization offerings, technology and web-based solutions, and profit-enhancing merchandising systems and room displays. Our history of manufacturing excellence, product innovation, superior customer service and reliable delivery has helped Batesville become – and remain – a market leader.
    Event Description: In this informal reference call, you will have the opportunity to hear Batesville discuss industry trends, why sales automation is important, the decision-making process for choosing Oracle Sales Cloud, and the vendor evaluation process. The call will open with a brief overview, followed by discussion and an open question and answer session. Please allow one hour for the call.
    Why Oracle: Batesville looked to transform its sales automation processes. Oracle Sales Cloud met these needs and Batesville’s requirements for: standardized end-to-end sales processes, including Sales Performance Management (territory management, quota management and incentive compensation); mobile capabilities with integration to Microsoft Outlook and smartphones; and creation of the WIG Dashboard (Wildly Important Goal) using reporting and analytics.
    Click the Register Now button to confirm your attendance for this informative event. Registration will close at 5:00 p.m. Pacific Time on June 23, 2014. After you register, your information will be forwarded through an approval process. Once your registration request has been validated against the invitation database, you will receive an email confirmation with your registration details, as long as there is availability. Please be advised that Batesville will review the registrant list and may dismiss registrations as they see fit. Register Now!

    Read the article

  • 3 Reasons You Need To Know Something About Every Technology

    - by Tim Murphy
    I make my living as a consultant and a general technologist. I credit my success to the fact that I have never been afraid to pick up any product, language or platform needed to get the job done. While Microsoft technologies are my mainstay, I have done work on mainframe and UNIX platforms and have worked with a wide variety of database engines. Each one has its use, and most times it is less expensive to find a way to communicate with an existing system than to replace it. So what are the main benefits of expending the effort to learn a new technology? New ways to solve problems, accelerated development, and the ability to advise clients and win new business opportunities. By new technology I mean one that you haven’t had experience with before. It doesn’t have to be the one that just came out yesterday. As they say, those who do not learn from history are bound to repeat it. If you can learn something from an older technology it can be just as valuable as the shiny new one. Either way, when you add another tool to your kit you get a new view on each problem you face. This makes it easier to create a sound solution. The next thing you can learn from working with different products and techniques is how to solve problems more efficiently. Many times if you are working with a new language you will find that there are specific design patterns that are used with it in normal use. These can usually be applied with most languages. You just need to be exposed to them. The last point is about helping your clients and helping yourself. If you can get in on technologies early you will have an advantage over your competition in the market. You will also be able to honestly advise your client on why they should or should not go with a new product. Being able to compare products and their features is always an ability that stakeholders appreciate. You don’t need to learn every detail of a product. Learn enough to function and get an idea of how to use the technology. Keep eating those technology Wheaties and you will be ready to go the distance in any project. del.icio.us Tags: Technology,technologists,technology generalist,Software Architecture

    Read the article

  • Paranoid management, contractor checking work [closed]

    - by user833345
    Just wanted to get some opinions and experiences on an issue I'm having at work. First, a little background. I've been working at a company for some time (past any probation periods) and rewriting a horrendous system. No tests, incomplete and broken functionality everywhere, enough copypasta to feed a small village, redundant code, more unused SQL tables than used ones and terrible performance. I've never seen such bad code; pretty much all of it is worthy of being posted on TheDailyWTF. The company has been operating for a number of years and has had a string of bad developers working on this system. I made the call to rewrite instead of refactor, since I judged it to be less work overall and decided that the result would address the requirements more appropriately, as the central requirement is to have a future-proof system for the next decade with plenty of room to scale up. Refactoring would have entailed untangling a huge ball of yarn and at the same time integrating it with a proper foundation or building a foundation from scratch. I've introduced the latest spiffy framework, unit & functional testing, CI, a bug tracker and agile workflow to the environment. I've fixed most of the performance issues of the old system (there were no indexes on any of the tables, for example). I've created an automated deployment process for the old system. The CTO has been maintaining the old system while I have been building the new one, and he has been advising management that everything is being done as per best practices. However, management is hiring a contractor to come in and verify my work. In my experience, this is unprecedented. I can understand their reasoning to an extent, since they've had bad luck in the past, but can't help but feel somewhat offended at the fact that they distrust two senior developers who have been working with them for some time, enough that a third party is being brought in. And it's not just me who is under watch - people's emails are constantly checked, someone had a remote desktop application installed on their computer whose usage logs I was asked to check to try to determine if they were stealing sensitive data, and there are CCTV cameras in one of the rooms. It's the first time I've decided to disable my Skype history at work. Am I right to feel indignant here? Has anyone else ever encountered such a situation? If so, how did it work out in the end? Was it worth sticking around? Should I just find another job?

    Read the article

  • How to properly do weapon cool-down reload timer in multi-player laggy environment?

    - by John Murdoch
    I want to handle weapon cool-down timers in a fair and predictable way on both client and server. Situation: multiple clients are connected to the server, which is doing hit detection / physics; clients have different latency for their connections to the server, ranging from 50ms to 500ms; they want to shoot weapons with fairly long reload/cool-down times (assume exactly 10 seconds); it is important that they get to shoot these weapons close to the cool-down time, as if some clients manage to shoot sooner than others (either because they are "early" or the others are "late") they gain a significant advantage; I need to show the time remaining for reload on the player's screen; and clients can have clocks which are flat-out wrong (bad timezones, etc.). What I'm currently doing to deal with latency: the client collects server-side state in a history, tagged with server timestamps; the client assesses his time difference with server time: behindServerTimeNs = (behindServerTimeNs + (System.nanoTime() - receivedState.getServerTimeNs())) / 2; the client renders all state received from the server 200 ms behind his current time, adjusted by what he believes his time difference with server time is (whether due to wrong clocks, or lag). If he has server states on both sides of that calculated time, he (mostly LERP) interpolates between them; if not, then he (LERP) extrapolates. No other client-side prediction of movement (e.g., to make his vehicle seem more responsive) is done so far, but maybe it will be added later. So how do I properly add weapon reload timers? My first idea would be for the server to send each player, with each world state update, the time when his reload will be done; the client then adjusts it for the clock difference and thus can estimate when the reload will be finished in client time (perhaps also allowing for the latency that the shoot message from client to server will incur?), and if the user mashes the "shoot" button after (or perhaps even slightly before?) that time, sends the shoot event. The server would get the shoot event and consider the time the shot was made as the server time when it was received. It would then discard it if it is nowhere near reload time, execute it immediately if it is past reload time, and hold it for a few physics cycles until reload is done in case it was received a bit early. It does all seem a bit convoluted, and I'm wondering whether it will work (e.g., whether it won't be the case that players with lower ping get better reload rates), and whether there are more elegant solutions to this problem.
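
    A minimal sketch of the client-side bookkeeping described above, written in Java since the question already uses System.nanoTime(). It reuses the offset-smoothing formula from the question and treats the server-sent "reload done" timestamp as authoritative; the class and method names are hypothetical, and the server would still enforce the cool-down itself (holding a slightly early shot for a few physics ticks, as described).

    ```java
    // Hypothetical client-side reload timer; names are illustrative only.
    public class ReloadTimer {
        private long behindServerTimeNs = 0;   // smoothed client-vs-server time offset
        private long reloadDoneServerNs = 0;   // server time at which the reload completes

        // Called for every world-state update received from the server.
        public void onServerState(long serverTimeNs, long reloadDoneNs) {
            // Same running-average smoothing as in the question.
            behindServerTimeNs = (behindServerTimeNs + (System.nanoTime() - serverTimeNs)) / 2;
            reloadDoneServerNs = reloadDoneNs;
        }

        // Current server time as estimated by the client.
        private long estimatedServerNowNs() {
            return System.nanoTime() - behindServerTimeNs;
        }

        // Remaining cool-down to display on the player's screen, in seconds.
        public double remainingSeconds() {
            long remainingNs = reloadDoneServerNs - estimatedServerNowNs();
            return Math.max(0L, remainingNs) / 1_000_000_000.0;
        }

        // Whether the client should allow the "shoot" message to be sent now.
        public boolean canShoot() {
            return estimatedServerNowNs() >= reloadDoneServerNs;
        }
    }
    ```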

    Read the article

  • Managing Regulated Content in WebCenter: USDM and Oracle Offer a New Part 11 Compliant Solution for Life Sciences

    - by Michael Snow
    Guest post today provided by Oracle partner, USDM. Regulated Content in WebCenter: USDM and Oracle offer a new Part 11 compliant solution for Life Sciences (White Paper). Life science customers now have the ability to take advantage of all of the benefits of Oracle’s WebCenter Content, a global leader in Enterprise Content Management. For the past year, USDM has been developing best practice compliance solutions to meet regulated content management requirements for 21 CFR Part 11 in WebCenter Content. USDM has been an expert in ECM for life sciences since 1999, and in 2011 certified that WebCenter was a 21 CFR Part 11 compliant content management platform (White Paper). In addition, USDM has built Validation Accelerator Packs for WebCenter to enable life science organizations to quickly and cost effectively validate this world-class solution. With the Part 11 certification, Oracle’s WebCenter now provides regulated life science organizations the ability to manage REGULATORY content in WebCenter, as well as the ability to take advantage of ALL of the additional functionality of WebCenter, including a complete, open, and integrated portfolio of portal, web experience management, content management and social networking technology. Here are a few screenshot examples of Part 11 functionality included in the product: E-Sign, E-Sign Render, Meta Data History, Audit Trail Report, and Access Reporting. Gone are the days that life science companies have to spend millions of dollars a year to implement, maintain, and validate ECM systems that no longer meet the ever-changing business and regulatory requirements. Life science companies now have the ability to use WebCenter Content, an ECM system with a substantially lower cost of ownership and unsurpassed functionality. Oracle has been #1 in life sciences because of their ability to develop cost-effective, easy-to-use, scalable solutions which help increase insight and efficiency to drive growth for their customers. Adding a world-class ECM solution to this product portfolio allows life science organizations the chance to get rid of costly ECM systems that no longer meet their needs and use WebCenter, part of the Oracle Fusion Technology stack, with their other leading enterprise applications.
    USDM provides:
    • Expertise in Life Science ECM Business Processes
    • Prebuilt Life Science Configuration in WebCenter
    • Validation Accelerator Packs for WebCenter
    USDM is very proud to support Oracle’s expanding commitment to Life Sciences. For more information please contact: [email protected]. Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • Assessing Relative Maintainability

    - by João Bragança
    We (a contractor, actually) are implementing an off-the-shelf system to replace a legacy homegrown system for the core domain of the company (designing widgets). Unfortunately both systems will have to run concurrently for some time, as the product just isn't ready yet. Also, the decision was made to only migrate some of the widgets from the legacy system, based on date of last sale activity. Later on a new requirement came down: certain people in the company, most of them outside of the widget development context, want to search all widgets. The search results screen has 3 pieces of data: a GUID, a human-readable id that is searchable, and a brief description (which may need to be searchable in the future). In the widget details, there will be multiple screens. These screens align very well along SOA / bounded context lines - a screen for marketing data, a screen for sales history, etc. UML ahead! I am probably using the wrong kind of arrows here so please forgive me. The current solution - which is not in production yet - is something like the following: both systems will be queried and the controller will merge the results. The new system has its own proprietary query language (we've alleviated this a bit with a LINQ provider). It also puts a lot of data on the wire. 15 search results typically run about 60k of unintelligible SOAP-wrapped XML. So I would prefer to avoid querying this system directly. These two systems publish events to help us integrate with other systems, mainly an ERP system. One of these events contains all the data necessary for the search screen. I proposed the following alternative: However, I am being told that 'adding another database' will create more maintenance down the road. I believe this to be false, as I had to add a relatively simple feature that took several hours longer than anticipated because of this merging code. I want to get a feel for which system is more maintainable in the long run. I personally have not had the burden of maintaining any large system. I want something more than my gut. Specifically I'd like to know if having more, specialized physical databases is more or less maintainable than having fewer, larger physical databases.
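
    The event-fed read model in the proposed alternative can stay very small, since the published event already carries the three searchable fields. The sketch below is only an illustration of that idea, assuming the event really does contain the GUID, human-readable id, and description; the names are made up, and the in-memory map simply stands in for whatever "another database" would actually be (a single denormalized table would do).

    ```java
    // Hypothetical read model fed by the "widget changed" events; names are illustrative only.
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.stream.Collectors;

    record WidgetSummary(String guid, String humanReadableId, String description) {}

    class WidgetSearchReadModel {
        private final Map<String, WidgetSummary> byGuid = new ConcurrentHashMap<>();

        // Called whenever either system publishes its widget event; an idempotent upsert.
        public void onWidgetEvent(WidgetSummary summary) {
            byGuid.put(summary.guid(), summary);
        }

        // Backing query for the search results screen (matches against the human-readable id).
        public List<WidgetSummary> search(String term) {
            String t = term.toLowerCase();
            return byGuid.values().stream()
                    .filter(w -> w.humanReadableId().toLowerCase().contains(t))
                    .collect(Collectors.toList());
        }
    }
    ```

    Either way, the search controller then reads from one place instead of merging two result sets on every request.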

    Read the article

  • Update in Certification Exam Score Report Access Process!

    - by Richard Lefebvre
    Please note that exam results for all Oracle Certification exams will be accessed through CertView, starting October 30th, 2012. Exam results will no longer be available at the test center or on the Pearson VUE website. Candidates will receive an email from Oracle within 30 minutes of completing the exam to let them know that their exam results are available on CertView. Candidates must have an Oracle Web Account to access CertView. This new process applies to exam results for all Oracle Certification exams - proctored and non-proctored, as well as beta exams. CertView, Oracle's self-service certification portal, will be the partners’ one-stop source for all their certification and exam history! Other benefits of this change include: driving all candidates to have an Oracle Web Account, which will lead to tighter integration with Oracle University records in the future; increased security around data privacy; and a higher validity rate for candidate email addresses. Existing benefits of CertView include self-service access to exam and certification records and logos, and access to Oracle's self-service certification verification.
    Accessing Exam Results
    Returning CertView Users
    • Click the link in the email sent by Oracle or go to certview.oracle.com
    • Select the See My New Exam Results Now link to view exam results
    • Select the Print My New Exam Results Now link to print exam results
    New CertView Users - Who Have An Oracle Web Account
    • First-time users must authenticate their CertView account
    • Account authentication requires the Oracle Testing ID and email address from your Pearson VUE profile
    • Click the link in the email sent by Oracle or go to certview.oracle.com and follow the Authenticate My CertView Account link.
    New CertView Users - Who Do Not Have An Oracle Web Account
    • CertView users are required to have an Oracle Web Account
    • To create an Oracle Web Account, go to certview.oracle.com and select the Create My Oracle Web Account Now link. Then follow the remaining instructions under I do not have an Oracle Web Account on that page.

    Read the article

  • MySQL Enterprise Monitor 3.0.11 has been released

    - by Andy Bang
    We are pleased to announce that MySQL Enterprise Monitor 3.0.11 is now available for download on the My Oracle Support (MOS) web site. It will also be available via the Oracle Software Delivery Cloud in about 1 week. This is a maintenance release that includes a few new features and fixes a number of bugs. You can find more information on the contents of this release in the change log. You will find binaries for the new release on My Oracle Support. Choose the "Patches & Updates" tab, and then choose the "Product or Family (Advanced Search)" side tab in the "Patch Search" portlet. You will also find the binaries on the Oracle Software Delivery Cloud in approximately 1 week. Choose "MySQL Database" as the Product Pack and you will find the Enterprise Monitor along with other MySQL products. Based on feedback from our customers, MySQL Enterprise Monitor (MEM) 3.0 offers many significant improvements over previous releases. Highlights include:
    • Policy-based automatic scheduling of rules and event handling (including email notifications) make administration of scale-out easier and automatic
    • Enhancements such as automatic discovery of MySQL instances, centralized agent configuration and multi-instance monitoring further improve ease of configuration and management
    • The new cloud and virtualization-friendly, "agent-less" design allows remote monitoring of MySQL databases without the need for any remote agents
    • Trends, projections and forecasting - Graphs and Event handlers inform you in advance of impending file system capacity problems
    • Zero Configuration Query Analyzer - Works "out of the box" with MySQL 5.6 Performance_Schema (supported by 5.6.14 or later)
    • False positives from flapping or spikes are avoided using exponential moving averages and other statistical techniques
    • Advisors can analyze data across an entire group; for example, the Replication Configuration Advisor can scan an entire topology to find common configuration errors like duplicate server UUIDs or a slave whose version is less than its master's
    More information on the contents of this release is available here:
    • What's new in MySQL Enterprise Monitor 3.0?
    • MySQL Enterprise Edition: Demos
    • MySQL Enterprise Monitor Frequently Asked Questions
    • MySQL Enterprise Monitor Change History
    More information on MySQL Enterprise and the Enterprise Monitor can be found here:
    • http://www.mysql.com/products/enterprise/
    • http://www.mysql.com/products/enterprise/monitor.html
    • http://www.mysql.com/products/enterprise/query.html
    • http://forums.mysql.com/list.php?142
    If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to http://www.mysql.com/trials, or contact Sales at http://www.mysql.com/about/contact. If you haven't looked at MEM recently, and especially MEM 3.0, please do so now and let us know what you think. Thanks and Happy Monitoring! - The MySQL Enterprise Tools Development Team

    Read the article

  • SMTP POP3 & PST. Acronyms from Hades.

    - by mikef
    A busy SysAdmin will occasionally have reason to curse SMTP. It is, certainly, one of the strangest events in the history of IT that such a deeply flawed system, designed originally purely for campus use, should have reached its current dominant position. The explanation was that it was the first open-standard email system, so SMTP/POP3 became the internet standard. We are, in consequence, dogged by a system with security weaknesses so extreme that messages are sent in plain text and you have no real assurance as to who the message came from anyway (SMTP-AUTH hasn't really caught on). Even without the security issues, the use of SMTP in an office environment presents a management nightmare for all commercial users responsible for complying with all regulations that control the conduct of business, such as tracking, retaining, and recording company documents. SMTP mail developed from various Unix-based systems designed for campus use that took the mail analogy so literally that mail messages were actually delivered to the users, using a 'store and forward' mechanism. This meant that, from the start, the end user had to store, manage and delete messages. This is a problem that has passed through all the releases of MS Outlook: it has to be able to manage mail locally in the dreaded PST file. As a stand-alone system, Outlook is flawed by its neglect of any means of automatic backup. Previous Outlook PST files actually blew up without warning when they reached the 2 Gig limit and became corrupted and inaccessible, leading to a thriving industry of 3rd party tools to clear up the mess. Microsoft Exchange is, of course, a server-based system. Emails are less likely to be lost in such a system if it is properly run. However, there is nothing to stop users from using local PSTs as well. There is the additional temptation to load emails onto mobile devices or USB keys for off-line working. The result is that the System Administrator is faced with a complex hybrid system where backups have to be taken from servers and PCs scattered around the network, where duplication of emails causes storage issues, and document retention policies become impossible to manage. If one adds to that the complexity of mobile phone email readers and mail synchronization, the problem is daunting. It is hardly surprising that the mood darkens when SysAdmins meet and discuss PST Hell. If you were promoted to the task of tormenting the souls of the damned in Hades, what aspects of the management of Outlook would you find most useful for your task? I'd love to hear from you. Cheers, Michael

    Read the article

  • MySQL Enterprise Monitor 3.0.3 Is Now Available

    - by Andy Bang
    We are pleased to announce that MySQL Enterprise Monitor 3.0.3 is now available for download on the My Oracle Support (MOS) web site. It will also be available via the Oracle Software Delivery Cloud with the November update in about 1 week. This is a maintenance release that fixes a number of bugs. You can find more information on the contents of this release in the change log. You will find binaries for the new release on My Oracle Support. Choose the "Patches & Updates" tab, and then use the "Product or Family (Advanced Search)" feature. You will also find the binaries on the Oracle Software Delivery Cloud in approximately 1 week. Choose "MySQL Database" as the Product Pack and you will find the Enterprise Monitor along with other MySQL products. Based on feedback from our customers, MySQL Enterprise Monitor (MEM) 3.0 offers many significant improvements over previous releases. Highlights include:
    • Policy-based automatic scheduling of rules and event handling (including email notifications) make administration of scale-out easier and automatic
    • Enhancements such as automatic discovery of MySQL instances, centralized agent configuration and multi-instance monitoring further improve ease of configuration and management
    • The new cloud and virtualization-friendly, "agent-less" design allows remote monitoring of MySQL databases without the need for any remote agents
    • Trends, projections and forecasting - Graphs and Event handlers inform you in advance of impending file system capacity problems
    • Zero Configuration Query Analyzer - Works "out of the box" with MySQL 5.6 Performance_Schema (supported by 5.6.14 or later)
    • False positives from flapping or spikes are avoided using exponential moving averages and other statistical techniques
    • Advisors can analyze data across an entire group; for example, the Replication Configuration Advisor can scan an entire topology to find common configuration errors like duplicate server UUIDs or a slave whose version is less than its master's
    More information on the contents of this release is available here:
    • What's new in MySQL Enterprise Monitor 3.0?
    • MySQL Enterprise Edition: Demos
    • MySQL Enterprise Monitor Frequently Asked Questions
    • MySQL Enterprise Monitor Change History
    More information on MySQL Enterprise and the Enterprise Monitor can be found here:
    • http://www.mysql.com/products/enterprise/
    • http://www.mysql.com/products/enterprise/monitor.html
    • http://www.mysql.com/products/enterprise/query.html
    • http://forums.mysql.com/list.php?142
    If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to http://www.mysql.com/trials, or contact Sales at http://www.mysql.com/about/contact. If you haven't looked at MEM recently, and especially MEM 3.0, please do so now and let us know what you think. Thanks and Happy Monitoring! - The MySQL Enterprise Tools Development Team

    Read the article

  • Windows Phone 7, taking the mobile out of mobile

    - by markritchie
    I’ll start off this blog entry with a little background which is necessary for you to understand my situation. I moved from the UK to Germany about three years ago, and while I can converse in German, English is still very much my first language and the language I want my content delivered in. I happily purchased a HTC HD7 when WP7 was released here in Germany, thinking foolishly that Microsoft really would get internationalization and localization, as it’s at the heart of their business and developer products. Overall I’m very happy with my WP7; the camera sucks, but that’s more HTC’s fault than Microsoft’s, and other than that it’s a very promising platform. My problems started when I purchased my first app. Initially everything appeared to be fine and things were as smooth as they had been with my previous free applications; however, about a month later I received an email from Zune informing me that the credit card that they had registered against my account had expired. No problem, I foolishly thought; I’ll simply add a new one. I don’t want to bore you with the details of what I’ve been through, other than to give the lowlights: trying numerous websites, posts on Microsoft Answers and a tweet to Microsoft Support, all to no avail. Today somebody suggested that I call the Xbox support line in the UK, as they had solved their billing problems this way. So I called up the Xbox support line, since I hadn’t been able to resolve the problem with my WP7 using the Zune portal (anybody else think Microsoft might need some consolidation here?). After being on the phone with a very friendly representative in Ireland, I was informed that because my British credit card has a billing address in Germany, they are unable to accept it as a credit card. Because my Live ID and Zune Tag are registered in the UK, they cannot take my card. Their solution is that I have to create a new Zune Tag that has its locale in Germany and then associate it with my phone. Now, the first thing that happens when I go to register for a Zune Tag with a German locale is that it very kindly switches into German for me; nothing quite like reading a EULA in a language you’re not fluent in. I’ve no idea how this will affect the apps that are available to me in the app store, and I’m pretty sure it means that all my Xbox Live achievements will become history. And what if I were to move to, say, France at some point? Do I have to go through all this again? At the end of the day I’m trying to set something up so I can give them my money, and they’re making it VERY difficult for me. Could you imagine walking into a book store when you were on holiday and being told by the clerk that they couldn’t take your credit card because you came from another country?

    Read the article

  • Integrating with a payment provider; Proper and robust OOP approach

    - by ExternalUse
    History: We are currently using a so-called redirect model for our online payments (where you send the payer to a payment gateway, where he inputs his payment details - the gateway will then return him to a success/failure callback page). That's easy and straightforward, but unfortunately quite inconvenient and at times confusing for our customers (leaving the site, changing their credit card details with an additional login on another site, etc.). Intention & Problem description: We are now intending to switch to an integrated approach using an exchange of XML requests and responses. My problem is how to cater for all (or rather most) of the things that may happen during processing - bearing in mind that normally simplicity is robust whereas complexity is fragile. Examples: User abort: The user inputs credit card details and hits submit. An XML message is sent to the provider's gateway and we are waiting for the response. The user hits "stop" in his browser or closes the window. ignore_user_abort() in PHP may be an option - but is that reliable? Might it be better to redirect the user to a "please wait" page, which in turn opens an AJAX or other request to the actual processor that does not rely on the connection? Database goes away: sounds over-complicated, but with e.g. a web server in the States and a DB in the UK, it has happened and will happen again: the user puts his order together, the payment request has been sent to the provider, but the response cannot be stored in the database. What approach could I use, using PHP, to sort of start an SQL-like "transaction" that only at the very end gets committed or rolled back, depending on the individual steps? Should neither commit nor rollback have happened, I could sort of "lock" the user to prevent him from paying again or from improperly accounting for payments - but how? And what else do I need to consider technically? None of the integration examples of e.g. Worldpay, Realex or SagePay offer any insight, and neither Google nor my search terms were good enough to find somebody else's thoughts on this. Thank you very much for any insight on how you would approach this!
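
    One way to picture the "transaction that only commits at the very end" idea is to keep an explicit payment-attempt record with its own state, written durably before the gateway exchange and updated after it. The sketch below is only an illustration of that bookkeeping, written in Java purely for brevity rather than PHP, and every class, interface and method name in it is an assumption rather than part of any real gateway API.

    ```java
    // Hypothetical "record the intent first, settle it later" bookkeeping.
    import java.util.UUID;

    enum PaymentState { PENDING, CONFIRMED, FAILED, UNKNOWN }

    record CardDetails(String pan, String expiry) {}

    interface PaymentGateway { boolean charge(String idempotencyKey, CardDetails card) throws Exception; }

    interface AttemptStore { void save(PaymentAttempt attempt); }   // durable storage (a DB table, etc.)

    class PaymentAttempt {
        final String orderId;
        final String idempotencyKey;    // sent to the gateway so a retry cannot charge twice
        PaymentState state = PaymentState.PENDING;

        PaymentAttempt(String orderId, String idempotencyKey) {
            this.orderId = orderId;
            this.idempotencyKey = idempotencyKey;
        }
    }

    class PaymentService {
        private final PaymentGateway gateway;
        private final AttemptStore store;

        PaymentService(PaymentGateway gateway, AttemptStore store) {
            this.gateway = gateway;
            this.store = store;
        }

        PaymentState pay(String orderId, CardDetails card) {
            // 1. Persist the attempt before talking to the provider, so a user abort
            //    or a lost connection later still leaves a traceable PENDING record.
            PaymentAttempt attempt = new PaymentAttempt(orderId, UUID.randomUUID().toString());
            store.save(attempt);

            // 2. Call the gateway. If this blows up, the attempt stays PENDING/UNKNOWN
            //    and a background reconciliation job can query the provider later.
            try {
                boolean approved = gateway.charge(attempt.idempotencyKey, card);
                attempt.state = approved ? PaymentState.CONFIRMED : PaymentState.FAILED;
            } catch (Exception e) {
                attempt.state = PaymentState.UNKNOWN;   // neither "commit" nor "rollback" happened
            }

            // 3. Persist the outcome; until a PENDING/UNKNOWN record is cleared by
            //    reconciliation, the user can be "locked" from paying this order again.
            store.save(attempt);
            return attempt.state;
        }
    }
    ```

    The same pattern translates directly to PHP: one extra table of payment attempts, plus whatever idempotency or duplicate-check mechanism the chosen provider supports.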

    Read the article

  • SPARC T4 power management

    - by user12798668
    The SPARC T4 servers include power management features that can be controlled from the Service Processor (SP). Setting the ILOM power management policy to elastic - set /SP/powermgmt policy=elastic - lets the platform adjust its power consumption to the actual load. The power consumption history can be checked in ILOM under /SYS/VSP/history/0. On a SPARC T4-1, the readings (timestamp = power in W) looked like this:
    Mar 26 12:04:54 = 294
    Mar 26 12:03:53 = 292
    Mar 26 12:02:53 = 292
    Mar 26 12:01:53 = 293
    Mar 26 12:00:53 = 292
    Mar 26 11:59:53 = 293
    Mar 26 11:58:54 = 293
    Mar 26 11:57:54 = 294
    Mar 26 11:56:54 = 293
    Mar 26 11:55:54 = 348
    Checking the processors with psrinfo shows something interesting:
    # psrinfo
    0 on-line since 03/23/2012 17:39:28
    1 on-line since 03/26/2012 20:53:56
    The SPARC T4-1 has a single SPARC T4 CPU with 8 cores and 8 threads per core, so psrinfo normally reports 64 virtual CPUs to the OS; here only 2 are on-line, because under the elastic policy resources that are not in use are powered down and brought back as the load requires them. Power management can also be used together with Oracle VM Server for SPARC (OVM for SPARC); see Part 10 of the Oracle VM Server for SPARC article series, and the OTN article How to Use the Power Management Controls on SPARC Servers for more on these controls. SPARC T4 will also be covered at Oracle OpenWorld Tokyo 2012: the keynote K1-01 "ENGINEERED FOR INNOVATION" on 4/4 (9:00-11:15) and session G2-01 on 4/5 (11:50-13:20). Details and registration: http://www.oracle.com/openworld/jp-ja/index.html (registration code 7264).

    Read the article

  • AWStats is processing log files but does not display them

    - by Wouter
    I've set up AWStats on my VPS to get some more insight into the traffic coming to my site. As instructed, I ran a manual build/update, which ran fine: sudo -u www-data ./awstats.pl -config=xxxx.com Create/Update database for config "/etc/awstats/awstats.xxxx.com.conf" by AWStats version 6.9 (build 1.925) From data in log file "/usr/share/doc/awstats/examples/logresolvemerge.pl /var/www/xxxx.com/logs/*-access.log |"... Phase 1 : First bypass old records, searching new record... Searching new records from beginning of log file... Phase 2 : Now process new records (Flush history on disk after 20000 hosts)... Warning: awstats has detected that some hosts names were already resolved in your logfile /usr/share/doc/awstats/examples/logresolvemerge.pl /var/www/xxxx.com/logs/*-access.log |. If DNS lookup was already made by the logger (web server), you should change your setup DNSLookup=1 into DNSLookup=0 to increase awstats speed. Jumped lines in file: 0 Parsed lines in file: 814 Found 0 dropped records, Found 0 corrupted records, Found 0 old records, Found 814 new qualified records. It also produced the file in the DatDir: /var/lib/awstats/awstats052010.xxxx.com.txt which contains what I would expect. BUT when I visit xxxx.com/awstats/awstats.pl it tells me Last Update: Never updated (See 'Build/Update' on awstats_setup.html page) and the rest of the page is blank. I'm pretty sure I set it up correctly, but now I cannot figure out why this is happening. Hopefully someone smarter than me can help me. Thank you in advance.

    Read the article

  • Why is firefox 4 not HW accelerated?

    - by acidzombie24
    At first I thought it was my computer, but then I tried Chrome. Why isn't Firefox hardware accelerated? The first screenshot shows Chrome at 23% usage. The second shows 59%. I have 2 CPUs, which is why it isn't at 100% usage. The game pictured is Biolab. Here's the text from about:support: Application Basics Name Firefox Version 4.0 User Agent Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0) Gecko/20100101 Firefox/4.0 Profile Directory Open Containing Folder Enabled Plugins about:plugins Build Configuration about:buildconfig Extensions Name Version Enabled ID Modified Preferences Name Value accessibility.typeaheadfind.flashBar 0 browser.places.importBookmarksHTML false browser.places.smartBookmarksVersion 2 browser.startup.homepage_override.buildID 20110303194838 browser.startup.homepage_override.mstone rv:2.0 extensions.lastAppVersion 4.0 gfx.font_rendering.directwrite.enabled true network.cookie.prefsMigrated true places.history.expiration.transient_current_max_pages 127602 privacy.sanitize.migrateFx3Prefs true Graphics Adapter Description Mobile Intel(R) 4 Series Express Chipset Family Vendor ID 8086 Device ID 2a42 Adapter RAM Unknown Adapter Drivers igdumd64 igd10umd64 igdumdx32 igd10umd32 Driver Version 8.15.10.2202 Driver Date 8-25-2010 Direct2D Enabled true DirectWrite Enabled true (6.1.7600.16385, font cache n/a) WebGL Renderer Google Inc. -- ANGLE -- OpenGL ES 2.0 (ANGLE 0.0.0.541) GPU Accelerated Windows 1/1 Direct3D 10

    Read the article

  • Unable to back up SQL Server databases using a maintenance plan

    - by Stephen Jennings
    I am trying to create a maintenance plan that will run automatically and back up my SQL Server 2005 databases automatically. I create a new maintenance plan and add a "Back Up Database Task", select all databases, and choose a path to back up to. When I save and try to execute this plan, I get the following error message: =================================== Execution failed. See the maintenance plan and SQL Server Agent job history logs for details. =================================== Job 'Backup.Subplan_1' failed. (SqlManagerUI) ------------------------------ Program Location: at Microsoft.SqlServer.Management.SqlManagerUI.MaintenancePlanMenu_Run.PerformActions() I've checked the maintenance plan log, the agent log, and just about every log file I can find and there are no entries at all to help me figure out why this is failing. If I right-click on a specific database and select "Back Up", the task succeeds. I tried changing the plan to back up just that one database and it still failed. I've tried running the plan with both Windows authentication and SQL Server authentication with the sa account. I also tried specifically granting the SQL Server Agent user account full privileges on the backup folder, but it still failed. While searching the web for clues, the only solution I've run across so far suggests running sp_configure 'allow_update', 0. I tried this but allow_update was already set to 0 and it did not fix the problem. The Windows server and SQL Server have all updates applied to them. Thanks for any suggestions!

    Read the article

  • Upgraded AGPM Server cannot connect to relocated archive

    - by thommck
    We were using the Advanced Group Policy Management (AGPM) v3.0 on our Windows Server 2008 DC. It kept the archive on the C: drive. When we upgraded to AGPM v4 we relocated the archive to the D: drive. Now when we try to look at a GPO's history in GPMC we get the following error: Failed to connect to the AGPM Server. The following error occurred: The server was unable to process the request due to an internal error. For more information about the error, either turn on IncludeExceptionDetailInFaults (either from ServiceBehaviorAttribute or from the configuration behavior) on the server in order to send the exception information back to the client, or turn on tracing as per the Microsoft .NET Framework 3.0 SDK documentation and inspect the server trace logs. System.ServiceModel.FaultException (80131501) You are able to click Retry or Cancel. Retry brings up the same error, and Cancel takes you back to GPMC, where the History tab displays "Archive not found". I installed the client on a Windows 7 computer (which is an unsupported setup) and it could read the server archive without any issues. I followed the TechNet article "Move the AGPM Server and the Archive" but that didn't make a difference. How can I tell the server where the archive is?

    Read the article

  • 5 reasons why you should upgrade to the new iPad (3rd generation)

    - by Gopinath
    Apple released the new iPad, 3rd generation, a couple of days ago, and they will be available in stores from March 16 onwards. It’s the best tablet available on the market, and for first-time buyers it’s a no-brainer to choose it. What about existing iPad owners? Should they upgrade their iPad 2 to the new iPad? This is the question on the lips of most iPad owners. In this post we will provide you 5 reasons why you should upgrade your iPad; if more than two reasons are convincing, then you should upgrade to the new iPad.
    Retina display – The best display ever made for a mobile device, a game changer. The new iPad comes with a Retina display with a screen resolution of 2048 x 1536, which is twice the resolution of the iPad 2. Undoubtedly the iPad 3’s display is the best display ever made for a mobile device, and it’s a game changer. With the better resolution on the iPad 3: eBook reading is going to be a pleasure, with clear and crisp text; watching HD movies on the iPad is going to be unbelievably good; the new games targeted for the Retina display are going to be more realistic, and needless to explain the pleasure of playing such games; and graphic artists and photo editors get professional on-screen rendering to create beautiful graphics.
    2x Faster & 2x Memory – Better games and more powerful apps. The new iPad is more powerful, with 2x faster graphics and 2x more memory. Apple claims that the A5X processor of the new iPad is 2x faster than the iPad 2 and 4x faster than the best graphics chips available from other vendors. The RAM of the new iPad is upgraded to 1 GB, compared with 512 MB on the iPad 2. With the faster processor and more memory, apps and games are going to be blazing fast.
    4G Internet – Browse the web at speeds of 42 MB/sec. Half of iPad owners are frequent commuters who access the internet over cellular networks, and the new iPad’s 4G LTE is going to be a big boon for their high data access needs. With the new iPad’s 4G LTE connectivity you can browse the web at 42 MB/sec, which means you can watch an HD video without buffering issues. The iPad 2 comes with 3G network support, and its browsing speeds are way less than the new iPad’s.
    5MP Camera – HD movie recording & gorgeous photography. The iPad 2 has a 0.7 megapixel camera, while the new iPad comes with a 5 megapixel camera. That is a huge boost for hobbyist photographers and videographers. With the new iPad you can shoot gorgeous photos and 1080p HD video. The iSight camera of the new iPad uses advanced optics with features like auto exposure, auto focus and face detection for up to 10 faces.
    Amazon pays up to $300 for an old iPad 2 16 GB Wifi, and more for other models. Do you know that you can trade in your iPad 2 16 GB Wifi for up to $300? Amazon has an excellent trade-in program for selling your used iPad 2s. Depending on the condition of the iPad 2, Amazon offers $234, $270 and $300 for 16 GB Wifi versions that are in Acceptable, Good and Like New condition respectively. The higher models of iPad 2 fetch you more money. With this great deal from Amazon, the amount of extra money you need to spend for the new iPad is almost half of its price. Visit Amazon Trade In’s website or read Amazon’s brilliant plan to pay you crazy money for your iPad 2 for more details. Related: New iPad Vs. iPad 2 – Side By Side Comparison Of Hardware Specification [Infographic]

    Read the article
