Search Results

Search found 7324 results on 293 pages for 'operations research'.


  • SQL Server IO handling mechanism can be severely affected by high CPU usage

    - by sqlworkshops
    Are you using SSD or SAN / NAS based storage and sporadically observe SQL Server experiencing high IO wait times, or does your DAS / HDD from time to time become very slow according to SQL Server statistics? Read on… I need your help to upvote my Connect item – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage. Instead of taking a few seconds, queries can take minutes or hours to complete when the CPU is busy.

    In SQL Server, when a query / request needs to read data that is not in the data cache, or when the request has to write to disk (for example, transaction log records), the request / task will queue up the IO operation and wait for it to complete (task in suspended state; this wait time is the resource wait time). When the IO operation is complete, the task will be queued to run on the CPU. If the CPU is busy executing other tasks, this task will wait (task in runnable state) until other tasks in the queue either complete, get suspended due to waits, or exhaust their quantum of 4ms (this is the signal wait time, which along with the resource wait time increases the overall wait time). When the CPU becomes free, the task will finally run on the CPU (task in running state).

    The signal wait time can be up to 4ms per runnable task; this is by design. So if a CPU has 5 runnable tasks in the queue, then after the resource becomes available this query might wait up to a maximum of 5 x 4ms = 20ms in the runnable state (normally less, as other tasks might not use their full quantum).

    When CPU usage is high (say, many CPU-intensive queries are running on the instance), there is a possibility that IO operations that have completed at the hardware and operating system level are not yet processed by SQL Server, keeping the task in the resource wait state for longer than necessary. In the case of an SSD, the IO operation might even complete in less than a millisecond, but it might take SQL Server hundreds of milliseconds, for instance, to process the completed IO operation. For example, let's say you have a user inserting 500 rows in individual transactions. When the transaction log is on an SSD or on a battery-backed controller with write cache enabled, all of these inserts will complete in 100 to 200ms. With a CPU-intensive parallel query executing across all CPU cores, the same inserts might take minutes to complete. WRITELOG wait time will be very high in this case (both under sys.dm_io_virtual_file_stats and sys.dm_os_wait_stats). In addition, you will notice a large number of WRITELOG waits, since log records are written by the LOG WRITER, and hence very high signal_wait_time_ms leading to more query delays. However, the Performance Monitor counter PhysicalDisk: Avg. Disk sec/Write will report very low latency times.

    Such delayed IO handling also affects read operations, with artificially high PAGEIOLATCH_SH wait time (while the number of PAGEIOLATCH_SH waits remains the same). This problem will manifest more and more as customers start using SSD based storage for SQL Server, since faster IOs drive CPU usage to its limits. We have a few workarounds for specific scenarios, but we think Microsoft should resolve this issue at the product level.
    We have a Connect item open – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage – with example scripts to reproduce this behavior. Please upvote the item so the issue will be addressed by the SQL Server product team soon. Thanks for your help and best regards, Ramesh Meyyappan. Home: www.sqlworkshops.com LinkedIn: http://at.linkedin.com/in/rmeyyappan
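    As a rough way to see this effect on your own instance, you can compare signal waits (time spent runnable, waiting for CPU) against resource waits for the wait types the post mentions. The sketch below is my illustration, not code from the original post, and the connection string is a placeholder you would adjust for your instance.

        // Minimal sketch (illustration only): compare signal vs. resource wait time
        // for the wait types discussed above. High signal_wait_time_ms relative to
        // the resource portion suggests completed IOs are queuing behind busy CPUs.
        using System;
        using System.Data.SqlClient;

        class SignalWaitCheck
        {
            static void Main()
            {
                // Placeholder connection string; adjust for your instance.
                const string connStr = "Server=.;Database=master;Integrated Security=true";
                const string query = @"
                    SELECT wait_type, wait_time_ms, signal_wait_time_ms,
                           wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms
                    FROM sys.dm_os_wait_stats
                    WHERE wait_type IN ('WRITELOG', 'PAGEIOLATCH_SH');";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(query, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0}: total={1}ms signal={2}ms resource={3}ms",
                                reader.GetString(0), reader.GetInt64(1),
                                reader.GetInt64(2), reader.GetInt64(3));
                        }
                    }
                }
            }
        }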

    Read the article

  • Attending a Career Fair: “Don’t be shy – Be prepared”

    - by jessica.ebbelaar
    There are a large number of ways to interact with companies nowadays. The career fair is a very effective and personal way to interact with a number of different companies in a very short period of time. Here are some simple tips to help you perform during a career fair.

    Do research. The key to being successful at a career fair is to do research before you go. Make a first selection of the companies you feel could be interesting for you, and include many types of employers. Once you have decided on the list of companies you want to visit, go to their career portals. Inform yourself about what each company does: what roles are available, how the company culture is described, and what impression the testimonials give you. The questions you still have after reviewing this information are the ones you can discuss with the company at the fair.

    Sell yourself. Visit the companies on your top-5 list first, so you will be at your highest energy level to make that first impression. Think in advance about what you are going to tell the recruiter. Prepare a 30-second introduction (including degree, strengths, skills and experience) and be confident when you talk about your experience. Remember to start the conversation with a smile, make good eye contact and give a firm handshake. You could be speaking to your next manager, so be professional! If you already know what jobs you are interested in, relate your skills and experience to the roles the company has available. If you are not yet sure, gather as much information as you can about employment and hiring procedures, specific skills necessary for different jobs, training and career paths.

    Stand out. Career fairs are very crowded and the attending companies meet a lot of potential candidates in one day, so you have to make sure you are noticed in a positive way. Good preparation and questions that show you have a solid understanding of the industry, the organization and its roles will help you. Be aware of the time demands on employers and do not monopolize an employer's time. Dress appropriately to make a good first impression.

    Bring your resume. Do not forget to bring your resume, in print or on a USB stick, to the fair. If you are searching for different types of jobs, bring different versions of your resume. Your resume should be short and professional, on white paper free of graphics or fancy print styles, with larger margins for interviewer notes.

    Follow up. After each conversation, ask whom you can contact for follow-up discussions about the specific roles. Use the back of a business card to record notes that help you remember important details and follow-up instructions. If no card is available, record the contact information and your comments in your notepad or phone. Last but not least, thank everyone you talk to for their time, and follow up as soon as possible with thank-you notes that address the companies' hiring needs and your qualifications, and that express your desire for a second interview.

    What not to do… Do not visit a company with a group of friends; interact with the companies on your own to make your own positive impression. Do not walk up to a recruiter and interrupt a current conversation; wait your turn and be polite. And absolutely avoid a grab-and-run on freebies! Take the time to speak to the company, and ask for a freebie at the end of the conversation if one is not offered to you. Good luck with the preparations for the career fair you will attend.
    Oracle recruiters look forward to meeting you! They will be present at a large number of fairs in the region. For an overview of the fairs, go to the Events & Calendar page on http://campus.oracle.com. If you have any questions related to this article, feel free to contact [email protected].

    Read the article

  • The Best Data Integration for Exadata Comes from Oracle

    - by maria costanzo
    Oracle Data Integrator and Oracle GoldenGate offer unique and optimized data integration solutions for Oracle Exadata. For example, customers that choose to feed their data warehouse or reporting database with near real-time data throughout the day can do so without decreasing the performance or availability of source and target systems. And if you ask why real-time, the short answer is: in today’s fast-paced, always-on world, business decisions need to use more relevant, timely data to be able to act fast and seize opportunities. A longer response to the "why real-time" question can be found in a related blog post.

    If we look at the solution architecture, as shown in the diagram below, Oracle Data Integrator and Oracle GoldenGate are both uniquely designed to take full advantage of the power of the database and to eliminate unnecessary middle-tier components.

    Oracle Data Integrator (ODI) is the best bulk data loading solution for Exadata. ODI is the only ETL platform that can leverage the full power of Exadata, integrate directly on the Exadata machine without any additional hardware, and by far provides the simplest setup and fastest overall performance on an Exadata system. We regularly see customers achieving a 5-10x boost when they move their ETL to ODI on Exadata; for some companies the performance gain is even higher. For example, a large insurance company did a proof of concept comparing ODI against a traditional ETL tool (one of the market leaders) on Exadata. The same process that took 5 hours and 11 minutes to complete using the competing ETL product took 7 minutes and 20 seconds with ODI: Oracle Data Integrator was 42 times faster than the conventional ETL when running on Exadata. This shows that Oracle's own data integration offering helps you gain the most from your Exadata investment with a truly optimized solution.

    GoldenGate is the best solution for streaming data from heterogeneous sources into Exadata in real time. Oracle GoldenGate can also be used together with Data Integrator for hybrid use cases that demand non-invasive capture and high-speed real-time replication. Oracle GoldenGate enables real-time data feeds from heterogeneous sources non-invasively, and delivers them to the staging area on the target Exadata system. ODI runs directly on Exadata to use the database engine's power to perform in-database transformations. Enterprise Data Quality is integrated with Oracle Data Integrator and enables ODI to load trusted data into the data warehouse tables. Only Oracle can offer all these technical benefits wrapped into a single, intelligent data warehouse solution that runs on Exadata. Compared to traditional ETL with add-on CDC, this solution offers:

    - Non-invasive data capture from heterogeneous sources, avoiding any performance impact on the source
    - No mid-tier; set-based transformations use database power
    - Mini-batches throughout the day or bulk processing nightly, which means maximum availability for the DW
    - An integrated solution with Enterprise Data Quality that enables leveraging trusted data in the data warehouse

    In addition to Starwood Hotels and Resorts, Morrison Supermarkets, the United Kingdom’s fourth-largest food retailer, has seen the power of this solution for their new BI platform and shared their story with us. Morrisons needed to analyze data across a large number of manufacturing, warehousing, retail, and financial applications with the goal of achieving a single view into operations for improved customer service.
The retailer deployed Oracle GoldenGate and Oracle Data Integrator to bring new data into Oracle Exadata in near real-time and replicate the data into reporting structures within the data warehouse—extending visibility into operations. Using Oracle's data integration offering for Exadata, Morrisons produced financial reports in seconds, rather than minutes, and improved staff productivity and agility. You can read more about Morrison’s success story here and hear from Starwood here. From an Irem Radzik article.

    Read the article

  • Oracle ties social, CRM, analytics products to customer experience

    - by Richard Lefebvre
    Oracle will embark on a new product strategy that centers on customer experience management, an approach driven by the company’s many recent acquisitions. The new approach, announced by the company Monday night, will be seen in an expansive suite that features familiar Oracle products -- such as its Fusion CRM platform -- and offerings the company recently gained through acquisitions, including FatWire, RightNow and Vitrue.

    Billed as Oracle Customer Experience (CX), the suite enables businesses to respond to a market centered on the customer experience, said Anthony Lye, the company’s senior vice president of CRM. Companies “are very aware their products are commoditizing,” Lye said in an interview last week, referring to how the Web and social media channels have empowered customers. Customer experiences start and mature outside of CRM, and applications today need to reflect that shift, Lye said. Businesses thus need to step away from a pure CRM model, he said.

    Oracle claims CX will improve customer experience management by connecting businesses with customers across Web sites and social channels. Companies can create a single, real-time view of the customer and use predictive analytics of interactions to strengthen the customer experience, Oracle said. “Companies have to connect with their customers wherever, whenever and however they want,” Lye said. “They have to know and understand their customer.”

    Lye promoted Oracle CX as a suite that will work across channels to complement the company’s applications. A new strategy has been “cooking” for years now, but the acquisitions Oracle has made over the past two years made the time right for a “unique collaboration,” Lye said. CX includes basic Oracle CRM solutions such as Siebel and the new Fusion Apps. It also includes the company’s MDM products: Enterprise Data Quality, Customer Hub and Product Hub. And the suite is rounded out by the services that Oracle recently bought, transactions that created or enhanced the company’s presence in social, marketing, e-commerce and customer service. For instance, FatWire provides tools for marketing, ATG focuses on e-commerce, and RightNow specializes in customer service. Two recent acquisitions -- Collective Intellect and Vitrue -- gave Oracle a seat at the social table. Collective Intellect is a social intelligence program, and Vitrue is a social marketing and engagement platform. Those acquisitions have yet to be finalized. Oracle hopes to eventually integrate the two social offerings, as well as most of the other services, into the CX suite.

    CX can integrate on Oracle’s standard middleware, and can give users a lower TCO by leveraging it as a single stack on premise or as a cloud solution. Lye deferred questions about the pricing of CX, and instead pitched Oracle’s ability to offer multiple customer experience solutions in one suite. Businesses have struggled with the complexity of infrastructure and modern services that communicate with customers, Lye said. “They’ve struggled to pull all these things together. We’ve done that,” he said.

    Stephen Powers, a research director at Forrester Research Inc. in Cambridge, Mass., said it’s not surprising for Oracle to offer the CX suite and a related customer experience strategy. “They’ve got CRM, ATG, FatWire. Clearly, it’s been the strategy for them,” he said. But the challenge for Oracle, and for any other vendor that has gone on an “acquisition spree,” is to connect its many products, Powers said. “The portfolio has to be more than the parts. They’ve got to realize the efficiencies and value of having these pieces to tie them together,” he said. “The proof is in the pudding. Adobe has done a nice job in its space with the products they’ve got. Now, Oracle has got to show it has something.”

    Albert McKeon (SearchCRM). Published: 25 Jun 2012: http://searchcrm.techtarget.com/news/2240158644/Oracle-ties-social-CRM-analytics-products-to-customer-experience

    Read the article

  • What is DevOps and Why You Should Care?

    - by Tanu Sood
    According to Wikipedia, DevOps (a portmanteau of development and operations) is a software development method that stresses communication, collaboration and integration between software developers and information technology (IT) professionals. DevOps is a response to the interdependence of software development and IT operations. It aims to help an organization rapidly produce software products and services.

    That definition of DevOps is the “what”. The “why” is even easier: standardized development methodology, clear communication and documented processes, supported by a standards-based, proven middleware platform, improve application development and management cycles, bring agility, and provide greater availability and security to your IT infrastructure. Clearly, DevOps is about connecting people, products and processes. Ultimately, DevOps is about connecting IT to business.

    If you haven’t already seen it, do check out Bob Rhubart’s feature on DevOps in the latest issue of Oracle Magazine. And for more information on Oracle Fusion Middleware, the #1 application infrastructure foundation, visit us at oracle.com.

    Read the article

  • Help Me Help You Fix That

    - by BuckWoody
    If you've been redirected here because you posted on a forum, or asked a question in an e-mail, the person wanted you to know how to get help quickly from a group of folks who are willing to do so - but whose time is valuable. You need to put a little effort into the question first to get others to assist. This is how to do that. It will only take you a moment to read...

    1. State the problem succinctly in the title. When an e-mail thread starts, or a forum post is the "head" of the conversation, you'll attract more helpers by using a descriptive headline than a vague one.
    This: "Driver for Epson Line Printer Not Installing on Operating System XYZ"
    Not this: "Can't print - PLEASE HELP"

    2. Explain the error completely. Make sure you include all pertinent information in the request. More information is better; there's almost no way to add too much data to the discussion. What you were doing, what happened, what you saw, the error message, visuals, screen shots, whatever you can include.
    This: "I'm getting error '5203 - Driver not compatible with Operating System since about 25 years ago' in a message box on the screen when I tried to run the SETUP.COM file from my older computer. It was a 1995 Compaq Proliant and worked correctly there."
    Not this: "I get an error message in a box. It won't install."

    3. Explain what you have done to research the problem. If the first thing you do is ask a question without doing any research, you're lazy, and no one wants to help you. Using one of the many fine search engines you can almost always find the answer to your problem. Sometimes you can't. Do yourself a favor: open a notepad app, and paste in the URLs as you look them up. If you get your answer, don't save the note. If you don't get an answer, send the list along with the problem. It will show that you've tried, and also keep people from sending you links that you've already checked.
    This: "I read the fine manual, and it doesn't mention Operating System XYZ for some reason. Also, I checked the following links, but the instructions there didn't fix the problem: "
    Not this: <NULL>

    4. Say "please" and "thank you". Remember, you're asking for help. No one owes you their valuable time. Ask politely, don't pester, endure the people who are rude to you, and when your question is answered, respond back to the thread or e-mail with a thank you to close it out. It helps others that have your same problem know that this is the correct answer.
    This: "I could really use some help here - if you have any pointers or things to try, I'd appreciate it."
    Not this: "I really need this done right now - why are there no responses?"
    This: "Thanks for those responses - that last one did the trick. Turns out I needed a new printer anyway, didn't realize they were so inexpensive now."
    Not this: <NULL>

    There are a lot of motivated people that will help you. Help them do that.

    Read the article

  • Strings in .NET are Enumerable

    - by Scott Dorman
    It seems like there is always some confusion concerning strings in .NET, both from developers who are new to the Framework and those that have been working with it for quite some time. Strings in the .NET Framework are represented by the System.String class, which encapsulates the data manipulation, sorting, and searching methods you most commonly perform on string data.

    In the .NET Framework, you can use System.String (the actual type name) or the language alias (for C#, string). They are equivalent, so use whichever naming convention you prefer, but be consistent. Common usage (and my preference) is to use the language alias (string) when referring to the data type and String (the actual type name) when accessing the static members of the class.

    Many mainstream programming languages (like C and C++) treat strings as a null-terminated array of characters. The .NET Framework, however, treats strings as an immutable sequence of Unicode characters which cannot be modified after it has been created. Because strings are immutable, all operations which modify the string contents are actually creating new string instances and returning those. They never modify the original string data.

    There is one important word in the preceding paragraph which many people tend to miss: sequence. In .NET, strings are treated as a sequence… in fact, they are treated as an enumerable sequence. This can be verified if you look at the class declaration for System.String, as seen below:

        // Summary:
        //     Represents text as a series of Unicode characters.
        public sealed class String : IEnumerable, IComparable,
            IComparable<string>, IEquatable<string>

    The first interface that String implements is IEnumerable, which has the following definition:

        // Summary:
        //     Exposes the enumerator, which supports a simple iteration over a
        //     non-generic collection.
        public interface IEnumerable
        {
            // Summary:
            //     Returns an enumerator that iterates through a collection.
            //
            // Returns:
            //     An System.Collections.IEnumerator object that can be used to
            //     iterate through the collection.
            IEnumerator GetEnumerator();
        }

    As a side note, System.Array also implements IEnumerable. Why is that important to know? Simply put, it means that any operation you can perform on an array can also be performed on a string. This allows you to write code such as the following:

        string s = "The quick brown fox";
        foreach (var c in s)
        {
            System.Diagnostics.Debug.WriteLine(c);
        }
        for (int i = 0; i < s.Length; i++)
        {
            System.Diagnostics.Debug.WriteLine(s[i]);
        }

    If you executed those lines of code in a running application, you would see each character of the string written to the Visual Studio Output window.

    In the case of a string, these enumerable or array operations return a char (System.Char) rather than a string. That might lead you to believe that you can get around the string immutability restriction by simply treating strings as an array and assigning a new character to a specific index location inside the string, like this:

        string s = "The quick brown fox";
        s[2] = 'a';

    However, if you were to write such code, the compiler will promptly tell you that you can’t do it. This preserves the notion that strings are immutable and cannot be changed once they are created. (Incidentally, there is no built-in way to replace a single character like this. It can be done, but it would require converting the string to a character array, changing the appropriate indexed location, and then creating a new string.)
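    As a minimal sketch of that workaround (my illustration, not code from the original post), the char-array round trip looks like this:

        string s = "The quick brown fox";
        char[] chars = s.ToCharArray();       // copy the characters into a mutable array
        chars[2] = 'a';                       // arrays, unlike strings, can be modified
        string modified = new string(chars);  // build a new string: "Tha quick brown fox"

    Note that the original string s is untouched; the workaround produces a brand new string instance, which is consistent with string immutability.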

    Read the article

  • Paper-free Customer Engagement

    - by Michael Snow
    Appropriate repost from our friends at the AIIM blog, Digital Landfill -- John Mancini, supporting our mission of enabling customer engagement through better technology choices.

    ----------

    My wife didn't even give me a card for #wpfd - and they say husbands are bad at remembering anniversaries. Well, today is the third World Paper Free Day. I just got off the Tweet Jam, and there was a host of ideas for getting rid of -- or at least reducing -- paper.

    When we first started talking about "paper-free," most of the reasons raised to pursue this direction were "green" reasons. I'm glad to see that the thinking has moved on to questions about how getting rid of paper and digitizing processes helps improve customer engagement. And the bottom line. And process responsiveness. Not that the "green" reasons have gone away, but it's nice to see a maturation in the BUSINESS reasons to get rid of paper.

    Our World Paper Free Handbook (do not, do not, do not print it!) looks at how less paper in the workplace delivers significant benefits. Key findings show eliminating paper from processes can improve the responsiveness of customer service by 300 percent. Removing paper from business processes and moving content to PCs and tablets has the added advantage of helping companies adopt mobile-enabled processes and eliminate elapsed time, lost forms, poor data and re-keying. To effectively mobile-enable processes and reduce reliance on paper, data should be captured as close to the point of origination as possible, which makes information easily available to whomever needs it, wherever they are, in the shortest time possible.

    This handbook summarizes the value of automating manual, paper-based processes. It then goes a step beyond to provide actionable steps that will set you on the path to productivity, profitability, and, yes, less paper. Get your copy today and send the link around to your peers and colleagues. Here's the link; please share it! http://www.aiim.org/Research-and-Publications/Research/AIIM-White-Papers/WPFD-Revolution-Handbook

    And don't miss out on the real-world discussions about increasing engagement with WebCenter in new webinars being offered over the next couple of weeks:
    - October 30, 2012: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter
    - November 1, 2012: WebCenter Content for Applications: Streamline Processes with Oracle WebCenter Content Management for Human Resources Applications
    - Available On-Demand: Using Oracle WebCenter to Content-Enable Your Business Applications

    Read the article

  • Universities 2030: Learning from the Past to Anticipate the Future

    - by Mohit Phogat
    What will the landscape of international higher education look like a generation from now? What challenges and opportunities lie ahead for universities, especially “global” research universities? And what can university leaders do to prepare for the major social, economic, and political changes—both foreseen and unforeseen—that may be on the horizon?

    The nine essays in this collection proceed on the premise that one way to envision “the global university” of the future is to explore how earlier generations of university leaders prepared for “global” change—or at least responded to change—in the past. As the essays in this collection attest, many of the patterns associated with contemporary “globalization” or “internationalization” are not new; similar processes have been underway for a long time (some would say for centuries).[1] A comparative-historical look at universities’ responses to global change can help today’s higher-education leaders prepare for the future.

    Written by leading historians of higher education from around the world, these nine essays identify “key moments” in the internationalization of higher education: moments when universities and university leaders responded to new historical circumstances by reorienting their relationship with the broader world. Covering more than a century of change—from the late nineteenth century to the early twenty-first—they explore different approaches to internationalization across Europe, Asia, Australia, North America, and South America.

    Notably, while the choice of historical eras was left entirely open, the essays converged around four periods: the 1880s and the international extension of the “modern research university” model; the 1930s and universities’ attempts to cope with international financial and political crises; the 1960s and universities’ role in an emerging postcolonial international development apparatus; and the 2000s and the rise of neoliberal efforts to reform universities in the name of international economic “competitiveness.”

    Each of these four periods saw universities adopt new approaches to internationalization in response to major historical-structural changes, and each has clear parallels to today. Among the most important historical-structural challenges that universities confronted were: (1) fluctuating enrollments and funding resources associated with global economic booms and busts; (2) new modes of transportation and communication that facilitated mobility (among students, scholars, and knowledge itself); (3) increasing demands for applied science, technical expertise, and commercial innovation; and (4) ideological reconfigurations accompanying regime changes (e.g., from one internal regime to another, from colonialism to postcolonialism, from the cold war to globalized capitalism, etc.).

    Like universities today, universities in the past responded to major historical-structural changes by internationalizing: by joining forces across space to meet new expectations and solve problems on an ever-widening scale. Approaches to internationalization have typically built on prior cultural or institutional ties. In general, only when the benefits of existing ties had been exhausted did universities reach out to foreign (or less familiar) partners.
As one might expect, this process of “reaching out” has stretched universities’ traditional cultural, political, and/or intellectual bonds and has invariably presented challenges, particularly when national priorities have differed—for example, with respect to curricular programs, governance structures, norms of academic freedom, etc. Strategies of university internationalization that either ignore or downplay cultural, political, or intellectual differences often fail, especially when the pursuit of new international connections is perceived to weaken national ties. If the essays in this collection agree on anything, they agree that approaches to internationalization that seem to “de-nationalize” the university usually do not succeed (at least not for long). Please continue reading the other essays at http://globalhighered.wordpress.com/

    Read the article

  • concurrency::extent<N> from amp.h

    - by Daniel Moth
    Overview: We saw in a previous post how index<N> represents a point in N-dimensional space, and in this post we'll see how to define the N-dimensional space itself.

    With C++ AMP, an N-dimensional space can be specified with the template class extent<N>, where you define the size of each dimension. From a look-and-feel perspective, you'd expect the programmatic interface of a point type and a size type to be similar (even though the concepts are different). Indeed, exactly like index<N>, extent<N> is essentially a coordinate vector of N integers ordered from most- to least-significant, BUT each integer represents the size for that dimension (and hence cannot be negative). So, if you read the description of index, you won't be surprised by the description of extent<N> below.

    There is the rank field returning the value of N you passed as the template parameter. You can construct one extent from another (via the copy constructor or the assignment operator), you can construct it by passing an integer array, or via convenience constructor overloads for 1-, 2-, and 3-dimension extents. Note that the parameterless constructor creates an extent of the specified rank with all bounds initialized to 0. You can access the components of the extent through the subscript operator (passing it an integer). You can perform some arithmetic operations between extent objects through operator overloading, i.e. ==, !=, +=, -=, +, -. There are operator overloads so that you can perform operations between an extent and an integer: -- (pre- and post-decrement), ++ (pre- and post-increment), %=, *=, /=, +=, -=. Finally, there are additional overloads for plus and minus (+, -) between extent<N> and index<N> objects, returning a new extent object as the result. In addition to the usual suspects, extent offers a contains function that tests if an index is within the bounds of the extent (assuming an origin of zero). It also has a size function that returns the total linear size of this extent<N> in units of elements.

    Example code:

        extent<2> e(3, 4);
        _ASSERT(e.rank == 2);
        _ASSERT(e.size() == 3 * 4);
        e += 3;
        e[1] += 6;
        e = e + index<2>(3, -4);
        _ASSERT(e == extent<2>(9, 9));
        _ASSERT( e.contains(index<2>(8, 8)));
        _ASSERT(!e.contains(index<2>(8, 9)));

    grid<N>: Our upcoming pre-release bits also have a type similar to extent, grid<N>. The way you create a grid is by passing it an extent, e.g.

        extent<3> e(4, 2, 6);
        grid<3> g(e);

    I am not going to dive deeper into grid; suffice it for now to think of grid<N> simply as an alias for the extent<N> object, which you create when you encounter a function that expects a grid object instead of an extent object.

    Usage: The extent class on its own simply defines the size of the N-dimensional space. We'll see in future posts that when you create containers (arrays) and wrappers (array_views) for your data, it is an extent<N> object that you'll need to use to create those (and an index<N> object to index into them). We'll also see that it is a grid<N> object that you pass to the new parallel_for_each function that I'll cover in the next post.

    Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • Proving What You are Worth

    - by Ted Henson
    Here is a challenge for everyone. Just about everyone has been asked to provide or calculate the Return on Investment (ROI), so I will assume everyone has a method they use. The problem with stopping once you have an ROI is that those in the C-Suite probably do not care about the ROI as much as Return on Equity (ROE). Shareholders are mostly concerned with the return on the money they invested; Warren Buffett looks at ROE when deciding whether to make a deal or not. This article will outline how you can add more meaning to your ROI and show how you can potentially enhance the ROE of the company.

    First I want to start with the base definitions I am using for ROI and ROE. Return on investment (ROI) and return on equity (ROE) are ways to measure management effectiveness, parts of a system of measures that also includes profit margins for profitability, price-to-earnings ratio for valuation, and various debt-to-equity ratios for financial strength. Without a set of evaluation metrics, a company's financial performance cannot be fully examined by investors. ROI and ROE calculate the rate of return on a specific investment and on the equity capital respectively, assessing how efficiently financial resources have been used. Typically, the best way to improve financial efficiency is to reduce production cost, so that will be the focus.

    Now that the challenge has been made and terms have been defined, let's go deeper. Most research about implementation stops short at system start-up and seldom addresses post-implementation issues. However, we know implementation is a continuous improvement effort, and continued efforts after system start-up will influence the ultimate success of a system.

    Most UPK ROIs I have seen only include the cost savings in developing the training material. Some will also include savings based on reduced help desk calls. Using just those values you get a good ROI. To get an ROE you need to go a little deeper. Typically, the best way to improve financial efficiency is to reduce production cost, which is the purpose of implementing or upgrading an enterprise application.

    Let's assume the new system is up and running and all users have been properly trained and are comfortable using the system. You provide senior management with your ROI that justifies the original cost. What you want to do now is develop a good baseline value to measure the current efficiency. Using usage tracking you can look for various patterns. For example, you may find that users that are accessing UPK assistance are processing a procedure, such as entering an order, 5 minutes faster than those that don't. You do some research and discover each minute saved in processing a transaction saves the company one dollar. That translates to the company saving five dollars on every transaction. Assuming 100,000 transactions are performed a year, and all users improve their performance, the company will be saving $500,000 a year. That $500,000 can be re-invested, used to reduce debt or paid to the shareholders.

    With continued refinement during the life cycle, you should be able to find ways to reduce cost. These are the type of numbers and productivity gains that senior management and shareholders want to see. Being able to quantify savings and increase productivity may also help when seeking a raise or promotion.
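    A minimal sketch of that savings arithmetic (my illustration; the dollar-per-minute figure and transaction volume are the assumptions from the example above, to be replaced with your own measurements):

        // Hypothetical inputs from the example above; adjust to your own data.
        double minutesSavedPerTransaction = 5;   // users with UPK assistance are 5 minutes faster
        double dollarsPerMinute = 1;             // each minute saved is worth $1
        int transactionsPerYear = 100_000;

        double annualSavings = minutesSavedPerTransaction * dollarsPerMinute * transactionsPerYear;
        System.Console.WriteLine($"Annual savings: {annualSavings:C0}");  // $500,000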

    Read the article

  • Introducing the Metro User Interface on Windows 2012

    - by andywe
    Although I am a big fan of using PowerShell for many of my server operations, that aspect is well covered by those far more knowledgeable than I, and there is vast information around the web on it already. The new Metro interface, and getting around both Windows 8 and Windows Server 2012, though, is relatively new, even for those who ran the previews.

    What is this? A blank desktop! Where did the Start button go? Well, it is still there... sort of. It is hidden, and acts like an auto-hidden component that appears only when the mouse is hovered over the lower left corner of the screen. Those familiar with Gnome or OS X can relate this to the "Hot Corners" functions. To get to the Start button, hover your mouse in the very left corner of the task bar. Let it sit there a moment, and a small blue square with colored tiles in it, labeled Start, will appear. Click it. I clicked it and now I have all the tiles... What is this?

    Welcome to the Metro interface. This is a much more modern look, and although it at first seems weird and cumbersome, I have actually found that it is a bit more extensible, allowing greater organization and customization than the older Explorer desktop. If you look closely, you'll see each box represents either a program or a program group.

    First, a few basics about using the Start view. First and foremost, a right mouse click will bring up a bar on the bottom, with an icon towards the right. Notice it is titled "All Apps". An even easier way in many places is to hover your mouse in the exact opposite corner, in the upper right. A sidebar will open and expose what used to be a widget bar (remember Vista?), with options for Search, Start, and Settings.

    OK, great, but where is everything? It's all there... Click the All Apps icon. Look better? Notice the scroll bar at the bottom. Move it right... your desktop is sized to your content, so you can have a smaller or larger number of programs exposed. Each icon can be secondary-clicked (right mouse click for most of us), and an options bar at the bottom, rather than the old small context menu, is opened with some very familiar options.

    Notice the top of the Windows Explorer window has some new features. You still have your right mouse click functions, but since the shortcuts for these items already exist... just copy them. There are many ways, but here is a long way to show you more of the interface:

    1. Right mouse click a program icon, and select the Open File Location option.
    2. Trusty File Explorer opens... but if you look closely at the top edge of the window, you'll see a nifty enhancement: an orange-colored box titled Shortcut Tools and another lavender box titled Application Tools. Each of these adds options at the top of the File Explorer window to make selection easy. Of course, you can still secondary-click an item in the listing window too.
    3. Click Shortcut Tools, right click your app shortcut and copy it. Then simply paste it onto the desktop outside the File Explorer window.

    Also note some of the newer features, such as the large icons up top below the menu that cover many common operations. The options change as you select each menu item. Well, that's it for this installment. I hope this helps you out.

    Read the article

  • MySQL Enterprise Backup 3.8.2 has been released!

    - by Hema Sridharan
    MySQL Enterprise Backup v3.8.2, a maintenance release of the online MySQL backup tool, is now available for download from the My Oracle Support (MOS) website as our latest GA release. It will also be available via the Oracle Software Delivery Cloud in approximately 1-2 weeks. A brief summary of the changes in MySQL Enterprise Backup version 3.8.2 is given below.

    A. Functionality Added or Changed:

    - MySQL Enterprise Backup has a new --on-disk-full command line option. mysqlbackup could hang when the disk became full, rather than detecting the low-space condition. mysqlbackup now monitors disk space when running backup commands, and users can specify the action to take at a disk-full condition with the --on-disk-full option. For more details, refer to this page.
    - MySQL Enterprise Backup has a new progress report feature, which periodically outputs short progress indicators on its operations to user-selected destinations (for example, stdout, stderr, a file, or other choices). For more details on progress report options, refer here.

    B. Bugs Fixed:

    - When --innodb-file-per-table=ON, if a table was renamed while backup-to-image was in progress, apply-log would fail when run on the backup. (Bug #16903973)
    - MySQL Server failed to start after a backup was restored if there had been online DDL transactions on partitioned tables during the time of backup. (Bug #16924499)
    - apply-log failed if ALTER TABLE ... REORGANIZE PARTITION was applied to partitioned InnoDB tables during backup. (Bug #16721824, Bug #16903951)
    - apply-incremental-backup might fail with an assertion error if the InnoDB tables being backed up were created in Barracuda format and with KEY_BLOCK_SIZE values different from the innodb_page_size. This fix ensures that different KEY_BLOCK_SIZE values are handled properly during incremental backup and apply-incremental-backup operations.
    - If a table was renamed following a full backup, a subsequent incremental backup could copy the .frm file with the new name, but not the associated .ibd file with the new name. After a restore, the InnoDB data dictionary could be in an inconsistent state. This issue primarily occurred if the table was not changed between the full backup and the subsequent incremental backup. (Bug #16262690)
    - After a full backup, if a table was renamed and modified, apply-incremental-backup would crash when run on the backup directory. (Bug #16262609)
    - The value of the binary log position in backup_variables.txt could be different from the output displayed during the backup-and-apply-log operation. (This issue did not occur if the backup and apply-log steps were done separately.) (Bug #16195529)
    - When using the --only-innodb-with-frm option, MySQL Enterprise Backup tried to create temporary files at unintended locations in the file system, which might cause a failure when, for example, the user had no write privilege for those locations. This fix makes sure the paths for the temporary files are correct. (Bug #14787324)
    - A backup process might hang when it ran into an LSN mismatch between a data file and the redo log. This fix makes sure the process does not hang and that it displays an error message showing the name of the problematic data file. (Bug #14791645)

    Please post your questions / comments about Backup in the forums. Thanks, MEB Team

    Read the article

  • Building Enterprise Smartphone App – Part 1: Why Build Smart Phone Apps

    - by Tim Murphy
    This is part 1 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    Intro: Most of us already carry smartphones. We play games on them. We keep up with what is going on with our friends and our favorite teams. We take pictures of our kids at their events. But the question is whether that is all they are good for. Many companies have aspects of their business that lend themselves to being performed on mobile devices. Some of them lean toward larger devices such as tablets, but many can be executed on smartphones. This and the following articles will discuss some of the possible applications of smartphone technology for businesses, the platforms that are available, and the considerations you need to make when building them. I'll take a look at some specific scenarios and wrap up with a couple of capabilities that are just emerging and can be used in the future.

    Why Build Enterprise Smartphone Applications? So what are some of the ways that you can leverage smartphone technology to gain efficiency in your business or a client's business? There are a few major areas where I have seen mobile platforms provide an advantage.

    Your mobile sales force is a key candidate for leveraging smartphone apps. They can visit clients in their retail location and place orders on site. It is a more personal approach which can gain you customer loyalty. A sales person may also gather information about the way a client does business or who their target market is. This allows you to focus marketing information or build customized support for your customer.

    You may also need to track physical inventory in a store. This is something that has historically been done with laser scanners, but with the camera capabilities in today's phones and tablets it is possible to use more general multi-purpose devices. This can save costs on both hardware and telecommunication contracts.

    Delivery verification is another area that historically has been the domain of specialized devices but can now be accomplished with smartphones. This also reduces costs because the same device is used for communicating with the driver and other operations. Add to that the navigation capability of smartphones and you can see how the return on investment increases.

    Executives are always on the go. They spend most of their time in meetings and yet they need access to decision-making information at their fingertips. With a smartphone app they can get alerts when major sales are closed or critical accounting processes are completed that may need their attention. They can also answer questions by instantly pulling up BI reports.

    I have often heard operations support people say that they need things like VPN and RDP from their phones. If they can also have notifications of outages or critical support requests, they can react to situations without needing to be tied to their desks.

    These are all valid reasons to build smartphone applications. In the next installment I will discuss platforms and features.

    Read the article

  • Webcast On-Demand: Building Java EE Apps That Scale

    - by jeckels
    With some awesome work by one of our architects, Randy Stafford, we recently completed a webcast on scaling Java EE apps efficiently. Did you miss it? No problem. We have a replay available on-demand for you. Just hit the '+' sign drop-down for access. Topics include:

    - Domain object caching
    - Service response caching
    - Session state caching
    - JSR-107
    - HotCache and more!

    Further, we had several interesting questions asked by our audience, and we thought we'd share a sampling of those here for you - just in case you had the same queries yourself. Enjoy!

    What is the largest Coherence deployment out there? We have seen deployments with over 500 JVMs in the Coherence cluster, and deployments with over 1000 JVMs using the Coherence jar file, in one system. On the management side there is an ecosystem of monitoring tools from Oracle and third parties with dashboards graphing values from Coherence's JMX instrumentation. For lifecycle management we have seen a lot of custom scripting over the years, but we've also integrated closely with WebLogic to leverage its management ecosystem for deploying Coherence-based applications and managing process life cycles. That integration introduces a new Java EE archive type, the Grid Archive or GAR, which embeds in an EAR and can be seen by a WAR in WebLogic. That integration also doesn't require any extra WebLogic licensing if Coherence is licensed.

    How is Coherence different from a NoSQL database like MongoDB? Coherence can be considered a NoSQL technology. It pre-dates the NoSQL movement, having been first released in 2001, whereas the term "NoSQL" was coined in 2009. Coherence has a key-value data model primarily but can also be used for document data models. Coherence manages data in memory currently, though disk persistence is in a future release currently in beta testing. Where the data is managed yields a few differences from the most well-known NoSQL products: access latency is faster with Coherence, though well-known NoSQL databases can manage more data. Coherence also has features that well-known NoSQL databases lack, such as grid computing, eventing, and data source integration. Finally, Coherence has had 15 years of maturation and hardening from usage in mission-critical systems across a variety of industries, particularly financial services.

    Can I use Coherence for local caching? Yes, and you get additional features beyond just a java.util.Map: expiration capabilities, size-limitation capabilities, eventing capabilities, etc.

    Are there APIs available for GoldenGate HotCache? It's mostly a black box: you configure it, and it just puts objects into your caches. However, you can treat it as a glass box and use Coherence event interceptors to enhance its behavior - and there are use cases for that.

    Are Coherence caches updated transactionally? Coherence provides several mechanisms for concurrency control. If a project insists on full-blown JTA / XA distributed transactions, Coherence caches can participate as resources, but nobody does that because it's a performance and scalability anti-pattern. At finer granularity, Coherence guarantees strict ordering of all operations (reads and writes) against a single cache key if the operations are done using Coherence's "EntryProcessor" feature. And Coherence has a unique feature called "partition-level transactions" which guarantees atomic writes of multiple cache entries (even in different caches) without requiring JTA / XA distributed transaction semantics.

    Read the article

  • Pass a JSON array to a WCF web service

    - by Tawani
    I am trying to pass a JSON array to a WCF service, but it doesn't seem to work. I actually pulled an array out of the service [GetStudents] and sent the exact same array back to the service [SaveStudents], and nothing (an empty array) was received. The JSON array is of the format:

        [
          {"Name":"John","Age":12},
          {"Name":"Jane","Age":11},
          {"Name":"Bill","Age":12}
        ]

    And the contracts are of the following format:

        //Contracts
        [DataContract]
        public class Student
        {
            [DataMember] public string Name { get; set; }
            [DataMember] public int Age { get; set; }
        }

        [CollectionDataContract(Namespace = "")]
        public class Students : List<Student>
        {
            public Students() { }
            public Students(IEnumerable<Student> source) : base(source) { }
        }

        //Operations
        public Students GetStudents()
        {
            var result = new Students();
            result.Add(new Student() { Name = "John", Age = 12 });
            result.Add(new Student() { Name = "Jane", Age = 11 });
            result.Add(new Student() { Name = "Bill", Age = 12 });
            return result;
        }

        //Operations
        public void SaveStudents(Students list)
        {
            Console.WriteLine(list.Count); //It always returns zero
        }

    Is there a particular way to send an array to a WCF REST service?
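    One thing worth verifying (a hedged suggestion, not a confirmed diagnosis): the operation's declared body style must match the JSON shape being sent. With WebMessageBodyStyle.Bare the service expects the raw array shown above, while with WrappedRequest it would instead expect the array wrapped in an object keyed by the parameter name, i.e. {"list":[...]}. A sketch of the bare declaration:

        [OperationContract]
        [WebInvoke(Method = "POST", UriTemplate = "SaveStudents",
            RequestFormat = WebMessageFormat.Json,
            BodyStyle = WebMessageBodyStyle.Bare)]  // expects the raw [...] array
        void SaveStudents(Students list);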

    Read the article

  • Git subtree not properly using .gitignore when doing a partial clone

    - by D W
    I am a graduate student with many scripts, bibliography data in BibTeX, a thesis draft in LaTeX, presentations in OpenOffice, posters in Scribus, and figures and result data. I would like to put everything in one project under version control. Then, when I need to work on a portion such as the bibliography data, I would like to check that subdirectory out, modify it as necessary, and merge it back. I would like the ability to check out one version to my home computer and a different one to my work computer, make changes to each independently, and eventually merge them back. I would also like to be able to check out a piece of code from this big project and import it, with versioning, into a separate project. If I make changes, I'd like to be able to merge them back to the original project.

    Based on my understanding, git subtree can do this: http://github.com/apenwarr/git-subtree. There is an example along the lines of what I'm trying to do at http://psionides.jogger.pl/2010/02/04/sharing-code-between-projects-with-git-subtree/

    Say the trunk of my project contained the directories (bib bin cfg data fig src todo). When I use

        git subtree split -P bib -b export
        git checkout export

    I get the bib directory, plus all the files that should have been ignored or considered binary based on .gitignore, such as the src directory and everything in it that ends in a tilde, or the ./data directory.

        dwickrama@DWwork:~/research/trunk$ ls * -r
        biblography.bib JabRef
        src:
        script1.sh~ README~ script2.sh~ script3.sh~ script4.R~ script5.awk~ script5.py~
        cfg:
        cfgFile1.ini~ cfgFile2.ini~ cfgFile3.ini~
        bin:
        bigBinaryPackage1 bigBinaryPackage2
        dwickrama@DWwork:~/research/trunk$

    My .gitignore file is as follows:

        *.doc diff=word
        *.tex diff=tex
        *.bib diff=bibtex
        *.py diff=python
        *.eps binary
        *.jpg binary
        *.png binary
        ./bin/* binary
        *~

    How do I prevent this?

    Read the article

  • CAD like 3D geometry .NET library

    - by Naszta
    I am looking for a good 3D CAD-like library. I need basic geometry shapes (cube, sphere, torus, etc.), and the library should generate the surface mesh based on the shapes and some boolean operations. I have found many libraries via Google (some wrapped from C++), but most of them are not really convenient to use and/or do not support union/intersection:

    - http://www.geometros.com/sgcore/index.htm - it has a wrapped interface
    - http://www.opencsg.org/ - I haven't found a wrapped interface
    - http://carve-csg.com/ - I haven't found a wrapped interface
    - http://gts.sourceforge.net/ - I haven't found a wrapped interface
    - http://www.ogre3d.org/ - I haven't found basic geometric shapes and boolean operators
    - http://brlcad.org/ - its interface is not clear to me; I haven't found a wrapped interface
    - http://www.cgal.org/ - currently I am trying to make it work; I haven't found a wrapped interface
    - http://www.k-3d.org/ - I haven't found a wrapped interface
    - http://www.opencascade.org/ - I haven't found a wrapped interface
    - http://ilnumerics.net/ - it does not support solid boolean operations
    - http://www.techsoft3d.com/ - seems to be a really good one; supports both C++ and C#
    - http://www.devdept.com/products/eyeshot/ - one more C# library; it was not tested

    Open source would be nice, but not necessary. Many thanks for help. P.S.: the previous topic

    Update: in C# we will use the Eyeshot project.

  • Android app resets on orientation change, best way to handle?

    - by Anthony Westover
    So I am making a basic chess app to play around with various elements of Android programming, and so far I am learning a lot, but this time I am lost. When the orientation of the emulator changes, the activity gets reset. Based on my research, the same thing will happen any time the application is paused or interrupted: a keyboard change, a phone call, hitting the home key, etc. Obviously it is not viable to have a chess game constantly reset, so once again I find myself needing to learn how to fix this problem. My research brings up a few main approaches: overriding the onPause() method in my Activity, listening for orientation and keyboard changes in my manifest (via android:configChanges), using Parcelables, or using Serialization. I have looked up a lot of sample code using Parcelables, but to be honest it is too confusing. Maybe coming back tomorrow with fresh eyes will be beneficial, but right now the more I look at Parcelables the less sense they make. My application uses a Board object, which has 64 Cell objects (in an 8x8 2D array), and each Cell has a Piece object: either an actual piece, or null if the space is empty. Assuming that I use either Parcelable or Serialization, I expect I would have to parcel or serialize each class: Board, Cell, and Piece. First and foremost, is Parcelable or Serialization even the right thing to be looking at for this problem? If so, is either Parcelable or Serializable preferred? And am I correct in assuming that each of the three objects would have to be parceled/serialized? Finally, does anybody have a link to a simple-to-understand Parcelable tutorial? Anything to help me understand, and stop further headaches down the road when my application expands even further. Any help would be appreciated.
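
    For reference, here is a minimal sketch of the onSaveInstanceState route, assuming the Board class (together with its Cells and Pieces) is made to implement java.io.Serializable; the class and key names are illustrative, not taken from the actual app.

        import android.app.Activity;
        import android.os.Bundle;
        import java.io.Serializable;

        // Illustrative stand-in for the real Board/Cell/Piece classes; each of
        // them would need to implement Serializable for this approach to work.
        class Board implements Serializable {
        }

        public class ChessActivity extends Activity {
            private Board board;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                if (savedInstanceState != null) {
                    // Restore the game saved just before the activity was destroyed.
                    board = (Board) savedInstanceState.getSerializable("board");
                } else {
                    board = new Board(); // fresh game
                }
            }

            @Override
            protected void onSaveInstanceState(Bundle outState) {
                super.onSaveInstanceState(outState);
                // Called before a configuration change (rotation, keyboard slide)
                // and whenever the system may kill the process.
                outState.putSerializable("board", board);
            }
        }

    Serializable is the simpler of the two to adopt for an existing class hierarchy; Parcelable achieves the same result with hand-written read/write code and less reflection overhead, which is why the platform itself favors it.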

  • Chart controls for ASP.NET

    - by tolism7
    I am looking for people's opinions and experience with chart controls in an ASP.NET application (Web Forms or MVC) primarily, but also in any kind of project. I am currently doing my research and I have a pretty big list of controls to evaluate. My list includes (in no particular order):

    ASP.NET controls:

        DevExpress XtraCharts (http://demos.devexpress.com/XtraChartsDemos/)
        Dundas Chart for .NET (http://www.dundas.com/)
        Telerik RadChart for ASP.NET AJAX (http://www.telerik.com/)
        ComponentArt Charting & Visualization for ASP.NET (http://www.componentart.com/)
        Infragistics WebChart (http://www.infragistics.com/dotnet/netadvantage/aspnet.aspx#Overview)
        .net Charting (http://www.dotnetcharting.com/)
        Chart Control for .Net Framework (Microsoft's) (http://weblogs.asp.net/scottgu/archive/2008/11/24/new-asp-net-charting-control-lt-asp-chart-runat-quot-server-quot-gt.aspx)

    Flash controls:

        FusionCharts v3 (http://www.fusioncharts.com/)
        XML/SWF Charts (http://www.maani.us/xml%5Fcharts/index.php)
        amCharts (http://www.amcharts.com/)
        AnyChart (http://www.anychart.com/home/)

    JavaScript:

        Flot (http://code.google.com/p/flot/)
        Flotr (http://solutoire.com/flotr/)
        jqPlot (http://www.jqplot.com/index.php)

    (If I missed any worth comparing against the above, please let me know.) What I am looking for is opinions on using any of the above, so I can form my own and help others do the same based on what is written here. I do not care which one is "better"; what I care about is why someone likes one of the above and what distinct advantages these controls offer. I am interested in developers' opinions, and I would like to find out which things are difficult to do with each of these controls and which things are easy to achieve. AJAX compatibility (built into the controls, but also manual), ASP.NET compatibility, input capabilities, data-binding options, performance, and how much code one needs to write in order to create a chart are some of the things I would want to read about. I have already done my research on StackOverflow for relevant questions, but there is nothing at the level of detail I would want in order to make a responsible decision.

  • Roadmap to Android development

    - by Matthew
    Hello, I've done a little research and am interested in developing for Android. I've never programmed before, and have no idea how to go from zero experience to developing for a mobile device. My interest is in eventually making some sort of 2D game. Is there a lesson plan for starting from the ground up? I would think one would need to learn the Java language to start, but looking at the Sun website, it's a bit daunting. Is there a book, specifically, that would wrap up this knowledge in a directed lesson plan? I'm not sure if OpenGL ES is what would be required for 2D games. I've done a little research on this, and it's far more daunting than Java itself; I can't even begin to figure out where to start with plain OpenGL, sans ES. My best guess is that I need further knowledge of Java before continuing with it, but even still, is it possible to learn it concurrently with Java?

  • How to force two processes to run on the same CPU?

    - by kovan
    Context: I'm programming a software system that consists of multiple processes. It is programmed in C++ under Linux, and the processes communicate among themselves using Linux shared memory. Usually in software development, performance optimization is done in the final stage, and here I ran into a big problem. The software has high performance requirements, but on machines with 4 or 8 CPU cores (usually with more than one physical CPU), it was only able to use 3 cores, thus wasting 25% of the CPU power on the former and more than 60% on the latter. After much research, and having ruled out mutex and lock contention, I found out that the time was being wasted on shmdt/shmat calls (detaching from and attaching to shared memory segments). After some more research, I found out that these CPUs, usually AMD Opteron and Intel Xeon, use a memory architecture called NUMA, which basically means that each processor has its own fast "local" memory, and accessing memory belonging to other CPUs is expensive. After doing some tests, the problem seems to be that the software is designed so that, basically, any process can pass shared memory segments to any other process, and to any thread in them. This seems to kill performance, as processes are constantly accessing memory of other processes. Question: Now, the question is, is there any way to force pairs of processes to execute on the same CPU? I don't mean forcing them to always execute on the same processor, as I don't care which one they run on, although that would do the job. Ideally, there would be a way to tell the kernel: if you schedule this process on one processor, you must also schedule its "brother" process (the process with which it communicates through shared memory) on that same processor, so that performance is not penalized.

  • iPhone - return from an NSOperation

    - by lostInTransit
    Hi, I am using a subclass of NSOperation to do some background processing. I want the operation to be cancelled when the user clicks a button. Here's what my NSOperation subclass looks like:

        - (id)init {
            self = [super init];
            if (self) {
                // initialization code goes here
                _isFinished = NO;
                _isExecuting = NO;
            }
            return self;
        }

        - (void)start {
            if (![NSThread isMainThread]) {
                [self performSelectorOnMainThread:@selector(start) withObject:nil waitUntilDone:NO];
                return;
            }
            [self willChangeValueForKey:@"isExecuting"];
            _isExecuting = YES;
            [self didChangeValueForKey:@"isExecuting"];
            // operation goes here
        }

        - (void)finish {
            // releasing objects here
            [self willChangeValueForKey:@"isExecuting"];
            [self willChangeValueForKey:@"isFinished"];
            _isExecuting = NO;
            _isFinished = YES;
            [self didChangeValueForKey:@"isExecuting"];
            [self didChangeValueForKey:@"isFinished"];
        }

        - (void)cancel {
            [self willChangeValueForKey:@"isCancelled"];
            [self didChangeValueForKey:@"isCancelled"];
            [self finish];
        }

    And this is how I am adding objects of this class to a queue and listening for KVO notifications:

        operationQueue = [[NSOperationQueue alloc] init];
        [operationQueue setMaxConcurrentOperationCount:5];
        [operationQueue addObserver:self forKeyPath:@"operations" options:0 context:&OperationsChangedContext];

        - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
            if (context == &OperationsChangedContext) {
                NSLog(@"Queue size: %u", [[operationQueue operations] count]);
            } else {
                [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
            }
        }

    To cancel an operation (on a button click, for instance), I tried calling -cancel, but it doesn't make a difference. I also tried calling -finish, but even that doesn't change anything: every time I add an operation to the queue, the queue size only increases. finish is called (checked using NSLog statements), but it doesn't really end the operation. I'm still not very confident I'm doing this right. Can someone please tell me where I am going wrong? Thanks a lot.

  • DDD principles and ASP.NET MVC project design

    - by kaivalya
    A two-part question. I have a Product aggregate that has Prices, PackagingOptions, ProductDescriptions, ProductImages, etc. I have modeled one product repository and did not create individual repositories for any of the child classes; all DB operations are handled through the product repository. Am I understanding the DDD concept correctly so far? Sometimes the question comes to my mind that having a repository for, let's say, packaging options could make my life easier, by directly fetching a packaging option from the DB by its ID instead of asking the product repository to find it in its PackagingOptions collection and give it to me. The second part is managing the edit/create operations using the ASP.NET MVC framework. I am currently trying to manage all add/edit/remove operations on these child collections of Product through the product controller (sound right?). One challenge I am now facing is: if I edit a specific packaging option of a product through mydomain/product/editpackagingoption/10, I have access to the ID of the packaging option, but I don't have the ID of the product itself, and this forces me to write a query to first find the product that has this specific packaging option and then edit that product and the relevant packaging option. I can do this because all packaging options have their own unique IDs, but this would fail for collections whose items don't have unique IDs. That feels very wrong. The next option I thought of is sending both the product and packaging option IDs in the URL, like mydomain/product/editpackagingoption/3/10, but I am not sure if that is a good design either. So I am at a point where I am a bit confused, and I might be having fundamental misunderstandings around all of this. I would appreciate it if you bear with the long question and help me put this together. Thanks!
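
    For what it's worth, passing both IDs and resolving the child through the aggregate root is the usual DDD answer, and it stays a constant-size lookup. Below is a minimal sketch of that route (written in Java, though the idea is identical in C#); every class, field, and route name here is illustrative, not taken from the original project.

        import java.util.ArrayList;
        import java.util.List;

        // Child entity: only reachable through its Product aggregate.
        class PackagingOption {
            long id;
            String description;
        }

        // Aggregate root: owns its children and resolves them by ID.
        class Product {
            long id;
            final List<PackagingOption> packagingOptions = new ArrayList<>();

            PackagingOption findPackagingOption(long optionId) {
                for (PackagingOption option : packagingOptions) {
                    if (option.id == optionId) return option;
                }
                throw new IllegalArgumentException(
                        "No packaging option " + optionId + " on product " + id);
            }
        }

        // One repository per aggregate, as in the post; persistence stubbed out.
        class ProductRepository {
            Product findById(long productId) {
                throw new UnsupportedOperationException("stub");
            }
            void save(Product product) {
                throw new UnsupportedOperationException("stub");
            }
        }

        class ProductController {
            private final ProductRepository repository = new ProductRepository();

            // Handles a route like /product/editpackagingoption/{productId}/{optionId}
            void editPackagingOption(long productId, long optionId, String newDescription) {
                Product product = repository.findById(productId); // one aggregate load
                product.findPackagingOption(optionId).description = newDescription;
                repository.save(product); // persist the whole aggregate
            }
        }

    With both IDs in the URL, no "which product owns this option?" query is needed, and children without IDs of their own can still be addressed by their position inside the aggregate.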

  • Stack and Hash joint

    - by Alexandru
    I'm trying to write a data structure which is a combination of a Stack and a HashSet, with fast push/pop/membership (I'm looking for constant-time operations). Think of Python's OrderedDict. I tried a few things and came up with the following code: HashInt and SetInt. I need to add some documentation to the source, but basically I use a hash table with linear probing to store indices into a vector of the keys. Since linear probing always puts the last element at the end of a continuous range of already-filled cells, pop() can be implemented very easily, without a sophisticated remove operation. I have the following problems: the data structure consumes a lot of memory (one obvious improvement: stackKeys is larger than needed), and some operations are slower than if I had used fastutil (e.g. pop(), and even push() in some scenarios). I tried rewriting the classes using fastutil and trove4j, but the overall speed of my application halved. What performance improvements would you suggest for my code? What open-source library/code do you know that I could try?
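
    For comparison, a minimal plain-JDK sketch of the same contract is below (a deque for stack order plus a hash set for membership, assuming distinct elements). Boxed collections like these will lose to a primitive-specialized version on memory, which is exactly the trade-off described above, but all three operations are expected constant time.

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.HashSet;
        import java.util.Set;

        // Stack with O(1) push/pop/membership, assuming distinct elements.
        class HashStack<E> {
            private final Deque<E> stack = new ArrayDeque<>();
            private final Set<E> members = new HashSet<>();

            // Returns false (and leaves the stack unchanged) if e is already present.
            boolean push(E e) {
                if (!members.add(e)) return false;
                stack.push(e);
                return true;
            }

            // Throws NoSuchElementException when empty, like Deque.pop().
            E pop() {
                E e = stack.pop();
                members.remove(e);
                return e;
            }

            boolean contains(E e) {
                return members.contains(e);
            }

            int size() {
                return stack.size();
            }
        }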
