Search Results

Search found 3638 results on 146 pages for 'major major'.


  • Where does the version number come from?

    - by Robert Schneider
    I have a version control system (e.g. Subversion) and now I'd like to set up a build process. I have to create a version number and insert it into the system, but where does the version number come from, and where does it go? Assume I want to use the common <major>.<minor>.<bugfix/revision> scheme. Should I pass a number to the build script? Or should I pass arguments like increaseMajor, increaseMinor, increaseRevision? Or would you recommend creating a branch with the number, which would be detected by the build script? I could imagine that the major and minor version numbers have to be entered manually somewhere, while the revision number could be increased automatically. But I still don't know where I would place the major and minor numbers. In my case I have some PHP files that I would like to zip, but before that I have to insert version numbers into a PHP file. I have edited this post to try to make my request clearer: I do not use Subversion, that was just an example, and I don't want to discuss the version number scheme. Imagine I want to create version 3.5.0 or 3.5.1. Would I pass this version number to a build script? Would the script create the branch in the repository with this number, or would it expect that someone has already created this branch manually? Or would the build script look for the name of the branch (e.g. '3.5.1') and use it for further steps? And does the version number come from my brain or is it created automatically (I guess the major/minor numbers come from my little brain and the revision number is generated)? Or would you place the number into a file that gets checked into the repository? I guess if I used a release management tool I would enter the version number there, but I don't use one yet.
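
    A common pattern is to keep the human-chosen major/minor/bugfix numbers in a small version file under version control and let the build script stamp them into the sources. As a rough sketch (my own illustration, not from the post; version.json, myapp.php, and the @version tag are hypothetical names), a minimal Node.js build step could look like this:

        const fs = require('fs');

        // version.json is checked into the repository and edited by a human,
        // e.g. {"major": 3, "minor": 5, "bugfix": 1}
        const v = JSON.parse(fs.readFileSync('version.json', 'utf8'));
        const version = `${v.major}.${v.minor}.${v.bugfix}`;

        // Stamp the version into the PHP source before zipping the release.
        const src = fs.readFileSync('myapp.php', 'utf8');
        fs.writeFileSync('myapp.php',
            src.replace(/@version\s+[\d.]+/, `@version ${version}`));

        console.log(`Stamped version ${version}`);

    In this arrangement the major and minor numbers come from a person editing the version file, while the bugfix/revision slot could instead be filled in automatically from a build counter.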

    Read the article

  • Using in app purchase to unlock features vs. using free & paid app versions for iPhone

    - by yabada
    I have an app that I was going to release as a free (lite) version with some of the total functionality and a paid full version with advanced functionality. Now, with in app purchase for free apps, I am thinking of going that route, with the ability to unlock features as needed. I'm not talking about a trial version that expires. I want people to be able to try out the app and get an idea of the interface and functionality before deciding to purchase the full functionality of each major section of the app, basically. Here's an analogy of what my app would be like. Let's say you have a cooking app that teaches you to cook in different styles. There could be major sections for French, Italian, and Chinese. Each section could have some rudiments unlocked in the free app so users can see the UI and the basics of the functionality. Then, the user could decide to purchase each major section (or not) individually with in app purchase, or buy the full versioned app (with the free/paid model). One concern I have with offering a free app with in app purchase is feedback. I would be very clear in my description in the app store that there is in app purchase for full features, but I'm worried that less serious users could/would leave negative feedback. I suppose that's always a risk, but I'm curious about any experience with this. It also seems that it could be a whole lot more complicated keeping track of what portions of the app are locked and unlocked with in app purchase. I know I'd have to ship all the code for the full functionality and "lock" the portions that haven't been purchased. How do people usually lock portions of their code? I'm not talking about the process of purchasing (I've read the In App Purchase Programming Guide) but about after the purchase has been made. Would I just keep track of what the user has purchased and put conditionals on the sections that are initially locked? Or is there another way to do this as well? My instinct is for the in app purchase (particularly since users could purchase the major sections they want individually).
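
    For the bookkeeping after a purchase, the approach guessed at in the post is the usual one: persist the set of purchased product identifiers and put conditionals around the locked sections. A platform-agnostic sketch of that record-keeping (shown in JavaScript purely for illustration; the product IDs and the storage object are hypothetical, and a real iPhone app would do this in Objective-C against its own persistent store):

        // Hypothetical product identifiers for each major section of the app.
        const SECTIONS = ['com.example.cooking.french',
                          'com.example.cooking.italian',
                          'com.example.cooking.chinese'];

        // Load the set of purchased sections from persistent storage.
        function loadPurchases(store) {
          return new Set(JSON.parse(store.purchases || '[]'));
        }

        // Called after the store confirms a purchase transaction.
        function recordPurchase(store, productId) {
          const owned = loadPurchases(store);
          owned.add(productId);
          store.purchases = JSON.stringify([...owned]);
        }

        // Gate each section: rudiments stay available, the rest is conditional.
        function isUnlocked(store, productId) {
          return loadPurchases(store).has(productId);
        }

        // Example with a plain object standing in for persistent storage.
        const store = {};
        recordPurchase(store, 'com.example.cooking.french');
        console.log(isUnlocked(store, 'com.example.cooking.french'));  // true
        console.log(isUnlocked(store, 'com.example.cooking.chinese')); // false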

    Read the article

  • Basic principles of computer encryption?

    - by Andrew
    I can see how a cipher can be developed using substitutions and keys, and how those two things can become more and more complex, thus offering some protection from decryption through brute-force approaches. But specifically I'm wondering:

    - What other major concepts beyond substitution and key are involved?
    - Is the protection/secrecy of the key a greater vulnerability than the strength of the encryption?
    - Why does encryption still hold up when the key is 'public'?
    - Are performance considerations a major obstacle to the development of more secure encryption?
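
    One concept worth naming alongside substitution and keys is Kerckhoffs's principle: the algorithm is assumed to be public and all the secrecy lives in the key, so "strength" largely means the size of the key space a brute-force attacker must search. As a toy illustration of the brute-force point in the question (my own sketch, not from the post), a cipher with a one-byte key falls instantly even though nothing about the algorithm is secret:

        // A deliberately weak cipher: XOR every byte with a single-byte key.
        function xorCipher(bytes, key) {
          return bytes.map(b => b ^ key);
        }

        const plaintext = [...'attack at dawn'].map(c => c.charCodeAt(0));
        const ciphertext = xorCipher(plaintext, 0x5a);

        // The algorithm is public, so the attacker just tries all 256 keys and
        // keeps the guesses that look like plausible English text.
        for (let key = 0; key < 256; key++) {
          const guess = String.fromCharCode(...xorCipher(ciphertext, key));
          if (/^[a-z ]+$/.test(guess)) {
            console.log(`candidate key 0x${key.toString(16)}: ${guess}`);
          }
        }
        // With a 128-bit key the same loop would need on the order of 2^128
        // iterations - which is the point: security rests on key size and key
        // secrecy, not on keeping the algorithm itself secret.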

    Read the article

  • Summer Learning Opportunities for an upcoming College Freshman

    - by SteveStifler
    I'm currently a senior in high school, about to matriculate and pursue a major in Computer Science (possibly dual-major with electrical engineering. Comments?). I already program regularly as a hobby, but I would like to get a jump start this summer by perhaps attending a seminar, helping out on an open source project...you know, something legitimate that will bolster my knowledge of the computer science field. Any ideas?

    Read the article

  • Accepting Payment with local debit cards from foreign countries

    - by megatr0n
    Hi all. I have an ecommerce website and I am currently accepting payments made with Visa, MasterCard and all the other major cards. However, one issue I am having is accepting payments from customers using local debit cards. Say someone from China doesn't have a major credit card and wants to use his local debit card; I want to be able to accept payment from him as long as it's legal. How do I go about this? Thanks.

    Read the article

  • Making a Game Without Graphics?

    - by cam
    Is it possible, or even constructive, to make a game without any graphics (one that is intended to become graphical later)? I'm not good with graphics at all, so I'd like to write the skeleton for the game and then have a graphics programmer/artist fill in the rest. I could write up all the major classes and their interactions, and all the major functions/parts of the game. If so, what should I do to make it easier to integrate graphics into the game later on (should every drawn object have a Draw, Rotate, Collide, etc. method)?
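
    One way to keep the skeleton graphics-free is to keep all simulation state in plain game objects and push drawing behind a renderer interface, so a trivial console or null renderer can stand in until the artist arrives. A rough sketch of that separation (my own illustration, with hypothetical names):

        // Game logic knows nothing about drawing; it only exposes state.
        class Ball {
          constructor(x, y) { this.x = x; this.y = y; this.vx = 2; this.vy = 1; }
          update(dt) { this.x += this.vx * dt; this.y += this.vy * dt; }
        }

        // The renderer is the interface a graphics programmer implements later.
        // For now, a console renderer keeps the game runnable and testable.
        class ConsoleRenderer {
          draw(obj) { console.log(`${obj.constructor.name} at (${obj.x}, ${obj.y})`); }
        }

        // The game loop depends only on update() and renderer.draw(obj).
        function tick(objects, renderer, dt) {
          for (const obj of objects) {
            obj.update(dt);
            renderer.draw(obj);
          }
        }

        tick([new Ball(0, 0)], new ConsoleRenderer(), 1);

    The design point is that position, rotation, and collision live in the game objects, while Draw lives in the renderer; swapping ConsoleRenderer for a real graphics renderer later should not touch the game classes at all.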

    Read the article

  • Is strict mode more performant?

    - by sje397
    Does executing javascript within a browser in 'strict mode' make it more performant, in general? Do any of the major browsers do additional optimisation or use any other techniques that will improve performance in strict mode? Edit: Since none of the major engines actually implement strict mode, I'll rephrase slightly: Is strict mode intended, amongst its other goals, to allow browsers to introduce additional optimisations or other performance enhancements?
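
    For reference, strict mode is opted into per script or per function, and part of its stated purpose is to remove constructs (such as assignment to undeclared variables, and with statements) that are hard for engines to resolve statically. A small sketch of the opt-in and one of the errors it introduces:

        function sloppy() {
          leaked = 1;            // silently creates a global variable
          return leaked;
        }

        function strict() {
          'use strict';
          try {
            leaked2 = 1;         // strict mode: throws ReferenceError instead
          } catch (e) {
            return e.name;
          }
        }

        console.log(sloppy());   // 1
        console.log(strict());   // "ReferenceError"

    Whether this translates into faster code depends on the engine, but banning with and undeclared-variable creation does remove cases where names cannot be resolved ahead of time.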

    Read the article

  • Linux RAID 1: right after replacing and syncing one drive, the other disk fails - understanding what is going on with mdstat/mdadm

    - by devicerandom
    We have an old RAID 1 Linux server (Ubuntu Lucid 10.04) with four partitions. A few days ago /dev/sdb failed, and today we noticed /dev/sda had ominous pre-failure SMART signs (~4000 reallocated sector count). We replaced /dev/sdb this morning and rebuilt the RAID on the new drive, following this guide: http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array Everything went smoothly until the very end. When it looked like it was finishing synchronizing the last partition, the other, old drive failed. At this point I am very unsure of the state of the system. Everything seems to work and all the files seem accessible, just as if it had synchronized everything, but I'm new to RAID and I'm worried about what is going on. The /proc/mdstat output is:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md3 : active raid1 sdb4[2](S) sda4[0]
              478713792 blocks [2/1] [U_]
        md2 : active raid1 sdb3[1] sda3[2](F)
              244140992 blocks [2/1] [_U]
        md1 : active raid1 sdb2[1] sda2[2](F)
              244140992 blocks [2/1] [_U]
        md0 : active raid1 sdb1[1] sda1[2](F)
              9764800 blocks [2/1] [_U]
        unused devices: <none>

    My questions:

    - The order of [_U] vs [U_]: why isn't it consistent across all the arrays? Is the first U /dev/sda or /dev/sdb? (I tried looking on the web for this trivial information but found no explicit indication.) If I read correctly, for md0 [_U] should mean /dev/sda1 (down) and /dev/sdb1 (up). But if /dev/sda has failed, how can it be the opposite for md3?
    - I understand /dev/sdb4 is now a spare, probably because it failed to synchronize 100%, but why does it show /dev/sda4 as up? Shouldn't it be [__]? Or [_U] anyway? The /dev/sda drive apparently cannot even be accessed by SMART anymore, so I wouldn't expect it to be up. What is wrong with my interpretation of the output?

    I attach also the outputs of mdadm --detail for the four partitions:

        /dev/md0:
            Version : 00.90
            Creation Time : Fri Jan 21 18:43:07 2011
            Raid Level : raid1
            Array Size : 9764800 (9.31 GiB 10.00 GB)
            Used Dev Size : 9764800 (9.31 GiB 10.00 GB)
            Raid Devices : 2
            Total Devices : 2
            Preferred Minor : 0
            Persistence : Superblock is persistent
            Update Time : Tue Nov 5 17:27:33 2013
            State : clean, degraded
            Active Devices : 1
            Working Devices : 1
            Failed Devices : 1
            Spare Devices : 0
            UUID : a3b4dbbd:859bf7f2:bde36644:fcef85e2
            Events : 0.7704

            Number   Major   Minor   RaidDevice   State
               0       0        0        0        removed
               1       8       17        1        active sync   /dev/sdb1
               2       8        1        -        faulty spare  /dev/sda1

        /dev/md1:
            Version : 00.90
            Creation Time : Fri Jan 21 18:43:15 2011
            Raid Level : raid1
            Array Size : 244140992 (232.83 GiB 250.00 GB)
            Used Dev Size : 244140992 (232.83 GiB 250.00 GB)
            Raid Devices : 2
            Total Devices : 2
            Preferred Minor : 1
            Persistence : Superblock is persistent
            Update Time : Tue Nov 5 17:39:06 2013
            State : clean, degraded
            Active Devices : 1
            Working Devices : 1
            Failed Devices : 1
            Spare Devices : 0
            UUID : 8bcd5765:90dc93d5:cc70849c:224ced45
            Events : 0.1508280

            Number   Major   Minor   RaidDevice   State
               0       0        0        0        removed
               1       8       18        1        active sync   /dev/sdb2
               2       8        2        -        faulty spare  /dev/sda2

        /dev/md2:
            Version : 00.90
            Creation Time : Fri Jan 21 18:43:19 2011
            Raid Level : raid1
            Array Size : 244140992 (232.83 GiB 250.00 GB)
            Used Dev Size : 244140992 (232.83 GiB 250.00 GB)
            Raid Devices : 2
            Total Devices : 2
            Preferred Minor : 2
            Persistence : Superblock is persistent
            Update Time : Tue Nov 5 17:46:44 2013
            State : clean, degraded
            Active Devices : 1
            Working Devices : 1
            Failed Devices : 1
            Spare Devices : 0
            UUID : 2885668b:881cafed:b8275ae8:16bc7171
            Events : 0.2289636

            Number   Major   Minor   RaidDevice   State
               0       0        0        0        removed
               1       8       19        1        active sync   /dev/sdb3
               2       8        3        -        faulty spare  /dev/sda3

        /dev/md3:
            Version : 00.90
            Creation Time : Fri Jan 21 18:43:22 2011
            Raid Level : raid1
            Array Size : 478713792 (456.54 GiB 490.20 GB)
            Used Dev Size : 478713792 (456.54 GiB 490.20 GB)
            Raid Devices : 2
            Total Devices : 2
            Preferred Minor : 3
            Persistence : Superblock is persistent
            Update Time : Tue Nov 5 17:19:20 2013
            State : clean, degraded
            Active Devices : 1
            Working Devices : 2
            Failed Devices : 0
            Spare Devices : 1

            Number   Major   Minor   RaidDevice   State
               0       8        4        0        active sync   /dev/sda4
               1       0        0        1        removed
               2       8       20        -        spare         /dev/sdb4

    The active sync on /dev/sda4 baffles me. I am worried because if tomorrow morning I have to replace /dev/sda, I want to be sure what I should sync with what, and what is going on. I am also quite baffled by the fact that /dev/sda decided to fail exactly when the RAID finished resyncing. I'd like to understand what is really happening. Thanks a lot for your patience and help. Massimo
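
    For what it's worth about the [U_] ordering: in /proc/mdstat the bracketed number after each device name (e.g. sda4[0]) is that device's slot (RaidDevice) number, and the [U_] flags follow slot order, not the order the names happen to be printed in. A small sketch of that mapping (my own illustration, not from the post):

        // Pair each slot of one /proc/mdstat array line with its U/_ flag.
        function explain(deviceList, flags) {
          const slots = {};
          for (const entry of deviceList.split(/\s+/)) {
            const m = entry.match(/^(\w+)\[(\d+)\](\(\w\))?$/);
            if (m) slots[Number(m[2])] = m[1] + (m[3] || '');
          }
          // Flags like "[U_]": index 0 is slot 0, index 1 is slot 1, and so on.
          // Faulty/spare devices sit in slots beyond the active ones.
          [...flags.replace(/[\[\]]/g, '')].forEach((f, slot) => {
            const name = slots[slot] || '(removed)';
            console.log(`slot ${slot}: ${name} -> ${f === 'U' ? 'up' : 'down'}`);
          });
        }

        explain('sdb1[1] sda1[2](F)', '[_U]'); // md0: slot 0 removed, slot 1 sdb1 up
        explain('sdb4[2](S) sda4[0]', '[U_]'); // md3: slot 0 sda4 up, slot 1 removed

    This matches the mdadm --detail tables above: the flags track RaidDevice numbers, which is why sda partitions show up in different positions on different arrays.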

    Read the article

  • Waterfall Model (SDLC) vs. Prototyping Model

    The characters in the fable of the Tortoise and the Hare can easily be used to demonstrate the similarities and differences between the Waterfall and Prototyping software development models. This children's fable is about a race between a consistently slow-moving but steadfast turtle and an extremely fast but unreliable rabbit. After closely comparing each character's attributes in correlation with both software development models, a trend appears: the Waterfall Model closely resembles the Tortoise, in that it is typically a slow-moving process broken up into multiple sequential steps that must be executed in a standard linear pattern. The Tortoise can be quoted several times in the story saying "Slow and steady wins the race." This is the perfect mantra for the Waterfall Model, which is often seen as cumbersome and slow-moving.

    Waterfall Model Phases

    Requirement Analysis & Definition
    This phase focuses on defining requirements for a project that is to be developed and determining whether the project is even feasible. Requirements are collected by analyzing existing systems and functionality in correlation with the needs of the business and the desires of the end users. The desired output for this phase is a list of specific requirements from the business that are to be designed and implemented in the subsequent steps. In addition, this phase is used to determine whether any value will be gained by completing the project.

    System Design
    This phase focuses primarily on the actual architectural design of a system, and how it will interact within itself and with other existing applications. Projects at this level should be viewed at a high level so that actual implementation details are decided in the implementation phase. However, major environmental decisions like hardware and platform choices are typically made in this phase. The basic goal of this phase is to design the application at the system level, where classes, interfaces, and interactions are defined. Additionally, scalability, distribution, and reliability should be considered in all decisions. The desired output for this phase is a functional design document that states all of the architectural decisions that have been made in regard to the project, as well as diagrams such as sequence and class diagrams.

    Software Design
    This phase focuses primarily on refining the decisions found in the functional design document. Classes and interfaces are further broken down into logical modules based on the interfaces and interactions previously indicated. The output of this phase is a formal design document.

    Implementation / Coding
    This phase focuses primarily on implementing the previously defined modules as units of code. These units are developed independently and are integrated as the system is put together as a whole.

    Software Integration & Verification
    This phase primarily focuses on testing each of the units of code developed, as well as testing the system as a whole. There are two basic types of testing at this phase: unit tests and integration tests. Unit tests are built to test the functionality of a code unit to ensure that it performs its desired task. Integration testing tests the system as a whole, because it focuses on the results of combining specific units of code and validating them against expected results. The output of this phase is a test plan that includes tests with expected results and actual results.

    System Verification
    This phase primarily focuses on testing the system as a whole against the list of project requirements and the desired operating environment.

    Operation & Maintenance
    This phase primarily focuses on handing the completed project over to the customer so that they can verify that all of their requirements have been met, based on their original requirements. This phase also validates the correctness of the requirements and determines whether any changes need to be made. In addition, any problems not resolved in the previous phase are handled in this phase.

    The Waterfall Model's linear and sequential methodology offers a project certain advantages and disadvantages.

    Advantages of the Waterfall Model
    - Simple to implement and execute, for individual projects or company-wide
    - Limited demand on resources
    - Large emphasis on documentation

    Disadvantages of the Waterfall Model
    - Completed phases cannot be revisited, regardless of whether issues arise within a project
    - Accurate requirements are never gathered prior to the completion of the requirements phase, due to the lack of clarification of the client's desires
    - Small changes or errors that arise in applications may cause additional problems
    - The client cannot change any requirements once the requirements phase has been completed, leaving them no options for changes as their needs evolve
    - Excess documentation
    - Phases are cumbersome and slow-moving

    Learn more about the major processes in the Software Development Life Cycle and the Waterfall Model.

    Conversely, the Hare shares similar traits with the Prototyping software development model, in that ideas are rapidly converted into basic working examples, and subsequent changes are made to quickly align the project with customers' desires as those desires are formulated and as the software strays from the customers' vision. The basic concept of prototyping is to eliminate the use of well-defined project requirements. Projects are allowed to grow as the customer's needs and requests grow. Projects are initially designed according to basic requirements and are refined as the requirements themselves become more refined. This process allows customers to feel their way around the application to ensure that they are getting exactly what they want. This model also works well for determining the feasibility of certain approaches to an application: prototypes allow for quickly developing examples that implement specific functionality based on certain techniques.

    Advantages of Prototyping
    - Active participation from users and customers
    - Allows customers to change their minds in specifying requirements
    - Customers get a better understanding of the system as it is developed
    - Earlier bug/error detection
    - Promotes communication with customers
    - The prototype could be used as the final production version
    - Reduced time needed to develop applications compared to the Waterfall method

    Disadvantages of Prototyping
    - Promotes constantly redefining project requirements, which can cause major system rewrites
    - Potential for increased complexity of a system as its scope expands
    - Customers could mistake the prototype for the working version
    - Implementation compromises can increase complexity when applying updates and/or application fixes

    When companies are trying to decide between the Waterfall model and the Prototyping model, they need to evaluate the benefits and disadvantages of both. Smaller companies, or projects with major time constraints, typically head for more of a Prototyping approach, because it can reduce the time needed to complete the project: there is more focus on building the project and less on defining requirements and scope prior to its start. On the other hand, companies with well-defined requirements and the time to generate proper documentation should steer toward more of a Waterfall model, because they are in a position to obtain clarified requirements and design an optimal solution prior to the start of coding.

    Read the article

  • DotNetNuke 7.0 Only Weeks Away!

    - by sbwalker
    The software industry moves at a lightning pace, and it is only through constant focus and continuous investment that a software product can remain both stable and relevant over the long term. As we approach the 10 Year Anniversary of the DotNetNuke platform, it seems only fitting that we are on the verge of announcing yet another significant product milestone. DotNetNuke 7.0 is just around the corner and represents a bold step forward for our Content Management Platform, including substantial business productivity enhancements, investments in web platform relevance, and a significant overhaul and modernization of the user interface and user experience. It has been five months since I posted the announcement that the next major version of the platform was going to be DotNetNuke 7.0. This announcement created tremendous excitement and anticipation in the DotNetNuke community, as major version increments have always been utilized as an opportunity to introduce revolutionary new product features and capabilities. After months of intense product development, the finish line is finally in sight. With that, I am pleased to announce that we released a Release Candidate (RC) of DotNetNuke 7.0 yesterday. You can download the RC from our project page on Codeplex. A Release Candidate represents a software version which is very near to "release" quality. So although we will not be officially endorsing the RC for production use, or providing an official upgrade path, it does represent a significant milestone in our software development efforts (if you are looking for a more detailed explanation of our software release terminology, I would encourage you to read the blog written by Co-Founder Joe Brinkman titled "What's In A Name?"). Modernizing a software platform does have its share of challenges from a backward compatibility perspective and, as usual, we are taking great care in ensuring a seamless upgrade path for our customers. In order to remain relevant and progressive, you need to be aware that DotNetNuke 7.0 has adopted a new set of baseline infrastructure requirements, including ASP.NET 4.0. As a result we are encouraging all major stakeholders in the ecosystem (module developers, designers, partners, customers, etc.) to take the opportunity to install the RC in their own local environments. This is the last opportunity to let us know about any final issues which may need to be addressed prior to final release. Mark your calendars now… the expected public release date (RTM) for DotNetNuke 7.0 will be Wednesday, November 28th. On a side note, we expect to release a 6.2.5 Maintenance version today. This release contains some high priority product quality improvements as well as security patches for some vulnerabilities reported through our standard ecosystem channels. As a result we will be encouraging all of our customers to upgrade to the 6.2.5 release as soon as it is available. I hope everyone is as excited as I am about the upcoming DotNetNuke 7.0 release. Please take the opportunity over the next week to put the new platform through its paces. Remember, only through our collective efforts can we ensure that this release has the greatest market impact of any DotNetNuke release to date.

    Read the article

  • Nest reinvents smoke detectors. Introduces smart and talking smoke detector that keeps quiet when you wave

    - by Gopinath
    Nest, the leading smart thermostat maker, has introduced a new smart home device today: Nest Protect, a smart, talking smoke & carbon monoxide detector that quiets down when you wave your hand.

    Less annoyance and more intelligence
    Smoke detectors have been around for many years and play a major role in providing safety from fire accidents at home. But the technology of these devices is stale, with no major innovation for the past several years. With the introduction of Nest Protect, the landscape of smoke detectors is all set to change, just as the Nest thermostat redefined its industry two years ago. Nest Protect is internet enabled and equipped with motion- and smoke-detection sensors, so that when it starts beeping you can silence it by waving your hand instead of performing circus feats to turn off the alarm. Everyone who cooks in a home equipped with a smoke detector knows how annoying it is to turn off a sensitive detector that goes off quite often. Apart from addressing the annoyances of a regular smoke detector, Nest Protect can talk. It alerts users with clear & actionable instructions when it detects a danger. Instead of harsh beeps it actually speaks to you, so you know what is happening: it will tell you what smoke it has detected and in which room. Multiple Nest Protects installed in a home can communicate with each other. Let's say there is smoke in the bedroom: the Nest Protect installed in the bedroom shares this information with all the Nest Protects in the home, and your kitchen device can alert you that there is smoke in the bedroom.

    There is an app for that
    The internet-enabled Nest Protect has an app to view its status and various alerts. When the Protect is running on low battery, it alerts you to replace it soon. If there is smoke at home and you are away, you will get message alerts. The app works on all major smartphones as well as tablets.

    Auto shutdown of gas furnaces/heaters on smoke
    Apart from forming a network with the other Nest Protect devices installed at home, they can also communicate with a Nest Thermostat if one is installed. When carbon monoxide is detected, it can shut off your gas furnace automatically. Also, with the help of its motion detectors, it improves the Nest Thermostat's auto-away functionality.

    It looks elegant, and costs a lot more than a regular smoke detector
    Just like the Nest Thermostat, Nest Protect is elegant and adorable; you fall in love with it the moment you see it. It's another masterpiece from the designer of Apple's iPod. All is good with the Nest Protect, except the price!! It costs a whopping $129, which is more than 4 times the price of the best-selling conventional smoke detectors available at $30. A single-bedroom apartment would require at least 3 detectors, so it costs around $390 to install Nest Protects, compared to the $90 required for conventional smoke detectors. Though the Nest Thermostat is expensive compared to conventional thermostats, it offered great savings through its intelligent auto-away feature, and users were able to see returns on their investments. If Nest Protect can also provide a good return on investment, then it will be very successful.

    Read the article

  • Planning in the Cloud - For Real

    - by jmorourke
    One of the hottest topics at Oracle OpenWorld 2012 this week is "the cloud". Over the past few years, Oracle has made major investments in cloud-based applications, including some acquisitions, and now has over 100 applications available through Oracle Cloud services. At OpenWorld this week, Oracle announced seven new offerings delivered via the Oracle Cloud services platform, one of which is the Oracle Planning and Budgeting Cloud Service. Based on Oracle Hyperion Planning, this service is the first of Oracle's EPM applications to be offered in the Cloud. This solution is targeted at organizations that are struggling with spreadsheets or legacy planning and budgeting applications, want to deploy a world-class solution for financial planning and budgeting, but are constrained by IT resources and capital budgets. With the Oracle Planning and Budgeting Cloud Service, organizations can fast-track their way to world-class financial planning, budgeting and forecasting – at cloud speed, with no IT infrastructure investments and with minimal IT resources. Oracle Hyperion Planning is a market-leading budgeting, planning and forecasting application that is used by over 3,300 organizations worldwide. Prior to this announcement, Oracle Hyperion Planning was only offered on a license and maintenance basis. It could be deployed on-premise, or hosted through Oracle On-Demand or third-party hosting partners. With this announcement, Oracle's market-leading Hyperion Planning application will be available as a Cloud Service with subscription-based pricing. This lowers the cost of entry and deployment for new customers and provides a scalable environment to support future growth. With this announcement, Oracle is the first major vendor to offer one of its core EPM applications as a cloud-based service. Other major vendors have recently announced cloud-based EPM solutions, but these are only BI dashboards delivered via a cloud platform. With this announcement, Oracle is providing market-leading, world-class financial budgeting, planning and forecasting as a cloud service, with the following advantages:

    - Subscription-based pricing
    - Available standalone or as an extension to Oracle Fusion Financials Cloud Service
    - Implementation services available from Oracle and the Oracle Partner Network
    - High scalability and performance
    - Integrated financial reporting and MS Office interface
    - Seamless integration with Oracle and non-Oracle transactional applications
    - More options for planning and budgeting deployment vs. strictly on-premise or cloud-only solution providers

    The OpenWorld announcement of Oracle Planning and Budgeting Cloud Service is a preview announcement, with controlled availability expected in calendar year 2012. For more information, check out the links below: Press Release | Web site. If you have any questions or need additional information, please feel free to contact me at [email protected].

    Read the article

  • Universities 2030: Learning from the Past to Anticipate the Future

    - by Mohit Phogat
    What will the landscape of international higher education look like a generation from now? What challenges and opportunities lie ahead for universities, especially “global” research universities? And what can university leaders do to prepare for the major social, economic, and political changes—both foreseen and unforeseen—that may be on the horizon? The nine essays in this collection proceed on the premise that one way to envision “the global university” of the future is to explore how earlier generations of university leaders prepared for “global” change—or at least responded to change—in the past. As the essays in this collection attest, many of the patterns associated with contemporary “globalization” or “internationalization” are not new; similar processes have been underway for a long time (some would say for centuries).[1] A comparative-historical look at universities’ responses to global change can help today’s higher-education leaders prepare for the future. Written by leading historians of higher education from around the world, these nine essays identify “key moments” in the internationalization of higher education: moments when universities and university leaders responded to new historical circumstances by reorienting their relationship with the broader world. Covering more than a century of change—from the late nineteenth century to the early twenty-first—they explore different approaches to internationalization across Europe, Asia, Australia, North America, and South America. Notably, while the choice of historical eras was left entirely open, the essays converged around four periods: the 1880s and the international extension of the “modern research university” model; the 1930s and universities’ attempts to cope with international financial and political crises; the 1960s and universities’ role in an emerging postcolonial international development apparatus; and the 2000s and the rise of neoliberal efforts to reform universities in the name of international economic “competitiveness.” Each of these four periods saw universities adopt new approaches to internationalization in response to major historical-structural changes, and each has clear parallels to today. Among the most important historical-structural challenges that universities confronted were: (1) fluctuating enrollments and funding resources associated with global economic booms and busts; (2) new modes of transportation and communication that facilitated mobility (among students, scholars, and knowledge itself); (3) increasing demands for applied science, technical expertise, and commercial innovation; and (4) ideological reconfigurations accompanying regime changes (e.g., from one internal regime to another, from colonialism to postcolonialism, from the cold war to globalized capitalism, etc.). Like universities today, universities in the past responded to major historical-structural changes by internationalizing: by joining forces across space to meet new expectations and solve problems on an ever-widening scale. Approaches to internationalization have typically built on prior cultural or institutional ties. In general, only when the benefits of existing ties had been exhausted did universities reach out to foreign (or less familiar) partners. 
As one might expect, this process of “reaching out” has stretched universities’ traditional cultural, political, and/or intellectual bonds and has invariably presented challenges, particularly when national priorities have differed—for example, with respect to curricular programs, governance structures, norms of academic freedom, etc. Strategies of university internationalization that either ignore or downplay cultural, political, or intellectual differences often fail, especially when the pursuit of new international connections is perceived to weaken national ties. If the essays in this collection agree on anything, they agree that approaches to internationalization that seem to “de-nationalize” the university usually do not succeed (at least not for long). Please continue reading the other essays at http://globalhighered.wordpress.com/

    Read the article

  • How do I position an ellipse on a Silverlight Grid?

    - by daviddennisei
    I am creating a Silverlight application which will allow you to click at two places on the screen and will draw an ellipse whose major axis starts and ends at the click locations. The clickable area is a Silverlight Grid control.

    Currently, when you first click, I am:
    - Dropping a marker at the click point.
    - Creating an ellipse and parenting it to the Grid.
    - Creating and setting an AngleTransform on the ellipse.

    As you move the mouse, I am:
    - Calculating the distance to the first click point.
    - Setting the Width of the ellipse to this length.
    - Calculating the angle between the line to the click point and the Grid's X-axis.
    - Setting the Ellipse's AngleTransform Angle to this angle.

    So far, so good. The ellipse is displayed, and its length and angle of rotation follow the mouse as it moves. However, the major axis of the ellipse is offset from the click point. How do I position the Ellipse so its major axis starts at the click point and ends at the current mouse position?
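
    The underlying geometry is that an ellipse is positioned by its bounding box, so rotating it about the default origin leaves the click point at a box corner rather than on the axis; centering the ellipse on the midpoint of the two points and rotating about that midpoint puts the axis endpoints exactly on the clicks. A plain HTML canvas sketch of that math (my own illustration, not Silverlight code):

        // Given the first click p1 and the current mouse position p2, draw an
        // ellipse whose major axis runs exactly from p1 to p2.
        function drawAxisEllipse(ctx, p1, p2, minorRadius) {
          const dx = p2.x - p1.x, dy = p2.y - p1.y;
          const majorRadius = Math.sqrt(dx * dx + dy * dy) / 2; // half the axis
          const angle = Math.atan2(dy, dx);                     // radians from x-axis
          const cx = (p1.x + p2.x) / 2, cy = (p1.y + p2.y) / 2; // axis midpoint

          // Center on the midpoint of p1->p2 and rotate by angle, so the
          // major-axis endpoints land on p1 and p2.
          ctx.beginPath();
          ctx.ellipse(cx, cy, majorRadius, minorRadius, angle, 0, 2 * Math.PI);
          ctx.stroke();
        }

    A Silverlight version would presumably do the equivalent: position the Ellipse so its center sits on the midpoint of the two points, and set the rotation transform's center to that midpoint rather than leaving it at the element's top-left corner.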

    Read the article

  • Best free (or cheap) ASP.NET based CMS for a club/association?

    - by marc_s
    I've been looking for a simple, intuitive ASP.NET-based CMS to handle my club/association site. It should:
    - be based on ASP.NET, so I can add some extra specific pages
    - offer a membership system, e.g. I want to be able to define members who can comment on posts, while anonymous users can only read (no active participation)
    - have features like news, forums, blogs, a picture gallery
    - be simple and easy to use

    I've been looking at:
    - GraffitiCMS: so far my favorite, but it has no forum, no membership system, and no future, it seems - no development whatsoever in over a year :-(
    - Sitefinity: had extreme trouble even getting it installed, and when it was finally up and running, I found it overly complicated and not intuitive at all
    - Umbraco: same problem - hard to install, hard to get up and running, and the intro videos on their site are only available to paying subscribers (what's up with that deal???).....
    - DotNetNuke: seems like major overkill
    - Community Server: seems like major overkill - and seems to be more and more commercial-only

    Any others I've missed that I should have definitely looked at?

    Read the article

  • How do you engineers keep your place clean?

    - by 280Z28
    I have a major problem keeping not only my desk, but my entire place clean. I end up spending all of my time behind the keyboard, and slowly but surely everything turns to crap except the space between my monitors and me. It's causing some major problems IRL and I need to fix it. Question: I'm an engineer, and I eat, sleep, and think like an engineer. Do you have any ideas for engineering a clean place? Note to the messy users out there: not really interested. I've been messy for ages, so I've got that covered. I'm looking for real suggestions from people who actually keep clean, preferably from someone who's been in my shoes before.

    Read the article

  • Spiceworks versus Request Tracker?

    - by dmackey
    We currently utilize Request Tracker for help desk ticketing and Spiceworks for asset inventorying. I am pondering whether it might be worthwhile to move from RT to Spiceworks for the help desk as well. Has anyone used both systems who can provide some insight into the benefits/problems of either one? Or general philosophical reasons why one should use one solution over the other? Of course, RT is open source and Spiceworks is not - and usually this would be a major item for me - but since Spiceworks is free and engages its community fairly actively, it's not as major a concern for me (personally).

    Read the article

  • Puppet yum repo - Pull down 2.7.x vs 3.0.x

    - by Mike Purcell
    So a few weeks ago I started down the path of using Puppet to automate all the configs/services. At the time I was using the EPEL repo, which installed version 2.6.x. After some reading I was trying to gain access to the flatten method available via the puppet stdlib, and thought it was available by default in the newer 2.7.x version. So I added a puppet repo with the following settings:

        [puppetlabs]
        name=Puppet Labs Packages
        baseurl=http://yum.puppetlabs.com/el/$releasever/products/$basearch/
        enabled=1
        gpgcheck=1
        gpgkey=http://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs

    The problem is that this installed v3.0.x instead of 2.7.x. And apparently 3.0.x is a major upgrade, released only a few weeks ago. Obviously I would prefer to use 2.7.x for the next few months while PuppetLabs fix any defects which will inevitably arise after a major version. So my question is: what setting can I add to the puppet repo config to pull down only the 2.7.x branch and not the 3.0.x branch?
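
    Not an authoritative answer, but one standard yum knob worth trying here is the per-repo exclude directive, which filters packages out of a repo by name glob. Pinning the repo to the 2.7 series might then look like the sketch below; the globs are my assumption, so verify them against the actual package names (e.g. with yum list puppet --showduplicates) before relying on them:

        [puppetlabs]
        name=Puppet Labs Packages
        baseurl=http://yum.puppetlabs.com/el/$releasever/products/$basearch/
        enabled=1
        gpgcheck=1
        gpgkey=http://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs
        # Keep the 3.x series invisible to yum (hypothetical globs - adjust
        # to the package names actually published in the repo).
        exclude=puppet-3* puppet-server-3*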

    Read the article

  • On Google App Engine I get a 500 error where it should be 200

    - by Faisal Amjad
    requestToken = function() {
        var getTokenURI = '/gettoken?userid=' + userid;
        var httpRequest = makeRequest(getTokenURI, true);
        httpRequest.onreadystatechange = function() {
            if (httpRequest.readyState == 4) {
                if (httpRequest.status == 200) {
                    openChannel(httpRequest.responseText);
                } else {
                    alert('ERROR: AJAX request status = ' + httpRequest.status);
                }
            }
        }
    };

    function makeRequest(url, async) {
        var httpRequest;
        if (window.XMLHttpRequest) {
            httpRequest = new XMLHttpRequest();
        } else if (window.ActiveXObject) { // IE
            try {
                httpRequest = new ActiveXObject("Msxml2.XMLHTTP");
            } catch (e) {
                try {
                    httpRequest = new ActiveXObject("Microsoft.XMLHTTP");
                } catch (e) {
                }
            }
        }
        if (!httpRequest) {
            return false;
        }
        httpRequest.open('POST', url, async);
        httpRequest.send();
        return httpRequest;
    }

    It runs fine on localhost, but on Google App Engine httpRequest.status equals 500 and it goes into the else branch. WHY?

    Log on Google App Engine:

        /getFriendList?userid=d 500 253ms 0kb Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11
        175.110.179.86 - - [17/Dec/2012:08:35:33 -0800] "POST /getFriendList?userid=d HTTP/1.1" 500 0 "http://faisalimmsngr.appspot.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11" "faisalimmsngr.appspot.com" ms=254 cpu_ms=110 instance=00c61b117caf2d11ca57d2a2296ccd0b902b038a

        W 2012-12-17 08:35:33.272
        Failed startup of context com.google.apphosting.utils.jetty.RuntimeAppEngineWebAppContext@10ff62a{/,/base/data/home/apps/s~faisalimmsngr/1.363934467542140431}
        org.mortbay.util.MultiException[java.lang.UnsupportedClassVersionError: adv/web/mid/exam/FriendServlet : Unsupported major.minor version 51.0, java.lang.UnsupportedClassVersionError: adv/web/mid/exam/MessageServlet : Unsupported major.minor version 51.0, java.lang.UnsupportedClassVersionError: adv/web/mid/exam/TokenServlet : Unsupported major.minor version 51.0]
            at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:656)
            at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
            at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1250)
            at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:517)
            at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:467)
            at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
            at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.createHandler(AppVersionHandlerMap.java:219)
            at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.getHandler(AppVersionHandlerMap.java:194)
            at com.google.apphosting.runtime.jetty.JettyServletEngineAdapter.serviceRequest(JettyServletEngineAdapter.java:134)
            at com.google.apphosting.runtime.JavaRuntime$RequestRunnable.run(JavaRuntime.java:447)
            at com.google.tracing.TraceContext$TraceContextRunnable.runInContext(TraceContext.java:454)
            at com.google.tracing.TraceContext$TraceContextRunnable$1.run(TraceContext.java:461)
            at com.google.tracing.TraceContext.runInContext(TraceContext.java:703)
            at com.google.tracing.TraceContext$AbstractTraceContextCallback.runInInheritedContextNoUnref(TraceContext.java:338)
            at com.google.tracing.TraceContext$AbstractTraceContextCallback.runInInheritedContext(TraceContext.java:330)
            at com.google.tracing.TraceContext$TraceContextRunnable.run(TraceContext.java:458)
            at com.google.apphosting.runtime.ThreadGroupPool$PoolEntry.run(ThreadGroupPool.java:251)
            at java.lang.Thread.run(Thread.java:679)

        java.lang.UnsupportedClassVersionError: adv/web/mid/exam/FriendServlet : Unsupported major.minor version 51.0
            at com.google.appengine.runtime.Request.process-c04431eac3a1f275(Request.java)
            at java.lang.ClassLoader.defineClass1(Native Method)
            at java.lang.ClassLoader.defineClass(ClassLoader.java:634)
            at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
            at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
            at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:616)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
            at org.mortbay.util.Loader.loadClass(Loader.java:91)
            at org.mortbay.util.Loader.loadClass(Loader.java:71)
            at org.mortbay.jetty.servlet.Holder.doStart(Holder.java:73)
            at org.mortbay.jetty.servlet.ServletHolder.doStart(ServletHolder.java:242)
            at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
            at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:685)
            at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
            at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1250)
            at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:517)
            at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:467)
            at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
            at com.google.tracing.TraceContext$TraceContextRunnable.runInContext(TraceContext.java:454)
            at com.google.tracing.TraceContext$TraceContextRunnable$1.run(TraceContext.java:461)
            at com.google.tracing.TraceContext.runInContext(TraceContext.java:703)
            at com.google.tracing.TraceContext$AbstractTraceContextCallback.runInInheritedContextNoUnref(TraceContext.java:338)
            at com.google.tracing.TraceContext$AbstractTraceContextCallback.runInInheritedContext(TraceContext.java:330)
            at com.google.tracing.TraceContext$TraceContextRunnable.run(TraceContext.java:458)
            at java.lang.Thread.run(Thread.java:679)

    Read the article

  • To measure throughput of a device under test connected to the server via an AP

    - by samantha
    Description of the project: I have a test tool to which the DUT (device under test) connects. The test tool has an access point in it, and once the DUT connects to it via its MAC address, we check the RSSI and some other WiFi features of the DUT. Now I am wondering whether there is any way I can measure the throughput of the device under test, via its MAC address, from the server side. The test tool runs Linux (Fedora 11), and major coding is done in C/C++ and JSON commands. Previously, I tried installing an FTP server on the test tool so the DUT could connect to it and we could measure the throughput (data transfer rate), but this is not a feasible solution, as it requires a lot of intervention from the DUT. What I am interested in is:

    1) Running some script on the server side / test tool that gives me the throughput of the connected device, perhaps via the MAC address of the DUT, OR
    2) A server script that transfers some files/packets to the DUT so we can measure the throughput.

    Coding is not a major challenge at this stage; I just need some tool which requires minimal intervention from the DUT.
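
    For option 2, one low-intervention approach is a plain TCP sender on the test tool that streams a known number of bytes to the connected DUT and derives the rate from the elapsed time; the DUT only has to open a socket and read. A rough Node.js sketch of the idea (my own illustration; the port and payload size are arbitrary, and kernel buffering makes this a first-order estimate rather than a precise WiFi measurement):

        const net = require('net');

        const PAYLOAD_BYTES = 10 * 1024 * 1024;        // 10 MB test transfer
        const chunk = Buffer.alloc(64 * 1024);         // 64 KB write chunks

        const server = net.createServer((socket) => {
          let sent = 0;
          const start = process.hrtime.bigint();

          // Keep the socket's buffer full until the payload has been written.
          function pump() {
            while (sent < PAYLOAD_BYTES) {
              sent += chunk.length;
              if (!socket.write(chunk)) return;        // resume on 'drain'
            }
            socket.end(() => {
              const secs = Number(process.hrtime.bigint() - start) / 1e9;
              console.log(`${(sent / secs / 1e6).toFixed(2)} MB/s to ${socket.remoteAddress}`);
            });
          }
          socket.on('drain', pump);
          pump();
        });

        server.listen(5001, () => console.log('throughput server on :5001'));

    The DUT side can then be as simple as `nc testtool 5001 > /dev/null`, which keeps the intervention required on the device minimal.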

    Read the article

  • Can a UPS give off an EM field capable of interfering with desktop components?

    - by Magsol
    I own an APC Back-UPS ES 750; it's about 4 years old, and is the only major component remaining in a question that has been boggling me for the last 18 months. (Yep, I originally posted this question about the possessed desktop, and while I marked a solution and closed the question, a week later the same problem returned) I've tried plugging the desktop straight into the wall (but left the other components plugged into the UPS) and the desktop still froze. Is it possible that the EM field generated by the UPS is interfering with my desktop components and causing these otherwise-unpredictable system freezes? To me this sounds like a long shot, but aside from my twin LCD monitors, that just about takes care of all the major components.

    Read the article

  • After a week of smooth running, my Dell T310 server name reverted to the service tag and screwed everything up!

    - by user779887
    Has this happened to anyone else? We began having a tremendous number of problems with our brand new Dell SBS 2011 T310 server, and after nearly 15 hours with Dell support, they determined that the server name had reverted to DXXXXXXXX ("D" + the service tag). The Dell tech said that this was a major, major problem. I'm the only person with any access to the machine, and I didn't attempt to change the server name. The tech said it didn't look as if anyone had hacked the box, but something had replaced "OurCompanyServer" with "DXX12XX". It would seem to me that simply changing the server name back to what I originally had would correct the problem, but the Dell tech said it wasn't that easy. Has anyone else encountered this?

    Read the article

  • Comparison of Unix shells

    - by Andy White
    Of the major Unix shells (bash, ksh, tcsh, zsh, others?), are there any compelling reasons to use one over another? Which is the most interactive/command-line friendly? Which is the most conducive/intuitive for writing scripts? Are there any major built-in features that one shell offers that others don't? Are any of these shells really good for one type of function, but not another? Or are they all pretty well-rounded/flexible? Is it just a matter of personal preference? I can make this community wiki if anyone prefers.

    Read the article

  • Do you think Microsoft is finally on the right track with its Windows 7?

    - by Saif Bechan
    It has been a while now since Windows 7 was released, and so far I haven't heard many major complaints about it. I can remember the time Windows Vista hit the shelves: there were major complaints from both experts and regular users. I do a lot of OS installs for regular users - mostly family and friends, and sometimes customers. Up till now I have mostly used Windows XP SP3, because it is stable and most people are familiar with it. I installed Vista for some users, but they always called me back with all sorts of questions, and in the end I had to downgrade them to XP. Do you think it is safe now to recommend Windows 7 as a good operating system? Of course their hardware has to support it, but let's say that is the case. If you install Windows 7 a lot for people, what complaints do you get, if any?

    Read the article
