Search Results

Search found 39405 results on 1577 pages for 'zeta two'.


  • How to better adjust more than 2 keyboard layouts

    - by zetah
    From time to time I have to use characters that are not present in my two layouts (Latin and Cyrillic), and instead of digging through the Character Map I thought I would add two more keyboard layouts. My issue with this approach is that most of the time I use just two layouts, and when switching layouts (Alt+Shift) I now have to press the shortcut several times to get back to the previous layout. It's not just the number of presses: I have to press two keys at once and watch the keyboard indicator, which is distracting. I tried some of the options in the keyboard settings, but there seems to be no option that does what I want: switch only between the first two layouts on Alt+Shift, and let me choose an additional layout from the keyboard indicator drop-down menu when I need it. Any ideas how this might be possible?

    Read the article

  • Terminology: .NET C++ vs. traditional C++

    - by Mike Clark
    Hello. I've recently been working with a team that's using both .NET C++ and pre-.NET C++. I fully understand the technical differences between the two technologies. However, I sometimes feel like I'm floundering when it comes to the terminology used to differentiate the two. Example: say we have two projects. ProjectA contains "C++" code that builds a .NET assembly DLL. ProjectB contains Visual C++ code that builds a traditional native Windows DLL. What is the best way to succinctly draw a terminological distinction between the two projects? Again, I'm not asking for an in-depth technical description of the differences between the two technologies; I'm just looking for names and labels. This is how I might try to make the distinction when talking to someone about ProjectA and ProjectB: "ProjectA is a managed .NET C++ project, and ProjectB is an unmanaged Visual C++ DLL project." However, I am not at all certain that this terminology is ideal, or even correct. Please describe what you feel the ideal language to use in this situation (or similar situations) might be. Feel free to motivate your answer.

    Read the article

  • How to position a sprite in a 2D animation skeleton?

    - by Paul Manta
    Given two joints that define a bone, I would like to know how to decide where, between those two joints, I should draw the sprite. This should be a fairly simple thing to solve, but there is one thing I am not sure about. After I've determined the rotation of the sprite (which is the absolute angle the joints form with the x-axis), I also need to determine the origin point from which to start drawing the transformed image. So how should I position the sprite between the two joints? Should I make the center of the image be the midpoint between the two joints, or should I make one of the joints the origin? Do these things matter that much (could the wrong positioning make the sprite move oddly during the animation)?
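
    For what it's worth, here is a minimal sketch in plain Java (names invented for the example) of the kind of placement described above: the sprite is rotated by the bone's angle to the x-axis and anchored at the midpoint of the two joints, which keeps it stable as the joints move.

      import java.awt.geom.Point2D;

      /** Minimal bone-to-sprite placement sketch; joint positions are assumed to be in world space. */
      public class BonePlacement {
          public final double centerX, centerY; // point the sprite's center should be drawn at
          public final double rotation;         // radians, angle of the bone against the x-axis

          public BonePlacement(Point2D jointA, Point2D jointB) {
              // Midpoint between the two joints: using it as the sprite's center keeps the
              // sprite from swinging around one end when the bone rotates.
              this.centerX = (jointA.getX() + jointB.getX()) / 2.0;
              this.centerY = (jointA.getY() + jointB.getY()) / 2.0;
              // Absolute angle of the bone; atan2 handles all quadrants.
              this.rotation = Math.atan2(jointB.getY() - jointA.getY(),
                                         jointB.getX() - jointA.getX());
          }

          /** Top-left corner to pass to a draw call that rotates around the image center. */
          public Point2D drawOrigin(double spriteWidth, double spriteHeight) {
              return new Point2D.Double(centerX - spriteWidth / 2.0,
                                        centerY - spriteHeight / 2.0);
          }
      }

    Anchoring at one of the joints works too, but then the image has to be authored so that the joint falls exactly on that edge of the sprite; otherwise the sprite will appear to swing around that joint as the bone rotates.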

    Read the article

  • Enabling multitouch on an Acer 5742?

    - by Ben
    I am trying to get multitouch to work on my touchpad. I am currently trying to run a script to get it to work. It is set to start on boot, saved as .run and has been made executable. Here is the code:

      #!/bin/bash
      #enable multitouch
      sleep 10
      xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Two-Finger Scrolling" 8 1
      xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Synaptics Two-Finger Scrolling" 8 1 1
      xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Synaptics Two-Finger Pressure" 32 10
      xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Synaptics Two-Finger Width" 32 8
      xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Synaptics Edge Scrolling" 8 0 0 0
      xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Synaptics Jumpy Cursor Threshold" 32 110
      synclient TapButton2=2
      exit

    The commands make multitouch work if I enter them in the terminal, but the script itself does not work. Any suggestions?

    Read the article

  • Combining Multiple Queries and Parameters into One Operation

    - by shay.shmeltzer
    This question came up twice this week, and while the solution is explained in a couple of previous blog entries I did, I thought that showing off the complete solution in a single video would be nice. The scenario is that you have two VOs with queries that are based on a parameter. I showed in the past how to create a parameter form that executes the query, and you can do this for both. But what if you actually need just one value to drive both queries? How do you combine two parameter forms and two buttons into one? This is what this video shows you. The steps are:
    Creating two parameter forms
    Setting the value of a parameter in the binding tab
    Creating a backing bean to execute the code for one button
    Adding the code to execute another operation
    Removing the parts that can be dropped from the screen
    Check it out here:
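
    As a rough sketch (not taken from the video), the backing-bean method behind the single button could look roughly like the code below; the operation binding names ExecuteWithParamsVO1/ExecuteWithParamsVO2 and the parameter name queryParam are placeholders for whatever your page definition actually contains.

      import oracle.adf.model.BindingContext;
      import oracle.binding.BindingContainer;
      import oracle.binding.OperationBinding;

      public class QueryBackingBean {
          private String sharedParam; // bound to the single input field on the page

          // Action method for the single command button: runs both parameterized queries.
          public String runBothQueries() {
              BindingContainer bindings =
                      BindingContext.getCurrent().getCurrentBindingsEntry();
              // Hypothetical binding names -- use whatever your page definition calls them.
              executeWithParam(bindings, "ExecuteWithParamsVO1");
              executeWithParam(bindings, "ExecuteWithParamsVO2");
              return null; // stay on the same page
          }

          private void executeWithParam(BindingContainer bindings, String bindingName) {
              OperationBinding op = bindings.getOperationBinding(bindingName);
              op.getParamsMap().put("queryParam", sharedParam); // same value drives both queries
              op.execute();
          }

          public String getSharedParam() { return sharedParam; }
          public void setSharedParam(String sharedParam) { this.sharedParam = sharedParam; }
      }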

    Read the article

  • Duplicate pseudo terminals in Linux

    - by bobtheowl2
    On a Red Hat box [Red Hat Enterprise Linux AS release 4 (Nahant Update 3)] we frequently notice two people being assigned to the same pseudo terminal. For example:

      $who am i
      user1 pts/4 Dec 29 08:38 (localhost:13.0)
      user2 pts/4 Dec 29 09:43 (199.xxx.xxx.xxx)
      $who -m
      user1 pts/4 Dec 29 08:38 (localhost:13.0)
      user2 pts/4 Dec 29 09:43 (199.xxx.xxx.xxx)
      $whoami
      user2

    This causes problems in a script because "who am i" returns two rows. I know there are differences between the two commands, and obviously we can change the script to fix the problem. But it still bothers me that two users are being returned with the same terminal. We suspect it may be related to dead sessions. Can anyone explain why two (non-unique) pts numbers are being assigned, and/or how that can be prevented in the future?

    Read the article

  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done.

    'Pure' SOA involves the modeling of an organisation's processes, the so-called 'Top Down' approach, followed by the implementation of these processes as services. Another approach, more commonly seen in the wild, is the bottom-up approach. This usually involves services that simply start popping up in the organisation, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, have clearly little relation to process-driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA.

    These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on 'Succeeding with SOA'. Organisations that focus too much on bottom-up development, or that waste too much time and money on top-down approaches that don't produce results, are often recommended to attempt an 'agile' (Erl) or 'middle-out' (Microsoft) approach in order to succeed with SOA. The problem with recommending this approach is that, in most cases, succeeding with SOA isn't the aim of the project. If a project is started with the simple aim of 'Succeeding with SOA' then the reasons for the project's existence probably need to be questioned.

    There are a number of things we can be sure of:
    · An organisation will have a number of disparate IT systems
    · Some of these systems will have redundant data and functionality
    · Integration will give considerable ROI
    · Integration will already be under way
    · Services will already exist in the organisation
    · These services will be inconsistent in their implementation and in their governance

    So there are three goals here:
    1. Alignment between the business and IT
    2. Integration of disparate systems
    3. Management of services

    Goals 2 and 3 are going to happen; in fact they must happen if any degree of return is expected from the IT department. Ignoring goal 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved through more efficient IT processes. Goals 2 and 3 are ongoing, and they will continue happening even if a large project to produce a SOA metamodel is started. The result will then be an unstructured tangle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with all the far-reaching consequences this has for our current LOB systems. Let's imagine that this actually works (how often do we rip and replace working software because it doesn't fit a certain pattern? Never - that's the point of integration): we will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large.

    Accepting that an object can have more than one model over time, with perhaps more than one model in use at any given time, will help us realise the limitations of the top-down model. It is entirely normal, and perhaps necessary, for an organisation to be able to view an entity from different perspectives.
    So, instead of trying to constantly force these goals into a straight line, why not let them happen in parallel, and manage the changes in each layer?

    If company A has chosen to model their business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department. If company B's IT department recognizes the problem of wild services springing up everywhere, and decides to do something about it by designing a platform and processes for the introduction of services, is this not a valid approach?

    With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible, based on models that are clearly incomplete and which will therefore change rapidly and often in the near future. Natural business evolution will also mean that the models can be guaranteed to change in the not-so-near future. To 'Succeed with SOA', company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes.

    Could the problem be that there are two different problem domains? And that the whole concept of SOA, as it is being described by clever salespeople today, creates an example of the oft-dreaded 'tight coupling' between these two domains? Could it be that we have taken two large problem areas and bundled the solution together in order to create a magic bullet? And then convinced ourselves that the bullet actually exists?

    Company A wants to have a closer relationship between the business and its IT department, in order to become a more flexible organization. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren't focusing on their actual goals. If company A starts building services from incomplete models, without a gameplan, they will end up in the same situation as company B, with wild services. If company B focuses on modeling, they could easily end up with the same problems as company A. Now we have two companies, who a short while ago had one problem each, that now have two problems each. This has happened because of a focus on 'Succeeding with SOA', rather than on solving the problem at hand.

    This is not to suggest that the two problem domains are unrelated; a strategy that encompasses both will obviously be good for the organization, but only if the organization realizes this and can develop such a strategy. This strategy cannot be bought in a box.

    Anyone who has worked with SOA for a while will be used to analyzing the solutions to a problem and judging the solution's level of coupling. If we have two applications that each perform separate functions, but need to communicate with each other, we create an integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) and the service hosting (technical architecture). The business architecture describes the processes and business objects in the business domain. The technical architecture describes the hosting, management and implementation of services.
    The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation. If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without the changes rippling through from one side to the other. This would allow company A to carry on modeling, and company B to develop a service platform, each achieving their intended goal, without necessarily creating the problems seen in pure top-down or bottom-up approaches. Company B could then at a later date map their service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling.

    How do we do this? The concept of service virtualization has been around for a while, and is instantly realizable in Microsoft's Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst's view, presenting uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services with their beautifully modeled interfaces as 'SOA services', and the implementations as simple integration 'adapter' services providing an interface to a technical implementation. The Managed Services Engine also provides policy-based control over services, regardless of where they are deployed, simplifying the handling of security, logging, exception handling, etc.

    This solves a big problem. The pressure to deliver services quickly is always there in projects. It is very important to quickly show value when implementing service architectures. There is also pressure to deliver quality, and you can't easily do both at the same time. This approach allows quick delivery with quality increasing over time, allowing modeling and service development to occur in parallel and independently of each other. The link between business modeling and service implementation is not one that is obvious to many organizations, and requires a certain maturity to realize and drive forward. It is also completely possible that a company can benefit from one without the other; even if this approach is frowned upon today, there are many companies doing so and seeing ROI.

    Of course there are disadvantages to this, the biggest being the transformations necessary between the virtual interfaces and the service implementations. Bad choices in developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation without redevelopment of the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with predeveloped services. The alternative is to wait until the model is finished and then build the services according to the model. However, if that approach worked we wouldn't be having this discussion! And even when it does work, natural business evolution will mean that the two concepts (model and implementation) will immediately start to drift away from each other, so coupling them tightly together, so that they are forever bound to the model that only applied at the time of the modeling work, will not really achieve a great deal. Architecture is all about trade-offs, and here a choice has to be made.
    The choice is between something that will initially be of low quality but will work, and something that may well be impossible to achieve in most situations.

    In conclusion, top-down is a natural approach for business analysts, and bottom-up is a natural approach for developers. Instead of trying to force something on both that neither wants, and which has not shown itself to be successful, why not let them get on with their jobs, and let an enterprise architect coordinate the processes?
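
    As a very rough illustration of the contract-as-glue idea (all names are invented for the example; in practice the Managed Services Engine does this through configuration and routing policies rather than hand-written code), the virtual service presents the modeled, business-facing contract and translates to whatever the technical implementation happens to expose:

      // Business-facing contract: the shape the business architecture models.
      interface CustomerStatementService {
          Statement statementFor(String customerId);
      }

      // Technical implementation detail: whatever the existing LOB system exposes.
      class LegacyBillingSystem {
          String fetchStatementXml(int accountNumber) {
              return "<statement/>"; // placeholder
          }
      }

      // The "virtual service": presents the modeled contract, transforms and routes
      // to the implementation. Changes on either side are absorbed here.
      class CustomerStatementAdapter implements CustomerStatementService {
          private final LegacyBillingSystem billing = new LegacyBillingSystem();

          @Override
          public Statement statementFor(String customerId) {
              int accountNumber = Integer.parseInt(customerId); // map business id to technical key
              String xml = billing.fetchStatementXml(accountNumber);
              return Statement.fromXml(xml); // map technical payload to the business object
          }
      }

      class Statement {
          static Statement fromXml(String xml) { return new Statement(); }
      }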

    Read the article

  • Requesting feedback on my OO design

    - by Prog
    I'm working on an application that creates music by itself. I'm seeking feedback on my OO design so far. This question will focus on one part of the program.

    The application produces Tune objects, which are the final musical products. Tune is an abstract class with an abstract method play. It has two subclasses: SimpleTune and StructuredTune. SimpleTune owns a Melody and a Progression (chord sequence); its play implementation plays these two objects simultaneously. StructuredTune owns two Tune instances; its own play plays the two Tunes one after the other according to a pattern (currently only ABAB).

    Melody is an abstract class with an abstract play method. It has two subclasses: SimpleMelody and StructuredMelody. SimpleMelody is composed of an array of notes; invoking play on it plays these notes one after the other. StructuredMelody is composed of an array of Melody objects; invoking play on it plays these Melodies one after the other. I think you're starting to see the pattern. Progression is also an abstract class with a play method and two subclasses: SimpleProgression and StructuredProgression, each composed differently and played differently. SimpleProgression owns an array of chords and plays them sequentially. StructuredProgression owns an array of Progressions, and its play implementation plays them sequentially.

    Every class has a corresponding Generator class. Tune, Melody and Progression are matched with corresponding abstract TuneGenerator, MelodyGenerator and ProgressionGenerator classes, each with an abstract generate method. For example, MelodyGenerator defines an abstract Melody generate method. Each of the generators has two subclasses, Simple and Structured. So, for example, MelodyGenerator has a subclass SimpleMelodyGenerator, with an implementation of generate that returns a SimpleMelody. (It's important to note that the generate methods encapsulate complex algorithms. They are more than mere factory methods. For example, SimpleProgressionGenerator.generate() implements an algorithm to compose a series of Chord objects, which are used to instantiate the returned SimpleProgression.) Every Structured generator uses another generator internally. It is a Simple generator by default, but in special cases may be a Structured generator.

    Parts of this design are meant to allow the end user, through the GUI, to choose what kind of music is to be created. For example, the user can choose between a "simple tune" (SimpleTuneGenerator) and a "full tune" (StructuredTuneGenerator). Other parts of the system aren't subject to direct user control. What do you think of this design from an OOD perspective? What potential problems do you see with it? Please share your criticism; I'm here to learn. Apart from this, a more specific question: the "every class has a corresponding Generator class" part feels very wrong. However, I'm not sure how I could design this differently and achieve the same flexibility. Any ideas?
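
    For reference, a bare-bones Java sketch of the Tune side of the hierarchy described above might look like this (the names follow the description in the question; the Melody and Progression types are stubbed so the sketch compiles, and real playback would obviously share a timeline rather than call play sequentially):

      // Composite-style hierarchy: a Tune is either simple (melody + chords)
      // or structured (built from two other Tunes played in a pattern).
      abstract class Tune {
          public abstract void play();
      }

      class SimpleTune extends Tune {
          private final Melody melody;
          private final Progression progression;

          SimpleTune(Melody melody, Progression progression) {
              this.melody = melody;
              this.progression = progression;
          }

          @Override
          public void play() {
              // Plays the melody and the chord progression together.
              melody.play();
              progression.play();
          }
      }

      class StructuredTune extends Tune {
          private final Tune a;
          private final Tune b;

          StructuredTune(Tune a, Tune b) {
              this.a = a;
              this.b = b;
          }

          @Override
          public void play() {
              // ABAB pattern, as described in the question.
              a.play(); b.play(); a.play(); b.play();
          }
      }

      // Stubs; the real classes mirror the same Simple/Structured split.
      abstract class Melody { public abstract void play(); }
      abstract class Progression { public abstract void play(); }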

    Read the article

  • Install build-essential in Ubuntu 12.04

    - by Mukul Shukla
    After I install a fresh copy of Ubuntu and need to install build-essential, I have to type sudo apt-get update and sudo apt-get upgrade before installing build-essential. These two commands take a LOT of time and install many things. Is there a way to install build-essential without running these two commands, or a way that these two commands don't install all the updates and hence take less time?

    Read the article

  • MySQL Multi-Aggregated Rows in Crosstab Queries

    MySQL's crosstabs contain aggregate functions on two or more fields, presented in a tabular format. In a multi-aggregate crosstab query, two different functions can be applied to the same field or the same function can be applied to multiple fields on the same (row or column) axis. Rob Gravelle shows you how to apply two different functions to the same field in order to create grouping levels in the row axis.

    Read the article

  • BizTalk – Mapping repeating EDI segments using a Table Looping functoid

    - by Bill Osuch
    BizTalk’s HIPAA X12 schemas have several repeating date/time segments in them, where the XML winds up looking something like this:

      <DTM_StatementDate>
        <DTM01_DateTimeQualifier>232</DTM01_DateTimeQualifier>
        <DTM02_ClaimDate>20120301</DTM02_ClaimDate>
      </DTM_StatementDate>
      <DTM_StatementDate>
        <DTM01_DateTimeQualifier>233</DTM01_DateTimeQualifier>
        <DTM02_ClaimDate>20120302</DTM02_ClaimDate>
      </DTM_StatementDate>

    The corresponding EDI segments would look like this:

      DTM*232*20120301~
      DTM*233*20120302~

    The DateTimeQualifier element indicates whether it’s the start date or end date – 232 for start, 233 for end. So in this example (an X12 835) we’re saying the statement starts on 3/1/2012 and ends on 3/2/2012. When you’re mapping from some other data format, many times your start and end dates will be within the same node, like this:

      <StatementDates>
        <Begin>20120301</Begin>
        <End>20120302</End>
      </StatementDates>

    So how do you map from that and create two repeating segments in your destination map? You could connect both the <Begin> and <End> nodes to a looping functoid, and connect its output to <DTM_StatementDate>, then connect both <Begin> and <End> to <DTM_StatementDate> … this would give you two repeating segments, each with the correct date, but how to add the correct qualifier? The answer is the Table Looping Functoid!

    To test this, let’s create a simplified schema that just contains the date fields we’re mapping. First, create your input schema: And your output schema: Now create a map that uses these two schemas, and drag a Table Looping functoid onto it. The first input parameter configures the scope (or how many times the records will loop), so drag a link from the StatementDates node over to the functoid. Yes, StatementDates only appears once, so this would make it seem like it would only loop once, but you’ll see in just a minute. The second parameter in the functoid is the number of columns in the output table. We want to fill two fields, so just set this to 2. Now drag the Begin and End nodes over to the functoid. Finally, we want to add the constant values for DateTimeQualifier, so add a value of 232 and another of 233. When all your inputs are configured, it should look like this:

    Now we’ll configure the output table. Click on the Table Looping Grid, and configure it to look like this: Microsoft’s description of this functoid says “The Table Looping functoid repeats with the looping record it is connected to. Within each iteration, it loops once per row in the table looping grid, producing multiple output loops.” So here we will loop (# of <StatementDates> nodes) * (Rows in the table), or 2 times.

    Drag two Table Extractor functoids onto the map; these are what are going to pull the data we want out of the table. The first input to each of these will be the output of the TableLooping functoid, and the second input will be the row number to pull from. So the functoid connected to <DTM01_DateTimeQualifier> will look like this: Connect these two functoids to the two nodes we want to populate, and connect another output from the Table Looping functoid to the <DTM_StatementDate> record. You should have a map that looks something like this:

    Create some sample XML, use it as the TestMap Input Instance, and you should get a result like the XML at the top of this post.

    Technorati Tags: BizTalk, EDI, Mapping

    Read the article

  • Platinum SEO Plugin Vs All-In-One SEO Pack

    The two biggest SEO optimization tools for WordPress are probably Platinum SEO and All-in-One SEO Pack. This article explains the relationship between the two and gives a brief rundown of installing the Platinum SEO plugin on your blog. It then provides a feature-for-feature comparison of the two.

    Read the article

  • Upgrading your Internet connection to obsolescence

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/10/11/upgrading-your-internet-connection-to-obselescence.aspx
    Recently I was approached by two UK Internet Service Providers to upgrade to a fibre connection. In each case I asked two questions:
    Can you provide me with a fixed IP address?
    Is the equipment to be provided compatible with IPv6?
    Neither question was satisfactorily answered. One of the people answering even suggested that granting a fixed IP address would be a breach of security! I find it very disturbing that two companies that present themselves as innovative should still not be preparing for IPv6. The answer I would have expected was that all new equipment being supplied was IPv6 compatible and that plans were in hand for a switchover to IPv6. Instead, new equipment would be supplied that would have to be replaced when IPv6 comes. Equally disturbing was that the call center people who answered did not know why a fixed IP address was important or why the change to IPv6 would have to come. I would rather not name and shame the two companies; however, I will be looking elsewhere for my next Internet Service Provider.

    Read the article

  • Zoom Layer centered on a Sprite

    - by clops
    I am in the process of developing a small game where a space-ship travels through a layer (doh!). In some situations the spaceship comes close to an enemy space ship, and the whole layer is zoomed in on the two, with the zoom level being dependent on the distance between the ship and the enemy. All of this works fine. The main question, however, is how do I keep the zoom centered on the point midway between the two space-ships and make sure that the two are not off-screen? Currently I control the zooming in the GameLayer object through the update method; here is the code (there is no layer repositioning here yet):

      -(void) prepareLayerZoomBetweenSpaceship{
          CGPoint mainSpaceShipPosition = [mainSpaceShip position];
          CGPoint enemySpaceShipPosition = [enemySpaceShip position];
          float distance = powf(mainSpaceShipPosition.x - enemySpaceShipPosition.x, 2) +
                           powf(mainSpaceShipPosition.y - enemySpaceShipPosition.y, 2);
          distance = sqrtf(distance);
          /* Distance > 250 --> no zoom
             Distance < 100 --> maximum zoom */
          float myZoomLevel = 0.5f;
          if(distance < 100){
              //maximum zoom in
              myZoomLevel = 1.0f;
          }else if(distance > 250){
              myZoomLevel = 0.5f;
          }else{
              myZoomLevel = 1.0f - (distance-100)*0.0033f;
          }
          [self zoomTo:myZoomLevel];
      }

      -(void) zoomTo:(float)zoom {
          if(zoom > 1){
              zoom = 1;
          }
          // Set the scale.
          if(self.scale != zoom){
              self.scale = zoom;
          }
      }

    Basically my question is: How do I zoom the layer and center it exactly between the two ships? I guess this is like a pinch zoom with two fingers!
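
    One common approach (sketched below as plain, language-agnostic math written in Java; the names are invented) is to treat the midpoint between the two ships as a focus point and move the layer so that the focus point lands at the center of the screen at the current scale. This assumes the layer scales around its origin; in cocos2d the anchor point may change the details, and some clamping is still needed near the world edges so neither ship leaves the screen.

      /** Camera helper: keeps a focus point (midpoint between two ships) at screen center. */
      public final class ZoomCamera {

          /** Returns {layerX, layerY}: the position to give the layer so that
           *  the world-space focus point appears at the center of the screen. */
          public static double[] layerPositionFor(double focusX, double focusY,
                                                  double scale,
                                                  double screenWidth, double screenHeight) {
              // With scaling around the layer origin, a world point p appears on screen at
              // layerPos + p * scale, so solve layerPos = screenCenter - focus * scale.
              double layerX = screenWidth  / 2.0 - focusX * scale;
              double layerY = screenHeight / 2.0 - focusY * scale;
              return new double[] { layerX, layerY };
          }

          /** Midpoint between the two ships, used as the focus point. */
          public static double[] midpoint(double ax, double ay, double bx, double by) {
              return new double[] { (ax + bx) / 2.0, (ay + by) / 2.0 };
          }
      }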

    Read the article

  • JSR 355 Final Release moves JCP to version 2.9

    - by heathervc
    JSR 355, JCP EC Merge, passed the JCP EC Final Approval Ballot on 13 August 2012, with 14 yes votes and 1 abstention (1 member did not vote) on the SE/EE EC, and 12 yes votes (2 members were not eligible to vote) on the ME EC. JSR 355 posted a Final Release this week, moving the JCP program version to JCP 2.9. The transition to a merged EC will happen after the 2012 EC Elections, as defined in Appendix B of the JCP (pasted below), and the EC will operate under the new EC Standing Rules.

    In the previous version (2.8) of this Process Document there were two separate Executive Committees, one for Java ME and one for Java SE and Java EE combined. The single Executive Committee described in this version of the Process Document will be implemented through the following process:
    The 2012 annual elections will be held as defined in JCP 2.8, but candidates will be informed that if they are elected their term will be for only a single year, since all candidates must stand for re-election in 2013.
    Immediately after the 2012 election the two ECs will be merged. Oracle and IBM's second seats will be eliminated, resulting in a single EC with 30 members. All subsequent JSR ballots (even for in-progress JSRs) will then be voted on by the merged EC.
    For the 2013 annual elections three Ratified and two Elected Seats will be eliminated, thereby reducing the EC to 25 members. All 25 seats will be up for re-election in 2013.
    Members elected in 2013 will be ranked to determine whether their initial term will be one or two years. The 50% of Ratified and 50% of Elected members who receive the most votes will serve an initial two-year term, while all others will serve an initial one-year term.
    All members elected in 2014 and subsequently will serve a two-year term.
    For clarity, note that the provisions specified in this version of the Process Document regarding a merged EC will apply to subsequent ballots on all existing JSRs, whether or not the Spec Leads of those JSRs chose to adopt this version of the Process Document in its entirety.
    <end of Appendix>

    Also of note: the materials and minutes from the July EC meeting and the June EC meeting are now available; following the July EC meeting, Samsung and SK Telecom lost their EC seats. The June EC meeting also had a public portion, and the audio from that portion is now posted online. For Spec Leads there is also the recording of the EG Nominations call.

    Read the article

  • Merging the Executive Committees

    - by Patrick Curran
    As I explained in this blog last year, we use the Process to change the Process. The first of three planned JSRs to modify the way the JCP operates (JSR 348: Towards a new version of the Java Community Process) completed in October 2011. That JSR focused on changes to make our process more transparent and to enable broader participation. The second JSR was inspired by our conviction that Java is One Platform and by our expectation that Java ME and Java SE will become more aligned over time. In anticipation of this change JSR 355: JCP Executive Committee Merge will merge the two Executive Committees into one. The JSR is going very well. We have reached consensus within the Executive Committees, which serve as the Expert Group for process-change JSRs. How we intend to make the transition to a single EC is explained in the revised versions of the Process and EC Standing Rules documents that are currently posted for Early Draft Review. Our intention is to reduce the total number of EC seats but to keep the same ratio (2:1) of ratified and elected seats. Briefly, the plan will be implemented in two stages. The October 2012 elections will be held as usual, but candidates will be informed that they will serve only a one-year term if elected. The two ECs will be merged immediately after this election; at the same time, Oracle's second permanent seat and one of IBM's two ratified seats will be eliminated. The initial merged EC will therefore have 30 members. In the October 2013 elections we will eliminate three more ratified seats and two elected seats, thereby reducing the size of the combined EC to 25 members (16 ratified seats, 8 elected seats, plus Oracle's permanent seat.) All remaining seats, including those of members who were elected in 2012, will be up for re-election in 2013; that election should be particularly interesting. Starting in 2013 we will change from a three-year to a two-year election cycle (half of all EC members will be up for re-election each year.) We believe that these changes will streamline our operations, and position us for a future in which the distinctions between desktop and mobile devices become increasingly blurred. Please take this opportunity to review and comment on our proposed changes - we appreciate your input. Thank you, and onward to JCP.next.3!

    Read the article

  • Which firewall ports do I need to open in order for a domain trust to work?

    - by Massimo
    I have two Active Directory domains in two different forests; each domain has two DCs (all of them Windows Server 2008 R2). The domains are also in different networks, with a firewall connecting them. I need to create a two-way forest trust between the two forests. How do I configure the firewall to allow this? I found this article, but it doesn't explain very clearly which traffic is required between DCs, and which traffic (if any) is needed instead between domain computers in one domain and the DCs of the other one. I'm allowed to permit all traffic between the DCs, but allowing computers in one network to access DCs in the other one would be a little more difficult.

    Read the article

  • What is the cloud?

    - by llaszews
    Everyone has their own definition of cloud computing. This is a real conversation overheard at a small cafe in NH between two general contractors:
    Contractor One: I can't get the document I need because it is on my home PC.
    Contractor Two: You need cloud computing!
    Contractor One: What the hell is that?
    Contractor Two: You log into one computer and all your information for all your other computers is available.
    The NH "live free or die" definition of cloud computing!

    Read the article

  • How do I optimize SEO in a multiblog WordPress install?

    - by user35585
    We are about to launch two product pages plus a corporate website. The goal is to keep a blog on all of the sites, but that raises the question of how to do it in a way that keeps everything unified without confusing Google's web crawlers. We considered the following options:
    Putting up a single blog from which we retrieve two categories with custom CSS, so that one blog is sub-split into two category-dependent blogs; this way we can get the feeds and point to them
    Putting up two product blogs whose posts we retrieve into a bigger, corporate blog
    Putting up three independent blogs
    Although I am for the first option, since then we only have to address our content from the product pages, I would sincerely like to hear your opinion. We are afraid that duplicate content or strange link games may make us lose PageRank. How would you do it?

    Read the article

  • How to forward HTTP traffic through a specific network adapter

    - by user18129
    I have the following scenario. Two laptops are connected via a router through their Ethernet ports. These two computers need to be able to communicate with each other. One computer also needs to access the internet through a different adapter (i.e. we will be taking these two laptops to various sites where the most common type of internet access will be wireless). In isolation all of the various adapters work fine (i.e. the internal network works fine, and the wireless connects to the internet). However, when we try to turn on all of the adapters at the same time, the following occurs:
    If we bridge the two network connections together on the "Server", the internet connection doesn't work through the wireless.
    If we don't bridge the connections, the internet connections don't work.
    It seems like HTTP traffic is trying to be sent through the Ethernet adapter (which of course is not connected to an internet connection). How can we solve this?

    Read the article

  • Triple monitors on hybrid video system GeForce+Intel

    - by v_mil
    I use a Lenovo IdeaPad Z580A with hybrid video (GeForce + Intel) and Ubuntu 12.10 x64 (Ukrainian). It has an internal display and two outputs: HDMI and VGA. When I connect a third display to VGA, all displays go black. Pressing Alt+Backspace and logging in again brings output back on two displays: the internal one and VGA. System Settings - Displays (or Monitors - I have the Ukrainian interface) shows three displays: two are on, one (DVI) is off. Turning DVI on and pressing Apply causes an error: Can not set configuration of controller CRTC 65. The BIOS setting is Optimus (two video cards). The driver for the GeForce is Nouveau. With best regards, Viktor.

    Read the article
