Search Results

Search found 2176 results on 88 pages for 'wide'.


  • Data Model Dissonance

    - by Tony Davis
    So often, at the start of the development of database applications, there is a premature rush to the keyboard. Unless, before we get there, we've mapped out and agreed the three data models (the Conceptual, the Logical and the Physical), the inevitable refactoring will dog development work. It pays to get the data models sorted out up-front, however 'agile' you profess to be.

    The hardest model to get right, the most misunderstood, and the one most neglected by the various modeling tools is the conceptual data model, and yet it is critical to all that follows. The conceptual model distils what the business understands about itself and the way it operates. It represents the business rules that govern the required data, its constraints and its properties. The conceptual model uses the terminology of the business and defines the most important entities and their inter-relationships. Don't assume that the organization's understanding of these business rules is consistent or accurate. Too often, one department has a subtly different understanding from another of what an entity means and what it stores. If our conceptual data model fails to resolve such inconsistencies, it will reduce data quality. If we don't collect and measure the raw data in a consistent way across the whole business, how can we hope to perform meaningful aggregation?

    The conceptual data model has more to do with business than technology, and as such, developers often regard it as a worthy but rather arcane ceremony, like saluting the flag or only eating fish on Friday. However, the consequences of getting it wrong have a direct and painful impact on many aspects of the project. If you adopt a silo-based (a.k.a. domain-driven) approach to development, you are still likely to suffer by starting with an incomplete knowledge of the domain. Even when you have surmounted these problems, so that the data entities accurately reflect the business domain that the application represents, there are likely to be dire consequences from abandoning the goal of a shared, enterprise-wide understanding of the business.

    In reading this, you may recall experiences of the consequences of getting the conceptual data model wrong. I believe that Phil Factor, for example, witnessed the abandonment of a multi-million dollar banking project due to an inadequate conceptual analysis of how the bank defined a 'customer'. We'd love to hear of any examples you know of development projects poleaxed by errors in the conceptual data model. Cheers, Tony

  • How can I achieve a 3D-like effect with SpriteBatch's rotation and scale parameters?

    - by Alic44
    I'm working on a 2D game with a top-down perspective similar to Secret of Mana and the 2D Final Fantasy games, with one big difference being that it's an action RPG using a 3-dimensional physics engine. I'm trying to draw an aimer graphic (basically an arrow) at my characters' feet when they're aiming a ranged weapon. At first I just converted the character's aim vector to radians and passed that into SpriteBatch, but there was a problem. The position of every object in my world is scaled for perspective when it's drawn to the screen. So if the physics engine coordinates are (1, 0, 1), the screen coords are actually (1, .707) -- the Y and Z axes are scaled by a perspective factor of .707 and then added together to get the screen coordinates. This meant that the direction the aimer graphic pointed (thanks to its rotation value passed into SpriteBatch) didn't match up with the direction the projectile actually traveled over time. Things looked fine when the characters fired left, right, up, or down, but if you fired on a diagonal, the perspective of the physics engine didn't match the simplistic way I was converting the character's aim direction to a screen rotation.

    OK, fast-forward to now: I've got the aimer's rotation matched up with the path the projectile will actually take, which I'm doing by decomposing a transform matrix that I build from two rotation matrices (one to represent the aimer's rotation, and one to represent the camera's 45-degree rotation on the X axis). My question is: is there a way to get not just rotation from a series of matrix transformations, but also a Vector2 scale which would give the aimer the appearance of being a 3D object, warped by perspective? Orthographic perspective is what I'm going for, I think. So, the aimer arrow would get longer when facing sideways, and shorter when facing north and south, because of the perspective. At the same time, it would get wider when facing north and south, and less wide when facing right or left. I'd like to avoid actually drawing the aimer texture in 3D because I'm still using SpriteBatch's layerDepth parameter at this point in my project, and I don't want to have to figure out how to draw a 3D object within the depth sorting system I already have. I can provide code and more details if this is too vague as a question... This is my first post on Stack Exchange. Thanks a lot for reading!

    Note: (I think) I realize it can't be a technically correct 3D perspective, because SpriteBatch's Vector2 scaling argument doesn't allow for an object to be skewed the way it actually should be. What I'm really interested in is: is there a good way to fake the effect, or should I just drop it and not scale at all?

    Edit to clarify, without the help of a picture (apparently I can't post them yet): I want the aimer arrow to look like it has been painted on the ground at the character's feet, so it should appear to be drawn on the ground plane (in my case the XZ plane), which should be tilted at a 45-degree angle (around the X axis) from the viewing perspective. Alex
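    For what it's worth, the approximation described above can be computed directly from the .707 factor in the question. The sketch below is not XNA code, just the math in Python; it assumes the arrow texture points along +X before rotation and, as the poster already notes, it ignores the skew a true projection would add, so it only fakes the effect:

        import math

        PERSPECTIVE = 0.707  # the question's Y/Z scale factor (cos 45 deg)

        def aimer_draw_params(aim_angle):
            # Unit aim direction in the ground (XZ) plane, projected to screen.
            fx, fz = math.cos(aim_angle), math.sin(aim_angle)
            rotation = math.atan2(fz * PERSPECTIVE, fx)      # pass as SpriteBatch rotation
            scale_along = math.hypot(fx, fz * PERSPECTIVE)   # arrow length foreshortening
            # The perpendicular ground direction controls the arrow's width.
            scale_across = math.hypot(-fz, fx * PERSPECTIVE)
            return rotation, scale_along, scale_across

        print(aimer_draw_params(0.0))           # facing right: long (1.0), narrow (~0.707)
        print(aimer_draw_params(math.pi / 2))   # facing north: short (~0.707), wide (1.0)

    This reproduces the behaviour asked for (longer sideways, wider north/south); whether the residual skew on diagonals is tolerable is exactly the judgment call the note anticipates.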

  • The cost of Programmer Team Clustering

    - by MarkPearl
    I was recently involved in a conversation about the productivity of programmers and the seemingly wide range in abilities that different programmers have in this industry. Some of the comments made were reiterated a few days later when I came across a chapter in Code Complete (v2) which says, "In programming specifically, many studies have shown order-of-magnitude differences in the quality of the programs written, the sizes of the programs written, and the productivity of programmers". In line with this is another comment presented by Code Complete when discussing teams: "Good programmers tend to cluster, as do bad programmers". This is something I can personally relate to. I have come across some really good and bad programmers, and 99% of the time it turns out the team they work in is the same - really good or really bad. When I have found a mismatch, it hasn't stayed that way for long - the person has moved on, or the team has ejected the individual.

    Keeping this in mind, I would like to comment on the risks an organization faces when forcing teams to remain together regardless of the mix. When you have a situation where someone is not willing to be part of the team but still wants to get a pay check at the end of each month, it presents some interesting challenges and hard decisions. First of all, when this occurs you need to give them an opportunity to change - for someone to change, they need to know what the problem is and what is expected. It is unreasonable to expect someone to change when you have not indicated what they need to change and the consequences of not changing. If, after a reasonable time of being aware of the problem, the individual makes no effort to improve, you need to do two things: follow through with the consequences of not changing, and consider the impact that this behaviour will have on the rest of the team.

    What is the cost of not following through with the consequences? If there is no follow-through, it is often an indication to the individual that they can continue their behaviour. Why should they change if you don't care enough to keep your end of the agreement? In many ways I think it is very similar to the "Broken Windows" principle - if you allow the windows to break and don't fix them, more will get broken.

    What is the cost of keeping them on? When keeping a disruptive influence in a team, you risk losing the good in the team. As Code Complete says, good and bad programmers tend to cluster - they have a tendency to keep this balance. If you are not going to help keep the balance, they will: the cost of not removing a disruptive influence is that the good people in the team will eventually maintain the clustering themselves by leaving.

  • Independent Research on 1500 Companies Reveals Challenges in Performance Visibility – Part 1

    - by ndwyouell
    At the end of May I was joined by Professor Andy Neely of Cambridge University on a webinar, with an audience of over 700, to discuss the results of this extensive study, which covered 13 countries and nearly every commercial and industrial sector. What stunned both of us was not so much the number listening but the 100 questions they asked in just one hour. This certainly represents a record in my experience, and for those that organized the webinar.

    So what was all the fuss about? Well, to begin with, this was a pretty big sample, and it represented organizations with over $100m in sales across the USA, Europe, Africa and the Middle East. It also delivered some pretty interesting results across a wide range of EPM subjects such as profitability, planning and reporting. Let's look at some of those findings.

    We kicked off with profitability, one of the key factors in driving performance - or so you would think, but in fact 82% of our respondents said they did not have complete visibility into the profitability of their organization. 91% of these went further to say that, not surprisingly, this lack of visibility into profitability has implications, with over half citing 3 or more implications. Implications cited included misallocated resources, revenue opportunities not maximized, erroneous decisions made and impaired financial performance. Quite a list of implications, especially given the difficult economic circumstances many organizations are operating in at this time.

    So why is this? Well, other results in the study point to some of the potential reasons. Firstly, 59% of respondents that use spreadsheets use them for monitoring profitability, and 93% of all managers responding to the study use spreadsheets to gather and analyze information. This is an enormous proportion given the problems with spreadsheet-based performance management systems that have been widely talked about for many years. For profitability analysis this is particularly important when you consider that the typical requirement will be to allocate cost and revenue across 6+ dimensions based on many different allocation methods. That is not something that can be done easily in spreadsheets, and it gets to be a nightmare once you want to change allocations, run different scenarios and then change the basis of your planning and budgeting! It is no wonder so many organizations have challenges in performance visibility.

    My next blog will look at the fragmented nature of many organizations' planning. In the meantime, if you want to read the complete report on the research, go to: http://www.oracle.com/webapps/dialogue/ns/dlgwelcome.jsp?p_ext=Y&p_dlg_id=10077790&src=7038701&Act=29
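    To make the allocation point concrete, here is a deliberately tiny sketch of driver-based allocation across just two of those dimensions (invented numbers, not from the study; Python used purely for illustration). Re-running it under a different driver is a one-line change; doing the same across 6+ dimensions in a spreadsheet is where the nightmare starts:

        # Spread a shared cost pool across (product, region) cells in
        # proportion to a driver -- here, each cell's revenue.
        revenue = {("widgets", "EMEA"): 120.0,
                   ("widgets", "NA"):   200.0,
                   ("gadgets", "EMEA"):  80.0}
        shared_cost = 50.0

        total = sum(revenue.values())
        allocation = {cell: shared_cost * r / total for cell, r in revenue.items()}
        print(allocation)   # swap the driver dict to re-run a new scenario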

  • Your Cinnamon Roll & Morning Coffee: Powered by Oracle Enterprise Manager

    - by Ruma Sanyal
    Truth be told, as I was getting my morning coffee today, I was pondering the recent election results more than Oracle [there, I said it]. But then an email from Glen Hawkins from the Enterprise Management team hit my Inbox and I started viewing this video. It was about the world's largest convenience store chain, 7-Eleven, focusing on creating the best Digital Guest Experience (DGE) for their customers. It turns out that Oracle Enterprise Manager (OEM) powers 7-Eleven's DGE Middleware Platform as a Service solution, which consists of Oracle SOA Suite, Exalogic, and Exadata. "We need to present a consistent view of 7-Eleven across all our endpoints: 10,000 stores & various digital entities like our websites and apps", said Ronald Clanton, the DGE Program Director for 7-Eleven. As 7-Eleven was rolling out a loyalty program with mobile support across multiple geos, it had many complex business & technical requirements, including supporting a wide variety of different apps, 10M guests in NA alone, the ability to support high-speed transactions, and very aggressive timelines. A key requirement was shortening the cycle for provisioning new environments. Whereas with other vendors this would take a few weeks, Oracle consulting showed them how, with OEM, provisioning new environments would take half a day, which was quite impressive. 7-Eleven has started to roll out this new program and is delighted to report that some provisioning cycles are as low as 10 minutes, which includes provisioning the full Oracle SOA Suite, Exalogic and more. They are delighted with OEM's reporting capabilities and the customization thereof. Watch the video to see for yourself.

  • Partners - Steer Clear of the Unknown with Oracle Enterprise Manager 12c and Plug-in Extensibility

    - by Get_Specialized!
    Imagine that you just purchased a new car, and as you entered the vehicle to drive it home you found there was no steering wheel. Upon asking, the dealer told you that the steering wheel was an option, and that you had a choice, now or later, of a variety of aftermarket steering wheels that fit a wide variety of automobiles. If you expected the car to already have a steering wheel designed to manage your transportation solution, you might wonder why someone would offer an application solution whose management is not part of the solution or offered as an option...

    Using management designed to support the underlying technology, and that can provide management and support for your own Oracle technology based solution, can benefit your business in a variety of ways: increased customer satisfaction, reduction of support calls, and margin and revenue growth. Sometimes, when something is not included or recommended, customers take their own path, which may not be optimal when using your solution and can later impact the customer's satisfaction - or worse, have a negative impact on their business.

    As an Oracle Partner, you can reduce your research, certification, and time to market by selecting and offering management designed, developed, and supported for Oracle product technology by Oracle, with Oracle Enterprise Manager 12c. For partners with solution-specific management needs, or those seeking to differentiate themselves in the market, Enterprise Manager 12c is extensible and gives partners the opportunity to create their own plug-ins, as well as a validation program for them. Today a number of examples by partners are available, with more on the way from such partners as NetApp for NetApp storage and Blue Medora for VMware vSphere. To review them, and to consider their applicability to your solution, visit the Oracle PartnerNetwork KnowledgeZone for Enterprise Manager under the Develop tab: http://www.oracle.com/partners/goto/enterprisemanager

  • Cone of Uncertainty in classic and agile projects

    - by DigiMortal
    David Starr from Scrum.org gave an interesting session at TechEd Europe 2012 - Implementing Scrum Using Team Foundation Server 2012. One of the interesting things for me was how the Cone of Uncertainty looks in agile projects (or how agile methodologies distort the cone we know from waterfall projects). This posting illustrates two cones - one for the waterfall world and one for the agile world.

    Cone of Uncertainty

    The Cone of Uncertainty was introduced to the software development community by Steve McConnell, and it visualizes how accurate our estimates are over the project timeline. Here is the Cone of Uncertainty when we deal with waterfall and Big Design Up-Front (BDUF). [Figure: Cone of Uncertainty. Taken from the MSDN Library page "Estimating".] The closer we are to the project's end, the more accurate our estimates are; when the project ends, we know exactly how much time every task took. As we can see, the cone is wide at the point where we usually have to give our estimates - somewhere between Initial Project Concept and Requirements Complete. Don't ask me why Initial Project Concept is the stage where some companies give their best estimates - they just do it every time and don't learn a thing later. This cone is inevitable for software development, and the agile methodologies that try to make the software world better are also able to change the cone.

    Cone of Uncertainty in agile projects

    Agile methodologies usually try to avoid BDUF, waterfalls, and other things that make all our mistakes highly expensive. Of course, we are not the only ones who make mistakes - don't forget our dear customers either. Agile methodologies treat development as creative work and focus on making it better. One main trick is to focus on small and short iterations. What does this mean? We are estimating functionalities that are easier for us to understand and implement, and therefore our estimates are more accurate. As we move from a few big iterations to many small iterations, we also distort and slice the Cone of Uncertainty. This is how the cone looks when agile methodologies are used. [Figure: Cone of Uncertainty in agile projects.] We have more cones to live with, but they are way smaller. I don't have any numbers to put here, because I couldn't find any, but this "chart" should still give you the point: more, smaller iterations cause more, but way smaller, cones of uncertainty. We can handle these small uncertainties because the steps we take to complete small tasks are more predictable and don't often grow above our heads.

    One more note: consider that both of the charts given in this posting describe exactly the same phase of the same project - just the uncertainties are different.
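    To put rough numbers on that point (entirely made-up figures, not McConnell's): suppose an estimate made at the very start of a scope can, per the classic cone, be off by a factor of about 4. Then the size of the scope inside any one cone is what matters, as this toy Python calculation shows:

        def worst_case_miss(scope_months, factor=4.0):
            """Worst-case overrun, in months, for a scope estimated at its start."""
            return scope_months * factor - scope_months

        print(worst_case_miss(12))  # one big cone over 12 months: up to 36 months off
        print(worst_case_miss(2))   # each 2-month iteration's cone: up to 6 months off

    Each new iteration re-opens a cone, but only over the slice of work in front of us - which is exactly the "more but way smaller cones" picture above.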

  • Two New CRM USER Communities just launched

    - by Divya Malik
    Here comes an announcement from Chris Gallen, from our Support Services team. For those of you who are EBS CRM users, here are two recently launched communities that are now available to discuss topics that are important to you. These communities are for Sales & Marketing and Telesales.

    The Sales & Marketing community is open to discuss a wide range of topics from Oracle Sales, Sales Online, Territory Management, Partner Management, Leads Management, Sales Offline, Sales for Handhelds, Sales Foundation, and Oracle Marketing. Some possible topics include Oracle Sales Implementations, TCA and DQM Integrations, Territory Management Setups and Definitions, Product Catalog Integrations, Sales Forecasting, Lead and Opportunity management, Sales Manager and Sales User responsibilities and Reports, Resource Management including Roles and Groups, Oracle Sales Personalizations, Concurrent Requests for Sales Reps and Sales Manager Dashboards, Integration with Quoting, Proposals, General Ledger, Advanced Product Catalog, CRM Resource Administration, etc.

    The Telesales community is available to discuss topics such as Customer/Org/Person/Party Relationships, TCA/DQM Integration, Lead and Opportunity Management, Universal Work Queue, Universal Search Features, Purchase Items/Product Integration, eBusiness Center Setup Issues, Interactions, Tasks and Notes Integrations, and Form Personalizations.

    How can you get started? Here are the two ways to get engaged:

    A) Click here to access all our communities, OR
    B) Via My Oracle Support, as follows:
       1. Log into My Oracle Support (Flash or Classic).
       2. Click the "Community" link at the top of the page.
       3. Click [Enter Here] on the following page.
       4. Select the community from the "My Communities" list on the top-left.

    Take advantage TODAY!

  • Capture a Query Executed By An Application Or User Against a SQL Server Database in Less Than a Minute

    - by Compudicted
    At times a Database Administrator, or even a developer, is required to wear a spy's hat. This necessity is oftentimes dictated by a need to take a glimpse into a black-box application, for reasons varying from a performance issue to unauthorized access to data or resources - or, as in my most recent case, a closed-source custom application abandoned by a contractor who deserted, without source code. It may not be news or unknown to most IT people that SQL Server has always provided a means of back-door access to everything connecting to its database. This indispensable tool is SQL Server Profiler. This "gem" is always quietly sitting in the Start - Programs - SQL Server <product version> - Performance Tools folder (yes, it is mostly for performance analysis, but not limited to that), ready to help you!

    So, to the action - let's start it up. Once ready, click the File - New Trace button, or use Ctrl-N on your keyboard. The standard connection dialog you have seen in SSMS comes up, where you connect the standard way. One side note here: you will be able to connect only if your account belongs to the sysadmin fixed server role or has the ALTER TRACE permission. Upon a successful connection you should see the initial Trace Properties dialog. At this stage I will give a hint: you will have a wide variety of predefined templates, but to shorten your time to results you should opt for the TSQL_Grouped template.

    Now you need to set it up. In some cases you will know in advance the principal's login name (account) that needs to be monitored, and in some (like in mine), you will not. But it is VERY helpful to monitor just a particular account, to minimize the amount of results returned. So if you know it, you can go to the Events Selection tab, then click the Column Filters button, which brings up a dialog where you key in the account being monitored, without any mask (or wildcard). If you do not know the principal name, then you will need to poke around and look for things like a config file where (typically!) the connection string is fully exposed. That was the case in my situation: the application had an app.config (XML) file with the connection string in it, not encrypted. This made my endeavor very easy. So after I entered the account to monitor, I clicked the Run button and also started my black-box application. Voilà - in under a minute I had the SQL statement captured.

  • What OpenGL version(s) to learn and/or use?

    - by zuko
    So, I'm new to OpenGL... I have general knowledge of game programming but little practical experience. I've been looking into various articles and books and trying to dive into OpenGL, but I've found the various versions and old vs. new ways of doing things confusing.

    I guess my first question is: does anyone know some figures about the percentages of gamers that can run each version of OpenGL? What's the market share like for 2.x, 3.x, 4.x? I looked into the requirements for Half-Life 2, since I know Valve updated it with OpenGL to run on Mac and I know they usually try to hit a very wide user base, and they say a minimum of a GeForce 8 Series. I looked at the 8800 GT on Nvidia's website and it listed support for OpenGL 2.1, which - maybe I'm wrong - sounds ancient to me, since there's already 4.x. Then I looked up a driver for the 8800 GT and it says it supports 4.2! A bit of a discrepancy there, lol. I've also read things like XP only supporting up to a certain version, or OS X only supporting 3.2, and all kinds of other things. Overall, I'm just confused as to how much support there is for the various versions and which version to learn/use.

    I'm also looking for learning resources. My search results thus far have pointed me to the OpenGL SuperBible. The 4th edition has great reviews on Amazon, but it teaches 2.1. The 5th edition teaches 3.3, and there are a couple of things in the reviews that mention the 4th edition is better and that the 5th edition doesn't properly teach the new features. Basically, even within the learning material I'm seeing discrepancies, and I just don't even know where to start. From what I understand, 3.x started a whole new way of doing things, and I've read from various articles and reviews that you want to "stay away from deprecated features like glBegin(), glEnd()", yet a lot of books and tutorials I've seen use that method. I've seen people saying that, basically, the new way of doing stuff is more complicated, yet the old way is bad.

    Just a side note: personally, I know I still have a lot to learn beforehand, but I'm interested in tessellation, so I guess that factors into it as well, because, as far as I understand, that's only in 4.x? [Just BTW, my desktop supports 4.2.]
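    One concrete starting point, whatever version you settle on, is to query what your own driver actually exposes. A minimal sketch with PyOpenGL (assuming GLUT is available to create the required context; any windowing library would do, and note that a default context usually reports the highest compatibility-profile version the driver supports):

        import sys
        from OpenGL.GL import glGetString, GL_VERSION, GL_SHADING_LANGUAGE_VERSION
        from OpenGL.GLUT import glutInit, glutInitDisplayMode, glutCreateWindow, GLUT_RGBA

        glutInit(sys.argv)
        glutInitDisplayMode(GLUT_RGBA)
        glutCreateWindow(b"version probe")   # glGetString needs a live GL context

        print("GL  :", glGetString(GL_VERSION).decode())
        print("GLSL:", glGetString(GL_SHADING_LANGUAGE_VERSION).decode())

    Worth keeping in mind for the 2.1-vs-4.2 confusion above: the version a driver package advertises is not necessarily what an older card will actually report at runtime.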

  • Are there software options (preferably .NET) for doing distance and speed analysis of footballers moving on video?

    - by Anonymous Type
    Edited question for clarity. Thanks for the feedback so far - very insightful. I'm not sure how far along this part of the software community is, and what libraries, if any, exist for me to leverage. Here's what I'm trying to do.

    Problem: Take an existing video of a game of rugby league. The rugby league field is 100 metres long and 70 metres wide, and has white line markings every 10 metres running across the width of the field, as well as along the sidelines. Each side has 13 players on the field. Players on each team have identical jerseys that normally contrast strongly against the background colours (the green/brown field), the referee's colour (usually yellow) and the designated water runner (orange). All players have a unique number in thick white lettering on their backs for identification. Video is taken with a high-definition camera. Currently only one camera is used (2D), and the existing video does not contain a foreground object of fixed spatial dimensions (as suggested in one answer for comparison measurements; however, I could add this to future filming sessions if it is worthwhile). The players do not run in a straight line 50% of the time, but will go sideways or on a diagonal to play the ball. The distance measured always starts from the spot of the previous "tackle" and ends where the player stops forward movement. It is not always possible to determine a player's number from the video (facing the other direction, sunlight, others standing in the way of the camera), but this isn't important, as the software could allow manual inputting of unknown "runs" at a later point, after analysis.

    Goal: determine the distance between two points (i.e. where the player started his "run" and where he finished it). I'm guessing that this would be quite doable if I manually marked the start and end points in the video. But how would I use landmarks in the background to determine the distance (assuming the person taking the video has kept it from jerking around)?

    Question: Do software packages or libraries exist that are specialised enough to assist with writing analysis software to determine a sportsperson's distance travelled, based on video taken of the performance?
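    On the "use landmarks in the background" idea: with four known reference points (for example, where two of the 10 m lines meet the two sidelines), a planar homography maps pixels to field metres. A hedged sketch with OpenCV in Python - all pixel coordinates below are invented, and it assumes a static camera (or a homography recomputed per frame):

        import numpy as np
        import cv2

        # Four hypothetical pixel positions of known field landmarks...
        image_pts = np.float32([[312, 540], [1560, 548], [402, 180], [1385, 178]])
        # ...and the same four points in field coordinates (metres):
        # the 40 m and 50 m lines where they meet each sideline.
        field_pts = np.float32([[40, 0], [40, 70], [50, 0], [50, 70]])

        H = cv2.getPerspectiveTransform(image_pts, field_pts)

        def to_field(px, py):
            pt = np.float32([[[px, py]]])            # shape (1, 1, 2), as OpenCV expects
            return cv2.perspectiveTransform(pt, H)[0, 0]

        # Mark the start and end of a "run" in the video, then:
        start, end = to_field(640, 360), to_field(900, 410)
        print("metres gained:", float(np.linalg.norm(end - start)))

    The mapping is only exact for points on the ground plane (players' feet, not their heads), which suits tackle-to-tackle distance reasonably well.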

  • PMDB Block Size Choice

    - by Brian Diehl
    Choosing a block size for the P6 PMDB database is not a difficult task. In fact, taking the default of 8k is going to be just fine. Block size is one of those things that is always hotly debated; everyone has their personal preference and can cite plenty of good reasons for their choice. To add to the confusion, Oracle supports multiple block sizes within the same instance. So how to decide, and what is the justification?

    Like most OLTP systems, Oracle Primavera P6 has a wide variety of data. A typical table's average row size may be less than 50 bytes or upwards of 500 bytes. There are also several tables with BLOB types, but the LOB data tends not to be very large. It is likely that no single block size would be perfect for every table. So how to choose? My preference is for the 8k (8192-byte) block size. It is a good compromise that is not too small for the wider rows, yet not too big for the thin rows. It is also important to remember that database blocks are the smallest unit of change and caching, and I prefer to have more individual "working units" in my database. For an instance with 4gb of buffer cache, an 8k block will provide 524,288 blocks of cache.

    The following SQL*Plus script returns the average, median, min, and max rows per block:

        column "AVG(CNT)" format 999.99
        set verify off
        select avg(cnt), median(cnt), min(cnt), max(cnt), count(*)
        from (
            select dbms_rowid.ROWID_RELATIVE_FNO(rowid)
                 , dbms_rowid.ROWID_BLOCK_NUMBER(rowid)
                 , count(*) cnt
            from &tab
            group by dbms_rowid.ROWID_RELATIVE_FNO(rowid)
                   , dbms_rowid.ROWID_BLOCK_NUMBER(rowid)
        )

    Running this for the TASK table, I get this result on a database with an 8k block size. Each activity, on average, has about 19 rows per block.

        Enter value for tab: task

        AVG(CNT) MEDIAN(CNT)   MIN(CNT)   MAX(CNT)   COUNT(*)
        -------- ----------- ---------- ---------- ----------
           18.72          19          3         28     415917

    I recommend an 8k block size for the P6 transactional database. All of our internal performance and scalability tests are done with this block size. This does not mean that other block sizes will not work; instead, like many other parameters, this is the safest choice.
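    As a quick sanity check of the "working units" arithmetic above (a throwaway Python calculation, nothing Oracle-specific):

        # Blocks of buffer cache at each candidate block size, for a 4 GB cache.
        cache_bytes = 4 * 1024**3
        for block in (2048, 4096, 8192, 16384):
            print(f"{block:>5}-byte blocks: {cache_bytes // block:>9,} cache blocks")
        # 8192-byte blocks -> 524,288, matching the figure quoted above.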

  • Is it unusual for a small company (15 developers) not to use managed source/version control?

    - by LordScree
    It's not really a technical question, but there are several other questions here about source control and best practice. The company I work for (which will remain anonymous) uses a network share to host its source code and released code. It's the responsibility of the developer or manager to manually move source code to the correct folder depending on whether it's been released and what version it is and stuff. We have various spreadsheets dotted around where we record file names and versions and what's changed, and some teams also put details of different versions at the top of each file. Each team (2-3 teams) seems to do this differently within the company. As you can imagine, it's an organised mess - organised, because the "right people" know where their stuff is, but a mess because it's all different and it relies on people remembering what to do at any one time. One good thing is that everything is backed up on a nightly basis and kept indefinitely, so if mistakes are made, snapshots can be recovered.

    I've been trying to push for some kind of managed source control for a while, but I can't seem to get enough support for it within the company. My main arguments are:

    - We're currently vulnerable; at any point someone could forget to do one of the many release actions we have to do, which could mean whole versions are not stored correctly. It could take hours or even days to piece a version back together if necessary.
    - We're developing new features along with bug fixes, and often have to delay the release of one or the other because some work has not been completed yet. We also have to force customers to take versions that include new features even if they just want a bug fix, because there's only really one version we're all working on.
    - We're experiencing problems with Visual Studio because multiple developers are using the same projects at the same time (not the same files, but it's still causing problems).
    - There are only 15 developers, but we all do stuff differently; wouldn't it be better to have a standard company-wide approach we all have to follow?

    My questions are:

    1. Is it normal for a group of this size not to have source control?
    2. I have so far been given only vague reasons for not having source control - what reasons would you suggest could be valid for not implementing source control, given the information above?
    3. Are there any more reasons for source control that I could add to my arsenal?

    I'm asking mainly to get a feel for why I have had so much resistance, so please answer honestly. I'll give the answer to the person I believe has taken the most balanced approach and has answered all three questions. Thanks in advance.

  • Down to the Wire - Yet More Solaris Things to See at OpenWorld (and JavaOne!)

    - by Larry Wake
    San Francisco is bracing for the annual invasion. The airport's jammed, the tweets are flying, and the numbers are crazy: more than 50,000 attendees and 2,500+ sessions, taking over Moscone Convention Center, two streets, Union Square, and seemingly every hotel in town (98,000 hotel room nights). So yeah, it's busy. And it's not just OpenWorld - we've also got JavaOne, MySQL Connect, and four other sub-events going on as well. Speaking of JavaOne, you can find Solaris-related activity there, too - I've highlighted one hands-on lab below. Here's a last pre-event roundup of activities for consideration; enjoy the show(s)! (Remember, Schedule Builder is your friend; use it with the session numbers below to register.)

    Monday, October 1st:

    3:15 PM - General Session: Accelerate Your Business with the Oracle Hardware Advantage (GEN9691, Moscone North Hall D)
    John Fowler, head of Oracle's Systems organization, will talk about Oracle hardware technology and how it's co-engineered with other key technologies, including Oracle Solaris.

    Tuesday, October 2nd:

    10:15 AM - Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC (CON4431, Moscone South 270)
    Get the birds-eye lowdown (whatever that means) on how U.S. Cellular built its Infrastructure as a Service (IaaS) cloud delivery platform with Oracle's SPARC T4 servers, Oracle Solaris 11, Oracle Solaris Cluster 4, and Oracle VM Server for SPARC. The session covers the high-level design, business case made, implementation details, and lessons learned.

    11:45 AM - Oracle Solaris 11 Panel: Insights and Directions from Oracle Solaris Core Engineering (CON8790, Moscone South 252)
    This has been one of the livelier Solaris-related sessions in years past (and I'm not saying that just because I get to moderate it this year). A panel of core engineers responsible for a wide range of key Solaris technologies will talk about some of the interesting work they've been doing - but mostly we keep time open for the panel to take questions from attendees, because that's the fun part.

    Wednesday, October 3rd:

    10:00 AM - Tracing Your Java Application Tuning on Oracle Solaris with DTrace (HOL10214, Hilton San Francisco, Franciscan A/B/C/D)
    This JavaOne hands-on lab will show how to use the DTrace framework to dynamically trace your Java applications on Oracle Solaris and uncover new tuning opportunities.

    Thursday, October 4th:

    12:45 PM - Oracle Solaris 11: Optimized for Oracle Database, Oracle WebLogic Server, and Java (CON8800, Moscone South 252)
    Explore how Oracle Solaris 11 has been built to be the best platform for the cloud and enterprise applications, with built-in optimizations to improve performance and deliver unique functionality with Oracle Database, Oracle WebLogic Server, and Java.

  • Windows Azure Directory Sync Generic Failure

    - by Armand
    OK, so I have a domain that I want to sync to Office 365, but when I start the Windows Azure Active Directory Sync tool Configuration Wizard I get an error with the following details:

        System.Management.ManagementException: Generic failure
           at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
           at System.Management.ManagementObjectCollection.ManagementObjectEnumerator.MoveNext()
           at Microsoft.Online.DirSync.Common.MiisAction.GetTargetMA()
           at Microsoft.Online.DirSync.Common.MiisAction.IsSyncInProgress()
           at Microsoft.Online.DirSync.Common.PrerequisiteChecks.ThrowIfSyncInProgress()
           at Microsoft.Online.DirSync.UI.IntroductionWizardPage.PrerequisiteValidation()
           at Microsoft.Online.DirSync.UI.IntroductionWizardPage.OnLoad(EventArgs e)
           at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
           at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
           at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
           at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
           at System.Windows.Forms.Control.CreateControl()
           at System.Windows.Forms.Control.WmShowWindow(Message& m)
           at System.Windows.Forms.Control.WndProc(Message& m)
           at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
           at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)

    I have searched far and wide to no avail; this happens before I can even enter any details. A few notes:

    - The server is not the domain controller.
    - SharePoint 2013 is installed on this server.
    - The account I log in with and run the application with is a domain and enterprise admin.
    - I right-click and run as administrator when I start the application.

    So when I click Continue on the error and go through the steps, I get one of two scenarios, which alternate at no predictable rate:

    1. I just get an error, "generic failure".
    2. I get an error, "Cannot start service MSOnlineSyncScheduler on computer '.'."

    Any help?

  • PowerShell Import DnsShell Module

    - by krousemw
    So here's the list of available modules in this directory. As you can see, DnsShell is there.

        PS C:\windows\system32> Get-Module -ListAvailable

            Directory: C:\windows\system32\WindowsPowerShell\v1.0\Modules

        ModuleType Name                               ExportedCommands
        ---------- ----                               ----------------
        Manifest   ActiveDirectory                    {Get-ADRootDSE, New-ADObject, Rename-ADObject, Move-ADObject...}
        Manifest   AppLocker                          {Set-AppLockerPolicy, Get-AppLockerPolicy, Test-AppLockerPolicy, Get-AppLo...
        Manifest   BitsTransfer                       {Add-BitsFile, Remove-BitsTransfer, Complete-BitsTransfer, Get-BitsTransfe...
        Manifest   CimCmdlets                         {Get-CimAssociatedInstance, Get-CimClass, Get-CimInstance, Get-CimSession...}
        Binary     DnsShell
        Script     ISE                                {New-IseSnippet, Import-IseSnippet, Get-IseSnippet}
        Manifest   Microsoft.PowerShell.Diagnostics   {Get-WinEvent, Get-Counter, Import-Counter, Export-Counter...}
        Manifest   Microsoft.PowerShell.Host          {Start-Transcript, Stop-Transcript}
        Manifest   Microsoft.PowerShell.Management    {Add-Content, Clear-Content, Clear-ItemProperty, Join-Path...}
        Manifest   Microsoft.PowerShell.Security      {Get-Acl, Set-Acl, Get-PfxCertificate, Get-Credential...}
        Manifest   Microsoft.PowerShell.Utility       {Format-List, Format-Custom, Format-Table, Format-Wide...}
        Manifest   Microsoft.WSMan.Management         {Disable-WSManCredSSP, Enable-WSManCredSSP, Get-WSManCredSSP, Set-WSManQui...
        Script     PSDiagnostics                      {Disable-PSTrace, Disable-PSWSManCombinedTrace, Disable-WSManTrace, Enable...
        Binary     PSScheduledJob                     {New-JobTrigger, Add-JobTrigger, Remove-JobTrigger, Get-JobTrigger...}
        Manifest   PSWorkflow                         {New-PSWorkflowExecutionOption, New-PSWorkflowSession, nwsn}
        Manifest   PSWorkflowUtility                  Invoke-AsWorkflow
        Manifest   TroubleshootingPack                {Get-TroubleshootingPack, Invoke-TroubleshootingPack}

    When I run the command to Import-Module DnsShell, I get this error and I don't know why:

        PS C:\windows\system32> Import-Module DnsShell
        Import-Module : Could not load file or assembly
        'file:///C:\windows\system32\WindowsPowerShell\v1.0\Modules\DnsShell\DnsShell.dll' or one of its
        dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515)
        At line:1 char:1
        + Import-Module DnsShell
        + ~~~~~~~~~~~~~~~~~~~~~~
            + CategoryInfo          : NotSpecified: (:) [Import-Module], FileLoadException
            + FullyQualifiedErrorId : System.IO.FileLoadException,Microsoft.PowerShell.Commands.ImportModuleCommand

    Note: I would have posted pictures, but I needed a rep of at least 10 on Server Fault.

  • Samba: share home directories when home directories are symbolic links

    - by Owen
    I have set up a new Ubuntu 9.10 system for five users. In the system is a large LVM volume where all the data is to be kept. The main system disk is not for this purpose, so I attempted to move the home directories using:

        usermod -d /var/data/username -m

    And I started creating my shares for these new home locations. But then I thought: hey, Samba has built-in home directory sharing! So I enabled that, and it didn't work. The shares were not published to the network. Only the share for user 'owen' was published; his folder hadn't been moved.

    So I thought: maybe Samba home sharing only works for default home locations, so how about I move the home directories back to where they were, and then make them symlinks:

        root@boxenmkiv:/home# ls -l
        total 4
        lrwxrwxrwx 1 brett brett   25 2010-04-03 08:48 brett -> /var/data/brett/
        lrwxrwxrwx 1 carly carly   23 2010-04-03 08:48 carly -> /var/data/carly/
        lrwxrwxrwx 1 dave  dave    21 2010-04-03 08:48 dave -> /var/data/dave/
        lrwxrwxrwx 1 kate  kate    23 2010-04-03 08:47 kate -> /var/data/kate/
        drwxr-xr-x 4 owen  owen  4096 2010-04-03 08:44 owen

    Still no go. The only user's share which is published to the network is 'owen', who as you can see above has not had his home directory moved. I have also added the following to my smb.conf:

        [global]
        follow symlinks = yes
        wide symlinks = yes
        unix extensions = no

    With no luck. Am I going about this entirely the wrong way? Should I just give up and manually create shares for the users? Thanks in advance.

  • Intel Wireless 4965AGN not achieving N throughput when connected to an Airport Express N network

    - by BenA
    I have an Intel Wireless WiFi Link 4965AGN adaptor in my laptop (HP Pavillion dv2000 series) which is connecting to a 5Ghz-only 802.11n network provided by an Apple Airport Express. The network is using WPA2 encryption. My desktop is also connected the Airport, via a Linksys WUSB600N USB adaptor. Both are running with the latest drivers, and the Airport is running the latest firmware. The Airport is also configured to use wide channels. The problem I have is that I never get throughput above 4MB/s when transferring files between the two machines. Even a pessimistic calculation shows a 270Mbps network as being capable of transfer rates at well above 10MB/s. I'm pretty sure I've isolated the issue to being the Intel adaptor, as wiring the desktop to the AP, and using the Linksys adaptor on the laptop immediately yielded speeds limited by the 100MB/s ethernet connection. I know that 802.11n is still a draft standard, and so mixing kit from different manufacturers can easily lead to poor results, but I was just wondering if anybody else out there has had success with this Intel adaptor on an N network? Or even better, connecting it to an Airport Express? Can anybody give me any advice on how to troubleshoot this issue? I should also mention that the Airport Express doesn't allow you to manually specify channels when running in N mode, and that I've been able to rule out interference from other Wireless LANs by scanning. There aren't any other 5GHz networks in my area. All ideas welcome! Update: A while later, I've just updated to the most recent drivers for both the Intel chip in the laptop, and the USB adaptor. Unfortunately this hasn't improved things :(. If anybody has any advice it would be be gratefully received.

  • mdadm cron job sends email that cron has run

    - by Andrew
    I've got an Ubuntu 8.04 server using mdadm to create several RAID1 arrays. I created /etc/cron.hourly/mdadm as follows:

        #! /bin/sh
        set -e
        mdadm --monitor /dev/md0 /dev/md3 /dev/md4 --oneshot

    (Yes, the array numbers are not sequential, and I'm not using --scan because I have a degraded array that may or may not have been used as swap and that I can't delete, but I think that's a separate issue. If it's the underlying cause of this, I need to fix it.)

    mdadm sends me email (configured in /etc/mdadm/mdadm.conf) on DegradedArray etc. events. This is the desired behaviour. What is not desired, and I can't work out, is why cron is sending me (relatively pointless) emails, via an alias in /etc/aliases:

        From: root@<hostname> (Cron Daemon)
        To: root@<hostname>
        Subject: Cron <root@<hostname>> cd / && run-parts --report /etc/cron.hourly
        Content-Type: text/plain; charset=ANSI_X3.4-1968
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <LOGNAME=root>
        Message-Id: <id@hostname>
        Date: Fri, 7 May 2010 13:17:01 +0930 (CST)

        /etc/cron.hourly/mdadm: mdadm: Monitor using email address "<root_alias@domain>" from config file

    I've got a dozen other servers behaving correctly (mdadm sends email, cron doesn't) with identical /etc/crontab files:

        # /etc/crontab: system-wide crontab
        # <snip comments>
        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # m h dom mon dow user  command
        17 * * * *  root    cd / && run-parts --report /etc/cron.hourly
        # <snip anacron jobs>

    Should I simply remove the --report, or is there something else in my cron config somewhere causing this?

  • cset as non-root to set cpu affinity for running processes

    - by RaveTheTadpole
    I've been playing with cset to set cpu affinity for running processes. I'm recreating the built-in "shield" function manually with set and proc, to add some subsets for specific threads of my application. I have a bash script that is calling cset to create the sets, and move the correct threads to the correct sets. It works when run with sudo. Now I'd like to make this script executable by another user, who does not have sudo powers. I trust this user enough to be responsible with cset, but don't want to open up the wide powers of root. I thought that CAP_SYS_NICE -- which is needed for sched_setaffinity, which I just assume cset must use -- on the script would be sufficient, but that didn't work. I tried extending CAP_SYS_NICE to the cset program (which is a thin python wrapper for the cset python library). No dice. The output of cap_to_text on my CAP_SYS_NICE'd scripts is "=cap_ipc_lock,cap_sys_nice,cap_sys_resource+eip" (it has ipc_lock and sys_resource for other reasons; I think only sys_nice is relevant). Any ideas?
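    As a point of comparison while narrowing down which step actually needs the privilege (a sketch, not cset's implementation): the affinity syscall itself is unprivileged for processes you own, and Python exposes it directly on Linux. CAP_SYS_NICE only comes into play when retargeting processes owned by other users, while creating cpusets means writing under the cpuset filesystem, which is a separate permission question:

        import os

        os.sched_setaffinity(0, {0, 1})    # pin the current process to CPUs 0-1
        print(os.sched_getaffinity(0))     # -> {0, 1}

        # Retargeting someone else's pid, e.g. os.sched_setaffinity(1234, {2}),
        # is the call that wants CAP_SYS_NICE (or a matching uid).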

  • IPv6: Can't ping anything - "Operation not permitted"

    - by Matthew Iselin
    I've been working on getting IPv6 support into my network, and had everything working properly for a short while. The server is running Ubuntu Server 8.10. Now, however, whenever I attempt to do anything related to IPv6 on the server, I get "Operation not permitted". This is coming from things like wide-dhcpv6-client (when trying to get an IPv6 address from the ISP) and radvd - both log errors of this type into syslog. Even pinging the loopback interface fails:

        xxx@gordon:~$ ping6 ::1
        PING ::1(::1) 56 data bytes
        ping: sendmsg: Operation not permitted
        ping: sendmsg: Operation not permitted
        ping: sendmsg: Operation not permitted
        ^C
        --- ::1 ping statistics ---
        3 packets transmitted, 0 received, 100% packet loss, time 2014ms

        xxx@gordon:~$ sudo ping6 ::1
        sudo: unable to resolve host gordon
        PING ::1(::1) 56 data bytes
        ping: sendmsg: Operation not permitted
        ping: sendmsg: Operation not permitted
        ping: sendmsg: Operation not permitted
        ^C
        --- ::1 ping statistics ---
        3 packets transmitted, 0 received, 100% packet loss, time 2014ms

    As you can see, I have attempted pinging as root, as most of the material I've found on the internet points to a permission problem. However, that has not helped. Any hints to getting unstuck would be appreciated.

  • Distributed development staff needing a common IP range

    - by bakasan
    I work on a development staff that is geographically distributed, mostly throughout the state of CA, but several key members also must travel frequently. We rely quite heavily on a 3rd-party provider API for a great deal of our subsystems (can't get into who it is or what they do). The 3rd party, however, is quite stringent on network access and has no notion of a development sandbox: access is restricted to two or three IP addresses, and that's about it. Once we account for our production servers, that leaves us with an IP or two to spare for our dev team - which is still problematic, as people's home IPs change, people travel, we have more than two devs, etc. Wide IP blocks are not permitted by the 3rd party, nor will they allow dynamic-DNS-type services. There is no simple console to swap IPs on the fly either (e.g. if a dev's IP at home changes or they are on the road).

    As none of us are deep network experts, I'm wondering what our viable options are. Are there such things as 3rd-party-hosted VPNs? Generally I think of a VPN as a mechanism to gain access to a home office, but the notion would be a 3rd-party VPN that we'd all connect to, and we'd register this as an IP origin with our API provider. We've considered using Amazon EC2 to effectively host a dev environment for each dev and using that to connect. Amazon only gives you so many static IPs, however (I believe 5?), so this would only be a stop-gap solution until our team size outstrips our IP count at Amazon. Those were the only viable thoughts that I had, but again, I'm far from a networking guy. I tried searching for similar threads, but I'm not even sure I know the right vernacular to look around for.

  • VMWare Fusion: "No Permission to access this virtual machine"

    - by Craig Walker
    I had a VMware Fusion VM backed up on my home network file server (Ubuntu). I wanted to run it again, so I copied it back to my MacBook. When I tried to launch it in VMware, I got an error message:

        No permission to access this virtual machine.
        Configuration file: /Users/craig/WinXP Clean + Scanner.vmwarevm/WinXP Pro Test.vmx

    The permissions look fine to me:

    - The bundle directory is 777.
    - The bundle files (including the listed .vmx) are all 666.
    - User is craig (my current user); group is staff. I changed the group to wheel at the suggestion of this page, but that didn't help.
    - Finder shows read & write for craig, staff, and everyone on the bundle directory.
    - The bundle dir is also not locked.
    - Finder also shows rw and unlocked for the .vmx file.
    - The parent directory is also rw & unlocked.
    - A Disk Utility permissions check doesn't show any problems with any of the associated files.

    It sure looks like I should have wide-open access to run this VM; why is Fusion complaining?

  • Error installing Sony Remote Play

    - by Iszi Rory or Isznti
    I'm trying to install Remote Play software to connect my laptop to my PS3. I've found a guide with instructions which seems to be in fairly wide use (I found similar walk-throughs on numerous other sites) for running the software on a non-Vaio PC: Tech-Recipes: Playstation 3 - Use Remote Play on any Windows 7 PC. The setup essentially goes like this:

    1. Download the Remote Play software.
    2. Download the patch by NTAuthority.
    3. Install Remote Play as normal.
    4. Reboot.
    5. Extract the NTAuthority patch to the Remote Play program folder.
    6. Manually register the patched DLLs via CLI.
    7. Run the Remote Play software.

    Sadly, my problem comes early, at step 3. I had to use Google to find the software download, as the link from Tech-Recipes seems broken. I found the download on Sony's site here: Sony eSupport: Remote Play with PlayStation®3. After downloading and running the software, I hit "Next" at the welcome screen and "I Agree" at the EULA screen. After this, a popup informs me that Setup is checking my computer's information. Then, Setup terminates with this error: [screenshot of the error dialog]

    I'm running Windows 7 Ultimate x64. Is anyone familiar with this error in this software? Is there a way to work around it? Did I perhaps pick the wrong download from Sony's site?

  • Making OpenSSL work on PHP Windows 2008 server with FastCGI

    - by KacieHouser
    I have been researching all day. Here is what I have done:

    - In C:/PHP/php.ini and C:/PHP/php-cgi-fcgi.ini I have set extension_dir = "C:/PHP/ext" and uncommented extension=php_openssl.dll
    - I went to http://windows.php.net/download/ and got the thread-safe PHP 5.4 (5.4.8) version of the DLLs
    - In C:/PHP/ext I replaced php_openssl.dll with the one I downloaded
    - In System32 and SysWOW64 I added the following DLLs: ssleay.dll, libeay.dll
    - I restarted the IIS server in the Server Manager under Web Server, and stopped and started the World Wide Web Publishing Service

    That didn't work, so I tried the same thing with the unthreaded versions. I still get:

        Fatal error: Call to undefined function ftp_ssl_connect() in C:\inetpub\wwwroot\REMOVED_dev\save_data.php on line 5

    Here are related things from phpinfo():

        System: Windows NT DEV-WEB1 6.1 build 7601 (Windows Server 2008 R2 Standard Edition Service Pack 1) i586
        Compiler: MSVC9 (Visual C++ 2008)
        Architecture: x86
        Configure Command: cscript /nologo configure.js "--enable-snapshot-build" "--enable-debug-pack" "--disable-zts" "--disable-isapi" "--disable-nsapi" "--without-mssql" "--without-pdo-mssql" "--without-pi3web" "--with-pdo-oci=C:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8=C:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8-11g=C:\php-sdk\oracle\instantclient11\sdk,shared" "--with-enchant=shared" "--enable-object-out-dir=../obj/" "--enable-com-dotnet" "--with-mcrypt=static" "--disable-static-analyze" "--with-pgo"
        Server API: CGI/FastCGI
        Configuration File (php.ini) Path: C:\Windows
        Loaded Configuration File: C:\PHP\php-cgi-fcgi.ini
        Scan this dir for additional .ini files: (none)
        Additional .ini files parsed: (none)
        Registered PHP Streams: php, file, glob, data, http, ftp, zip, compress.zlib, compress.bzip2, https, ftps, sqlsrv, phar
        Registered Stream Socket Transports: tcp, udp, ssl, sslv3, sslv2, tls
        FTP support: enabled
        Protocols: dict, file, ftp, ftps, gopher, http, https, imap, imaps, ldap, pop3, pop3s, rtsp, scp, sftp, smtp, smtps, telnet, tftp
        openssl: OpenSSL support enabled
        OpenSSL Library Version: OpenSSL 0.9.8t 18 Jan 2012
        OpenSSL Header Version: OpenSSL 0.9.8x 10 May 2012

    What am I missing here?
