Search Results

Search found 3420 results on 137 pages for '3d vision'.


  • Is your team a high-performing team?

    As a child, I remember looking out of the car window as my father drove along the Interstate in Florida and seeing prisoners in bright orange jumpsuits, with prison guards keeping a watchful eye on them. The prisoners were part of a prison road gang. These road gangs were formed to help the state maintain its highway infrastructure; the prisoners' primary responsibility was to pick up trash and debris from the roadway. This is a prime example of a work group, or working group, as used by most prison systems in the United States.

    Work groups can be defined as a collection of individuals or entities working together to achieve a specific goal or accomplish a specific set of tasks. Typically, these groups are established for only a short period of time and are dissolved once the desired outcome has been achieved. More often than not, group members feel expendable, and some even dread being in the group at all. By contrast, "a team is a small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they are mutually accountable." (Katzenbach and Smith, 1993)

    So how do you determine that a team is a high-performing team? Three baseline criteria apply: consistently high-quality output, the promotion of personal growth and well-being for all team members, and, most importantly, the ability to learn and grow as a unit. A team can initially produce high-quality output without meeting all three criteria; however, this erodes over time, because team members who feel detached from the group or feel they are not growing will let the quality of the output decline.

    High-performing teams are similar to work groups in that both use a collection of individuals or entities to accomplish tasks. What distinguishes a high-performing team from a work group is its characteristics. High-performing teams share five core characteristics, and these are what separate a group from a team: purpose, performance measures, people with task and relationship skills, process, and preparation and practice.

    A high-performing team is much more than a work group, and it typically has a life cycle that can vary from team to team. The standard team life cycle consists of five states and is comparable to a human life cycle: Formulating, Storming, Normalizing, Performing, and Adjourning.

    The Formulating State begins when the team members are first defined and roles are assigned to all members. This initial stage is very important because it sets the tone for the team and can ultimately determine its success or failure. This stage also requires a strong leader, because team members are normally unclear about the specific roles, obstacles, and goals that may lie ahead of them. Finally, this stage is where most team members first meet one another, unless they already know each other.

    The Storming State normally arrives directly after the formation of a new team, because there are still many unknowns within the newly formed assembly.
    As a general rule, most of the parties involved are still getting used to the workload, the pace of work, the deadlines, and the validity of the various tasks the group needs to perform. In this state everything is questioned, because there are so many unknowns. Commonly questioned items include the credentials of others on the team, the actual validity of the project, and the leadership abilities of the team leader. The interactions between animals meeting for the first time illustrate this well. Consider two people walking directly toward each other with their dogs: the dogs automatically enter the Storming State because they do not know each other. Typically, they attempt to establish which is dominant through play or fighting, depending on how they interact. Once dominance has been established and accepted by both dogs, they will either want to play or leave, depending on how they interacted and on other environmental variables.

    Once the Storming State has run its course, the Normalizing State takes over. A team enters this state once all the questions of the Storming State have been answered and the team has been tested by a few tasks or projects. Typically, participants are filled with energy and camaraderie and are strongly aligned with team goals and objectives. A high school football team at the start of its season is a perfect example of the Normalizing State: the player positions have been assigned, the depth chart has been filled, and everyone is focused on winning each game. All of the players encourage and expect each other to perform to the best of their abilities and are united by competition from other teams.

    The Performing State is achieved when the team's history, working habits, and culture solidify it as one working unit. In this state, team members can anticipate one another's behaviors, attitudes, and reactions, and challenges are seen as opportunities rather than problems. Additionally, each team member knows their own role in the team's success as well as the roles of others. This is the most productive state of a group and is where all the time invested in working together really pays off. Watch an Olympic figure skating pair and you can easily see how the time spent working together benefits their performance: they skate as one unit even though the team is comprised of two skaters. Each skater has completely memorized both their own routine and their partner's, which allows them to anticipate each other's moves on the ice and makes their skating look effortless.

    The final state of a team is the Adjourning State. This is where the accomplishments of the team and of each individual member are recognized. This state also allows for reflection on the interactions between team members, the work accomplished, and the challenges that were faced. Finally, the team celebrates the challenges it has faced and overcome as a unit.

    In today's workplace, teams are divided into two types: co-located and distributed teams. Co-located teams are defined as the traditional group of people working together in an office, according to Andy Singleton of Assembla. This traditional type of team has dominated business in the past due to inadequate technology, which forced workers to interact primarily via face-to-face meetings. Team meetings are typically led by the person with the highest status in the company.
    Having personally participated in meetings of this type, I have seen a select few of the team members dominate the flow of communication, which reduces the input of others in group discussions. Because a select few individuals dominate, discussions are skewed in favor of those who communicate the most in meetings. In addition, team members might not give their full opinions on a topic, in part so as not to offend or create controversy, which can shift the decisions made in meetings toward the opinions of the dominating team members.

    Distributed teams are, by definition, spread across an area or subdivided into separate sections, and that is exactly what they are when compared to a more traditional team. It is commonplace for distributed teams to have members across town, in the next state, across the country, and, with the advances in technology over the last 20 years, even across the world. These teams allow for more diversity than co-located teams because they allow for more flexibility regarding location. A team could consist of a 30-year-old Italian project manager from New York, a 50-year-old Hispanic woman from California, and a collection of programmers from India, because technology allows them to communicate as if they were standing next to one another. Distributed team members tend to consult with more team members before making decisions than traditional teams do, and they take longer to reach decisions because of differences in time zones and cultural events. However, team members feel more empowered to speak out when they do not agree with the team and to notify others of potential issues with the work the team is doing.

    Virtual teams, a subset of the distributed team type, are changing organizational strategies because a team can now, in essence, work 24 hours a day by utilizing employees in various time zones and locations. A primary example is customer service departments: a company can have multiple call centers spread across multiple time zones, allowing it to appear open 24 hours a day while all of its employees work from 9 AM to 5 PM. Virtual teams also allow human resources departments to go after the best talent for the company regardless of where a potential employee lives, because that person will be part of a virtual team; all that is needed is the proper technology to let everyone communicate. In addition to allowing employees to work from home, the company saves space and resources by not having to provide a desk for every team member. In fact, team members who come into the office only occasionally can share one desk among several people, which is definitely a cost-cutting plus given the current state of the economy.

    One thing that can turn a team into a high-performing team is leadership. High-performing team leaders need to invest in ongoing personal development; provide team members with the direction, structure, and resources they need to accomplish their work; make the right interventions at the right time; and help the team manage boundaries between itself and the various external parties involved in its work. A team leader needs to invest in ongoing personal development in order to manage the team effectively. People say that attitude is everything; this is very true of leaders and leadership.
    A team takes on the attitudes and behaviors of its leaders, which can potentially harm the team and its output. Leaders must concentrate on self-awareness and on understanding their team's group dynamics in order to fully understand how to lead. In addition, continually learning new leadership techniques from other effective leaders is very beneficial.

    Providing team members with the direction, structure, and resources they need to accomplish their work sounds easy, but it is not. Leaders need to communicate effectively with their team about how its work helps the company reach its organizational vision. Conversely, the leader needs to allow the team to work autonomously within specific guidelines to turn the company's vision into reality. That being said, the team must be staffed appropriately for the size and complexity of its tasks. These tasks should be clear and meaningful to the company's objectives, and they should allow feedback to be exchanged between the leader and each team member, and between the leader and upper management. If the team is properly staffed and has a clear and full understanding of what is to be done, the company must also supply the workers with the proper tools to achieve the tasks they are asked to do; no one should be asked to dig a hole without being given a shovel. Finally, leaders must reward their team members for the accomplishments they achieve. Rewards can range from a simple congratulatory email, to a party celebrating the completion of a large project, to monetary rewards.

    Managing boundaries is very important for team leaders because boundary issues can alter the attitudes of team members and add undue stress, which forces them to lose focus on the tasks at hand. Team leaders should promote communication between team members so that burdens are shared and solutions can be derived from the opinions of multiple sources. This also reinforces team camaraderie and working as a unit. Team leaders must manage the type and timing of interventions so as not to create an even bigger mess within the team. Poorly timed interventions can deflate team members and make them question themselves, which can in turn require further interventions by the team leader. Typically, the best time for interventions is when the team is just starting to form, so that unproductive behaviors are removed early and the team can retain focus on its agenda. If an intervention is executed effectively, the team will feel energized about the work it is doing, communication and interaction within the group will improve, and morale will rise overall.

    High-performing teams are very important to organizations because they consistently produce high-quality output and develop a collective purpose for their work. This drive to succeed allows team members to apply their specific talents and grow in those areas. In addition, these team members usually take on a sense of ownership of their projects and come to feel that their fellow team members are irreplaceable.

    References:
    Katzenbach, J.R. & Smith, D.K. (1993). The Wisdom of Teams: Creating the High-Performance Organization. Boston: Harvard Business School Press.
    Singleton, A. "Three ways to organize your team: co-located, outsourced, or global." http://blog.assembla.com/assemblablog/tabid/12618/bid/3127/Three-ways-to-organize-your-team-co-located-outsourced-or-global.aspx

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
    (This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.) Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.) Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful, describing whether the content is found in a table or a chart, for example.)

    This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and it grows in complexity in proportion to the need to simplify BI for users. It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and who even acknowledged that users will get the data they want even if it means manually entering it into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations.

    Collaborative BI

    I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management system from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system, though I didn't know that's what I was doing at the time.
    Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step, and who was at the BI Conference last week, where I got to reminisce with him for a bit), and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools would catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective on the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases demonstrated was linking from information to a person because, as Donald put it, "People don't trust data, they trust people."

    The Microsoft BI Stack in General

    A question I hear all the time from students when I'm teaching is how to know which tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but I saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, that ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services, which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years. Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the ecosystem around the Microsoft BI stack provides innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?"

    Expo Hall

    I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with the people I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught my eye, however, that I'll share here.
    Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from the more feature-rich standard or professional editions.

    Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught my eye, and I think it's a really great move: when you buy Microsoft Press ebooks through the O'Reilly web site, you can receive them in any (or all) of the following formats, where possible: PDF, epub, .mobi for Kindle, and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I find myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind!

    Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or because multiple sessions I wanted to see were running concurrently. Fortunately, many of the sessions are available for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation.

    Read the article


  • bind9 DNS Ubuntu names pingable on server, but not on Windows machines?

    - by leeand00
    I set up a DNS server today on Ubuntu, following this tutorial. My intent was to set up name resolution on the private LAN within a single zone (nothing fancy, I just want name resolution). I've tested the setup on the DNS server machine itself, and I can ping all the machines listed in the configuration file. I've also configured the Windows machines on my network, and for some reason they are incapable of pinging by name as is possible on the DNS server itself. I've tried running nslookup on the Windows DNS clients, and I receive an error mentioning the address of the DNS server. DNS forwarding works fine; I'm not having any trouble accessing the internet. The problem lies only in resolving names within the private LAN. Here are my configuration files.

    /etc/bind/named.conf.options:

        options {
            directory "/var/cache/bind";
            // If there is a firewall between you and nameservers you want
            // to talk to, you may need to fix the firewall to allow multiple
            // ports to talk. See http://www.kb.cert.org/vuls/id/800113
            // If your ISP provided one or more IP addresses for stable
            // nameservers, you probably want to use them as forwarders.
            forwarders {
                8.8.8.8;
                8.8.8.4;
                74.242.0.12;
                //68.87.76.178;
            };
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

    /etc/bind/named.conf.local:

        zone "leerdomain.local" {
            type master;
            file "/etc/bind/zones/leerdomain.local.db";
            notify no;
        };

        zone "2.168.192.in-addr.arpa" {
            type master;
            file "/etc/bind/zones/rev.2.168.192.in-addr.arpa";
            notify no;
        };

    Lookup zone, /etc/bind/zones/leerdomain.local.db (validates fine with named-checkzone when validating zone leerdomain.local):

        $TTL 3D
        @       IN      SOA     ns.leerdomain.local. admin.leerdomain.local. (
                                2010011001
                                28800
                                3600
                                604800
                                38400 );
        leerdomain.local.       IN      NS      ns.leerdomain.local.
        ns      IN      A       192.168.2.9
        asus    IN      A       192.168.2.254
        www     IN      CNAME   asus
        vaio    IN      A       192.168.2.253
        iptouch IN      A       192.168.2.252
        toshiba IN      A       192.168.2.251
        gw      IN      A       192.168.2.1
                TXT     "Network Gateway"

    Reverse lookup zone, /etc/bind/zones/rev.2.168.192.in-addr.arpa:

        $TTL 3D
        @       IN      SOA     ns.leerdomain.local. admin.leerdomain.local. (
                                201001101
                                28800
                                604800
                                604800
                                86400 )
                IN      NS      ns.leerdomain.local.
        1       IN      PTR     gw.leerdomain.local.
        254     IN      PTR     asus.leerdomain.local.
        253     IN      PTR     vaio.leerdomain.local.
        252     IN      PTR     iptouch.leerdomain.local.
        251     IN      PTR     toshiba.leerdomain.local.

    This file does not validate with named-checkzone when validating zone leerdomain.local; it gives the errors:

        zone leerdomain.local/IN: NS 'ns.leerdomain.local' has no address records (A or AAAA)
        zone leerdomain.local/IN: not loaded due to errors.

    Despite not validating, bind9 starts without errors in /var/log/syslog. I've also configured a few of the Windows machines on my network with the static IPs specified in the lookup and reverse lookup config files. Using nslookup yields the following results:

        C:\Users\leeand00>nslookup ns
        Server:  UnKnown
        Address:  192.168.2.9

        *** UnKnown can't find ns: Non-existent domain

        C:\Users\leeand00>nslookup gw
        Server:  UnKnown
        Address:  192.168.2.9

        Name:    gw.

    Additionally, pinging by name also fails on machines other than the DNS server. Is there something wrong with the configuration of either the nameserver or the Windows boxes that is keeping me from accessing other machines by name?
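    A note on the named-checkzone failure above: named-checkzone takes the zone name and the zone file as separate arguments, and validating the reverse zone file against the name leerdomain.local would reproduce exactly the "no address records" error quoted, since ns.leerdomain.local has no A record inside the reverse zone. A minimal sketch of the two checks, assuming the file paths shown in the question:

        # forward zone
        named-checkzone leerdomain.local /etc/bind/zones/leerdomain.local.db

        # reverse zone: pass the reverse zone name, not leerdomain.local
        named-checkzone 2.168.192.in-addr.arpa /etc/bind/zones/rev.2.168.192.in-addr.arpa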

    Read the article

  • Convert from apache rewrite to nginx

    - by Linux Intel
    I want to convert these Apache rewrite module rules to nginx:

        RewriteCond %{QUERY_STRING} mosConfig_[a-zA-Z_]{1,21}(=|\%3D) [OR]
        RewriteCond %{QUERY_STRING} base64_encode.*\(.*\) [OR]
        RewriteCond %{QUERY_STRING} (\<|%3C).*script.*(\>|%3E) [NC,OR]
        RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
        RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
        RewriteCond %{QUERY_STRING} SELECT(=|\[|\%[0-9A-Z]{0,2}) [OR]
        RewriteCond %{QUERY_STRING} UNION(=|\[|\%[0-9A-Z]{0,2}) [OR]
        RewriteCond %{QUERY_STRING} UPDATE(=|\[|\%[0-9A-Z]{0,2}) [OR]
        RewriteRule ^([^.]*)/?$ index.php [L]

        RewriteRule ^domain/trial/cms$ index/index.php?%{QUERY_STRING} [L]

        RewriteCond %{HTTP:Range} ([a-z]+) [NC]
        RewriteRule ([0-9_\-]+)flv$ http://www.domain.com [R,L]

        RewriteCond %{ENV:byte-ranges-specifier} !^$
        RewriteRule ([0-9_\-]+)flv$ http://www.domain.com [R,L]

        RewriteCond %{HTTP_USER_AGENT} !^Mozilla/5 [NC]
        RewriteCond %{HTTP_USER_AGENT} !^Mozilla/4 [NC]
        RewriteCond %{HTTP_USER_AGENT} !^Opera [NC]
        RewriteRule ([0-9_\-]+)flv$ http://www.domain.com [R,L]

        RewriteRule ^$ index/index.php?%{QUERY_STRING} [L]

        RewriteCond %{SCRIPT_FILENAME} !sss.php [NC]
        RewriteCond %{SCRIPT_FILENAME} !m-administrator [NC]
        RewriteRule ^([^/^.]*)$ sss.php?encrypted=$1&%{QUERY_STRING} [L]

        RewriteCond %{SCRIPT_FILENAME} !sss.php [NC]
        RewriteCond %{SCRIPT_FILENAME} !m-administrator [NC]
        RewriteRule ^([^/^.]*)/([^/^.]*)$ sss.php?tab=$1&page=$2&%{QUERY_STRING} [L]

        RewriteCond %{SCRIPT_FILENAME} !sss.php [NC]
        RewriteCond %{SCRIPT_FILENAME} !m-administrator [NC]
        RewriteRule ^([^/^.]*)/([^/^.]*)/([^.]*)$ sss.php?tab=$1&page=$2&queryString=$3&%{QUERY_STRING} [L]

        RewriteCond %{SCRIPT_FILENAME} !sss.php [NC]
        RewriteCond %{SCRIPT_FILENAME} !security.php [NC]
        RewriteRule ^([^/]*)$ index/$1?%{QUERY_STRING} [L]

    I tried to convert them with online tools such as http://www.anilcetin.com/convert-apache-htaccess-to-nginx/, but the conversion didn't come out correctly.
    The conversion output is:

        if ($args ~ "mosConfig_[a-zA-Z_]{1,21}(=|%3D)"){ set $rule_0 1; }
        if ($args ~ "base64_encode.*(.*)"){ set $rule_0 1; }
        if ($args ~* "(<|%3C).*script.*(>|%3E)"){ set $rule_0 1; }
        if ($args ~ "GLOBALS(=|[|%[0-9A-Z]{0,2})"){ set $rule_0 1; }
        if ($args ~ "_REQUEST(=|[|%[0-9A-Z]{0,2})"){ set $rule_0 1; }
        if ($args ~ "SELECT(=|[|%[0-9A-Z]{0,2})"){ set $rule_0 1; }
        if ($args ~ "UNION(=|[|%[0-9A-Z]{0,2})"){ set $rule_0 1; }
        if ($args ~ "UPDATE(=|[|%[0-9A-Z]{0,2})"){ set $rule_0 1; }
        if ($rule_0 = "1"){ rewrite ^/([^.]*)/?$ /index.php last; }

        if ($rule_1 = ""){ rewrite ^/domain/trial/cms$ /index/index.php?$args last; }

        if ($http_range ~* "([a-z]+)"){ set $rule_2 1$rule_2; }
        if ($rule_2 = "1"){ rewrite /([0-9_-]+)flv$ http://www.domain.com redirect; }

        #ignored: condition 0
        if ($rule_3 = "1"){ rewrite /([0-9_-]+)flv$ http://www.domain.com redirect; }

        if ($http_user_agent !~* "^Mozilla/5"){ set $rule_4 1$rule_4; }
        if ($http_user_agent !~* "^Mozilla/4"){ set $rule_4 2$rule_4; }
        if ($http_user_agent !~* "^Opera"){ set $rule_4 3$rule_4; }
        if ($rule_4 = "321"){ rewrite /([0-9_-]+)flv$ http://www.domain.com redirect; }

        if ($rule_5 = ""){ rewrite ^/$ /index/index.php?$args last; }

        if ($uri !~* "sss.php"){ set $rule_6 1$rule_6; }
        if ($uri !~* "m-administrator"){ set $rule_6 2$rule_6; }
        if ($rule_6 = "21"){ rewrite ^/([^/^.]*)$ /sss.php?encrypted=$1&$args last; }

        if ($uri !~* "sss.php"){ set $rule_7 1$rule_7; }
        if ($uri !~* "m-administrator"){ set $rule_7 2$rule_7; }
        if ($rule_7 = "21"){ rewrite ^/([^/^.]*)/([^/^.]*)$ /sss.php?tab=$1&page=$2&$args last; }

        if ($uri !~* "sss.php"){ set $rule_8 1$rule_8; }
        if ($uri !~* "m-administrator"){ set $rule_8 2$rule_8; }
        if ($rule_8 = "21"){ rewrite ^/([^/^.]*)/([^/^.]*)/([^.]*)$ /sss.php?tab=$1&page=$2&queryString=$3&$args last; }

        if ($uri !~* "sss.php"){ set $rule_9 1$rule_9; }
        if ($uri !~* "security.php"){ set $rule_9 2$rule_9; }
        if ($rule_9 = "21"){ rewrite ^/([^/]*)$ /index/$1?$args last; }

    Please help me with a proper conversion for nginx so these rules work correctly.
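    For comparison, here is a minimal hand-written sketch of just the first rule group, assuming the intent is simply to route any query string matching one of those injection patterns to index.php. The combined regex, the location context, and making all the alternatives case-insensitive are assumptions, not a verified conversion of the full rule set:

        location / {
            # One case-insensitive regex in place of the chained RewriteCond [OR] lines
            if ($args ~* "mosConfig_[a-zA-Z_]{1,21}(=|%3D)|base64_encode.*\(.*\)|(<|%3C).*script.*(>|%3E)|(GLOBALS|_REQUEST|SELECT|UNION|UPDATE)(=|\[|%[0-9A-Z]{0,2})") {
                rewrite ^ /index.php last;
            }
        }

    Folding the alternatives into a single regex avoids the $rule_N flag variables entirely, which is the usual nginx idiom since there is no AND/OR chaining between if blocks.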

    Read the article

  • Week in Geek: 4chan Falls Victim to DDoS Attack Edition

    - by Asian Angel
    This week we learned how to tweak the low battery action on a Windows 7 laptop, access an eBook collection anywhere in the world, “extend iPad battery life, batch resize photos, & sync massive music collections”, went on a reign of destruction with Snow Crusher, and had fun decorating our desktops with abstract icon collections. Photo by pasukaru76.

    Random Geek Links

    We have included extra news article goodness to help you catch up on any developments that you may have missed during the holiday break this past week. Note: The three 27C3 articles listed here represent three different presentations at the 27th Chaos Communication Congress hacker conference.

    4chan victim of DDoS as FBI investigates role in PayPal attack
    Users of 4chan may have gotten a taste of their own medicine after the site was knocked offline by a DDoS attack from an unknown origin early Thursday morning.

    Report: FBI seizes server in probe of WikiLeaks attacks
    The FBI has seized a server in Texas as part of its hunt for the groups behind the pro-WikiLeaks denial-of-service attacks launched in December against PayPal, Visa, MasterCard, and others.

    Mozilla exposes older user-account database
    Mozilla has disabled 44,000 older user accounts for its Firefox add-ons site after a security researcher found part of a database of the account information on a publicly available server.

    Data breach affects 4.9 million Honda customers
    Japanese automaker Honda has put some 2.2 million customers in the United States on a security breach alert after a database containing information on the owners and their cars was hacked.

    Chinese Trojan discovered in Android games
    An Android-based Trojan called “Geinimi” has been discovered in the wild and the Trojan is capable of sending personal information to remote servers and exhibits botnet-like behavior.

    27C3 presentation claims many mobiles vulnerable to SMS attacks
    According to security experts, an ‘SMS of death’ threatens to disable many current Sony Ericsson, Samsung, Motorola, Micromax and LG mobiles.

    27C3: GSM cell phones even easier to tap
    Security researchers have demonstrated how open source software on a number of revamped, entry-level cell phones can decrypt and record mobile phone calls in the GSM network.

    27C3: danger lurks in PDF documents
    Security researcher Julia Wolf has pointed out numerous, previously hardly known, security problems in connection with Adobe’s PDF standard.

    Critical update for WordPress
    A critical update has been made available for WordPress in the form of version 3.0.4. The update fixes a security bug in WordPress’s KSES library.

    McAfee Labs Predicts Geolocation, Mobile Devices and Apple Will Top the List of Targets for Emerging Threats in 2011
    The list comprises 2010’s most buzzed about platforms and services, including Google’s Android, Apple’s iPhone, foursquare, Google TV and the Mac OS X platform, which are all expected to become major targets for cybercriminals. McAfee Labs also predicts that politically motivated attacks will be on the rise.

    Windows Phone 7 piracy materializes with FreeMarketplace
    A proof-of-concept application, FreeMarketplace, that allows any Windows Phone 7 application to be downloaded and installed free of charge has been developed.

    Empty email accounts, and some bad buzz for Hotmail
    In the past few days, a number of Hotmail users have been complaining about a rather disconcerting issue: their Hotmail accounts, some up to 10 years old, appear completely empty.
    No emails, no folders, nothing, just what appears to be a new account.

    Reports: Nintendo warns of 3DS risk for kids
    Nintendo has reportedly issued a warning that the 3DS, its eagerly awaited glasses-free 3D portable gaming device, should not be used by children under 6 when the gadget is in 3D-viewing mode.

    Google eyes ‘cloaking’ as next antispam target
    Google plans to take a closer look at the practice of “cloaking,” or presenting one look to a Googlebot crawling one’s site while presenting another look to users.

    Facebook, Twitter stock trading drawing SEC eye?
    The high degree of investor interest in shares of hot Silicon Valley companies that aren’t yet publicly traded–like Facebook, Twitter, LinkedIn, and Zynga–may be leading to scrutiny from the U.S. Securities and Exchange Commission (SEC).

    Random TinyHacker Links

    Photo by jcraveiro.

    Exciting Software Set for Release in 2011
    A few bloggers from great websites such as How-To Geek, Guiding Tech and 7 Tutorials took the time to sit down and talk about their software wishes for 2011. Take the time to read it and share…

    Wikileaks Infopr0n
    An infographic detailing the quest to plug WikiLeaks.

    The New York Times Guide to Mobile Apps
    A growing collection of all mobile app coverage by the New York Times as well as lists of favorite apps from Times writers.

    7,000,000,000 (Video)
    A fascinating look at the world’s population via National Geographic Magazine.

    Super User Questions

    Check out the great answers to these hot questions from Super User.
    How to use a personal computer as a Linux web server for development purposes?
    How to link processing power of old computers together?
    Free virtualization tool for testing suspicious files?
    Why do some actions not work with Remote Desktop?
    What is the simplest way to send a large batch of pictures to a distant friend or colleague?

    How-To Geek Weekly Article Recap

    Had a busy week and need to get caught up on your HTG reading? Then sit back and relax while enjoying these hot posts full of how-to roundup goodness.
    The 50 Best How-To Geek Windows Articles of 2010
    The 20 Best How-To Geek Explainer Topics for 2010
    The 20 Best How-To Geek Linux Articles of 2010
    How to Search Just the Site You’re Viewing Using Google Search
    Ask the Readers: Backing Your Files Up – Local Storage versus the Cloud

    One Year Ago on How-To Geek

    Need more how-to geekiness for your weekend? Then look through this great batch of articles from one year ago that focus on dual-booting and OS installation goodness.
    Dual Boot Your Pre-Installed Windows 7 Computer with Vista
    Dual Boot Your Pre-Installed Windows 7 Computer with XP
    How To Setup a USB Flash Drive to Install Windows 7
    Dual Boot Your Pre-Installed Windows 7 Computer with Ubuntu
    Easily Install Ubuntu Linux with Windows Using the Wubi Installer

    The Geek Note

    We hope that you and your families have had a terrific holiday break as everyone prepares to return to work and school this week. Remember to keep those great tips coming in to us at [email protected]! Photo by pjbeardsley.

    Read the article

  • Ops Center Solaris 11 IPS Repository Management: Using ISO Images

    - by S Stelting
    Please join us for a live WebEx presentation of this topic on Tuesday, November 20th at 9am MDT. Details for the call are provided below:
    https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209834017&UID=1512096072&PW=NYTVlZTYxMzdm&RT=MiMxMQ%3D%3D
    Meeting password: oracle123
    Call-in toll-free number: 1-866-682-4770
    International numbers: http://www.intercall.com/oracle/access_numbers.htm
    Conference Code: 762 9343 #
    Security Code: 7777 #

    With Enterprise Manager Ops Center 12c, you can provision, patch, monitor and manage Oracle Solaris 11 instances. To do this, Ops Center creates and maintains a Solaris 11 Image Packaging System (IPS) repository on the Enterprise Controller. During the Enterprise Controller configuration, you can load repository content directly from Oracle's Support Web site and subsequently synchronize the repository as new content becomes available. Of course, you can also use Solaris 11 ISO images to create and update your Ops Center repository. There are a few excellent reasons for doing this: you're running Ops Center in disconnected mode and don't have Internet access on your Enterprise Controller, or you'd rather avoid the bandwidth associated with live synchronization of a Solaris 11 package repository. This demo will show you how to use Solaris 11 ISO images to set up and update your Ops Center repository.

    Prerequisites

    This tip assumes that you've already installed the Enterprise Controller on a Solaris 11 OS instance and that you're ready for post-install configuration. In addition, there are specific Ops Center and OS version requirements depending on which version of Solaris 11 you plan to install. You can get full details about the requirements in the Release Notes for Ops Center 12c update 2. Additional information is available in the Ops Center update 2 Readme document.

    Part 1: Using a Solaris 11 ISO Image to Create an Ops Center Repository

    Step 1 – Download the Solaris 11 Repository Image

    The Oracle Web site provides a number of download links for official Solaris 11 images. Among those links is a two-part downloadable repository image, which provides repository content for the Solaris 11 SPARC and X86 architectures. In this case, I used the Solaris 11 11/11 image. First, navigate to the Oracle Web site and accept the OTN License agreement:
    http://www.oracle.com/technetwork/server-storage/solaris11/downloads/index.html
    Next, download both parts of the Solaris 11 repository image. I recommend using the Solaris 11 11/11 image, and have provided the URLs here:
    http://download.oracle.com/otn/solaris/11/sol-11-1111-repo-full.iso-a
    http://download.oracle.com/otn/solaris/11/sol-11-1111-repo-full.iso-b
    Finally, use the cat command to generate an ISO image you can use to create your repository:

        # cat sol-11-1111-repo-full.iso-a sol-11-1111-repo-full.iso-b > sol-11-1111-repo-full.iso

    The process is very similar if you plan to set up a Solaris 11.1 release in Ops Center. In that case, navigate to the Solaris 11 download page, accept the license agreement, download both parts of the Solaris 11.1 repository image, and use the cat command to create a single ISO image for Solaris 11.1.

    Step 2 – Mount the Solaris 11 ISO Image in your Local Filesystem

    Once you have created the Solaris 11 ISO file, use the mount command to attach it to your local filesystem.
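    For example, here is a minimal sketch of the mount and a quick sanity check. The mount point /mnt/sol11repo and the ISO path are assumptions; Solaris 11 can attach an ISO file directly as an hsfs filesystem (otherwise, lofiadm -a can create the block device first):

        # Create a mount point and attach the ISO
        mkdir -p /mnt/sol11repo
        mount -F hsfs /export/isos/sol-11-1111-repo-full.iso /mnt/sol11repo

        # The package repository lives in the ./repo subdirectory of the image;
        # pkgrepo info confirms that Solaris 11 recognizes the content
        pkgrepo info -s /mnt/sol11repo/repo

    With this layout, the repository path used later in the configuration wizard would be the mounted ./repo directory (file:///mnt/sol11repo/repo in this sketch).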
    After the image has been mounted, you can browse the repository from the ./repo subdirectory, and use the pkgrepo command to verify that Solaris 11 recognizes the content.

    Step 3 - Use the Image to Create your Ops Center Repository
    When you have confirmed the repository is available, you can use the image to create the Enterprise Controller repository. The operation will be slightly different depending on whether you configure Ops Center for Connected or Disconnected Mode operation. For connected mode operation, specify the mounted ./repo directory in step 4.1 of the configuration wizard, replacing the default Web-based URL. Since you're synchronizing from an OS repository image, you don't need to specify a key or certificate for the operation. For disconnected mode configuration, specify the Solaris 11 directory along with the path to the disconnected mode bundle downloaded by running the Ops Center harvester script. Ops Center will run a job to import package content from the mounted ISO image. A synchronization job can take several hours to run - in my case, the job ran for 3 hours, 22 minutes on a SunFire X4200 M2 server. During the job, Ops Center performs three important tasks:
    - Synchronizes all content from the image and refreshes the repository
    - Updates the IPS publisher information
    - Creates OS provisioning profiles and policies based on the content
    When the job is complete, you can unmount the ISO image from your Enterprise Controller. At that time, you can view the repository contents in your Ops Center Solaris 11 library. For the Solaris 11 11/11 release, you should see 8,668 packages and patches in the contents. You should also see default deployment plans for Solaris 11 provisioning. As part of the repository import, Ops Center generates plans and profiles for desktop, small and large servers for the SPARC and X86 architectures.

    Part 2: Using a Solaris 11 SRU to Update an Ops Center Repository
    It's possible to use the same approach to upgrade your Ops Center repository to a Solaris 11 Support Repository Update, or SRU. Each SRU provides packages and updates to Solaris 11 - for example, SRU 8.5 provided the packages for Oracle VM Server for SPARC 2.2. SRUs are available for download as ISO images from My Oracle Support, under document ID 1372094.1. The document provides download links for all SRUs which have been released by Oracle for Solaris 11. SRUs are cumulative, so later versions include the packages from earlier SRUs. After downloading an ISO image for an SRU, you can mount it to your local filesystem using a mount command similar to the one shown for Solaris 11 11/11. When the ISO image is mounted to the file system, you can perform the Add Content action from the Solaris 11 Library to synchronize packages and patches from the mounted image. I used the same mount point, so the repository URL was file:///mnt/repo once again. After the synchronization of an SRU is complete, you can verify its content in the Solaris 11 library using the search function. The version pattern is 0.175.0.#, where # is the same value as the SRU. In this example, I upgraded to SRU 1. The update job ran in just under 8 minutes, and a quick search shows that 22 software components were added to the repository. It's also possible to search for "Support Repository Update" to confirm the SRU was successfully added to the repository. Details on any of the update content are available by clicking the "View Details" button under the Packages/Patches entry.

    Read the article

  • Differences between AForge and OpenCV

    - by vrish88
    Hello, I am just learning about computer vision and C#. It seems like two prominent image processing libraries are OpenCV and AForge. What are some of the differences between the two? I am making a basic image editor in C#, and while researching I have come across articles on both, but I don't really know why I would choose one over the other. I would like to eventually improve the app to include more advanced functions. Thanks.

    Read the article

  • Extracting ""((Adj|Noun)+|((Adj|Noun)(Noun-Prep)?)(Adj|Noun))Noun"" from Text (Justeson & Katz, 1995)

    - by ssuhan
    I would like to ask whether it is possible to extract ((Adj|Noun)+|((Adj|Noun)(Noun-Prep)?)(Adj|Noun))Noun, the pattern proposed by Justeson and Katz (1995), with the R package openNLP? That is, I would like to use this linguistic filter to extract candidate noun phrases. I don't fully understand its meaning. Could you explain it, or translate the representation into R? Many thanks. Maybe we can start from the sample code:
    library("openNLP")
    acq <- "This paper describes a novel optical thread plug gauge (OTPG) for internal thread inspection using machine vision. The OTPG is composed of a rigid industrial endoscope, a charge-coupled device camera, and a two degree-of-freedom motion control unit. A sequence of partial wall images of an internal thread are retrieved and reconstructed into a 2D unwrapped image. Then, a digital image processing and classification procedure is used to normalize, segment, and determine the quality of the internal thread."
    acqTag <- tagPOS(acq)
    acqTagSplit <- strsplit(acqTag, " ")
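    Reading the pattern: a candidate term ends in a noun, preceded either by a run of adjectives/nouns, or by adjectives/nouns wrapped around an optional noun-preposition pair. A rough sketch of that filter in Python with NLTK (NLTK swapped in for openNLP here; the one-letter tag mapping and the demo call are illustrative assumptions, not tested against the original setup):

        import re
        import nltk  # assumes the punkt and averaged_perceptron_tagger data are installed

        def candidate_phrases(text):
            # Collapse each Penn Treebank tag to one letter:
            # A = adjective, N = noun, P = preposition, x = anything else.
            tokens = nltk.word_tokenize(text)
            letters = "".join(
                "A" if tag.startswith("JJ")
                else "N" if tag.startswith("NN")
                else "P" if tag == "IN"
                else "x"
                for _, tag in nltk.pos_tag(tokens))
            # One letter per token, so match positions map back to token indices.
            # ((Adj|Noun)+ | (Adj|Noun)(Noun Prep)?(Adj|Noun)) Noun
            pattern = re.compile(r"(?:[AN]+|[AN](?:NP)?[AN])N")
            return [" ".join(tokens[m.start():m.end()])
                    for m in pattern.finditer(letters)]

        print(candidate_phrases("A sequence of partial wall images of an internal thread"))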

    Read the article

  • CodePlex Daily Summary for Sunday, May 16, 2010

    CodePlex Daily Summary for Sunday, May 16, 2010

    New Projects
    3D Calculator: 3D Calc is a simple calculator application for Windows Phone 7; the purpose of this project is to demo the 3D animation capabilities of WP7 and sh...
    azaleas: Azaleas
    Blueset Studio Opensource Projects: Only for Opensource projects from Blueset Studio.
    Breck: A Phoenix and Jumper Moneky Production: Breck is a first person non-violent shooter developed in C++ and Dark GDK. After the main game is developed we are looking into making a sequel or...
    Discuz! Forum SDK: This project is used to log in and post or reply to topics on a discuz forum.
    Dominion.NET: Evolving Dominion source code originally written in VB6 and posted by "jatill" on Collectible Card Game Headquarters. Migration of the design and s...
    EkspSys2010-ITR: A mini project for the course Experimental System development in spring 2010
    Facebook Graph Toolkit: This project is a .Net implementation of the Facebook Graph API. The aim of this project is to be a replacement to the existing Facebook Toolkit (h...
    iFree: This is a solution for a Vietnamese social network
    InfoPath Editor for Developer: InfoPath Editor for developers allows the user to modify the HTML text directly inside InfoPath designer or filler and push the change back to InfoPath ...
    iZeit: Run your own online calendar, with blog integration, recurrence, todo list and categories.
    machgos dotNet Tests: Just some little test-projects for learning
    mim: TBA
    MinePost: MinePost is a game made for the first 48 hour Reddit Game Jam.
    Mockina: Mockina is a mock framework. Expression tree syntax is used to specify which members to mock, both public and non-public. The code is easy to under...
    MSBuild Launch Pad (mPad): This is just another shell extension for MSBuild to enable quick execution of MSBuild scripts via the Windows Explorer context menu. (C) 2010 Lex Li
    Peacock: A browser-like tabbed application
    PrimeCalculation: PrimeCalculation is a .NET app to calculate primes in a given range. Speed on a Core2Duo 2.4GHz: found all primes from 0 to 1 billion in 35 seconds (...
    Slightly Silverlight: A framework that leverages Silverlight for processing and business logic, but standard HTML for the presentation layer.
    Stopwatch: Stopwatch is a tool for measuring time. To start and pause the stopwatch you only need to press a key on the keyboard. An additional context menu a...
    YAXLib: Yet Another XML Serialization Library for the .NET Framework: YAXLib is an XML serialization library which helps you structure the XML result freely, choose among private and public fields to be serialized, an...

    New Releases
    Activate Your Glutes: v1.0.3.0: This release is a migration to VS2010, .Net 4, MVC2 and Entity Framework 4. The code has also been considerably cleaned up - taking advantage of E...
    AnyCAD: AnyCAD.Free.ENU.v1.1: http://www.anycad.net Modeling •2D: Line, Rectangle, Arc, Arch, Circle, Spline, Polygon •Feature: Extrude, Loft, Chamfer, Sweep, Revol •Boolean: ...
    Blueset Studio Opensource Projects: 多功能计算器 (Multi-function Calculator) 3.5: Stable version.
    Code for Rapid C# Windows Development eBook: LLBLGen LINQPad Data Context Driver Ver 1.0.0.0: First release of a static LLBLGen Pro Data Context Driver for LINQPad. I recommend LINQPad 4, as it seems more stable with this driver than LINQPad 2.
    DSQLT - Dynamic SQL Templates: Release 1.2. Some behaviour has changed!!: Attention: some behaviour has changed! Now it's necessary to use WildCards in the pattern-parameter for DSQLT.AllSourceContains DSQLT.Databases DSQ...
    FDS AutoCAD plug-in: FDS to AutoCAD plug-in: Basic functionality was implemented. Some routines, like setting the fds executable location, are still not automated.
    Feature Builder Guidance Extensions: FBGX 2 - Standalone FX: Background: The Feature Builder Guidance is extensible and displays guidance content supplied by all the Feature Builder Guidance Extensions (FBGX...
    Floe IRC Client: Floe IRC Client 2010-05 R3: You can now right click on the input box to get options for toggling bold, underline, colors, etc. The size of the nickname column is now saved...
    Floe IRC Client: Floe IRC Client 2010-05 R4: A user's channel status now appears next to their nick when they talk (e.g. @Nick or +Nick). Fixed an error where certain kinds of network probl...
    HD-Trailers.NET Downloader: HD-Trailers.NET Downloader v1.0: Version 1.0. Thanks to Wolfgang for all his help. I let this project languish for too long while focusing on other things, but his involvement has ...
    InfoPath Editor for Developer: InfoPath Editor Beta 1: Initial release: can load InfoPath inner HTML. Can edit InfoPath inner HTML. InfoPath 2007 only.
    LinkSharp: LinkSharp 0.1.0: First release of LinkSharp. Set up IIS, and use the SQL script to create a new database.
    PowerAuras: PowerAuras V3.0.0F: This version adds better integration with GTFO. New flags added: PvP flag, In 5-Man Instance, In Raid Instance, In Battleground, In Arena
    Rx Contrib: V1.4: Adds the ability to catch internal exceptions and the ability to publish errors by queue adapters
    SEO SiteMap: SEO SiteMap RC1: -
    SevenZipLib Library: v9.13.2: Stable release associated with 7z.dll 9.13 beta. Ability to create and update archives not implemented yet.
    Silverlight / WPF Controls: Upload, FlipPanel, DeepZoom, Animation, Encryption: Code Camp Demonstration: This code example demonstrates MVVM/MEF with WPF with attached properties, security and a custom ICommand class.
    SQL Data Capture - Black Box Application Testing: SQLDataCapture V1.2: Added Entity Framework support to the CRUD generator (Insert Stored Procedure) and switched to VS 2010 for development.
    Stopwatch: Stopwatch 0.1: Stopwatch Release 0.1
    VCC: Latest build, v2.1.30515.0: Automatic drop of latest build
    Yet another developer blog - Examples: Asynchronous TreeView in ASP.NET MVC: This sample application shows how to use the jQuery TreeView plugin for creating an asynchronous TreeView in ASP.NET MVC. This application is accompani...

    Most Popular Projects
    Rawr, WBFS Manager, AJAX Control Toolkit, Microsoft SQL Server Product Samples: Database, Silverlight Toolkit, Windows Presentation Foundation (WPF), patterns & practices – Enterprise Library, Microsoft SQL Server Community & Samples, PHPExcel, ASP.NET

    Most Active Projects
    patterns & practices – Enterprise Library, Rawr, PHPExcel, BlogEngine.NET, Microsoft Biology Foundation, Customer Portal Accelerator for Microsoft Dynamics CRM, Windows Azure Command-line Tools for PHP Developers, Mirror Testing System, N2 CMS, StyleCop

    Read the article

  • Which license should I use for my open source project

    - by Tyler
    Hey everybody - I have an open source project that I'm working on and I'm trying to figure out which license would be the best match. Essentially my project is a framework that developers will use to create projects of their own. The vision I had for the licensing of the project (in plain English):
    - The user must not sell the source code or any derivatives.
    - The user must not sell the framework or any derivatives.
    - The user is allowed to sell a program which uses the framework.
    Basically, I just don't want anyone to abuse the open source aspect of the project, take what I've done and turn around and sell it. Any suggestions on a good license to use to achieve this? Thanks, Tyler

    Read the article

  • No root file system is defined error after installation

    - by LearnCode
    I installed Ubuntu through Wubi, and once I rebooted I got a "no root file system is defined" error. Here's the output of the boot_info_script. Could anyone point out where the error is?

    Boot Info Script 0.60 from 17 May 2011

    ============================= Boot Info Summary: ===============================

    => Windows is installed in the MBR of /dev/sda.
    => Windows is installed in the MBR of /dev/sdb.

    sda1: __________________________________________________________________________
    File system: ntfs
    Boot sector type: Windows Vista/7
    Boot sector info: No errors found in the Boot Parameter Block.
    Operating System: Windows 7
    Boot files: /bootmgr /Boot/BCD /Windows/System32/winload.exe /ntldr /ntdetect.com /wubildr /ubuntu/winboot/wubildr /wubildr.mbr /ubuntu/winboot/wubildr.mbr /ubuntu/disks/root.disk /ubuntu/disks/swap.disk

    sda1/Wubi: _____________________________________________________________________
    File system:
    Boot sector type: Unknown
    Boot sector info: Mounting failed: mount: unknown filesystem type ''

    sda2: __________________________________________________________________________
    File system: vfat
    Boot sector type: Unknown
    Boot sector info: No errors found in the Boot Parameter Block.
    Operating System:
    Boot files: /boot.ini /ntldr /NTDETECT.COM

    sdb1: __________________________________________________________________________
    File system: ntfs
    Boot sector type: Windows Vista/7
    Boot sector info: No errors found in the Boot Parameter Block.
    Operating System:
    Boot files:

    ============================ Drive/Partition Info: =============================

    Drive: sda _____________________________________________________________________
    Disk /dev/sda: 160.0 GB, 160041885696 bytes
    240 heads, 63 sectors/track, 20673 cylinders, total 312581808 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes

    Partition Boot Start Sector End Sector # of Sectors Id System
    /dev/sda1 * 63 301,250,879 301,250,817 7 NTFS / exFAT / HPFS
    /dev/sda2 301,250,943 312,575,759 11,324,817 c W95 FAT32 (LBA)

    GUID Partition Table detected, but does not seem to be used.
Partition Start Sector End Sector # of Sectors System /dev/sda1 323,465,741,313,502,988275,962,973,585-323,465,465,350,529,402 - /dev/sda2 242,728,591,638,290,720578,721,383,108,845,578335,992,791,470,554,859 - /dev/sda3 1,827,498,311,425,204,2562,091,935,274,843,009,907264,436,963,417,805,652 - /dev/sda4 579,711,218,081,401,3572,006,665,459,744,645,1521,426,954,241,663,243,796 - /dev/sda11 270,286,346,402,038,1183,786,543,326,404,525,9543,516,256,980,002,487,837 - /dev/sda12 4,179,681,002,230,769,6684,179,389,374,010,033,387-291,628,220,736,280 - /dev/sda13 232,556,480,979,456,1311,160,152,593,793,119,235927,596,112,813,663,105 - /dev/sda14 98,342,784,050,266,9183,691,264,578,843,725,1953,592,921,794,793,458,278 - /dev/sda15 2,307,845,219,957,882,4961,850,841,032,955,276,350-457,004,187,002,606,145 - /dev/sda16 512,592,046,878,946,497368,458,231,024,779,444-144,133,815,854,167,052 - /dev/sda17 2,504,135,232,870,384,3923,665,087,872,719,320,8291,160,952,639,848,936,438 - /dev/sda18 3,783,181,605,270,691,304122,034,509,624,708,942-3,661,147,095,645,982,361 - /dev/sda19 3,519,661,520,275,829,5122,376,243,094,723,723,587-1,143,418,425,552,105,924 - /dev/sda20 3,867,920,076,859,0744,494,691,111,933,625,1044,490,823,191,856,766,031 - /dev/sda21 1,500,144,061,909,253,7612,511,182,033,846,676,3401,011,037,971,937,422,580 - /dev/sda22 13,035,625,499,900,0062,360,168,613,941,394,9472,347,132,988,441,494,942 - /dev/sda23 4,228,978,682,068,599,48813,159,423,631,648,263-4,215,819,258,436,951,224 - /dev/sda24 3,695,955,742,872,046,9084,561,928,726,501,845,776865,972,983,629,798,869 - /dev/sda25 1,297,460,286,683,948,0461,444,350,486,339,417,957146,890,199,655,469,912 - /dev/sda26 1,228,858,248,533,131,831 0-1,228,858,248,533,131,830 - /dev/sda121 3,189,184,846,146,487,1461,849,820,258,006,914,852-1,339,364,588,139,572,293 - /dev/sda122 1,226,215,547,991,800,578389,781,518,734,546,300-836,434,029,257,254,277 - /dev/sda123 3,851,660,168,574,583,4654,046,215,657,583,031,556194,555,489,008,448,092 - /dev/sda124 1,197,460,980,174,153,341699,103,965,005,093,246-498,357,015,169,060,094 - Drive: sdb _____________________________________________________________________ Disk /dev/sdb: 750.2 GB, 750153367552 bytes 255 heads, 63 sectors/track, 91200 cylinders, total 1465143296 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdb1 2,048 1,465,143,295 1,465,141,248 7 NTFS / exFAT / HPFS "blkid" output: ________________________________________________________________ Device UUID TYPE LABEL /dev/loop0 iso9660 Ubuntu 11.04 amd64 /dev/loop1 squashfs /dev/sda1 E814B55B14B52E06 ntfs /dev/sda2 01CD-023B vfat HP_RECOVERY /dev/sdb1 7836F22A36F1E8D0 ntfs Elements ================================ Mount points: ================================= Device Mount_Point Type Options /dev/loop0 /cdrom iso9660 (ro,noatime) /dev/loop1 /rofs squashfs (ro,noatime) /dev/sdb1 /mnt fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) ================================ sda2/boot.ini: ================================ -------------------------------------------------------------------------------- [boot loader] timeout=0 default=C:\CMDCONS\BOOTSECT.DAT [operating systems] multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect C:\CMDCONS\BOOTSECT.DAT="Microsoft Windows Recovery Console" /cmdcons -------------------------------------------------------------------------------- 
======================== Unknown MBRs/Boot Sectors/etc: ======================== Unknown GPT Partiton Type c104043000e9b9040dff24b580010100 Unknown GPT Partiton Type 46313020746f20737461727420746865 Unknown GPT Partiton Type 65727920706172746974696f6e207761 Unknown GPT Partiton Type 727920706172746974696f6e0d0a0000 Unknown GPT Partiton Type 000f84e5f7668b162404e82804744066 Unknown GPT Partiton Type ce01e8dc038bfe66391624047505e8d9 Unknown GPT Partiton Type 0345086603f0e881030bd2740333d240 Unknown GPT Partiton Type bece01e8db0287fec645041266895508 Unknown GPT Partiton Type 01f60634010175078b363b01e854f5e8 Unknown GPT Partiton Type 313825740ffec03865107408fec03824 Unknown GPT Partiton Type 02f60634014074088bfdbece01e85101 Unknown GPT Partiton Type 263401f9e894f30f858ef4e8e201e8ec Unknown GPT Partiton Type f7e960f35245434f5645525966606633 Unknown GPT Partiton Type 660faf1e00106603dac3668b0e001066 Unknown GPT Partiton Type 8bfd386d04740583c710e2f6c36660c6 Unknown GPT Partiton Type 04ebf132c0b91000f3aac3bf0c04ebf3 Unknown GPT Partiton Type 02662bc1660fb71e0e02662bc366031e Unknown GPT Partiton Type f4b40ebb0700b901003c08751381ff25 Unknown GPT Partiton Type 534f465448494e4b90653f62011b0100 Unknown GPT Partiton Type 0b050900027777772e68702e636f6d00 Unknown GPT Partiton Type d441a0f5030003000ecb744a08bb3746 Unknown GPT Partiton Type f8579a116b4a7aa931cde97a4b9b5c09 Unknown GPT Partiton Type 7229990415b77c0a1970e7e824237a3a Unknown GPT Partiton Type afb6e34d6b4bd8c7c0eada19a9786cc3 Unknown BootLoader on sda1/Wubi 00000000 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 |0000000000000000| * 00000200 Unknown BootLoader on sda2 00000000 e9 a7 00 52 45 43 4f 56 45 52 59 00 02 08 20 00 |...RECOVERY... .| 00000010 02 00 00 00 00 f8 00 00 3f 00 f0 00 7f b9 f4 11 |........?.......| 00000020 8c cd ac 00 1e 2b 00 00 00 00 00 00 02 00 00 00 |.....+..........| 00000030 01 00 06 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000040 80 00 29 3b 02 cd 01 20 20 20 20 20 20 20 20 20 |..);... | 00000050 20 20 46 41 54 33 32 20 20 20 8b d0 c1 e2 02 80 | FAT32 ......| 00000060 e6 01 66 c1 e8 07 66 3b 46 f8 74 2a 66 89 46 f8 |..f...f;F.t*f.F.| 00000070 66 03 46 f4 66 0f b6 5e 28 80 e3 0f 74 0f 3a 5e |f.F.f..^(...t.:^| 00000080 10 0f 83 90 00 66 0f af 5e 24 66 03 c3 bb e0 07 |.....f..^$f.....| 00000090 b9 01 00 e8 cf 00 8b da 66 8b 87 00 7e 66 25 ff |........f...~f%.| 000000a0 ff ff 0f 66 3d f8 ff ff 0f c3 33 c9 8e d9 8e c1 |...f=.....3.....| 000000b0 8e d1 66 bc f4 7b 00 00 bd 00 7c 66 0f b6 46 10 |..f..{....|f..F.| 000000c0 66 f7 66 24 66 0f b7 56 0e 66 03 56 1c 66 89 56 |f.f$f..V.f.V.f.V| 000000d0 f4 66 03 c2 66 89 46 fc 66 c7 46 f8 ff ff ff ff |.f..f.F.f.F.....| 000000e0 66 8b 46 2c 66 50 e8 af 00 bb 70 00 b9 01 00 e8 |f.F,fP....p.....| 000000f0 73 00 bf 00 07 b1 0b be a9 7d f3 a6 74 2a 03 f9 |s........}..t*..| 00000100 83 c7 15 81 ff 00 09 72 ec 66 40 4a 75 db 66 58 |[email protected]| 00000110 e8 47 ff 72 cf be b4 7d ac 84 c0 74 09 b4 0e bb |.G.r...}...t....| 00000120 07 00 cd 10 eb f2 cd 19 66 58 ff 75 09 ff 75 0f |........fX.u..u.| 00000130 66 58 bb 00 20 66 83 f8 02 72 da 66 3d f8 ff ff |fX.. f...r.f=...| 00000140 0f 73 d2 66 50 e8 50 00 0f b6 4e 0d e8 16 00 c1 |.s.fP.P...N.....| 00000150 e1 05 03 d9 66 58 53 e8 00 ff 5b 72 d8 8a 56 40 |....fXS...[r..V@| 00000160 ea 00 00 00 20 66 60 66 6a 00 66 50 53 6a 00 66 |.... f`fj.fPSj.f| 00000170 68 10 00 01 00 8b f4 b8 00 42 8a 56 40 cd 13 be |h........B.V@...| 00000180 c7 7d 72 94 67 83 44 24 06 20 66 67 ff 44 24 08 |.}r.g.D$. 
fg.D$.| 00000190 e2 e3 83 c4 10 66 61 c3 66 48 66 48 66 0f b6 56 |.....fa.fHfHf..V| 000001a0 0d 66 f7 e2 66 03 46 fc c3 4e 54 4c 44 52 20 20 |.f..f.F..NTLDR | 000001b0 20 20 20 20 0d 0a 4e 6f 20 53 79 73 74 65 6d 20 | ..No System | 000001c0 44 69 73 6b 20 6f 72 0d 0a 44 69 73 6b 20 49 2f |Disk or..Disk I/| 000001d0 4f 20 65 72 72 6f 72 0d 0a 50 72 65 73 73 20 61 |O error..Press a| 000001e0 20 6b 65 79 20 74 6f 20 72 65 73 74 61 72 74 0d | key to restart.| 000001f0 0a 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa |..............U.| 00000200 =============================== StdErr Messages: =============================== umount: /isodevice: device is busy. (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1))

    Read the article

  • In a multithreaded app, would a multi-core or multiprocessor arrangement be better?

    - by Michael
    I've read a lot on this topic already both here (e.g., stackoverflow.com/questions/1713554/threads-processes-vs-multithreading-multi-core-multiprocessor-how-they-are or http://stackoverflow.com/questions/680684/multi-cpu-multi-core-and-hyper-thread) and elsewhere (e.g., ixbtlabs.com/articles2/cpu/rmmt-l2-cache.html or software.intel.com/en-us/articles/multi-core-introduction/), but I still am not sure about a couple things that seem very straightforward. So I thought I'd just ask. (1) Is a multi-core processor in which each core has dedicated cache effectively the same as a multiprocessor system (balanced of course for processor speed, cache size, and so on)? (2) Let's say I have some images to analyze (i.e., computer vision), and I have these images loaded into RAM. My app spawns a thread for each image that needs to be analyzed. Will this app on a shared cache multi-core processor run slower than on a dedicated cache multi-core processor, and would the latter run at the same speed as on an equivalent single-core multiprocessor machine? Thank you for the help!

    Read the article

  • ODBC Proxy for remotely accessing legacy resources?

    - by Winston Fassett
    Our project uses AcuCorp's AcuODBC driver to access a legacy Vision database. The problem is that we only have a 32-bit driver and the installer simply won't run on our 64-bit servers. I need a way to use SSIS to pull data from that system. As far as I can tell, there are 3 options:
    1. Set up a whole new SQL Server instance with SSIS and the AcuODBC drivers on a 32-bit VM (costly)
    2. Try to hack the 32-bit driver onto our 64-bit server manually (failure prone and unsupported)
    3. Set up a 32-bit VM with some sort of "proxy" service that our 64-bit SSIS can use to pull the data
    The first option is the least desirable. If you have any suggestions for options 2 or 3, or anything else I haven't thought of, I'd love to hear them.

    Read the article

  • Dice face value recognition

    - by Jakob Gade
    I’m trying to build a simple application that will recognize the values of two 6-sided dice. I’m looking for some general pointers, or maybe even an open source project. The two dice will be black and white, with white and black pips respectively. Their distance to the camera will always be the same, but their position on the playing surface will be random. (The example image is not ideal: the real surface will be a different color, and the shadows will be gone.) I have no prior experience with developing this kind of recognition software, but I would assume the trick is to first isolate the faces by searching for the square profile with a dominating white or black color (the rest of the image, i.e. the table/playing surface, will be in distinctly different colors), and then isolate the pips for the count. Shadows will be eliminated by top-down lighting. I’m hoping the described scenario is so simple (read: common) it may even be used as an “introductory exercise” for developers working on OCR technologies or similar computer vision challenges.
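    For what it's worth, a rough sketch of the pip-counting step with OpenCV in Python (the file name and pip-area bounds are made-up placeholders; the white die would use the inverse threshold):

        import cv2

        # Isolate the white pips on the black die, then count blobs of pip size.
        img = cv2.imread("dice.png", cv2.IMREAD_GRAYSCALE)
        _, pips = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)
        n, _, stats, _ = cv2.connectedComponentsWithStats(pips)
        # Label 0 is the background; drop blobs too small or too large to be pips.
        count = sum(1 for i in range(1, n) if 30 < stats[i, cv2.CC_STAT_AREA] < 500)
        print("black die shows:", count)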

    Read the article

  • Subversion roadmap

    - by gbjbaanb
    Recently there was a post to the Subversion dev mailing list suggesting a vision and roadmap for the future of Subversion. As a result, I'm posting this to elicit some suggestions and contributions from the users of Subversion. Any comments are welcome, and I shall feed back a synopsis with a link to this question to the dev mailing list. On the post, several ideas were suggested as being "very nice to have" and are offered as the starting point of a future roadmap. These are:
    - Obliterate
    - Shelve/Checkpoint
    - Repository-dictated Configuration
    - Rename Tracking
    - Improved Merging
    - Improved Tree Conflict Handling
    - Enterprise Authentication Mechanisms
    - Forward History Searching
    - Log Message Templates
    So given all the above, what features in Subversion, or missing from Subversion, do you think could be improved or added?

    Read the article

  • How to Tell a Hardware Problem From a Software Problem

    - by Chris Hoffman
    Your computer seems to be malfunctioning — it’s slow, programs are crashing or Windows may be blue-screening. Is your computer’s hardware failing, or does it have a software problem that you can fix on your own? This can actually be a bit tricky to figure out. Hardware problems and software problems can lead to the same symptoms — for example, frequent blue screens of death may be caused by either software or hardware problems.

    Computer is Slow
    We’ve all heard the stories — someone’s computer slows down over time because they install too much software that runs at startup or it becomes infected with malware. The person concludes that their computer is slowing down because it’s old, so they replace it. But they’re wrong. If a computer is slowing down, it has a software problem that can be fixed. Hardware problems shouldn’t cause your computer to slow down. There are some rare exceptions to this — perhaps your CPU is overheating and it’s downclocking itself, running slower to stay cooler — but most slowness is caused by software issues.

    Blue Screens
    Modern versions of Windows are much more stable than older versions of Windows. When used with reliable hardware with well-programmed drivers, a typical Windows computer shouldn’t blue-screen at all. If you are encountering frequent blue screens of death, there’s a good chance your computer’s hardware is failing. Blue screens could also be caused by badly programmed hardware drivers, however. If you just installed or upgraded hardware drivers and blue screens start, try uninstalling the drivers or using system restore — there may be something wrong with the drivers. If you haven’t done anything with your drivers recently and blue screens start, there’s a very good chance you have a hardware problem.

    Computer Won’t Boot
    If your computer won’t boot, you could have either a software problem or a hardware problem. Is Windows attempting to boot and failing part-way through the boot process, or does the computer no longer recognize its hard drive or not power on at all? Consult our guide to troubleshooting boot problems for more information.

    When Hardware Starts to Fail…
    Here are some common components that can fail and the problems their failures may cause:
    - Hard Drive: If your hard drive starts failing, files on your hard drive may become corrupted. You may see long delays when you attempt to access files or save to the hard drive. Windows may stop booting entirely.
    - CPU: A failing CPU may result in your computer not booting at all. If the CPU is overheating, your computer may blue-screen when it’s under load — for example, when you’re playing a demanding game or encoding video.
    - RAM: Applications write data to your RAM and use it for short-term storage. If your RAM starts failing, an application may write data to part of the RAM, then later read it back and get an incorrect value. This can result in application crashes, blue screens, and file corruption.
    - Graphics Card: Graphics card problems may result in graphical errors while rendering 3D content or even just while displaying your desktop. If the graphics card is overheating, it may crash your graphics driver or cause your computer to freeze while under load — for example, when playing demanding 3D games.
    - Fans: If any of the fans fail in your computer, components may overheat and you may see the above CPU or graphics card problems. Your computer may also shut itself down abruptly so it doesn’t overheat any further and damage itself.
    - Motherboard: Motherboard problems can be extremely tough to diagnose. You may see occasional blue screens or similar problems.
    - Power Supply: A malfunctioning power supply is also tough to diagnose — it may deliver too much power to a component, damaging it and causing it to malfunction. If the power supply dies completely, your computer won’t power on and nothing will happen when you press the power button.
    Other common problems — for example, a computer slowing down — are likely to be software problems. It’s also possible that software problems can cause many of the above symptoms — malware that hooks deep into the Windows kernel can cause your computer to blue-screen, for example.

    The Only Way to Know For Sure
    We’ve tried to give you some idea of the difference between common software problems and hardware problems with the above examples. But it’s often tough to know for sure, and troubleshooting is usually a trial-and-error process. This is especially true if you have an intermittent problem, such as your computer blue-screening a few times a week. You can try scanning your computer for malware and running System Restore to restore your computer’s system software back to its previous working state, but these aren’t guaranteed ways to fix software problems. The best way to determine whether the problem you have is a software or hardware one is to bite the bullet and restore your computer’s software back to its default state. That means reinstalling Windows or using the Refresh or Reset feature on Windows 8. See whether the problem still persists after you restore its operating system to its default state. If you still see the same problem — for example, if your computer is blue-screening and continues to blue-screen after reinstalling Windows — you know you have a hardware problem and need to have your computer fixed or replaced. If the computer crashes or freezes while reinstalling Windows, you definitely have a hardware problem. Even this isn’t a completely perfect method — for example, you may reinstall Windows and install the same hardware drivers afterwards. If the hardware drivers are badly programmed, the blue-screens may continue. Blue screens of death aren’t as common on Windows these days — if you’re encountering them frequently, you likely have a hardware problem. Most blue screens you encounter will likely be caused by hardware issues. On the other hand, other common complaints like “my computer has slowed down” are easily fixable software problems. When in doubt, back up your files and reinstall Windows.

    Image Credit: Anders Sandberg on Flickr, comedy_nose on Flickr

    Read the article

  • Architecture choice about representation of collections in Business Objects

    - by Rajarshi
    I have made certain choices in my architecture which I request the community to review and comment on. I am breaking the post up into smaller sections to make it easier to understand the context and then suggest/comment. I am sorry that the post is long, but it is required to explain the context.

    What am I building
    A typical business application where there are application users, security roles, business operation/action rights based on roles and several business modules like Stock Receive, Stock Transfer, Sale Order, Sale Invoice, Sale Return, Stock Audit etc. and several reports. The application is a WinForm application since it has a lot of rich and responsive UI requirements and has to operate in disconnected mode (with a local SQL Server) most of the time.

    What have I done
    I have built a framework - nothing to boast about, but just a set of libraries that serves the repetitive requirements of my application, e.g. authentication, role based authorization, data access, validation, exception handling, logging, change status tracking, presentation model compliance and reasonable loose coupling between components. No, I have not written everything from scratch; you can say I have consolidated many things together - like some concepts from CSLA, Martin Fowler for Presentation Model, blocks from Enterprise Library, Unity etc. - to build a set of libraries that will help my developers be productive quickly without having to look up Google for many of the technical requirements. I have tried to keep the framework generic so that it can be used in typical business applications, and also tried to follow some best practices that will allow the same Business Objects to be used in an ASP.NET MVC environment. My present architecture serves my objectives well, and I have built several modules (on WinForm) without much trouble. The architecture also lent itself well to building a usable prototype on ASP.NET MVC with the same set of business objects, without changing a single line of code.

    My Dilemma
    I have used Custom Business Objects since that gives me a clearer OOP representation of the problem scope in my solution scope, and helps me visualize my entire solution as a collection of objects with data and behavior, rather than having a set of relational data (DataSet) and implementing behaviors (business logic, validation) etc. separately. With rich databinding support in .NET 2.0, binding Custom Business Objects to the UI was a breeze. Now, while building my business objects, I am still in a dilemma about the representation of collections in business objects. Currently I am using DataSets to represent collections, while I have seen many suggestions to implement custom collections. For example, in my vision, a typical Sale Invoice object will contain 'Sales Invoice Items' as a collection. Now theoretically, I can accept that each 'Sales Invoice Item' should have its own behavior along with its data (ItemCode, Name, Qty, Price etc.), but typically the managing of Sale Invoice Items in a Sale Invoice is handled by the Sale Invoice object itself, e.g. adding/removing items from the collection. Additionally, we can also put business logic/rules for the Sales Invoice Items, like "Qty should not be greater than the ordered qty" or "Price should be at most 10% above the price in the Sale Order", in the Sale Invoice object itself.
    With that kind of vision, I felt that most business object child collections can be managed by the parent itself, including add/remove from the collection as well as implementing business logic for the collection items; hence the collection items hold nothing but data. Additionally, typical collections are represented in the UI in grids, where the ability to support DataBinding becomes very important for any collection. Implementing a custom collection, in that case, would also mean that I have to implement robust DataBinding support for the collection as well, which is of course time consuming. Now, considering that child collection behaviors are implemented in the parent, and the need for DataBinding of child collections, I chose DataSet to represent any child collection in my business objects. In the above example of Sale Invoice, I will have 'Invoice Number', 'Date', 'Customer' etc. as attributes of the 'Sale Invoice', but 'InvoiceItems' as a DataSet. Of course, when I say DataSet, it is not a vanilla DataSet but an extended DataSet that supports business rule validation and the same role based security model of my framework to allow/deny any business operation on rows/columns of the DataSet, automatically. This approach has allowed easier collection management and databinding in my business objects, and my developers are able to deliver modules rapidly.

    Questions
    Do you feel that the approach is reasonable? Do you see any shortcomings of this approach? I am recently thinking of using 'Typed DataSets' as child collections for easier representation in code, which will allow me to write 'currentInvoice.InvoiceItems' (for the DataTable) and 'invoiceItem.ProductCode' or 'invoiceItem.Qty', instead of 'drow["ProductCode"].ToString()' or '(int)drow["Qty"]' etc. Does this choice have any demerits?

    Thank you if you have read so far, and a salute if you still have the energy to answer.

    Read the article

  • Using Mapkit to create a local searchable Map

    - by Digital D
    Using MapKit as a base, I'm planning on adding a map to a project with 'local search' capabilities. I think 'local search' describes the feature I want to design into the map. Here is my vision: the map is displayed on the bottom half of a view. The user's current location is highlighted by default. When the user pushes the 'search' button, annotation pins drop onto the map. The search is programmatically fixed to a certain item, for example supermarkets, so supermarkets in a 5 mile radius of the user's current location will populate the map. How would I add this local search feature to the already amazing MapKit? I've learned an incredible amount as a new developer in the last few months, and look forward to learning googles... correction, googols more. Thanks in anticipation.

    Read the article

  • Securing Plugin Data in WordPress From Access by Other Plugins?

    - by farinspace
    There probably is some solution to this - whether it involves code running on just the WordPress installation, or a combination of a WordPress installation and a master server, I am not sure yet - but please remember not to have tunnel vision, and consider any and all possible solutions. The scenario is this: a WordPress plugin (plugin-A) manages some sort of valuable data (something that the admin would not want stolen) - let's say lead data with users' names and email addresses - and the plugin uses its own db tables. Other than the obvious (which is the admin installing plugin-B, not knowing its malicious intent), what is to prevent another WordPress plugin (plugin-B) from accessing plugin-A's data, or hacking plugin-A's files to circumvent security?

    Read the article

  • How to set default values to all wrong or null parameters of method?

    - by Roman
    At the moment I have this code (and I don't like it):

    private RenderedImage getChartImage(GanttChartModel model, String title,
            Integer width, Integer height, String xAxisLabel, String yAxisLabel,
            Boolean showLegend) {
        if (title == null) {
            title = "";
        }
        if (xAxisLabel == null) {
            xAxisLabel = "";
        }
        if (yAxisLabel == null) {
            yAxisLabel = "";
        }
        if (showLegend == null) {
            showLegend = true;
        }
        if (width == null) {
            width = DEFAULT_WIDTH;
        }
        if (height == null) {
            height = DEFAULT_HEIGHT;
        }
        ...
    }

    How can I improve it? I have some thoughts about introducing an object which will contain all these parameters as fields; then, maybe, it'll be possible to apply the builder pattern. But I still don't have a clear vision of how to implement that, and I'm not sure it's worth doing. Any other ideas?

    Read the article

  • Singletons and other design issues

    - by Ahmed Saleh
    I have worked with different languages like C++/Java and currently AS3. Most applications were computer vision and small 2D computer games. Most companies that I have worked for use Singletons in a language like AS3 to retrieve elements or classes in an easy way. Their problem is basically that they need some variables, or to call functions, from other classes. In a language like AS3 there is no private constructor, and they write hacky code to prevent new instances. In Java and C++ I also faced the situation of needing to use another class's members or to call its functions from different classes. The question is: is there a better or alternative design to let classes interact with each other without using singletons? I feel that composition is the answer, but I need more detailed solutions or design suggestions.
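    As one possibility, a minimal sketch (hypothetical names, in Python for brevity; the same shape works in AS3, Java or C++) of replacing a singleton with composition plus constructor injection - the shared object is built once at the top level and handed to whoever needs it:

        class Camera:
            def capture(self):
                return "frame"

        class Tracker:
            # The camera is passed in, instead of being fetched from inside
            # the class through something like Camera.get_instance().
            def __init__(self, camera):
                self.camera = camera

            def update(self):
                return self.camera.capture()

        # One place decides how many cameras exist and who shares one.
        camera = Camera()
        tracker = Tracker(camera)
        print(tracker.update())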

    Read the article

  • Process for beginning a Ruby on Rails project

    - by Daniel Beardsley
    I'm about to begin a Ruby on Rails project and I'd love to hear how others go through the process of starting an application design. I have quite a bit of experience with RoR, but not much experience starting from scratch with only a vision, and would appreciate the wisdom of others who've been there. I'm looking for an order of events, reasons for the order, and maybe why each part is important. I can think of a few starting points, but I'm not sure where it's best to begin:
    - Model design and relationships (entities, how they relate, and their attributes)
    - Think of user use-cases (or story-boards) and implement the minimum to get these done
    - Create Model unit-tests, then create the necessary migrations and AR models to get the tests to pass
    - Hack out the most basic version of the simplest part of your application and go from there
    - Start with a template for a rails app (like http://github.com/thoughtbot/suspenders)
    - Do the boring gruntwork first (User auth, session management, ...)
    - ...

    Read the article

  • Control webcam from C#

    - by Moulde
    Hi, I have a Creative Live! Cam Optia AF webcam. The software included in the package is able to control the camera in different ways, like setting autofocus to auto or manual, and a bunch of gamma and brightness settings. I'm capturing the feed with the AForge computer vision library, and it's working great. But I would like to be able to set the manual focus from inside my application. I've been searching for a tutorial, but have come up empty handed. Can I somehow either disassemble the included software, or is there some way to fetch the traffic / instructions being sent to the device? Thanks in advance.

    Read the article

  • Standardizing camera input in OpenCV? (Contrast/Saturation/Brightness etc..)

    - by karpathy
    I am building an application using OpenCV that uses the webcam and runs some vision algorithms. I would like to make this application available on the internet after I am done, but I am concerned about the vast differences in camera settings on every computer, and I am worried that the algorithm may break if the settings are too different from mine. Is there any way, after capturing the frame, to post-process it and make sure that the contrast is X, brightness is Y, and saturation is Z? I think the camera settings themselves cannot be changed directly from code using the current OpenCV Python bindings. Would anyone be able to tell me how I could calculate some of these parameters from the image and adjust them appropriately using OpenCV?
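    One way to approach it, sketched under assumptions (the target mean and standard deviation below are arbitrary placeholders for "brightness Y" and "contrast X", not OpenCV constants): measure each frame's intensity statistics and rescale so every camera's output lands in the same range. Saturation could be handled the same way on the S channel after converting to HSV.

        import cv2

        TARGET_MEAN, TARGET_STD = 128.0, 50.0  # assumed targets, tune to taste

        def standardize(frame_bgr):
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            mean, std = float(gray.mean()), float(gray.std())
            gain = TARGET_STD / max(std, 1e-6)   # contrast correction
            bias = TARGET_MEAN - gain * mean     # brightness correction
            return cv2.convertScaleAbs(frame_bgr, alpha=gain, beta=bias)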

    Read the article

< Previous Page | 89 90 91 92 93 94 95 96 97 98 99 100  | Next Page >