Search Results

Search found 70 results on 3 pages for 'crunch'.

Page 2/3 | < Previous Page | 1 2 3  | Next Page >

  • Google Earth 6–It’s All About Trees & Better Street View

    - by Gopinath
    The latest version of Google Earth is all about 3D models of trees that you can see as you walk through the streets in Google Earth, plus integrated Street View. Tech Crunch says: "...trees are obviously a hugely important part of the Earth. To get them into Google Earth, the search giant has made 3D models of over 50 different species of trees. And they've included over 80 million of them in various places around the world including Athens, Berlin, Chicago, New York City, San Francisco, and Tokyo." The other big addition to this latest version of Google Earth is integrated Street View. To be clear, Google has had a form of Street View in Google Earth since 2008, but now it's fully a part of the experience. This means that you can go all the way from space right down to Street View seamlessly. Check the embedded video to learn more about Google Earth 6's features. This article, titled "Google Earth 6–It's All About Trees & Better Street View", was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • Building Web Applications with ACT and jQuery

    - by dwahlin
    My second talk at TechEd is focused on integrating ASP.NET AJAX and jQuery features into websites (if you're interested in Silverlight, you can download code/slides for that talk here). The content starts out by discussing ScriptManager features available in ASP.NET 3.5 and ASP.NET 4 and provides details on why you should consider using a Content Delivery Network (CDN). If you're running an external-facing site, checking out the CDN features offered by Microsoft or Google is definitely recommended. The talk also goes into the process of contributing to the Ajax Control Toolkit, as well as the new Ajax Minifier tool that's available to crunch JavaScript and CSS files. The extra fun starts in the next part of the talk, which details some of the work Microsoft is doing with the jQuery team to donate template, globalization and data-linking code to the project. I go into jQuery templates, data linking and a new globalization option that are all being worked on. I want to thank Stephen Walther, Dave Reed and James Senior for their thoughts and contributions, since some of the topics covered are pretty bleeding edge right now. The slides and sample code for the talk can be downloaded below. Download Slides and Samples

    Read the article

  • Business Analyst role in development process

    - by Ryan
    I work as a business analyst and I currently oversee much of the development effort on an internal project. I'm responsible for the requirements, specs, and overall testing. I work closely with the developers (onshore and offshore). The offshore team produces all of the reports. Version 1.0 had a 9-month development cycle and I had about 4-5 months to test all the reports. There was the usual back and forth to get the implementation right. Version 2.0 had a much shorter development cycle (3 months). I received the first version of the reports about 3 weeks ago and noticed a lot of things wrong with them. Many of the requirements were wrong, and the performance of the queries was horrendous, at 5x-6x longer than it should have been. The onshore lead developer was out and did not supervise the offshore development team in generating the reports. Without consulting management, I took a look at the SQL in the reports and was able to improve performance greatly (by a factor of 6x), which is acceptable for this version. I sent the updated queries as guidelines to the offshore team and told them they should look at doing X instead of Y to improve performance, and also to fix some specific logic issues. I then spoke to my managers about this, because it doesn't feel right that I was developing SQL queries, but given our time crunch I saw no other way. We were able to fix the issue quite fast, which I'm happy with. Current situation: the onshore managers aren't too pleased that the offshore team did not code for performance. I know there are some things I could have done better throughout this process, and I do not in any way consider myself a programmer. My question is: if an offshore team that works apart from the onshore project resources fails to deliver an acceptable release, is it appropriate to clean up their work to meet a deadline? What kind of problems could this create in the future?

    Read the article

  • Statistical Software Quality Control References

    - by Xodarap
    I'm looking for references about hypothesis testing in software management. For example, we might wonder whether "crunch time" leads to an increase in defect rate - this is a surprisingly difficult thing to do. There are many questions on how to measure quality - this isn't what I'm asking. And there are books like Kan's which discuss various quality metrics and their utilities. I'm not asking this either. I want to know how one applies these metrics to make decisions. E.g. suppose we decide to go with critical errors / KLOC. One of the problems we'll have to deal with is that this is not a normally distributed data set (almost all patches have zero critical errors). And further, it's not clear that we really want to examine the difference in means. So what should our alternative hypothesis be? (Note: based on previous questions, my guess is that I'll get a lot of answers telling me that this is a bad idea. That's fine, but I'd request that they be based on published data, instead of your own experience.)
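
    To make the question concrete with one possibility (this framing is mine, not the poster's): since critical-error counts are small non-negative integers, one option is to drop the normality assumption entirely, model defects as Poisson counts, and compare rates rather than means:

      H_0: \lambda_{\text{crunch}} = \lambda_{\text{normal}}
      \qquad \text{vs.} \qquad
      H_1: \lambda_{\text{crunch}} > \lambda_{\text{normal}}

    With k_1 critical errors observed in n_1 KLOC of crunch-time code and k_2 in n_2 KLOC of normal code, the standard exact test conditions on the total: under H_0, k_1 \sim \mathrm{Binomial}(k_1 + k_2,\ n_1/(n_1 + n_2)), so a one-sided binomial tail gives the p-value. If the data are more zero-inflated than a Poisson model allows, a negative binomial or zero-inflated model is the usual next step.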

    Read the article

  • Failed Project: When to call it?

    - by Dan Ray
    A few months ago my company found itself with its hands around a white-hot emergency of a project, and my entire team of six pulled basically a five-week "crunch". In the 48 hours before go-live, I worked 41 of them, including two back-to-back all-nighters. Deep in the middle of that, I posted what has been my most successful question to date. During all that time there was never any talk of "failure". It was always "get it done, regardless of the pain." Now that the thing is over and we as an organization have had some time to sit back and take stock of what we learned, one question has occurred to me. I can't say I've ever taken part in a project that I'd say had "failed". Plenty that were late or over budget, some disastrously so, but I've always ended up delivering SOMETHING. Yet I hear about "failed IT projects" all the time. I'm wondering about people's experience with that. What were the parameters that defined "failure"? What was the context? In our case, we are a software shop with external clients. Does a project that's internal to a large corporation have more space to "fail"? When do you make that call? What happens when you do? I'm not at all convinced that doing what we did was a smart business move. It wasn't my call (I'm just a code monkey), but I'm wondering if it might have been better to cut our losses, say we're not delivering, and move on. I don't say that just due to the sting of the long hours--the company royally lost its shirt on the project, plus the intangible costs to the company in terms of employee morale and loyalty were large. Factor that against the PR hit of failing to deliver a high-profile project like this one... and I don't know what the right answer is.

    Read the article

  • During interviews, how do I gauge a company's respect for my position?

    - by Bluu
    I'm a web developer who previously joined a software company without knowing that its value and respect went to big-data analysis, not its website. Sure, they needed a public-facing website, but I eventually found that the most exciting, valued projects there went to the data teams. Realizing this, members of the web team were picked off and switched teams, making it hard for those left behind to keep up with the workload, and making us look bad. At times it seemed the company culture sneered at us, wondering, "What does that team even do here?" A friend of mine had the opposite problem at another software company. All he wanted to do was crunch big numbers, yet he complained that the rest of the company wouldn't shut up about developing the usability of their website. Meanwhile his analytics team languished. I've also heard of salespeople getting love at a company while engineering as a whole is undervalued, or vice versa. As for my story, if I could have known the company was like that, I might have avoided the job in the first place. So, before I join a new company, how do I gauge its actual respect for my programming role? For its other roles? I want to avoid companies that aren't serious about my particular focus in programming, or, perhaps bigger picture, companies that don't value everybody who works there. (Note: I think gauging a company's attitude toward the basic needs of its programmers is covered by these related questions.)

    Read the article

  • 2d game view camera zoom, rotation & offset using 'Filter' / 'Shader' processing?

    - by Arthur Wulf White
    I wish to add the ability to zoom in, zoom out, rotate and move the view in a top-down view over a collection of points and lines in a large 2D map. I split the map into a grid so I only need to render the points that are 'near' the camera. My question is, how do I render a point A(Xp, Yp) assuming the following details:

    • Offset of the camera POV from the origin of the map: (Xc, Yc), meaning the camera center is positioned on top of that point; if there's a point at (Xc, Yc), it is positioned in the center of the screen.
    • Rotation angle: alpha
    • Scale: S

    Read my answer first; I am thinking there is a more optimized solution, thanks. My question is how to include the following improvement. I read in the AS3 Bible book that, regarding ShaderInput: "You can use these methods to coerce Pixel Bender to crunch huge sets of data masquerading as images, without doing too much work on the ActionScript side to make them look like images." Meaning that if I am performing the same linear function on a lot of items, I can do it all at once if I use shaders correctly, and save processing time. Does anyone know how that is accomplished? Here is a sample of what I mean: http://wonderfl.net/c/eFp0/
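
    For reference, the usual world-to-screen transform for that camera model (a sketch only, using one common sign convention; swap the signs on the \sin terms if your rotation runs the other way) is: translate by the camera offset, rotate by -\alpha, scale by S, then recenter on a W \times H viewport:

      x' = S\,\big[(X_p - X_c)\cos\alpha + (Y_p - Y_c)\sin\alpha\big] + W/2
      y' = S\,\big[-(X_p - X_c)\sin\alpha + (Y_p - Y_c)\cos\alpha\big] + H/2

    Because this is the same affine function applied to every point, it is exactly the kind of batch linear job the ShaderInput quote describes: pack the (X_p, Y_p) pairs into an image-shaped buffer and let a Pixel Bender kernel evaluate the two lines above for all points in one pass.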

    Read the article

  • Case studies for successful service (project) based software development businesses without constant overtime from its employees [closed]

    - by Ryan Taylor
    I work for an IT company that is primarily services (project) based rather than product based. All software engineers are salaried. The company has set new expectations that everyone should work 48 hours per week instead of 40. Note, this isn't occasional overtime due to crunches; this is the new 40. The reasoning is that this enables the company to provide benefits to its employees, such as monetary incentives and training, because the company is more profitable: more hours worked = more billable hours = larger profit. I understand the need for profitability and the occasional crunch time, and I have put in the extra hours when it was needed and beneficial to the project. However, I am also very sensitive to work-life balance and have raised my concerns about the new expectation. My employer is open to other methods of increasing profitability, so I hold out hope that we can turn things around before it becomes a horrible place to work. How does a services-based company become more profitable without increasing the number of hours expected from its salaried employees? Are there any case studies showing the pros and cons of consistent overtime? Are there any case studies of a successful services-based business model (for software development companies) that does not require consistent overtime from its employees?

    Read the article

  • How to avoid the GameManager god object?

    - by lorancou
    I just read an answer to a question about structuring game code. It made me wonder about the ubiquitous GameManager class, and how it often becomes an issue in a production environment. Let me describe this. First, there's prototyping. Nobody cares about writing great code; we just try to get something running to see if the gameplay adds up. Then there's a greenlight, and in an effort to clean things up, somebody writes a GameManager. Probably to hold a bunch of GameStates, maybe to store a few GameObjects, nothing big, really. A cute, little manager. In the peaceful realm of pre-production, the game is shaping up nicely. Coders have proper nights of sleep and plenty of ideas to architect the thing with Great Design Patterns. Then production starts and soon, of course, there is crunch time. The balanced diet is long gone, the bug tracker is cracking with issues, people are stressed, and the game has to be released yesterday. At that point, usually, the GameManager is a real big mess (to stay polite). The reason for that is simple. After all, when writing a game, well... all the source code is actually there to manage the game. It's easy to just add this little extra feature or bugfix in the GameManager, where everything else is already stored anyway. When time becomes an issue, there's no way to write a separate class, or to split this giant manager into sub-managers. Of course this is a classical anti-pattern: the god object. It's a bad thing, a pain to merge, a pain to maintain, a pain to understand, a pain to transform. What would you suggest to prevent this from happening?

    Read the article

  • Programming ... where to start?

    - by agnesb
    For the last 4 months, I've been working tirelessly on a project with my partner, who is a super programmer. He did 100% of the mechanism that makes our site work. My job is to take care of the cosmetic aspects of the site... thus I should say I am good enough at CSS and HTML. However, since we are using Drupal to build our site, from time to time I need his help in order to figure out how to do the customization. Sometimes I get frustrated. I know that as a partner, I should know a little bit about how to program. However, during crunch time, when you have to deliver lightning fast (we built our site from scratch to finished in 4 weeks... and you are all welcome to come join the fun! It's a site for programmers!), there is no time to learn from the basics. All I can do is pick up whatever I need at the moment. Now that the site is launched, I am thinking it's time to do some learning. So, where should I start? My partner always said I need to start with Python. What's your take on this? Thanks.

    Read the article

  • Interesting SQL Sorting Issue

    - by rofly
    It's crunch time; the deadline for my most recent contract is coming in two days and almost everything is complete and working fine (knock on wood) except for one issue. In one of my stored procedures, I need to return a result set as follows:

      group_id | name
      A101     | Craig
      A102     | Craig
      Z101     | Craig
      Z102     | Craig
      A101     | Jim
      A102     | Jim
      Z101     | Jim
      Z102     | Jim
      B101     | Andy
      B102     | Andy
      Z101     | Andy
      Z102     | Andy

    The names need to be sorted by the first character of the group id and also include the Z101/Z102 entries. By sorting strictly by the group id, I get a result set as follows:

      A101     | Craig
      A102     | Craig
      A101     | Jim
      A102     | Jim
      B101     | Andy
      B102     | Andy
      Z101     | Andy
      Z102     | Andy
      Z101     | Jim
      Z102     | Jim

    I really can't think of a solution that doesn't involve making a cursor and bloating the stored procedure up more than it already is. I'm sure a great mind out there has an elegant solution, and I'm eager to see what the community can come up with. Thanks a ton in advance.
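
    One cursor-free approach (a sketch, not from the post; it assumes SQL Server 2005 or later and a source table or view here called results, a hypothetical name) is to sort on a windowed key, so that each name's block, Z rows included, is ordered by the smallest leading character that name owns:

      -- Group each name's rows together, order the groups by the minimum
      -- leading character of their group ids (A for Craig/Jim, B for Andy),
      -- then sort within each block by group_id so the Z rows fall last.
      SELECT group_id, name
      FROM results
      ORDER BY
          MIN(LEFT(group_id, 1)) OVER (PARTITION BY name),
          name,
          group_id;

    On the sample data this yields the desired set above: Craig's and Jim's blocks (minimum 'A') first, then Andy's (minimum 'B'), each including its Z101/Z102 rows.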

    Read the article

  • Does anyone know a better alternative to MS Excel's Solver?

    - by tundal45
    My company has to crunch a lot of data, and part of the process involves running the Solver and plotting a graph through the resulting data points. Obviously there is a lot of copy and paste involved, and the whole process is shaky, error prone and all-round cluster-fudge. I was wondering if there is an alternative to the Solver that can be used so that, even if we have to use Excel to plot the final graph, there will be a lot less data that needs to be copied and pasted back and forth. It would be great especially if the tool could be easily integrated into a .NET application, but I am open to suggestions that may require a little bit of code-fu to get this to work. Thanks!

    Read the article

  • MySQL Export with Column Heading

    - by st4nt0n
    Hello - I am very, very new to MySQL. I've got experience in general technical terms, but not with the syntax or concepts of MySQL. I have been tasked with exporting a table from MySQL into a pipe-delimited .txt or .xls that I can use to add 7500 more records to manually, then import back into the table. I tried to use INTO OUTFILE, but I don't get column headings, which I need for reference when merging the new records. Is there a good resource that can explain this to a complete novice? I would usually go down to my bookstore and start learning, but I'm on a bit of a time crunch. Thanks all!
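
    A common workaround (a sketch only; the table and column names below are hypothetical) is to SELECT the headings as string literals and UNION ALL them onto the data, since INTO OUTFILE itself never writes a header row:

      -- The first SELECT emits a literal header row; the second emits the data.
      SELECT 'id', 'name', 'email'
      UNION ALL
      SELECT id, name, email
      FROM contacts
      INTO OUTFILE '/tmp/contacts.txt'
        FIELDS TERMINATED BY '|'
        LINES TERMINATED BY '\n';

    In practice the literal row comes out first; if you want that guaranteed, add a sort-key column (0 for the header row, 1 for data) and ORDER BY it. Note that INTO OUTFILE writes to the database server's filesystem, not the client's.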

    Read the article

  • Advanced queries in HBase

    - by Teflon Ted
    Given the following HBase schema scenario (from the official FAQ)... How would you design an HBase table for a many-to-many association between two entities, for example Student and Course? I would define two tables:

      Student:
        student id
        student data (name, address, ...)
        courses (use course ids as column qualifiers here)

      Course:
        course id
        course data (name, syllabus, ...)
        students (use student ids as column qualifiers here)

    This schema gives you fast access to the queries "show all courses for a student" (student table, courses family) and "show all students for a course" (course table, students family). How would you satisfy the request "give me all the students that share at least two courses in common"? Can you build a "query" in HBase that will return that set, or do you have to retrieve all the pertinent data and crunch it yourself in code?

    Read the article

  • Architecture for analysing search result impressions/clicks to improve future searches

    - by Hais
    We have a large database of items (10m+) stored in MySQL and intend to implement search on the metadata of these items, taking advantage of something like Sphinx. The dataset will be changing slightly on a daily basis, so Sphinx will be re-indexing daily. However, we want the algorithm to self-learn and improve search results by analysing impression and click data, so that we provide better results for our customers on that search term, and possibly on other similar search terms too. I've been reading up on Hadoop and it seems like it has the potential to crunch all this data, although I'm still unsure how to approach it. Amazon has tutorials for compiling impression vs. click data using MapReduce, but I can't see how to get this data into a usable format. My idea is that when a search term comes in, I query Sphinx to get all the matching items from the dataset, then query the analytics (compiled on an hourly basis or similar) so that we know the most popular items for that search term, then cache the final results using something like Memcached, Membase or similar. Am I along the right lines here?
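
    For what it's worth, the hourly rollup could be as simple as a SQL aggregation over raw impression/click events (a sketch only; the table and column names are hypothetical, not from the post):

      -- Rank items per search term by click-through rate over the last 30 days.
      SELECT term,
             item_id,
             SUM(impressions) AS impressions,
             SUM(clicks)      AS clicks,
             SUM(clicks) / SUM(impressions) AS ctr
      FROM search_events_hourly
      WHERE event_date >= CURRENT_DATE - INTERVAL 30 DAY
      GROUP BY term, item_id
      ORDER BY term, ctr DESC;

    The per-term result is small enough to cache in Memcached/Membase keyed by the term, refreshed each time the hourly compile runs.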

    Read the article

  • Oracle Financial Analytics for SAP Certified with Oracle Data Integrator EE

    - by denis.gray
    Two days ago Oracle announced the release of Oracle Financial Analytics for SAP. With the amount of press this has garnered in the past two days, there's a key detail that can't be missed: this release is certified with Oracle Data Integrator EE, making the combination of Data Integration and Business Intelligence a force to contend with. Within the Oracle press release there were two important bullets:

    • Oracle Financial Analytics for SAP includes a pre-packaged ABAP code compliant adapter and is certified with Oracle Data Integrator Enterprise Edition to integrate SAP Financial Accounting data directly with the analytic application.
    • Helping to integrate SAP financial data and disparate third-party data sources is Oracle Data Integrator Enterprise Edition, which delivers fast, efficient loading and transformation of timely data into a data warehouse environment through its high-performance Extract, Load and Transform (E-LT) technology.

    This is very exciting news, demonstrating Oracle's overall commitment to Oracle Data Integrator EE. This is a great way to start off the new year, and we look forward to building on this momentum throughout 2011. The following links contain additional information and media responses about the Oracle Financial Analytics for SAP release.

    • IDG News Service (also appeared in PC World, Computer World, CIO): "Oracle is moving further into rival SAP's turf with Oracle Financial Analytics for SAP, a new BI (business intelligence) application that can crunch ERP (enterprise resource planning) system financial data for insights."
    • Information Week: "Oracle talks a good game about the appeal of an optimized, all-Oracle stack. But the company also recognizes that we live in a predominantly heterogeneous IT world."
    • CRN: "While some businesses with SAP Financial Accounting already use Oracle BI, those integrations had to be custom developed. The new offering provides pre-built integration capabilities."
    • ECRM Guide: "Among other features, Oracle Financial Analytics for SAP helps front-line managers improve financial performance and decision-making with what the company says is comprehensive, timely and role-based information on their departments' expenses and revenue contributions."

    SAP Getting Started Guide for ODI on OTN: http://www.oracle.com/technetwork/middleware/data-integrator/learnmore/index.html
    For more information on ODI and its SAP connectivity, please review the Oracle® Fusion Middleware Application Adapters Guide for Oracle Data Integrator 11g Release 1 (11.1.1).

    Read the article

  • What Would a CyberWar Do To Your Business?

    - by Brian Dayton
    In mid-February the Bipartisan Policy Center in the United States hosted Cyber ShockWave, a simulation of how the country might respond to a catastrophic cyber event. An attack takes place, they can't isolate where it came from or who did it, simulated press reports and market impacts... and the participants in the exercise have to brief the President and advise him/her on what to do. Last week, former Department of Homeland Security Secretary Michael Chertoff, who participated in the exercise, summarized his findings in Federal Computer Week. The article, given FCW's readership and the topic, is obviously focused on the public sector and US federal policies. However, it touches on some broader issues that impact the private sector as well--issues applicable to any government and country/region--such as:

    • How would the US (or any) government collaborate to identify and defeat such an attack? Chertoff calls this out as a current gap. How do the public and private sectors collaborate today? How would the massive and disparate collection of agencies and companies act together in a crunch?
    • What would the impact on industries and global economies be? Chertoff, and a companion article in Government Computer News, only touch briefly on the subject, focusing on the impact on capital markets. "There's no question this has a disastrous impact on the economy," said Stephen Friedman, former director of the National Economic Council under President George W. Bush, who played the role of treasury secretary. "You have financial markets shut down at this point, ordinary transactions are dramatically depleted, there's no question that this has a major impact on consumer confidence."

    That got me thinking:

    • How would it impact Oracle's customers? I know they have business continuity plans--is this one of their scenarios? What if it's not? How would it impact manufacturing lines, ATM networks, customer call centers...
    • How would it impact me and the companies I rely on? The supermarket down the street, my Internet Service Provider, the service station where I bought gas last night.

    I sure don't have any answers, and neither do Chertoff or the participants in the exercise. "I have to tell you that ... we are operating in a bit of unchartered territory," said Jamie Gorelick, a former deputy attorney general who played the role of attorney general in the exercise. But it is a good thing that governments and businesses are considering this scenario and doing what they can to prevent it from happening.

    Read the article

  • Windows Azure Recipe: Big Data

    - by Clint Edmonson
    As the name implies, what we're talking about here is the explosion of electronic data that comes from huge volumes of transactions, devices, and sensors being captured by businesses today. This data often comes in unstructured formats and/or arrives too fast for us to effectively process in real time. Collectively, we call these the 4 big data V's: Volume, Velocity, Variety, and Variability. These qualities make this type of data best managed by NoSQL systems like Hadoop, rather than by a conventional Relational Database Management System (RDBMS). We know that there are patterns hidden inside this data that might provide competitive insight into market trends. The key is knowing when and how to leverage these "NoSQL" tools combined with traditional business tools such as SQL-based relational databases, warehouses, and other business intelligence tools.

    Drivers: petabyte-scale data collection and storage; business intelligence and insight.

    Solution: The sketch below shows one of many big data solutions, using Hadoop's unique highly scalable storage and parallel processing capabilities combined with Microsoft Office's Business Intelligence components to access the data in the cluster.

    Ingredients:
    • Hadoop - this big data industry heavyweight provides both large-scale data storage infrastructure and a highly parallelized map-reduce processing engine to crunch through the data efficiently. Here are the key pieces of the environment:
      - Pig - a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs.
      - Mahout - a machine learning library with algorithms for clustering, classification and batch-based collaborative filtering that are implemented on top of Apache Hadoop using the map/reduce paradigm.
      - Hive - data warehouse software built on top of Apache Hadoop that facilitates querying and managing large datasets residing in distributed storage. Directly accessible to Microsoft Office and other consumers via add-ins and the Hive ODBC data driver (see the HiveQL sketch below).
      - Pegasus - a peta-scale graph mining system that runs in a parallel, distributed manner on top of Hadoop and that provides algorithms for important graph mining tasks such as Degree, PageRank, Random Walk with Restart (RWR), Radius, and Connected Components.
      - Sqoop - a tool designed for efficiently transferring bulk data between Apache Hadoop and structured data stores such as relational databases.
      - Flume - a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data to HDFS.
    • Database - directly accessible to Hadoop via the Sqoop-based Microsoft SQL Server Connector for Apache Hadoop; data can be efficiently transferred to traditional relational data stores for replication, reporting, or other needs.
    • Reporting - provides easily consumable reporting when combined with a database being fed from the Hadoop environment.

    Training: These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above.
    • Hadoop Learning Resources (20+ tutorials and labs) - a huge collection of resources for learning about all aspects of Apache Hadoop-based development on Windows Azure and the Hadoop and Windows Azure ecosystems.
    • SQL Azure (7 labs) - Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
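
    To make the Hive ingredient above concrete, here is a minimal HiveQL sketch (the table, columns, and HDFS path are hypothetical, not from the post) that maps an external table over raw log files already in the cluster and crunches a summary a BI tool could consume through the Hive ODBC driver:

      -- Point an external Hive table at tab-delimited log files in HDFS.
      CREATE EXTERNAL TABLE web_logs (
        log_ts  STRING,
        url     STRING,
        status  INT
      )
      ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
      LOCATION '/data/web_logs';

      -- Summarize the ten most requested URLs; Hive compiles this
      -- into map-reduce jobs that run across the cluster.
      SELECT url, COUNT(*) AS hits
      FROM web_logs
      GROUP BY url
      ORDER BY hits DESC
      LIMIT 10;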

    Read the article

  • What is the standard term for my role?

    - by sigil
    I'm doing work that involves writing code and managing developers in a "special projects" division of a large company. I'd like to define my role better and figure out if there's an industry-standard term for what I do, so that it will be easier for me to research best practices and work on a career path. What I do all day:

    • A macro that connects an Excel sheet to an Access database is acting funny; I get called in to figure out what's happening and debug it.
    • Someone needs data extracted from a bunch of files on SharePoint. I figure out a client-side solution because I'm not authorized to do anything server-side, and getting IT to do anything would take several months and need a business case.
    • A manager wants a new data entry tool for their team. I interview the manager and team members to work out the functional requirements, then design/develop/test the application.
    • Someone needs a VBA script to crunch some data for their presentation that's due in two hours. I drop everything I'm doing to hack out a quick script and run the analysis, without much in the way of testing.
    • A developer has been hired to build a database for one of the teams, since I'm working on too many different things and don't have time to take this project on in the timeframe required. I direct his work and push him to meet certain deadlines, interview stakeholders to get more info that will help him figure out how to build the necessary forms, and modify the functional requirements of the database to fit the timeframe.
    • Someone wants to load a set of data into a GIS system and set up an ongoing refresh and reporting of this data set. I facilitate the conversation between the GIS developers and the owners of this data set, and design a demo application as proof of concept.

    It's kind of an "all-purpose programming and IT management" position, but it's not officially IT because the company has an actual IT department with a rigorously defined system of submitting requests, developing code, and managing projects. What I do, I guess, is more of a handyman job, where stuff falls to me because I'm the geekiest one in the room. Is there a standard term in the software world for what I do?

    Read the article

  • Windows Server 2008 / SQL 2008 Licensing for Authenticated Web Application

    - by MikeM
    Hello, I'm trying to crunch some numbers to see what the software costs are for hosting an application we are developing. Users will not be anonymous - they will need to log in.

    SQL Server 2008: SQL Server licensing is easy - it will be licensed per processor. No real fuss there. The cost of CALs would be much higher for the number of users as compared to the processor licenses.

    Windows Server 2008: This is where it gets trickier. We need to license the OS for both the web servers (there will be a couple) plus the database servers (also a couple). The web servers could run on the Web Edition without a need for CALs, but if you continue reading, you will see that may not matter much, because I will likely have user CALs for each user anyway. We can't use the "External Connector" for any of the Windows licenses, because that doesn't cover customers who are paying to access a hosted application. We can't use the Web Edition for the SQL Servers, because that license only allows a database running on Web Edition to host data for the local web application (i.e. other web servers can't connect to it). So that leaves us with the "full" editions of Windows Server for the database server OS.

    I find this a little ridiculous, and I feel as though I must be missing something, but it looks to me like I will actually need to buy a CAL for every user who signs up to use our service. I feel like I'm missing something, because that means that for every user, I have to shell out $40 for a CAL. That could be one or two years' worth of revenue from each user for an inexpensive service! Is there any way to serve a web application to authenticated users without paying for individual Windows Server CALs, if the web servers and SQL servers are separate boxes?

    Read the article

  • when to upgrade server to include more cores, versus more processors, versus additional server?

    - by gkdsp
    The server hosting market is separated into single-, dual-, quad-, etc., processor machines, where each processor has several cores, or CPUs. My company will offer a Linux-based web application that relies on an Apache web server and a middle tier for business logic. The middle tier is used to crunch math and return results to a client. Many clients may access the application simultaneously. The company will start with one processor having 4 cores. I'm trying to understand how the app uses the cores and then how to scale the application as business grows, in terms of servers/processors/cores. For example, I'd assume initially one core would be used for Apache and the other 3 used to process clients' requests for math crunching...

    Question 1: Does that mean, with the 3 cores available, I can handle 3 separate client requests simultaneously (e.g. 1 for each of 3 cores)? I mean, except for the shared RAM, is this effectively like having 3 individual machines (from the point of view of processing client requests simultaneously)? Or can only one client's request be processed at any one time, with that request divided up among up to 3 cores, depending on the type of process doing the math crunching and whether or not it can take advantage of multithreading (so the number of cores affects how fast any one client request completes)? I'm confused about what the cores mean to the application here.

    Question 2: As the business grows and more client requests need to be processed, should the server be upgraded to (A) a new machine with more cores, (B) a new machine with two processors, 4 cores each, or (C) keep the original server and add another server with a single processor? Which route provides the most efficient way to scale the application, in terms of processing more client requests per time interval? Is the choice limited, for example, by RAM (when you need more RAM than one box can handle, it's time to add another server), or by something else?

    Question 3: Is the total number of client requests processed simultaneously equal to the number of cores times the number of servers (minus the one core for Apache)?

    Read the article

  • What are the typical methods used to scale up/out email storage servers?

    - by nareshov
    Hi, here is what I've tried: I have two email storage architectures, old and new.

    Old:
    • courier-imapd on several (18+) 1TB-storage servers
    • if one of them shows signs of running out of disk space, we migrate a few email accounts to another server
    • the servers don't have replicas; no backups either

    New:
    • dovecot2 on a single huge server with 16TB (SATA) storage and a few SSDs
    • we store fresh mail on the SSDs and run a doveadm purge to move mail older than a day to the SATA disks
    • there is an identical server which holds a max-15-min-old rsync backup from the primary server
    • higher-ups/management wanted to pack in as much storage as possible per server in order to minimize the cost of SSDs per server
    • the rsync'ing is done because GlusterFS wasn't replicating well under that high small/random IO
    • scaling out was expected to be done by provisioning another pair of such huge servers
    • on facing disk-crunch issues like in the old architecture, email accounts would be moved manually

    Concerns/doubts:
    • I'm not convinced the synchronously-replicated filesystem idea works well for heavy random/small IO. GlusterFS isn't working for us yet; I'm not sure if there's another filesystem out there for this use case.
    • The idea was to keep identical pairs and use DNS round-robin for email delivery and IMAP/POP3 access, and if one of the servers went down for whatever reason (planned/unplanned), we'd move the IP to the other server in the pair.
    • In filesystems like Lustre, I get the advantage of a single namespace, whereby I do not have to worry about manually migrating accounts around and updating MAILHOME paths and other metadata/data.

    Questions:
    • What are the typical methods used to scale up/out with the traditional software (courier-imapd / dovecot)?
    • Does traditional software that stores on a locally mounted filesystem pose a roadblock to scaling out with minimal "problems"?
    • Does one have to re-write (parts of) these to work with an object storage of some sort, such as OpenStack object storage?

    Read the article

  • The Evolution of Television and Home Entertainment

    - by Bill Evjen
    This is a group that is focused on entertainment in the aviation industry. I am attending their conference for the first time as it relates to my job at Swank Motion Pictures and what we do for our various markets. I will post my notes here.

    The Evolution of Television and Home Entertainment, by Patrick Cosson, Veebeam

    TV has been the center of living rooms for some time. Conversations and culture evolve around the TV. The way we consume this content has been changing dramatically. After TV, we had the MTV revolution of TV. It created shorter attention spans; it made us more materialistic, narcissistic, and not easily impressed. Then we came to the Internet. The amount of content has expanded. It contains a ton of user-generated content, and it provides filtering, organization, distribution. We now have a problem: we are in the age of digital excess. We can access whatever we want. In conjunction with this, we are moving. The challenge we have now is curation.

    The trends we see:
    • a rapid shift from scheduled to on-demand consumption
    • a move to Internet protocols from cable
    • rapid fragmentation of media
    • a transition from the TV set to a variety of screens
    • social connections bringing mediators and amplifiers

    TiVo - the shift to on demand. It is because of a time crunch. Provides personal experiences. Once old consumption habits are changed, there is no way back! Experience shows that people load up content and then bring it with them on planes, to hotels, etc.

    Rapid fragmentation of media sources. Many new professional content sources and channels, the rise of digital distribution, and the rise of user-generated content contribute to the wealth of content sources and abundant choice. Netflix, BBC iPlayer, hulu, Pandora, iTunes, Amazon Video, Vudu, Voddler, Spotify (these companies didn't exist 5 years ago). People now expect this kind of consumption. People are now thinking about how to deliver all these tools.

    Transition from the TV set to multi-screens. The TV screen has traditionally been the dominant consumption screen for TV and video. Now the PC, game consoles, and various mobile devices are rapidly becoming common video devices. Multi-screens are now the norm.

    Social connections becoming key mediators. What increasingly funnels traffic on the web, social networking enablers, will become an integral part of the discovery, consumption and sharing model for television. The revolution will be broadcast on Facebook and Twitter.

    There is business disruption: a lot of new entrants, rapid internationalization, increasing competition from existing media players, and a fragmenting audience base.

    Web browser: freedom to access any site, and the fight over the walled garden. Most devices are not powerful enough to support a full browser. The PC will always be present in the living room; a wireless link between PC and TV outputs 1080p, plays anything, and is secure.

    Key players and their challenges:
    • Services - Internet media is increasingly interconnected to social media and publicly shared UGC. Content delivery is moving to IPTV. Rights management issues are creating silos and hindering a great user experience and growth.
    • Devices - devices are becoming people's windows into all kinds of media from all kinds of sources. There won't be a consolidation of the device landscape; rather the opposite. Finding the right niche makes the most sense.

    We are moving to an on-demand world of streaming. People want full access to anything.

    Read the article

  • Acceptance tests done first...how can this be accomplished?

    - by Crazy Eddie
    The basic gist of most Agile methods is that a feature is not "done" until it's been developed, tested, and in many cases released. This is supposed to happen in quick-turnaround chunks of time such as "Sprints" in the Scrum process. A common part of Agile is also TDD, which states that tests are done first. My team works on a GUI program that does a lot of specific drawing and such. In order to provide tests, the testing team needs to be able to work with something that at least attempts to perform the things they are trying to test. We've found no way around this problem. I can very much see where they are coming from, because if I were trying to write software that targeted some basically mysterious interface, I'd have a very hard time. Although we have behavior fairly well specified, the exact process of interacting with various UI elements, when it comes to automation, seems too unique to a feature to allow testers to write automated scripts to drive something that does not exist. Even if we could, a lot of things end up turning up later as having been missing from the specification.

    One thing we considered doing was having the testers write test "scripts" that are more like a set of steps that must be performed, as described from a use-case perspective, so that they can be "automated" by a human being. These can then be performed by the developer(s) writing the feature and/or verified by someone else. When the testers later get an opportunity, they automate the "script", mainly for regression purposes. This didn't end up catching on in the team, though. The testing part of the team is actually falling behind us by quite a margin. This is one reason why the apparently extra time of developing a "script" for a human being to perform just did not happen... they're under a crunch to keep up with us developers. If we waited for them, we'd get nothing done. It's not their fault, really; they're a bottleneck, but they're doing what they should be and working as fast as possible. The process itself seems to be set up against them. Very often we end up having to go back a month or more in what we've done to fix bugs that the testers have finally gotten around to checking. It's an ugly truth that I'd like to do something about.

    So what do other teams do to solve this fail cascade? How can we get testers ahead of us, and how can we make it so that there's actually time for them to write tests for the features we do in a sprint, without making us sit and twiddle our thumbs in the meantime? As it currently goes, getting a feature "done", using Agile definitions, would mean having developers work for one week, then testers work the second week, with developers hopefully able to fix all the bugs they come up with in the last couple of days. That's just not going to happen, even if I agreed it was a reasonable solution. I need better ideas...

    Read the article

< Previous Page | 1 2 3  | Next Page >