Search Results

Search found 11409 results on 457 pages for 'large teams'.


  • Ubuntu 12.04 - Terminal - Huge/Large text on each command line [closed]

    - by gotqn
    Possible Duplicate: Is it possible to change my terminal window prompt text? I have been using "Ubuntu 12.04" for a few days now (no previous Linux experience at all), and I have noticed that the prompt on each command line is much longer than in many of the examples I see on the web. For example, I have: And I want to remove the "gotqn-System-Product-Name" part, because it takes up too much space. What should I do to change this?
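
    A common fix (a minimal sketch, assuming the default bash shell; the exact prompt string on a stock Ubuntu install may differ) is to override the PS1 variable in ~/.bashrc so the hostname segment is dropped:

        # in ~/.bashrc: show only the user name and working directory,
        # dropping the "gotqn-System-Product-Name" hostname segment
        PS1='\u:\w\$ '

        # reload the file so the change takes effect in the current terminal
        source ~/.bashrc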

    Read the article

  • Javascript frameworks for large development teams

    - by pllee
    My company is reevaluating what kind of web framework we want to use. We are currently using the Ext 4.0 framework, but there are questions being raised that it may not be the right framework to use. I like what Ext has to offer (rich GUIs, a data package, and a class system); are there other frameworks out there that are similar? Are there frameworks out there tailored to medium/large software companies? Info: potentially hundreds of developers converting thick-client screens to the web. Data modeling is important, as well as rich GUI support. Maintainability and uniformity across multiple products are important as well. Any info is greatly appreciated.

    Read the article

  • AS3 Working With Arbitrarily Large Files

    - by Kekoa
    I am trying to read a very large file in AS3 and am having problems with the runtime just crashing on me. I'm currently using a FileStream to open the file asynchronously. This does not work (it crashes without an Exception) for files bigger than about 300 MB.

        _fileStream = new FileStream();
        _fileStream.addEventListener(IOErrorEvent.IO_ERROR, loadError);
        _fileStream.addEventListener(Event.COMPLETE, loadComplete);
        _fileStream.openAsync(myFile, FileMode.READ);

    Looking at the documentation, it sounds like the FileStream class still tries to read the entire file into memory (which is bad for large files). Is there a more suitable class to use for reading large files? I would really like something like a buffered FileStream class that only loads the bytes from the file that are going to be read next. I expect that I may need to write a class that does this for me, but then I would need to read only a piece of the file at a time. I'm assuming that I can do this by setting the position and readAhead properties of the FileStream to read a chunk out of the file at a time. I would love to save some time if a class like this already exists. Is there a good way to process large files in AS3, without loading the entire contents into memory?
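
    A chunked-read sketch along those lines (a minimal, untested outline: it assumes the AIR FileStream behavior that, with openAsync, the readAhead property caps how much is buffered and ProgressEvent.PROGRESS fires as data arrives; myFile is the File from the question and processChunk is a hypothetical handler):

        import flash.events.ProgressEvent;
        import flash.filesystem.FileMode;
        import flash.filesystem.FileStream;
        import flash.utils.ByteArray;

        var _fileStream:FileStream = new FileStream();
        _fileStream.readAhead = 8 * 1024 * 1024; // buffer at most ~8 MB past the read position
        _fileStream.addEventListener(ProgressEvent.PROGRESS, onProgress);
        _fileStream.openAsync(myFile, FileMode.READ);

        function onProgress(e:ProgressEvent):void {
            var chunk:ByteArray = new ByteArray();
            // consume only what has arrived so far, freeing the stream's buffer
            _fileStream.readBytes(chunk, 0, _fileStream.bytesAvailable);
            processChunk(chunk); // hypothetical per-chunk handler
        }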

    Read the article

  • Bring 2 GB Large Pages to Solaris 10

    - by Giri Mandalika
    Few facts:

    - 8 KB is the default page size on Oracle Solaris 10 and 11 as of this writing
    - Both hardware and software must have support for 2 GB large pages
    - SPARC T4 processors are capable of supporting 2 GB pages
    - Oracle Solaris 11 kernel has in-built support for 2 GB pages
    - Oracle Solaris 10 has no default support for 2 GB pages
    - Memory intensive 64-bit applications may benefit the most from using 2 GB pages

    Prerequisites:

    - OS: Oracle Solaris 10 8/11 (Update 10) or later
    - Hardware: Oracle servers with SPARC T4 processors, e.g., SPARC T4-1, T4-2 or T4-4, SPARC SuperCluster T4-4

    Steps to enable 2 GB large pages on Oracle Solaris 10:

    1. Install the latest kernel patch, or ensure that 147440-04 or later was installed (check the patch download instructions)
    2. Add the following line to /etc/system and reboot:

           set max_uheap_lpsize=0x80000000

    3. Finally, check the output of the following command when the system is back online:

           pagesize -a

    e.g.,

        % pagesize -a
        8192          <-- 8K
        65536         <-- 64K
        4194304       <-- 4M
        268435456     <-- 256M
        2147483648    <-- 2G
        % uname -a
        SunOS jar-jar 5.10 Generic_147440-21 sun4v sparc sun4v

    Also see:

    - Solaris 9 or later: More performance with Large Pages (MPSS)
    - Large page support for instructions (text) in Solaris 10 1/06
    - Solaris: How To Disable Out Of The Box (OOB) Large Page Support?
    - Memory fragmentation / Large Pages on Solaris x86

    Read the article

  • Not sure how to link json 100% in php

    - by ronhdoge
    I'm trying to create an RSS feed that my Droid app reads, but I have some holes that I can't figure out how to fix. The RSS link page is http://www.mandarich.com/mandarichServer/mlb/indexbaseball.php. When reading the RSS, I can see that the icon is missing on some teams and can't figure out why, and I can't figure out Saint Louis at all. The code I have for the PHP is as follows:

        <?php
        $teams["boston"] = "bostonredsox.gif";
        $teams["nyyankees"] = "newyorkyankes.gif";
        $teams["baltimore"] = "baltimoreorioles.gif";
        $teams["tampa"] = "tampabayrays.gif";
        $teams["toronto"] = "torontobluejays.gif";
        $teams["atlanta"] = "atlantabraves.gif";
        $teams["florida"] = "floridamarlins.gif";
        $teams["nymets"] = "newyorkmets.gif";
        $teams["philadelphia"] = "philadelphiaphillies.gif";
        $teams["washington"] = "washingtonnationals.gif";
        $teams["chicagosox"] = "chicagowhitesox.gif";
        $teams["cleveland"] = "clevelandindians.gif";
        $teams["detroit"] = "detroittigers.gif";
        $teams["kansas"] = "kansascityroyals.gif";
        $teams["minnesota"] = "minnesotatwins.gif";
        $teams["chicagocubs"] = "chicagocubs.gif";
        $teams["cincinnati"] = "cinncinatireds.gif";
        $teams["houston"] = "houstonastros.gif";
        $teams["milwaukee"] = "milwaukeebrewers.gif";
        $teams["pittsburgh"] = "pitsburghpirates.gif";
        $teams["st.louis"] = "stlouiscardinals.gif";
        $teams["laangels"] = "losangelesangels.gif";
        $teams["oakland"] = "oaklandathletics.gif";
        $teams["seattle"] = "seattlemariners.gif";
        $teams["texas"] = "texasrangers.gif";
        $teams["arizona"] = "arizonadiamondbacks.gif";
        $teams["colorado"] = "coloradorockies.gif";
        $teams["ladodgers"] = "losangelesdodgers.gif";
        $teams["sandiego"] = "sandiegopadres.gif";
        $teams["sanfrancisco"] = "sanfranciscogiants.gif";

        $abbr["arizona"] = "ARI";
        $abbr["oakland"] = "OAK";
        $abbr["baltimore"] = "BAL";
        $abbr["tampa"] = "TAM";
        $abbr["boston"] = "BOS";
        $abbr["nyyankees"] = "NYY";
        $abbr["texas"] = "TEX";
        $abbr["toronto"] = "TOR";
        $abbr["laangels"] = "LAA";
        $abbr["atlanta"] = "ALT";
        $abbr["colorado"] = "COL";
        $abbr["philadelphia"] = "PHI";
        $abbr["florida"] = "FLA";
        $abbr["milwaukee"] = "MIL";
        $abbr["washington"] = "WAS";
        $abbr["chicagosox"] = "CHW";
        $abbr["cleveland"] = "CLE";
        $abbr["detroit"] = "DET";
        $abbr["seattle"] = "SEA";
        $abbr["sanfrancisco"] = "SFO";
        $abbr["st.louis"] = "STL";
        $abbr["chicagocubs"] = "CHC";
        $abbr["houston"] = "HOU";
        $abbr["nymets"] = "NYM";
        $abbr["cincinnati"] = "CIN";
        $abbr["sandiego"] = "SDG";
        $abbr["ladodgers"] = "LAD";
        $abbr["pittsburgh"] = "PIT";
        $abbr["minnesota"] = "MIN";
        $abbr["kansas"] = "KAN";
        ?>

    Read the article

  • Large Image in .NET

    - by Modir
    I want to create a large image in C#. (I have some photos with a large size (4800 * 4800), and I want to merge these photos.) I use Bitmap, but it doesn't work (Error: Invalid Parameter).
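
    A common approach (a minimal sketch, under the assumption that the "Invalid Parameter" error comes from GDI+ failing to allocate a very large Bitmap; the file names here are hypothetical) is to create one target Bitmap just big enough for the merged result and draw each photo into it:

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;

        // Sketch: merge two 4800 x 4800 photos side by side. GDI+ needs
        // contiguous memory for the whole target image, so very large
        // merges can still fail; 24bpp keeps the footprint smaller than
        // the default 32bpp.
        using (var left = new Bitmap("photo1.jpg"))
        using (var right = new Bitmap("photo2.jpg"))
        using (var merged = new Bitmap(left.Width + right.Width,
                                       Math.Max(left.Height, right.Height),
                                       PixelFormat.Format24bppRgb))
        using (var g = Graphics.FromImage(merged))
        {
            g.DrawImage(left, 0, 0, left.Width, left.Height);
            g.DrawImage(right, left.Width, 0, right.Width, right.Height);
            merged.Save("merged.jpg", ImageFormat.Jpeg);
        }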

    Read the article

  • Large Image in C#

    - by Modir
    Hi friends. I want to create a large image in C#. (I have some photos with a large size (4800 * 4800), and I want to merge these photos.) I use Bitmap, but it doesn't work (Error: Invalid Parameter). Please guide me. Thanks!

    Read the article

  • Parsing large delimited files with dynamic number of columns

    - by annelie
    Hi, what would be the best approach to parsing a delimited file when the columns are unknown before parsing the file? The file format is Rightmove v3 (.blm); the structure looks like this:

        #HEADER#
        Version : 3
        EOF : '^'
        EOR : '~'
        #DEFINITION#
        AGENT_REF^ADDRESS_1^POSTCODE1^MEDIA_IMAGE_00~   // can be any number of columns
        #DATA#
        agent1^the address^the postcode^an image~
        agent2^the address^the postcode^^~   // the records have to have the same number of columns as specified in the definition, however they can be empty etc
        #END#

    The files can potentially be very large; the example file I have is 40 MB, but they could be several hundred megabytes. Below is the code I had started on before I realised the columns were dynamic. I'm opening a FileStream, as I read that was the best way to handle large files. I'm not sure my idea of putting every record in a list and then processing it is any good, though; I don't know if that will work with such large files.

        List<string> recordList = new List<string>();
        try
        {
            using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read))
            {
                StreamReader file = new StreamReader(fs);
                string line;
                while ((line = file.ReadLine()) != null)
                {
                    string[] records = line.Split('~');
                    foreach (string item in records)
                    {
                        if (item != String.Empty)
                        {
                            recordList.Add(item);
                        }
                    }
                }
            }
        }
        catch (FileNotFoundException ex)
        {
            Console.WriteLine(ex.Message);
        }

        foreach (string r in recordList)
        {
            Property property = new Property();
            string[] fields = r.Split('^');
            // can't do this as I don't know which field is the post code
            property.PostCode = fields[2];
            // etc
            propertyList.Add(property);
        }

    Any ideas on how to do this better? It's C# 3.0 and .NET 3.5, if that helps. Thanks, Annelie
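
    One way to handle the dynamic columns (a minimal sketch, not the poster's code: read the #DEFINITION# section first, map each column name to its index, then look fields up by name while streaming the #DATA# section one record at a time) might look like this:

        using System;
        using System.Collections.Generic;
        using System.IO;

        // Sketch: resolve fields by the names declared in #DEFINITION#
        // instead of hard-coded indexes. Assumes one record per line, as
        // in the sample above; a fully robust parser would split on the
        // EOR character ('~') across line boundaries.
        var columnIndex = new Dictionary<string, int>();
        bool inDefinition = false, inData = false;

        using (var reader = new StreamReader(path)) // path as in the question
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                line = line.Trim();
                if (line == "#DEFINITION#") { inDefinition = true; continue; }
                if (line == "#DATA#") { inDefinition = false; inData = true; continue; }
                if (line == "#END#") break;

                if (inDefinition && line.Length > 0)
                {
                    string[] names = line.TrimEnd('~').Split('^');
                    for (int i = 0; i < names.Length; i++)
                        columnIndex[names[i]] = i;
                }
                else if (inData && line.Length > 0)
                {
                    string[] fields = line.TrimEnd('~').Split('^');
                    // any declared column can now be looked up by name
                    string postcode = fields[columnIndex["POSTCODE1"]];
                    // ... build the Property object here, one record at a
                    // time, so the whole file never has to sit in memory
                }
            }
        }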

    Read the article

  • AJAX get data from large HTML page as the large HTML page loads

    - by Ed
    Not entirely sure whether this has a name, but basically I have a large HTML page that is generated from results in a DB. Viewing the HTML page (which is a report) in a browser directly does not display all its contents immediately; it displays what it has, and additional HTML is added as the results from the DB are retrieved... Is there a way I can make an AJAX request to this HTML page and, as opposed to waiting until the whole page (report) is ready, start processing the response as the HTML report is loaded? Or is there another way of doing it? At the moment, when I make my AJAX request, it just sits there for a minute or two until the HTML page is complete... If context is of any use: the HTML report is generated by a Java servlet, and the page making the AJAX call is a JSP page. Unfortunately I can't very easily break the report up, because it is generated by BIRT (the Eclipse reporting extension). Thanks in advance.
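
    One way to attempt this (a sketch in plain browser JavaScript; whether responseText is readable at readyState 3 varies by browser, so treat that as an assumption to verify, and the report URL and handler are hypothetical) is to process whatever part of the response has arrived on each readystatechange:

        var xhr = new XMLHttpRequest();
        var seen = 0; // how much of the response has been processed so far

        xhr.onreadystatechange = function () {
            // readyState 3 = "loading": headers received, body still arriving
            if (xhr.readyState === 3 || xhr.readyState === 4) {
                var chunk = xhr.responseText.substring(seen);
                seen = xhr.responseText.length;
                if (chunk) {
                    processPartialReport(chunk); // hypothetical handler
                }
            }
        };
        xhr.open("GET", "reportServlet", true); // hypothetical report URL
        xhr.send();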

    Read the article

  • Upload large files in .NET

    - by Austin
    I've done a good bit of research to find an upload component for .NET that I can use to upload large files, that has a progress bar, and that can resume the upload of large files. I've come across some components like AjaxUploader, SlickUpload, and PowUpload, to name a few. Each of these options costs money, and only PowUpload does the resumable upload, but it does it with a Java applet. I'm willing to pay for a component that does those things well, but if I could write it myself that would be best. I have two questions: Is it possible to resume a file upload on the client without using Flash/Java/Silverlight? Does anyone have some code, or a link to an article, that explains how to write a .NET HttpHandler that will allow a streaming upload and an AJAX progress bar? Thank you, Austin [Edit] I realized I do need to be able to do resumable file uploads for my project; any suggestions for components that can do that?
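
    For the handler half of the question, here is a minimal sketch (assuming .NET 4.0 or later, where HttpRequest.GetBufferlessInputStream exists; on .NET 3.5 the request body would be buffered by ASP.NET first, and the target path here is hypothetical) of an IHttpHandler that streams the request body to disk in small chunks:

        using System.IO;
        using System.Web;

        public class StreamingUploadHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                // GetBufferlessInputStream avoids holding the whole upload
                // in w3wp memory before the handler sees it (.NET 4.0+).
                Stream input = context.Request.GetBufferlessInputStream();
                string target = context.Server.MapPath("~/App_Data/upload.bin"); // hypothetical path

                var buffer = new byte[64 * 1024];
                using (FileStream output = File.Create(target))
                {
                    int read;
                    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        output.Write(buffer, 0, read);
                        // bytes written so far could be persisted here, keyed by
                        // an upload id, for an AJAX poller to drive a progress bar
                    }
                }
                context.Response.Write("ok");
            }

            public bool IsReusable { get { return true; } }
        }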

    Read the article

  • Best Practices for Renaming, Refactoring, and Breaking Changes with Teams

    - by David in Dakota
    What are some best practices for refactoring and renaming in team environments? I bring this up with a few scenarios in mind:

    - A commonly referenced library is refactored to introduce a breaking change to any library or project that references it, e.g., arbitrarily changing the name of a method.
    - Projects are renamed, and solutions must be rebuilt with updated references to them.
    - The project structure is changed to be "more organized" by introducing folders and moving existing projects or solutions to new locations.

    Some additional thoughts/questions:

    - Should changes like this matter, or is the resulting pain an indication of structure gone awry?
    - Who should take responsibility for fixing errors related to a breaking change? If a developer makes a breaking change, should they be responsible for going into affected projects and updating them, or should they alert other developers and prompt them to change things?
    - Is this something that can be done on a scheduled basis, or is it something that should be done as frequently as possible? If a refactoring is put off for too long, it becomes increasingly difficult to reconcile; but at the same time, a day can be lost an hour at a time fixing a build because of changes happening elsewhere.
    - Is this a matter of a formal communication process, or can it be organic?

    Read the article

  • Large File Upload in SharePoint 2010

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information). Okay this is a big BIG B-I-G problem. And with SP2010 it’s going to be more prominent, because atleast at the server side, SharePoint can support large files much much better than SharePoint 2007 ever did. The issue with very large files being uploaded through any browser based API are - Reliably transferring gigabyte or bigger files without breakages over a protocol like HTTP, which is better suited for tiny transfers like images and text. Not killing your browser because it has to load all that in memory Not killing your web server because All that you upload through HTTP post, first gets streamed into IIS Memory, w3wp.exe memory before the ENTIRE FILE finishes uploading .. before it is stored. Which means, You cannot show an accurate and live progress bar of the upload, IIS gives you no such accurate metric of an upload. All the counters it gives you are approximate. Your w3wp.exe eats up all server memory – 4GB of it, for a 4GB upload. A thread is kept busy for the entire duration of the upload, thereby greatly limiting your web server’s capability to serve newer requests. Kills effective load balancing. Not killing your content database because, As you are uploading a very large file, that large file gets written sequentially into the DB, and therefore for a very large file very severely impacts the database performance. I had put together another video showing RBS usage in SharePoint 2010. I talked about many practical ramifications of using RBS in SharePoint in that video. Note that enabling large file support will never ever be a point and click job, simply because there are too many questions one needs to ask, and too many things one needs to plan for. However, one part that will remain common across all large file upload scenarios, in SharePoint or outside of SharePoint is to do it efficiently while not killing the web server. In this video, I describe using the Telerik Silverlight Upload control with SharePoint 2010 to enable efficient large file uploads in SharePoint. Presenting .. The video Comment on the article ....

    Read the article

  • T-SQL in Chicago – the LobsterPot teams with DataEducation

    - by Rob Farley
    In May, I’ll be in the US. I have board meetings for PASS at the SQLRally event in Dallas, and then I’m going to be spending a bit of time in Chicago. The big news is that while I’m in Chicago (May 14-16), I’m going to teach my “Advanced T-SQL Querying and Reporting: Building Effectiveness” course. This is a course that I’ve been teaching since the 2005 days, and have modified over time for 2008 and 2012. It’s very much my most popular course, and I love teaching it. Let me tell you why.

    For years, I wrote queries and thought I was good at it. I was a developer. I’d written a lot of C (and other, more fun languages like Prolog and Lisp) at university, and then got into the ‘real world’ and coded in VB, PL/SQL, and so on through to C#, and saw SQL (whichever database system it was) as just a way of getting the data back. I could write a query to return just about whatever data I wanted, and that was good. I was better at it than the people around me, and that helped. (It didn’t help my progression into management, then it just became a frustration, but for the most part, it was good to know that I was good at this particular thing.)

    But then I discovered the other side of querying – the execution plan. I started to learn about the translation from what I’d written into the plan, and this impacted my query-writing significantly. I look back at the queries I wrote before I understood this, and shudder. I wrote queries that were correct, but often a long way from effective. I’d done query tuning, but had largely done it without considering the plan, just inferring what indexes would help.

    This is not a performance-tuning course. It’s focused on the T-SQL that you read and write. But performance is a significant and recurring theme. Effective T-SQL has to be about performance – it’s the biggest way that a query becomes effective. There are other aspects too though – such as using constructs better. For example, I can write code that modifies data nicely, but if I haven’t learned about the MERGE statement and the way that it can impact things, I’m missing a few tricks.

    If you’re going to do this course, a good place to be is the situation I was in a few years before I wrote this course. You’re probably comfortable with writing T-SQL queries. You know how to make a SELECT statement do what you need it to, but feel there has to be a better way. You can write JOINs easily, and understand how to use LEFT JOIN to make sure you don’t filter out rows from the first table, but you’re coding blind.

    The first module I cover is on Query Execution. Take a look at the Course Outline at Data Education’s website. The first part of the first module is on the components of a SELECT statement (where I make you think harder about GROUP BY than you probably have before), but then we jump straight into Execution Plans. Some stuff on indexes is in there too, as is simplification and SARGability. Some of this is stuff that you may have heard me present on at conferences, but here you have me for three days straight. I’m sure you can imagine that we revisit these topics throughout the rest of the course as well, and you’d be right.

    In the second and third modules we look at a bunch of other aspects, including some of the T-SQL constructs that lots of people don’t know, and various other things that can help your T-SQL be, well, more effective. I’ve had quite a lot of people do this course and be itching to get back to work even on the first day. That’s not a comment about the jokes I tell, but because people want to look at the queries they run.

    LobsterPot Solutions is thrilled to be partnering with Data Education to bring this training to Chicago. Visit their website to register for the course. @rob_farley

    Read the article

  • Git workflow for small teams

    - by janos
    I'm working on a git workflow to implement in a small team. The core ideas in the workflow:

    - there is a shared project master that all team members can write to
    - all development is done exclusively on feature branches
    - feature branches are code reviewed by a team member other than the branch author
    - the feature branch is eventually merged into the shared master and the cycle starts again

    The article explains the steps in this cycle in detail: https://github.com/janosgyerik/git-workflows-book/blob/small-team-workflow/chapter05.md Does this make sense, or am I missing something?
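
    As a concrete sketch of one pass through that cycle (the branch and remote names here are assumptions for illustration):

        # start a feature branch off the shared master
        git checkout master
        git pull origin master
        git checkout -b feature/login-form   # hypothetical branch name

        # work, then publish the branch for review by another team member
        git add .
        git commit -m "Add login form"
        git push -u origin feature/login-form

        # after the review, merge into the shared master and start over
        git checkout master
        git merge --no-ff feature/login-form   # --no-ff keeps a merge commit per feature
        git push origin master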

    Read the article

  • Challenges of Managing Off Shore Web Development Teams

    Have you ever thought about the challenges that may arise in managing a full-fledged team of professionals located thousands of miles away from your official location? Skillfully managing an official team of your company is quite an uphill task and can give rise to numerous problems.

    Read the article

  • How can teams collaborate on Unity 3D projects?

    - by nosferat
    A friend and I are planning to develop a small game to get the hang of game development and teamwork. But since Unity 3D barely supports version control (or at least the free version lacks it), we have no idea how to manage teamwork efficiently. Sharing tasks in a small project also seems like a challenge for us. I would also appreciate any advice related to teamwork that could be useful for beginner indie developers. :)

    Read the article

  • Workflow of sharing code for small teams

    - by Mihalis Bagos
    The problem is, we have developed a small CMS that is different per implementation (currently). Of course, development of it is never complete. Sometimes we are working on more than one project that implements it (by copying and pasting the CMS code files into each project), and we add a new feature that we want to share with the other projects as well (these can be small ones too, e.g., a custom AJAX JSON controller; we use MVC). What we want to do is quickly and uniformly share the code with all other projects via a version control system (or something similar), and generally organize the workflow, as we know the one we have isn't very good. What would you suggest? Also, at the moment the software we use is Visual Studio 2010, so we are strongly considering TFS, but even if we get it, we still don't know the ideal workflow, or even whether TFS supports what we want to do. Edit: Also note, we have specific implementations with modifications over the CMS base that we want to KEEP only in the project area (i.e., a specific feature that we DON'T want to share with the base CMS code).

    Read the article

  • Games for software development teams? [closed]

    - by g.foley
    We have been running weekly meetings for the team in the interest of learning. I'd like to mix these up, from sit-and-listen exercises to something more engaging. So I'm looking for fun games to play with a team of 10 developers. They are of varying experience, and the games must provide some insight into a fundamental concept of programming that developers tend to forget. All ideas welcome!

    Read the article
