Search Results

Search found 15906 results on 637 pages for 'scott and the dev team'.

Page 224/637

  • Is sudo dd taking too long to wipe hard drive?

    - by Adam133718
    I have a 200 GB HDD which I removed from a MacBook due to several corrupt startup files. One thing led to another and I decided that I needed to format the drive. I used the command sudo dd if=/dev/zero of=/dev/sdb, which is supposed to wipe everything off the hard drive. It is my understanding that the command writes zeros over every bit on the drive, which I would imagine must take a while. The process has been going for about 18 hours now. I can use other functions of the operating system, like the web browser, and I can even use another terminal window, so I know the system is not frozen. Should I restart the process or let it continue? Any advice will help. Thanks. By the way, I already noticed a similar post that was previously answered, though that user was not using the same command as I was.
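
    One way to check whether dd is still making progress (a sketch for GNU dd on Linux; the process name match is an assumption):

        # Send SIGUSR1 to the running dd process; GNU dd responds by printing
        # how many bytes it has copied so far to its own terminal.
        sudo kill -USR1 $(pgrep '^dd$')

        # Sample output in dd's terminal:
        #   123456789012 bytes (123 GB) copied, 64800 s, 1.9 MB/s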

    Read the article

  • Vista missing from grub bootlist after installing ubuntu

    - by tacomensa
    I installed Ubuntu on a logical partition a while ago. When I get to the GRUB boot list, Vista is not there. What I get is this:

        Ubuntu, with linux 2.6.32-26
        Ubuntu, with linux 2.6.32-26 (Recovery mode)
        Ubuntu, with linux 2.6.32-25
        Ubuntu, with linux 2.6.32-26 (Recovery mode)
        Ubuntu, with linux 2.6.32-24
        Ubuntu, with linux 2.6.32-26 (Recovery mode)
        Memory test (memtest86+)
        Windows vista (loader) (on /dev/sda1)
        Windows recovery environment (loader) (on /dev/sda2)

    "Windows vista (loader)" is an Acer eRecovery Manager. I'm guessing that GRUB installed to my primary partition, so it overwrote the Vista MBR and I don't have the option to boot Vista. Is there some way I can just edit the MBR and add Vista to it, or how will I have to repair this? Here is my boot script: http://pastebin.com/7HZFjBT7
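
    If the boot script output shows which partition actually holds Vista's bootmgr, one way to add it by hand is a custom chainload entry (a sketch; (hd0,3), i.e. /dev/sda3, is purely an assumption here - substitute the real partition):

        # /etc/grub.d/40_custom  (then run: sudo update-grub)
        menuentry "Windows Vista" {
            set root=(hd0,3)    # hypothetical: first disk, third partition = /dev/sda3
            chainloader +1      # hand control to the Windows boot loader on that partition
        }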

    Read the article

  • Release Notes 12/12/2012

    This past week the CodePlex team worked on several fixes to improve the stability of our TFS infrastructure, including applying TFS 2012 Update 1. We apologize for the recent downtime. We are not completely out of the woods, but we appreciate your patience as we work through the issues. Additional bug fixes:

        - Fixed several issues with character encoding within file paths.
        - Fixed issue where the number of pull requests and forks were disappearing after selecting either link.
        - Fixed issue blocking license changes when special characters exist in copyright holder field.

    Have ideas on how to improve CodePlex? Please visit our suggestions page! Vote for existing ideas or submit a new one. As always you can reach out to the CodePlex team on Twitter @codeplex or reach me directly @mgroves84

    Read the article

  • What is an achievable way of setting content budgets (e.g. polygon count) for level content in a 3D title?

    - by MrCranky
    In answering this question for swquinn, the answer raised a more pertinent question that I'd like to hear answers to. I'll post our own strategy (promise I won't accept it as the answer), but I'd like to hear others. Specifically: how do you go about setting a sensible budget for your content team? Usually one of the very first questions asked in development is: what's our polygon budget? Of course, these days it's rare that vertex/poly count alone is the limiting factor; instead, shader complexity, fill rate, and lighting complexity all come into play. What the content team want are some hard numbers/limits to work to, such that they have a reasonable expectation that their content, once it actually gets into the engine, will not be too heavy. Given that 'it depends' isn't a particularly useful answer, I'd like to hear a strategy that allows me to give them workable limits without being a) misleading, or b) wrong.

    Read the article

  • DataTable to Generic List Conversion

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Data;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    DataTable table = new DataTable
                    {
                        Columns = { { "Number", typeof(int) }, { "Name", typeof(string) } }
                    };

                    // Just adding a few test rows to the DataTable.
                    for (int i = 1; i <= 5; i++)
                    {
                        table.Rows.Add(i, "Name" + i);
                    }

                    var returnList = from row in table.AsEnumerable()
                                     select new MyObject
                                     {
                                         Number = row.Field<int>("Number"),
                                         Name = row.Field<String>("Name")
                                     };

                    // Displaying the converted collection
                    foreach (MyObject item in returnList)
                    {
                        Console.WriteLine("{0}\t{1}", item.Number, item.Name);
                    }
                }
            }

            class MyObject
            {
                public int Number { get; set; }
                public String Name { get; set; }
            }
        }

    Read the article

  • Low-level 10-finger multi-touch data on the Nexus 7?

    - by Croad Langshan
    I'm considering getting a Nexus 7 to do some multi-touch development on Ubuntu in the run-up to 13.04 (i.e., now :-). What APIs, /dev files, or protocols are available, or could be made available with not too much work on my part? What data is available from the device? The data I want to get my hands on is -- if I can -- the same as I get from /dev/uinput/event* from an Apple Magic Trackpad, viz:

        - positions of all touches (could be as many as 10 simultaneous touches, but much more typically 6 or fewer)
        - their size/pressure (in both x and y directions)
        - their angle
        - their identity -- i.e. an integer that is somewhat reliably preserved across touch events, for as long as a finger doesn't lift off the surface

    Not all of this data is essential -- but the more of it there is, the merrier.
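
    For poking at what the kernel actually exposes, one low-effort starting point is the standard evdev tooling (a sketch; the event node number is an assumption - check /proc/bus/input/devices first):

        # List input devices and find the touchscreen's event node
        grep -iA4 touch /proc/bus/input/devices

        # Dump raw multi-touch events; on a type-B MT device you should see
        # ABS_MT_SLOT, ABS_MT_TRACKING_ID (the "identity" integer),
        # ABS_MT_POSITION_X/Y, and ABS_MT_TOUCH_MAJOR/MINOR (size), if reported
        sudo evtest /dev/input/event2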

    Read the article

  • Nomodeset Installation

    - by Camacho3112
    I was following the advice from Coldfish on how to set nomodeset, but I don't know how to "save" the changes made to the line GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset". I hit Ctrl+O to save, got "File Name to Write: /etc/default/grub", typed sudo update-grub, and hit Enter. After that, I opened another terminal and typed sudo update-grub (it asked me for my password) and then I got this:

        joseluis@ubuntu:~$ sudo update-grub
        [sudo] password for joseluis:
        Generating grub.cfg ...
        cat: /boot/grub/video.lst: No such file or directory
        Found linux image: /boot/vmlinuz-2.6.38-12-generic
        Found initrd image: /boot/initrd.img-2.6.38-12-generic
        Found linux image: /boot/vmlinuz-2.6.38-8-generic
        Found initrd image: /boot/initrd.img-2.6.38-8-generic
        Found Windows 7 (loader) on /dev/sda1
        Found Ubuntu 10.04 LTS (10.04) on /dev/sda6
        done
        joseluis@ubuntu:~$

    So: where am I? Where do I go from here? Thanks for the help.
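
    For reference, the usual sequence looks like this (a sketch; run all three from the same terminal):

        sudo nano /etc/default/grub    # edit GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
        # In nano: Ctrl+O, then Enter to confirm "File Name to Write", then Ctrl+X to exit
        sudo update-grub               # regenerates /boot/grub/grub.cfg with the new kernel options
        sudo reboot                    # nomodeset takes effect on the next boot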

    Read the article

  • Is it the job of a developer to suggest IT requirements?

    - by anything
    I am the only developer working on a web application which is nearing its end. Now we are looking into making it live in maybe a couple of months' time. This is a web application for a non-IT company. Though they have their own internal IT team, they have asked me what the hardware requirements for the live servers will be, e.g. RAM, 32-bit or 64-bit. Shouldn't the internal IT team be doing this, or, since I am the only person working on the project, is it my responsibility to let them know of any specific hardware requirements which may impact the performance of the project? The reason I am asking this question is that I have not done this before. In the past I was always given a server and asked to deploy apps on it. I never used to worry about the server configuration.

    Read the article

  • Remote Desktop advice

    - by spoon16
    Coming from Windows, so that is what my expectations are based on. I have an Ubuntu desktop edition instance running as a virtual machine on a server. I would like to use it as my primary open source dev environment, but the VNC tools I have used don't seem to be as rich as "Remote Desktop Connection" in Windows. The things that are missing for me:

        - connecting/logging into non-console user sessions
        - dynamically resizing the graphical resolution based on the size of the remote desktop window
        - device sharing (USB devices plugged into the client shared with the remote)

    Is there an appropriate client that I can run on Windows to connect to my Ubuntu dev instance that provides these capabilities?
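
    One option to try (a sketch; xrdp's feature coverage has historically lagged Windows RDP, so dynamic resize and USB redirection may not work) is an RDP server on the Ubuntu guest, which at least lets you keep using the native Remote Desktop Connection client:

        # On the Ubuntu VM: install an RDP server that bridges to a local session
        sudo apt-get install xrdp
        # Then from Windows, connect with mstsc.exe to the VM's address (port 3389)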

    Read the article

  • TraceTune shows Reads graphically

    - by Bill Graziano
    TraceTune now shows a graphical view of logical reads for each SQL statement in a trace file.  The width of the colored bar in the screen capture below is the percentage of logical reads for that statement.  The absolute number of reads is shown to the right. Any statement that has a user entered comment is shown in bold.  If you hover over the statement it will show the most recent comment for that statement.

    Read the article

  • SQL Server Configuration Scripting Utility Release 9

    - by Bill Graziano
    There’s another update to my little utility to script a SQL Server’s configuration. I use this for two purposes. First, I use it to keep my database mirroring servers up to date. Second, I capture the output in a version control system and keep that for historical reference. In release 3.0.9 I made the following changes:

        - Rewrote the encrypted trigger scripting. It will now list the encrypted triggers in a comment in the table script but can’t actually script them.
        - It now scripts any server event notifications.
        - You can script a single database using the /scriptdb flag. Please note that it will also script the instance and system databases when it does this.
        - It will script any user-defined endpoints. This will capture your mirroring endpoints and, more importantly, any Service Broker endpoints.
        - It will gracefully skip Database Mail on the Express Edition.

    It still doesn’t support SQL Server 2012. I think that’s the next feature to add though.

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part III)

    The last post ended with us just getting started on stumbling into text template file customization, a task that required a Visual Studio extension (Tangible T4 Editor) to even have a chance at completing. Despite the benefits of the Tangible T4 Editor, I still had a hard time putting together a solid text template that would be easy to explain. This is mostly due to the way the files allow you to mix code (encapsulated in <# #>) with straight-up text to generate. It is effective to be sure, but not very readable. Nevertheless, I will try and explain what was accomplished in my custom tt file, though the details of it are not really the point of this article (my way of saying don't criticize my crappy code, and certainly don't use it in any somewhat real application. You may become dumber just by looking at this code. You have been warned... really the footnote I should put at the end of all of my blog posts). To begin with, there were two basic requirements that I needed the code generator to satisfy: reading one to many entity framework files, and using the entities that were found to write one to many class files. Thankfully, using the Entity Object Generator as a starting point gave us an example of how to do exactly that by using the MetadataLoader and EntityFrameworkTemplateFileManager; you include references to these items and use them like so:

        // Instantiate an entity framework file reader and file writer
        MetadataLoader loader = new MetadataLoader(this);
        EntityFrameworkTemplateFileManager fileManager = EntityFrameworkTemplateFileManager.Create(this);

        // Load the entity model metadata workspace
        MetadataWorkspace metadataWorkspace = null;
        bool allMetadataLoaded = loader.TryLoadAllMetadata("MFL.tt", out metadataWorkspace);
        EdmItemCollection ItemCollection = (EdmItemCollection)metadataWorkspace.GetItemCollection(DataSpace.CSpace);

        // Create an IO class to contain the 'get' methods for all entities in the model
        fileManager.StartNewFile("MFL.IO.gen.cs");

    Next, we want to be able to loop through all of the entities found in the model, and then each property for each entity, so we can generate classes and methods for each. The code for that is blissfully simple:

        // Iterate through each entity in the model
        foreach (EntityType entity in ItemCollection.GetItems<EntityType>().OrderBy(e => e.Name))
        {
            // Iterate through each primitive property of the entity
            foreach (EdmProperty edmProperty in entity.Properties.Where(p => p.TypeUsage.EdmType is PrimitiveType && p.DeclaringType == entity))
            {
                // TODO:  Create properties
            }

            // Iterate through each relationship of the entity
            foreach (NavigationProperty navProperty in entity.NavigationProperties.Where(np => np.DeclaringType == entity))
            {
                // TODO:  Create associations
            }
        }

    There really isn't anything more advanced than that going on in the text template; the only thing I had to blunder through was realizing that if you want the generator to interpret a line of code (such as our iterations above), you need to enclose the code in <# and #>, while if you want the generator to interpret the VALUE of code, such as putting the entity name into the class name, you need to enclose the code in <#= and #>, like so:

        public partial class <#=entity.Name#>

    To make a long story short, I did a lot of repetition of the above to come up with a text template that generates a class for each entity based on its properties, and a set of IO methods for each entity based on its relationships. The two work together to provide lazy-loading for hierarchical data (such as getting Team.Players) so it should be pretty intuitive to use on a front-end. This text template is available here; you can tweak the inputFiles array to load one or many different edmx models and generate the basic XML IO and class files, though it will probably only work correctly in the simplest of cases, like our MFL model described in the previous post. Additionally, there is no validation, logging or error handling, which is something I want to handle later by stumbling through the Enterprise Library 5.0. The code that gets generated isn't anything special, though using the LINQ to XML feature was something very new and exciting for me; I had only worked with XML in the past using the DOM or XML Reader objects along with XPath, and the LINQ to XML model is just so much more elegant and supposedly efficient (something to test later). For example, the following code was generated to create a Player object for each Player node in the XML:

        return from element in GetXmlData(_PlayerDataFile).Descendants("Player")
               select new Player
               {
                   Id = int.Parse(element.Attribute("Id").Value)
                   ,ParentName = element.Parent.Name.LocalName
                   ,ParentId = long.Parse(element.Parent.Attribute("Id").Value)
                   ,Name = element.Attribute("Name").Value
                   ,PositionId = int.Parse(element.Attribute("PositionId").Value)
               };

    It is all done in one line of code, no looping needed. Even though GetXmlData loads the entire XML file just like the old XML DOM approach would have, it is supposed to be much less resource intensive. I will definitely put that to the test after we develop a user interface for getting at this data. Speaking of the data... where IS the data? We've put together a pretty model and a bunch of code around it, but we don't have any data to speak of. We can certainly drop to our favorite XML editor and crank out some data, but if it doesn't totally match our model, it will not load correctly. To help with this, I've built in a method to generate XML at any given layer in the hierarchy. So for us to get the closest possible thing to real data, we'd need to invoke MFL.IO.GenerateTeamXML and save the results to file. Doing so should get us something that looks like this:

        <Team Id="0" Name="0">
          <Player Id="0" Name="0" PositionId="0">
            <Statistic Id="0" PassYards="0" RushYards="0" Year="0" />
          </Player>
        </Team>

    Sadly, it is missing the Positions node (haven't thought of a way to generate lookup XML yet) and the data itself isn't quite realistic (well, as realistic as MFL data can be anyway). Let's manually remedy that for now to give us a decent starter set of data. Note that this is TWO XML files, Lookups.xml and Teams.xml:

        <Lookups Id="0">
          <Position Id="0" Name="Quarterback"/>
          <Position Id="1" Name="Runningback"/>
        </Lookups>

        <Teams Id="0">
          <Team Id="0" Name="Chicago">
            <Player Id="0" Name="QB Bears" PositionId="0">
              <Statistic Id="0" PassYards="4000" RushYards="120" Year="2008" />
              <Statistic Id="1" PassYards="4200" RushYards="180" Year="2009" />
            </Player>
            <Player Id="1" Name="RB Bears" PositionId="1">
              <Statistic Id="2" PassYards="0" RushYards="800" Year="2007" />
              <Statistic Id="3" PassYards="0" RushYards="1200" Year="2008" />
              <Statistic Id="4" PassYards="3" RushYards="1450" Year="2009" />
            </Player>
          </Team>
        </Teams>

    Ok, so we have some data, we have a way to read/write that data, and we have a friendly way of representing that data. Now, what remains is the part that I have been looking forward to the most: presenting the data to the user and giving them the ability to add/update/delete, and doing so in a way that is very intuitive (easy) from a development standpoint.

    Read the article

  • Does having a higher paid technical job mean you do not get to code any more?

    - by c_maker
    I work at a large company where technical people fall roughly into one of these categories:

        - A developer on a scrum team who develops for a single product and maybe works with other teams that are closely related to the product.
        - An architect who is more of a consultant on multiple teams (5-6) and tries to recognize commonalities between team efforts that could be abstracted into libraries (architects do not write the library code, however). This architect also attends many meetings with management and attempts to set technical direction.

    In my company the architect role is where most technical people move into as the next step in their career. My questions are:

        - Do most companies work in such a way that their highest paid technical people are far removed from writing code?
        - Is this a natural tendency for a developer's career?
        - Can a developer have it all (code AND set direction)?

    Read the article

  • I want to increase the size of my boot partition (Ubuntu 14.04 version) [duplicate]

    - by Mike
    I read in another post that kernels are distributed as new releases rather than upgrades. I didn't know this when I was allocating space to my partitions during my initial install of Ubuntu. As a result I ran out of space on my boot partition. Can I increase its size using GParted, and how do I do this without doing damage to my system?

        Number  Start   End     Size   File system     Flags
        1       1049kB  512MB   511MB  fat32           boot
        2       512MB   768MB   256MB  ext2
        3       768MB   1000GB  999GB                  lvm

        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/ubuntu--vg-swap_1: 3712MB
        Sector size (logical/physical): 512B/4096B
        Partition Table: loop

        Number  Start  End     Size    File system     Flags
        1       0.00B  3712MB  3712MB  linux-swap(v1)

        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/ubuntu--vg-root: 996GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: loop

        Number  Start  End    Size   File system  Flags
        1       0.00B  996GB  996GB  ext4

    Sorry, don't know how to capture and post the terminal output screen.
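
    Before resizing with GParted, it's worth noting the usual fix for a full /boot is simply removing old kernel packages (a sketch; the version number below is an example - never remove the kernel that uname -r reports):

        uname -r                                   # the kernel you are running now -- keep this one
        dpkg -l 'linux-image-*' | grep ^ii         # list every installed kernel package
        sudo apt-get purge linux-image-3.13.0-24-generic   # example: purge one old kernel
        sudo apt-get autoremove                    # or let apt clean up unused kernels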

    Read the article

  • Release Notes for 11/20/2012

    The CodePlex team deployed a few times over the last week. Below is a roll-up of changes:

        - Fixed issue with being able to add additional commits to pull requests - Thanks to Oren Novotny
        - Fixed problem with issue summaries breaking within words - Thanks to Jeff Handley and SoonDead
        - Corrected inconsistencies between the time displayed on the history page and previous versions page for Git/Hg commits.
        - Fixed perma-link issue when linking to forks - Thanks to Scott Blomquist
        - Fixed problem with connecting via Windows Live Writer - Thanks to yufeih
        - Fixed source browsing problem when folders have special characters.
        - Fixed AppHarbor service hooks for Mercurial projects.

    Have ideas on how to improve CodePlex? Please visit our suggestions page! Vote for existing ideas or submit a new one. As always you can reach out to the CodePlex team on Twitter @codeplex or reach me directly @mgroves84

    Read the article

  • An Overview of Document Generation in SharePoint

    Document Generation is an automated way of generating and distributing documents. You no longer have to manually create a document, instead you start by designing a template. Specific information wil... [Author: Scott Duglase - Computers and Internet - June 14, 2010]

    Read the article

  • Configure SQL Express 2005 for remote access

    Please follow the steps below, as shown in the pictures, to configure SQL Server Express 2005 for remote access.

        Fig 1: Open SQL Server Configuration Manager
        Fig 2: Navigate to SQL Server 2005 Network Configuration and click on the Protocols node
        Fig 3: Enable the TCP/IP protocol
        Fig 4: Enable the Named Pipes protocol
        Fig 5: After enabling the TCP/IP and Named Pipes protocols
        Fig 6: Finally, click on TCP/IP to configure the port number on which SQL Express 2005 listens for network requests.
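
    Once the protocols are enabled, a quick way to verify remote connectivity (a sketch; the host name, login and password are placeholders, and the SQL Browser service must be running for named-instance resolution):

        REM Restart the instance so the protocol changes take effect
        net stop MSSQL$SQLEXPRESS && net start MSSQL$SQLEXPRESS

        REM From a remote machine, force a TCP connection to the instance
        sqlcmd -S tcp:myhost\SQLEXPRESS -U sa -P myPassword -Q "SELECT @@VERSION"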

    Read the article

  • Reading XML Content

        using System;
        using System.Collections.Generic;
        using System.Reflection;
        using System.Xml;

        namespace XMLReading
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string fileName = @"C:\temp\t.xml";
                    List<EmergencyContactXMLDTO> emergencyContacts =
                        new XmlReader<EmergencyContactXMLDTO, EmergencyContactXMLDTOMapper>().Read(fileName);

                    foreach (var item in emergencyContacts)
                    {
                        Console.WriteLine(item.FileNb);
                    }
                }
            }

            public class XmlReader<TDTO, TMAPPER>
                where TDTO : BaseDTO, new()
                where TMAPPER : PCPWXMLDTOMapper, new()
            {
                public List<TDTO> Read(String fileName)
                {
                    XmlTextReader reader = new XmlTextReader(fileName);
                    List<TDTO> results = new List<TDTO>();

                    while (true)
                    {
                        TMAPPER mapper = new TMAPPER();

                        // Advance to the next main tag (e.g. <EmergencyContact>); stop at end of file
                        bool isFound = SeekElement(reader, mapper.GetMainXMLTagName());
                        if (!isFound) break;

                        TDTO dto = new TDTO();

                        // Read each mapped child element in order and copy its value onto the DTO
                        foreach (var propertyKey in mapper.GetPropertyXMLMap())
                        {
                            String dtoPropertyName = propertyKey.Key;
                            String xmlPropertyName = propertyKey.Value;
                            SeekElement(reader, xmlPropertyName);
                            SetValue(dto, dtoPropertyName, reader.ReadElementString());
                        }

                        results.Add(dto);
                    }

                    return results;
                }

                private void SetValue(Object dto, String propertyName, String value)
                {
                    PropertyInfo prop = dto.GetType().GetProperty(propertyName, BindingFlags.Public | BindingFlags.Instance);
                    prop.SetValue(dto, value, null);
                }

                private bool SeekElement(XmlTextReader reader, String elementName)
                {
                    while (reader.Read())
                    {
                        XmlNodeType nodeType = reader.MoveToContent();
                        if (nodeType != XmlNodeType.Element)
                        {
                            continue;
                        }
                        if (reader.Name == elementName)
                        {
                            return true;
                        }
                    }
                    return false;
                }
            }

            public class BaseDTO
            {
            }

            public class EmergencyContactXMLDTO : BaseDTO
            {
                public string FileNb { get; set; }
                public string ContactName { get; set; }
                public string ContactPhoneNumber { get; set; }
                public string Relationship { get; set; }
                public string DoctorName { get; set; }
                public string DoctorPhoneNumber { get; set; }
                public string HospitalName { get; set; }
            }

            public interface PCPWXMLDTOMapper
            {
                Dictionary<string, string> GetPropertyXMLMap();
                String GetMainXMLTagName();
            }

            public class EmergencyContactXMLDTOMapper : PCPWXMLDTOMapper
            {
                // Maps DTO property names to the XML element names that hold their values
                public Dictionary<string, string> GetPropertyXMLMap()
                {
                    return new Dictionary<string, string>
                    {
                        { "FileNb", "XFileNb" },
                        { "ContactName", "XContactName" },
                        { "ContactPhoneNumber", "XContactPhoneNumber" },
                        { "Relationship", "XRelationship" },
                        { "DoctorName", "XDoctorName" },
                        { "DoctorPhoneNumber", "XDoctorPhoneNumber" },
                        { "HospitalName", "XHospitalName" },
                    };
                }

                public String GetMainXMLTagName()
                {
                    return "EmergencyContact";
                }
            }
        }
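
    For reference, the reader above walks elements in document order, so C:\temp\t.xml would need to look something like this (a minimal hand-made sample; the values and the root element name are made up, while the child element names come from the mapper):

        <EmergencyContacts>
          <EmergencyContact>
            <XFileNb>1001</XFileNb>
            <XContactName>Jane Doe</XContactName>
            <XContactPhoneNumber>555-0100</XContactPhoneNumber>
            <XRelationship>Spouse</XRelationship>
            <XDoctorName>Dr. Smith</XDoctorName>
            <XDoctorPhoneNumber>555-0101</XDoctorPhoneNumber>
            <XHospitalName>General Hospital</XHospitalName>
          </EmergencyContact>
        </EmergencyContacts>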

    Read the article

  • Help with cron syntax

    - by Randy
    I need to set up a cron job on my web host. The documentation for my web app reads as follows: "you will need to create the following cron job: /public_html/cake/console/cake -app /public_html/app master". Also, I want any output written to a log file. My host's documentation says this: "You can have cron send an email every time it runs a command. If you do not want an email to be sent for an individual cron job you can redirect the command's output to /dev/null like this: mycommand > /dev/null 2>&1". Can someone help me write the cron job? I don't know the syntax at all. Thanks for the help!
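
    Putting the two together, a crontab entry might look like this (a sketch; the schedule and log path are assumptions - adjust to taste, and edit your crontab with crontab -e):

        # Run the cake shell hourly, appending both stdout and stderr to a log file
        0 * * * * /public_html/cake/console/cake -app /public_html/app master >> /home/myuser/cake-master.log 2>&1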

    Read the article

  • Business Analyst vs. Architect [closed]

    - by suslik
    I'm a developer of a few years in the financial industry and will soon need to decide which career path to work towards. Broadly speaking I have two options: something more 'people' oriented like BAs, or keep coding and try to make more technical decisions like the Architects do where I currently work. Here are my perceptions right now:

        Business Analysts:
        - get paid way more than devs
        - once they do their job, it seems like they usually have no worries
        - more likely to go REALLY high up in the organization (VPs, etc.)

        Architects:
        - things like certification matter (I see this as a con)
        - called in when things go wrong more than anyone else (weekends & overtime)
        - long career path to get to (dev - senior dev - team lead - architect)

    I would find the latter more intellectually rewarding, but when I look at it I just can't justify it in terms of lifestyle. Am I wrong / what am I missing? Can you really make a lot of money in a technical role or must you really get out of coding? Thank you for any constructive input.

    Read the article

  • Ubuntu 12.04 mdadm inactive

    - by user32274
    For a while now, my RAID 5 has ceased to work. Every time I tried mdadm --detail /dev/md127, it listed all the drives and drive info, but said that two of the drives had been removed. After some restarts, doing the same thing, I am getting "/dev/md127 does not appear to be active". When I go into Disk Utility, I can see all 6 hard drives healthy and present, and I can see the RAID 5 at the bottom under Multi-disk Devices. However, the RAID says 0.0 kB and is not active. Please help and let me know how to proceed from here. I would really like to avoid rebuilding the RAID, especially because all 6 drives seem to be healthy and present. Thanks so much.
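
    Some diagnostics worth running before anything destructive (a sketch; the member-device names are assumptions - substitute your six drives):

        cat /proc/mdstat                        # what the kernel currently thinks of md127
        sudo mdadm --examine /dev/sd[b-g]1      # inspect each member's RAID superblock
        sudo mdadm --stop /dev/md127            # release the inactive array...
        sudo mdadm --assemble --scan            # ...and try to reassemble it from the superblocks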

    Read the article

  • Ubuntu not mounted?

    - by z3matt
    In the Live CD I went into the terminal, and when I do sudo update-grub it responds: /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?). Here's the breakdown of my drive:

        sda1 - vfat - Windows 7: FAT32
        sda2 -
        sda3 - ntfs - Windows Vista/7: NTFS - Windows 7
        sda3/Wubi -
        sda4 - GRUB2
        sda5 - Ubuntu 12.04.1 LTS
        sda6 -
        sda7 -
        sda8 - BIOS Boot Partition

    Also at the top of the page it states: "No boot loader is installed in the MBR of /dev/sda". When my computer boots, it goes into GRUB and has the options for Windows 7 and Windows Memory Test, but no option for Ubuntu. I want to run a dual boot. Any and all help is appreciated and welcomed.
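
    Given that layout, one common repair from the live CD (a sketch; it assumes sda5 really is the Ubuntu root and that the machine boots via the MBR rather than EFI):

        sudo mount /dev/sda5 /mnt                                # mount the Ubuntu root partition
        sudo grub-install --boot-directory=/mnt/boot /dev/sda    # reinstall GRUB to the disk's MBR
        # then reboot into Ubuntu and run: sudo update-grub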

    Read the article

  • GlassFish Community Event @ JavaOne - Save the date!

    - by alexismp
    The interest in having a GlassFish community event at JavaOne is still very strong, both inside Oracle and in the community, so this year again we'll be hosting a get-together on the Sunday prior to the main event. If you're in town and attending JavaOne, mark your calendars: Sunday, October 2nd, 2011, 12:30pm-4:30pm in the Moscone. This will be an opportunity to discuss the community status (adoption of Java EE 6, GlassFish 3.1.x) and hear about future plans, mainly around Java EE 7 and the related GlassFish release(s). We'd also like to have several participants share their deployment stories, as well as some time for a free-form unconference format and some team-building activity. Of course, beyond all the content shared in slides, this should really also be a good excuse to meet folks from the community and from the core GlassFish team at Oracle. Here's a post on last year's event. And before anybody asks, we are still exploring the party situation :-)

    Read the article
