Search Results

Search found 8354 results on 335 pages for 'count boxer'.


  • Workaround: building an FBX in XNA raises OutOfMemoryException

    - by Vitus
If you try to add a large FBX 3D model to an XNA project and build it, you can get an OutOfMemoryException build error like the following:

Error 1 Building content threw OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
at System.Collections.Generic.List`1.set_Capacity(Int32 value)
at System.Collections.Generic.List`1.EnsureCapacity(Int32 min)
at System.Collections.Generic.List`1.InsertRange(Int32 index, IEnumerable`1 collection)
at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexChannel`1.InsertRange(Int32 index, Int32 count)
at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexContent.InsertRange(Int32 index, IEnumerable`1 positionIndexCollection)
at Microsoft.Xna.Framework.Content.Pipeline.Graphics.MeshBuilder.AddTriangleVertex(Int32 indexIntoVertexCollection)
at Microsoft.Xna.Framework.Content.Pipeline.MeshConverter.FillNodeWithInfoFromMesh(KFbxNode* fbxNode, String name, KFbxGeometryConverter* geometryConverter)
at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessInformationInNode(KFbxNode* fbxNode, String name, Boolean* partOfMainSkeleton, Boolean* warnIfBoneButNotChild)
at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)
at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)
at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.Import(String filename, ContentImporterContext context)
at Microsoft.Xna.Framework.Content.Pipeline.ContentImporter`1.Microsoft.Xna.Framework.Content.Pipeline.IContentImporter.Import(String filename, ContentImporterContext context)
//additional calls here …

My desktop PC has 8 GB of RAM, and Visual Studio’s process (devenv.exe) uses under 2 GB of it during the build (about 3.5–4 GB of RAM is always free). It’s obvious that VS can’t address more than 2 GB of RAM, and when that limit is hit, the build fails. The OS on my PC is 64-bit Windows, so I “charged” devenv.exe using the editbin.exe utility – in the VS Command Prompt I ran the following:

editbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe" /LARGEADDRESSAWARE

This command edits the image to indicate that the application can handle addresses larger than 2 gigabytes. After that, the FBX file built successfully! Of course, you must use the proper path to devenv.exe, depending on your installation path. If you are on 32-bit Windows, you need to take an additional step – more info here.

P.S.: although you can now build bigger files than usual, keep in mind that XNA has some restrictions on vertex buffer size etc., depending on your current XNA project profile (Reach or HiDef). If your model’s vertex buffer is larger than 64 MB (with the Reach profile), the model can’t be built and raises an error.
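If you want to verify the flag actually took effect, dumpbin (also available in the VS Command Prompt) can display the PE header; a quick check, assuming the same default installation path:

dumpbin /headers "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe" | findstr /C:"large"

If the output contains the header line “Application can handle large (>2GB) addresses”, the edit worked.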

    Read the article

  • Are copyright notices really required?

    - by Alasdair
Ever since I made my first web page 13 years ago I have followed the pattern of showing a copyright notice in the footer of each page. Over the years the format of this notice has changed in the following ways:

Copyright © <NAME> yyyy. All rights are reserved.
Copyright © <NAME> yyyy
© yyyy <NAME>
© <NAME>

This has generally mirrored the format used by Google. However, I recently noticed that they no longer display a copyright notice on their home page, nor have one in their source code/meta tags. I see they still display it on most (if not all) other pages. I understand that Google is very keen to keep the word count down on their homepage, which could be the reason for this sacrifice, but my question is more general and relates to all websites. Since I've always just done it out of habit, I'm hoping someone can explain if/when a copyright notice is actually required to protect your content and rights. Also, when it is required, is there a format to which the notice must adhere in order to be valid?

    Read the article

  • hProduct microformats not recognized by Google

    - by silverfox
I'm testing hProduct markup with Google's rich snippets testing tool (http://www.google.com/webmasters/tools/richsnippets), but it is not recognizing the data: it does not recognize the photo, the price, or the category; it only recognizes the rating. HTML:

<div class="hproduct">
  <span class="brand">ACME</span>
  <span class="fn">Executive Anvil</span>
  <img class="photo" src="http://microformats.org/wiki/skins/Microformats/images/logo.gif" />
  <span class="review hreview-aggregate">
    Average rating: <span class="rating">4.4</span>, based on <span class="count">89 </span> reviews
  </span>
  Regular price: $179.99
  Sale: $<span class="price">119.99</span> (Sale ends 5 November!)
  <span class="description">Sleeker than ACME's Classic Anvil, the Executive Anvil is perfect for the business traveler looking for something to drop from a height.</span>
  Category:
  <span class="category">
    <span class="value-title" title="Hardware > Tools > Anvils">Anvils</span>
  </span>
</div>

It also still shows this warning: "Warning: In order to generate a preview with rich snippets, either price or review or availability needs to be present." I used Google's own example: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=186036 and I also tested the example from microformats.org: http://microformats.org/wiki/google-rich-snippets-examples

    Read the article

  • Should I start making connections even if I'm not ready for a job yet?

    - by James
The first job is always the hardest to get and I'm no exception. I'm 23 years old and have no college degree, but I plan on going to college this year if all goes well (CS, of course). I'm self-studying Java right now. I know most of the topics related to the language besides the more advanced ones, and I'm beginning to look at open source projects. I would like to find a job (at least a part-time job) after a year or two, when I'll have gained more experience and learned more about Java technologies and other technologies that interest me. Finding a job will be a bit difficult because most people (or a lot of them, at least) at my current age already have 2 years or more of experience, so I will be somewhat disadvantaged. Should I start building connections and joining websites such as LinkedIn? I never bothered to look into it because I'm not much of a social network person. If I contribute to open source projects and create personal projects for 2 years, could I apply for jobs that require 1-2 years of experience? Does this experience count?

    Read the article

  • Simplify Your Code with LINQ

    - by dwahlin
I’m a big fan of LINQ and use it wherever I can to minimize code and make applications easier to maintain overall. I was going through a code file today, refactoring it based on suggestions provided by ReSharper, and came across the following method:

private List<string> FilterTokens(List<string> tokens)
{
    var cleanedTokens = new List<string>();
    for (int i = 0; i < tokens.Count; i++)
    {
        string token = tokens[i];
        if (token != null)
        {
            cleanedTokens.Add(token);
        }
    }
    return cleanedTokens;
}

In looking through the code I didn’t see anything wrong, but ReSharper was suggesting that I convert it to a LINQ expression. In thinking about it more, the suggestion made complete sense because I simply wanted to add all non-null token values into a List<string> anyway. After following through with the ReSharper suggestion the code changed to the following. Much, much cleaner, and yet another example of why LINQ (and ReSharper) rules:

private List<string> FilterTokens(IEnumerable<string> tokens)
{
    return tokens.Where(token => token != null).ToList();
}
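If you want to see it in action, here's a quick check that the two versions behave the same (the sample values are mine, not from the original code file):

var tokens = new List<string> { "alpha", null, "beta", null };
List<string> cleaned = FilterTokens(tokens);
// cleaned now contains "alpha" and "beta"; the nulls are filtered out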

    Read the article

  • Deferred rendering with both Clockwise and CounterClockwise culling

    - by user1423893
I have a deferred rendering system that works well with objects that appear solid and are drawn using CounterClockwise culling. I have a problem with Clockwise-culled objects, which are supposed to represent hollow objects that display their inside faces only. The image below shows a CounterClockwise-culled object (left) and a Clockwise-culled object (right). The Clockwise-culled object's faces display what would be displayed on the CounterClockwise faces. How can I get the lighting to light the inner faces of Clockwise-culled objects while continuing to light the outer CounterClockwise faces as normal? My lighting method is below:

private void DeferredLighting(GameTime gameTime)
{
    // Set the render target for the lights
    game.GraphicsDevice.SetRenderTarget(lightMap);
    // Clear the render target to (0, 0, 0, 0)
    game.GraphicsDevice.Clear(Color.Transparent);
    // Set the render states
    game.GraphicsDevice.BlendState = BlendState.Additive;
    game.GraphicsDevice.DepthStencilState = DepthStencilState.None;
    game.GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
    // Set sampler state to Point as the Surface type requires it in XNA 4.0
    game.GraphicsDevice.SamplerStates[0] = SamplerState.PointClamp;
    // Set the camera properties for all lights
    BaseLight.SetCameraProperties(game.ActiveCamera);
    // Draw the lights
    int numLights = lights.Count;
    for (int i = 0; i < numLights; ++i)
    {
        if (lights[i].Diffuse.W > 0f)
        {
            lights[i].Render(gameTime, ref normalMap, ref depthMap, ref sgrMap);
        }
    }
    // Resolve the render target
    game.GraphicsDevice.SetRenderTarget(null);
}

I have tried adjusting the render states, but no combination works for both objects.
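One direction I could try (a hedged sketch of an idea, not a known fix; "FlipNormals" is a hypothetical effect parameter and hollowObjects a hypothetical list, not part of XNA or my code above): draw the hollow, clockwise-wound geometry in its own G-buffer pass with the opposite cull mode, and have that pass's pixel shader negate the normals it writes, so the inside faces store normals pointing toward the camera and the lighting pass shades them like any other surface:

// hedged sketch: a separate geometry pass for hollow objects
game.GraphicsDevice.RasterizerState = RasterizerState.CullClockwise;
gBufferEffect.Parameters["FlipNormals"].SetValue(true); // hypothetical parameter; the shader would output -normal
foreach (var hollow in hollowObjects) // hypothetical collection of clockwise-culled models
{
    hollow.Draw(gameTime);
}
gBufferEffect.Parameters["FlipNormals"].SetValue(false);

The lighting pass itself could then stay exactly as posted.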

    Read the article

  • Camera Collision inside the room model

    - by sanddy
I am having a problem calculating camera collision for my room model, which consists of sofas, tables and other models. The user can move the camera forward and backward and rotate it, so I need to make sure that the camera does not collide with any of the models within the room. I have represented all the models inside the room with a BoundingBox[] and the camera with a BoundingSphere. So far I have implemented collision by following the tutorial at http://www.toymaker.info/Games/XNA/html/xna_model_collisions.html, which was great. But I guess the problem lies in the transformation part. I debugged and found some points at Vector(-XXX,-XXX,-XXX) where X is a digit. I also found the radius of some models to be too large (in the thousands; I looked at the radius value before converting to a BoundingBox). Do I need to scale the model for collision? Below is my code.

In my LoadContent():

Matrix[] transforms = new Matrix[myModel.Bones.Count];
myModel.CopyAbsoluteBoneTransformsTo(transforms);
int index = 0;
box = new List<BoundingBox>();
BoundingBox worldModel = Utility.CalculateBoundingBox(myModel);
foreach (ModelMesh mesh in myModel.Meshes)
{
    Vector3[] obb = new Vector3[8];
    worldModel.GetCorners(obb);
    Vector3[] asdf = (Vector3[])obb.Clone();
    Vector3.Transform(obb, ref transforms[mesh.ParentBone.Index], obb);
    BoundingBox worldBox = BoundingBox.CreateFromPoints(obb);
    box.Add(worldBox);
    index++;
}

In the camera position update:

BoundingSphere bs = new BoundingSphere(this.cameraPos, 5.0f);
if (RoomWalkthrough.Utility.CheckCollision(bs, bb))
{
    // Do Something
}

Please help.

    Read the article

  • SQL SERVER – Storing Variable Values in Temporary Array or Temporary List

    - by pinaldave
SQL Server does not support arrays or a dynamic-length storage mechanism like a list. There are certainly some clever workarounds and a few extraordinary solutions, but not everybody can come up with such a solution. Additionally, sometimes the requirements are simple enough that extraordinary coding is not required. Here is a simple case. Let us say here are the values: a, 10, 20, c, 30, d. Now the requirement is to store them in an array or list. It is very easy to do the same in C# or C. However, there is no quick way to do the same in SQL Server. Every single time I get such a requirement, I create a table variable and store the values in the table variable. Here is the example.

For SQL Server 2012:

DECLARE @ListofIDs TABLE(IDs VARCHAR(100));
INSERT INTO @ListofIDs VALUES('a'),('10'),('20'),('c'),('30'),('d');
SELECT IDs FROM @ListofIDs;
GO

When executed, the script above will give the following resultset. The script will work in SQL Server 2012 only; for SQL Server 2008 and earlier versions, run the following code:

DECLARE @ListofIDs TABLE(IDs VARCHAR(100), ID INT IDENTITY(1,1));
INSERT INTO @ListofIDs
SELECT 'a'
UNION ALL
SELECT '10'
UNION ALL
SELECT '20'
UNION ALL
SELECT 'c'
UNION ALL
SELECT '30'
UNION ALL
SELECT 'd';
SELECT IDs FROM @ListofIDs;
GO

Now in this case, I have to convert the numbers to varchars because I have to store mixed datatypes in a single column. Additionally, this quick solution does not give any of the features of arrays (like inserting values in between, or accessing values by array index). Well, do you ever have to store temporary multiple values in SQL Server? If the count of values is dynamic and the datatype is not specified up front, how will you go about storing values that can be used later in the program?

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
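A small aside: the SQL Server 2008 version above already carries an IDENTITY column, and that column can double as a crude array index. A quick sketch using the same table variable:

-- access the third "element" of the list by its identity value
SELECT IDs FROM @ListofIDs WHERE ID = 3;

It is not a true array, but for simple positional lookups it is often enough.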

    Read the article

  • Exporting PowerPoint Slides with Specific Heights and Widths

    - by Damon Armstrong
I found myself in need of exporting PowerPoint slides from a presentation and was fairly excited when I found that you can save them off in standard image formats. The problem is that Microsoft conveniently exports all images with a resolution of 960 x 720 pixels, which is not the resolution I wanted. You can, however, specify the resolution if you are willing to put a macro into your project:

Sub ExportSlides()
  For i = 1 To ActiveWindow.Selection.SlideRange.Count
    Dim fileName As String
    If (i < 10) Then
      ' zero-pad single-digit slide numbers so the exported files sort in order
      fileName = "C:\PowerPoint Export\Slide0" & i & ".png"
    Else
      fileName = "C:\PowerPoint Export\Slide" & i & ".png"
    End If
    ActiveWindow.Selection.SlideRange(i).Export fileName, "PNG", 1280, 720
  Next
End Sub

When you call the Export method you can specify the file type as well as the dimensions to use when creating the image. If the macro approach is not your thing, then you can also modify the default settings through the registry: http://support.microsoft.com/kb/827745

    Read the article

  • Xml Serialization and the [Obsolete] Attribute

    - by PSteele
I learned something new today: starting with .NET 3.5, the XmlSerializer no longer serializes properties that are marked with the Obsolete attribute. I can’t say that I really agree with this. Marking something Obsolete is supposed to be something for a developer to deal with in source code. Once an object is serialized to XML, it becomes data. I think using the Obsolete attribute as both a compiler flag and a way of controlling XML serialization is a bad idea. In this post, I’ll show you how I ran into this and how I got around it.

The Setup

Let’s start with some make-believe code to demonstrate the issue. We have a simple data class for storing some information. We use XML serialization to read and write the data:

public class MyData
{
    public int Age { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public List<String> Hobbies { get; set; }

    public MyData()
    {
        this.Hobbies = new List<string>();
    }
}

Now a few simple lines of code to serialize it to XML:

static void Main(string[] args)
{
    var data = new MyData
    {
        FirstName = "Zachary",
        LastName = "Smith",
        Age = 50,
        Hobbies = {"Mischief", "Sabotage"},
    };
    var serializer = new XmlSerializer(typeof (MyData));
    serializer.Serialize(Console.Out, data);
    Console.ReadKey();
}

And this is what we see on the console:

<?xml version="1.0" encoding="IBM437"?>
<MyData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Age>50</Age>
  <FirstName>Zachary</FirstName>
  <LastName>Smith</LastName>
  <Hobbies>
    <string>Mischief</string>
    <string>Sabotage</string>
  </Hobbies>
</MyData>

The Change

So we decided to track the hobbies as a list of strings. As always, things change and we have more information we need to store per-hobby. We create a custom “Hobby” object, add a List<Hobby> to our MyData class, and we obsolete the old “Hobbies” list to let developers know they shouldn’t use it going forward:

public class Hobby
{
    public string Name { get; set; }
    public int Frequency { get; set; }
    public int TimesCaught { get; set; }

    public override string ToString()
    {
        return this.Name;
    }
}

public class MyData
{
    public int Age { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    [Obsolete("Use HobbyData collection instead.")]
    public List<String> Hobbies { get; set; }
    public List<Hobby> HobbyData { get; set; }

    public MyData()
    {
        this.Hobbies = new List<string>();
        this.HobbyData = new List<Hobby>();
    }
}

Here’s the kicker: this serialization is done in another application. The consumers of the XML will be older clients (clients that expect only a “Hobbies” collection) as well as newer clients (that support the new “HobbyData” collection). This really shouldn’t be a problem – the Obsolete attribute is metadata for .NET compilers. Unfortunately, the XmlSerializer also looks at the compiler attribute to determine what items to serialize/deserialize.

Here’s an example of our problem:

static void Main(string[] args)
{
    var xml = @"<?xml version=""1.0"" encoding=""IBM437""?>
<MyData xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" xmlns:xsd=""http://www.w3.org/2001/XMLSchema"">
  <Age>50</Age>
  <FirstName>Zachary</FirstName>
  <LastName>Smith</LastName>
  <Hobbies>
    <string>Mischief</string>
    <string>Sabotage</string>
  </Hobbies>
</MyData>";
    var serializer = new XmlSerializer(typeof(MyData));
    var stream = new StringReader(xml);
    var data = (MyData) serializer.Deserialize(stream);

    if( data.Hobbies.Count != 2)
    {
        throw new ApplicationException("Hobbies did not deserialize properly");
    }
}

If you run the code above, you’ll hit the exception. Even though the XML contains a “<Hobbies>” node, the Obsolete attribute prevents the node from being processed. This will break old clients that use the new library but don’t yet access the HobbyData collection.

The Fix

The fix (in this case) isn’t too painful. The XmlSerializer exposes events for times when it runs into items (elements, attributes, nodes, etc…) it doesn’t know what to do with. We can hook in to those events and check whether we’re getting something that we want to support (like our “Hobbies” node). Here’s a way to read in the old XML data with full support of the new data structure (and keeping the Hobbies collection marked as obsolete):

static void Main(string[] args)
{
    var xml = @"<?xml version=""1.0"" encoding=""IBM437""?>
<MyData xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" xmlns:xsd=""http://www.w3.org/2001/XMLSchema"">
  <Age>50</Age>
  <FirstName>Zachary</FirstName>
  <LastName>Smith</LastName>
  <Hobbies>
    <string>Mischief</string>
    <string>Sabotage</string>
  </Hobbies>
</MyData>";
    var serializer = new XmlSerializer(typeof(MyData));
    serializer.UnknownElement += serializer_UnknownElement;
    var stream = new StringReader(xml);
    var data = (MyData)serializer.Deserialize(stream);

    if (data.Hobbies.Count != 2)
    {
        throw new ApplicationException("Hobbies did not deserialize properly");
    }
}

static void serializer_UnknownElement(object sender, XmlElementEventArgs e)
{
    if( e.Element.Name != "Hobbies")
    {
        return;
    }

    var target = (MyData) e.ObjectBeingDeserialized;
    foreach(XmlElement hobby in e.Element.ChildNodes)
    {
        target.Hobbies.Add(hobby.InnerText);
        target.HobbyData.Add(new Hobby{Name = hobby.InnerText});
    }
}

As you can see, we hook in to the “UnknownElement” event. Once we determine it’s our “Hobbies” node, we deserialize it ourselves – as well as populating the new HobbyData collection. In this case, we have a fairly simple solution to a small change in XML layout. If you make more extensive changes, it would probably be easier to do some custom serialization to support older data.

A sample project with all of this code is available from my repository on bitbucket.

Technorati Tags: XmlSerializer,Obsolete,.NET

    Read the article

  • Send Multiple InMemory Attachments Using FileUpload Controls

    - by bullpit
I wanted to give users an ability to send multiple attachments from the web application. I did not want anything fancy, just a few FileUpload controls on the page and then send the email. So I dropped five FileUpload controls on the web page and created a function to send email with multiple attachments. Here’s the code:

public static void SendMail(string fromAddress, string toAddress, string subject, string body, HttpFileCollection fileCollection)
{
    // CREATE THE MailMessage OBJECT
    MailMessage mail = new MailMessage();

    // SET ADDRESSES
    mail.From = new MailAddress(fromAddress);
    mail.To.Add(toAddress);

    // SET CONTENT
    mail.Subject = subject;
    mail.Body = body;
    mail.IsBodyHtml = false;

    // ATTACH FILES FROM HttpFileCollection
    for (int i = 0; i < fileCollection.Count; i++)
    {
        HttpPostedFile file = fileCollection[i];
        if (file.ContentLength > 0)
        {
            Attachment attachment = new Attachment(file.InputStream, Path.GetFileName(file.FileName));
            mail.Attachments.Add(attachment);
        }
    }

    // SEND MESSAGE
    SmtpClient smtp = new SmtpClient("127.0.0.1");
    smtp.Send(mail);
}

And here’s how you call the method:

protected void uxSendMail_Click(object sender, EventArgs e)
{
    HttpFileCollection fileCollection = Request.Files;
    string fromAddress = "[email protected]";
    string toAddress = "[email protected]";
    string subject = "Multiple Mail Attachment Test";
    string body = "Mail Attachments Included";
    HelperClass.SendMail(fromAddress, toAddress, subject, body, fileCollection);
}

    Read the article

  • Quick question: what exactly does robots.txt Disallow: /*/ do?

    - by Exit
An SEO firm suggested changing the robots.txt to:

User-agent: *
Disallow: /*/
Allow: /ims/

I'm not sure what that would do, but my guess is that it would tell all robots to index nothing but the ims folder. I understand the wildcard, but I'm confused by the slashes and don't know how they would play out in conjunction with the wildcard.

Update: I didn't mention that there is a sitemap listed in the robots.txt file, but according to one tech blogger, sitemaps trump robots exclusions. So, even though Google Webmaster Tools says that everything with a trailing slash will not be indexed, the sitemap contains the important links. I did notice that the link count on Google went from 360 to 336, and the sitemap links under the URL scaled back to 3 from 6. I'm not sure of the cause or which links were removed, though. Perhaps it cleaned out garbage. I'm still clueless as to why they would add 'Allow: /ims/'; that seems pointless. And here is a quick list of what would be indexed according to the robots rules above (without the sitemap), using /*/:

domain.com                    Indexed
domain.com/page.html          Indexed
domain.com/folder/            Not Indexed
domain.com/folder/page.html   Not Indexed

    Read the article

  • autostart apps with specific tags in awm

    - by nonsenz
While giving awm a try I encountered some problems. I want to autostart some apps with specific tags when awm starts. Here's the relevant config I use for that. First my tags with layouts:

tags = {
  names = {"mail", "www", "video", "files", 5, 6, 7, 8, 9},
  layout = {layouts[11], layouts[11], layouts[11], layouts[11], layouts[1], layouts[1], layouts[1], layouts[1], layouts[1]}
}
for s = 1, screen.count() do
  -- Each screen has its own tag table.
  tags[s] = awful.tag(tags.names, s, tags.layout)
end

Now the app autostart stuff:

awful.util.spawn("chromium-browser")
awful.util.spawn("firefox")
awful.util.spawn("vlc")
awful.util.spawn_with_shell("xterm -name files -e mc")
awful.util.spawn_with_shell("xterm -name 5term")
awful.util.spawn_with_shell("xterm -name 5term")
awful.util.spawn_with_shell("xterm -name 5term")
awful.util.spawn_with_shell("xterm -name 5term")
awful.util.spawn_with_shell("xfce4-power-manager")

I use xterm with the -name param to give them custom classes (for custom tags via rules). And now some rules to connect apps with tags:

awful.rules.rules = {
  -- All clients will match this rule.
  { rule = { },
    properties = { border_width = beautiful.border_width, border_color = beautiful.border_normal, focus = true, keys = clientkeys, buttons = clientbuttons } },
  { rule = { class = "MPlayer" }, properties = { floating = true } },
  { rule = { class = "pinentry" }, properties = { floating = true } },
  { rule = { class = "gimp" }, properties = { floating = true } },
  -- Set Firefox to always map on tags number 2 of screen 1.
  -- { rule = { class = "Firefox" },
  --   properties = { tag = tags[1][2] } },
  { rule = { class = "Firefox" }, properties = { tag = tags[1][2] } },
  { rule = { class = "Chromium-browser" }, properties = { tag = tags[1][1] } },
  { rule = { class = "Vlc"}, properties = { tag = tags[1][3] } },
  { rule = { class = "files"}, properties = { tag = tags[1][4] } },
  { rule = { class = "5term"}, properties = { tag = tags[1][5] } },
}

It works for Chromium, Firefox and VLC, but not for the xterms with the "-name" param. When I check the xterms with xprop after they start, I can see:

WM_CLASS(STRING) = "5term", "XTerm"

I think that should work, but the xterms are placed on the first workspace/tag.
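Update: one thing I noticed while re-reading the xprop output above: the first string in WM_CLASS is the instance name and the second is the class, so maybe the rules should match on instance instead of class (assuming awful.rules supports instance matching, which I'd expect). A sketch of what I mean:

{ rule = { instance = "5term" }, properties = { tag = tags[1][5] } }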

    Read the article

  • Double audio cd ripping weirdness

    - by jqno
Since I installed Ubuntu 12.04, Rhythmbox, Banshee and Sound Juicer have started acting weird around double CDs, and specifically disc #2 of said double CDs. Sometimes they will show the information of disc #1: track names, durations, and even the track count are incorrect. Sometimes they will first show the tracks for disc #1, then continue on to disc #2 if disc #2 has more tracks than disc #1. Sound Juicer seems to be unable to find any track durations at all, even for single CDs. Obviously, this is a pain when I'm trying to rip double CDs. And I have a fair number of them which I want to rip. This happens on both my machines (a slightly aging iMac and a 1-year-old Sony Vaio). However, on previous versions of Ubuntu this never happened. All on the same machines. So I suspect 12.04 is using a different library for extracting audio CD data. Just for kicks, I tried Linux Mint 13, and there it works correctly, even though it claims to be based on Ubuntu 12.04 and therefore should be using (partially) the same software. So if the Mint guys can fix it, I should be able to do it too, right? So, my question: what changed in 12.04 that could cause this? And more importantly: what can I do to fix it?

    Read the article

  • Why unhandled exceptions are useful

    - by Simon Cooper
It’s the bane of most programmers’ lives – an unhandled exception causes your application or webapp to crash, an ugly dialog gets displayed to the user, and they come complaining to you. Then, somehow, you need to figure out what went wrong. Hopefully, you’ve got a log file, or some other way of reporting unhandled exceptions (obligatory employer plug: SmartAssembly reports an application’s unhandled exceptions straight to you, along with the entire state of the stack and variables at that point). If not, you have to try and replicate it yourself, or do some psychic debugging to try and figure out what’s wrong.

However, it’s good that the program crashed. Or, more precisely, it is correct behaviour. An unhandled exception in your application means that, somewhere in your code, there is an assumption that you made that is actually invalid.

Coding assumptions

Let me explain a bit more. Every method, every line of code you write, depends on implicit assumptions that you have made. Take the following simple method, which copies a collection to an array and includes an item if it isn’t in the collection already, using a supplied IEqualityComparer:

public static T[] ToArrayWithItem<T>(ICollection<T> coll, T obj, IEqualityComparer<T> comparer)
{
    // check if the object is in the collection already
    // using the supplied comparer
    foreach (var item in coll)
    {
        if (comparer.Equals(item, obj))
        {
            // it's in the collection already;
            // simply copy the collection to an array and return it
            T[] array = new T[coll.Count];
            coll.CopyTo(array, 0);
            return array;
        }
    }

    // not in the collection:
    // copy coll to an array, add obj to it, then return it
    T[] result = new T[coll.Count + 1];
    coll.CopyTo(result, 0);
    result[result.Length - 1] = obj;
    return result;
}

What are all the assumptions made by this fairly simple bit of code?

- coll is never null
- comparer is never null
- coll.CopyTo(array, 0) will copy all the items in the collection into the array, in the order defined for the collection, starting at the first item in the array
- The enumerator for coll returns all the items in the collection, in the order defined for the collection
- comparer.Equals returns true if the items are equal (for whatever definition of ‘equal’ the comparer uses), false otherwise
- comparer.Equals, coll.CopyTo, and the coll enumerator will never throw an exception or hang for any possible input and any possible values of T
- coll will have less than 4 billion items in it (this is a built-in limit of the CLR)
- the array won’t be more than 2GB, both on 32 and 64-bit systems, for any possible values of T (again, a limit of the CLR)
- There are no threads that will modify coll while this method is running

and, more esoterically:

- The C# compiler will compile this code to IL according to the C# specification
- The CLR and JIT compiler will produce machine code to execute the IL on the user’s computer
- The computer will execute the machine code correctly

That’s a lot of assumptions. Now, it could be that all these assumptions are valid for the situations this method is called in. But if this does crash out with an exception, or crash later on, then that shows one of the assumptions has been invalidated somehow. An unhandled exception shows that your code is running in a situation which you did not anticipate, and there is something about how your code runs that you do not understand. Debugging the problem is the process of learning more about the new situation and how your code interacts with it. When you understand the problem, the solution is (usually) obvious.
The solution may be a one-line fix, the rewrite of a method or class, or a large-scale refactoring of the codebase, but whatever it is, the fix for the crash will incorporate the new information you’ve gained about your own code, along with the modified assumptions.

When code is running with an assumption or invariant it depended on broken, the result is ‘undefined behaviour’. Anything can happen, up to and including formatting the entire disk or making the user’s computer sentient and starting to do a good impression of Skynet. You might think that those can’t happen, but at Halting-problem levels of generality, as soon as an assumption the code depended on is broken, the program can do anything. That is why it’s important to fail fast and stop the program as soon as an invariant is broken, to minimise the damage that is done.

What does this mean in practice?

To start with, document and check your assumptions. As with most things, there is a level of judgement required. How you check and document your assumptions depends on how the code is used (that’s some more assumptions you’ve made), how likely it is a method will be passed invalid arguments or called in an invalid state, how likely it is the assumptions will be broken, how expensive it is to check the assumptions, and how bad things are likely to get if the assumptions are broken.

Now, some assumptions you can take for granted unless proven otherwise. You can safely assume the C# compiler, CLR, and computer all run the method correctly, unless you have evidence of a compiler, CLR or processor bug. You can also assume that interface implementations work the way you expect them to; implementing an interface is more than simply declaring methods with certain signatures in your type. The behaviour of those methods, and how they work, is part of the interface contract as well.

For members of a public API, it is very important to document your assumptions and check your state before running the bulk of the method, throwing ArgumentException, ArgumentNullException, InvalidOperationException, or another exception type as appropriate if the input or state is wrong. For internal and private methods, it is less important. If a private method expects collection items in a certain order, then you don’t necessarily need to explicitly check it in code, but you can add comments or documentation specifying what state you expect the collection to be in at a certain point. That way, anyone debugging your code can immediately see what’s wrong if this does ever become an issue. You can also use DEBUG preprocessor blocks and Debug.Assert to document and check your assumptions without incurring a performance hit in release builds.
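For example, a minimal sketch of that technique (the method and its sortedness assumption are made up for illustration):

using System.Collections.Generic;
using System.Diagnostics;

static void ProcessSortedItems(IList<int> items)
{
    // assumption: callers pass the items already sorted ascending
#if DEBUG
    for (int i = 1; i < items.Count; i++)
    {
        Debug.Assert(items[i - 1] <= items[i], "items must be sorted ascending");
    }
#endif
    // ... bulk of the method runs trusting that assumption ...
}

Debug.Assert is marked [Conditional("DEBUG")], so the checks compile away entirely in release builds.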
On my coding soapbox…

A few pet peeves of mine around assumptions. Firstly, catch-all try blocks:

try {
    ...
} catch { }

A catch-all hides exceptions generated by broken assumptions, and lets the program carry on in an unknown state. Later, an exception is likely to be generated due to further broken assumptions caused by the unknown state, making debugging difficult because the catch-all has hidden the original problem. It’s much better to let the program crash straight away, so you know where the problem is. You should only use a catch-all if you are sure that any exception generated in the try block is safe to ignore. That’s a pretty big ask!

Secondly, using as when you should be casting. Doing this:

(obj as IFoo).Method();

or this:

IFoo foo = obj as IFoo;
...
foo.Method();

when you should be doing this:

((IFoo)obj).Method();

or this:

IFoo foo = (IFoo)obj;
...
foo.Method();

There’s an assumption here that obj will always implement IFoo. If it doesn’t, then by using as instead of a cast you’ve turned an obvious InvalidCastException at the point of the cast, which would probably tell you what type obj actually is, into a non-obvious NullReferenceException at some later point that gives you no information at all. If you believe obj is always an IFoo, then say so in code! Let it fail fast if not; then it’s far easier to figure out what’s wrong.

Thirdly, document your assumptions. If an algorithm depends on a non-trivial relationship between several objects or variables, then say so. A single-line comment will do. Don’t leave it up to whoever’s debugging your code after you to figure it out.

Conclusion

It’s better to crash out and fail fast when an assumption is broken. If you don’t, then there are likely to be further crashes along the way that hide the original problem. Or, even worse, your program will be running in an undefined state, where anything can happen. Unhandled exceptions aren’t good per se, but they give you some very useful information about your code that you didn’t know before. And that can only be a good thing.

    Read the article

  • What are the boundaries of the product owner in scrum?

    - by Saeed Neamati
In another question, I asked why I feel scrum turns active developers into passive developers, and it seems that the overall problem is not scrumy (related to scrum), but rather related to a bad implementation of scrum. So, here I have some questions about the scope of the responsibilities of the PO (product owner) and the limits he/she shouldn't pass.

Should the PO interfere in UI design, when there are designers at work in the scrum team? (An example of this which has happened to us is replacing checkboxes with a drop-down list with two items, namely yes and no; or making some boxes larger, or left-aligning some content instead of centering it on the page, or stuff like that.) If yes, to what extent? Colors? Layout?

Should the PO interfere in the design and architecture of the code? This hasn't happened to us yet, but I'm really curious about the boundaries. For example, does the PO have the right to change the platform (moving from ASP.NET MVC to PHP, or something like that), or to choose the number of servers (tier architecture), etc.?

Should the PO interfere in validation mechanisms? For example: this field should be required, or we don't need to get this piece of information from the user. Sometimes analyzers and designers confirm that something can be handled behind the scenes, like extracting the user profile info from another source instead of asking for it in the UI.

How granular could/should the PO get in the analysis and design? For example, a user story might be: "As a customer, I'd like to be able to buy new domains online". However, the scrum team can implement this user story as a wizard of five steps, or as one single page. To what level should the PO monitor, govern, or supervise the technical analysis, design, and implementation?

I ask these questions to judge whether our implementation is right or wrong.

    Read the article

  • XNA C#: how to draw text in different colors

    - by XNA newbie
I'm doing a simple chat system with XNA C#. It is a chatbox that contains 5 lines of chat typed by the users, something like an MMORPG chatting system:

[User1name] says: Hi
[User2name] says: Hello
[User1name] says: What are you doing?
[User2name] says: I'm fine
[System] The time is now 1:03AM.

When the user presses ENTER, the text he entered is added to an ArrayList:

chatList.Add(s);

For displaying the text he entered, I use:

for (int i = 0; i < chatList.Count(); i++)
{
    spriteBatch.DrawString(font, chatList[i], new Vector2(40, arr1[i]), Color.Yellow);
}

(arr1[i] contains 5 y-axis points to print my 5 lines of chat in the chatbox.)

Question 1: What if I have another type of message which will be added to the chatList (something like a system message)? I need the system message to be printed out in red. And if the user keeps on chatting, the chatbox will be updated accordingly (max 5 lines): the newest chat will be shown below, and the oldest one will be deleted once the max of 5 lines is reached.

[User2name] says: Hello
[User1name] says: What are you doing?
[User2name] says: I'm fine
[System] The time is now 1:03AM.
[User1name] says: Ok, great to hear that!

I'm having trouble printing each line's color according to its message type. For normal messages it should be yellow; for system messages it should be red.

Question 2: And for the next problem, I need the chat text to be white, while the names of the users are yellow (like the Warcraft III chat system). How do I do that? I'm having a hard time thinking of a solution for these to work. Advice needed.
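I was thinking something like this might work (a rough sketch; ChatLine is a type I made up, and chatLines would replace my ArrayList): store a color with each message and pass it to DrawString, drawing the name and the body as two calls with different colors:

struct ChatLine
{
    public string Name;   // e.g. "[User1name] says:"
    public string Text;   // the message body
    public Color Color;   // Color.Red for system messages, Color.White for chat text, etc.
}

// inside Draw(), assuming chatLines is a List<ChatLine>:
for (int i = 0; i < chatLines.Count; i++)
{
    ChatLine line = chatLines[i];
    Vector2 pos = new Vector2(40, arr1[i]);
    spriteBatch.DrawString(font, line.Name, pos, Color.Yellow);
    // offset the body by the measured width of the name, then draw it in the line's own color
    pos.X += font.MeasureString(line.Name).X;
    spriteBatch.DrawString(font, line.Text, pos, line.Color);
}

font.MeasureString returns the rendered size of a string, which would let the white body text start right after the yellow name. But I'm not sure if this is the right approach.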

    Read the article

  • Google+ Platform Office Hours for April 4th 2012: Open Q&A

We hold weekly Google+ Platform Office Hours using Hangouts On Air most Wednesdays from 11:30am until 12:15pm PST. This week we opened the session up to your questions about the Google+ platform. Here's a list of the topics we addressed:

1:40 - HTTPS and hangout apps
4:48 - The Google+ badge on Blogger
6:51 - Warnings logged to the console by the +1 button
7:57 - +1 button count discrepancies between the button, Google Analytics and Google Webmaster Tools
11:04 - Using Google+ to identify users on an external website (our starter projects include this functionality; you can find them here: developers.google.com)
14:12 - When will the feature I want be released?
16:05 - Redirecting your domain to your Google+ Page (Jenny mentions a blog entry about redirecting to your Google+ profile: goo.gl)
17:30 - Pulling public Google+ activity from your Google+ Page into your website (the starter projects also demonstrate this functionality: developers.google.com)
19:43 - Integrating the Google+ badge with Google Analytics tracking (Oops! Jenny mentions callbacks. She was in error: the +1 button provides callbacks but the badge does not at this time. Sorry about that.)

Discuss this video on Google+: goo.gl
Learn more about our Office Hours: developers.google.com

    Read the article

  • How do I fix a garbled screen on a Gateway LT3103u?

    - by paracaudex
I've been having garbled screen problems on a Gateway LT3103u running Ubuntu for a while. I just did a fresh install of Ubuntu 11.10 and continue to have issues. I installed xubuntu-desktop in case the issues had to do with the sophisticated GNOME graphics. The problem is less bad, but it's still there: after a few minutes of using XFCE, the screen gets garbled. I assume this has something to do with the graphics card, but I don't know how to go about troubleshooting something like this. Where should I start?

Update: Here is the description of the VGA card from lspci -vvv:

01:05.0 VGA compatible controller: ATI Technologies Inc RS690M [Radeon X1200 Series] (prog-if 00 [VGA controller])
    Subsystem: Acer Incorporated [ALI] Device 028c
    Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast TAbort- SERR- [disabled]
    Capabilities: [50] Power Management version 2
        Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
        Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
    Capabilities: [80] MSI: Enable- Count=1/1 Maskable- 64bit+
        Address: 0000000000000000  Data: 0000
    Kernel driver in use: radeon
    Kernel modules: radeon

Update: Setting GRUB_CMDLINE_LINUX="nomodeset" in /etc/default/grub (and running sudo update-grub afterwards) seems to have fixed it in both Ubuntu and xubuntu-desktop. I will test it for a day or so to see if the problems recur, and then post more detail with some links to an explanation.

Update 2: Is it possible to use this fix for an Nvidia card (GTX 260) when graphics are defective after an 11.10 upgrade/install? For the first few restarts the graphics were OK; then, after a few restarts, they suddenly became defective and stayed that way. I had to return to 11.04 because of this problem, and I'm waiting for 12.04. So I hope this fix works.

    Read the article

  • Linqpad and StreamInsight

Slightly before the announcement of StreamInsight being available for LINQPad, I downloaded it from here. I had seen Roman Schindlauer demonstrate it at TechEd, and it looked a really good tool for doing some StreamInsight dev. You will need .NET 4.0 and StreamInsight installed. Here's what you need to do after downloading and installing LINQPad:

Add a new connection.
The next thing we need to do is install and enable the StreamInsight driver: choose to view more drivers.
Choose StreamInsight.
Select the driver after install.
I have chosen the Default Context.

And after all that I can finally get to writing my query. This is a very simple query where I turn a collection (IEnumerable) into a PointStream. After doing that, I create 30-minute windows over the stream before outputting the count of events in each of those windows to the result window.

I have played with LINQPad only a little, but I think it is going to be a really good tool to get ideas developed quickly. I have also enabled autocompletion (paid £25) and I recommend it.
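The query text itself isn't pasted above, so here is a rough sketch of the shape of such a query in the StreamInsight LINQPad driver (API names are from memory of StreamInsight 1.x; treat the exact signatures, and the Timestamp property on the source elements, as assumptions):

// turn an IEnumerable into a point stream, then count events per 30-minute tumbling window
var input = source.ToPointStream(Application,
    e => PointEvent.CreateInsert(e.Timestamp, e),
    AdvanceTimeSettings.IncreasingStartTime);

var query = from win in input.TumblingWindow(TimeSpan.FromMinutes(30),
                                             HoppingWindowOutputPolicy.ClipToWindowEnd)
            select win.Count();

query.Dump(); // LINQPad renders the results in the result window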

    Read the article

  • Unable to mount external hard drive - Damaged file system and MFT

    - by Khalifa Abbas Lame
I get the following error when I try to mount my external hard drive:

UNABLE TO MOUNT
Error mounting /dev/sdc1 at /media/khalibloo/Khalibloo2:
Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sdc1" "/media/khalibloo/Khalibloo2"' exited with non-zero exit status 13:
ntfs_attr_pread_i: ntfs_pread failed: Input/output error
Failed to read of MFT, mft=6 count=1 br=-1: Input/output error
Failed to open inode FILE_Bitmap: Input/output error
Failed to mount '/dev/sdc1': Input/output error
NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details.

It doesn't mount on Windows either: "I/O Device error". It's an NTFS hard drive with a single partition.

Of course, I tried chkdsk /f. It reported several file segments as unreadable, but didn't say whether it fixed them or not (apparently not). I also tried with the /b flag. ntfsfix reported the volume as corrupt. TestDisk was able to fix a small error with the partition table by adding the "80" flag for the active (only) partition. TestDisk also confirmed that the boot sector was fine and that it matched the backup. However, when attempting to repair the MFT, it couldn't read the MFT. It also couldn't list the files on the hard drive. It says the file system may be damaged. Active@ also shows that the MFT is missing or corrupt. So how do I fix the file system? Or the MFT?

    Read the article

  • .NET Oracle parameter order

    - by jkrebsbach
Using the ODAC (Oracle Data Access Components) downloaded from Oracle to talk to a handful of Oracle DBs. I was putting together my DAL to update the DB, and things weren't working as I hoped:

UPDATE foo SET bar = :P_BAR WHERE bap = :P_BAP

I assign my parameters:

objCmd.Parameters.Add(objBap);
objCmd.Parameters.Add(objBar);

Execute the update command:

int result = objCmd.ExecuteNonQuery();

and result is zero! ... Is my filter incorrect?

SELECT count(*) FROM foo WHERE bap = :P_BAP

...result is one... Is my new value incorrect? Am I using Char instead of Varchar somewhere and need an RTRIM? Is there a transaction involved? An error thrown and not caught?

The answer: the order of parameters. The order in which parameters are added to the Oracle command object must match the order the parameters are referenced in the SQL statement. I was adding the parameters for the WHERE clause before adding the SET value parameters, and for that reason, although no error was being thrown, no value was updated either. Flip the parameter collection around to match the order of the params in the SQL statement, and ExecuteNonQuery() is back to returning the number of rows affected.
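For what it's worth, ODP.NET binds parameters by position by default, which is why the order matters here. An alternative worth knowing about: OracleCommand exposes a BindByName property, so you can opt in to name-based binding and make the collection order irrelevant (a small sketch against the same command as above):

// switch the command from positional to named binding
objCmd.BindByName = true;
objCmd.Parameters.Add(objBap); // now matched to :P_BAP by name
objCmd.Parameters.Add(objBar); // now matched to :P_BAR by name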

    Read the article

  • What data to send when tracking clicks with Google Analytics events (and how)?

    - by user359650
When tracking clicks on links, there are 3 items I'm interested in:

- link location in the page, by grabbing the id of the closest parent: to see the influence of location on click-through
- link text: to see the influence of text on click-through
- link href attribute value: to see where people go when leaving my website

The problem when using Google Analytics to track those clicks is that events only have 3 available text fields, one of which is the category; if you use it to store one of the above items, you will create a mess in your event reporting because you will have as many categories as item values. Therefore, if you assign a predefined value to the category (e.g. clicks), then you're left with only 2 event fields (action, label) to store 3 items (location, text, href). That in itself isn't the end of the world, because you can concatenate 2 items into 1 event field and then use the reporting or the API to filter things out. Accordingly, what I plan on doing is this:

category: clicks
action: {location_on_page} ¦ {text}
label: {href}

where {__} are variable values related to the clicked links. With this I can easily create some reports directly via the GUI:

- downloads: include only events where the label ends with .pdf
- click-outs to particular domains: include only events where the label contains the domain

And for more complex tasks I need to export the data (or use the API):

- influence of location on clicks: for each location in the design, count the number of events that have that location in the action, then corroborate with pageviews of the corresponding pages

Whilst this looks good, I'm wondering if there is a better approach, hence the following questions:

Q1: Can you foresee any particular issues with this particular setup (e.g. things I won't be able to report on)?
Q2: Can you think of other data that would be interesting to include in the event?

    Read the article

  • MapReduce

    - by kaleidoscope
MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real-world tasks are expressible in this model, as shown in the paper. Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system.

Example: a process to count the appearances of each different word in a set of documents.

void map(String name, String document):
  // name: document name
  // document: document contents
  for each word w in document:
    EmitIntermediate(w, 1);

void reduce(String word, Iterator partialCounts):
  // word: a word
  // partialCounts: a list of aggregated partial counts
  int result = 0;
  for each pc in partialCounts:
    result += ParseInt(pc);
  Emit(result);

Here, each document is split into words, and each word is counted initially with a "1" value by the map function, using the word as the result key. The framework puts together all the pairs with the same key and feeds them to the same call to reduce, thus this function just needs to sum all of its input values to find the total appearances of that word.

Sarang, K
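As a quick illustration (my own sketch, not from the paper), the same word count can be expressed single-machine in C# with LINQ, which mirrors the map, shuffle and reduce phases:

using System;
using System.Linq;

class WordCount
{
    static void Main()
    {
        string[] documents = { "the quick brown fox", "the lazy dog" };

        var counts = documents
            .SelectMany(doc => doc.Split(' '))                     // map: emit each word
            .GroupBy(word => word)                                 // shuffle: group by intermediate key
            .Select(g => new { Word = g.Key, Count = g.Count() }); // reduce: sum per key

        foreach (var c in counts)
            Console.WriteLine($"{c.Word}: {c.Count}");
    }
}

The difference, of course, is that MapReduce runs these phases partitioned across a cluster, with the framework handling distribution and failures.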

    Read the article

  • Alternate method to dependent, nested if statements to check multiple states

    - by octopusgrabbus
Is there an easier way to process multiple true/false states than using nested if statements? I think there is: create a sequence of states, and then use a function like when to determine if all states were true, and drop out if not. I am asking the question to make sure there is not a preferred Clojure way to do this.

Here is the background of my problem: I have an application that depends on quite a few input files. The application depends on .csv data reports; column headers for each report (.csv files also), so each sequence in the sequence of sequences can be zipped together with its columns for the purposes of creating a smaller sequence; and column files for output data.

I use the following functions to find out if a file is present:

(defn kind [filename]
  (let [f (File. filename)]
    (cond
      (.isFile f) "file"
      (.isDirectory f) "directory"
      (.exists f) "other"
      :else "(cannot be found)")))

(defn look-for [filename expected-type]
  (let [find-status (kind-stat filename expected-type)]
    find-status))

And here are the first few lines of a multiple if which looks ugly and is hard to maintain:

(defn extract-re-values
  "Plain old-fashioned sub-routine to process real-estate values / 3rd Q re bills extract."
  [opts]
  (if (= (utl/look-for (:ifm1 opts) "f") 0) ; got re columns?
    (if (= (utl/look-for (:ifn1 opts) "f") 0) ; got re data?
      (if (= (utl/look-for (:ifm3 opts) "f") 0) ; got re values output columns?
        (if (= (utl/look-for (:ifm4 opts) "f") 0) ; got re_mixed_use_ratio columns?
          (let [re-in-col-nams (first (utl/fetch-csv-data (:ifm1 opts)))
                re-in-data (utl/fetch-csv-data (:ifn1 opts))
                re-val-cols-out (first (utl/fetch-csv-data (:ifm3 opts)))
                mu-val-cols-out (first (utl/fetch-csv-data (:ifm4 opts)))
                chk-results (utl/chk-seq-len re-in-col-nams (first re-in-data) re-rec-count)]

I am not looking for a discussion of the best way, but of what is in Clojure that facilitates solving a problem like this.

    Read the article
