Search Results

Search found 6189 results on 248 pages for 'garbage collection'.

Page 43/248 | < Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >

  • c#: Clean way to fit a collection into a multidimensional array?

    - by Rosarch
    I have an ICollection<MapNode>. Each MapNode has a Position attribute, which is a Point. I want to sort these points first by Y value, then by X value, and put them in a multidimensional array (MapNode[,]). The collection would look something like this:

        (30, 20) (20, 20) (20, 30) (30, 10) (30, 30) (20, 10)

    And the final product:

        (20, 10) (20, 20) (20, 30) (30, 10) (30, 20) (30, 30)

    Here is the code I have come up with to do it. Is this hideously unreadable? I feel like it's more hacky than it needs to be.

        private Map createWorldPathNodes()
        {
            ICollection<MapNode> points = new HashSet<MapNode>();
            Rectangle worldBounds = WorldQueryUtils.WorldBounds();

            for (float x = worldBounds.Left; x < worldBounds.Right; x += PATH_NODE_CHUNK_SIZE)
            {
                for (float y = worldBounds.Y; y > worldBounds.Height; y -= PATH_NODE_CHUNK_SIZE)
                {
                    // default is that everywhere is navigable;
                    // a different function is responsible for determining the real value
                    points.Add(new MapNode(true, new Point((int)x, (int)y)));
                }
            }

            int distinctXValues = points.Select(node => node.Position.X).Distinct().Count();
            int distinctYValues = points.Select(node => node.Position.Y).Distinct().Count();

            IList<MapNode[]> mapNodeRowsToAdd = new List<MapNode[]>();
            while (points.Count > 0) // every iteration will take a row out of points
            {
                // get all the nodes with the greatest Y value currently in the collection
                int currentMaxY = points.Select(node => node.Position.Y).Max();
                ICollection<MapNode> ythRow = points.Where(node => node.Position.Y == currentMaxY).ToList();

                // remove these nodes from the pool we're picking from
                points = points.Where(node => !ythRow.Contains(node)).ToList(); // ToList() is just so it is still a collection

                // put the nodes with max y value in the array, sorting by X value
                mapNodeRowsToAdd.Add(ythRow.OrderByDescending(node => node.Position.X).ToArray());
            }

            MapNode[,] mapNodes = new MapNode[distinctXValues, distinctYValues];
            int xValuesAdded = 0;
            int yValuesAdded = 0;
            foreach (MapNode[] mapNodeRow in mapNodeRowsToAdd)
            {
                xValuesAdded = 0;
                foreach (MapNode node in mapNodeRow)
                {
                    // [y, x] may seem backwards, but mapNodes[y] == the yth row
                    mapNodes[yValuesAdded, xValuesAdded] = node;
                    xValuesAdded++;
                }
                yValuesAdded++;
            }
            return pathNodes;
        }

    The above function seems to work pretty well, but it hasn't been subjected to bulletproof testing yet.
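
    One shorter alternative, sketched below rather than taken from the question, is to group the nodes with LINQ instead of repeatedly filtering the source collection. It assumes the MapNode and Point types from the question and that every (X, Y) combination occurs exactly once; the ordering calls can be flipped to match whichever row/column layout is wanted.

        using System.Collections.Generic;
        using System.Linq;

        static class MapNodeGrid
        {
            // Hypothetical helper: one group per distinct Y value (a row), rows ordered by Y,
            // nodes inside each row ordered by X. Assumes a complete, rectangular grid.
            public static MapNode[,] ToGrid(ICollection<MapNode> points)
            {
                MapNode[][] rows = points
                    .GroupBy(n => n.Position.Y)
                    .OrderBy(g => g.Key)
                    .Select(g => g.OrderBy(n => n.Position.X).ToArray())
                    .ToArray();

                var grid = new MapNode[rows.Length, rows[0].Length];
                for (int y = 0; y < rows.Length; y++)
                    for (int x = 0; x < rows[y].Length; x++)
                        grid[y, x] = rows[y][x];   // row index first, as in the question
                return grid;
            }
        }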

    Read the article

  • How does one bind to a List<DataRow> collection in WPF XAML?

    - by Elan
    Using a DataTable or DataView, one can specify the binding in XAML, for example, as follows: <Image Source="{Binding Thumbnail}" /> In my case I have created a List collection of DataRow objects and the above is not working. This is my data source: List<DataRow> I could of course convert the List<DataRow> to a DataTable, but I am curious whether there is a way to specify the binding in XAML to access the "Thumbnail" column within the DataRow that is stored in a List<DataRow> collection. Elan
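
    One workaround, sketched here rather than taken from an accepted answer, is to turn the List<DataRow> back into a bindable DataView with CopyToDataTable(); the resulting DataRowView items expose their columns to WPF binding by name again, so {Binding Thumbnail} keeps working. The helper name is hypothetical.

        using System.Collections.Generic;
        using System.Data;   // CopyToDataTable() comes from the System.Data.DataSetExtensions assembly

        public static class DataRowListBinding
        {
            // Copies the rows into a new DataTable with the same schema and returns its view.
            // Note: CopyToDataTable() requires at least one row in the source.
            public static DataView ToBindableView(IEnumerable<DataRow> rows)
            {
                return rows.CopyToDataTable().DefaultView;
            }
        }

    The view can then be assigned to the control's ItemsSource (or DataContext) from code-behind or a view model, and the original column-name bindings stay unchanged.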

    Read the article

  • Can 'locals' be used with 'collection' when rendering partials in Rails?

    - by Gav
    Everything works okay when I try to render a partial like this: = render :partial => "/shared/enquiry/car_type", :collection => @enquiry.available_car_types However, if I also want to pass a variable (in this case 'path', because I'm sharing this partial across two forms), the path is not available to me: = render :partial => "/shared/enquiry/car_type", :collection => @enquiry.available_car_types, :locals => {:path => customers_enquiry_path} I've tried moving things around, but nothing appears to work, leading me to believe one cannot use locals with collections. Any help would be appreciated. Gav

    Read the article

  • Why am I getting "Collection was modified; enumeration operation may not execute" when not modifying

    - by ccornet
    I have two collections of strings: CollectionA is a StringCollection property of an object stored in the system, while CollectionB is a List generated at runtime. CollectionA needs to be updated to match CollectionB if there are any differences. So I devised what I expected to be a simple LINQ method to perform the removal.

        var strDifferences = CollectionA.Where(foo => !CollectionB.Contains(foo));
        foreach (var strVar in strDifferences)
        {
            CollectionA.Remove(strVar);
        }

    But I am getting a "Collection was modified; enumeration operation may not execute" error on strDifferences... even though it is a separate enumerable from the collection being modified! I originally devised this explicitly to evade this error, as my first implementation would produce it (as I was enumerating across CollectionA and just removing when !CollectionB.Contains(str)). Can anyone shed some insight into why this enumeration is failing?
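
    A likely explanation is deferred execution rather than the filtering itself: strDifferences is a lazy query over CollectionA, so the foreach re-reads CollectionA while Remove() is mutating it. A minimal sketch of the usual fix, reusing the CollectionA/CollectionB variables from the question, is to snapshot the query before removing anything:

        // Materialize the differences up front so the query is no longer tied to CollectionA
        // while the loop below mutates it. Cast<string>() is needed if CollectionA really is
        // a non-generic StringCollection. (Requires using System.Linq.)
        var strDifferences = CollectionA
            .Cast<string>()
            .Where(s => !CollectionB.Contains(s))
            .ToList();

        foreach (var str in strDifferences)
        {
            CollectionA.Remove(str);
        }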

    Read the article

  • How to give a custom ASP.NET control the ability to parse XML markup to a collection?

    - by Vilx-
    I'm writing a custom ASP.NET web control and would like it to have a collection of custom items which can also be specified in the XML markup. Something like this:

        class MyControl : WebControl
        {
            public IList<MyItemType> MyItems { get; private set; }
        }

    And in the markup:

        <asd:MyControl runat="server" id="mc1">
            <MyItems>
                <MyDerivedCustomItem asd="dsa"/>
                <MyOtherDerivedCustomItem asd="dsa"/>
            </MyItems>
        </asd:MyControl>

    How do I do this? I thought this was all about implementing some interface on the collection or adding some special attributes to the property, but nothing I do seems to work.
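
    The usual pattern is indeed attribute-based. A sketch under the question's MyControl/MyItemType names (not a tested drop-in): mark the control with ParseChildren(true) and the collection as an inner property, so the page parser fills it from the nested markup. Differently-typed child elements such as MyDerivedCustomItem generally also need their tag prefix registered so the parser can resolve the element names to types.

        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        [ParseChildren(true)]          // nested elements are treated as property values, not child controls
        [PersistChildren(false)]
        public class MyControl : WebControl
        {
            private readonly List<MyItemType> _myItems = new List<MyItemType>();

            // <MyItems> ... </MyItems> in the markup maps onto this read-only property.
            [PersistenceMode(PersistenceMode.InnerProperty)]
            [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
            public List<MyItemType> MyItems
            {
                get { return _myItems; }
            }
        }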

    Read the article

  • Reflection PropertyInfo GetValue call errors out for Collection<> type property.

    - by Vinit Sankhe
    Hey guys, I have a PropertyInfo object and I try to do a GetValue using it:

        object source = mysourceObject; // This object has a property "Prop1" of type Collection<>.
        var propInfo = source.GetType().GetProperty("Prop1");
        var propValue = prop.GetValue(this, null);
        // do whatever with propValue
        // ...

    I get an error at the GetValue() call: "Value cannot be null.\r\nParameter name: source". "Prop1" is a plain property declared as Collection. In the debugger:

        prop.PropertyType = {Name = "Collection`1" FullName = "System.Collections.ObjectModel.Collection`1[[Application1.DummyClass, Application1, Version=1.5.5.5834, Culture=neutral, PublicKeyToken=628b2ce865838339]]"}  System.Type {System.RuntimeType}
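
    This is not taken from an answer, but one likely cause worth checking: the PropertyInfo was looked up on source's type while the call passes this as the target. Unless this really is an instance of that type with Prop1 initialized, the getter has nothing valid to read. A hedged sketch of the safer call, using the names from the question:

        // Target the same instance the PropertyInfo came from.
        object source = mysourceObject;
        var propInfo = source.GetType().GetProperty("Prop1");

        // Pass 'source', not 'this'; the second argument stays null because the
        // property is not an indexer.
        var propValue = propInfo.GetValue(source, null);

        // The Collection<> may still be null if it was never assigned, so check before use.
        if (propValue == null)
        {
            // handle or initialize the empty case here
        }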

    Read the article

  • .NET Extension Objects with XSLT -- how to iterate over a collection?

    - by Pandincus
    Help me, Stack Overflow! I have a simple .NET 3.5 console app that reads some data and sends emails. I'm representing the email format in an XSLT stylesheet so that we can easily change the wording of the email without needing to recompile the app. We're using Extension Objects to pass data to the XSLT when we apply the transformation:

        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                        xmlns:msxsl="urn:schemas-microsoft-com:xslt"
                        exclude-result-prefixes="msxsl"
                        xmlns:EmailNotification="ext:EmailNotification">

    This way, we can have statements like:

        <p>
          Dear <xsl:value-of select="EmailNotification:get_FullName()" />:
        </p>

    The above works fine. I pass the object via code like this (some irrelevant code omitted for brevity):

        // purely an example structure
        public struct EmailNotification
        {
            public string FullName { get; set; }
        }

        // Somewhere in some method ...
        var notification = new Notification("John Smith");
        // ...
        XsltArgumentList xslArgs = new XsltArgumentList();
        xslArgs.AddExtensionObject("ext:EmailNotification", notification);
        // ...
        // The part where it breaks! (This is where we do the transformation)
        xslt.Transform(fakeXMLDocument.CreateNavigator(), xslArgs, XmlWriter.Create(transformedXMLString));

    So, all of the above code works. However, I wanted to get a little fancy (always my downfall) and pass a collection, so that I could do something like this:

        <p>The following accounts need to be verified:</p>
        <xsl:for-each select="EmailNotification:get_SomeCollection()">
          <ul>
            <li>
              <xsl:value-of select="@SomeAttribute" />
            </li>
          </ul>
        </xsl:for-each>

    When I pass the collection in the extension object and attempt to transform, I get the following error: "Extension function parameters or return values which have Clr type 'String[]' are not supported." - or List, or IEnumerable, or whatever I try to pass in. So, my questions are: How can I pass a collection to my XSLT? What do I put in the xsl:value-of select="" inside the xsl:for-each? Is what I am trying to do impossible?
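
    The processor only accepts XPath types (node-sets, strings, numbers, booleans) from extension functions, which is why CLR arrays and lists are rejected. One workaround, sketched here with hypothetical names rather than the poster's real classes, is to expose the collection as an XPathNodeIterator built from a small throwaway XML fragment; the stylesheet can then use it directly in xsl:for-each and read @SomeAttribute from each node.

        using System.Xml;
        using System.Xml.XPath;

        public class EmailNotificationExtension
        {
            private readonly string[] accounts;   // the values we want to loop over in XSLT

            public EmailNotificationExtension(string[] accounts)
            {
                this.accounts = accounts;
            }

            // Called from the stylesheet as EmailNotification:GetSomeCollection()
            // (or get_SomeCollection() if it is exposed as a property instead).
            public XPathNodeIterator GetSomeCollection()
            {
                var doc = new XmlDocument();
                doc.LoadXml("<items/>");
                foreach (string account in accounts)
                {
                    XmlElement item = doc.CreateElement("item");
                    item.SetAttribute("SomeAttribute", account);
                    doc.DocumentElement.AppendChild(item);
                }
                // Returns the <item> elements as a node-set the XSLT engine understands.
                return doc.CreateNavigator().Select("/items/item");
            }
        }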

    Read the article

  • Is it reasonable for REST resources to be singular and plural?

    - by Evan
    I have been wondering: rather than a more traditional layout like this:

        api/Products
            GET     // gets product(s) by id
            PUT     // updates product(s) by id
            DELETE  // deletes product(s) by id
            POST    // creates product(s)

    would it be more useful to have a singular and a plural, for example:

        api/Product
            GET     // gets a product by id
            PUT     // updates a product by id
            DELETE  // deletes a product by id
            POST    // creates a product

        api/Products
            GET     // gets a collection of products by id
            PUT     // updates a collection of products by id
            DELETE  // deletes a collection of products (not the products themselves)
            POST    // creates a collection of products based on filter parameters passed

    So, to create a collection of products you might do:

        POST api/Products
        {data: filters}   // returns api/Products/<id>

    And then, to reference it, you might do:

        GET api/Products/<id>   // returns array of products

    In my opinion, the main advantage of doing things this way is that it allows for easy caching of collections of products. One might, for example, put a lifetime of an hour on collections of products, thus drastically reducing the calls on a server. Of course, I currently only see the good side of doing things this way; what's the downside?
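
    Purely as an illustration of the proposed split (not an endorsement of it), a minimal ASP.NET Web API-style sketch with hypothetical Product and ProductFilter types might look like this, with the plural controller managing saved, cacheable collections rather than the products themselves:

        using System.Collections.Generic;
        using System.Web.Http;

        public class Product { public int Id { get; set; } }
        public class ProductFilter { public string Category { get; set; } }

        // api/Product/{id} - individual products
        public class ProductController : ApiController
        {
            public Product Get(int id) { /* look up a single product */ return null; }
            public void Post(Product product) { /* create a single product */ }
        }

        // api/Products - saved collections of products
        public class ProductsController : ApiController
        {
            // POST api/Products with filter criteria persists the matching set
            // and returns its id, so GET api/Products/{id} can be cached for an hour.
            public int Post(ProductFilter filters) { /* persist the result set */ return 0; }
            public IEnumerable<Product> Get(int id) { /* return the saved collection */ return null; }
        }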

    Read the article

  • Retrieve data from an ASP.Net application using Ado.Net 2.0 disconnected model

    - by nikolaosk
    This is the second post in a series of posts regarding ADO.NET 2.0. Have a look at the first post if you like. In this post I am going to investigate the "Disconnected" model. When I say "Disconnected" I mean DataSets. DataSets are in-memory representations of tables in a particular database. A DataSet contains a Tables collection, and each table in turn contains a Rows collection and a Columns collection. So initially you connect to the database, get the data to...(read more)
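
    As a rough illustration of the disconnected pattern the excerpt describes (the connection string, table name and query below are placeholders, not taken from the article), a DataAdapter keeps the connection open only while it fills the DataSet; everything afterwards works against the in-memory copy:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class DisconnectedExample
        {
            static void Main()
            {
                // Placeholder connection string and query.
                const string connectionString = "Data Source=.;Initial Catalog=Pubs;Integrated Security=True";
                var dataSet = new DataSet();

                using (var connection = new SqlConnection(connectionString))
                using (var adapter = new SqlDataAdapter("SELECT * FROM Authors", connection))
                {
                    // Fill opens and closes the connection itself; from here on,
                    // the Tables/Rows/Columns collections are purely in memory.
                    adapter.Fill(dataSet, "Authors");
                }

                foreach (DataRow row in dataSet.Tables["Authors"].Rows)
                {
                    Console.WriteLine(row[0]);
                }
            }
        }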

    Read the article

  • Linq to SQL - How to compare against a collection in the where clause?

    - by Sgraffite
    I'd like to compare against an IEnumerable collection in my where clause. Do I need to manually loop through the collection to pull out the column I want to compare against, or is there a generic way to handle this? I want something like this:

        public IEnumerable<Cookie> GetCookiesForUsers(IEnumerable<User> Users)
        {
            var cookies = from c in db.Cookies
                          join uc in db.UserCookies on c.CookieID equals uc.CookieID
                          join u in db.Users on uc.UserID equals u.UserID
                          where u.UserID.Equals(Users.UserID)
                          select c;
            return cookies.ToList();
        }

    I'm used to using the lambda LINQ to SQL syntax, but I decided to try the SQL-esque syntax since I was using joins this time. What is a good way to do this?
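
    One approach that LINQ to SQL can translate, sketched here against the question's table names, is to project the incoming users to their ids and filter with Contains, which the provider turns into a SQL IN (...) clause:

        public IEnumerable<Cookie> GetCookiesForUsers(IEnumerable<User> users)
        {
            // Materialize just the ids locally; Contains on a local collection
            // is translated into WHERE UserID IN (...)
            var userIds = users.Select(u => u.UserID).ToList();

            var cookies = from c in db.Cookies
                          join uc in db.UserCookies on c.CookieID equals uc.CookieID
                          where userIds.Contains(uc.UserID)
                          select c;

            return cookies.ToList();
        }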

    Read the article

  • WPF Binding to Items within a collection? (or converter with parameters)

    - by Sonic Soul
    I am using a WPF DataGrid, and in my details row I would like to show separate objects from a sub-collection of each grid item. Is it possible to have finer control of Path? For example, something like... Path=SubCollection['ItemX'] etc. Also, if I were to use a converter, I don't want to have to create a separate converter for each item, so is there a way to supply a parameter to a converter that could then determine which collection item to return?
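
    Two things that may help, offered as a sketch rather than a verified answer: binding paths do accept indexers (for example Path=SubCollection[0], or a string key when the collection exposes a string indexer), and a single converter can receive the item key through ConverterParameter. A hypothetical reusable converter along those lines:

        using System;
        using System.Collections;
        using System.Globalization;
        using System.Windows.Data;

        // Usage idea (XAML): {Binding SubCollection, Converter={StaticResource ItemPicker},
        //                              ConverterParameter=ItemX}
        public class CollectionItemConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                // Assumes the bound sub-collection is keyed (e.g. a dictionary).
                var dictionary = value as IDictionary;
                if (dictionary == null || parameter == null)
                    return null;
                return dictionary.Contains(parameter) ? dictionary[parameter] : null;
            }

            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();
            }
        }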

    Read the article

  • MS Access ADP front end and SQL Server back end for field data collection?

    - by Brash Equilibrium
    I am an anthropologist. I am going to the field and will use a netbook to collect survey data. The survey forms will need to allow me to enter data into multiple tables, search tables, allow subforms, and be fast enough to not slow down my interview. I have considered storing the data in a SQL Server Express 2008 R2 server (there will be a lot of data) while using a Microsoft Access data project as a front end. To cut down the number of steps required to collect and store data, I'm considering using the netbook for both data storage and collection (after reading this article about SQL Server on a netbook). My questions are: (1) Is there a simpler solution that is also gratis (gratis because I already have a MS Access license from my workplace, and SQL Server Express is, obviously, free)? (2) Does my idea to store and collect data using the netbook make sense? Thank you.

    Read the article

  • Ruby: what the hell is this code doing?

    - by wefwgeweg
    I discovered this in a dark place one day... what the hell is it supposed to do??

        def spliceElement(newelement, dickwad)
          dox = Nokogiri::HTML(newelement)
          fuck = dox.xpath("//text()").to_a
          fuck.each do |shit|
            if shit.text.include? ": "
              dickwad << shit.text.split(': ')[1].strip + "|"
            else
              if shit.text =~ /\s{1,}/ or shit.text =~ /\n{1,}/
                puts "fuck"
              else
                dickwad << shit.text.squeeze(" ").strip + "|"
              end
            end
          end
          dickwad << "\n"
        end

        def extract(newdoc, newarray)
          doc = Nokogiri::HTML(newdoc)
          collection = Array.new
          newarray.each do |dong|
            newb = doc.xpath(dong).to_a
            #puts doc.xpath(dong).text
            collection << newb
          end
          dickwad = "";
          if collection.length > 1
            (0...collection.first.length).each do |i|
              (0...collection.length).each do |j|
                somefield = collection[j][i].to_s.gsub(/\s{2,}/,' ')
                spliceElement(somefield, dickwad)
              end
              newrow = dickwad.chop + "\n"
              return newrow.to_s
            end
          else
            collection.first.each do |shit|
              somefield = shit.to_s.gsub(/\s{2,}/,' ')
              spliceElement(somefield, dickwad)
              puts somefield + "\n\n"
              #newrow = dickwad.chop + "\n"
              #puts newrow
              #return newrow.to_s
              sleep 1
            end
          end

    Read the article

  • Adding two Set[Any]

    - by Alex Boisvert
    Adding two Set[Int] works:

        Welcome to Scala version 2.8.1.final (Java HotSpot(TM) Server VM, Java 1.6.0_23).
        Type in expressions to have them evaluated.
        Type :help for more information.

        scala> Set(1,2,3) ++ Set(4,5,6)
        res0: scala.collection.immutable.Set[Int] = Set(4, 5, 6, 1, 2, 3)

    But adding two Set[Any] doesn't:

        scala> Set[Any](1,2,3) ++ Set[Any](4,5,6)
        <console>:6: error: ambiguous reference to overloaded definition,
        both method ++ in trait Addable of type (xs: scala.collection.TraversableOnce[Any])scala.collection.immutable.Set[Any]
        and  method ++ in trait TraversableLike of type [B >: Any,That](that: scala.collection.TraversableOnce[B])(implicit bf: scala.collection.generic.CanBuildFrom[scala.collection.immutable.Set[Any],B,That])That
        match argument types (scala.collection.immutable.Set[Any])
               Set[Any](1,2,3) ++ Set[Any](4,5,6)
                               ^

    Any suggestion to work around this error?

    Read the article

  • Parallelism in .NET – Part 6, Declarative Data Parallelism

    - by Reed
    When working with a problem that can be decomposed by data, we have a collection, and some operation being performed upon the collection. I've demonstrated how this can be parallelized using the Task Parallel Library and imperative programming using imperative data parallelism via the Parallel class. While this provides a huge step forward in terms of power and capabilities, in many cases special care must still be given for relatively common scenarios.

    C# 3.0 and Visual Basic 9.0 introduced a new, declarative programming model to .NET via the LINQ Project. When working with collections, we can now write software that describes what we want to occur without having to explicitly state how the program should accomplish the task. By taking advantage of LINQ, many operations become much shorter, more elegant, and easier to understand and maintain. Version 4.0 of the .NET Framework extends this concept into the parallel computation space by introducing Parallel LINQ.

    Before we delve into PLINQ, let's begin with a short discussion of LINQ. LINQ, the extensions to the .NET Framework which implement language integrated query, set, and transform operations, is implemented in many flavors. For our purposes, we are interested in LINQ to Objects. When dealing with parallelizing a routine, we typically are dealing with in-memory data storage. More data-access oriented LINQ variants, such as LINQ to SQL and LINQ to Entities in the Entity Framework, fall outside of our concern, since the parallelism there is the concern of the database engine processing the query itself.

    LINQ (LINQ to Objects in particular) works by implementing a series of extension methods, most of which work on IEnumerable<T>. The language enhancements use these extension methods to create a very concise, readable alternative to using a traditional foreach statement. For example, let's revisit our minimum aggregation routine we wrote in Part 4:

        double min = double.MaxValue;
        foreach (var item in collection)
        {
            double value = item.PerformComputation();
            min = System.Math.Min(min, value);
        }

    Here, we're doing a very simple computation, but writing this in an imperative style. This can be loosely translated to English as:

        - Create a very large number, and save it in min
        - Loop through each item in the collection. For every item:
            - Perform some computation, and save the result
            - If the computation is less than min, set min to the computation

    Although this is fairly easy to follow, it's quite a few lines of code, and it requires us to read through the code, step by step, line by line, in order to understand the intention of the developer. We can rework this same statement, using LINQ:

        double min = collection.Min(item => item.PerformComputation());

    Here, we're after the same information. However, this is written using a declarative programming style. When we see this code, we'd naturally translate this to English as:

        - Save the Min value of collection, determined via calling item.PerformComputation()

    That's it – instead of multiple logical steps, we have one single, declarative request. This makes the developer's intentions very clear, and very easy to follow. The system is free to implement this using whatever method is required.

    Parallel LINQ (PLINQ) extends LINQ to Objects to support parallel operations. This is a perfect fit in many cases when you have a problem that can be decomposed by data. To show this, let's again refer to our minimum aggregation routine from Part 4, but this time, let's review our final, parallelized version:

        // Safe, and fast!
        double min = double.MaxValue;
        // Make a "lock" object
        object syncObject = new object();

        Parallel.ForEach(
            collection,
            // First, we provide a local state initialization delegate.
            () => double.MaxValue,
            // Next, we supply the body, which takes the original item, loop state,
            // and local state, and returns a new local state
            (item, loopState, localState) =>
            {
                double value = item.PerformComputation();
                return System.Math.Min(localState, value);
            },
            // Finally, we provide an Action<TLocal>, to "merge" results together
            localState =>
            {
                // This requires locking, but it's only once per used thread
                lock (syncObject)
                    min = System.Math.Min(min, localState);
            }
        );

    Here, we're doing the same computation as above, but fully parallelized. Describing this in English becomes quite a feat:

        - Create a very large number, and save it in min
        - Create a temporary object we can use for locking
        - Call Parallel.ForEach, specifying three delegates
            - For the first delegate: initialize a local variable to hold the local state to a very large number
            - For the second delegate: for each item in the collection, perform some computation and save the result; if the result is less than our local state, save the result in local state
            - For the final delegate: take a lock on our temporary object to protect our min variable, and save the min of our min and local state variables

    Although this solves our problem, and does it in a very efficient way, we've created a set of code that is quite a bit more difficult to understand and maintain.

    PLINQ provides us with a very nice alternative. In order to use PLINQ, we need to learn one new extension method that works on IEnumerable<T> – ParallelEnumerable.AsParallel(). That's all we need to learn in order to use PLINQ: one single method. We can write our minimum aggregation in PLINQ very simply:

        double min = collection.AsParallel().Min(item => item.PerformComputation());

    By simply adding ".AsParallel()" to our LINQ to Objects query, we converted this to using PLINQ and running this computation in parallel! This can be loosely translated into English easily, as well:

        - Process the collection in parallel
        - Get the Minimum value, determined by calling PerformComputation on each item

    Here, our intention is very clear and easy to understand. We just want to perform the same operation we did in serial, but run it "as parallel". PLINQ completely extends LINQ to Objects: the entire functionality of LINQ to Objects is available. By simply adding a call to AsParallel(), we can specify that a collection should be processed in parallel. This is simple, safe, and incredibly useful.

    Read the article

  • Passing a parameter so that it cannot be changed – C#

    - by nmarun
    I read about this requirement of not allowing a user to change the value of a property passed as a parameter to a method. In C++, as far as I can recall (it's been over 10 years, so I had to refresh my memory), you can mark a function parameter as 'const' and this ensures that the parameter cannot be changed inside the scope of the function. There's no such direct way of doing this in C#, but that does not mean it cannot be done!

    Ok, so this 'not-so-direct' technique depends on the type of the parameter – a simple property or a collection.

    Parameter as a simple property: This is quite easy (and you might have guessed it already). Bulent Ozkir clearly explains how this can be done here.

    Parameter as a collection property: Obviously the above does not work if the parameter is a collection of some type. Let's dig in. Suppose I need to create a collection of type KeyTitle as defined below.

        public class KeyTitle
        {
            public int Key { get; set; }
            public string Title { get; set; }
        }

    My class is declared as below:

        public class Class1
        {
            public Class1()
            {
                MyKeyTitleList = new List<KeyTitle>();
            }

            public List<KeyTitle> MyKeyTitleList { get; set; }

            public ReadOnlyCollection<KeyTitle> ReadonlyKeyTitleCollection
            {
                // .AsReadOnly creates a ReadOnlyCollection<> type
                get { return MyKeyTitleList.AsReadOnly(); }
            }
        }

    See the .AsReadOnly() method used in the second property? As MSDN puts it: "Returns a read-only IList<T> wrapper for the current collection." Knowing this, I can implement my code as:

        public static void Main()
        {
            Class1 class1 = new Class1();
            class1.MyKeyTitleList.Add(new KeyTitle { Key = 1, Title = "abc" });
            class1.MyKeyTitleList.Add(new KeyTitle { Key = 2, Title = "def" });
            class1.MyKeyTitleList.Add(new KeyTitle { Key = 3, Title = "ghi" });
            class1.MyKeyTitleList.Add(new KeyTitle { Key = 4, Title = "jkl" });

            TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly());

            Console.ReadLine();
        }

        private static void TryToModifyCollection(ReadOnlyCollection<KeyTitle> readOnlyCollection)
        {
            // can only read
            for (int i = 0; i < readOnlyCollection.Count; i++)
            {
                Console.WriteLine("{0} - {1}", readOnlyCollection[i].Key, readOnlyCollection[i].Title);
            }
            // Add() - not allowed
            // even the indexer does not have a setter
        }

    The output is as expected. The captures below show two things: first, I've tried to access an element in my read-only collection through an indexer, and the ReadOnlyCollection<> does not have a setter on the indexer; second, there's no 'Add()' method for this type of collection. There's no 'Remove()' method either, thereby eliminating all ways of modifying the collection. Mission accomplished... right?

    Now, even if you have a collection of a different type, all you need to do is somehow cast (used loosely) it to a List<> and then call .AsReadOnly() to get a ReadOnlyCollection of your custom collection type. As an example, if you have an IDictionary<int, string>, you can create a List<T> of this type with a wrapper class (KeyTitle in our case).

        public IDictionary<int, string> MyDictionary { get; set; }

        public ReadOnlyCollection<KeyTitle> ReadonlyDictionary
        {
            get
            {
                return (from item in MyDictionary
                        select new KeyTitle
                        {
                            Key = item.Key,
                            Title = item.Value,
                        }).ToList().AsReadOnly();
            }
        }

    Cool, huh?
    Just one thing you need to know about the .AsReadOnly() method: the only way to modify your ReadOnlyCollection<> is to modify the original collection. So doing:

        public static void Main()
        {
            Class1 class1 = new Class1();
            class1.MyKeyTitleList.Add(new KeyTitle { Key = 1, Title = "abc" });
            class1.MyKeyTitleList.Add(new KeyTitle { Key = 2, Title = "def" });
            class1.MyKeyTitleList.Add(new KeyTitle { Key = 3, Title = "ghi" });
            class1.MyKeyTitleList.Add(new KeyTitle { Key = 4, Title = "jkl" });
            TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly());

            Console.WriteLine();

            class1.MyKeyTitleList.Add(new KeyTitle { Key = 5, Title = "mno" });
            class1.MyKeyTitleList[2] = new KeyTitle { Key = 3, Title = "GHI" };
            TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly());

            Console.ReadLine();
        }

    gives me output in which the Title of the element with Key = 3 has changed to upper-case and the fifth element is also displayed, even though we're still looping through the same ReadOnlyCollection<KeyTitle>.

    Verdict: now you know of a way to implement 'Method(const param1)' in your code!

    Read the article

  • How to register a domain for a beginner?

    - by garbage collection
    I've never registered a .com or .net domain before, and I would like to do some research before doing so. I currently have a Ruby on Rails app running on Heroku. Is there anything special I have to do for my Ruby on Rails app prior to registering a domain, or is it as easy as just pointing the new .com or .net name at my current Heroku address? Are there any special features I should look for when registering a domain, or is it typical for a domain seller to just sell domain names only? Any recommendations on sellers? Thank you.

    Read the article

  • Google Dart vs CoffeeScript? Which one should one learn?

    - by garbage collection
    I was thinking about learning CoffeeScript some time in the future. In the mean time, Google came out with Dart that seems to do what CoffeeScript does. Google says: Dart code can be executed in two different ways: either on a native virtual machine or on top of a JavaScript engine by using a compiler that translates Dart code to JavaScript. This means you can write a web application in Dart and have it compiled and run on any modern browser. Does anyone know advantages and disadvantages of learning Dart or CoffeeScript?

    Read the article

  • Why does Scala require functions to have explicit return type?

    - by garbage collection
    I recently began learning to program in Scala, and it's been fun so far. I really like the ability to declare functions within another function, which just seems like an intuitive thing to do. One pet peeve I have about Scala is that it requires an explicit return type on its functions. I feel like this hinders the expressiveness of the language, and it's just difficult to program with that requirement. Maybe it's because I come from a Javascript and Ruby comfort zone. But for a language like Scala, which will have tons of connected functions in an application, I cannot conceive how I would brainstorm in my head exactly what type the particular function I am writing should return, with recursions after recursions. This requirement of explicit return type declarations on functions does not bother me for languages like Java and C++. Recursions in Java and C++, when they did happen, were often dealt with in 2 to 3 functions max, never several functions chained up together like in Scala. So I guess I'm wondering: is there a good reason why Scala should require functions to have an explicit return type?

    Read the article

  • How can I force Javascript garbage collection in IE? IE is acting very slow after AJAX calls & DOM

    - by RenderIn
    I have a page with chained drop-downs. Choosing an option from the first select populates the second, and choosing an option from the second select returns a table of matching results using the innerHtml function on an empty div on the page. The problem is, once I've made my selections and a considerable amount of data is brought onto the page, all subsequent Javascript on the page runs exceptionally slowly. It seems as if all the data I pulled back via AJAX to populate the div is still hogging a lot of memory. I tried setting the return object which contains the AJAX results to null after calling innerHtml but with no luck. Firefox, Safari, Chrome and Opera all show no performance degradation when I use Javascript to insert a lot of data into the DOM, but in IE it is very apparent. To test that it's a Javascript/DOM issue rather than a plain old IE issue, I created a version of the page that returns all the results on the initial load, rather than via AJAX/Javascript, and found IE had no performance problems. FYI, I'm using jQuery's jQuery.get method to execute the AJAX call.

    EDIT: This is what I'm doing:

        <script type="text/javascript">
        function onFinalSelection() {
            var searchParameter = jQuery("#second-select").val();
            jQuery.get("pageReturningAjax.php",
                {SEARCH_PARAMETER: searchParameter},
                function(data) {
                    jQuery("#result-div").get(0).innerHtml = data;
                    //jQuery("#result-div").html(data); //Tried this, same problem
                    data = null;
                }, "html");
        }
        </script>

    Read the article

  • Why do I get garbage output when printing an int[]?

    - by Kat
    My program is supposed to count the occurrence of each character in a file, ignoring upper and lower case. The method I wrote is:

        public int[] getCharTimes(File textFile) throws FileNotFoundException {
            Scanner inFile = new Scanner(textFile);
            int[] lower = new int[26];
            char current;
            int other = 0;
            while (inFile.hasNext()) {
                String line = inFile.nextLine();
                String line2 = line.toLowerCase();
                for (int ch = 0; ch < line2.length(); ch++) {
                    current = line2.charAt(ch);
                    if (current >= 'a' && current <= 'z')
                        lower[current - 'a']++;
                    else
                        other++;
                }
            }
            return lower;
        }

    And it is printed out using:

        for (int letter = 0; letter < 26; letter++) {
            System.out.print((char) (letter + 'a'));
            System.out.println(": " + ts.getCharTimes(file));
        }

    where ts is a TextStatistic object created earlier in my main method. However, when I run my program, instead of printing out how often each character occurs, it prints:

        a: [I@f84386
        b: [I@1194a4e
        c: [I@15d56d5
        d: [I@efd552
        e: [I@19dfbff
        f: [I@10b4b2f

    And I don't know what I'm doing wrong.

    Read the article

  • Urllib's urlopen breaking on some sites (e.g. StackApps api): returns garbage results

    - by Edan Maor
    I'm using urllib2's urlopen function to try and get a JSON result from the StackOverflow api. The code I'm using:

        >>> import urllib2
        >>> conn = urllib2.urlopen("http://api.stackoverflow.com/0.8/users/")
        >>> conn.readline()

    The result I'm getting:

        '\x1f\x8b\x08\x00\x00\x00\x00\x00\x04\x00\xed\xbd\x07`\x1cI\x96%&/m\xca{\x7fJ\...

    I'm fairly new to urllib, but this doesn't seem like the result I should be getting. I've tried it in other places and I get what I expect (the same as visiting the address with a browser gives me: a JSON object). Using urlopen on other sites (e.g. "http://google.com") works fine, and gives me actual html. I've also tried using urllib and it gives the same result. I'm pretty stuck, not even knowing where to look to solve this problem. Any ideas?

    Read the article
