Search Results

Search found 1367 results on 55 pages for 'kyle west'.

Page 11/55 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • As a person getting into mobile development, what's the best mobile platform in terms of profitability? [closed]

    - by Kyle Loman
    I realize this question can range very far, so I'd love to hear any and all opinions on it. However, I'll be honest and say that I have been thinking of this mostly in terms of which platform is most profitable. I know how this may sound either way, but it is one of my main sticking points. I realize that I'm not guaranteed a single cent and success is never guaranteed, but I'm going into this with the thought of making something out of it both financially and also for my own interest.

    I know that iOS gets a lot of attention on this front, but Android commands a lot more market share. However, I know there are drawbacks to Android too, whether in the actual development process and programming (though I've heard conflicting reports on this, such as how easy or difficult it is for developers to address different screen resolutions across devices) or the app ecosystem being flooded. But iOS's app ecosystem has been described as too saturated and harder to compete in for that reason.

    Since Windows Phone has fewer apps than both of those two, might that be the best place to start, in order to be closer to the ground floor of the store and be noticed more? Less saturation = better chances of sales or differentiating? Something like the gold rush during the first years of the iOS App Store (not exactly, but at least in concept)? Would it be that despite fewer users on the platform, there's more exposure due to less competition, and that may translate to better success at sales? Plus, I know MS is in it for the long haul, so I'm not too fearful of something like WebOS going away. Obviously RIM isn't very popular nowadays, but I read a recent article that says BlackBerry actually has the apps that make the most money; any thoughts on that? http://gigaom.com/mobile/which-mobile-oss-apps-make-most-money-surprise-its-blackberry/

    Again, this is all I've heard or known about, so if there's anything to add or correct here, please do. In addition, this has actually affected my next personal phone upgrade. I'm eligible for a carrier discount now and I've had my eye on the iPhone 5. However, the Lumia 920 is the one I'm holding out for, and I'm open to trying an Android, but I'm not sure I can wait that long for any new Nexus or even the Razr HD. Even the new Lumia in November is making me antsy; I'm so close to just getting an iPhone 5. When I say this has affected my phone choice, I mean I'd want to be able to carry the apps I write with me, so that I can pull my phone out to show people without having to carry around a second device to do so. That's why I'd like to make my personal phone match the main platform I'm developing for. Of course, I will likely expand to other platforms if I gain any decent success, but the one I target now would serve well as the personal phone I carry around, so that I can use it as a marketing tool, in a sense, showing people my apps if the opportunity presents itself.

    So what's the best mobile platform to choose, especially in regards to being most lucrative? As said previously, this will influence my personal phone choice greatly. Thanks in advance, and I hope this isn't taken the wrong way - I understand there are trade-offs and other factors that may balance this out, but making some revenue is key among them.
    For some background, I have done software development and know programming language concepts, so I'm not entirely new to this, and I do get the notion of being familiar with fundamentals so that I can translate the skill among a variety of languages. I'm just currently having difficulty choosing my first main mobile platform based on the criteria I've outlined above.

    Read the article

  • When is my View too smart?

    - by Kyle Burns
    In this posting, I will discuss the motivation behind keeping View code as thin as possible when using patterns such as MVC, MVVM, and MVP.  Once the motivation is identified, I will examine some ways to determine whether a View contains logic that belongs in another part of the application.  While the concepts that I will discuss are applicable to most any pattern which favors a thin View, any concrete examples that I present will center on ASP.NET MVC.

    Design patterns that include a Model, a View, and other components such as a Controller, ViewModel, or Presenter are not new to application development.  These patterns have, in fact, been around since the early days of building applications with graphical interfaces.  The reason that these patterns emerged is simple – the code running closest to the user tends to be littered with logic and library calls that center around implementation details of showing and manipulating user interface widgets, and when this type of code is interspersed with application domain logic it becomes difficult to understand and much more difficult to adequately test.  By removing domain logic from the View, we ensure that the View has a single responsibility of drawing the screen, which, in turn, makes our application easier to understand and maintain.

    I was recently asked to take a look at an ASP.NET MVC View because the developer reviewing it thought that it possibly had too much going on in the view.  I looked at the .CSHTML file and the first thing that occurred to me was that it began with 40 lines of code declaring member variables and performing the necessary calculations to populate these variables, which were later either output directly to the page or used to control some conditional rendering action (such as adding a class name to an HTML element or not rendering another element at all).  This exhibited both of what I consider the primary heuristics (or code smells) indicating that the View is too smart:

    Member variables – in general, variables in View code are an indication that the Model to which the View is being bound is not sufficient for the needs of the View and that the View has had to augment that Model.  Notable exceptions to this guideline include variables used to hold information specifically related to rendering (such as a dynamically determined CSS class name or the depth within a recursive structure for indentation purposes) and variables which are used to facilitate looping through collections while binding.

    Arithmetic – as with member variables, the presence of arithmetic operators within View code is an indication that the Model servicing the View is insufficient for its needs.  For example, if the Model represents a line item in a sales order, it might seem perfectly natural to “normalize” the Model by storing the quantity and unit price in the Model and multiplying these within the View to show the line total.  While this does seem natural, it introduces a business rule to the View code and makes it impossible to test that the rounding of the result meets the requirement of the business without executing the View.  Within View code, arithmetic should only be used for activities such as incrementing loop counters and calculating element widths.

    In addition to the two characteristics of a “Smart View” that I’ve discussed already, this View also exhibited another heuristic that commonly indicates to me the need to refactor a View and make it a bit less smart.  That characteristic is the existence of Boolean logic that either does not work directly with properties of the Model or works with too many properties of the Model.  Consider the following code, and consider how logic that does not work directly with properties of the Model is just another form of the “member variable” heuristic covered earlier:

        @if(DateTime.Now.Hour < 12) {
            <div>Good Morning!</div>
        } else {
            <div>Greetings</div>
        }

    This code performs business logic to determine whether it is morning.  A possible refactoring would be to add an IsMorning property to the Model, but in this particular case there is enough similarity between the branches that the entire branching structure could be collapsed by adding a Greeting property to the Model and using it similarly to the following:

        <div>@Model.Greeting</div>

    Now let’s look at some complex logic around multiple Model properties:

        @if (Model.PageNumber + Model.NumbersToDisplay == Model.PageCount
                || (Model.PageCount != Model.CurrentPage
                    && !Model.DisplayValues.Contains(Model.PageCount))) {
            <div>There's more to see!</div>
        }

    In this scenario, not only is the View code difficult to read (you shouldn’t have to play “human compiler” to determine the purpose of the code), but it is also complex enough to be at risk for logical errors that cannot be detected without executing the View.  Conditional logic that requires more than a single logical operator should be looked at more closely to determine whether the condition should be evaluated elsewhere and exposed as a single property of the Model.  Moving the logic above outside of the View and exposing a new Model property would simplify the View code to:

        @if(Model.HasMoreToSee) {
            <div>There’s more to see!</div>
        }

    In this posting I have briefly discussed some of the more prominent heuristics that indicate a need to push code from the View into other pieces of the application.  You should now be able to recognize these symptoms when building or maintaining Views (or the Models that support them) in your applications.
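    To see where that logic lands, here is a minimal sketch (illustrative only - the original post does not show its Model) of a Model exposing the HasMoreToSee property used in the last example; a unit test can now exercise the condition without ever rendering the View:

        using System.Collections.Generic;

        // Hypothetical Model backing the paging example above.  The
        // multi-operator condition moves out of the View and becomes a
        // plain, unit-testable property.
        public class PagingViewModel
        {
            public int PageNumber { get; set; }
            public int NumbersToDisplay { get; set; }
            public int PageCount { get; set; }
            public int CurrentPage { get; set; }
            public List<int> DisplayValues { get; set; }

            public string Greeting { get; set; }

            // The View binds to this single property instead of evaluating
            // the condition inline.
            public bool HasMoreToSee
            {
                get
                {
                    return PageNumber + NumbersToDisplay == PageCount
                        || (PageCount != CurrentPage
                            && !DisplayValues.Contains(PageCount));
                }
            }
        }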

    Read the article

  • Expanding on requestaudit - Tracing who is doing what...and for how long

    - by Kyle Hatlestad
    One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing.  This tracing section summarizes the top service requests happening in the server along with how they are performing.  By default, it has 2 different rotations.  One happens every 2 minutes (listing up to 5 services) and another happens every 60 minutes (listing up to 20 services).  These traces provide the total time for all the requests against that service along with the number of requests and its average request time.  This information can provide a good start in possibly troubleshooting performance issues or tracking a particular issue.

        >requestaudit/6 12.10 16:48:00.493 Audit Request Monitor !csMonitorTotalRequests,47,1,0.39009329676628113,0.21034042537212372,1
        >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor Request Audit Report over the last 120 Seconds for server wcc-base_4444****
        requestaudit/6 12.10 16:48:00.509 Audit Request Monitor -Num Requests 47 Errors 1 Reqs/sec. 0.39009329676628113 Avg. Latency (secs) 0.21034042537212372 Max Thread Count 1
        requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 1 Service FLD_BROWSE Total Elapsed Time (secs) 3.5320000648498535 Num requests 10 Num errors 0 Avg. Latency (secs) 0.3531999886035919
        requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 2 Service GET_SEARCH_RESULTS Total Elapsed Time (secs) 2.694999933242798 Num requests 6 Num errors 0 Avg. Latency (secs) 0.4491666555404663
        requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 3 Service GET_DOC_PAGE Total Elapsed Time (secs) 1.8839999437332153 Num requests 5 Num errors 1 Avg. Latency (secs) 0.376800000667572
        requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 4 Service DOC_INFO Total Elapsed Time (secs) 0.4620000123977661 Num requests 3 Num errors 0 Avg. Latency (secs) 0.15399999916553497
        requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 5 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.4099999964237213 Num requests 8 Num errors 0 Avg. Latency (secs) 0.051249999552965164
        requestaudit/6 12.10 16:48:00.509 Audit Request Monitor ****End Audit Report*****

    To change the default rotation or size of output, these can be set as configuration variables for the server:
    RequestAuditIntervalSeconds1 – Used for the shorter of the two summary intervals (default is 120 seconds)
    RequestAuditIntervalSeconds2 – Used for the longer of the two summary intervals (default is 3600 seconds)
    RequestAuditListDepth1 – Number of services listed for the first request audit summary interval (default is 5)
    RequestAuditListDepth2 – Number of services listed for the second request audit summary interval (default is 20)

    If you want to get more granular, you can enable 'Full Verbose Tracing' from the System Audit Information page, and now you will get an audit entry for each and every service request.

        >requestaudit/6 12.10 16:58:35.431 IdcServer-68 GET_USER_INFO [dUser=bob][StatusMessage=You are logged in as 'bob'.] 0.08765099942684174(secs)

    What's nice is it reports who executed the service and how long that particular request took.  In some cases, depending on the service, additional information relevant to that service will be added to the tracing.

        >requestaudit/6 12.10 17:00:44.727 IdcServer-81 GET_SEARCH_RESULTS [dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Document%60+%29][StatusCode=0][StatusMessage=Success] 0.4696030020713806(secs)

    You can even go into more detail and insert any additional data into the tracing.  You simply need to add a configuration variable with a comma-separated list of variables from local data to insert:

        RequestAuditAdditionalVerboseFieldsList=TotalRows,path

    In this case, for any search results, the number of items the user found is traced:

        >requestaudit/6 12.10 17:15:28.665 IdcServer-36 GET_SEARCH_RESULTS [TotalRows=224][dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Application%60+%29][Sta...

    I also recently ran into a case where services were being called from a client through RIDC.  All of the services were being executed as the same user, but they wanted to correlate the requests coming from the client to the ones being executed on the server.  So what we did was add a new field to the request audit list:

        RequestAuditAdditionalVerboseFieldsList=ClientToken

    Then, in the RIDC client, ClientToken was added to the binder along with a unique value that could be traced for that request.  Now they had a way of tracing on both ends and identifying exactly which client request resulted in which request on the server.
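    Collected together, the settings above might look like this in the server's config.cfg (the values here are illustrative choices, not recommendations from the original post):

        RequestAuditIntervalSeconds1=60
        RequestAuditIntervalSeconds2=1800
        RequestAuditListDepth1=10
        RequestAuditListDepth2=30
        RequestAuditAdditionalVerboseFieldsList=TotalRows,path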

    Read the article

  • Why do we (really) program to interfaces?

    - by Kyle Burns
    One of the earliest lessons I was taught in Enterprise development was "always program against an interface".  This was back in the VB6 days, and I quickly learned that no code would be allowed to move to the QA server unless my business objects and data access objects were each defined as an interface with a matching implementation class.  Why?  "It's more reusable" was one answer.  "It doesn't tie you to a specific implementation" was a slightly more knowing answer.  And let's not forget the discussion-ending "it's a standard".  The problem with these responses was that the senior people didn't really understand the reason we were doing the things we were doing, and because of that we were entirely unable to realize the intent behind the practice - we simply used interfaces and had a bunch of extra code to maintain to show for it.

    It wasn't until a few years later that I finally heard the term "Inversion of Control".  Simply put, "Inversion of Control" takes the creation of objects that used to be within the control (and therefore a responsibility) of your component and moves it to some outside force.  For example, consider the following code, which follows the old "always program against an interface" rule in the manner of many corporate development shops:

        ICatalog catalog = new Catalog();
        Category[] categories = catalog.GetCategories();

    In this example, I met the requirement of the rule by declaring the variable as ICatalog, but I didn't hit "it doesn't tie you to a specific implementation" because I explicitly created an instance of the concrete Catalog object.  If I want to test the functionality of the code I just wrote, I have to have an environment in which Catalog can be created along with any of the resources upon which it depends (e.g. configuration files, database connections, etc.) in order to test my functionality.  That's a lot of setup work, and one of the things that I think ultimately discourages real buy-in of unit testing in many development shops.

    So how do I test my code without needing Catalog to work?  A very primitive approach I've seen is to change the line that instantiates catalog to read:

        ICatalog catalog = new FakeCatalog();

    Once the test is run and passes, the code is switched back to the real thing.  This obviously poses a huge risk of introducing test code into production and, in my opinion, is worse than just keeping the dependency and its associated setup work.  Another popular approach is to make use of Factory methods, which use an object whose "job" is to know how to obtain a valid instance of the object.  Using this approach, the code may look something like this:

        ICatalog catalog = CatalogFactory.GetCatalog();

    The code inside the factory is responsible for deciding "what kind" of catalog is needed.  This is a far better approach than the previous one, but it does make projects grow considerably because now, in addition to the interface, the real implementation, and the fake implementation(s) for testing, you have added a minimum of one factory (or at least a factory method) for each of your interfaces.  Once again, developers say "that's too complicated and has me writing a bunch of useless code" and quietly slip back into just creating a new Catalog and chalking any test failures up to "it will probably work on the server".  This is where software intended specifically to facilitate Inversion of Control comes into play.

    There are many libraries that take on the Inversion of Control responsibilities in .NET, each with its own pros and cons.  From this point forward I'll discuss concepts from the standpoint of the Unity framework produced by Microsoft's Patterns and Practices team.  I'm primarily focusing on this library because questions about it inspired this posting.

    At Unity's core, and that of most any IoC framework, is a catalog or registry of components.  This registry can be configured either through code or using the application's configuration file, and in the most simple terms it says "interface X maps to concrete implementation Y".  It can get much more complicated, but I want to keep things at the "what does it do" level instead of "how does it do it".  The object that exposes most of the Unity functionality is the UnityContainer.  This object exposes methods to configure the catalog as well as the Resolve<T> method, which is used to obtain an instance of the type represented by T.  When using the Resolve<T> method, Unity does not necessarily have to just "new up" the requested object; it can also track dependencies of that object and ensure that the entire dependency chain is satisfied.

    There are three basic ways that I have seen Unity used within projects: through classes directly using the Unity container, through classes requiring injection of dependencies, and through classes making use of the Service Locator pattern.

    The first usage of Unity is when classes are aware of the Unity container and directly call its Resolve method whenever they need the services advertised by an interface.  The upside of this approach is that IoC is utilized, but the downside is that every class has to be aware that Unity is being used and is tied directly to that implementation.

    Many developers don't like the idea of as close a tie to a specific IoC implementation as is represented by using Unity within all of your classes, and for the most part I agree that this isn't a good idea.  As an alternative, classes can be designed for Dependency Injection.  Dependency Injection is where a force outside the class itself manipulates the object to provide implementations of the interfaces that the class needs to interact with the outside world.  This is typically done either through constructor injection, where the object has a constructor that accepts an instance of each interface it requires, or through property setters accepting the service providers.  When using dependency injection, I lean toward the use of constructor injection because I view the constructor as being a much better way to "discover" what is required for the instance to be ready for use.  During resolution, Unity looks for an injection constructor and will attempt to resolve instances of each interface required by the constructor, throwing an exception if unable to meet the advertised needs of the class.  The upside of this approach is that the needs of the class are very clearly advertised and the class is unaware of which IoC container (if any) is being used.  The downside is that you're required to maintain the objects passed to the constructor as instance variables throughout the life of your object, and that objects which coordinate with many external services require a lot of additional constructor arguments (this gets ugly and may indicate a need for refactoring).
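    To make constructor injection concrete, here is a minimal, self-contained sketch (my illustration reusing the ICatalog example above, not code from the original posting; it assumes the classic Microsoft.Practices.Unity package):

        using Microsoft.Practices.Unity;

        public class Category { }

        public interface ICatalog
        {
            Category[] GetCategories();
        }

        public class Catalog : ICatalog
        {
            // Stand-in for real data access.
            public Category[] GetCategories() { return new Category[0]; }
        }

        // Advertises its needs through its constructor and never references Unity.
        public class CatalogBrowser
        {
            private readonly ICatalog _catalog;

            public CatalogBrowser(ICatalog catalog)
            {
                _catalog = catalog;
            }

            public int CountCategories()
            {
                return _catalog.GetCategories().Length;
            }
        }

        public class Program
        {
            public static void Main()
            {
                // Composition root - the one place aware of the container.
                var container = new UnityContainer();
                container.RegisterType<ICatalog, Catalog>();

                // Unity inspects CatalogBrowser's constructor and satisfies its
                // ICatalog dependency from the registration above.
                var browser = container.Resolve<CatalogBrowser>();
                System.Console.WriteLine(browser.CountCategories());
            }
        }

    In a unit test, the same CatalogBrowser can be constructed directly with a fake ICatalog - no container, configuration file, or database required.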
    The final way that I've seen and used Unity is to make use of the ServiceLocator pattern, of which the Patterns and Practices team has also provided a Unity-compatible implementation.  When using the ServiceLocator, your class calls ServiceLocator.Retrieve in places where it would have called Resolve on the Unity container.  Like using Unity directly, this ties you to the ServiceLocator implementation and makes your code aware that dependency injection is taking place, but it does have the upside of giving you the freedom to swap out the underlying IoC container if necessary.  I'm not hugely concerned with hiding IoC entirely from the class (I view this as a "nice to have"), so the single biggest problem that I see with the ServiceLocator approach is that it provides no way to proactively advertise needs in the way that constructor injection does, allowing more opportunity for difficult-to-track runtime errors.

    This blog entry has not been intended in any way to be a definitive work on IoC, but rather as something to spur thought about why we program to interfaces and some ways to reach the intended value of the practice instead of having it just complicate your code.  I hope that it helps somebody begin or continue a journey away from being a "Cargo Cult Programmer".

    Read the article

  • Trouble installing Ubuntu 12.04 from USB

    - by Kyle J
    I want to dual-boot Ubuntu Desktop 12.04 on my new ultrabook, which has an Intel i7 3517U processor, 6GB RAM, Windows 7 64-bit, and no CD/DVD drive. I created my bootable USB stick using pendrivelinux.com with "ubuntu-12.04.1-desktop-i386.iso". I am following these directions because they include nice screenshots; however, I do not get very far in the process. I am able to boot into the Live Desktop, and then I try to install onto my hard disk. Here is the series of actions that I take next:
    1. First, I see this ( http://i.imgur.com/vucYH ) window, and click 'Continue'.
    2. Then I get this ( http://imgur.com/2wESc ) window, and click 'Continue' again.
    3. This appears: and I get worried because it seems like there is no recognition that I have Windows installed. According to the directions I am following, I should see /dev/sda1 and /dev/sda2 partitions. In the drop-down menu at the bottom, the only "Device for boot loader installation" is /dev/sdb, and no information is shown. I am hesitant to click 'Install Now' for fear of what it might do to Windows.
    4. I click 'Quit' and cancel the installation, but then about 5 seconds later this ( http://imgur.com/a/yXi0C ) window pops up (I have expanded it to full screen to scroll and show all the details).
    5. Another second later this ( http://imgur.com/vxcrN ) comes up. I'm not sure how relevant this is.
    Does anyone have any insight into this issue? Why does it not show my current Windows partition? What would happen if I tried to continue with the installation process? Thanks!
    PS - sorry, it would only let me post 2 hyperlinks as a new user

    Read the article

  • New DataCenter Options for Windows Azure

    - by ScottKlein
    Effective immediately, new compute and storage resource options are now available when selecting data center options in the Windows Azure Portal. "West US" and "East US" options are now available for Compute and Storage. SQL Azure options for these two data centers will be available in the next few months. The official announcement can be found here.
    In terms of geo-replication:
    US East and West are paired together for Windows Azure Storage geo-replication
    US North and South are paired together for Windows Azure Storage geo-replication
    These two new data centers are now visible in the Windows Azure Management Portal effective immediately. Compute and Storage pricing remains the same across all data centers. Get started with Windows Azure through the free 90 day trial.

    Read the article

  • Stop trying to be perfect

    - by Kyle Burns
    Yes, Bob is my uncle too.  I also think the points in the Manifesto for Software Craftsmanship (manifesto.softwarecraftsmanship.org) are all great.  What amazes me is that we tend to confuse the term “well crafted” with “perfect”.  I'm about to say something that will make Quality Assurance managers, and many development types as well, cringe until you think about it as a craftsman – “Stop trying to be perfect”.

    Now let me explain what I mean.  Building software, as with building almost anything, often involves a series of trade-offs where either one undesired characteristic is accepted as necessary to achieve another desired one (or maybe stave off one that is even less desirable) or a desirable characteristic is sacrificed for the same reasons.  This implies that perfection itself is unattainable.  What is attainable is “sufficient”, and I think that this really goes to the heart of what people are trying to do both with Agile and with the craftsmanship movement.  Simply put, sufficient software drives the greatest business value.

    I've been in many meetings where “how can we keep anything from ever going wrong” has become the thing that holds us in analysis paralysis.  I've also been the guy trying way too hard to perfect some function to make sure that every edge case is accounted for.  Somewhere in there, something a drill instructor said while I was in boot camp occurred to me.  In response to being asked a question by another recruit having to do with some edge case (I can barely remember the context), he said “What if grasshoppers had machine guns?  Would the birds still **** with them?”  It sounds funny, but there's a lot of wisdom in those words.

    “Sufficient” is different for every situation and it’s important to understand what sufficient means in the context of the work you’re doing.  If I’m writing a timesheet application (and please shoot me if I am), I’m going to have a much higher tolerance for imperfection than if you’re writing software to control life support systems on spacecraft.  I’m also likely to have less need for high volume performance than if you’re writing software to control stock trading transactions.

    I’d encourage anyone who has read this far to, instead of trying to be perfect, try to create software that is sufficient in every way.  If you’re working to make a component that is already sufficient “better”, ask yourself if there is any component left that is not yet sufficient.  If the answer is “yes”, you’re working on the wrong thing and need to adjust.  If the answer is “no”, why aren’t you shipping and delivering business value?

    Read the article

  • Obtaining positional information in the IEnumerable Select extension method

    - by Kyle Burns
    This blog entry is intended to provide a narrow and brief look into a way to use the Select extension method that I had until recently overlooked.

    Every developer who is using IEnumerable extension methods to work with data has been exposed to the Select extension method, because it is a pretty critical piece of almost every query over a collection of objects.  The method is defined on type IEnumerable and takes as its argument a function that accepts an item from the collection and returns an object which will be an item within the returned collection.  This allows you to perform transformations on the source collection.  A somewhat contrived example would be the following code that transforms a collection of strings into a collection of anonymous objects:

        var media = new[] {"book", "cd", "tape"};
        var transformed = media.Select(item =>
            new
            {
                Media = item
            });

    This code transforms the array of strings into a collection of objects which each have a string property called Media.

    If every developer using the LINQ extension methods already knows this, why am I blogging about it?  I’m blogging about it because the method has another overload that I hadn’t seen before I needed it a few weeks back, and I thought I would share a little about it with whoever happens upon my blog.  In the other overload, the function defined in the first overload as:

        Func<TSource, TResult>

    is instead defined as:

        Func<TSource, int, TResult>

    The additional parameter is an integer representing the current element’s position in the enumerable sequence.  I used this information in what I thought was a pretty cool way to compare collections and I’ll probably blog about that sometime in the near future, but for now we’ll continue with the contrived example I’ve already started, to keep things simple and show how this works.  The following code sample shows how the positional information could be used in an alternating color scenario.  I’m using a foreach loop because IEnumerable doesn’t have a ForEach extension, but many libraries do add the ForEach extension to IEnumerable, so you can update the code if you’re using one of these libraries or have created your own.

        var media = new[] {"book", "cd", "tape"};
        foreach (var result in media.Select(
            (item, index) =>
                new { Item = item, Index = index }))
        {
            Console.ForegroundColor = result.Index % 2 == 0
                ? ConsoleColor.Blue : ConsoleColor.Yellow;
            Console.WriteLine(result.Item);
        }

    Read the article

  • What is the best type of c# timer to use with a Unity game that uses many timers simultaneously?

    - by Kyle Seidlitz
    I am developing a stand-alone 3d game in Unity that will have anywhere from 1 to 200 timers running simultaneously. For this game, timer durations will range from 5 minutes to 4 days. There will not be any countdown displays or any UI for the timers. An object will be selected, a menu choice will then be selected, and the timer will start. Several events will occur at different intervals during the duration of the timer. The events will be confined to changing the material of the selected object and playing a 1-second sound effect like a chime or a bell. If the user wants to save or end the game before all the timers are done, the start time of the still-running timers is to be saved to an XML file such that when the game is started again, a calculation is done on any still-running timers to see if they are finished, and the game changes the materials appropriately. I am still trying to figure out what type of timer to use, and I'd also welcome any suggestions for saving and calculating times over several days. What class(es) of timers should I use? Are there any special issues I should look out for in terms of performance?
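    One common pattern for timers that span days (a sketch of the general approach rather than a definitive Unity answer - the type and member names below are hypothetical) is to store an absolute UTC start time plus a duration, then recompute state on load instead of keeping any timer "running":

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Xml.Serialization;

        // One long-running timer; absolute UTC times survive save/load
        // and system clock or timezone differences.
        public class TimerRecord
        {
            public string ObjectId;         // which game object this timer drives
            public DateTime StartUtc;       // captured when the timer starts
            public double DurationSeconds;  // 5 minutes ... 4 days

            public bool IsExpired(DateTime nowUtc)
            {
                return (nowUtc - StartUtc).TotalSeconds >= DurationSeconds;
            }

            public double SecondsRemaining(DateTime nowUtc)
            {
                return Math.Max(0.0, DurationSeconds - (nowUtc - StartUtc).TotalSeconds);
            }
        }

        public static class TimerStore
        {
            // Persist all running timers on save/quit.
            public static void Save(string path, List<TimerRecord> timers)
            {
                var serializer = new XmlSerializer(typeof(List<TimerRecord>));
                using (var stream = File.Create(path))
                {
                    serializer.Serialize(stream, timers);
                }
            }

            // On load, expired records can be resolved immediately; the rest
            // resume from their recomputed remaining time.
            public static List<TimerRecord> Load(string path)
            {
                var serializer = new XmlSerializer(typeof(List<TimerRecord>));
                using (var stream = File.OpenRead(path))
                {
                    return (List<TimerRecord>)serializer.Deserialize(stream);
                }
            }
        }

    With records like these, no timer object has to run at all: a single sweep once per second (or per frame) comparing DateTime.UtcNow against each record replaces 200 independent timers, and the same IsExpired check handles timers that elapsed while the game was closed.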

    Read the article

  • Loading main javascript on every page? Or breaking it up to relevant pages?

    - by Kyle
    I have a 700kb decompressed JS file which is loaded on every page. Previously I had 12 javascript files on each page, but to reduce HTTP requests I combined them all into 1 file. This file is ~130kb gzipped and is served over gzip. However, on the local computer it is still unpacked and loaded on every page. Is this a performance issue? I've profiled the javascript with the Firebug profiler but did not see any issues. The problem/illusion I am facing is that there are jQuery libraries compressed in that file that are sometimes not used on the current page. For example, jQuery DataTables is 200kb compressed, and that is only loaded on 2 of my website's pages. Another is jqPlot, and that is another 200kb. I now have 400kb of excess code that isn't executed on 80% of the pages. Should I leave everything in 1 file? Should I take out the jQuery libraries and load only relevant JS on the current page?

    Read the article

  • System speakers not recognized

    - by Kyle Maxwell
    Since upgrading to Xubuntu 13.10, sound has not functioned properly (e.g. screeching when playing Skype notifications). Now, however, it does not function at all. pavucontrol only shows Dummy Output and does not recognize the built-in speakers on my Dell Precision M4600. Possibly related, the sound indicator applet does not come up when I click on it, only showing a small white bar underneath it. I have purged and reinstalled pulseaudio.
    lspci -v shows:
    00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04)
        Subsystem: Dell Precision M4600
        Flags: bus master, fast devsel, latency 0, IRQ 56
        Memory at f2560000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: <access denied>
        Kernel driver in use: snd_hda_intel
    01:00.1 Audio device: NVIDIA Corporation GF106 High Definition Audio Controller (rev a1)
        Subsystem: Dell Device 14a3
        Flags: bus master, fast devsel, latency 0, IRQ 17
        Memory at f0080000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: <access denied>
        Kernel driver in use: snd_hda_intel
    The "Capabilities: <access denied>" line makes me wonder if there's a permissions issue, as the Log Out applet now shows "Restart" and "Shutdown" grayed out. groups shows me in:
    kmaxwell adm dialout cdrom sudo dip plugdev fuse lpadmin netdev sambashare vboxusers

    Read the article

  • Where's the source code?

    - by Kyle Burns
    I've been contacted by several people through this blog asking about the missing source code for the "Beginning Windows 8 Application Development - XAML Edition" book (the book is available at http://www.amazon.com/gp/product/1430245662/) and wanted to share this with others who may have come to this blog looking for it but may not have communicated with me.  The publisher (Apress) does know that the source code is not posted on the book's product page and will be correcting it.  Apress is located in New York City and things were slowed down a little bit last week due to the storm, but I've been assured they will be correcting the product page as soon as they can.  Thanks to everyone who has bought the book and I especially appreciate your patience.

    Read the article

  • How do I alter/customize the GRUB boot menu for Ubuntu 12.10?

    - by Kyle Payne
    I use a shared computer, so I need to make it user-friendly for my less-than-computer-knowledgeable friend. I currently have Ubuntu 12.10 installed. I would like to change the GRUB menu so that Windows 7 is at the top of the list (thus allowing the automatic timeout to select it on startup) and Ubuntu is down below. I've already tried the information at { How do I change the grub boot order? } and that didn't work.
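    One approach (a sketch, not the only way - the index below is hypothetical and must match the position of the Windows 7 entry in your own boot menu, counting from zero) is to set the default entry in /etc/default/grub and regenerate the menu:

        # /etc/default/grub
        GRUB_DEFAULT=4        # zero-based index of the Windows 7 menu entry
        GRUB_TIMEOUT=10       # seconds before the default entry boots

    Then apply the change with:

        sudo update-grub

    Alternatively, GRUB_DEFAULT=saved together with GRUB_SAVEDEFAULT=true makes GRUB boot whichever entry was chosen last, which can be friendlier on a shared machine.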

    Read the article

  • How to bring an application from Sublime Text to a web IDE for sharing?

    - by Kyle Pennell
    I generally work on my projects locally in Sublime Text but sometimes need to share them with others using things like Jsfiddle, codepen, or plunker. This is usually so I can get unstuck. Is there an easier way to share code that doesn't involve purely copy pasting and the hassle of getting all the dependencies right in a new environment? It's taking me hours to get some of my angular apps working in plunker and I'm wondering if there's a better way.

    Read the article

  • Wireless xbox360 pad connected with xboxdrv doesn't show in jstest

    - by Kyle Letham
    So Lubuntu 13.04, 3.4.0. I'm trying to use my wireless xbox controller. I've installed xboxdrv. I have to run it with sudo, or I get:
    USBController::USBController(): libusb_open() failed: LIBUSB_ERROR_ACCESS
    If I connect the adapter, run the program, and then press the button on the gamepad to connect it, it never really connects - just flashes like it's searching. If I reboot, leaving the controller searching, and run the command again while it's searching after I boot up, the controller connects (the light becomes solid in one corner). Using pkill xboxdrv and then relaunching it doesn't work. Regardless, no matter what I do, the gamepad never shows up in jstest-gtk.
    sudo xboxdrv --debug gives me:
    Controller:        Microsoft Xbox 360 Wireless Controller (PC)
    Vendor/Product:    045e:0719
    USB Path:          002:005
    Wireless Port:     0
    Controller Type:   Xbox360 (wireless)
    [DEBUG] XboxdrvMain::run(): creating UInput
    [DEBUG] XboxdrvMain::run(): creating ControllerSlotConfig
    [DEBUG] UInput::create_uinput_device(): create device: 65534
    [DEBUG] LinuxUinput::LinuxUinput(): Xbox Gamepad (userspace driver) 0:0
    Any ideas?

    Read the article

  • Confusion on HLSL Samplers. Can I Set Samplers Inside Functions?

    - by Kyle Connors
    I'm trying to create a system where I can instance a quad to the screen, however I've run into a problem. Like I said, I'm trying to instance the quad, so I'm trying to use the same geometry several times, and I'm trying to do it in one draw call. The issue is, I want some quads to use different textures, but I can't figure out how to get the data into a sampler so I can use it in the pixel shader. I figured that since we can simply pass in the 4 bytes of our IDirect3DTexture9* to set the global texture, I can do so when passing in my dynamic buffer (which also stores each object's world matrix and UV data). Now that I'm sending the data, I can't figure out how to get it into the sampler, and I really want to assume that it's simply not possible. Is there any way I could achieve this?

    Read the article

  • iPhone in-app purchasing for Ecommerce [closed]

    - by Kyle B.
    This may not be the appropriate location for this, but I would like to ask in the hopes that an iOS developer with familiarity with the rules and regulations could comment. I would like to develop an iOS app that performs Ecommerce transactions. If I roll my own payment processor and checkout process: 1) Is this allowed by Apple's rules, and 2) Would I be required to remit 30% of the transaction sale to Apple?

    Read the article

  • Working with logout dialog box - text error

    - by aaron.kyle
    I am having a problem with the shutdown dialog box in Ubuntu 12.04. If I am logged in as any user and press shutdown, I see the box with the question 'Are you sure...' and its usual options. Shutting down when I am not logged in as a specific user, however, displays only square boxes. An image of this error can be found here: I believe this error started a few weeks ago when I accidentally changed the group for my root system directory, so it might be a permission thing or an improperly assigned group lingering somewhere. The trouble is that I don't know where the text for this box is stored, and I have no idea where to begin checking. Can anyone point me in the right direction?

    Read the article

  • Why does the Git community seem to ignore side-by-side diffs

    - by Kyle Heironimus
    I used to use Windows, SVN, TortoiseSVN, and Beyond Compare. It was a great combination for doing code reviews. Now I use OSX and Git. I've managed to kludge together a bash script along with GitX and DiffMerge to come up with a barely acceptable solution. I've muddled along with this setup, and similar ones, for over a year. I've also tried using the GitHub diff viewer and the GitX diff viewer, so it's not as if I haven't given them a chance. There are so many smart people doing great stuff with Git. Why not the side-by-side diff, with the option of seeing the entire file? Among people who have used both, I've never heard of anyone who likes the single +/- view better, at least for more than a quick check.

    Read the article

  • Layout Columns - Equal Height

    - by Kyle
    I remember first starting out using tables for layouts and learning that I should not be doing that. I am working on a new site and cannot seem to do equal-height columns without using tables. Here is an example of the attempt with div tags:

        <div class="row">
            <div class="column">column1</div>
            <div class="column">column2</div>
            <div class="column">column3</div>
            <div style="clear:both"></div>
        </div>

    What I tried was making the columns float left and setting their widths to 33%, which works fine. I use the clear:both div so that the row would be the size of the biggest column, but the columns end up different heights based on how much content they have. I have found many fixes which mostly involve CSS hacks and just making it look like it's right, but that's not what I want. I thought of just doing it in JavaScript, but then it would look different for those who choose to disable their JavaScript. The only true way of doing it that I can think of is using tables, since the cells all have equal heights in the same row. But I know it's bad to use tables. After searching forever I then came across this: http://intangiblestyle.com/lab/equal-height-columns-with-css/ What it seems to do is exactly the same as tables, since it's just setting its display exactly like tables. Would using that be just as bad as using tables? I honestly can't find anything else that I could do.

    edit @Su' I have looked into "faux columns" and do not think that is what I want. I think I would be able to implement better designs for my site using the display:table method. I posted this question because I just wasn't sure if I should, since I have always heard it's bad to use tables in website layouts.
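    For reference, the technique in that link boils down to CSS along these lines (a minimal sketch assuming the markup above; my paraphrase of the display:table approach, not code copied from the article):

        .row {
            display: table;       /* lays out like a table without table markup */
            width: 100%;
            table-layout: fixed;  /* evenly sized columns */
        }
        .row .column {
            display: table-cell;  /* cells in the same row share one height */
            width: 33%;
        }

    With this rendering model, the float and the clearing div become unnecessary; the HTML stays semantic divs, and only the visual formatting borrows table behavior, which is generally considered different from using actual table markup for layout.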

    Read the article

  • What are the best SEO techniques for a professional blog? [closed]

    - by Kyle
    Possible Duplicate: What are the best ways to increase your site's position in Google?
    Beginner to SEO here, starting with a personal site, looking for some insight and feedback.
    Question: what's more important, domain name or site title?
    Question: how important are the meta tags (description and keywords) on your site? Description should be under 60 chars, right? How many keywords is ideal?
    Question: #1 most important SEO principle = ?? (my guess is getting others to link to your site)
    -thanks.

    Read the article

  • What is the best type of c# timer to use with a Unity game that uses many timers simultaneously?

    - by Kyle Seidlitz
    I am developing a stand-alone 3d game in Unity that will have anywhere from 1 to 200 timers running simultaneously. There will be a GameObject containing 1 timer. For this game, timer durations will range from 5 minutes to 4 days. There will not be any countdown displays or any UI for the timers. Each object is a prefab, with all the necessary materials included. An attached script will handle the timer and all the necessary code to change the materials and make any sound effects. Once the timer has expired, the user will click on the object again, the object will be destroyed, and the user's inventory will be adjusted. If the user wants to save or end the game before all the timers are done, the start value of the still-running timers is to be saved to an XML file such that when the game is started again, any still-running timers will be checked to see if they have expired, and the object's materials will be changed appropriately. I am still trying to figure out what type of timer to use, and I'd also welcome any suggestions for saving and calculating times over several days. What class(es) of timers should I use? Are there any special issues I should look out for in terms of performance?

    Read the article

  • Access Control Lists for Roles

    - by Kyle Hatlestad
    Back in an earlier post, I wrote about how to enable entity security (access control lists, aka ACLs) for UCM 11g PS3.  Well, there was actually an additional security option that was included in that release but not fully supported yet (only for Fusion Applications).  It's the ability to define Roles as ACLs on entities (documents and folders).  Now, in PS5, this security option is fully supported.

    The benefit of defining Roles for ACLs is that those user roles come from the enterprise security directory (e.g. OID, Active Directory, etc.), and thus the WebCenter Content administrator does not need to define them as they do with ACL Groups (Aliases).  So it's a bit of the best of both worlds.  Users are managed through the LDAP repository and are automatically granted or denied access through their group memberships, which are mapped to Roles in WCC.  A different way to think about it is being able to add multiple Accounts to content items...which I often get asked about.  Because LDAP groups can map to Accounts, there has always been this association between the LDAP groups and access to the entity in WCC.  But that mapping had to define the specific level of access (RWDA), and you could only apply one Account per content item or folder.  Roles for ACLs basically takes away both of those restrictions by allowing users to define more than one Role and to define the level of access on the fly.

    To turn on ACLs for Roles, there is a component to enable.  On the Component Manager page, click the 'advanced component manager' link in the description paragraph at the top.  In the list of Disabled Components, enable the RoleEntityACL component.  Then restart.  This is assuming the other configuration settings have been made for the other ACLs as described in the earlier post.

    Once enabled, a new metadata field called xClbraRoleList will be created.  If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection.

    For Users and Groups, these values are automatically picked up from the corresponding database tables.  In the case of Roles, there is an explicitly defined list of choices that are made available.  These values must match the roles that are coming from the enterprise security repository.  To add these values, go to Administration -> Admin Applets -> Configuration Manager.  On the Views tab, edit the values for the ExternalRolesView.  By default, 'guest' and 'authenticated' are added.  Once added, you can assign the roles to your content or folder.

    If you are a user that can both access the Security Group for that item and belongs to that particular Role, you now have access to that item.  If you don't belong to that Role, you won't!

    [Extra] Because the selection mechanism for the list is a type-ahead field, users may not even know the possible choices to start typing.  To help them, one thing you can add to the form is a placeholder field which offers the entire list of roles as an option list they can scroll through (assuming it's a manageable size) and view to know what to type.  Being a placeholder field, it won't need to be added to the custom metadata database table or the search engine.

    Read the article

  • Generating Wrappers for REST APIs

    - by Kyle
    Would it be feasible to generate wrappers for REST APIs? An earlier question about machine-readable descriptions of RESTful services addressed how we could write (and then read) API specifications in a standardized way, which would lend itself well to generated wrappers. Could a first-pass parser generate a decent wrapper that human intervention could fix up? Perhaps the first pass wouldn't be consistent, but it would remove a lot of the grunt work and make it easy to flesh out the rest of the API and types. What would need to be considered? What's stopping people from doing this? Has it already been done and my google fu is weak for the day?
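    To picture the target output, here is what a generator might emit for a single described endpoint (an entirely hypothetical C# sketch - the route, types, and client shape are invented for illustration, and it assumes the HttpClient's BaseAddress points at the service):

        using System.Net.Http;
        using System.Threading.Tasks;

        // Emitted from a hypothetical spec entry such as:
        //   GET /categories/{id} -> Category
        public class Category
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class CatalogApiClient
        {
            private readonly HttpClient _http;

            public CatalogApiClient(HttpClient http)
            {
                _http = http;
            }

            public async Task<Category> GetCategoryAsync(int id)
            {
                var json = await _http.GetStringAsync("/categories/" + id);
                // The deserialization library is the generator's choice;
                // Json.NET is shown here.
                return Newtonsoft.Json.JsonConvert.DeserializeObject<Category>(json);
            }
        }

    The mechanical parts (routes, verbs, type shapes) are exactly what a machine-readable description captures, which is why the grunt work seems automatable, leaving naming and ergonomics for the human pass.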

    Read the article

< Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >