Search Results

Search found 1427 results on 58 pages for 'brian rasmussen'.

Page 15 of 58

  • First time user here

    - by Brian
    I've never used Linux before, but I decided I wanted to start somewhere, and Ubuntu seemed like the right place. I burned the 64-bit ISO onto a CD and installed it onto a fresh new hard drive, and it seemed to install nicely, or so I thought. The first major problem was that the screen slipped oddly; the second was that when I tried to log in, everything just kind of froze. I could still move the cursor, but that's it. I'm not too tech-savvy, but I can follow instructions, and any help would be greatly appreciated. I am considering dual booting with my other hard drive, which has Windows 7 on it, but I'm afraid I might mess that up. Plus, if I go that route, I wouldn't know how to get rid of Ubuntu if I decide it's not for me.

    Read the article

  • NDC Oslo Videos Are Online

    - by Brian Schroer
    Originally posted on: http://geekswithblogs.net/brians/archive/2014/06/07/ndc-oslo-videos-are-online.aspx Just when I was almost caught up on TechEd North America 2014 videos… The sessions from this week's NDC Oslo conference can be viewed now on their Vimeo site: http://vimeo.com/ndcoslo/videos/sort:date/format:detail You can filter the conference's agenda and find speakers / topics that you're interested in via this page: http://ndcoslo.oktaset.com/agenda. If I counted correctly, there are 173(!) videos from this year's conference, and a total of 467 videos from this and previous years. I've watched a lot of sessions from the major conferences that include .NET material, and NDC consistently has the best presentations, in my opinion. Lots of my favorite speakers are there: Crockford, Uncle Bob, Damian Edwards, Venkat Subramaniam, Hanselman (I'm interested in seeing if he still thinks "poop" is funny, or got that out of his system at TechEd ;), Cory House (hey, KC!), the .NET Rocks guys, and more, so check it out!

    Read the article

  • How can I connect my USB (HP) printer in 10.4? It can't be discovered, but it worked in 9.x

    - by Brian
    My printer was working under 9.x. It is an HP Photosmart C3100 series. When I open Administration > Printing, no local printers are found. I tried to add it via Other (my local choices are Serial and Other). I have tried many URIs: ipp://localhost:631/ipp, http://localhost/ipp, localhost, 127.0.0.1, etc. None have worked. Under networked printers I have tried JetDirect, using localhost and 127.0.0.1 on port 631. I have tried many options under IPP with different variants of the host while trying to verify a printer. No luck. I tried LPD/LPR with localhost and tried the probe; no luck. I tried the CUPS admin page via localhost:631 and that didn't work either. On the old version it simply found the local printer (I might have picked the driver, I can't remember), but it was the Photosmart C3100 series that was working. I just can't get 10.4 to print.

    Read the article

  • Ubuntu/Windows XP - Joint systems

    - by Brian Buck
    I have a computer partitioned to run both Ubuntu 10.10 and Windows XP, but XP refuses to log in: if I select XP, the login starts with the Windows logo being shown on the screen for a few seconds, then the screen goes blank and the page offering the choice of Windows or Ubuntu reappears. I would therefore like to clear XP completely from the system and just run Ubuntu. Can this be done, and if so, how do I proceed?

    Read the article

  • Life Is Full Of Changes (Part 1)

    - by Brian Jackett
    Today will be my last day with Sogeti.  I’ve been with Sogeti USA for just over 4 years.  In that time I’ve gotten to work on some great projects, develop relationships with some brilliant and passionate people, participate in the .Net developer and SharePoint communities, and grow my skills in a number of areas I’m passionate about.     As with all good things they must come to an end though.  I’ve accepted a position with another company and will provide more details once the transition has completed.  This decision was a difficult one to make but it provides a great career opportunity on many levels.  As much as my new schedule allows I plan to continue participating in local user groups, speaking at conferences, and blogging.     Speaking of which, you may have noticed my reduced blogging activity in the past few months.  In addition to a career change I’m also in the process of moving to a new residence (only a few miles from my current residence, so I’ll still be in Columbus.)  Searching for a new place, filling out paperwork, and all of the other work associated with this move has taken away a good chunk of the time I used to devote to blogging.  Once everything gets settled out with the move and job change I’ll re-evaluate how much time I can devote to blogging.     A big thanks to Sogeti and everyone who has been so supportive over my time with them.  It’s hard to move on, but I am excited for the prospects that the future will bring.         -Frog Out

    Read the article

  • Is a Mission Oriented Architecture (MOA) a better way to describe things than SOA?

    - by Brian Langbecker
    I might sound like a troll, but I would like to understand this more deeply. The place I work at has started to use the term MOA rather than SOA, as we believe it drives more clarity, and I want to compare it to the true goals of SOA. A Mission Oriented Architecture is an approach whereby an application is broken down into various business mission elements, with the database, file assets, batch and real-time functionality all tightly coupled in terms of delivering that piece of the functionality. The mission allows the developers to focus on a specific piece of functionality to get it right, and to build it so that piece can scale as an independent entity within the overall application. By tightly coupling the data, file assets and business logic, you achieve the goal of working on a very large problem in bite-size pieces. Some definitions of SOA mix it up with what is essentially a method call on a web service rather than a true "service". As an architect, I have always found it fun getting everyone on the same page regarding SOA. Is it better to call it a "mission" rather than a "service"?
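    To make the idea concrete, here is a rough C# sketch of one "mission" as a self-contained vertical slice. The "Shipping" namespace and all class and member names are hypothetical, purely to illustrate the shape being described, not any particular implementation:

        // One business mission ("Shipping"), owning its data, files, batch and real-time pieces together.
        namespace Shipping
        {
            public class ShipmentStore
            {
                // Owns the mission's tables and file assets; no other mission reads or writes them.
            }

            public class ShipmentBatchJob
            {
                private readonly ShipmentStore store = new ShipmentStore();
                public void RunNightly() { /* batch side of the mission */ }
            }

            public class ShipmentTrackingService
            {
                private readonly ShipmentStore store = new ShipmentStore();
                public void Track(string shipmentId) { /* real-time side of the mission */ }
            }
        }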

    Read the article

  • Three's mobile broadband

    - by Brian Taylor
    I've recently downloaded Ubuntu 12.10 and it seems great, but I just can't get my Three Ireland MiFi dongle to work at all. When I plug it in, it is only recognized as a storage device. Any ideas on how to establish a permanent connection to this MiFi dongle? I'm in Ireland, running a Dell Inspiron 1520 with 3 GB of RAM, and using the Huawei E586 MiFi device. I really want to continue using this OS, but is everything this complicated to get up and running?

    Read the article

  • Function keys and hot keys are switched

    - by Brian
    I recently got a new laptop and installed 12.10 on it. I noticed right away that the hot keys on the F keys didn't always do what was pictured, and also that the hot keys and the function keys were reversed; e.g. to close a window I have to hit Alt-Fn-F2 instead of just Alt-F2. This is pretty annoying because I've been using the same keyboard shortcuts for years and I don't really want to re-learn them. I would also like to switch what the hot keys do; for example, the F3 key is pictured as increasing screen brightness, but it actually puts the computer to sleep. I would like to change this back to increasing the brightness. I looked in the keyboard shortcuts settings, but these options are not in there. I also downloaded gconf-editor but was unable to find where to change the shortcuts in it (they weren't at apps \ metacity \ keybinding_commands like others described). Any help would be greatly appreciated.

    Read the article

  • Ubuntu One - Files API - Cloud - More detailed info somewhere?

    - by Brian McCavour
    I am just starting on a mobile app for Ubuntu One, and I'm reviewing the info at https://one.ubuntu.com/developer/files/store_files/cloud. I find the information a bit lacking, though. It's a nice reference, but as someone not familiar with it, I had to do a Google search to find out what a "volume" is exactly (it's kind of obvious, but it never hurts to know the specifics). There are also things like: "GET /api/file_storage/v1/volumes - Return a JSON list of Volume Representations, one for each volume. A volume is a synced folder, or the Ubuntu One folder, owned by the user. Note that all volume paths begin with ~." But there's no such thing as a JSON "list". Does it mean array? And other things... So I was wondering if there exists another page with more detailed information, maybe some sample requests / responses or something. I could just write a little proof-of-concept app to answer some of these questions, but I'd prefer not to unless I have to. Thanks.
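    For what it's worth, here is a minimal C# sketch of what a call to that endpoint might look like. The base URL, the OAuth header value, and the "path" field are assumptions on my part (the docs only quote the relative path and note that volume paths begin with ~), and JSON parsing is done here with Json.NET:

        using System;
        using System.Net.Http;
        using Newtonsoft.Json.Linq;

        class ListVolumes
        {
            static void Main()
            {
                var client = new HttpClient();
                // Ubuntu One expects OAuth-signed requests; a real signature would go here.
                client.DefaultRequestHeaders.Add("Authorization", "OAuth signed-parameters-go-here");

                // The "JSON list" in the docs appears to mean a JSON array of volume representations.
                string body = client.GetStringAsync("https://one.ubuntu.com/api/file_storage/v1/volumes").Result;

                JArray volumes = JArray.Parse(body);
                foreach (JToken volume in volumes)
                {
                    // "path" is assumed from the docs' note that all volume paths begin with "~".
                    Console.WriteLine(volume["path"]);
                }
            }
        }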

    Read the article

  • How Does a 724% Return on Your Salesforce Automation Investment Sound?

    - by Brian Dayton
    Oracle Sales Cloud and Marketing Cloud customer Apex IT gained just that, a 724% return on investment (ROI), when they implemented these Oracle Cloud solutions in their fast-moving, rapidly-growing business. Congratulations Apex IT! Apex IT was just announced as a winner of the Nucleus Research 11th annual Technology ROI Awards. The award, given by the analyst firm, highlights organizations that have successfully leveraged IT deployments to maximize value per dollar spent. Fast Facts: Return on Investment - 724%; Payback - 2 months; Average annual benefit - $91,534; Cost:Benefit Ratio - 1:48. Business Benefits: In addition to the ROI and cost metrics, the award calls out improvements in Apex IT's business operations, across both Sales and Marketing teams: improved ability to identify new opportunities and focus sales resources on higher-probability deals; reduced administration and manual lead tracking, resulting in more time selling and a net new client increase of 46%; increased campaign productivity for both Marketing and Sales, including Oracle Marketing Cloud's automation of campaign tracking and nurture programs; and improved margins with more structured and disciplined sales processes, resulting in more effective deal negotiations. Please join us in congratulating Apex IT on this award and their business achievements. Want More Details? Don't take our word for it. Read the full Apex IT ROI Case Study and learn more about Apex IT's business, including their work with Oracle Sales and Marketing Cloud on behalf of their clients in leading Sales organizations. Learn More About Oracle Sales Cloud: www.oracle.com/salescloud | www.facebook.com/oraclesalescloud | www.youtube.com/oraclesalescloud | Oracle Customer Experience and Complementary Sales Solutions: Oracle Configure, Price and Quote (CPQ) Cloud, Oracle Marketing Cloud, Oracle Customer Experience

    Read the article

  • Dual boot Win/7 and Ubuntu 12.04

    - by Brian
    So I've always wanted Ubuntu alongside Windows 7, and I finally tried to set my computer up that way last night. (I'll just skip the long story.) I loaded Ubuntu 12.04 from CD, it loaded fine, and then I clicked the install icon. It asked me which option I would like to proceed with, and I chose to install Ubuntu inside Windows 7. It did everything rather quickly and restarted itself; upon restarting it ejected the CD, and I thought everything was good to go. It brought me to the option to load either Ubuntu or Windows, and I was thinking to myself, wow, that was a lot easier than I thought. Windows 7 loads fine after it checks the HDD, but when I go to load Ubuntu it brings me to the loading screen, stays there for a long period of time, and finally moves on as if it were going to load the regular desktop, but instead it loads into a DOS-looking screen. (I'm sorry if I explain this badly; I'm not great with computers.) At that point it says something like "installation failed". It also says it could not find a file, or something like that. If you need me to go back and get the full message and post it here, I will. But if I put the CD in, it loads fine. Thanks in advance to everyone who helps me solve this problem.

    Read the article

  • My application had a WindowsIdentity crisis

    - by Brian Donahue
    The project I have been working on this week to test computer environments needs to do various actions as a user other than the one running the application. For instance, it looks up an installed Windows Service, finds out who the startup user is, and tries to connect to a database as that Windows user. Later on, it will need to access a file in the context of the currently logged-in user. With ASP.NET, this is super-easy: just go into Web.config and set up the "identity impersonate" node, which can either impersonate a named user or the one who had logged into the website if authentication was enabled. With Windows applications, this is not so straightforward. There may be something I am overlooking, but the limitation seems to be that you can only change the security context on the current thread: any threads spawned by the impersonated thread also inherit the impersonated credentials. Impersonation is easy enough to do, once you figure out how. Here is my code for impersonating a user on the current thread:

        using System;
        using System.ComponentModel;
        using System.Runtime.InteropServices;
        using System.Security.Principal;

        public class ImpersonateUser
        {
            IntPtr userHandle;

            // LogonUser (advapi32) returns a token for the target user's credentials.
            [DllImport("advapi32.dll", SetLastError = true)]
            static extern bool LogonUser(
                string lpszUsername,
                string lpszDomain,
                string lpszPassword,
                LogonType dwLogonType,
                LogonProvider dwLogonProvider,
                out IntPtr phToken
                );

            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool CloseHandle(IntPtr hHandle);

            enum LogonType : int
            {
                Interactive = 2,
                Network = 3,
                Batch = 4,
                Service = 5,
                NetworkCleartext = 8,
                NewCredentials = 9,
            }

            enum LogonProvider : int
            {
                Default = 0,
            }

            public static WindowsImpersonationContext Impersonate(string user, string domain, string password)
            {
                IntPtr userHandle = IntPtr.Zero;

                bool loggedOn = LogonUser(
                    user,
                    domain,
                    password,
                    LogonType.Interactive,
                    LogonProvider.Default,
                    out userHandle);

                if (!loggedOn)
                    throw new Win32Exception(Marshal.GetLastWin32Error());

                // Wrap the token in a WindowsIdentity and impersonate it on the current thread.
                WindowsIdentity identity = new WindowsIdentity(userHandle);
                WindowsPrincipal principal = new WindowsPrincipal(identity);
                System.Threading.Thread.CurrentPrincipal = principal;
                return identity.Impersonate();
            }
        }

        /* Call impersonation */
        ImpersonateUser.Impersonate("UserName", "DomainName", "Password");

        /* When you want to go back to the original user */
        WindowsIdentity.Impersonate(IntPtr.Zero);

    When you want to stop impersonating, call WindowsIdentity.Impersonate with a zero (null) token, as in the last line above. This will allow you to simulate a variety of different Windows users from the same application.

    Read the article

  • I've got my Master's in Software Engineering... Now what? [closed]

    - by Brian Driscoll
    Recently I completed a Master of Science in Software Engineering from Drexel University (Philadelphia, PA, US), because I wanted to have some formal education in software (my undergrad is in Math Ed) and also because I wanted to be able to advance my career beyond just programming. Don't get me wrong; I love to code. I spend a lot of my spare time coding. However, for me writing code is just a means to an end: what I REALLY love is designing software. Not visual design, mind you, but the architecture of the system. So, ideally I'd like to try to get a job doing software architecture. The problem is that I have no real experience in it besides my graduate course work. So, what should I do to make my "bones" in software architecture? UPDATE Just so it's clear, I have over 5 years of work experience in software development and an MCTS cert in addition to my education, so I'm not looking for the usual "I'm fresh out of school, what should I do?" advice.

    Read the article

  • Layers - Logical separation vs physical

    - by P.Brian.Mackey
    Some programmers recommend logical separation of layers over physical. For example, given a data layer (DL), this means we create a DL namespace, not a DL assembly. Benefits include: faster compilation time, simpler deployment, faster startup time for your program, and fewer assemblies to reference. I'm on a small team of 5 devs. We have over 50 assemblies to maintain. IMO this ratio is far from ideal. I prefer an extreme programming approach: if 100 assemblies are easier to maintain than 10,000, then 1 assembly must be easier than 100. Given technical limits, we should strive for fewer than 5 assemblies. New assemblies should be created out of technical need, not layer requirements. Developers are worried for a few reasons. A. People like to work in their own environment so they don't step on each other's toes. B. Microsoft tends to create new assemblies; e.g. ASP.NET has its own DLL, and so does WinForms. C. Devs view this drive for a common assembly as a threat, since some team members have a tendency to change the common layer without regard for how it will impact dependencies. My personal view: I view A as silos, aka cowboy programming, and suggest we implement branching to create isolation. On C: first, that is a human problem and we shouldn't create technical workarounds for human behavior; second, my goal is not to put everything in common. Rather, I want partitions to be made in namespaces, not assemblies. Having a shared assembly doesn't make everything common. I want the community to chime in and tell me if I've gone off my rocker. Is the drive for a single assembly, or my viewpoint in general, illogical or otherwise a bad idea?
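    As a tiny C# illustration of the namespace-over-assembly idea (all names here are hypothetical), the layers stay visible while everything ships in one assembly:

        // One assembly (e.g. MyCompany.App.dll), partitioned into layers by namespace.
        namespace MyCompany.App.Data
        {
            public class CustomerRepository { /* data access only */ }
        }

        namespace MyCompany.App.Domain
        {
            public class CustomerService
            {
                // Business rules only; depends downward on Data, never on the Web namespace.
                private readonly MyCompany.App.Data.CustomerRepository repository =
                    new MyCompany.App.Data.CustomerRepository();
            }
        }

        namespace MyCompany.App.Web
        {
            public class CustomerController
            {
                // Presentation only; depends on Domain.
                private readonly MyCompany.App.Domain.CustomerService service =
                    new MyCompany.App.Domain.CustomerService();
            }
        }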

    Read the article

  • What is the standard for naming variables and why?

    - by P.Brian.Mackey
    I'm going through some training on Objective-C. The trainer suggests using single-character parameter names. The .NET developer in me is crying. Is this truly the convention? Why? For example: @interface Square : NSObject { int size; } -(void)setSize: (int)s; I've seen developers using underscores (int _size) to declare variables (I think people call a variable declared in @interface an "ivar", for some unknown reason). Personally, I prefer descriptive names, e.g.: @interface Square : NSObject { int Size; } -(void)setSize: (int)size; C, like C#, is case sensitive, so why don't we use the same convention as .NET?

    Read the article

  • Should I force users to update an application?

    - by Brian Green
    I'm writing an application for a medium-sized company that will be used by about 90% of our employees and our clients. In planning for the future, we decided to add functionality that verifies that the version of the program that is running is a version we still support. Currently the application will force-quit if the version is not among our supported versions. Here is my concern. Hypothetically, in version 2.0.0.1 method A crashes and burns in glorious fashion and method B works just fine. We release 2.0.0.2 to fix method A and deprecate 2.0.0.1. Now if someone is running 2.0.0.1 to use method B, they will be forced to update to fix something that isn't an issue for them right now. My question is, will the time saved not troubleshooting old, unsupported versions outweigh the cost in usability?
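    For context, here is a minimal C# sketch of the kind of startup check being described. The hard-coded version list and class name are purely illustrative; in practice the list of supported versions would come from a server call:

        using System;
        using System.Linq;
        using System.Reflection;

        static class SupportedVersionCheck
        {
            // Illustrative only; the real list of supported versions would be fetched from the server.
            static readonly Version[] SupportedVersions =
            {
                new Version("2.0.0.2"),
                new Version("2.0.0.3"),
            };

            public static bool CurrentVersionIsSupported()
            {
                Version running = Assembly.GetExecutingAssembly().GetName().Version;
                return SupportedVersions.Contains(running);
            }
        }

        // At startup: if (!SupportedVersionCheck.CurrentVersionIsSupported()) { /* warn the user, then force-quit */ }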

    Read the article

  • Slides and Scripts from SharePoint Cincy 2014

    - by Brian T. Jackett
    Originally posted on: http://geekswithblogs.net/bjackett/archive/2014/06/06/slides-and-scripts-from-sharepoint-cincy-2014.aspx   I was pleased to present at SharePoint Cincy again for the third year.  Geoff and all the organizers do a great job.  My presentation this year was “PowerShell for Your SharePoint Tool Belt”.  Below are my slides and demo scripts.  Thanks for all who attended, I hope you found something that will be useful for you in your work.   Demo PowerShell Scripts   Slidedeck           -Frog Out

    Read the article

  • Rethinking Oracle Optimizer Statistics for P6 Part 2

    - by Brian Diehl
    In the previous post (Part 1), I tried to draw some key insights about the relationship between P6 and Oracle Optimizer Statistics. The first is that average cardinality has the greatest impact on query optimization and that the particular queries generated by P6 are more likely to use this average during calculations. The second is that these statistics are unlikely to change greatly over the life of the application. Ultimately, our goal is to get the best query optimization possible. Or is it? Stability: No application administrator wants to get the call at 9am that their application users cannot get their work done because everything is running slowly. This is a possibility with a regularly scheduled nightly collection of statistics. It may not just be slow performance, but a complete loss of service because one or more queries are optimized poorly. Ideally, this should not be the case. The database optimizer should make better decisions with more up-to-date data. Better statistics may give an incremental performance benefit. However, this benefit must be balanced against the potential cost of system downtime. It is stability that we ultimately desire, not absolute optimal performance. We do want the benefit of more accurate statistics and better query plans, but not at the risk of an unusable system. As a result, I've developed the following methodology around managing database statistics for the P6 database. 1. No Automatic Re-Gathering - A daily, weekly, or other interval of statistics gathering is unlikely to be beneficial. Quite the opposite: it is more likely to cause problems. 2. Smart Re-Gathering - The time to collect statistics is when things have changed significantly. For a new installation of P6, this happens more often because the data is growing from a few rows to thousands and more. But for a mature system, the data is not changing significantly from week to week. There are times to collect statistics: new releases of the application; changes in the underlying hardware or software versions (e.g. a new Oracle RDBMS version); when additional user groups are added, since the new groups may use the software in significantly different ways; and after significant changes in the data, which may be monthly, quarterly, or yearly. 3. Always Test - If you take away one thing from this post, it is to always have a plan to test after changing statistics. In reality, statistics can be collected as often as you desire, provided there are tests in place to verify that performance is the same or better. These might be automated tests or simply a manual script of application functions. 4. Have a Way Out - Never change the statistics without a way to return to the previous set. Think of the statistics as one part of the overall application code that also includes the source code, both application and RDBMS. It would be foolish to change to new code without a way to get back to the previous version. In the final post, I will talk about the actual script I created for P6 PMDB and a possible future direction for managing query performance.

    Read the article

  • New Workstation - Lenovo W530 Core i7 32GB 256GB SSD Win8Pro

    - by Brian Lanham
    So I pretty much have my new machine up and running full-time. I will still have to hit my old workstation for some things, but am more or less working on my new machine. It's really fast. And Bret was right, so far I'm not using all the RAM; 16 would have been enough, but as @CodeMonkeyJava says, "go big or go home". Windows 8 is… interesting. So far I still seem to do most of my work in the "Desktop". However, I like the Store concept and I like the Metro UX. Live tiles are also nice. I really like how I can switch between Desktop and Metro easily. Overall I think Microsoft has done a great job of combining the needed experience for touch and mouse. My overall Windows 8 rating is 5.9 because of the video card; otherwise I'm hitting 7.8. The system boots from cold in about 11 seconds, performs a complete shutdown in 4.7 seconds, and wakes from sleep in less than 1 second. VS 2012 starts and restarts almost instantly; in fact, I find myself staring at the start page without realizing it. Build time doesn't seem to be dramatically improved, but it is faster. I already feel reinvigorated for work with this new machine. I'm looking forward to the performance.

    Read the article

  • Help with Meshes and Shading

    - by Brian Diehr
    In a game I'm making in LibGDX, I wish to incorporate a ripple effect. I'm confused as to how I get access to the individual pixels on the screen, or any way to influence them (apart from what I can do with SpriteBatch). I suspect that I have to do it through OpenGL, and that it has something to do with applying a mesh? This brings me to my question: what exactly is a mesh? I've been working on my game for about half a year and am doing great with the other aspects of it, but I find this more advanced stuff isn't as well documented. Thanks!

    Read the article

  • Windows 7 with Ubuntu Problems HP

    - by Brian
    I have an HP e9300z that currently has 64-bit Windows 7 on it. I wish to dual boot Ubuntu with Windows 7, but when I run the installer I do not even get the option to install alongside Windows, just the plain install or "Something else". Someone in another forum said it was something to do with the HP BIOS, but I cannot find where to change anything. So what do I have to do so that Ubuntu sees my HDD with Windows on it? Manually make a partition? I only have one partition, my Windows C: drive. Thanks.

    Read the article

  • Event-Driven Debugging

    - by Brian Donahue
    Most application troubleshooting involves getting an error, analyzing the error message, and at worst, attaching a debugger to work out the real cause. What is not really covered is how to troubleshoot an application that is not errant but is having a performance issue, more than likely in the middle of the night while you are snug in your bed, sawing logs. What you need is an ever-vigilant cyborg who never sleeps to sit in front of your server all night, but as SkyNet is not live yet, you can settle for the next best thing. Windows provides performance counters and alerts that can tell you when an application reaches an unacceptable threshold of naughty behavior, but although it can tattle on your brainchild, it won't be the child psychiatrist you need to tell you why he's pulling your server's pigtails and pulling faces at the teacher. What you need is to plug a debugger into performance monitor and have it tell you what's going on with your application at the time. For this purpose, I used Microsoft's MDbgEngine as the basis for an application that will dump a program's stacks; I call it Application Slicer Dicer Wonder Dumper Super Cyborg, or StackOMatic for short. StackOMatic can look at a program's behavior and tell you if the stacks are not moving, but it can also work on the command line to dump all managed methods on the stack at will. Now that there is a command you can use to dump the stacks, all you need to do is politely tell Windows to run it when you're displeased with your creation as it's trashing the CPU of your server at 3 AM. The first step is to create a scheduled task to tell StackOMatic to dump your application. Start Task Scheduler, right-click Task Scheduler Library, and then choose Create Task. For this exercise I'm creating a task that will dump the Red Gate SQL Monitor Base Monitor Service. In the Actions tab, I enter the path to StackOMatic and use the arguments to log the stack dump to a file: /PN:RedGate.Response.Engine.Alerting.Base.Service /OUT:c:\users\administrator\MonitorLog.txt Next, I go into Windows Server 2008's Reliability and Performance Monitor and add a new Data Collector Set. This set will produce an alert on the %Processor Time for the service. When the processor time breaches 50%, it will run the StackDumpBaseService task I created. Whenever the service misbehaves, it will append to the log file. Now when I go to work in the morning, I can see what the service was doing when it overloaded the processor and take action.

    Read the article

  • Get/Post Controller Logic Best Practice

    - by Brian Mains
    In an ASP.NET MVC (Razor) project, I have a GET action that loads one of two properties on a model, depending on the parameter passed into the action method. If the parameter has a value, the Group property is populated; if not, the Groups collection property is populated. In the POST action method, when I process the data and repopulate the view, I have to provide similar logic, and I could get away with returning Action(param) (the GET response) to the caller. My question is, based on experience, is that a good practice to get into? I see some downsides to doing that, but it removes code redundancy. Or is there a better alternative?
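    One common alternative is to pull the shared load logic into a private helper that both actions call, rather than returning the GET action from the POST. A rough C# sketch follows; the model, controller, and method names are hypothetical, and the data calls are stand-ins:

        using System.Collections.Generic;
        using System.Web.Mvc;

        public class GroupEditModel
        {
            public int? GroupId { get; set; }
            public string Group { get; set; }            // placeholder property types
            public IList<string> Groups { get; set; }
        }

        public class GroupsController : Controller
        {
            [HttpGet]
            public ActionResult Edit(int? groupId)
            {
                return View(BuildModel(groupId));
            }

            [HttpPost]
            public ActionResult Edit(GroupEditModel posted)
            {
                if (!ModelState.IsValid)
                {
                    // Re-populate the display data without re-running the whole GET action.
                    return View(BuildModel(posted.GroupId));
                }
                // ... save changes ...
                return RedirectToAction("Edit", new { groupId = posted.GroupId });
            }

            // Keeps the "one group vs. all groups" branch in a single place.
            private GroupEditModel BuildModel(int? groupId)
            {
                var model = new GroupEditModel { GroupId = groupId };
                if (groupId.HasValue)
                    model.Group = "group " + groupId.Value;                      // stand-in for the real data call
                else
                    model.Groups = new List<string> { "group 1", "group 2" };    // stand-in for the real data call
                return model;
            }
        }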

    Read the article

  • AngularJS dealing with large data sets (Strategy)

    - by Brian
    I am developing a personal temperature-logging viewer based on my Raspberry Pi curl'ing data into my web server's API. Temperatures are taken every 2 seconds and I can have several temperature sensors posting data, so needless to say I will have a lot of data to handle even within the scope of an hour. I have implemented a very simple paging API on the server so it doesn't time out; it currently returns data in units of 1000 records per call, and the client pages through them. My idea was to initially show, say, the last 20 minutes of data from a sensor (or from all sensors, depending on user choices), then allow the user to select other timeframes to display. The issue comes in when you want to view all sensors or an extended time period (say 24 hours). Is there a best practice for handling this amount of data? Would it be useful to load those first 20 minutes into the live view and then cache something like the last 24 hours into local storage? I haven't been able to find a decent example of this in use, even though there are a lot of ways to approach the problem. I am just looking for suggestions as to what might provide a good balance between performance and not caching the entire data set on the client side (beyond a week of data that might not be feasible).
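    Regardless of the server stack, one common strategy is to downsample on the server and only ship aggregates to the chart for wide timeframes, keeping the raw 2-second data for the short live view. A small illustrative sketch of the bucketing step follows; it is shown in C# only because that fits the rest of this page, and the Reading type and bucket size are hypothetical:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Reading
        {
            public DateTime Time;
            public double Celsius;
        }

        static class Downsample
        {
            // Collapse 2-second readings into one averaged point per bucket (e.g. per minute),
            // so a 24-hour view returns roughly 1,440 points instead of roughly 43,200 per sensor.
            public static IEnumerable<Reading> Average(IEnumerable<Reading> readings, TimeSpan bucket)
            {
                return readings
                    .GroupBy(r => new DateTime(r.Time.Ticks / bucket.Ticks * bucket.Ticks))
                    .OrderBy(g => g.Key)
                    .Select(g => new Reading { Time = g.Key, Celsius = g.Average(r => r.Celsius) });
            }
        }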

    Read the article

  • SSD and HDD both have Windows 7 recovery partitions. Can I delete one to make room for Ubuntu?

    - by Brian Ecker
    I'm trying to install Ubuntu right now, and I've run into a problem. I have Windows 7 installed on my SSD, and I want to install Ubuntu on my HDD, but I already have three partitions on the HDD: two recovery partitions and one data partition. What I don't understand is why my data drive (the HDD) has recovery partitions for Windows 7. The same recovery partitions (or at least I think they are the same: same sizes, same names, same order) are on the SSD with the Windows 7 install. Can I safely delete the recovery partitions on the HDD? My other option, I think, is to put the boot partition for Ubuntu on the SSD, where I only have three partitions, and then put the other three logical partitions for Ubuntu in an extended partition on the HDD. Can I do that, putting the boot partition on one drive and the other partitions on another? Here is a picture of the partitions; I have circled the one I would like to delete to make room: http://imgur.com/XOpJQ

    Read the article
