Search Results

Search found 19115 results on 765 pages for 'region specific'.


  • Good book for a software developer doing part-time (Linux) system administration work

    - by Tony Meyer
    In many smaller organisations, developers often end up doing some system administration work (for obvious reasons). A lot of the time, they have great developer skills, but few system administration skills (perhaps all self-taught), and so have to learn as they go, which is fairly inefficient. Are there canonical (or simply great) books that would help in this situation? More advanced than just using a shell (presumably a developer can do that), but not aimed at someone that hopes to spend many years doing this work. Ideally, something fairly generic (although specific to a distribution would be OK), covering databases, networking, general maintenance, etc, not just one specific task. For the most part, I'm interested in shell-based work (i.e. no GUI installed), although if there's something outstanding I'm missing, please point it out. (As an analogy, replace "system administration" with C, and I'd want K&R, with C++ and I'd want Meyers' "Effective C++").

    Read the article

  • hosts file ignored, how to troubleshoot?

    - by Superbest
    The hosts file on Windows computers is used to bind certain name strings to specific IP addresses to override other name resolution methods. Often, one decides to change the hosts file, and discovers that the changes refuse to take effect, or that even old entries of the hosts file are ignored thereafter. A number of "gotcha" mistakes can cause this, and it can be frustrating to figure out which one. When faced with the problem of Windows ignoring a hosts file, what is a comprehensive troubleshooting protocol that may be followed? This question has duplicates on SO, such as hosts file seems to be ignored, HOSTS file being ignored, and /etc/hosts file being ignored, as well as numerous discussions elsewhere. However, these tend to deal with a specific case, and once whatever mistake the OP made is found out, the discussion is over. If you don't happen to have made the same error, such a discussion isn't very useful. So I thought it would be more helpful to have a general protocol for resolving all hosts-related issues that would cover all cases.
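    A minimal C# diagnostic sketch along those lines, covering a few of the usual gotchas (a hidden .txt extension, a UTF-16 file or BOM, commented-out entries, a stale resolver cache). It assumes the standard Windows path; it is an illustration, not an official tool:
        using System;
        using System.IO;

        class HostsFileCheck
        {
            static void Main()
            {
                // Standard location; SpecialFolder.System is usually C:\Windows\System32.
                string dir = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.System), @"drivers\etc");
                string hosts = Path.Combine(dir, "hosts");

                // Gotcha 1: the file was saved as "hosts.txt" by an editor that appends extensions.
                if (File.Exists(hosts + ".txt"))
                    Console.WriteLine("Found 'hosts.txt' - the extra extension must be removed.");
                if (!File.Exists(hosts))
                {
                    Console.WriteLine("No 'hosts' file found in " + dir);
                    return;
                }

                // Gotcha 2: wrong encoding. A UTF-16 file, or a UTF-8 BOM on some systems,
                // can cause entries (especially the first one) to be ignored.
                byte[] head = File.ReadAllBytes(hosts);
                if (head.Length >= 2 && ((head[0] == 0xFF && head[1] == 0xFE) || (head[0] == 0xFE && head[1] == 0xFF)))
                    Console.WriteLine("File looks UTF-16 encoded - resave it as plain ANSI/UTF-8.");
                else if (head.Length >= 3 && head[0] == 0xEF && head[1] == 0xBB && head[2] == 0xBF)
                    Console.WriteLine("File starts with a UTF-8 BOM - resave it without the BOM.");

                // Gotcha 3: the entry you think is active is actually commented out or malformed.
                foreach (string line in File.ReadLines(hosts))
                {
                    string t = line.Trim();
                    if (t.Length > 0 && !t.StartsWith("#"))
                        Console.WriteLine("Active entry: " + t);
                }

                // After edits: flush the resolver cache ("ipconfig /flushdns"), check that no proxy
                // is bypassing name resolution, and retest with ping <name>.
            }
        }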

    Read the article

  • Practical MySQL schema advice for an eCommerce store - Products & Attributes

    - by Gravy
    I am currently planning my first eCommerce application (MySQL & Laravel Framework). I have various products, which all have different attributes. Describing products very simply: some will have a manufacturer, some will not; some will have a diameter, others a width, height and depth, and others a volume.
    - Option 1: Create a master products table, and separate tables for specific product types (polymorphic relations). That way, I will not have any unnecessary null fields in the products table.
    - Option 2: Create a products table with all possible fields, despite the fact that there will be a lot of null values.
    - Option 3: Normalise so that each attribute type has its own table.
    - Option 4: Create an attributes table, as well as an attribute_values table with the value stored as varchar regardless of the actual data type. The products table would have a many:many relationship with the attributes table.
    - Option 5: Put attributes common to all or most products in the products table, and attach attributes specific to a particular category of product to the categories table.
    I would like to allow easy product filtering and sorting by these attributes. I also want the frontend to be fast, and I am less concerned about the performance of inserting and updating product records. I'm a bit overwhelmed by the range of implementation options and cannot find a suitable answer on the best approach. Could somebody point me in the right direction? In an ideal world, I would like to offer my eCommerce store the kind of functionality seen at http://www.glassesdirect.co.uk/products/. As can be seen in the sidebar, you can select an attribute of the glasses to filter them, e.g. male / female or plastic / metal / titanium, etc. Alternatively, should I just dump the MySQL relational database idea and learn MongoDB?
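    For what it's worth, here is a minimal sketch of the shape of Option 4 (attributes plus attribute_values) and how a sidebar filter maps onto it. It uses C# with in-memory lists purely as illustration; the class, table and attribute names are made up, and the real thing would be MySQL tables queried through Laravel/Eloquent:
        using System;
        using System.Collections.Generic;
        using System.Linq;

        // products:          id, name, price
        // attributes:        id, name            (e.g. "frame_material", "gender")
        // attribute_values:  product_id, attribute_id, value (stored as varchar)
        class Product { public int Id; public string Name; public decimal Price; }
        class ProductAttribute { public int Id; public string Name; }
        class AttributeValue { public int ProductId; public int AttributeId; public string Value; }

        class EavDemo
        {
            static void Main()
            {
                var attributes = new List<ProductAttribute>
                {
                    new ProductAttribute { Id = 1, Name = "frame_material" },
                    new ProductAttribute { Id = 2, Name = "gender" }
                };
                var products = new List<Product>
                {
                    new Product { Id = 10, Name = "Aviator", Price = 49m },
                    new Product { Id = 11, Name = "Wayfarer", Price = 59m }
                };
                var values = new List<AttributeValue>
                {
                    new AttributeValue { ProductId = 10, AttributeId = 1, Value = "titanium" },
                    new AttributeValue { ProductId = 10, AttributeId = 2, Value = "male" },
                    new AttributeValue { ProductId = 11, AttributeId = 1, Value = "plastic" }
                };

                // Sidebar filter "frame_material = titanium" becomes a join on the value table.
                int frameAttrId = attributes.Single(a => a.Name == "frame_material").Id;
                var titanium = from v in values
                               where v.AttributeId == frameAttrId && v.Value == "titanium"
                               join p in products on v.ProductId equals p.Id
                               select p;

                foreach (var p in titanium)
                    Console.WriteLine(p.Name);   // prints "Aviator"
            }
        }
    The trade-off this sketch makes visible: every extra filter is another join against the value table, which is why the hybrid Option 5 (common attributes as real columns, long-tail attributes in the value table) is often chosen when filter speed matters.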

    Read the article

  • Multiple URLs Single Server

    - by Amit
    Hi, in the DNS I have set up multiple URLs for my website (all pointing to the same server). The DNS entries look like this:
        Host                   TTL    Numeric IP
        www.mysite.net         7200   192.168.31.12
        @ (None) .mysite.net   7200   192.168.31.12
        mycompany.mysite.net   7200   192.168.31.12
    I wrote code on the Login.aspx page to check the URL and navigate to the appropriate company login page. If I type www.mysite.net or mysite.net, I am taken to the standard login page. But when I type mycompany.mysite.net I am still taken to the standard login page, and only when I type https://mycompany.mysite.net am I taken to the company-specific login page. Why do I need to type the complete URL with https to reach the company-specific login page? Why does it not work with just mycompany.mysite.net? Any help with this is highly appreciated. Thanks, Amit
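    One thing worth checking is how the page decides which login to show. A hedged ASP.NET sketch (class and helper names are illustrative, not taken from the question) that branches on the host name in a way that is independent of scheme (http vs https) and case:
        using System;
        using System.Web.UI;

        public partial class Login : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Request.Url.Host is just the host name and is the same for http and https requests.
                string host = Request.Url.Host.ToLowerInvariant();   // e.g. "mycompany.mysite.net"

                if (host == "www.mysite.net" || host == "mysite.net")
                {
                    // Show the standard login.
                }
                else
                {
                    // Everything before the shared ".mysite.net" suffix is the company name.
                    string company = host.Substring(0, host.IndexOf('.'));
                    // Load the company-specific login/branding for "company" here (hypothetical helper):
                    // LoadCompanyLogin(company);
                }
            }
        }
    If the branch itself looks correct, the next suspect is usually the web server's site bindings: an http (port 80) binding that only answers for www.mysite.net while the https binding answers for all host names would produce exactly the behaviour described.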

    Read the article

  • Wi-Fi AP finder/selector for Windows 7

    - by wag2639
    Is there a wifi finder program for Windows 7 that lets me choose specific access points to connect to? At my school, the wifi network uses the same SSID for numerous access points, and usually the default wifi finder in Windows 7 connects to the first one. Even if it connects to the closest one with the best signal, there are times when a specific access point will not work. Machine: Lenovo IdeaPad S10-2 running Windows 7 (32-bit). Note: on my ThinkPad I've got Access Connections, which does the job, but that didn't seem to work on the IdeaPad the last time I tried.

    Read the article

  • Skinny controller in ASP.NET MVC 4

    - by thangchung
    The Rails community has always inspired a lot of great ideas, and I really love that community. One of those ideas is "fat models, skinny controllers". I have spent a lot of time with ASP.NET MVC, and I have made some mistakes along the way: I let my controllers get fat, which made them dirty and very hard to maintain, and which seriously violated the SRP principle and KISS as well. So how can we keep controllers skinny in ASP.NET MVC? That question became much clearer to me after reading Manning's "ASP.NET MVC 4 in Action": we can move the work into custom ActionResult classes and implement the logic and data persistence there. I first read about this approach on Jimmy Bogard's blog a couple of years ago, but at the time I never gave it much consideration. That's enough talk. I have published an ASP.NET MVC 4 sample, implemented with Visual Studio 2012 RC, here. It uses the EF framework for the persistence layer, and two free templates from the internet for the UI. The sample is a simple magazine website that manages articles, categories and news. It is not finished yet, but that is no problem, because the point is just to show how we can make the controller skinny. And I want to hear more of your ideas. First, I abstract the base ActionResult class like this:
        public abstract class MyActionResult : ActionResult, IEnsureNotNull
        {
            public abstract void EnsureAllInjectInstanceNotNull();
        }

        public abstract class ActionResultBase<TController> : MyActionResult where TController : Controller
        {
            protected readonly Expression<Func<TController, ActionResult>> ViewNameExpression;
            protected readonly IExConfigurationManager ConfigurationManager;

            protected ActionResultBase(Expression<Func<TController, ActionResult>> expr)
                : this(DependencyResolver.Current.GetService<IExConfigurationManager>(), expr)
            {
            }

            protected ActionResultBase(
                IExConfigurationManager configurationManager,
                Expression<Func<TController, ActionResult>> expr)
            {
                Guard.ArgumentNotNull(expr, "ViewNameExpression");
                Guard.ArgumentNotNull(configurationManager, "ConfigurationManager");
                ViewNameExpression = expr;
                ConfigurationManager = configurationManager;
            }

            protected ViewResult GetViewResult<TViewModel>(TViewModel viewModel)
            {
                var m = (MethodCallExpression)ViewNameExpression.Body;
                if (m.Method.ReturnType != typeof(ActionResult))
                {
                    throw new ArgumentException("ControllerAction method '" + m.Method.Name + "' does not return type ActionResult");
                }

                var result = new ViewResult { ViewName = m.Method.Name };
                result.ViewData.Model = viewModel;
                return result;
            }

            public override void ExecuteResult(ControllerContext context)
            {
                EnsureAllInjectInstanceNotNull();
            }
        }
    I also have an interface for validating all injected objects. This interface makes sure that the instances I inject through the Autofac container are not null. Its implementation is as follows:
        public interface IEnsureNotNull
        {
            void EnsureAllInjectInstanceNotNull();
        }
    Afterwards, I simply implement the HomePageViewModelActionResult class like this:
        public class HomePageViewModelActionResult<TController> : ActionResultBase<TController> where TController : Controller
        {
            #region variables & ctors
            private readonly ICategoryRepository _categoryRepository;
            private readonly IItemRepository _itemRepository;
            private readonly int _numOfPage;

            public HomePageViewModelActionResult(Expression<Func<TController, ActionResult>> viewNameExpression)
                : this(viewNameExpression,
                       DependencyResolver.Current.GetService<ICategoryRepository>(),
                       DependencyResolver.Current.GetService<IItemRepository>())
            {
            }

            public HomePageViewModelActionResult(
                Expression<Func<TController, ActionResult>> viewNameExpression,
                ICategoryRepository categoryRepository,
                IItemRepository itemRepository)
                : base(viewNameExpression)
            {
                _categoryRepository = categoryRepository;
                _itemRepository = itemRepository;
                _numOfPage = ConfigurationManager.GetAppConfigBy("NumOfPage").ToInteger();
            }
            #endregion

            #region implementation
            public override void ExecuteResult(ControllerContext context)
            {
                base.ExecuteResult(context);

                var cats = _categoryRepository.GetCategories();
                var mainViewModel = new HomePageViewModel();
                var headerViewModel = new HeaderViewModel();
                var footerViewModel = new FooterViewModel();
                var mainPageViewModel = new MainPageViewModel();

                headerViewModel.SiteTitle = "Magazine Website";
                if (cats != null && cats.Any())
                {
                    headerViewModel.Categories = cats.ToList();
                    footerViewModel.Categories = cats.ToList();
                }

                mainPageViewModel.LeftColumn = BindingDataForMainPageLeftColumnViewModel();
                mainPageViewModel.RightColumn = BindingDataForMainPageRightColumnViewModel();

                mainViewModel.Header = headerViewModel;
                mainViewModel.DashBoard = new DashboardViewModel();
                mainViewModel.Footer = footerViewModel;
                mainViewModel.MainPage = mainPageViewModel;

                GetViewResult(mainViewModel).ExecuteResult(context);
            }

            public override void EnsureAllInjectInstanceNotNull()
            {
                Guard.ArgumentNotNull(_categoryRepository, "CategoryRepository");
                Guard.ArgumentNotNull(_itemRepository, "ItemRepository");
                Guard.ArgumentMustMoreThanZero(_numOfPage, "NumOfPage");
            }
            #endregion

            #region private functions
            private MainPageRightColumnViewModel BindingDataForMainPageRightColumnViewModel()
            {
                var mainPageRightCol = new MainPageRightColumnViewModel();
                mainPageRightCol.LatestNews = _itemRepository.GetNewestItem(_numOfPage).ToList();
                mainPageRightCol.MostViews = _itemRepository.GetMostViews(_numOfPage).ToList();
                return mainPageRightCol;
            }

            private MainPageLeftColumnViewModel BindingDataForMainPageLeftColumnViewModel()
            {
                var mainPageLeftCol = new MainPageLeftColumnViewModel();
                var items = _itemRepository.GetNewestItem(_numOfPage);
                if (items != null && items.Any())
                {
                    var firstItem = items.First();
                    if (firstItem == null)
                        throw new NoNullAllowedException("First Item".ToNotNullErrorMessage());
                    if (firstItem.ItemContent == null)
                        throw new NoNullAllowedException("First ItemContent".ToNotNullErrorMessage());

                    mainPageLeftCol.FirstItem = firstItem;
                    if (items.Count() > 1)
                    {
                        mainPageLeftCol.RemainItems = items.Where(x => x.ItemContent != null && x.Id != mainPageLeftCol.FirstItem.Id).ToList();
                    }
                }
                return mainPageLeftCol;
            }
            #endregion
        }
    As a final step, I go into the HomeController and add a few lines of code like this:
        [Authorize]
        public class HomeController : BaseController
        {
            [AllowAnonymous]
            public ActionResult Index()
            {
                return new HomePageViewModelActionResult<HomeController>(x => x.Index());
            }

            [AllowAnonymous]
            public ActionResult Details(int id)
            {
                return new DetailsViewModelActionResult<HomeController>(x => x.Details(id), id);
            }

            [AllowAnonymous]
            public ActionResult Category(int id)
            {
                return new CategoryViewModelActionResult<HomeController>(x => x.Category(id), id);
            }
        }
    As you can see, the code in the controller is really skinny, and all the logic has moved to the custom ActionResult classes. Some people say this just moves code out of the controller and into another class, so it is still hard to maintain - it looks as if the complicated code has simply been relocated. But if you look at it in detail, the split is useful: logic that relates to the HttpContext (or similar concerns) can stay in the controller, while the other logic (business requirements, data persistence, ...) is delegated to the custom ActionResult class. Tell me what you think; I am really willing to hear it all from you. The full source code can be found here.

    Read the article

  • Programming concepts taken from the arts and humanities

    - by Joey Adams
    After reading Paul Graham's essay Hackers and Painters and Joel Spolsky's Advice for Computer Science College Students, I think I've finally gotten it through my thick skull that I should not be loath to work hard in academic courses that aren't "programming" or "computer science" courses. To quote the former: "I've found that the best sources of ideas are not the other fields that have the word "computer" in their names, but the other fields inhabited by makers. Painting has been a much richer source of ideas than the theory of computation." — Paul Graham, "Hackers and Painters". There are certainly other, much stronger reasons to work hard in the "boring" classes. However, it'd also be neat to know that these classes may someday inspire me in programming. My question is: what are some specific examples where ideas from literature, art, humanities, philosophy, and other fields made their way into programming? In particular, ideas that weren't obviously applied the way they were meant to be (like most math and domain-specific knowledge), but instead gave utterance or inspiration to a program's design and choice of names. Good examples:
    - The term endian comes from Gulliver's Travels by Jonathan Swift (see here), where it refers to the trivial matter of which side people crack open their eggs.
    - The terms journal and transaction refer to nearly identical concepts in both filesystem design and double-entry bookkeeping (financial accounting). mkfs.ext2 even says: "Writing superblocks and filesystem accounting information: done".
    Off-topic:
    - Learning to write English well is important, as it enables a programmer to document and evangelize his/her software, as well as appear competent to other programmers online.
    - Trigonometry is used in 2D and 3D games to implement rotation and direction aspects.
    - Knowing finance will come in handy if you want to write an accounting package. Knowing XYZ will come in handy if you want to write an XYZ package.
    Arguably on-topic:
    - The Monad class in Haskell is based on a concept by the same name from category theory. Actually, Monads in Haskell are monads in the category of Haskell types and functions. Whatever that means...

    Read the article

  • Native packaging for JavaFX

    - by igor
    JavaFX 2.2 adds a new packaging option for JavaFX applications, allowing you to package your application as a "native bundle". This gives your users a way to install and run your application without any external dependencies on a system JRE or FX SDK. I'd like to give you an overview of what it is and the motivation behind it, and finally explain how to get started with it. Screenshots may give you some idea of the user experience, but first-hand experience is always best. Before we go into all of the boring details, here are a few different flavors of Ensemble for you to try: exe, msi, dmg and rpm installers, and a zip of the Linux bundle for non-rpm-aware systems. Alternatively, check out the native packages for JFXtras 2. What's wrong with existing deployment options? JavaFX 2 applications are easy to distribute as a standalone application or as an application deployed on the web (embedded in the web page, or as a link to launch the application from the webpage). JavaFX packaging tools, such as the ant tasks and the javafxpackager utility, simplify the creation of deployment packages even further. Why add new deployment options? JavaFX applications have an implicit dependency on the availability of the Java and JavaFX runtimes, and while existing deployment methods provide a means to validate that the system requirements are met -- and even guide the user through required installations/upgrades -- they do not fully address all of the important scenarios. In particular, here are a few examples:
    - the user may not have admin permissions to install new system software
    - if the application was certified to run in a specific environment (a fixed version of Java and JavaFX), it may be hard to ensure the user has this environment, because the system version of Java/JavaFX autoupdates (to keep it secure); potentially, other apps may require a different JRE or FX version that your app is incompatible with
    - your distribution channel may disallow dependencies on external frameworks (e.g. the Mac App Store)
    What is a "native package" for a JavaFX application? In short, it is a wrapper for your JavaFX application that turns it into a platform-specific application bundle. Each bundle is self-contained and includes your application code and resources (the same set needed to launch the standalone application from a jar), the Java and JavaFX runtimes (private copies used by this application only), a native application launcher, and metadata (icons, etc.). No separate installation is needed for the Java and JavaFX runtimes. The bundle can be distributed as a .zip or packaged as a platform-specific installer. No application changes are required: the same jar app binaries can be deployed as a native bundle, double-clickable jar, applet, or Web Start app. What is good about it:
    - Easy deployment of your application on fresh systems, without admin permissions when using .zip or a user-level installer
    - No-hassle compatibility. Your application uses a private copy of Java and JavaFX. The developer (you!) controls when these are updated.
    - Easily package your application for the Mac App Store (or Windows, or ...)
    - The process name of the running application is named after your application (and not just java.exe)
    - Easily deploy your application using enterprise deployment tools (e.g. deploy as MSI)
    - Support is built into JDK 7u6 (which includes JavaFX 2.2)
    Is it a silver bullet, such that the other deployment options will be deprecated? No. There are no plans to deprecate the other deployment options supported by JavaFX; each approach addresses different needs.
    Deciding whether native packaging is the best way to deploy your application depends on your requirements. A few caveats to consider:
    - "Download and run" user experience: unlike web deployment, the user experience is not "launch app from web". It is more of a "download, install and run" process, and the user may need to go through additional steps to get the application launched - e.g. accepting a browser security dialog or finding and launching the application installer from the "downloads" folder.
    - Larger download size: in general the size of a bundled application will be noticeably higher than the size of the unbundled app, as a private copy of the JRE and JavaFX is included. We're working to reduce the size through compression and customizable "trimming", but it will always be substantially larger than an app that depends on a "system JRE".
    - Bundle per target platform: bundle formats are platform specific. Currently a native bundle can only be produced on the same system you are building for. That is, if you want to deliver native app bundles on Windows, Linux and Mac, you will have to build your project on all three platforms.
    - Application updates are the responsibility of the developer: web-deployed Java applications automatically download application updates from the web as soon as they are available, and the Java Autoupdate mechanism takes care of updating the Java and JavaFX runtimes to the latest secure version several times every year. There is no built-in support for this for bundled applications. It is possible to use 3rd party libraries (like Sparkle on Mac) to add autoupdate support at the application level. In a future version of JavaFX we may include built-in support for autoupdate (add yourself as a watcher for RT-22211 if you are interested in this).
    Getting started with native bundles. First, you need the latest JDK 7u6 beta build (build 14 or later is recommended). On Windows/Mac/Linux it comes with the JavaFX 2.2 SDK as part of the JDK installation and contains the JavaFX packaging tools, including:
    - bin/javafxpackager: command-line utility to produce JavaFX packages.
    - lib/ant-javafx.jar: set of ant tasks to produce JavaFX packages (the most recommended way to deploy apps).
    For general information on how to use them, refer to the Deploying JavaFX Application guide. Once you know how to use these tools to package your JavaFX application for the other deployment methods, there are only a few minor tweaks necessary to produce native bundles:
    - make sure java is used from the JDK 7u6 bundle you have installed; adjust your PATH settings if needed
    - if you are using the ant tasks, add the nativeBundles="all" attribute to the fx:deploy task
    - if you are using javafxpackager, pass the "-native" option to the deploy command (or, if you are using the makeall command, it will try to build native packages by default)
    - the resulting bundles will be in the "bundles" folder next to the other deployment artifacts
    Note that building some types of native packages (e.g. .exe or .msi) may require additional free 3rd party software to be installed and available on PATH.
    As of JDK 7u6 build 14 you can build the following types of packages:
    - Windows: bundle image; EXE (Inno Setup 5 or later is required; the resulting exe performs a user-level installation with no admin permissions required, at least one shortcut is created (menu or desktop), and the application is launched at the end of the install); MSI (WiX 3.0 or later is required; the resulting MSI performs a user-level installation with no admin permissions required, and at least one shortcut is created (menu or desktop))
    - MacOS: bundle image; dmg (drag and drop) installer
    - Linux: bundle image; rpm (rpmbuild is required; a shortcut is added to the programs menu)
    If you are using NetBeans to produce the deployment packages, you will need to add a custom build step to build.xml to execute the fx:deploy task with native bundles enabled. Here is what we do for the BrickBreaker sample:
        <target name="-post-jfx-deploy">
            <fx:deploy width="${javafx.run.width}" height="${javafx.run.height}"
                       nativeBundles="all"
                       outdir="${basedir}/${dist.dir}" outfile="${application.title}">
                <fx:application name="${application.title}" mainClass="${javafx.main.class}">
                    <fx:resources>
                        <fx:fileset dir="${basedir}/${dist.dir}" includes="BrickBreaker.jar"/>
                    </fx:resources>
                    <info title="${application.title}" vendor="${application.vendor}"/>
                </fx:application>
            </fx:deploy>
        </target>
    This is pretty much regular use of the fx:deploy task; the only special thing here is nativeBundles="all". Perhaps the easiest way to try building native bundles is to download the latest JavaFX samples bundle and build Ensemble, BrickBreaker or SwingInterop. Please give it a try and share your experience. We need your feedback! BTW, do not hesitate to file bugs and feature requests in the JavaFX bug database! Wait! How can I ... This entry is not a comprehensive guide to native bundles, and we plan to post more on this topic. However, I am sure that once you play with native bundles you will have a lot of questions. We may not have all the answers, but please do not hesitate to ask! Knowing all of the questions is the first step to finding all of the answers.

    Read the article

  • Does this file format exist?

    - by Jon Chase
    Is there a file format that handles the following use case... I'd like to create a tar file (or whatever - I'm just using tar here b/c it's a well known file format for containing multiple files) that would be usable even if I only had access to specific chunks of said file. For example, say I tar up my mp3 and photo collection into a 100GB tar file, then put the file into some long term storage somewhere. Later, I want to access a specific mp3 file. I don't want to download the entire 100GB tar file just to get to one mp3. In fact, let's say I can't download the entire 100GB tar file. Instead, I'd like to say "give me megabytes 10 through 19 of the 100GB tar file" and then have the mp3 magically extracted from those 10 megabytes. Does a file format like this exist?
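    For what it's worth, the zip format behaves this way: a central directory at the end of the archive records each entry's offset, so a reader that can seek only needs the directory plus the byte range of the one entry it wants. A small C# sketch over a local file (the paths and entry names are placeholders; the same idea works over HTTP range requests with a custom seekable stream):
        using System;
        using System.IO;
        using System.IO.Compression;   // .NET 4.5+

        class ExtractOne
        {
            static void Main()
            {
                // Entries are read lazily, so roughly only the central directory and the
                // requested entry's compressed bytes are touched.
                using (ZipArchive archive = ZipFile.OpenRead(@"D:\backup\collection.zip"))
                {
                    ZipArchiveEntry entry = archive.GetEntry("music/favourite.mp3");
                    if (entry == null)
                        throw new FileNotFoundException("entry not in archive");

                    entry.ExtractToFile(@"D:\favourite.mp3", true /* overwrite */);
                }
            }
        }
    Tar, by contrast, has no index, so finding one member generally means scanning headers from the start of the archive.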

    Read the article

  • Measuring accesses to files - apache

    - by George
    So, I run a website that, among other things, serves some files (usually PDFs). All of these are stored under a specific directory on the server: /var/www/vhosts/mysite.com/httpdocs/site/pdf_files. Due to storage issues on my VPS I am thinking of getting some S3 or other cloud storage and mounting it as a drive using S3QL/S3FS. Then I will be able to have the pdf_files folder symlinked to the cloud folder and serve those files from there, without any changes to the web app (is that a good plan?). Now, before doing that, to estimate costs, I need to measure how many file accesses people make - how many times those PDF files are downloaded each month, for example. Basically, how many times those PDF files are accessed through the webserver. I'd like to do it at the Apache level. What's the best way this can be done? E.g. measuring the bandwidth used by files in that specific folder would also be nice, but estimating the GET requests I'll be making to Amazon is more important.
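    Before touching the server config, the existing access log already has these numbers. A rough C# sketch (the log path is an assumption to adjust; it expects the common/combined log format, where the quoted request is followed by the status code and then the response size):
        using System;
        using System.IO;

        class PdfAccessCount
        {
            static void Main()
            {
                // Typical vhost log location - adjust to your setup.
                string log = "/var/www/vhosts/mysite.com/statistics/logs/access_log";

                long hits = 0, bytes = 0;
                foreach (string line in File.ReadLines(log))
                {
                    if (!line.Contains("GET /site/pdf_files/")) continue;
                    hits++;

                    // common log format: host ident user [date tz] "GET /path HTTP/1.1" status bytes ...
                    string[] f = line.Split(' ');
                    long size;
                    if (f.Length > 9 && long.TryParse(f[9], out size))
                        bytes += size;
                }
                Console.WriteLine("{0} PDF requests, {1:N0} bytes served", hits, bytes);
            }
        }
    The same could obviously be done with a one-line grep/awk; another option is to give that directory its own CustomLog in the vhost configuration so the per-month numbers accumulate without re-parsing the main log.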

    Read the article

  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure the execution timing of specific methods in your application you usually use one of the following (method - potential pitfalls):
    - Stopwatch: the most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor specific, you can get greatly different values when your thread is scheduled to another core or the core goes into a power saving mode. But things do change, luckily - Intel's Designer's vol3b, section 16.11.1, "Invariant TSC": "The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C- and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."
    - DateTime.Now: good, but it only has a resolution of 16 ms, which can be not enough if you want more accuracy.
    For reporting the measured values (method - potential pitfalls):
    - Console.WriteLine: OK if not called too often.
    - Debug.Print: are you really measuring performance with debug builds? Shame on you.
    - Trace.WriteLine: better, but you need to plug in a good output listener like a trace file. Be aware that the first time you call this method it will read your app.config and deserialize your system.diagnostics section, which also takes time.
    In general it is a good idea to use some tracing library which measures the timing for you, so you only need to decorate some methods with tracing and can later verify whether something has changed for the better or worse. In my previous article I compared measuring performance with quantum mechanics. The analogy works surprisingly well. When you measure a quantum system there is a lower limit to how accurately you can measure something. The Heisenberg uncertainty relation tells us that you cannot measure the momentum and the location of a particle at the same time with infinite accuracy. For programmers the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application you will need to store them somewhere. The fastest storage space besides the CPU cache is memory. But if your timing values consume all available memory, there is no memory left for the actual application to run. On the other hand, if you try to record all memory allocations of your application you will also need to store the data somewhere, which will cost you memory and execution time. These constraints are always there, and regardless of how good the marketing of tool vendors for performance and memory profilers is: any measurement will disturb the system in a non-predictable way. Commercial tool vendors will tell you they calculate this overhead and subtract it from the measured values to give you the most accurate numbers, but in reality that is not entirely true. After falling into the trap of trusting the profiler timings several times, I have got into the habit of:
    1. Measure with a profiler to get an idea where potential bottlenecks are.
    2. Measure again, tracing only the specific methods, to check if each method is really worth optimizing.
    3. Optimize it.
    4. Measure again.
    5. Be surprised that your optimization has made things worse.
    6. Think harder.
    7. Implement something that really works.
    8. Measure again.
    9. Finished! Or look for the next bottleneck.
    Recently I have looked into issues with serialization performance. DataContractSerializer was used for serialization and I was not sure whether XML is really the most optimal wire format. After looking around I found protobuf-net, which uses Google's Protocol Buffer format, a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes:
        using ProtoBuf;
        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Reflection;
        using System.Runtime.Serialization;

        [DataContract, Serializable]
        class Data
        {
            [DataMember(Order = 1)] public int IntValue { get; set; }
            [DataMember(Order = 2)] public string StringValue { get; set; }
            [DataMember(Order = 3)] public bool IsActivated { get; set; }
            [DataMember(Order = 4)] public BindingFlags Flags { get; set; }
        }

        class Program
        {
            static MemoryStream _Stream = new MemoryStream();
            static MemoryStream Stream
            {
                get
                {
                    _Stream.Position = 0;
                    _Stream.SetLength(0);
                    return _Stream;
                }
            }

            static void Main(string[] args)
            {
                DataContractSerializer ser = new DataContractSerializer(typeof(Data));
                Data data = new Data
                {
                    IntValue = 100,
                    IsActivated = true,
                    StringValue = "Hi this is a small string value to check if serialization does work as expected"
                };

                var sw = Stopwatch.StartNew();
                int Runs = 1000 * 1000;
                for (int i = 0; i < Runs; i++)
                {
                    //ser.WriteObject(Stream, data);
                    Serializer.Serialize<Data>(Stream, data);
                }
                sw.Stop();
                Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs);
                Console.ReadLine();
            }
        }
    The results are indeed promising:
        Serializer     Time in ms   N objects
        protobuf-net          807   1,000,000
        DataContract        4,402   1,000,000
    Nearly a factor 5 faster, and a much more compact wire format. Let's use it! After switching over to protobuf-net, the transferred wire data dropped by a factor of two (good) and the performance worsened by nearly a factor of two. How is that possible? We measured it: protobuf-net is much faster! As it turns out, protobuf-net is faster but it has a cost: the first time a type is de/serialized, it uses some very smart code-gen which does not come for free. Let's try to measure this by setting the Runs value of our performance test app not to one million but to 1:
        Serializer     Time in ms   N objects
        protobuf-net           85   1
        DataContract           24   1
    The code-gen overhead is significant and can take up to 200 ms for more complex types. The break-even point where the code-gen cost is amortized by the faster serialization performance is (assuming small objects) somewhere between 20,000-40,000 serialized objects. As it turned out, my specific scenario involved about 100 types and 1000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up with was to reduce the number of types and to serialize primitive types via BinaryWriter directly, which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far did not help much. After looking more deeply at the profiling data I found that one of the 1000 calls took 50% of the time. So how do I find out which call it was?
    Normal profilers fall short at this discipline. A relatively unknown (and undeservedly so) profiler is SpeedTrace, which, unlike normal profilers, creates traces of your application by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out whether that stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call happened during the serialization of a bool value. When, after much analysis, you encounter something unreasonable that you cannot explain, the chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly OK. When you profile a complex system with many interconnected processes, you can never be sure that the timings you just measured are accurate at all. Some process might be hitting the disc, slowing things down for all other processes for some seconds as well. There is a big difference between warm and cold startup. If you restart all processes you can basically forget the first run, because the OS disc cache, JIT and GC make the measured timings very variable. When you are in need of a random number generator, you should measure cold startup times of a sufficiently complex system. After the first run you can try again, getting different and much lower numbers. Now try again at least two more times to get some feeling for how stable the numbers are. Oh, and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today. Thanks to the GC and other random stuff, it can become pretty hard to find things worth optimizing if no big bottlenecks except bloatloads of code are left anymore. When I have found a spot worth optimizing, I make the code changes and measure again to check if something has changed. If it has got slower and I am certain that my change should have made it faster, I can blame the GC again. The thing is that if you optimize and allocate fewer objects, the GC times will shift to some other location. If you are unlucky, it will make your faster working code slower, because you now see GCs at times where none were before. This is where things get really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application so you can change things fast in a reliable manner. Then the normal profilers also start working again. As Vance Morrison points out, it is much more complex to profile a system against the wall clock than to optimize for CPU time. The reason is that for wall clock time analysis you need to understand how your system works and which threads (if you have not one but perhaps 20) are causing a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all. Next time: commercial profiler shootout.
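    One hedged footnote on the code-gen cost measured above: protobuf-net can be asked to do its model compilation up front rather than on the first Serialize call, which moves that one-off ~85 ms out of whatever you are trying to measure. A small sketch that reuses the Data type from the article's sample (call it once at startup):
        using System;
        using System.Diagnostics;
        using System.IO;
        using ProtoBuf;

        static class SerializerWarmUp
        {
            // Call this once at application startup, before any timed code path runs.
            public static void Run()
            {
                Serializer.PrepareSerializer<Data>();   // does the model/code generation now

                // Optional: show that the first real serialization is no longer the expensive one.
                var sw = Stopwatch.StartNew();
                using (var ms = new MemoryStream())
                {
                    Serializer.Serialize<Data>(ms, new Data { IntValue = 100, IsActivated = true, StringValue = "warm run" });
                }
                sw.Stop();
                Console.WriteLine("First serialization after warm-up: {0:N2} ms", sw.Elapsed.TotalMilliseconds);
            }
        }
    This does not change the break-even arithmetic for a workload of ~1000 serializations across ~100 types, but it does keep the one-off cost out of the measured path.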

    Read the article

  • Speaking at DevReach

    - by Sahil Malik
    Next week, I will be speaking at DevReach on the following topics:
    - Authoring custom WCF services in SharePoint (Sahil Malik, Level 400). We live in a different world today! Gone are the times when you built your webparts around postbacks! Welcome Silverlight, jQuery, Bing Maps, Google Maps, and many others! There are many enhancements in SharePoint 2010 that let you build such applications; the question is which is right for you? In this session Sahil compares WCF REST services in SharePoint, the client object model, and custom WCF services, and then dives deep into the WCF aspects of SharePoint. All code, very few slides!
    - Scalability and Performance of SharePoint 2010 (Sahil Malik, Level 400). If there is a topic with more misinformation than anything else, it has to be the scalability and performance aspects of SharePoint. Did you know SharePoint 2010 has some real-world, under-the-covers improvements that help it perform and scale better? This session takes a deep look under the covers at the specific improvements Microsoft has made between SharePoint 2007 and SharePoint 2010 that truly qualify SharePoint 2010 as an enterprise-scalable product. This doesn't mean the product doesn't have limits - but this session is a lot more than just limits written on a PowerPoint slide. This presentation is a true behind-the-scenes look at specific improvements!
    DevReach is a premier conference; check out their very impressive speaker and session lineup.

    Read the article

  • What's The Difference Between Imperative, Procedural and Structured Programming?

    - by daniels
    By researching around (books, Wikipedia, similar questions on SE, etc) I came to understand that Imperative programming is one of the major programming paradigms, where you describe a series of commands (or statements) for the computer to execute (so you pretty much order it to take specific actions, hence the name "imperative"). So far so good. Procedural programming, on the other hand, is a specific type (or subset) of Imperative programming, where you use procedures (i.e., functions) to describe the commands the computer should perform. First question: Is there an Imperative programming language which is not procedural? In other words, can you have Imperative programming without procedures? Update: This first question seems to be answered. A language CAN be imperative without being procedural or structured. An example is pure Assembly language. Then you also have Structured programming, which seems to be another type (or subset) of Imperative programming, which emerged to remove the reliance on the GOTO statement. Second question: What is the difference between procedural and structured programming? Can you have one without the other, and vice-versa? Can we say procedural programming is a subset of structured programming, as in the image?
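    A small, hedged C# illustration of the structured-vs-unstructured distinction (both snippets are imperative; the first stitches control flow together with goto the way unstructured code does, the second expresses the same loop with a structured construct):
        using System;

        class StructuredVsNot
        {
            static void Main()
            {
                // Unstructured style: control flow built from labels and jumps.
                int i = 0;
            top:
                if (i >= 3) goto done;
                Console.WriteLine("goto iteration " + i);
                i++;
                goto top;
            done:

                // Structured style: the same behaviour, but the loop is a single block
                // with one entry and one exit - no labels to trace.
                for (int j = 0; j < 3; j++)
                    Console.WriteLine("for iteration " + j);
            }
        }
    Both are imperative (a sequence of statements mutating state); "structured" only constrains what the control flow may look like, and "procedural" additionally organises those statements into procedures/functions.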

    Read the article

  • How to implement curved movement while tracking the appropriate angle?

    - by Vexille
    I'm currently coding a 2D top-down car game which will be turn-based. And since it's turn-based, the cars won't be controlled directly (i.e. with a simple velocity vector that adjusts its angle when the player wants to turn); instead, the movement path has to be planned beforehand, and then the car needs to follow the path when the turn ends (think Steambirds). This question has some interesting information, but its focus is on homing-missile behaviour, which I kinda had figured out, and it doesn't really apply to my case, I think, since I need to show a preview of the path when the player is planning his turn, then have the car follow that path. In that same question, there's an excellent answer by Andrew Russell which mentions Equations of Motion and Bézier curves. Some of his other suggestions are specific to XNA though, so they don't help much (I'm using the Marmalade SDK). If I assume a Bézier curve as the solution of choice, I'm left with one specific problem: I'll have the car's position (the first endpoint) and the target/final position (the last endpoint), but what should I use as the control point (assuming a square/quadratic curve)? And whether I use a Bézier curve or another parametric equation, I'd still be left with another issue: the car can't just follow the curve, it must turn (i.e. adjust its angle) accordingly. So how can I figure out which way the car should be pointing at any given point on the curve?
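    A minimal C# sketch of the quadratic Bézier part (the endpoint and control-point values below are made up; the key point is that the curve's derivative gives the heading, so the car's angle at parameter t is just Atan2 of the tangent):
        using System;

        struct Vec2
        {
            public float X, Y;
            public Vec2(float x, float y) { X = x; Y = y; }
        }

        static class Bezier
        {
            // Quadratic Bézier: B(t) = (1-t)^2 * p0 + 2(1-t)t * p1 + t^2 * p2, with t in [0,1].
            public static Vec2 Point(Vec2 p0, Vec2 p1, Vec2 p2, float t)
            {
                float u = 1 - t;
                return new Vec2(
                    u * u * p0.X + 2 * u * t * p1.X + t * t * p2.X,
                    u * u * p0.Y + 2 * u * t * p1.Y + t * t * p2.Y);
            }

            // Derivative: B'(t) = 2(1-t)(p1 - p0) + 2t(p2 - p1); its direction is the heading.
            public static float Angle(Vec2 p0, Vec2 p1, Vec2 p2, float t)
            {
                float u = 1 - t;
                float dx = 2 * u * (p1.X - p0.X) + 2 * t * (p2.X - p1.X);
                float dy = 2 * u * (p1.Y - p0.Y) + 2 * t * (p2.Y - p1.Y);
                return (float)Math.Atan2(dy, dx);   // radians; rotate the car sprite to this
            }
        }

        class Demo
        {
            static void Main()
            {
                var p0 = new Vec2(0, 0);     // car position
                var p1 = new Vec2(40, 0);    // control point projected along the car's current heading
                var p2 = new Vec2(60, 40);   // target position chosen by the player

                for (float t = 0; t <= 1.0f; t += 0.25f)
                {
                    Vec2 pos = Bezier.Point(p0, p1, p2, t);
                    float angle = Bezier.Angle(p0, p1, p2, t);
                    Console.WriteLine("t={0:0.00}  pos=({1:0.0},{2:0.0})  heading={3:0.00} rad", t, pos.X, pos.Y, angle);
                }
            }
        }
    For the control point, one common choice is to project it ahead of the car along its current facing, as in the demo values above: since B'(0) points from p0 to p1, the curve then leaves the car tangentially and bends toward the target, and the further ahead p1 sits, the wider the turn. Stepping t by arc length rather than uniformly keeps the speed constant along the path.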

    Read the article

  • Big Data for Retail

    - by David Dorf
    Right up there with mobile, social, and cloud is the term "big data," which seems to be popping up lots in the press these days.  Companies like Google, Yahoo, and Facebook have popularized a new class of data technologies meant to solve the problem of processing large amounts of data quickly.  I first mentioned this in a posting back in March 2009.  Put simply, big data implies datasets so large they can't normally be processed using a standard transactional database.  The term "noSQL" is often used in this context as well. Actually, using parallel processing within the Oracle database combined with Exadata can achieve impressive results.  Look for more from Oracle at OpenWorld as hinted by Jean-Pierre Dijcks. McKinsey recently released a report on big data in which retail was specifically mentioned as an industry that can benefit from the new technologies.  I won't rehash that report because my friend Rama already did such a good job in his posting, Impact of "Big Data" on Retail. The presentation below does a pretty good job of framing the problem, although it doesn't really get into the available technologies (e.g. Exadata, Hadoop, Cassandra, etc.) and isn't retail specific. Determine the Right Analytic Database: A Survey of New Data Technologies So when a retailer asks me about big data, here's what I say:  Big data refers to a set of technologies for processing large volumes of structured and unstructured data.  Imagine collecting everything uttered by your customers on Facebook and Twitter and combining it with all the data you can find about the products you sell (e.g. reviews, images, demonstration videos), including competitive data.  Assuming you could process all that data, you could then personalize offers to specific customers based on their tastes, ensure prices are competitive, and implement better local assortments.  It's really not that far off.

    Read the article

  • WCI Analytics Installation / Configuration Support Webinar

    - by brian.harrison
    Based on the success of the OAM / WCI integration webinar, the second in our series of Technical Support "brown bag" webinars will be delivered on Tuesday, March 30 at 8AM Pacific Daylight Time. Please review the details below, if you would like to attend the webinar, please take a moment to send an email to the address provided for registration and you will be enrolled in the meeting. What are the best practices for installing and configuring Analytics for the WebCenter Interaction (formerly "ALUI") Portal Application? What are some of the most common failures that occur in this implementation and what can be done to correct these common issues? What are the most common reasons for the tables to be "empty" when I try to produce utilization reports? These are just some of the main areas that will be covered in this one hour webinar which will demonstrate the WCI Analytics installation and configuration in action. Our demonstration will focus on areas where Technical Support sees the largest numbers of customer questions become support incidents in an effort to help avoid the need to create an incident to get the implementation working properly in the customer environment. We will demonstrate the most recent version of WCI Analytics (10.3.0.1) for this presentation, but naturally specific issues known to specific versions will be covered as well. Please join us for what we know will be a valuable and relevant learning session. If you would like to attend this session please send an email to [email protected] indicating your interest, and we will respond to you with a meeting invitation including all of the required access information.

    Read the article

  • How to cross-reference many character encodings with ASCII OR UTFx?

    - by Garet Claborn
    I'm working with a binary structure, the goal of which is to index the significance of specific bits for any character encoding so that we may trigger events while doing specific checks against the profile. Each character encoding scheme has an associated system record. The record's leading value is a C++ unsigned long long binary value and signifies the length, in bits, of encoded characters. Following the length are three values, each a bit field of that length:
    - offset_mask - defines the occurrence of non-printable characters within the min/max of print_mask
    - range_mask - defines the occurrence of the most popular 50% of printable characters
    - print_mask - defines the occurrence value of printable characters
    The structure of the profiles has changed since the original version of this question; most likely I will try to factorize or compress these values in the long term instead of starting out with ranges, after reading more. I have to write some of the core functionality myself for these main reasons: it has to fit into a particular event architecture we are using; I want a better understanding of character encoding (I'm about to need it); and integrating into a non-linear design excludes many libraries without special hooks. I'm unsure whether there is already a standard, cross-encoding mechanism for communicating such data. I'm just starting to look into how chardet might do profiling, as suggested by @amon. The Unicode BOM would easily be enough (for my current project) if all encodings were Unicode. Ideally, of course, one would like to support all encodings, but I'm not asking about implementation - only the general case. How can these profiles be efficiently populated, to produce a set of bitmasks which we can use to match strings with common characters in multiple languages? If you have any editing suggestions please feel free; I am a lightweight when it comes to localization, which is why I'm trying to reach out to the more experienced. Any caveats you may be able to help with will be appreciated.
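    To make the record shape concrete, here is a rough C# rendering of the profile as described (a 64-bit length followed by three bit fields of that length). The names and the in-memory layout are assumptions, since the original structure is C++ and its byte-level format isn't given:
        using System;
        using System.Collections;

        // One system record per character-encoding scheme.
        class EncodingProfile
        {
            public ulong BitsPerCharacter;   // leading unsigned long long: encoded character width in bits

            // Three bit fields, each BitsPerCharacter bits long.
            public BitArray OffsetMask;      // non-printable characters inside the print_mask min/max range
            public BitArray RangeMask;       // the most popular 50% of printable characters
            public BitArray PrintMask;       // printable characters

            // Example check: is bit 'position' of an encoded character flagged as printable?
            public bool IsPrintableBit(int position)
            {
                return PrintMask[position];
            }
        }
    Whether the masks are better stored as raw 64-bit words, ranges, or something compressed is exactly the open design question in the post; the class above only pins down what the three fields mean.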

    Read the article

  • AJI Report 14 &ndash; Brian Lagunas on XAML and Windows 8

    - by Jeff Julian
    We sat down with Brian at the Iowa Code Camp to talk about his sessions, WPF, Application Design, and what Infragistics has to offer developers. Infragistics is a huge supporter of regional events like Iowa Code Camp and we want to thank them for their support of the Midwest region. Brian is a sharp guy and it was great to meet him and learn more about what makes him tick. Brian Lagunas is an INETA Community Speaker, co-leader of the Boise .Net Developers User Group (NETDUG), and original author of the Extended WPF Toolkit. He is a multi-recipient of the Microsoft Community Contributor Award and can be found speaking at a variety of user groups and code camps around the nation. Brian currently works at Infragistics as a Product Manager for the award winning NetAdvantage for WPF and Silverlight components. Before geeking out, Brian served his country in the United States Army as an infantryman and later served his local community as a deputy sheriff.   Listen to the Show   Site: http://brianlagunas.com Twitter: @BrianLagunas

    Read the article

  • DLP projector has odd colors

    - by torbengb
    At my place of work, we have several different video projectors, but they all use DLP technology, and the colors are wrong: for instance, yellow looks more like green, and all other colors are similarly distorted. Any kind of presentation or collaborative work is hindered by these wrong colors. On the laptop screen, the colors are fine but on the projector (hooked up via normal short VGA cable, and showing the same image at the same time), the colors look wrong. This is not about one specific projector or one specific laptop; it seems that any combination of projector + laptop has the exact same problem. Someone said that DLP is poor technology, but that's not true. I'm using a DLP projector at home (regular PC connected via HDMI) and the colors are excellent. Still, there's some kind of curse on the machinery at work. How can we get decent colors?

    Read the article

  • What are the implications of expanding an internal subnet mask?

    - by Philip
    Our network currently runs on a 192.168.0.x subnet, all controlled through DHCP, except for the few main servers which have hard-configured IP address settings. What would I kill if I changed the DHCP-published subnet mask from 255.255.255.0 to 255.255.0.0? The reason for doing this is not that we have a huge sudden influx of machines, but that I'd like to start partitioning specific devices into specific IP ranges (to be neat and tidy). For what it's worth, I don't plan on changing the allocated DHCP address range, but rather want to move some of the reserved and excluded DHCP addresses out of the address pool, e.g. printers will be 192.168.2.x. I will obviously need to change the subnet mask manually on my manually configured devices.
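    The arithmetic behind "what would I kill" can be sanity-checked in a few lines. A hedged C# sketch that ANDs an address with each mask and shows which hosts land on the same network (the example addresses are taken from the question):
        using System;
        using System.Net;

        class SubnetCheck
        {
            static uint ToUInt(string ip)
            {
                byte[] b = IPAddress.Parse(ip).GetAddressBytes();
                return (uint)(b[0] << 24 | b[1] << 16 | b[2] << 8 | b[3]);
            }

            static bool SameNetwork(string a, string b, string mask)
            {
                uint m = ToUInt(mask);
                return (ToUInt(a) & m) == (ToUInt(b) & m);
            }

            static void Main()
            {
                // A server on the old range vs. a printer moved to 192.168.2.x:
                Console.WriteLine(SameNetwork("192.168.0.10", "192.168.2.50", "255.255.255.0")); // False
                Console.WriteLine(SameNetwork("192.168.0.10", "192.168.2.50", "255.255.0.0"));   // True
            }
        }
    The practical implication is in that first line: any host still carrying the old /24 mask (the manually configured servers, or clients that haven't renewed their lease yet) will treat 192.168.2.x as a foreign network and send that traffic to the default gateway, so the static hosts need their masks updated at the same time as the DHCP scope.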

    Read the article

  • GPLv2 - Multiple AI chess engines to bypass GPL

    - by Dogbert
    I have gone through a number of GPL-related questions, the most recent being this one: http://stackoverflow.com/questions/3248823/legal-question-about-the-gpl-license-net-dlls/3249001#3249001. I'm trying to see how this would work, so bear with me. I have a simple GUI interface for a game of chess. It can essentially send/receive commands to/from an external chess engine (e.g. Tong, Fruit, etc). The application/GUI is similar in nature to XBoard ( http://www.gnu.org/software/xboard/ ), but was independently designed. After going through a number of threads on this topic, it seems that the FSF considers dynamically linking against a GPLv2 library to create a derivative work, so that the GPLv2 would extend to my proprietary code and I would have to release the source to my entire project. Other legal precedents indicate the opposite: that dynamic linking doesn't cause the "viral" effect of the GPL to propagate to my proprietary code. Since there is no official consensus that can give a hard-and-fast answer to the dynamic linking question, would this be an acceptable alternative:
    - I build my chess GUI so that it sends/receives the chess engine AI logic as text commands through an external interface library that I write.
    - The interface library I wrote is itself released under the GPL.
    - The interface library is only used to communicate via a generic text pipe to external command-line chess engines.
    - The chess engine itself would be built as a command-line utility rather than as a library of any sort, and would just send strings in the Universal Chess Interface or Chess Engine Communication Protocol ( http://en.wikipedia.org/wiki/Chess_Engine_Communication_Protocol ) format.
    The one "gotcha" is that the interface library should not be specific to one single GPL'ed chess engine, otherwise the entire GUI would be "entirely dependent" on it. So I just make my interface library able to connect to any command-line chess engine that uses a specific format, rather than just one unique engine. I could then include pre-built command-line versions of any of the chess engines I'm using. Would that sort of approach allow me to do the following:
    - NOT release the source for my UI
    - Release the source of the interface library I built (if necessary)
    - Use one or more chess engines and bundle them as external command-line utilities that ship with a binary version of my UI
    Thank you.
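    Setting the licensing question itself aside, the "generic text pipe" architecture described above is straightforward to sketch. A hedged C# example that launches any UCI-speaking engine as an external process and exchanges plain text with it; the engine path is a placeholder, and the commands shown ("uci", "isready", "position", "go") are standard UCI strings:
        using System;
        using System.Diagnostics;

        class UciPipe
        {
            static void Main()
            {
                var psi = new ProcessStartInfo
                {
                    FileName = @"engines\some-engine.exe",   // any command-line UCI engine
                    RedirectStandardInput = true,
                    RedirectStandardOutput = true,
                    UseShellExecute = false,
                    CreateNoWindow = true
                };

                using (Process engine = Process.Start(psi))
                {
                    var stdin = engine.StandardInput;
                    stdin.AutoFlush = true;
                    stdin.WriteLine("uci");                              // handshake
                    stdin.WriteLine("isready");
                    stdin.WriteLine("position startpos moves e2e4");
                    stdin.WriteLine("go movetime 1000");                 // think for one second

                    string line;
                    while ((line = engine.StandardOutput.ReadLine()) != null)
                    {
                        Console.WriteLine(line);
                        if (line.StartsWith("bestmove")) break;          // engine's reply, e.g. "bestmove e7e5"
                    }
                    stdin.WriteLine("quit");
                }
            }
        }
    Because the GUI only ever writes and reads protocol strings, it has no compile-time or link-time relationship with any particular engine; whether that separation is enough for the licensing concern is the part that still needs proper legal advice.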

    Read the article

  • Looking for a good free or cheap task tracking system

    - by JWood
    I've finally decided that pen and paper/whiteboards are not up to the job as my workload increases, so I'm looking for a good task tracking system. I need something that can track tasks in categories (projects) and allow me to assign a priority to each task. I've tried iTeamWork, which requires projects to have an end time; that is no good for me, as at least one of my projects is ongoing. I also tried Teamly, which required tasks to be set to a specific day; that is no good either, as tasks sometimes take more than a day and I would like them organised by priority rather than by specific days. Preferably I'm looking for something hosted, but I'm happy to install on our servers if it supports PHP/MySQL. Oh, and an iPhone client would be the icing on the cake! Can anyone recommend anything?

    Read the article
