Search Results

Search found 8821 results on 353 pages for 'core duo'.


  • Core Data: Overkill for simple, static UITableView-based iPhone App?

    - by David Foster
    Hello! I have a rather simple iPhone app consisting of numerous views containing a single, grouped table view. These views are held together in navigation controllers which are grouped in a tab bar. Simple stuff. My table views do little more than list text (like "Dog", "Cat" and "Weasel") and this data is being served from a collection of plists. It's perhaps worth mentioning too that these tables are 'static' in the sense that their data is pre-determined and will only ever be amended—and if so, very rarely indeed—by the developer (in this case, moi). This rudimentary approach has reached its limits though, and I think I'm going to need something a bit more relational. I have worked a tad with Core Data in the past, but only with apps whose data is determined by user input. I have four closely related questions: (1) Is Core Data overkill for an app consisting mainly of a selection of simple table views? (2) Do you recommend using Core Data to manage data which is predetermined and extremely unlikely to ever change? (3) Can one lock Core Data down so that its data can't change, thereby relinquishing my responsibility as the developer to handle the editing and saving of the managed object context? (4) How do I go about giving Core Data my predetermined data, and in a format I know it can work with? Thanks a bunch guys.

  • How does Core Data determine if an NSObject's data can be dropped?

    - by Kevin
    In the app I am working on now, I was storing about 500 images in Core Data. I have since pulled those images out and store them in the file system, but in the process I found that the app would crash on the device if I had an array of 500 objects with image data in them. An array of 500 object IDs, with the image data in those objects, worked fine. The 500 objects without the image data also worked fine. I found that I got the best performance with both an array of object IDs and image data stored on the filesystem instead of in Core Data. The conclusion I came to was that keeping an object in an array tells Core Data I am "using" that object, so Core Data holds on to its data. Is this correct?

  • Can we use union of two sqlite databases with same tables for Core Data?

    - by Tofrizer
    Hi All, I have an iPhone Core Data app with a pre-populated sqlite "baseline" database. Can I add a second, smaller sqlite database with the same tables as my pre-populated "baseline" database but with additional / complementary data, such that Core Data will happily union the data from both databases and, ultimately, present it to me as if it were all a single data source? The idea I had is: 1) the "baseline" database never changes. 2) I can download the smaller "complementary" sqlite database for additional data as and when I need to (I'm assuming downloading a sqlite database is allowed; please comment if otherwise). 3) Core Data is then able to union data from 1 and 2, and I can then reference this unified data through my defined Core Data managed object model. Hope this makes sense. Thanks in advance.

  • Server Core: Best Practice for Applications on Windows Server

    - by The Official Microsoft IIS Site
    I have been talking with a number of customers, CSOs, CIOs and industry professionals over the past few weeks and I realized that the availability and benefits of using the Server Core option of Windows Server 2008 or Windows Server 2008 R2 were not as widely known as I think they should be. Windows Server Core provides a minimal installation environment for running specific server roles, which reduces the maintenance and management requirements and the attack surface for those server roles. The following...(read more)

  • Dependency problem with mysql-server-core-5.5

    - by Tama
    When I start the Ubuntu Software Centre, it says I cannot do anything until the package catalog is repaired. However, repairing fails. I ran "sudo apt-get -f install" and found the problem to be:

    mysql-server-5.5 depends on mysql-server-core-5.5 (= 5.5.24-0ubuntu0.12.04.1); however:
      Version of mysql-server-core-5.5 on system is 5.5.28-0ubuntu0.12.04.2.

    So, the question is, how do I install that version and resolve the dependency problem?
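
    One common way to realign the two packages – a sketch, assuming the Ubuntu 12.04 updates/security repositories are enabled and that upgrading mysql-server-5.5 to the same 5.5.28 build as the already-installed core package is acceptable – is:

        sudo apt-get update
        sudo apt-get install mysql-server-5.5
        # if apt refuses until the broken state is cleared, retry the repair first:
        sudo apt-get -f install

    This brings mysql-server-5.5 up to the version that matches mysql-server-core-5.5 rather than trying to downgrade the core package.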

  • Server Core in Windows Server 2012 - Improved Taste, Less Filling, More Uptime

    - by KeithMayer
    Would you like to reduce your patch maintenance requirements by over a third? Of course! Who wouldn't? Server Core in Windows Server 2012 reduces the disk footprint of the operating system by approximately 4GB! When using the Server Core installation option, the features related to the Server Graphical Shell (i.e., Explorer, the Start Screen, and Internet Explorer) and Graphical Management Tools and Infrastructure are not installed – GUI features that are usually not required on a dedicated s...

  • Intel fields six-core embedded CPUs

    LinuxDevices: "The Xeon Processor 5600 series also includes the chipmaker's first six-core embedded processors, plus a dual-core processor for 'micro servers' that has a TDP of only 30 Watts, the company says."

  • How to limit a process to a single CPU core?

    - by Jonathan
    How do you limit a single-process program run in a Windows environment to run on only a single CPU core of a multi-core machine? Is it the same between a windowed program and a command-line program? UPDATE: The reason for doing this is benchmarking various programming language aspects. I need something that works from the very start of the process; therefore @akseli's answer, although great for other cases, doesn't solve my case.
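
    For reference, one common approach on Windows is to set the process affinity mask. Below is a minimal C# sketch (benchmark.exe is a hypothetical placeholder), with the caveat that it applies the mask just after launch rather than from the first instruction:

        using System;
        using System.Diagnostics;

        class PinToOneCore
        {
            static void Main()
            {
                // Launch the target program (hypothetical benchmark executable).
                Process p = Process.Start("benchmark.exe");

                // Affinity is a bitmask of allowed CPUs; bit 0 set means
                // the process may only be scheduled on CPU 0.
                p.ProcessorAffinity = (IntPtr)0x1;

                p.WaitForExit();
            }
        }

    To have the mask in force from the very start of the process, the cmd.exe built-in "start /affinity 1 benchmark.exe" launches the program with the affinity already applied.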

  • What’s new in ASP.NET 4.0: Core Features

    - by Rick Strahl
    Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime. In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio.

    Core Engine Features

    Most of the high profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following list describes some of the things I've found useful among the new features.

    Clean web.config Files Are Back!

    If you've been using ASP.NET 3.5, you probably have noticed that the web.config file has turned into quite a mess of configuration settings between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0, and so almost all of the new features of .NET 3.5 were essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default. A default empty ASP.NET 4.0 Web Forms project looks like this:

        <?xml version="1.0"?>
        <configuration>
          <system.web>
            <compilation debug="true" targetFramework="4.0" />
          </system.web>
        </configuration>

    Need I say more?

    Configuration Transformation Files to Manage Configurations and Application Packaging

    ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules using a Visual Studio and WebDeploy provided XSLT syntax. The idea is that you can create a 'master' configuration file and then create customized versions of this master configuration file by applying some relatively simplistic search and replace, add or remove logic to specific elements and attributes in the original file.

    To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute and replaces the CustomErrors section:

        <?xml version="1.0"?>
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <connectionStrings>
            <add name="MyDB"
                 connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
          </connectionStrings>
          <system.web>
            <compilation xdt:Transform="RemoveAttributes(debug)" />
            <customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt:Transform="Replace">
              <error statusCode="500" redirect="InternalError.htm"/>
            </customErrors>
          </system.web>
        </configuration>

    You can see the XSL transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated – all the rest of the original web.config file stays intact. Visual Studio 2010 supports this functionality directly in the project system, so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can then use the Publish <yourWebApplication> option on the Build menu, which allows publishing to disk, via FTP or to a Web Server using Web Deploy. You can also create a deployment package as a .zip file which can be used by the WebDeploy tool to configure and install the application. You can manually run the Web Deploy tool or use the IIS Manager to install the package on the server or other machine. You can find out more about WebDeploy and Packaging here: http://tinyurl.com/2anxcje.

    Improved Routing

    Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path and routing it to a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then integrated it with a basic implementation in the core ASP.NET engine via a separate ASP.NET routing assembly. In ASP.NET 4.0, the process of using routing functionality gets a bit easier. First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects to use routing. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route any ASP.NET Page request without first having to implement an IRouteHandler implementation. It would have been nice if this could have been extended to serve *any* handler implementation, but unfortunately for anything but Page-derived handlers you still have to implement a custom IRouteHandler implementation. ASP.NET Pages now include a RouteData collection that contains route information. Retrieving route data is now a lot easier by simply using this.RouteData.Values["routeKey"], where the routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]).

    The Page class also has a GetRouteUrl() method that you can use to create URLs with route data values rather than hardcoding the URL:

        <%= this.GetRouteUrl("users", new { userId = "ricks" }) %>

    You can also use the new expression syntax <%$RouteUrl %> to accomplish something similar, which can be easier to embed into Page or MVC View code:

        <a runat="server" href='<%$RouteUrl:RouteName=user, id=ricks %>'>Visit User</a>

    Finally, the Response object also includes a new RedirectToRoute() method to build a route URL for redirection without hardcoding the URL:

        Response.RedirectToRoute("users", new { userId = "ricks" });

    All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET. To find out more about the routing improvements you can check out Dan Maharry's blog, which has a couple of nice entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5.

    Session State Improvements

    Session state is an often used and abused feature in ASP.NET, and version 4.0 introduces a few enhancements geared towards making session state more efficient and minimizing at least some of the ill effects of overuse. The first improvement affects out-of-process session state, which is typically used in web farm environments or for sites that store application-sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using OutOfProc session state, ASP.NET serializes all the data in the session state bag into a blob that gets carried over the network and stored either in the State Server or in SQL Server via the session provider. Version 4.0 improves this serialization of the session data by offering a compressionEnabled option on the web.config <sessionState> section, which forces the serialized session state to be compressed. Depending on the type of data that is being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data. In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of the request processing queue. In prior versions, the only way to specify whether session state is available was by implementing a marker interface on the HTTP handler implementation. In ASP.NET 4.0, you can now turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline processing, as long as it occurs before the AcquireRequestState pipeline event.
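
    As a concrete illustration of the programmatic toggle, here is a minimal sketch of an IHttpModule that disables session state for some requests; the "/static" path prefix is purely hypothetical, and BeginRequest is used because it fires well before AcquireRequestState:

        using System;
        using System.Web;
        using System.Web.SessionState;

        // Sketch: turn session state off early in the pipeline for requests
        // that don't need it, before AcquireRequestState runs.
        public class SessionToggleModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    HttpContext ctx = ((HttpApplication)sender).Context;
                    // Hypothetical rule: static content never needs a session.
                    if (ctx.Request.Path.StartsWith("/static", StringComparison.OrdinalIgnoreCase))
                        ctx.SetSessionStateBehavior(SessionStateBehavior.Disabled);
                };
            }

            public void Dispose() { }
        }

    The module would of course still need to be registered in web.config before it runs.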

    Output Cache Provider

    Output caching in ASP.NET has been a very useful but potentially memory-intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime-related parameters. While this works well enough for many intended scenarios, it can also quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module, so it becomes possible to plug in custom storage strategies for cached pages. One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET in general into a generic Windows AppFabric framework in the future, so various mechanisms like OutputCache, the non-Page-specific ASP.NET cache and possibly even session state can eventually use the same caching engine for storage of persisted data in both in-memory and out-of-process scenarios. For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom cache provider based on the System.Web.Caching.OutputCacheProvider class. You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at: http://tinyurl.com/2vt6g7l.

    Response.RedirectPermanent

    ASP.NET 4.0 includes features to issue a permanent redirect as an HTTP 301 Moved Permanently response rather than the standard 302 Found response. In pre-4.0 versions you had to manually create your permanent redirect by setting the Status and StatusCode properties – Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent() which provides permanent redirection of route URLs.

    Preloading of Applications

    ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle, it can appear very slow to serve data to clients while the application is warming up and loading initial resources. So rather than serve these startup requests slowly, in ASP.NET 4.0 you can force the application to initialize itself first before even accepting requests for processing. This feature works only on IIS 7.5 (Windows 7 and Windows Server 2008 R2) and works in combination with IIS. You can set up a worker process in IIS 7.5 to always be running, which starts the Application Pool worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting the serviceAutoStartEnabled attribute on a particular site, along with an optional serviceAutoStartProvider class that can be used to receive "startup events" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup. The configuration settings need to be made in applicationhost.config:

        <sites>
          <site name="WebApplication2" id="1">
            <application path="/"
                         serviceAutoStartEnabled="true"
                         serviceAutoStartProvider="PreWarmup" />
          </site>
        </sites>

        <serviceAutoStartProviders>
          <add name="PreWarmup" type="PreWarmupProvider,MyAssembly" />
        </serviceAutoStartProviders>

    Hooking up a warm-up provider is optional, so you can omit the provider definition and reference. If you do define it, here's what it looks like:

        public class PreWarmupProvider : System.Web.Hosting.IProcessHostPreloadClient
        {
            public void Preload(string[] parameters)
            {
                // initialization for app
            }
        }

    While this code is running, ASP.NET/IIS will hold requests from hitting the pipeline, so until it completes the application will not start taking requests. The idea is that you can perform any pre-loading of resources and cache values so that the first request will be ready to perform at an optimal performance level without lag.

    Runtime Performance Improvements

    According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use fewer resources. These improvements come without any change requirements in applications and are virtually transparent – you get the benefits simply by updating to ASP.NET 4.0.

    Summary

    The core feature set changes are minimal, which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has been proven as a solid platform, and I'm actually rather happy to see that most of the effort in this release went into stability, performance and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful. A lot of people are still running pure .NET 2.0 applications these days and have stayed off .NET 3.5 for some time now. I think that version 4.0, with its full .NET runtime rev and its assembly and configuration consolidation, will make an attractive platform for developers to update to. If you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET.

  • Intel Core i7-4960HQ vs. 4850HQ (Haswell) [on hold]

    - by Timothy R. Butler
    I'm looking at the new MacBook Pros and trying to decide between the Core i7-4960HQ (2.6 GHz) and the i7-4850HQ (2.3 GHz). I've found some synthetic benchmarks comparing them, but I haven't found a lot of data, so I'd appreciate any pointers to good comparisons for the Haswell family (especially these two processors). My cursory analysis suggests there isn't a huge gain from the extra 300 MHz. I'd like to determine not only whether this is generally true, but also whether the performance gains come at too high a cost. Is the 2.6 GHz part going to be pushing the limits of what can fit in a thin laptop without overheating? I've looked at some of Intel's documentation, but have not been able to determine what the normal and maximum operating temperature differences are between the models. In the past, there have been times when Intel's fastest models in a given range ran especially hot and/or consumed significantly more power compared to slightly slower models. Do those concerns factor into the current generation?

  • Processor not running at max speed

    - by Andrew Hampton
    My laptop has an Intel Core 2 Duo T9300, which should be running at 2.5 GHz; however, CPU-Z consistently reports my core speed at just under 1.6 GHz (8x multiplier and ~200 MHz bus speed). Even when I'm doing heavy development work and the processor is running at 100% for extended periods of time, the core speed reported by CPU-Z never goes up to 2.5 GHz. My understanding is that this reduction in speed is to save power, but it happens even when I'm plugged into the outlet. Does anyone know why this is happening or how to fix it?

  • Deploying an EAR to JBOSS times out (org.rhq.core.pc.inventory.TimeoutException:)

    - by rangalo
    Hi, I am trying to deploy an EAR file to JBoss AS (default server). The application is the mavenised version of the examples from the Seam in Action book. When I copy the file to $JBOSS_HOME/server/default/deploy, I don't get any exception, but the application doesn't respond; after some time, trying to access the application from the browser gives the output below in the log. While deploying with the admin-console (http://localhost:8080/admin-console) I get the error message below. PS: After this JBoss gets into an unusable state. I cannot even access the admin-console. I just have to kill it.

    Error message in admin-console:

    Failed to create Resource Open18.ear - cause: org.rhq.core.pc.inventory.TimeoutException: Call to [org.rhq.plugins.jbossas5.ApplicationServerComponent.createResource()] with args [[CreateResourceReport: ResourceType=[ResourceType[id=0, category=Service, name=Enterprise Application (EAR), plugin=JBossAS5]], ResourceKey=[null]]] timed out. Invocation thread will be interrupted
        at org.rhq.core.pc.inventory.ResourceContainer$ResourceComponentInvocationHandler.invokeInNewThreadWithLock(ResourceContainer.java:437)
        at org.rhq.core.pc.inventory.ResourceContainer$ResourceComponentInvocationHandler.invoke(ResourceContainer.java:406)
        at $Proxy266.createResource(Unknown Source)
        at org.rhq.core.pc.inventory.CreateResourceRunner.call(CreateResourceRunner.java:113)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)

    Error logs:

    14:08:58,555 INFO [TableMetadata] foreign keys: [fkaf42e01ba13c3380, fk_course_ref_facility]
    14:08:58,555 INFO [TableMetadata] indexes: [course_pkey]
    14:08:58,645 INFO [TableMetadata] table found: public.facility
    14:08:58,645 INFO [TableMetadata] columns: [zip, phone, state, type, uri, city, country, id, price_range, address, county, description, name]
    14:08:58,645 INFO [TableMetadata] foreign keys: []
    14:08:58,645 INFO [TableMetadata] indexes: [facility_pkey]
    14:08:58,705 INFO [TableMetadata] table found: public.hole
    14:08:58,705 INFO [TableMetadata] columns: [id, m_par, l_handicap, name, l_par, number, course_id, m_handicap]
    14:08:58,705 INFO [TableMetadata] foreign keys: [fk_hole_ref_course, fk30f4c09c3f1200]
    14:08:58,705 INFO [TableMetadata] indexes: [hole_pkey, uniq_hole_number]
    14:08:58,764 INFO [TableMetadata] table found: public.tee
    14:08:58,764 INFO [TableMetadata] columns: [hole_id, distance, tee_set_id]
    14:08:58,764 INFO [TableMetadata] foreign keys: [fk1c014f8de7677, fk_tee_ref_hole, fk1c014c69de560, fk_tee_ref_tee_set]
    14:08:58,764 INFO [TableMetadata] indexes: [tee_pkey]
    14:08:58,826 INFO [TableMetadata] table found: public.tee_set
    14:08:58,826 INFO [TableMetadata] columns: [id, color, m_slope_rating, l_slope_rating, name, course_id, m_course_rating, l_course_rating, pos]
    14:08:58,826 INFO [TableMetadata] foreign keys: [fk_tee_set_ref_course, fkaa6881b79c3f1200]
    14:08:58,826 INFO [TableMetadata] indexes: [tee_set_pkey, uniq_tee_set_pos, uniq_tee_set_color]
    14:08:58,827 INFO [SchemaUpdate] schema update complete
    14:08:58,829 INFO [NamingHelper] JNDI InitialContext properties: {java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory, java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces}
    14:08:58,850 INFO [TomcatDeployment] deploy, ctxPath=/Open18
    14:15:53,969 WARN [DiscoveryComponentProxyFactory] The discovery component for resource type [ResourceType[id=0, category=Service, name=Connector, plugin=JBossAS5]] has been blacklisted
    14:15:53,970 WARN [InventoryManager] Failure during discovery for [Connector] Resources - failed after 300002 ms.
    org.rhq.core.pc.inventory.TimeoutException: Call to [org.rhq.plugins.jbossas5.ConnectorDiscoveryComponent.discoverResources()] with args [[org.rhq.core.pluginapi.inventory.ResourceDiscoveryContext@96db1]] timed out. Invocation thread will be interrupted
        at org.rhq.core.pc.util.DiscoveryComponentProxyFactory$ResourceDiscoveryComponentInvocationHandler.invokeInNewThread(DiscoveryComponentProxyFactory.java:208)
        at org.rhq.core.pc.util.DiscoveryComponentProxyFactory$ResourceDiscoveryComponentInvocationHandler.invoke(DiscoveryComponentProxyFactory.java:181)
        at $Proxy249.discoverResources(Unknown Source)
        at org.rhq.core.pc.inventory.InventoryManager.invokeDiscoveryComponent(InventoryManager.java:272)
        at org.rhq.core.pc.inventory.InventoryManager.executeComponentDiscovery(InventoryManager.java:1697)
        at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.discoverForResource(RuntimeDiscoveryExecutor.java:218)
        at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.discoverForResource(RuntimeDiscoveryExecutor.java:234)
        at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.runtimeDiscover(RuntimeDiscoveryExecutor.java:134)
        at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.call(RuntimeDiscoveryExecutor.java:94)
        at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.call(RuntimeDiscoveryExecutor.java:51)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:207)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
    14:15:53,981 WARN [NavigationContent] Unable to find node for deleted resource [Resource[id=-5, type=Connector, key=ajp://127.0.0.1:8009, name=ajp://127.0.0.1:8009, parent=JBoss Web]].

  • OpenGL extension vs OpenGL core

    - by user209347
    I'm writing a cross-platform OpenGL engine in C++. I figured out that Windows forces developers to access OpenGL features above version 1.1 through extensions. On Linux, I know I can directly access functions if the implementation's version supports them, through glext.h and the reported OpenGL version. The problem: if the core on Linux doesn't support a feature, is it possible there is an extension that supports the same functionality, in my case vertex buffer objects? I'm doing something like this: on Windows, #define glFunction functionpointer_to_the_extension; on Linux, since glext.h already declares glFunction, I can write glFunction in client code and compile it on both Windows AND Linux without changing a single line of the client code using the engine (my goal). Now the thing is, I saw a tutorial use only the extension on Linux, without checking the OpenGL implementation version. If functionality is available in the core, is it also available as an extension (e.g. VBOs)? Or is an extension something you never know is available? I want to write an engine that exploits all the possibilities of the hardware, so I need to check (on Linux) for extensions as well as the core version for possible functionality.

  • Picking Core Language For Large Scale Web Platform

    - by ryanzec
    I have worked with PHP and ASP.NET quite a bit and have also played around with a few other languages for web development. I am now at the point where I need to start building a backend platform that will have the ability to support a large set of applications, and I am trying to figure out which language I want to choose as my core language. By core language I mean the language that the majority of the backend code is going to be in. This is not to say that other languages won't be used – my guess is that they will – but I want a large majority of the code (90%-98%) to be in one language. While I see the benefit of using the language that is best for the job, having 15% in PHP, 15% in ASP.NET, 5% in Perl, 10% in Python, 15% in Ruby, etc. seems like a very bad idea to me (not to mention that integrating everything seamlessly would probably add a bit of overhead). If you were going to build a large-scale web platform that needs to support multiple applications from scratch, what would you choose as your core language, and why?

  • How to safely remove a USB device from 2008 Core server?

    - by Qwerty
    I have a Hyper-V Server 2008 Core machine that I administer via command line and remote tools. We now have a new backup system in place, and it involves me connecting an external USB drive (G:) to back up system state files. My question is: how should I safely remove the drive for its weekly offsite swap? I've tried using the devcon tool; however, it just says 'removal failed with no devices removed', with no other explanation. I have noticed that there isn't a readily available x64 version of devcon, and that might be the cause of the problem. (I have read of people downloading an amd64 version, but I have not located it myself; if someone knows where it is, please let me know.) The devcon command worked on my old 2003 x86 server with the command: devcon remove *3200AVJ_EXTERNAL*. I have also looked at using fsutil volume dismount g:, but it doesn't seem to work, as G: is still listed as a connected volume. I have checked that the volume is not in use via remote tools and the net file command; both show no open files in the G:\ volume. This could be a decent substitute, as it might be used to flush any remaining IO to the volume – can anyone clarify? Thanks in advance.
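
    For what it's worth, one more avenue to try – a sketch of an approach, not a verified fix for this particular enclosure – is the built-in mountvol command, which can dismount the volume and take it offline in one step:

        mountvol G: /p

    The /p switch removes the mount point, dismounts the volume and marks it not mountable, which is closer to a "safe removal" than fsutil volume dismount alone; the drive letter then has to be reassigned (e.g. with mountvol or DISKPART) when the drive comes back.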

  • How do I calculate clock speed in multi-core processors?

    - by NReilingh
    Is it correct to say, for example, that a processor with four cores each running at 3 GHz is in fact a processor running at 12 GHz? I once got into a "Mac vs. PC" argument (which, by the way, is NOT the focus of this topic... that was back in middle school) with an acquaintance who insisted that Macs were only being advertised as 1 GHz machines because they were dual-processor G4s each running at 500 MHz. At the time I knew this to be hogwash for reasons I think are apparent to most people, but I just saw a comment on this website to the effect of "6 cores x 0.2GHz = 1.2Ghz" and that got me thinking again about whether there's a real answer to this. So, this is a more-or-less philosophical/deep technical question about the semantics of clock speed calculation. I see two possibilities: (1) Each core is in fact doing x calculations per second, thus the total number of calculations is x(cores). (2) Clock speed is rather a count of the number of cycles the processor goes through in the space of a second, so as long as all cores are running at the same speed, the speed of each clock cycle stays the same no matter how many cores exist. In other words, Hz = (core1Hz + core2Hz + ...) / cores.
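
    To make the two interpretations concrete with the four-core example: under interpretation (1), aggregate throughput is 4 × 3×10^9 = 1.2×10^10 cycles per second of total capacity, yet no single instruction stream ever runs faster than 3 GHz; under interpretation (2), each clock tick still takes 1/(3×10^9) s ≈ 0.33 ns, so the clock speed is 3 GHz regardless of how many cores share it.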

  • Defining Discovery: Core Concepts

    - by Joe Lamantia
    Discovery tools have had a referencable working definition since at least 2001, when Ben Shneiderman published 'Inventing Discovery Tools: Combining Information Visualization with Data Mining'. Dr. Shneiderman suggested that the combination of the two distinct fields of data mining and information visualization could manifest as a new category of tools for discovery, an understanding that remains essentially unaltered over ten years later. An industry analyst report titled Visual Discovery Tools: Market Segmentation and Product Positioning from March of this year, for example, reads, "Visual discovery tools are designed for visual data exploration, analysis and lightweight data mining." Tools should follow from the activities people undertake (a foundational tenet of activity-centered design), however, and Dr. Shneiderman does not in fact describe or define discovery activity or capability. As I read it, discovery is assumed to be the implied sum of the separate fields of visualization and data mining as they were then understood. As a working definition that catalyzes a field of product prototyping, it's adequate in the short term. In the long term, it makes the boundaries of discovery both derived and temporary, and leaves a substantial gap in the landscape of core concepts around discovery, making consensus on the nature of most aspects of discovery difficult or impossible to reach. I think this definitional gap is a major reason that discovery is still an ambiguous product landscape. To help close that gap, I'm suggesting definitions of four core aspects of discovery. These come out of our sustained research into discovery needs and practices, and have the goal of clarifying the relationship between discovery and other analytical categories. They are suggestions, but should be internally coherent and consistent. Discovery activity is: "Purposeful sense-making activity that intends to arrive at new insights and understanding through exploration and analysis (and for these we have specific definitions as well) of all types and sources of data." Discovery capability is: "The ability of people and organizations to purposefully realize valuable insights that address the full spectrum of business questions and problems by engaging effectively with all types and sources of data." Discovery tools: "Enhance individual and organizational ability to realize novel insights by augmenting and accelerating human sense making to allow engagement with all types of data at all useful scales." Discovery environments: "Enable organizations to undertake effective discovery efforts for all business purposes and perspectives, in an empirical and cooperative fashion." Note: applicability to a world of Big Data is assumed – thus the references to all scales / types / sources – rather than stated explicitly. I like that Big Data doesn't have to be written into this core set of definitions, b/c I think it's a transitional label – the new version of Web 2.0 – and goes away over time. References and Resources: Inventing Discovery Tools; Visual Discovery Tools: Market Segmentation and Product Positioning; Logic versus usage: the case for activity-centered design; A Taxonomy of Enterprise Search and Discovery.

  • Understanding The Very Nature Of Linux - Becoming Core Programmer

    - by MrWho
    Well, I want to know how exactly I should start and get on the right path to becoming a core programmer, and also gain a decent understanding of Linux infrastructure and fundamentals. I know my question may seem general, but that's not because of an inability to ask a question; I'm just confused. I've programmed in a few languages and have got my hands dirty with code, so I'm aware of the big picture of what programmers actually do. Now I want to go deeper and start my studies at a different level than I used to: I want to become an advanced core programmer and learn where it all really starts. I'd like to know, bit by bit, what today's operating systems like Linux have been built on. I DO really need good references; books would be preferred for learning the fundamentals. If someone could tell me the general path of what I'm supposed to do, it would be really appreciated.

  • When is a 'core' library a bad idea?

    - by Alex Angas
    When developing software, I often have a centralised 'core' library containing handy code that can be shared and referenced by different projects. Examples: a set of functions to manipulate strings; commonly used regular expressions; common deployment code. However, some of my colleagues seem to be turning away from this approach. They have concerns such as the maintenance overhead of retesting code used by many projects once a bug is fixed. Now I'm reconsidering when I should be doing this. What are the issues that make using a 'core' library a bad idea?
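
    For concreteness, here is a minimal C# sketch of the kind of helpers such a 'core' library typically collects (the names are hypothetical); a bug fix in something like the shared e-mail regex is exactly the change that forces the cross-project retesting the colleagues worry about:

        using System;
        using System.Text.RegularExpressions;

        // Hypothetical shared 'core' library helpers.
        public static class CoreStrings
        {
            // Commonly used regular expression: a simple (not RFC-complete) e-mail check.
            public static readonly Regex Email =
                new Regex(@"^[^@\s]+@[^@\s]+\.[^@\s]+$", RegexOptions.Compiled);

            // String helper shared across projects.
            public static string Truncate(string s, int maxLength)
            {
                if (string.IsNullOrEmpty(s) || s.Length <= maxLength) return s;
                return s.Substring(0, maxLength);
            }
        }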
