Search Results

Search found 37348 results on 1494 pages for 'low end hardware'.


  • Why do operating systems do low level stuff in C and C++? Why not just C++?

    - by Cole Johnson
    The Wikipedia page for Windows states that Windows is written in assembly for the bootloader and task switcher, and in C and C++ for kernel routines. This confuses me because, AFAIK, you can call C++ functions from C through an extern "C" block, since C++ is essentially C with extra features (all of which, AFAIK, could be rewritten in C if you wanted to). I can understand using C for the kernel functions so that pure C apps can use them (like printf and such), but if C++ functions can just be wrapped in an extern "C" block, then why code in C at all? So my question is: why would a kernel be written in both C and C++ instead of just C++?
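
    The extern "C" mechanism mentioned here works roughly as follows: a routine implemented with C++ features is given C linkage (no name mangling), so C code can call it through a plain prototype. A minimal, hypothetical sketch (the names are invented, not taken from any real kernel):

    ```cpp
    // kernel_sched.cpp -- hypothetical illustration of exposing C++ to C callers.
    #include <cstdint>
    #include <vector>

    namespace sched {
        // Internal helper using C++ features that C lacks (classes, templates, RAII).
        struct Task { std::uint32_t id; int priority; };

        int pick_next(const std::vector<Task>& queue) {
            int best = -1, best_prio = -1;
            for (const auto& t : queue) {
                if (t.priority > best_prio) {
                    best_prio = t.priority;
                    best = static_cast<int>(t.id);
                }
            }
            return best;
        }
    }

    // The extern "C" wrapper gives the function C linkage, so C code (or the
    // assembly task switcher) can call it through the plain prototype
    //   int sched_pick_next(void);
    extern "C" int sched_pick_next(void) {
        static std::vector<sched::Task> run_queue = {{1, 3}, {2, 7}, {3, 5}};
        return sched::pick_next(run_queue);
    }
    ```

    A C caller only ever sees the plain prototype; the C++ machinery stays behind the wrapper, which is exactly why "why not just C++?" is a fair question.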

    Read the article

  • Ubuntu fails to detect monitor and only offers very low resolutions?

    - by Hiren
    I tried different versions of Ubuntu from 11.04 to the 11.10 beta, but got the same problem. My desktop PC configuration is: Intel Core i5-2400, DH67BL motherboard, onboard motherboard graphics (no extra graphics card attached), Acer H193HQV 18.5" monitor, 2 GB RAM, 250 GB hard disk. Problem: Ubuntu can't detect my monitor and reports it as Unknown. Moreover, the monitor's native resolution is 1366x768, but the only resolutions offered in the list are 1024x768 and 800x600.

    Read the article

  • Low hanging fruit where "a sufficiently smart compiler" is needed to get us back to Moore's Law?

    - by jamie
    Paul Graham argues that: It would be great if a startup could give us something of the old Moore's Law back, by writing software that could make a large number of CPUs look to the developer like one very fast CPU. ... The most ambitious is to try to do it automatically: to write a compiler that will parallelize our code for us. There's a name for this compiler, the sufficiently smart compiler, and it is a byword for impossibility. But is it really impossible? Can someone provide a concrete example where a parallelizing compiler would solve a pain point? Web apps don't appear to be a problem: just run a bunch of Node processes. Real-time raytracing isn't a problem: the programmers are writing multi-threaded, SIMD assembly language quite happily (indeed, some might complain if we made it easier!). The holy grail is to be able to accelerate any program, be it MySQL, Garage Band, or Quicken. I'm looking for a middle ground: is there a real-world problem that you have experienced where a "smart enough" compiler would have provided a real benefit, i.e. one that someone would pay for?
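
    To make the ask concrete, here is a hedged sketch (illustrative only, not a claim about any existing compiler): the first function is the kind of loop one would like a compiler to parallelize automatically; the second is roughly what a programmer has to write by hand today.

    ```cpp
    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // The loop a "sufficiently smart compiler" would ideally parallelize for us:
    // every iteration is independent, so it is safe to split across CPUs.
    void scale_serial(std::vector<double>& v, double k) {
        for (std::size_t i = 0; i < v.size(); ++i) v[i] *= k;
    }

    // What the programmer currently writes by hand: explicit chunking over threads.
    void scale_parallel(std::vector<double>& v, double k, unsigned nthreads = 4) {
        std::vector<std::thread> workers;
        const std::size_t chunk = (v.size() + nthreads - 1) / nthreads;
        for (unsigned t = 0; t < nthreads; ++t) {
            const std::size_t begin = t * chunk;
            const std::size_t end = std::min(v.size(), begin + chunk);
            workers.emplace_back([&v, k, begin, end] {
                for (std::size_t i = begin; i < end; ++i) v[i] *= k;
            });
        }
        for (auto& w : workers) w.join();
    }
    ```

    The hard part, of course, is that real code rarely looks this clean: proving the iterations independent is where "sufficiently smart" comes in.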

    Read the article

  • Is there a global "low resolution" filter for OpenGL?

    - by Ian Henry
    I'm trying to learn a little about OpenGL, so I'm making a simple 2D game (with OpenTK), and so far it's coming along well. I thought it would be fun to give it that, for lack of a better word, retropixelated look of games from the early nineties. I figured it would be an easy thing to do -- simply draw everything at half its normal size and scale up with no anti-aliasing. But I can't find any resources on how to do this. I can set the min/mag filters of my textures to nearest and that works fine for my sprites, but I'm using lots of primitives and I'd like the effect to apply to them as well. The one idea I had was to draw everything at half size, then somehow copy the render buffer to a texture, then render that texture full-size, but I don't know how to do that, and it seems like there must be a better way. Can anyone help me out?
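
    The usual approach is the one guessed at above: render the whole scene (sprites and primitives alike) into a half-size off-screen framebuffer, then scale it up to the window with nearest-neighbour filtering. A rough sketch in plain C-style OpenGL 3.x calls (not OpenTK; the names and the half-size factor are just for illustration):

    ```cpp
    // Assumes an OpenGL 3.x context and a loaded function-pointer set (e.g. GLEW).
    #include <GL/glew.h>

    GLuint fbo = 0, lowResTex = 0;

    void createLowResTarget(int winW, int winH) {
        glGenTextures(1, &lowResTex);
        glBindTexture(GL_TEXTURE_2D, lowResTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, winW / 2, winH / 2, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, lowResTex, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

    void renderFrame(int winW, int winH) {
        // 1. Draw the scene at half resolution into the off-screen target.
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glViewport(0, 0, winW / 2, winH / 2);
        // ... normal draw calls for sprites and primitives go here ...

        // 2. Stretch it to the window; GL_NEAREST keeps the chunky pixels.
        glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
        glBlitFramebuffer(0, 0, winW / 2, winH / 2, 0, 0, winW, winH,
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
    }
    ```

    OpenTK exposes the same entry points (GL.GenFramebuffers, GL.BlitFramebuffer, and so on), so the structure should carry over directly.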

    Read the article

  • Why is my page ranked *very* low in Google? [closed]

    - by duality_
    I have created a web site mianatura.net that doesn't even rank in the top 100 results in Google for the query "Mia Natura". I have the text Mia Natura in the domain, <title>, <h1>, I have the site in Google Webmaster Tools, the site is crawled (finding 172 results for site:mianatura.net). I have checked my standing manually (going through the SERPs), using What Page of Search Am I on and diyseo. The site is not involved in any dubious link building campaigns (as far as I know). So what's going on?

    Read the article

  • Should I put an app I wrote on my résumé even if it has low ratings?

    - by charliehorse55
    Last summer I wrote an iPhone app for the Toronto Film Festival. The development was pretty rushed, and the design goals were changed multiple times. In particular, the central film list view controller was redesigned three times in the week before launch. I forgot to update one of my functions to match the changed design, and the app shipped with a serious bug. While the app was fairly popular, this bug crippled the app and it got a lot of poor reviews. I fixed the bug as soon as I got a crash report, but it got stuck in the iTunes review process for the duration of the film festival. Should I put this app on my résumé? The app has poor ratings and most of the reviews mention crashes, but it's also the only work experience I have. Additionally, how should I approach this topic in an interview? Here is the iTunes link for the app: https://itunes.apple.com/ca/app/official-tiff/id550151899?mt=8

    Read the article

  • Why isn't Japanese software industry as strong as their hardware technology?

    - by Joan Venge
    I admire Japanese technology and their innovation. They always seem to be one step ahead of everyone else. But why isn't their software industry just as developed? Why aren't there any Japanese operating systems, high-end game engines, or 3D digital content creation applications? I would like to see their take on these, and I think it could bring a lot of innovation. By the way, I mentioned 3D software because the animation industry is strong there as well, but they are using North American software for this.

    Read the article

  • Why are bugs responsible for big deficiencies in functionality given such low priority?

    - by keepitsimpleengineer
    Well, first of all, change is inevitable and mostly good. Furthermore, attempts at simplifying the user interface, such as Gnome 3 and Unity, to make Linux more inclusive hold much promise, even though they adversely affect my style of working. Additionally, though now retired, I have worked with computers for 47 years, and though I do nothing serious for others now, I still do heavy-duty things. 10.04 LTS is my big workstation, and I had three 10.10 systems for MythTV, one of which is further adapted for video and related work. The MythTV systems were 10.10 because of a dormant bug regarding installing to 10.04. My work habits consistently use dual monitors and the Compiz cube and 3D windows, with the computing horsepower to support them. Dual monitors with separate X screens have not been functional since 11.04, and the cube/3D windows are not functional in Unity and only work with diminished functionality in Gnome. There is a bug filed (after upgrading to 12.04 amd64, Gnome Classic does not properly draw the second screen). I have mitigated the situation somewhat by switching to Xubuntu and eschewing Unity. The question that comes to mind is why this bug is not given more attention, given that it nearly cuts functionality in half for more capable workstations. Sample workspace... Please know that I appreciate all the hard work and dedication required to pull off something as big as Ubuntu et al.

    Read the article

  • How to install Edubuntu on a system with low memory (256 MB)?

    - by int_ua
    I'm preparing an old system with 256 MB of RAM to send to some children. It doesn't have an Ethernet controller and there is no Internet access at the destination. I've chosen Edubuntu for obvious reasons and modified it with UCK, trying to minimize memory usage just to get it installed, let alone use it yet. But Ubiquity won't start even in Openbox (I edited /etc/lightdm/lightdm.conf) because there is no space left on /cow right after booting. I've already deleted things like ibus, zeitgeist, update-manager (no network access after all), twisted-core, and the plymouth logos. I'm thinking about creating a swap partition on the HDD; can it later be added to expand this /cow? Is there a package for the text-mode installation that is used on the Alternate CDs? I don't want to re-create Edubuntu from an Alternate CD. This behavior is reproducible in a VM limited to 256 MB of RAM.

    Read the article

  • Should I manage authentication on my own if the alternative is very low in usability and I am already managing roles?

    - by rumtscho
    As a small in-house dev department, we only have experience with developing applications for our intranet. We use the existing Active Directory for user account management. It contains the accounts of all company employees and many (but not all) of the business partners we have a cooperation with. Now, the top management wants a technology exchange application, and I am the lead dev on the new project. Basically, it is a database containing our know-how, with a web frontend. Our employees, our cooperating business partners, and people who wish to become our cooperating business partners should have access to it and see what technologies we have, so they can trade for them with the department which owns them. The technologies are not patented, but very valuable to competitors, so the department bosses are paranoid about somebody unauthorized gaining access to their technology descriptions. This constraint necessitates a nightmarishly complicated multi-dimensional RBAC-hybrid model. As the Active Directory doesn't even contain all the information needed to infer the roles I use, I will have to manage roles plus per-technology, per-user granted access exceptions within my system. The current plan is to use Active Directory for authentication. This will result in a multi-hour registration process for our business partners, where the database owner has to manually create logins in our Active Directory and send them credentials. If I manage the logins in my own system, we could improve the usability a lot, for example by letting people have an active (but unprivileged) account as soon as they register. It seems to me that, since I will have a users table in the DB anyway (and will be managing ugly details like storing historical user IDs so that recycled user IDs within the Active Directory don't unexpectedly get rights to view someone's technologies), the additional complexity from implementing authentication functionality will be minimal. Therefore, I am starting to lean towards doing my own user login management and forgetting the AD altogether. On the other hand, I see some reasons to stay with Active Directory. First, the conventional wisdom I have heard from experienced programmers is to not do your own user management if you can avoid it. Second, we have code I can reuse for connecting to the Active Directory, while I would have to code the authentication myself if it were done in-system (and my boss has clearly stated that getting the project delivered on time has much higher priority than delivering a system with high usability). Third, I am not a very experienced developer (this is my first lead position) and have never done user management before, so I am afraid that I am overlooking some important reasons to use the AD, or that I am underestimating the amount of work left to do my own authentication. I would like to know if there are more reasons to go with the AD authentication mechanism. Specifically, if I want to do my own authentication, what would I have to implement besides a secure connection for the login screen (which I would need anyway, even if I am only transporting the password to the AD), lookup of a password hash, and a mechanism for password recovery (which will probably include manual identity verification, so no need for complex mTAN-like solutions)? And, if you have experience with such security-critical systems, which one would you use and why?
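
    For a sense of scale on the "lookup of a password hash" part: the credential check itself is small, and the sketch below (illustrative only, here in C++ with OpenSSL, with invented names; not a recommendation for any particular stack) shows salted PBKDF2 hashing plus constant-time verification. The larger cost of in-house authentication is usually the surrounding process work already listed above: registration, recovery, and identity verification.

    ```cpp
    // Illustrative only: salted PBKDF2 password hashing and verification (OpenSSL).
    #include <openssl/crypto.h>
    #include <openssl/evp.h>
    #include <openssl/rand.h>
    #include <string>
    #include <vector>

    struct StoredCredential {
        std::vector<unsigned char> salt;   // random per user, stored with the hash
        std::vector<unsigned char> hash;   // PBKDF2-HMAC-SHA256 output
    };

    StoredCredential hashPassword(const std::string& password) {
        StoredCredential c;
        c.salt.resize(16);
        c.hash.resize(32);
        RAND_bytes(c.salt.data(), static_cast<int>(c.salt.size()));
        PKCS5_PBKDF2_HMAC(password.c_str(), static_cast<int>(password.size()),
                          c.salt.data(), static_cast<int>(c.salt.size()),
                          100000 /* iterations */, EVP_sha256(),
                          static_cast<int>(c.hash.size()), c.hash.data());
        return c;
    }

    bool verifyPassword(const std::string& password, const StoredCredential& c) {
        std::vector<unsigned char> candidate(c.hash.size());
        PKCS5_PBKDF2_HMAC(password.c_str(), static_cast<int>(password.size()),
                          c.salt.data(), static_cast<int>(c.salt.size()),
                          100000, EVP_sha256(),
                          static_cast<int>(candidate.size()), candidate.data());
        // Constant-time comparison avoids leaking information through timing.
        return CRYPTO_memcmp(candidate.data(), c.hash.data(), candidate.size()) == 0;
    }
    ```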

    Read the article

  • Is it a good idea to add robots "noindex" meta tags to deep low content pages, e.g. product model data

    - by Cognize
    I'm considering adding robots "noindex, follow" tags to the very numerous product data pages that are linked from the product style pages in our online store. For example, each product style has a page with full text content on the product: http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE Then many pages with technical data for each model code are linked from the product style page. http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-1 http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-2 http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-3 It is these technical data pages that I intend to add the noindex tag to, as I imagine this might stop them from cannibalizing keyword authority from the more important content-rich pages on the site. Any advice appreciated.

    Read the article

  • What scripts get run on session start and end?

    - by maaartinus
    I want to mount and unmount an encrypted partition, just like ecryptfs does with the home partition. Where can I put my commands so that they get executed upon session start and session end? (The important part is the unmounting when I log out.) UPDATE: For the session start I'm using .gnomerc and it does what I want. For the session end I've found no solution. I had a look at the sources of ecryptfs and it looks quite complicated. It's a pity that nobody thought about creating something analogous to .bash_logout.

    Read the article

  • Low-level 10-finger multi-touch data on the Nexus 7?

    - by Croad Langshan
    I'm considering getting a Nexus 7 to do some multi-touch development on Ubuntu in the run-up to 13.04 (i.e., now :-). What APIs, /dev files, or protocols are available, or could be made available with not too much work on my part? What data is available from the device? The data I want to get my hands on is -- if I can -- the same as I get from /dev/input/event* with an Apple Magic Trackpad, viz: the positions of all touches (could be as many as 10 simultaneous touches, but much more typically 6 or fewer); their size/pressure (in both x and y directions); their angle; and their identity -- i.e. an integer that is somewhat reliably preserved across touch events for as long as a finger doesn't lift off the surface. Not all of this data is essential -- but the more of it there is, the merrier.
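
    For reference, this is the Linux evdev multi-touch data in question; a minimal, hypothetical reader looks like the sketch below (the device path is an assumption - find the right one with a tool such as evtest).

    ```cpp
    // Minimal, hypothetical evdev multi-touch reader (Linux).
    // Which events actually appear depends on the touchscreen driver.
    #include <fcntl.h>
    #include <linux/input.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        const char* dev = "/dev/input/event2";   // assumed path
        int fd = open(dev, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct input_event ev;
        while (read(fd, &ev, sizeof(ev)) == static_cast<ssize_t>(sizeof(ev))) {
            if (ev.type != EV_ABS) continue;
            switch (ev.code) {
                case ABS_MT_SLOT:        std::printf("slot %d\n", ev.value);  break; // which finger
                case ABS_MT_TRACKING_ID: std::printf("  id %d\n", ev.value);  break; // -1 means lift-off
                case ABS_MT_POSITION_X:  std::printf("  x %d\n", ev.value);   break;
                case ABS_MT_POSITION_Y:  std::printf("  y %d\n", ev.value);   break;
                case ABS_MT_TOUCH_MAJOR: std::printf("  size %d\n", ev.value); break;
                case ABS_MT_ORIENTATION: std::printf("  angle %d\n", ev.value); break;
            }
        }
        close(fd);
        return 0;
    }
    ```

    Whether the Nexus 7's driver reports all of these axes (orientation and pressure in particular) is exactly the open question.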

    Read the article

  • What technology (or technologies) would be suitable for the front end of a Java web game?

    - by James.Elsey
    As asked in a previous question, I'm looking to create a small MMO that will be deployed onto GAE. I'm confused about what technologies I could use for the user interface. I've considered the following: JSP pages - I've got experience with JSP/JSTL and would find this easy to work with, but it would require the user to "submit" the page each time they perform an action, so it may become a little clumsy for players. Applet - I could create an applet that sits on the front end and communicates with the back-end game engine; however, I'm not sure how good this method would be, and I have not used applets since university. What other options do I have? I don't have any experience in Flash/Flex, so there would be a big learning curve there. Are there any other Java-based options I may be able to use? My game will be text-based; I may use some images, but I'm not intending to have any animations/graphics etc. Thanks

    Read the article

  • implementing a high level function in a script to call a low level function in the game engine

    - by eat_a_lemon
    In my 2D game engine I have a function that does pathfinding, find_shortest_path. It executes on each time step in the game loop and calculates the next coordinate pair in the series of coordinates to reach the destination object. Now I want to call this function from a scripting language and have it return only the final coordinate pair. I want the game engine to go about the business of rendering the incremental steps, but I don't want the high-level script to care about the rendering. The high-level script is only for AI game logic. Now, I know how to bind a method from C to Python, but how can I signal and coordinate the wait between the incremental steps so that the high-level function doesn't return until it's time for the last step?
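
    One way to picture the coordination (a hypothetical sketch, not tied to any particular engine or binding library): the game loop advances the path one node per time step and signals a condition variable when the destination is reached, while the call exposed to the script blocks on that signal and returns only the final coordinate pair. This assumes the script runs on its own thread; if the script and the game loop share a thread, the same idea is usually expressed as a coroutine/yield instead.

    ```cpp
    // Hypothetical engine-side coordination for a blocking script call.
    #include <condition_variable>
    #include <mutex>
    #include <utility>

    struct PathRequest {
        std::pair<int, int> current{0, 0};
        bool finished = false;
        std::mutex m;
        std::condition_variable done;
    };

    // Called by the game loop each time step: record the next node (and render it).
    void stepPath(PathRequest& req, std::pair<int, int> next, bool isLast) {
        {
            std::lock_guard<std::mutex> lock(req.m);
            req.current = next;
            req.finished = isLast;
        }
        if (isLast) req.done.notify_all();
    }

    // Exposed to the scripting layer (e.g. through a Python binding): blocks until
    // the incremental stepping is complete, then returns only the final pair.
    std::pair<int, int> findShortestPathBlocking(PathRequest& req) {
        std::unique_lock<std::mutex> lock(req.m);
        req.done.wait(lock, [&req] { return req.finished; });
        return req.current;
    }
    ```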

    Read the article

  • Using the low-level datastore API in Google App Engine? Is it bad?

    - by Chez
    There is little documentation on how to use the low-level datastore API, and quite a lot on JPA and JDO and how they translate to it. My question is: is there any advantage in coding against the JPA or JDO specs instead of accessing the low-level datastore API directly? From an initial look it seems simple and straightforward, but I am not sure whether there are good reasons not to do it. Thanks, Cx

    Read the article

  • If I can take a screen capture of a graphical anomaly, can it still be a hardware issue?

    - by Jay Carr
    I have a strange graphical anomaly happening on my iMac right now (green and magenta boxes are appearing sporadically). I'm slowly trying to work through different possibilities, but I thought I'd start with the basics: can a graphical anomaly that I can screen-capture still be a hardware issue? I know, it seems really obvious: if it's hardware, it should show up well after the operating system has had its say. And since the operating system is (I assume) doing the screen capture, it seems like it shouldn't see the anomaly unless the problem is software in nature. But as I've researched this problem, I see a lot of people taking their computers to service people for hardware issues and Apple then resolving said issues. To further complicate things, I also have Windows 8 installed via Boot Camp, and the issue seems to be showing up there as well. Anyway, it feels like it must be a driver issue, since I assume that's what the two OSes have in common, but... I thought I'd come here for some disambiguation. In my case, yes, I can screen-capture the anomaly (at least in OS X I can), so I assume it's somehow a software (or driver) issue. But I wanted to double-check, because the internet is being ambiguous...

    Read the article

  • How do I protect a low budget network from rogue DHCP servers?

    - by Kenned
    I am helping a friend manage a shared internet connection in an apartment building with 80 apartments - 8 stairways with 10 apartments in each. The network is laid out with the internet router at one end of the building, connected to a cheap unmanaged 16-port switch in the first stairway, where the first 10 apartments are also connected. One port is connected to another cheap 16-port switch in the next stairway, where those 10 apartments are connected, and so forth. Sort of a daisy chain of switches, with 10 apartments as spokes on each "daisy". The building is a U-shape, approximately 50 x 50 meters and 20 meters high - so from the router to the farthest apartment it’s probably around 200 meters including up-and-down stairways. We have a fair number of problems with people hooking up Wi-Fi routers the wrong way, creating rogue DHCP servers which disrupt large groups of users, and we wish to solve this problem by making the network smarter (instead of doing a physical unplugging binary search). With my limited networking skills, I see two ways - DHCP snooping, or splitting the entire network into separate VLANs for each apartment. Separate VLANs give each apartment their own private connection to the router, while DHCP snooping would still allow LAN gaming and file sharing. Will DHCP snooping work with this kind of network topology, or does it rely on the network being in a proper hub-and-spoke configuration? I am not sure if there are different levels of DHCP snooping - say, expensive Cisco switches will do anything, but inexpensive ones like TP-Link, D-Link or Netgear will only do it in certain topologies? And will basic VLAN support be good enough for this topology? I guess even cheap managed switches can tag traffic from each port with its own VLAN tag, but when the next switch in the daisy chain receives the packet on its “downlink” port, wouldn’t it strip or replace the VLAN tag with its own trunk tag (or whatever the name is for the backbone traffic)? Money is tight, and I don’t think we can afford professional-grade Cisco (I have been campaigning for this for years), so I’d love some advice on which solution has the best support on low-end network equipment, and whether there are some specific models that are recommended - for instance low-end HP switches, or even budget brands like TP-Link, D-Link etc. If I have overlooked another way to solve this problem, it is due to my lack of knowledge. :)

    Read the article
