Search Results

Search found 10438 results on 418 pages for 'power architecture'.

  • ANSI C as core of a C# project? Is this possible?

    - by Nektarios
    I'm writing a non-GUI app which I want to be cross-platform between OS X and Windows. I'm looking at the following architecture, but I don't know if it will work on the Windows side:

    (Platform-specific entry point)
      - ANSI C main loop
        = ANSI C model code doing data processing / logic
        = (Platform-specific helpers)

    So the core stuff I'm planning to write in regular ANSI C, because A) it should be platform independent, B) I'm extremely comfortable with C, and C) it can do the job and do it well.

    The (platform-specific entry point) can be written in whatever is necessary to get the job done; this is a small amount of code, it doesn't matter to me.

    The (platform-specific helpers) are the sticky thing. This is stuff like parsing XML, accessing databases, graphics toolkit stuff, whatever: things that aren't easy in C, and that modern languages/frameworks give you for free. On OS X this code will be written in Objective-C interfacing with Cocoa. On Windows I'm thinking my best bet is to use C#. So on Windows my architecture (simplified) looks like (C# or C?) - ANSI C - C#. Is this possible?

    Some thoughts/suggestions so far:

    1) Compile my C core as a .dll. This is fine, but it seems there's no way to call my C# helpers unless I can somehow get function pointers and pass them to my core, and that seems unlikely.

    2) Compile a C .exe and a C# .exe and have them talk via shared memory or some kind of IPC. I'm not entirely opposed to this, but it obviously introduces a lot of complexity, so it doesn't seem ideal.

    3) Instead of C#, use C++. It gets me some nice data management stuff and nice helper code, I can mix it in pretty easily, and the work I do could probably port easily to Linux. But I really don't like C++, and I don't want this to turn into a third-party-library-fest. Not that it's a huge deal, but it's 2010... anything for basic data management should be built in. And targeting Linux is really not a priority.

    Note that no "total" alternatives are OK, as suggested in other similar questions on SO I've seen (Java, REALbasic, Mono...); this is an extremely performance-intensive application doing soft realtime for game/simulation purposes, and I need C & friends here to do it right (maybe you don't, but I do).
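
    For what it's worth, option 1 is less of a dead end than it sounds: C# can hand function pointers to native code by marshalling delegates (Marshal.GetFunctionPointerForDelegate), and Objective-C can pass plain C function pointers directly. A minimal sketch of what the C core's helper interface could look like; the names here are hypothetical, not from the question:

        /* core.h: helper table the platform layer fills in. The C core never
         * knows whether these callbacks are backed by Objective-C/Cocoa or by
         * marshalled C# delegates. (Hypothetical names, for illustration.) */
        typedef struct {
            int  (*parse_xml)(const char *path, void *out_doc); /* 0 on success */
            void (*log_message)(const char *msg);
        } platform_helpers;

        /* Called by the platform-specific entry point once the table is set. */
        int core_run(const platform_helpers *helpers)
        {
            helpers->log_message("core starting");
            /* ... soft-realtime main loop: data processing / logic ... */
            return 0;
        }

    On the C# side, delegates with matching signatures would be converted with Marshal.GetFunctionPointerForDelegate and passed in through P/Invoke; the one well-known catch is keeping those delegates referenced so the garbage collector doesn't collect them while native code still holds the pointers.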

  • Consolidate information held in a number of SQL Server Express Instances

    - by user321271
    Hi, I'm trying to determine the best architecture for creating an OData web service over information held in a number of SQL Server Express instances. The web service should provide a consolidated view of the data, and all the SQL Server Express instances have the same DB schema. I was initially planning to use SQL Server replication; however, as I understand it, SQL Server 2008 Express cannot act as a publisher. Any help or suggestions would be appreciated.

  • Porting 32 bit C++ code to 64 bit - is it worth it? Why?

    - by NTDLS
    I am aware of some of the obvious gains of the x64 architecture (a larger addressable RAM space, etc.), but: What if my program has no real need to run in native 64-bit mode, should I port it anyway? Are there any foreseeable deadlines for the end of 32-bit support? Would my application run faster / better / more securely as native x64 code?

  • CPU and Motherboard clock speeds

    - by NZHammer
    I have been doing some reading about CPU clock speeds and how they are calculated. After reading several articles, I have come to the understanding that your CPU clock speed is determined by:

    CPU clock speed = CPU multiplier x mobo clock speed

    A few questions came up after reading this which I cannot seem to find the answer to anywhere:

    1. If the CPU clock speed is dependent upon the mobo clock speed, how is the clock speed of the CPU predetermined upon buying the CPU (i.e. written on the box without knowing what mobo is being used)? After installation, does the CPU adjust its multiplier based upon the mobo clock speed to achieve the advertised speed? For example, if the CPU clock speed is advertised at 2.4GHz and the mobo clock speed is 100MHz, will the multiplier be automatically set to 24x?

    2. Why does mobo clock speed seem to not be very important or much talked about? For example, when I search on Newegg, mobo clock speed never seems to be listed, and it is rarely mentioned on enthusiast and overclocking forums. To me, it seems like the mobo clock speed would be pretty important: if I am understanding things correctly, a lower mobo clock speed means that your CPU must work harder (run a higher multiplier) to achieve its advertised clock speed.

    I should stop there with the questions for now, as I may be asking them based on incorrect assumptions. Thanks!
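
    Since the relationship above really is just a product, the multiplier implied by an advertised speed follows by division. A toy calculation with the numbers from the question (not a real hardware query):

        #include <stdio.h>

        int main(void)
        {
            double base_mhz       = 100.0;   /* mobo base (bus) clock, in MHz */
            double advertised_mhz = 2400.0;  /* speed printed on the box      */

            /* multiplier the CPU must run to hit the advertised speed */
            double multiplier = advertised_mhz / base_mhz;   /* 24x */

            printf("multiplier = %.0fx -> %.1f GHz\n",
                   multiplier, base_mhz * multiplier / 1000.0);
            return 0;
        }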

  • 32bit vs 64bit guest VM and RAM usage

    - by sims
    Why does a 32-bit domU (Xen guest VM) use less RAM than a 64-bit one? Notes: it's the same software compiled for a different arch (AMD64 vs. i686), so obviously this is Linux or BSD or something easily ported. Maybe this is also a good one for SO. I've read that this is so, and I can guess why, but I'd like to hear everyone's comments.
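
    The usual explanation is pointer width: under LP64 (64-bit Linux/BSD), pointers and longs double from 4 to 8 bytes, so pointer-heavy data structures, and the kernel's own bookkeeping, grow accordingly. Compiling something like this sketch for each arch makes the difference visible:

        #include <stdio.h>

        struct node {              /* a typical pointer-heavy node */
            struct node *next;
            long         value;
        };

        int main(void)
        {
            /* i686   (ILP32): pointer = 4, long = 4, node = 8 bytes
             * x86_64 (LP64):  pointer = 8, long = 8, node = 16 bytes */
            printf("pointer: %zu\n", sizeof(void *));
            printf("long:    %zu\n", sizeof(long));
            printf("node:    %zu\n", sizeof(struct node));
            return 0;
        }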

  • Sending email from various domains

    - by IMHO
    We are building a hosted software service that is used by multiple customers, and these customers want to communicate with their own customers (end customers). Today we send that email from our domain, example.com; however, we would like the email to come from each customer's specific domain. When we put the customer addresses in Reply-To, clients like Outlook show the message as sent "on behalf of" us. What are the ways to send email from their domains without installing software on their networks?

  • How to achieve the following RTO & RPO with logshipping only using SQL Server?

    - by Jimmy Chandra
    Trying to come up with a viable backup/restore and log shipping solution for achieving the following:

    - 15-minute Recovery Point Objective (no more than 15 minutes of data loss at any time)
    - 5-minute Recovery Time Objective (the db must be back up and running within 5 minutes)

    I am considering using log shipping only (which I think is kind of pushing it, but I want to know if anyone else knows how to achieve this). Some other info for consideration:

    - There is a 40 Mbit/sec fiber link between the primary and disaster recovery (DR) sites.
    - The sites are about 600 km apart.
    - At close of business (COB), the amount of data generated is predicted to be about 150 MB/sec.
    - Log backup is planned for every 5 min.

    Doing some rough calculation, I came up with the following numbers: 40 Mbit/sec = 5 MB/sec at 100% network efficiency, and 5 MB/sec = 300 MB/min. At 300 MB/min, the total amount of data that can be transferred within the 5-minute RTO is about 1.5 GB, but that leaves no time for the actual backup and restore. If we cut it down to a 3-minute log shipping window, which equals ~900 MB over 3 minutes at 100% network efficiency, that leaves about 1 minute of backup time and 1 minute of restore time. I currently don't have any information on whether the system being used is capable of restoring 900 MB in 1 minute, but let's assume it can.

    For the COB scenario: 150 MB/sec over the 3-minute log shipping window equals about 27 GB of data, and I think this is where the SLA will break, since there is no way to transfer 27 GB of data over a 40 Mbit/sec line in 3 minutes. Can I get someone else's opinion? I am thinking database mirroring might be a better answer for this.
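
    The arithmetic above is easy to sanity-check mechanically. A throwaway calculation assuming the 40 Mbit/sec link, the 3-minute shipping window, and 100% efficiency, as in the question:

        #include <stdio.h>

        int main(void)
        {
            double link_mb_s = 40.0 / 8.0;            /* 40 Mbit/s = 5 MB/s    */
            double window_s  = 3 * 60;                /* 3-min shipping window */
            double budget_mb = link_mb_s * window_s;  /* ~900 MB transferable  */
            double cob_mb    = 150.0 * window_s;      /* COB load: 27,000 MB   */

            printf("budget %.0f MB vs COB load %.0f MB: %s\n",
                   budget_mb, cob_mb,
                   cob_mb <= budget_mb ? "fits" : "SLA breaks");
            return 0;
        }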

  • Hyperthreading vs. SQL Server & PostgreSQL

    - by IanC
    I have read that hyperthreading is a "performance killer" when it comes to DBs, but what I read didn't state which CPUs. Further, it mostly indicated that I/O was "cut to < 10% performance". That logically doesn't make sense, since I/O is primarily a function of controllers and disks, not CPUs; but then no one ever said bugs made sense. What I read also stated that SQL Server could put two parallel query ops onto one physical core (its two hyperthreads), thereby degrading performance. I have a hard time believing SQL Server's architects would have made such an obvious miscalculation. Does anyone have any data on how hyperthreading on current-generation CPUs affects either of the RDBMSs I mentioned?

  • How to scale out image hosting/serving?

    - by Continuation
    I asked this question on stackoverflow and it was suggested that I try it here: I'm building a website where users can upload photos and I'd also convert uploaded photos into thumbnails. Planning ahead, if the website gets popular, how do I scale it out so that the images (both original and thumbnails) will be stored in and served from multiple servers? Maybe a cluster? Is there any open source software that would help me in this? Thanks.

  • Web application and remote storage of files

    - by Matt
    Hi, I have a web application that can store lots and lots of files on the server, i.e. users upload data to it. The files are stored below a particular storage path. The web host will be an IBM xSeries 345; however, its disks are really expensive, so we would like to put the files onto a less expensive server. Now here is the question: should I NFS-mount a path from the storage server on the IBM server, or should I write some scripts to upload the files to the storage server instead? Both the storage server and the web host are on the same network, and only the web server is visible to the world. Is NFS performance suitable for an expected low to moderately loaded server?

  • Graphical MySQL tools

    - by Shlomo Shmai
    Are there any good graphical tools (preferably free) for navigating a MySQL database? I find myself running a lot of the same SQL queries to look at data in the tables, and I would imagine there's a GUI for this that makes life easier. Anyone know of such a thing? Thanks a lot.

  • Understanding the nop byte(s)

    - by Cole Johnson
    OK, so I was reading through the AMD64 manuals and, knowing that nop is really an xchg eax, eax, I looked at xchg and found something interesting: it seems a byte can be encoded into the instruction to specify the registers (apologies, I'm on my iPod): picture. So what I am wondering is: how does the processor know whether there is a byte after the opcode to work with, or is it that the extra register has to be of type rAX, causing the instruction to actually still be the one byte 0x90?
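
    The short answer, per the AMD64/Intel opcode maps, is that the opcode itself determines whether a ModRM byte follows. Opcodes 0x90 through 0x97 are special one-byte forms of xchg: rAX is implicit and the other register sits in the low three bits of the opcode, so no extra byte is needed, and when both operands are eAX you get exactly 0x90, i.e. nop. The general form, opcode 0x87, always carries a ModRM byte naming both operands. A few encodings side by side:

        /* one-byte forms (0x90 + r): rAX implicit, other reg in the opcode */
        unsigned char nop[]          = { 0x90 };       /* xchg eax, eax */
        unsigned char xchg_eax_ebx[] = { 0x93 };       /* xchg eax, ebx */

        /* general form (0x87 /r): a ModRM byte names both registers */
        unsigned char xchg_ecx_ebx[] = { 0x87, 0xD9 }; /* xchg ecx, ebx */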

  • 2D & 3D CAD software on the Mac [closed]

    - by Mark Smith
    Hey, I was wondering what 2D & 3D CAD software people use on the Mac? There are a couple of other questions like this on the site, but they're over a year old, and knowing how fast-paced the software game is, I thought I'd repost! I use MacDraft for my 2D drafting and the same company's Interiors Pro for my 3D work, and have done for a while. I used Rhino 4.0 at uni, but being a student I had to find software that wasn't thousands of £$£$!! That's how I stumbled onto the Microspot products. Is anybody else using this software, or has anyone found any other bargains out there?

  • IPC between multiple processes on multiple servers

    - by z8000
    Let's say you have 2 servers, each with 8 CPU cores. Each server runs 8 network services, and each service hosts an arbitrary number of long-lived TCP/IP client connections. Clients send messages to the services; the services do something based on the messages, and potentially notify N (>= 1) of the clients of state changes. Sure, it sounds like a botnet, but it isn't; consider how IRC works with c2s and s2s connections and s2s message relaying. The servers are in the same data center and can communicate over a private VLAN at 1 GigE. Messages are < 1 KB in size.

    How would you coordinate which services on which host should receive and relay state change messages to connected clients? There's an infinite number of ways to solve this problem efficiently:

    - AMQP (RabbitMQ, ZeroMQ, etc.)
    - Spread Toolkit
    - N^2 connections between all services (bad)
    - Heck, even run IRC!
    - ...

    I'm looking for a solution that:

    - perhaps exploits the fact that there's only a small closed cluster
    - is easy to admin
    - scales well
    - is "dumb" (no weird edge cases)

    What are your experiences? What do you recommend? Thanks!
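
    Of the options listed, ZeroMQ is arguably the lightest to prototype for a small closed cluster: every server binds a PUB socket for its own state changes and subscribes to each peer, so there is no broker to admin. A minimal sketch against the libzmq C API (addresses made up, error handling omitted):

        #include <zmq.h>

        int main(void)
        {
            void *ctx = zmq_ctx_new();

            /* publish this server's state changes to all peers */
            void *pub = zmq_socket(ctx, ZMQ_PUB);
            zmq_bind(pub, "tcp://*:5556");

            /* subscribe to one peer on the private VLAN (repeat per peer) */
            void *sub = zmq_socket(ctx, ZMQ_SUB);
            zmq_connect(sub, "tcp://10.0.0.2:5556");
            zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);  /* no filtering */

            zmq_send(pub, "state-change", 12, 0);       /* notify peers      */

            char buf[1024];
            zmq_recv(sub, buf, sizeof buf, 0);          /* peer notification */

            zmq_close(sub); zmq_close(pub); zmq_ctx_destroy(ctx);
            return 0;
        }

    With 8 services on 2 hosts that is a handful of PUB/SUB pairs, well within "small closed cluster" territory; the main thing given up versus a broker is delivery guarantees, since PUB/SUB drops messages for slow or absent subscribers.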

  • When modern computers boot, what initial setup of RAM do they execute, and how exactly does it work?

    - by user272840
    I know the title reeks of confusion, and some of you might assume I am just wondering about how the computer boots in general, but I'm not. So let me sort this out now:

    1. Onboard firmware is how almost all modern computer devices work, with or without EFI/UEFI (and even without "onboard firmware", older computers still employed bank switching or similar methods, with snap-in firmware, cartridges, etc.).

    2. At startup there are no "programs" running in the traditional sense yet, i.e. no kernel, OS, or user applications; every instruction, starting with the very first one, is located through the Instruction Pointer, I am guessing. How is the IP/PC/etc. set to point at the address of the first BIOS/firmware instruction, and how do the BIOS instructions map themselves into memory prior to startup?

    3. Aside from MMIO, the BIOS uses certain RAM addresses to hold instructions. This is where the big question comes in: how does the BIOS do this?

    Conclusion: I am assuming that with the very first instruction there is an initial hardware setup for the BIOS prior to complete OS boot-up. What I want to know is whether it's hardware-engineered to always work this way, whether there's another step in this boot method I am missing or a gap in my information, and how this all works, from the very first instruction to the RAM data itself.

  • Architectural advice - web camera remote access

    - by Alan Hollis
    I'm looking for architectural advice. I have a client I've built a website for which essentially allows users to view their web cameras remotely. The current flow of data is as follows:

    1. User opens page to view web camera image.
    2. JavaScript polls a URL on the server (appended with a unique timestamp) every 1000 ms.
    3. FTP is enabled for the camera's FTP user.
    4. Web camera opens an FTP connection to the server.
    5. Web camera begins taking photos.
    6. Web camera sends each photo to the FTP server.

    On image URL request:

    1. Server reads the latest image for that camera from the hard drive (uploaded via FTP).
    2. Server deletes any older images.

    This is working okay at the moment for a small number of users/cameras (about 10 users and around the same number of cameras), but we're starting to worry about the scalability of this approach.

    My original plan was that, instead of having the files read from the local disk, the web server would open an FTP connection to the FTP server and read the latest images directly from there, meaning we should have been able to scale horizontally fairly easily. But FTP connection establishment times were too slow (mainly because PHP out of the box is unable to persist FTP connections), so we abandoned this approach and went straight for reading from the hard drive.

    The firmware provider for the cameras states they're able to build an HTTP client which, instead of using FTP to upload the image, could POST the image to a web server. This seems plausible enough to me, but I'm looking for some architectural advice. My current thought is a simple Nginx/PHP/Redis stack: the web camera POSTs its latest image to Nginx/PHP, and the latest image for that camera is stored in Redis. The clients can then pull the latest image from Redis, which should be extremely quick as the images will always be in memory. The data flow would then become:

    1. User opens page to view web camera image.
    2. JavaScript polls a URL on the server (appended with a unique timestamp) every 1000 ms.
    3. Camera is sent an HTTP request to start POSTing images to a provided URL.
    4. Web camera begins taking photos.
    5. Web camera sends POST requests to the server as fast as it can.

    On image URL request:

    1. Server reads the latest image from Redis.
    2. Server tells Redis to delete the older image.

    My questions are:

    - Are there any greater overheads to transferring images via HTTP instead of FTP?
    - Is there a simple way to calculate how many cameras we could potentially have streaming at once?
    - Is there any way to prevent potentially DoS'ing our own servers with web camera requests?
    - Is Redis a good solution to this problem?
    - Should I abandon the PHP/Nginx combination and go for something else?
    - Is this proposed solution actually any good?
    - Will adding HTTPS to the mix cause posting the image to become too slow?

    Thanks in advance, Alan
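
    The Redis part of the proposed flow amounts to a SET on each camera POST and a GET on each page poll, keyed per camera; from PHP this would normally go through phpredis or Predis, but the commands are identical from any client. A sketch using the hiredis C client, with a made-up key scheme:

        #include <hiredis/hiredis.h>
        #include <stdio.h>

        /* Store the newest frame for one camera. SET overwrites the previous
         * value in place, so the "delete the older image" step from the FTP
         * flow disappears entirely. */
        int store_latest(redisContext *c, const char *cam,
                         const void *jpeg, size_t len)
        {
            redisReply *r = redisCommand(c, "SET camera:%s:latest %b",
                                         cam, jpeg, len);
            if (r == NULL) return -1;   /* connection-level error */
            freeReplyObject(r);
            return 0;
        }

        int main(void)
        {
            redisContext *c = redisConnect("127.0.0.1", 6379);
            if (c == NULL || c->err) return 1;

            unsigned char frame[] = { 0xFF, 0xD8, 0xFF, 0xD9 }; /* stub JPEG */
            store_latest(c, "cam42", frame, sizeof frame);

            redisFree(c);
            return 0;
        }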

  • Install correct libraries depending on 64/32 bit

    - by Rich
    I am using Bash to install a customised version of JBoss, and one of the things I would like to do is install the correct version of the Apache Portable Runtime (APR), which is a native binary. This script could be run on both 32- and 64-bit versions of RHEL. What are my options for identifying which version of the APR to install? I think we only have 32-bit and x64-based systems here, but I would still like to identify IA-64 (Itanium) systems so that the script can refuse to install on that type of machine. I am aware of using uname -m and grepping /proc/cpuinfo to find out, but was wondering which approach others would recommend.
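
    For what it's worth, the uname -m mentioned in the question is a thin wrapper over the uname(2) syscall, so the same check works from C as well as Bash. A sketch using the machine strings RHEL typically reports (treat the exact strings as assumptions to verify on your hosts):

        #include <sys/utsname.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            struct utsname u;
            if (uname(&u) != 0) return 1;

            if (strcmp(u.machine, "x86_64") == 0)
                puts("install 64-bit APR");
            else if (strcmp(u.machine, "ia64") == 0)
                puts("Itanium: refuse to install");
            else if (u.machine[0] == 'i')      /* i386/i486/i586/i686 */
                puts("install 32-bit APR");
            else
                puts("unknown architecture");
            return 0;
        }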

  • equivalent to CDN but for dynamic content?

    - by ajsie
    So I know that for serving static elements (CSS, JS, images, videos, etc.) you should use a CDN, since its nodes are spread out throughout the world. But how could I spread out my Apache servers? Is there an equivalent to a CDN for dynamic pages, or is it just the traditional LAMP way? If the latter, I guess my best option is to find an international hosting provider that hosts in different countries, so the content will be served from the country nearest the client machine. Any suggestions for such hosting providers? Or is it best practice to contact separate, unrelated hosting providers in the different countries? What is the right way to go?

  • Does Anyone Still Prefer N-Tier Architecture After Having *Shipped* an MVC Application?

    - by Jim G.
    Other SO threads have asked people if they prefer N-Tier or MVC architecture. I'm not looking to continue that debate on this thread. I'm looking for something more specific. My Question: Does Anyone Still Prefer N-Tier Architecture After Having Shipped an MVC Application? Reason for My Question: Before I shipped an MVC web application, I wasn't convinced that it was superior to N-Tier Architecture. Specifically, if better unit testing was the only obvious benefit of MVC, then I saw no reason to switch gears and adopt a new architecture. But after having shipped an MVC application, I can see many benefits (which have been enumerated on other threads).
