Search Results

Search found 12798 results on 512 pages for 'language agnostic'.


  • GI Startup Sequence (11gR2)

    - by Allen Gao
    This article describes the startup sequence of 11gR2 Grid Infrastructure (GI), as background for diagnosing GI startup problems. Broadly, GI starts in three phases: the ohasd phase, the cluster-formation phase, and the resource-startup phase.
    First, the ohasd phase. 1. /etc/inittab contains the entry h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null so init spawns the process root 4865 1 0 Dec02 ? 00:01:01 /bin/sh /etc/init.d/init.ohasd run. For this to happen, the following must hold: + the init.ohasd script exists + the OS is at the appropriate run level + the S* ohasd start script (e.g. S96ohasd) is in place + GI autostart is enabled (crsctl enable crs). Next, ohasd.bin starts. It needs to read the OLR, so if ohasd.bin fails to start, first verify that the OLR exists and is accessible. The OLR is located at $GRID_HOME/cdata/${HOSTNAME}.olr. 2. ohasd.bin then spawns its agents (orarootagent, oraagent, cssdagent and cssdmonitor). If an agent fails to start, check ownership and permissions on the binaries under $GRID_HOME/bin, and check the files for corruption.
    Next, the cluster-formation phase. 1. Mdnsd discovers the cluster nodes and their network interfaces via multicast, so if multicast is broken on the network this daemon runs into problems. 2. Gpnpd starts, completes its bootstrap process, and synchronizes the gpnp profile (<gi_home>/gpnp/profiles/peer/profile.xml) with the other cluster nodes; it relies on mdnsd to discover them, so if that discovery fails the gpnp profile can end up inconsistent across nodes. 3. Gipcd starts; it manages communication over the private network (cluster interconnect) and obtains the private-network information from gpnpd, so it depends on gpnpd. 4. Ocssd.bin starts; it locates the voting disks from the information in the gpnp profile and obtains the private-network information from gpnpd. If ocssd.bin fails to start, the likely causes are: + the gpnp profile is missing or inaccessible + gpnpd did not start properly + the ASM disks holding the voting disks are missing or inaccessible + the private network is down. 5. The remaining lower-stack resources then start: ora.ctssd, ora.asm, ora.cluster_interconnect.haip, ora.crf, ora.crsd, etc. Note: ocssd.bin, gpnpd.bin and gipcd.bin start in parallel and depend on one another. If gpnpd.bin has a problem, ocssd.bin and gipcd.bin keep waiting; only after gpnpd.bin comes up properly can they read the gpnp profile and continue their startup.
    Finally, the resource-startup phase, in which crsd brings up the cluster resources. 1. Crsd starts; it needs to read the OCR. If the OCR is stored in ASM, the ASM instance must already be running and the disk group containing the OCR must be mounted; if the OCR cannot be accessed, crsd cannot start properly. 2. Crsd spawns its agents (orarootagent, oraagent_<rdbms_owner>, oraagent_<gi_owner>). If an agent fails to start, again check ownership, permissions and possible corruption of the binaries under $GRID_HOME/bin. 3. The resources are then started: ora.net1.network: the network resource, which manages the cluster's public network; the scanvip, vip and listener resources depend on it, so if it fails, vip, scanvip and the listeners go offline. ora.<scan_name>.vip: the SCAN VIP resources (at most three). ora.<node_name>.vip: the VIP resource of each node. ora.<listener_name>.lsnr: the listener resources; note that from 11gR2 onward listener.ora is generated automatically and should not be edited manually. ora.LISTENER_SCAN<n>.lsnr: the SCAN listener resources. ora.<diskgroup_name>.dg: the ASM disk group resources; a mounted disk group is online, a dismounted one offline. ora.<database_name>.db: the database resource; in 11gR2 a database is represented by a single resource, and the individual RAC instances are expressed through the resource attribute "USR_ORA_INST_NAME@SERVERNAME(<node name>)". If the database is stored in ASM, the db resource depends on the corresponding disk group resources; this dependency is maintained automatically, and if it ever becomes incorrect it can be adjusted manually (crsctl modify res ...). ora.<service_name>.svc: the service resources; from 11gR2 each service is a single resource, whereas in 10gR2 a service consisted of srv and cs resources. ora.cvu: introduced in 11.2.0.2; it periodically runs the cluvfy tool to health-check the cluster. ora.ons: the ONS (Oracle Notification Service) resource.
    To wrap up, here are the GI log locations that are useful when diagnosing startup problems:
    $GRID_HOME/log/<node_name>/ocssd <== ocssd.bin log
    $GRID_HOME/log/<node_name>/gpnpd <== gpnpd.bin log
    $GRID_HOME/log/<node_name>/gipcd <== gipcd.bin log
    $GRID_HOME/log/<node_name>/agent/crsd <== crsd agents' logs
    $GRID_HOME/log/<node_name>/agent/ohasd <== ohasd agents' logs
    $GRID_HOME/log/<node_name>/mdnsd <== mdnsd.bin log
    $GRID_HOME/log/<node_name>/client <== logs of the GI client tools (ocrdump, crsctl, ocrcheck, gpnptool, etc.)
    $GRID_HOME/log/<node_name>/ctssd <== ctssd.bin log
    $GRID_HOME/log/<node_name>/crsd <== crsd.bin log
    $GRID_HOME/log/<node_name>/cvu <== cluvfy and ora.cvu logs
    $GRID_HOME/bin/diagcollection.sh <== script for collecting all of the above diagnostic logs
    One more thing: the socket files under /var/tmp/.oracle and /tmp/.oracle are used for IPC between the GI daemons. Do not delete or modify them while GI is running; if GI fails to start because they have been damaged, stop GI, remove them, and let them be recreated at the next startup.
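
    As a quick companion to the checks above, here is a minimal diagnostic sketch (an illustration, not part of the original article) that verifies a few of the preconditions listed: the init.ohasd entry, the readable OLR, and the per-daemon log directories. The GRID_HOME default below is an assumption; adjust it for your installation.

```python
import os
import socket

# Assumed install location -- adjust for your system.
GRID_HOME = os.environ.get("GRID_HOME", "/u01/app/11.2.0/grid")
NODE = socket.gethostname().split(".")[0]

def check(label, ok):
    print(("OK  " if ok else "FAIL") + " " + label)

# 1. init.ohasd must be registered with init (see /etc/inittab on older init systems).
try:
    with open("/etc/inittab") as f:
        check("init.ohasd entry in /etc/inittab", "init.ohasd" in f.read())
except FileNotFoundError:
    print("SKIP /etc/inittab not present (upstart/systemd host)")

# 2. ohasd.bin needs a readable OLR at $GRID_HOME/cdata/<hostname>.olr.
olr = os.path.join(GRID_HOME, "cdata", NODE + ".olr")
check("OLR exists and is readable: " + olr, os.access(olr, os.R_OK))

# 3. Log directories for the daemons named in the article.
for d in ["ocssd", "gpnpd", "gipcd", "mdnsd", "ctssd", "crsd", "client"]:
    check("log directory " + d, os.path.isdir(os.path.join(GRID_HOME, "log", NODE, d)))
```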

    Read the article

  • How can you connect three external displays to a laptop with only one VGA or DVI port and plenty of

    - by Byron
    I have had some success with USB docks like this Universal Docking Station by Kensington to connect one external display to my laptop while using the onboard VGA port for another display. But that's only two displays, and I'm shooting for three. All I do is develop software and work in Photoshop... no games. For the sake of discussion, we can assume a Thinkpad or equivalent laptop with Windows 7 (I'm hoping for a platform-agnostic solution). How could I do this?

    Read the article

  • How does ARM Cortex A8 compare with a modern x86 processor

    - by thomasrutter
    I was wondering how a modern ARM chip based on the ARM Cortex A8 compares, in clock-for-clock performance and capability, to a modern x86 chip such as a Core 2 Duo or Core i5. I realise that due to the different instruction sets it'll depend heavily on what you're doing. To put it another way: rendering a web page in WebKit on a 1GHz ARM Cortex A8 based chip should be about equivalent to doing it on a Core i5 at __ MHz? Update October 2013: Since I asked this question years ago it's become a lot more common, when reading about mobile devices, to see architecture-agnostic benchmarks that you can compare across platforms - for example, in-browser benchmarks like SunSpider in WebKit will run on just about anything, and you see these in reviews all the time now. And there are things like Geekbench now.

    Read the article

  • How does the build quality of laptops compare?

    - by pgwillia
    I'm looking to replace my five-year-old laptop. I want my next laptop to endure at least this long. I typically have Thunderbird, Firefox, Eclipse Java IDE, Skype, an SSH session, and Apache Tomcat running. I'm currently running Karmic Ubuntu, but am agnostic about operating system and would move to Win 7 or OS X. I frequently travel with this computer. I also value battery longevity and power conservation (if possible). Above all I'm looking to minimize cost. I think the hardware that best meets my needs is an Intel i7 processor, 8 GB RAM, a 100 GB 7200 rpm hard drive or an SSD, and about a 15-inch display. These specs are met by most brands. Does anyone know specific pros/cons and build quality for the MacBook Pro, Lenovo Thinkpad (W510 or T510), Sony's VPC-F1190, and the ASUS G Series G73JH-X1 notebook? Are all i7 processors created equal? Do you have other suggestions that meet my needs?

    Read the article

  • Is there such a thing as a portable database that runs easily on OS X and Windows?

    - by Jean-Philippe Murray
    I need to make a small database for a project at school (not computer related at all; I'm indexing and categorizing paper documents of a research project). The thing is that in September my semester is over and other students will have to take over the project (and so on, every semester!), so I need something free and OS-agnostic (or at least OS X/Windows) that can easily be handed to the next students on the project. I was thinking about a USB key running WAMP with a MySQL / HTML interface, but that would become locked to the OS I choose first. LibreOffice and the likes will be an option in the end if I don't find anything truly portable. Anyone have a solution in mind?
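
    The stock answer to this kind of requirement is SQLite: the whole database is one cross-platform file with zero server setup. A minimal sketch using Python's built-in sqlite3 module follows (Python on the lab machines, and the table layout, are my assumptions for illustration):

```python
import sqlite3

# One portable file holds the whole database; copy it between OS X and Windows freely.
conn = sqlite3.connect("paper_index.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id INTEGER PRIMARY KEY,
        title TEXT NOT NULL,
        category TEXT,
        box_number TEXT
    )
""")
conn.execute(
    "INSERT INTO documents (title, category, box_number) VALUES (?, ?, ?)",
    ("Field notes, spring survey", "notes", "B-12"),
)
conn.commit()

for row in conn.execute("SELECT id, title, category FROM documents"):
    print(row)
conn.close()
```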

    Read the article

  • Sticky sessions on load balancers with HTTP and HTTPS

    - by javano
    How do sticky sessions relate to HTTP and HTTPS? If I place a load balancer in front of some web app servers running a front end that supports HTTPS, will the sessions remain "sticky" on a typical load balancer that lists "sticky sessions" as one of its supported features? I understand the question is partly open ended; to clarify, would I require a load balancer that supports sticky HTTPS sessions specifically, or is "sticky sessions" a principle that functions agnostically of the HTTP payload, be it encrypted or not? Thank you.
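
    One way to see why the answer depends on the balancing method: cookie-based stickiness requires the balancer to read HTTP headers (so it must terminate TLS to stay sticky over HTTPS), while source-IP affinity never inspects the payload at all and works identically for HTTP and HTTPS. A toy sketch of the payload-agnostic variant (backend names are placeholders):

```python
import hashlib

BACKENDS = ["app1.internal", "app2.internal", "app3.internal"]  # placeholder names

def pick_backend(client_ip: str) -> str:
    # Hash only the connection's source address -- nothing from the (possibly
    # encrypted) HTTP payload -- so the same client always lands on the same box.
    digest = hashlib.sha1(client_ip.encode()).digest()
    return BACKENDS[digest[0] % len(BACKENDS)]

print(pick_backend("203.0.113.7"))  # stable choice for this client
print(pick_backend("203.0.113.7"))  # same backend again
```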

    Read the article

  • Advantages of a deployment tool over shell

    - by Jimmy
    Currently I have all of my deployment scripts in shell, which install about 10 programs and configure them. The way I see it, shell is a fantastic tool for this: Modular: Only one program per script, so I can spread the programs across different servers. Simple: Shell scripts are extremely simple and don't need any other software installed. One-click: I only have to run the shell script once and everything is set up. Agnostic: Most programmers can figure out shell, and don't need to know how to use a specific program. Versioning: Since my code is on GitHub, a simple git pull and a restart of supervisor will run my latest code. My question is: with all of these advantages, why do people constantly tell me to use a tool such as Ansible or Chef rather than shell?
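
    For context on what those tools add, the usual answer is idempotence and desired-state convergence: a config-management run describes the end state and can be re-run safely, whereas a naive shell script replays steps. A rough Python sketch of the idea (package and file names are made up for illustration; assumes a Debian-style host with root privileges):

```python
import os
import subprocess

def ensure_package(name: str) -> None:
    # Converge, don't replay: install only if missing, so re-runs are safe.
    installed = subprocess.run(
        ["dpkg", "-s", name], capture_output=True
    ).returncode == 0
    if not installed:
        subprocess.run(["apt-get", "install", "-y", name], check=True)

def ensure_line(path: str, line: str) -> None:
    # Append a config line only if it is not already present.
    existing = open(path).read() if os.path.exists(path) else ""
    if line not in existing:
        with open(path, "a") as f:
            f.write(line + "\n")

ensure_package("nginx")                       # hypothetical target package
ensure_line("/etc/myapp.conf", "workers=4")   # hypothetical config file
```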

    Read the article

  • The Debut of Oracle Database Firewall at RSA 2011

    - by Troy Kitch
    We're very proud of the coverage and headlines Oracle Database Firewall made this past week during RSA Conference 2011 in San Francisco. In case you missed our previous post, we announced the availability of this latest addition to the Oracle Defense-in-Depth database security solutions. The announcement was picked up by many publications, including eWeek, CRN, InformationWeek and more. Here is just some of the press on this very important security solution: "It's rare to find a new product category these days, but I think a new product from Oracle fills the bill. In the crowded enterprise security field, that's saying something." Enterprise System Journal: A New Approach to Database Security By James E. Powell "Databases and the content they store are among the most valuable IT assets - and the most targeted by hackers. In an effort to help secure databases, Oracle today is launching the new Oracle Database Firewall as an approach to defend databases against SQL injection and other database attacks." Database Journal: Oracle Debuts Database Firewall (also appeared in InternetNews.com) By Sean Michael Kerner "Oracle Database Firewall understands SQL-statement formats, and can be configured to blacklist and whitelist traffic based on source. When it detects suspicious statements within SQL traffic -- ones that might indicate SQL injection attacks, for example -- it can replace them with neutral statements that will keep the session running without allowing potentially harmful traffic through." Network World: Oracle Database Firewall defuses SQL injection attacks By Tim Green "The firewall uses "SQL grammar analysis" to prevent SQL injection attacks and other attempts to grab information. The Oracle Database Firewall features white and black lists policies, exceptions and rules that mark the time of day, IP address, application and user." ZDNet: RSA Roundup: Oracle Database Firewall By Larry Dignan "The database giant announced Oracle Database Firewall on Feb. 14 at the RSA Conference in San Francisco. The firewall application establishes a "defensive perimeter" around databases by monitoring and enforcing normal application behavior in real-time, the company said." eWEEK: Oracle Database Firewall Delivers Vendor-Agnostic Security By Fahmida Y. Rashid

    Read the article

  • General directions on developing a server-side control system for a JS/Canvas action RPG

    - by Billy Ninja
    Well, yesterday I asked about anti-cheat measures in JS and confirmed what I kind of already knew: it's just not possible. Now I want to measure roughly how hard it is to implement server-side checking that is agnostic to client input and that doesn't hurt the game experience too much. I don't want to waste too many resources on this, since it's initially going to be a single-player game into which I may want to introduce some kind of ranking or trading system later on. I'd rather deliver more cool game features instead, and I don't want to have to guarantee super-fast server responses to keep the game lag free. I'd rather go with looser, discrete control of key variables and instances - like storing the user's actions in a FIFO buffer on the client and pushing those actions to the server gradually. I'd love to see an elegant, generic solution that I could plug into the root of my client game logic (not having to scatter checks everywhere in my client JS) and a few classes on the Node.js server that could handle it - without having to mirror/describe all of my game entities a second time on the server.
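
    As one possible shape for that loose, discrete checking, here is a sketch (invented names and limits, shown in Python rather than JS for illustration) of a server-side validator that consumes a batch of buffered client actions and rejects physically impossible ones, without mirroring every game entity:

```python
MAX_SPEED = 300.0  # assumed world units per second for the player avatar

def validate_batch(player, actions):
    """Check a FIFO batch of client actions against coarse server-side limits."""
    ok = []
    last_t, last_x, last_y = player["t"], player["x"], player["y"]
    for a in actions:  # each action: {"t": ..., "x": ..., "y": ...}
        dt = a["t"] - last_t
        if dt <= 0:
            continue  # out-of-order or duplicated timestamp -> drop
        dist = ((a["x"] - last_x) ** 2 + (a["y"] - last_y) ** 2) ** 0.5
        if dist / dt > MAX_SPEED:
            continue  # faster than physically possible -> drop
        ok.append(a)
        last_t, last_x, last_y = a["t"], a["x"], a["y"]
    player.update({"t": last_t, "x": last_x, "y": last_y})
    return ok

player = {"t": 0.0, "x": 0.0, "y": 0.0}
batch = [{"t": 0.5, "x": 10.0, "y": 0.0}, {"t": 0.6, "x": 500.0, "y": 0.0}]
print(validate_batch(player, batch))  # second action rejected as teleporting
```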

    Read the article

  • Microsoft Sync Framework

    - by kaleidoscope
    Introduction It is a platform that enables collaboration and offline access for applications, services and devices. The Sync Framework features technologies and tools that enable roaming, data sharing and taking data offline. Moreover, developers can build synchronization ecosystems that integrate any application with data from any store, using any protocol over any network. Highlights * Add sync support to new and existing applications, services and devices * Enable collaboration and offline capabilities for any application * Roam and share information from any data store, over any protocol and over any network configuration * Leverage sync capabilities exposed in Microsoft technologies to create sync ecosystems * Extend the architecture to support custom data types including files Benefits of using Sync Framework * An extensible model that lets you integrate multiple data sources into a synchronization ecosystem. * A managed API for all components and a native API for select components. * Conflict handling for automatic and custom resolution schemes. * Filters that let you synchronize a subset of data, such as only those files that contain images. * A compact and efficient metadata model that enables synchronization for virtually any participant, without significant changes to the data store: - Any data store     Add synchronization to a wide range of applications, services and devices. - Any data type     Introduce new data types to synchronize. - Any protocol     Use existing architectures and protocols to synchronize data. The transport-agnostic architecture allows integration of synchronization into a variety of protocols, including over-the-air and embedded devices. - Any network configuration     Enable synchronization for your applications, devices and services in true peer-to-peer or hub-and-spoke configurations. Easily recover from network interruptions. Reduce network traffic by efficiently selecting changes to synchronize.

    Read the article

  • What build tools do not depend on Java (or Ruby)?

    - by Mohamed Meligy
    I'm wondering what generic build tools out there ship with their own binary run-times and do not depend on another environment that isn't bundled with them. For example, ANT requires Java, Rake requires Ruby, etc. It would also be great to hear about target-platform-agnostic tools, where I'd just give whatever command for building, whatever command for testing, etc., and could then define my artifacts in CI or so. I'd find something like that useful for building .NET projects (say, on both Windows .NET and Mono), and especially Node JS projects. I do not want to install Java and/or Ruby if what I want is a .NET build or a Node JS build. This is a bit of a general-awareness question, not an exact problem I'm facing; that's why it's here and not on StackOverflow. Update: To explain a bit more, what I'm after is a build script that would run MSBuild for compiling (in the .NET case; maybe several Node/npm commands in the Node case) and then perform the remaining build/test steps, instead of defining them all in MSBuild (and I'm wondering whether there is an equivalent story in Node).

    Read the article

  • How do I choose a package format for Linux software distribution?

    - by Ian C.
    We have a Java-based application that, to date, we've been distributing as a tarball with instructions for deploying. It's mostly self-contained, so deployment is fairly straightforward: Untar on the disk you'd like it to live on; make sure a suitable distribution and version of Java is in your path; verify ownership and group on all the files; start up the server processes with our start script. If the user wants to get into start-on-boot stuff with SysV, we have written instructions and a template init file in our tarball. We'd like to make this installation process a little more seamless: take care of the permissions and the init script deployment. We're also going to start bundling our own JRE with the application so that we're mostly free of external dependencies. The question we're faced with now is: how do we pick a package format for distribution? Is RPM the standard? Can all package management tools deal with it now? Our clients primarily run RHEL and CentOS, but we do have some using SuSE and even Debian. If we can pick a distro-agnostic format we'd prefer that. What about a self-extracting shell script, akin to how Java is distributed? If we're dependency-free, would the self-extracting script be sufficient? What features or conveniences would we lose out on by going with the script versus a proper package format meant for use by a package manager?

    Read the article

  • DDD Model Design and Repository Persistence Performance Considerations

    - by agarhy
    So I have been reading about DDD for some time, trying to figure out the best approach on several issues. I tend to agree that I should design my model in a persistence-agnostic manner, and that repositories should load and persist my models in valid states. But are these approaches practical? I mean, it's normal for a model to hold a reference to a collection of another type. Persisting that model should mean persisting the entire collection. Fine. But do I really need to load the entire collection every time I load the model? Probably not. So I can have specialized repositories: some that load maybe a subset of the object graph via DTOs and others that load the entire object graph. But when do I use which? If I have DTOs, what's stopping client code from calling them directly and completely bypassing the model? I could have mappers and factories to create my models from DTOs, but depending on the design of my models that might not always work, or it might not allow my models to be created in a valid state. What's the correct approach here?
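
    One common resolution, sketched below with invented types, is to keep a single repository for the aggregate (loaded whole, for writes) and route reads through a separate query service that returns DTOs; the DTOs then live only on the read side, so they can't bypass the model's invariants on the write path:

```python
from dataclasses import dataclass, field

@dataclass
class OrderLine:
    sku: str
    qty: int

@dataclass
class Order:
    order_id: int
    lines: list = field(default_factory=list)  # full collection, loaded for writes

    def add_line(self, sku: str, qty: int) -> None:
        if qty <= 0:
            raise ValueError("quantity must be positive")  # invariant lives here
        self.lines.append(OrderLine(sku, qty))

class OrderRepository:
    """Write side: always loads and saves the aggregate whole and valid."""
    def __init__(self):
        self._store = {}
    def get(self, order_id: int) -> Order:
        return self._store[order_id]
    def save(self, order: Order) -> None:
        self._store[order.order_id] = order

@dataclass(frozen=True)
class OrderSummaryDTO:  # read side: a flat projection, no behavior
    order_id: int
    line_count: int

def order_summaries(repo: OrderRepository):
    """Read side: returns DTOs, never the aggregate itself.
    (Reaches into the sketch's dict; a real impl would query the read store.)"""
    return [OrderSummaryDTO(o.order_id, len(o.lines)) for o in repo._store.values()]
```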

    Read the article

  • Should I be using Lua for game logic on mobile devices?

    - by Rob Ashton
    As above really: I'm writing an Android-based game in my spare time (Android because it's free and I've no real aspirations to do anything commercial). The game logic comes from a very typical component-based model whereby entities exist, components are attached to them, and messages are sent to and fro in order to make things happen. Obviously the layer for actually performing that is thin, and if I were to write an iPhone version of this app, I'd have to re-write the renderer and core driver (of this component-based system) in Objective-C. The entities are just flat files determining the names of the components to be added, and the components themselves are simple, single-purpose objects containing the logic for the entity. Now, if I write all the logic for those components in Java, then I'd have to re-write them in Objective-C if I decided to do an iPhone port. As the bulk of the application logic is contained within these components, they would, in an ideal world, be written in some platform-agnostic language/script/DSL which could then just be loaded into the app on whatever platform. I've been led to believe, however, that this is not an ideal world, that Lua performance etc. on mobile devices still isn't up to scratch, that the overhead is too much, and that I'd run into trouble later if I went down that route. Is this actually the case? Obviously this is just a hypothetical question; I'm happy writing them all in Java as it's simple and easy to get things off the ground, but say I actually enjoy making this game (unlikely, given how much I'm currently disliking having to deal with all those different mobile devices) and I wanted to make a commercially viable game - would I use Lua, or would I just take the hit when it came to porting and re-write all the code?
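
    To make the porting question concrete, here is a sketch (invented component names; Python standing in for whichever language the logic layer uses) of the data-driven pattern the question describes: entities are flat files listing component names, and each component is a small single-purpose logic object. Whatever language this layer is written in - Java, Lua, or otherwise - is exactly the part a port would have to re-host:

```python
# Registry of single-purpose components; an entity file is just a list of names.
COMPONENT_TYPES = {}

def component(name):
    def register(cls):
        COMPONENT_TYPES[name] = cls
        return cls
    return register

@component("health")
class Health:
    def __init__(self):
        self.hp = 100
    def on_message(self, entity, msg):
        if msg.get("kind") == "damage":
            self.hp -= msg["amount"]

@component("loot_dropper")
class LootDropper:
    def __init__(self):
        pass
    def on_message(self, entity, msg):
        if msg.get("kind") == "died":
            print("dropping loot for", entity.name)

class Entity:
    def __init__(self, name, component_names):
        self.name = name
        self.components = [COMPONENT_TYPES[n]() for n in component_names]
    def send(self, msg):  # messages flow to every attached component
        for c in self.components:
            c.on_message(self, msg)

# A "goblin.entity" flat file would contain: health, loot_dropper
goblin = Entity("goblin", ["health", "loot_dropper"])
goblin.send({"kind": "damage", "amount": 30})
```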

    Read the article

  • Making a 2D game with responsive resolution

    - by alexandervrs
    I am making a 2D game, and I wish for it to be resolution agnostic. My target resolution, i.e. where things look as intended, is 1600 x 900. My ideas are: Make the HUD stay fixed to the sides no matter what the resolution; use one size for HUD graphics below a certain resolution and another above a certain large one. Use large HD PNG sprites/backgrounds which are a power of 2, so they scale nicely. No vectors. Use the player's native resolution. Scale the game area (not the HUD) to fit - resulting in some zoom, and cropping the sides of the game area if necessary for widescreen, with no stretching - but always fill the screen. Have a min and max resolution limit for small and very large displays, where you would just change the resolution(?) or scale up/down to fit. What I am a bit confused about, though, is what math formula I would use to scale the game area correctly based on the resolution, no matter the aspect ratio, so that it fully fits a squarer screen and clips the sides a bit on widescreen. Pseudocode would help as well. :)
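
    For the fill-and-crop ("cover") behaviour described, the usual formula is scale = max(screen_w / design_w, screen_h / design_h): scaling the 1600 x 900 design space by that factor always covers the screen, and whatever overflows is cropped equally on both sides. A sketch (the design size comes from the question; the rest is generic):

```python
DESIGN_W, DESIGN_H = 1600, 900  # resolution where things "look as intended"

def cover_transform(screen_w: int, screen_h: int):
    """Return the scale factor plus crop offsets for fill-and-crop scaling."""
    # Take the larger ratio so the scaled game area covers the screen in
    # both dimensions; the excess along the other axis gets cropped.
    scale = max(screen_w / DESIGN_W, screen_h / DESIGN_H)
    scaled_w, scaled_h = DESIGN_W * scale, DESIGN_H * scale
    # Center the game area: these offsets are how much hangs off-screen.
    offset_x = (scaled_w - screen_w) / 2
    offset_y = (scaled_h - screen_h) / 2
    return scale, offset_x, offset_y

print(cover_transform(1920, 1080))  # 16:9 -> pure zoom, no crop
print(cover_transform(1280, 1024))  # 5:4  -> sides of the game area cropped
```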

    Read the article

  • Did 12.04 just add multi-touch gesture support mid-release?

    - by adempewolff
    I was reviewing the updates I was about to download today and I noticed that a lot of them had to do with gesture support, and that many of them were new installs rather than upgrades. Has 12.04 just added multi-touch gesture support mid-release? If so, what capabilities does this add? Which applications already support them, and can I expect others to add support in the near future? Here are the packages that were installed: Install: libframe6:amd64 (2.2.4-0ubuntu0.12.04.1), libgeis1:amd64 (2.2.9.2-0ubuntu1), libgrail5:amd64 (3.0.6-0ubuntu0.12.04.01, automatic) And here are those that were upgraded (also including many with touch support): Upgrade: libgrip0:amd64 (0.3.4-0ubuntu2~ubuntu12.04.1, 0.3.5-0ubuntu1~12.04.1), eog:amd64 (3.4.2-0ubuntu1, 3.4.2-0ubuntu1.1), ginn:amd64 (0.2.4-0ubuntu1, 0.2.4.1-0ubuntu1) Of which the descriptions for the new installs are: libgeis1: Gesture engine interface support - A common API for clients of a systemwide gesture recognition and propagation engine. libframe6: Touch Frame Library - This library handles the buildup and synchronization of a set of simultaneous touches. The library is input agnostic, with bindings for mtdev, frame and XI2.1. libgrail5: Gesture Recognition And Instantiation Library - This library consists of an interface and tools for handling gesture recognition and gesture instantiation. Applications can use the grail callbacks to receive gesture primitives and raw input events from the underlying kernel device. And the descriptions for the upgraded packages are: libgrip0: provides multitouch gestures to GTK+ apps - Libgrip hooks gesture recognition into GTK+ applications. ginn: Gesture Injector: No-GEIS, No-Toolkits - A daemon with jinn-like wish-granting capabilities: it gives applications the ability to support a subset of multi-touch gestures without having to integrate GEIS or multi-touch GTK/Qt libs. Adding a ton of new libraries and upgrading the existing components makes me wonder if 12.04 is meant to start natively supporting gestures other than two-finger scroll in the near future. I expected these capabilities to be introduced soon, but I thought they would only be rolled out in a new release, not as upgrades to an existing release. Anyone have any info about this?

    Read the article

  • Python or Ruby in 2011.

    - by Sleeper Smith
    What I'm really asking is: given the services and technologies currently available, which is the more "useful" language? Which one has more opportunity? Some background info first: I've been a .NET C# dev for 5 years. Having done a few projects on Amazon AWS, I'm looking to start a few projects of my own. But Azure's too expensive, and AWS has too much management overhead. My current choice is Google App Engine and Python - logical enough. But what I want to ask here is this: In the Linux world, which is more useful? I recently heard about Heroku for Ruby. How viable is it? Looking at the pricing model indicates that it's more expensive. Which one has more up-to-date and exciting open source projects? For instance, Trac is just plain outdated compared to Redmine. One of the big reasons pulling me toward Ruby is Redmine. Implementations? IronPython/IronRuby/JRuby etc. Which one is more standardised and more implementation-agnostic? Which one is easier to port between Windows and Linux? Anyway, your input and thoughts are greatly appreciated. Thanks.

    Read the article

  • What's the relationship between meta-circular interpreters, virtual machines and increased performance?

    - by Gomi
    I've read about meta-circular interpreters on the web (including in SICP) and I've looked into the code of some implementations (such as PyPy and Narcissus). I've read quite a bit about two languages which made great use of metacircular evaluation, Lisp and Smalltalk. As far as I understand, Lisp had the first self-hosting compiler and Smalltalk had the first "true" JIT implementation. One thing I've not fully understood is how those interpreters/compilers achieve such good performance - in other words, why is PyPy faster than CPython? Is it because of reflection? My Smalltalk research also led me to believe that there's a relationship between JIT, virtual machines and reflection. Virtual machines such as the JVM and CLR allow a great deal of type introspection, and I believe they make great use of it in just-in-time (and AOT, I suppose?) compilation. But as far as I know, virtual machines are kind of like CPUs, in that they have a basic instruction set. Are virtual machines efficient because they include type and reference information, which would allow language-agnostic reflection? I ask this because many interpreted and compiled languages now use bytecode as a target (LLVM, Parrot, YARV, CPython), and traditional VMs like the JVM and CLR have gained incredible boosts in performance. I've been told that it's about JIT, but as far as I know JIT is nothing new, since Smalltalk and Sun's own Self were doing it before Java. I don't remember VMs performing particularly well in the past; there weren't many non-academic ones outside of the JVM and .NET, and their performance was definitely not as good as it is now (I wish I could source this claim but I speak from personal experience). Then all of a sudden, in the late 2000s something changed, and a lot of VMs started to pop up even for established languages, with very good performance. Was something discovered about JIT implementation that allowed pretty much every modern VM to skyrocket in performance? A paper or a book, maybe?
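
    For readers who haven't met the term: a meta-circular interpreter evaluates a language using that language's own machinery. Here is a minimal SICP-style sketch of an evaluator for a tiny Lisp, written in Python purely for illustration (a true meta-circular version would be written in the language it interprets); it shows the idea, not why JITs are fast:

```python
import operator

GLOBAL_ENV = {"+": operator.add, "-": operator.sub, "*": operator.mul, "<": operator.lt}

def evaluate(expr, env):
    """Tiny SICP-style evaluator: expressions are nested Python lists."""
    if isinstance(expr, str):            # variable reference
        return env[expr]
    if not isinstance(expr, list):       # self-evaluating literal (numbers)
        return expr
    head = expr[0]
    if head == "if":                     # (if test then else)
        _, test, conseq, alt = expr
        return evaluate(conseq if evaluate(test, env) else alt, env)
    if head == "lambda":                 # (lambda (params) body)
        _, params, body = expr
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    fn = evaluate(head, env)             # application: eval operator, then operands
    return fn(*[evaluate(arg, env) for arg in expr[1:]])

# ((lambda (x) (* x x)) 7) => 49
print(evaluate([["lambda", ["x"], ["*", "x", "x"]], 7], GLOBAL_ENV))
```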

    Read the article

  • Windows Azure Event

    - by Blog Author
    Get cloud ready with Windows Azure: The cloud is everywhere and here at Microsoft we're flying high with our cloud computing release, Windows Azure. As most of you saw at the Professional Developers Conference, the reaction to Windows Azure has been nothing short of "wow" - and based on your feedback, we've organized this special, all-day Windows Azure Firestarter event to help you take full advantage of the cloud. Maybe you've already watched a webcast, attended a recent MSDN Event on the topic, or done your own digging on Azure. Well, here's your chance to go even deeper. This one-of-a-kind event will focus on helping developers get 'cloud ready' with concrete details and hands-on tactics. We'll start by revealing Microsoft's strategic vision for the cloud, and then offer an end-to-end view of the Windows Azure platform from a developer's perspective. We'll also talk about migrating your data and existing applications (regardless of platform) onto the cloud. We'll finish up with an open panel and lots of time to ask questions. Following this event, please join us for an engaging conversation about any and all Cloud Computing topics. This FREE event is hosted by Northwest Cloud, the cloud-agnostic community group, and sponsored by Microsoft. http://www.nwcloud.org/redmond/2010-04-06

    Read the article

  • C++ Numerical Recipes – A New Adventure!

    - by JoshReuben
    I am about to embark on a great journey - over the next 6 weeks I plan to read through C++ Numerical Recipes, 3rd edition http://amzn.to/YtdpkS I'll be reading this with an eye to C++ AMP, thinking about implementing the suitable subset (non-recursive, additive, commutative) to run on the GPU. APIs supporting HPC, GPGPU or MapReduce are all useful - providing you have the ability to choose the correct algorithm to leverage on them. I really think this is the most fascinating area of programming - a lot more exciting than LOB CRUD!!! When you think about it, everything is a function - we categorize & we extrapolate. As abstractions get higher & less leaky, sooner or later information systems programming will become a non-programmer task - you will be using WYSIWYG designers to build: GUIs, MVVM, service mapping & virtualization, workflows, ORM, entity relations in the data source. SharePoint / LightSwitch are not there yet, but every iteration gets closer. For information workers, managed code is a race to the bottom. As MS futures are a bit shaky right now, the provider-agnostic nature & higher barriers of entry of both C++ & numerical analysis seem like a rational choice to me. It's also fascinating - stepping outside the box. This is not the first time I've delved into numerical analysis. 6 months ago I read Numerical Methods with Applications, which can be found for free online: http://nm.mathforcollege.com/ 2 years ago I learned the .NET Extreme Optimization library www.extremeoptimization.com - not bad. 2.5 years ago I read Schaum's Numerical Analysis book http://amzn.to/V5yuLI - not an easy read, as topics jump back & forth across chapters. 3 years ago I read Practical Numerical Methods with C# http://amzn.to/V5yCL9 (which is a toy learning language for this kind of stuff). I also read through AI: A Modern Approach, 3rd edition, end to end http://amzn.to/V5yQSp - this took me a few years but was the most rewarding experience. I'll post progress updates - see you on the other side!

    Read the article

  • Oracle Unveils AutoVue Release 20.1

    - by prasenjit.niyogi(at)oracle.com
    We are extremely pleased to announce the availability of Oracle's AutoVue Release 20.1. AutoVue 20.1 is the latest major release of the family of Enterprise Visualization solutions from Oracle. Highlights of the release include: Unparalleled new format support and enhancements for 3D CAD, 2D CAD, ECAD and PDF documents New capabilities that support end-to-end design-to-manufacture processes in the Electronics & High Tech space, allowing manufacturing engineers to perform accurate manufacturability reviews through better support for variants, overlays and polarity Significant printing enhancements, such as printing of markup notes, support for Excel file print settings, and printing in grayscale, which serve to optimize paper-based business processes Powerful integration enablement capabilities to extend visualization into existing enterprise architectures and systems, including AutoVue Hotspots that enable visual navigation and action by linking visual data to structured enterprise data, and new AutoVue Document Print Services (DPS) to enrich enterprise applications with format- and platform-agnostic printing of any document type Improvements for cost-effective AutoVue deployment and administration, including support for virtualization Release 20.1 Webcast - Attend the webcast on April 13th at 12:00 pm EST to discover what is new and exciting in the latest release. Encourage your customers, prospects, and partners to attend. Title: Oracle Unveils AutoVue Release 20.1 Channel: Oracle AutoVue Channel Register Here: http://www.brighttalk.com/webcast/26282 To discover more about the latest release, and to find out what customers and partners are saying about the value of this offering, check out the What's New in AutoVue 20.1 Datasheet. You can also learn all about the latest format support in the AutoVue 20.1 Format Support Sheet. We look forward to seeing you at the webcast. If you have any questions feel free to ask, and we will answer them in this forum. Enjoy AutoVue 20.1!

    Read the article

  • Distinguishing between UI command & domain commands

    - by SonOfPirate
    I am building a WPF client application using the MVVM pattern that provides an interface on top of an existing set of business logic residing in a library which is shared with other applications. The business library follows a domain-driven architecture using CQRS to separate the read and write models (no event sourcing). The combination of technologies and patterns has brought up an interesting conundrum: The MVVM pattern uses the command pattern for handling user interaction with the view models. .NET provides an ICommand interface which is implemented by most MVVM frameworks, like MVVM Light's RelayCommand and Prism's DelegateCommand. For example, the view model would expose a number of command objects as properties that are bound to the UI and respond when the user performs actions like clicking buttons. Many implementations of CQRS use the command pattern to isolate and encapsulate individual behaviors. In my business library, we have implemented the write model as command / command-handler pairs. As such, when we want to do some work, such as create a new order, we 'issue' a command (CreateOrderCommand) which is routed to the command handler responsible for executing it. This is great, clearly explained in many sources, and I am good with it. However, take this scenario: I have a ToolbarViewModel which exposes a CreateNewOrderCommand property. This ICommand object is bound to a button in the UI. When clicked, the UI command creates and issues a new CreateOrderCommand object to the domain, which is handled by the CreateOrderCommandHandler. This is difficult to explain to other developers, and I find myself getting tongue-tied because everything is a command. I'm sure I'm not the first developer to have patterns overlap like this where the naming/terminology also overlaps. How have you approached distinguishing commands used in the UI from those used in the domain? (Edit: I should mention that the business library is UI-agnostic, i.e. no UI technology-specific code exists, or will exist, in this library.)
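
    One convention that helps in situations like this (sketched here in Python with invented names, since the pattern itself is language-neutral) is to let the UI command speak in terms of the user's gesture and the domain command in terms of the business fact, with the UI command's only job being to build and dispatch the domain command:

```python
class CreateOrderCommand:
    """Domain command: an immutable statement of business intent."""
    def __init__(self, customer_id: int):
        self.customer_id = customer_id

class CreateOrderCommandHandler:
    """Domain side: the one place that executes the command."""
    def handle(self, cmd: CreateOrderCommand) -> None:
        print(f"creating order for customer {cmd.customer_id}")

class CommandBus:
    def __init__(self):
        self._handlers = {}
    def register(self, cmd_type, handler):
        self._handlers[cmd_type] = handler
    def dispatch(self, cmd):
        self._handlers[type(cmd)].handle(cmd)

class NewOrderUiCommand:
    """UI-side command (the ICommand analogue): translates a click into a
    domain command and dispatches it; no business logic lives here."""
    def __init__(self, bus: CommandBus, get_selected_customer):
        self._bus = bus
        self._get_selected_customer = get_selected_customer
    def can_execute(self) -> bool:
        return self._get_selected_customer() is not None
    def execute(self) -> None:
        self._bus.dispatch(CreateOrderCommand(self._get_selected_customer()))

bus = CommandBus()
bus.register(CreateOrderCommand, CreateOrderCommandHandler())
toolbar_cmd = NewOrderUiCommand(bus, get_selected_customer=lambda: 42)
if toolbar_cmd.can_execute():
    toolbar_cmd.execute()
```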

    Read the article

  • What are the hard and fast rules for Cache Control?

    - by Metalshark
    Confession: sites I maintain have different rules for Cache-Control, mostly based on the default configuration of the server, followed up with recommendations from the Page Speed & YSlow Firefox plug-ins and the Network Resources view in Google's Speed Tracer. Cache-Control is set to private/public depending on what they say to do, ETag/Last-Modified headers are only tinkered with if YSlow suggests there is something wrong, and Vary: Accept-Encoding seems necessary when manually gzipping files for Amazon CloudFront. When reading through the material on the different options and what they do, there seems to be conflicting information, rules for broken proxies and cargo-cult configurations. Any of the official information provided by the analysis tools mentioned above is quite inaccessible, as it deals with each topic individually instead of as a unified strategy (so there is no cross-referencing of techniques). For example, it seems to make no sense that the speed analysis tools rate a site with ETags the same as a site without them if they are meant to help with caching. What are the hard and fast rules for a platform-agnostic Cache-Control strategy? EDIT: A link through Jeff Atwood's article explains caching in superb depth. For the record, though, here are the hard and fast rules: If the file is compressed using gzip, etc., use "Cache-Control: private", as a proxy may return the compressed version to a client that does not support it (the browser cache will hold files marked this way, though). Also remember to include "Vary: Accept-Encoding" to say that it is compressible. Use Last-Modified in conjunction with ETag - belt-and-braces usage provides both validators; ETag is based on file contents instead of modification time alone, so using both covers all bases. NOTE: AOL's PageTest has a carte-blanche approach against ETags for some reason. If you are using Apache on more than one server to host the same content, then remove the implicitly declared inode from ETags by excluding it from the FileETag directive (i.e. "FileETag MTime Size") unless you are genuinely using the same live filesystem. Use "Cache-Control: public" wherever you can - this means that proxy servers (and the browser cache) will return your content even if the rest of the page needs HTTP authentication, etc.
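
    Collected into code, those rules look roughly like this (a sketch: the gzip/public decisions follow the list above exactly, while the MD5-of-content choice for the ETag is my own illustration):

```python
import hashlib
import os
from email.utils import formatdate

def cache_headers(path: str, gzipped: bool, public: bool) -> dict:
    """Build response headers following the hard-and-fast rules above."""
    with open(path, "rb") as f:
        body = f.read()
    headers = {
        # ETag from content, Last-Modified from mtime: belt and braces.
        "ETag": '"%s"' % hashlib.md5(body).hexdigest(),
        "Last-Modified": formatdate(os.path.getmtime(path), usegmt=True),
    }
    if gzipped:
        # Compressed responses: keep proxies honest.
        headers["Cache-Control"] = "private"
        headers["Vary"] = "Accept-Encoding"
    else:
        headers["Cache-Control"] = "public" if public else "private"
    return headers

# Assuming an index.html exists in the current directory:
print(cache_headers("index.html", gzipped=True, public=True))
```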

    Read the article

  • Can DVCSs enforce a specific workflow?

    - by dukeofgaming
    So, I have this little debate at work where some of my colleagues (who are actually in charge of administering our Perforce instance) say that workflows are strictly a process thing, and that the tools we use (in this case, the version control system) have no bearing on them. In other words, their point is that workflows (and their execution) are tool-agnostic. My take is that DVCSs are better at encouraging flexible, well-defined workflows, because of the branching inherently occurring in the background (anonymous branches), and that you can enforce workflows through the deployment model you establish (e.g. pull requests through repository management, dictator/lieutenant roles with their machines set up as servers, etc.). I think that in CVCSs you have to enforce workflows through policies and policing, because there is only one way to share the code, while in DVCSs you just go with the flow based on the infrastructure/permissions that were set up for you. Even having made these arguments, I'm still unable to fully convince them. Am I saying something the wrong way? If not, what other arguments or examples do you think would be useful to convince them? Edit: The main workflow we have been focusing on, because it makes sense to both sides, is the dictator/lieutenants workflow. My argument for this particular workflow is that there is no pipeline in a CVCS (because there is just sharing work in a centralized way), whereas there is an actual pipeline in DVCSs depending on how you deploy read/write permissions. Their argument is that this workflow can be achieved through branching, and while they do this in some projects (due to policy/policing), in other projects they forbid developers from creating branches.

    Read the article
