Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.

Page 399/537 | < Previous Page | 395 396 397 398 399 400 401 402 403 404 405 406  | Next Page >

  • How can I make my PHP development environment more efficient?

    - by pixel
    I want to start a home-brew pet project in PHP. I've spent some time in my life developing in PHP and I've always felt it was hard to organize the development environment efficiently. In my previous PHP work, I used a Windows desktop machine and a Linux server for development. This configuration had its advantages: it's easy to configure Apache (and its modules)/PHP/MySQL on a Linux box, and, at the time, this configuration was the same as on the production server. However, I never successfully set up a debug connection between my Eclipse install and Xdebug on the server. Transferring files from my local workspace to the server was also very annoying (either FTP or a Bazaar script moving files from the repository to the web root). For my new setup, I'm considering installing everything on my local machine. I'm afraid that it will slow down workstation performance (LAMP + Eclipse), and that compatibility problems will kick in. What would you recommend? Should I develop using two separate machines? On one? Do you have experience using one of the above configurations in your work?

    Read the article

  • USB stick too slow to benchmark?

    - by user85340
    I have a Core 2 Duo [email protected] with 3GB RAM. After some time using Xubuntu 10.10 on an 8GB stick I decided to switch to 12.04 and put it onto a 32GB stick (Transcend). I use EXT4 with no journalling, noatime, etc. set. /tmp and /run are using tmpfs. And it is REALLY slow. MUCH slower than the old Xubuntu on the 8GB stick. Starting takes minutes, and all applications "fade" because they respond too slowly. I first thought that the NVidia graphics card was responsible for this, because there seem to be some known problems with that. Doing the adjustment (unchecking the sync checkbox) did not help. I believe the root cause is that access to the USB stick is extremely slow. Running the read benchmark of the disk utility then brought the message "disk is too slow to benchmark"! BUT: When I do the same benchmark with the live CD I get around 20MB/s read performance and have a very responsive system! So how can I find out what is going on here?

    Read the article

  • Creating Ideal Customers with Modern Marketing

    - by Richard Lefebvre
    “Without that real-time perspective, it's just not possible to stay in step with what your customers want and need.” — Customer-Obsessed Marketing Is Your Next Competitive Edge. Every business talks about focusing on the customer. But few actually deliver. Why? Because digital marketing technology can’t tell a compelling story. It lacks engaging dialogue and offers no connection beyond the transaction. It’s lost in translation because marketers don’t speak code. And it’s confusing to the customer because marketing and IT can’t connect process and data. Take a look at your digital marketing picture. From a distance it may look fine. But look up close. It’s fragmented and the dots are not connected. You need much higher resolution. Step back and see the big picture. Zoom in on the individual customer. But you’ll need Modern Marketing technology engineered with enterprise-grade data management and proven cloud performance. Explore the people, processes, and technology of the Oracle Marketing Cloud. Create a culture of customer obsession. Simplify marketing across all channels to turn casual prospects into passionate advocates. Engage ideal customers with a meaningful experience. Personalize your brand narrative for each customer in every chapter of your story to increase engagement and revenue. Read the full article and watch the videos here

    Read the article

  • Rendering multiple squares fast?

    - by Sam
    so I'm taking my first steps with OpenGL development on Android and I'm kinda stuck on some serious performance issues... What I'm trying to do is render a whole grid of single-colored squares onto the screen, and I'm getting framerates of ~7FPS. The squares are 9px in size right now with a one-pixel border in between, so I get a few thousand of them. I have a class "Square" and the Renderer iterates over all Squares every frame and calls the draw() method of each (just the iteration is fast enough; with no OpenGL code the whole thing runs smoothly at 60FPS). Right now the draw() method looks like this: // Prepare the square coordinate data GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX, GLES20.GL_FLOAT, false, vertexStride, vertexBuffer); // Set color for drawing the square GLES20.glUniform4fv(mColorHandle, 1, color, 0); // Draw the square GLES20.glDrawElements(GLES20.GL_TRIANGLES, drawOrder.length, GLES20.GL_UNSIGNED_SHORT, drawListBuffer); So it's actually only 3 OpenGL calls. Everything else (loading shaders, filling buffers, getting appropriate handles, etc.) is done in the constructor, and things like the Program and the handles are also static attributes. What am I missing here, why is it rendering so slowly? I've also tried loading the buffer data into VBOs, but this is actually slower... Maybe I did something wrong though. Any help greatly appreciated! :)
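
    A common cause of this kind of slowdown is the sheer number of GL draw calls per frame rather than the cost of any single call. As a rough illustration only (not the poster's actual code; the SquareBatch class and its method names are made up for this sketch), the grid could be batched so that every square's vertices and indices live in one shared buffer and the whole grid is drawn with a single glDrawElements call per frame:

      import java.nio.ByteBuffer;
      import java.nio.ByteOrder;
      import java.nio.FloatBuffer;
      import java.nio.ShortBuffer;
      import android.opengl.GLES20;

      // Hypothetical batch renderer: all squares share one vertex/index buffer,
      // so the grid costs one draw call instead of thousands.
      public class SquareBatch {
          private static final int COORDS_PER_VERTEX = 3;
          private final FloatBuffer vertexBuffer; // 4 vertices per square
          private final ShortBuffer indexBuffer;  // 6 indices per square
          private final int indexCount;

          public SquareBatch(float[] allVertices, short[] allIndices) {
              vertexBuffer = ByteBuffer.allocateDirect(allVertices.length * 4)
                      .order(ByteOrder.nativeOrder()).asFloatBuffer();
              vertexBuffer.put(allVertices).position(0);
              indexBuffer = ByteBuffer.allocateDirect(allIndices.length * 2)
                      .order(ByteOrder.nativeOrder()).asShortBuffer();
              indexBuffer.put(allIndices).position(0);
              // GL_UNSIGNED_SHORT indices can address at most 65536 vertices per batch.
              indexCount = allIndices.length;
          }

          // positionHandle/colorHandle come from the same shared shader program
          // the question already sets up in its constructor.
          public void draw(int positionHandle, int colorHandle, float[] color) {
              GLES20.glEnableVertexAttribArray(positionHandle);
              GLES20.glVertexAttribPointer(positionHandle, COORDS_PER_VERTEX,
                      GLES20.GL_FLOAT, false, COORDS_PER_VERTEX * 4, vertexBuffer);
              GLES20.glUniform4fv(colorHandle, 1, color, 0);
              GLES20.glDrawElements(GLES20.GL_TRIANGLES, indexCount,
                      GLES20.GL_UNSIGNED_SHORT, indexBuffer);
          }
      }

    If each square later needs its own colour, the colour would move from a uniform into a per-vertex attribute so the single draw call can be kept.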

    Read the article

  • New Marketing Kits Available

    - by Cinzia Mascanzoni
    New marketing kits are available on the OPN portal: Oracle Optimized DataCenter; Oracle Storage for Oracle Database and Engineered Systems; StorageTek SL150 - New Scalable Storage Solutions for Growing Businesses; Extreme Database Performance meets Its Backup and Recovery Match with Oracle's Sun ZFS Backup Appliance; Maximize Value and Business Agility through Data Center Virtualization; Be A Content King with Oracle WebCenter Content.

    Read the article

  • Why is my USB data transfer so slow?

    - by Dave M G
    Whenever I do any kind of file transfer using USB, whether to a USB stick, or with my Android phone, or anything else, it is ridiculously slow. It says 59.8 KB/sec, which would be an awesome speed if this were 1991 and I was using a modem to dial up to my local BBS. Surely USB technology is better than that...? 37 seconds to move less data than the equivalent of 1 MP3 file? Also, regardless of what it says about speed and time, the reality is much, much slower. I routinely see it say something like "37 seconds left" and have to wait for minutes. Sometimes, if I want to move large numbers of files, it can say it will take 8 hours or more. Is this normal? My computer may not be the most awesome on the market, and is about a year old, but it's an i5 with 4GB RAM and modern components, so surely this isn't the hardware's fault. What can I do to get better USB data transfer performance? Also, I did look at this question, but my newbie eyes don't see anything that looks like an actual solution, just a lot of discussion about what transfer rates could or should be. Update: As requested in the comments, I've generated a whole bunch of output from the command line, and put it on Ubuntu Pastebin. Please see it here.

    Read the article

  • Choosing the Database Solution for Large Data Application

    - by GµårÐïåñ
    I have been tasked to write an application that will be a combination of document and inventory management in VB.net, which will be used to store document images in TIFF, PDF, XPS, TXT, DOC, PPT and so on as binary data that can be retrieved for viewing, printing, and possibly OCR to make them searchable, along with metadata such as sender, recipient, type of document, date, source, etc. So the table would probably be something like: DOC_NAME, DOC_DATE, NOTES, ... DOC_BINARY (where the actual document will be put inside). My concern is finding a database solution that will not become unstable due to size restrictions, record limitations, or performance. Some of the options are MS_SQL, SQL Express, SQLite, mySQL, and Access. Now I can pretty much eliminate Access right off the bat as it is just too limiting and not scalable. I can further eliminate SQL Express because of the 2 GB limit and, again, scalability. So that leaves me with MS_SQL, SQLite and mySQL (although if anyone has other options they think would be good as well, please feel free to share them; by no means am I set on these only). So this brings me to what you guys think is the best option for what I have described. The goal is that the data is all in one place (a single file), which will make backup and portability easier. For small-volume usage pretty much any solution will hold for a while, but my goal is to think ahead and make sure it's able to withstand heavy, large-volume usage as well. Another consideration is interoperability with .NET and the stability of such code to avoid errors and memory leaks. Your feedback would be greatly appreciated.

    Read the article

  • Open Source Highlight: namebench

    - by eddraper
    DNS is a big deal. Even small incremental changes to improve its performance can yield significant value due to the vast quantity of look-ups required when using the internet. Until now, it’s always been one of those things I had to kinda take on faith… was my ISP doing a good job? Are those public DNS servers really that much faster? What about security and privacy concerns? Let me introduce you to namebench. This is the kinda tool I really love – one that immediately delivers value and is almost over-the-top OCD in its attention to detail. Trust me, this tool is utterly ruthless in its quest for getting it right – you’re not left with a big question mark after it presents its data. The results are conclusive and actionable. Here’s what it does: it hunts down the fastest DNS servers from your desktop that it can find, using thousands of requests. No, it doesn’t pop up some little dialog in 10 seconds to give you an “off the cuff” answer from a handful of providers. It takes the better part of 10-15 minutes to run. When it finishes, it presents you with a veritable horn-o-plenty of data. Mean response duration, response distribution, bad data… no stone is left unturned. Check it out. You’ll dig it.

    Read the article

  • StreamInsight V2.0 Released!

    - by Roman Schindlauer
    The StreamInsight Team is proud to announce the release of StreamInsight V2.0! This is the version that ships with SQL 2012, and as such it has already been available through Connect to SQL CTP customers since December. As part of the SQL 2012 launch activities, we are now making V2.0 available to everyone, following our tradition of providing a separate download page. StreamInsight V2.0 includes a number of stability and performance fixes over its predecessor V1.2. Moreover, it introduces a dependency on the .NET Framework 4.0, as well as on SQL 2012 license keys. For these reasons, we decided to bump the major version number, even though V2.0 does not add new features or API surface. It can be regarded as a stepping stone to the upcoming release 2.1, which will contain significantly new APIs (that will depend on .NET 4.0). Head over here to download StreamInsight V2.0. The updated Books Online can be found here. Update: For instructions on how to make your existing application work against the new bits without recompilation, see here. Regards, The StreamInsight Team

    Read the article

  • Free Xsigo Technical Pre-sales workshop for Selected Partners!

    - by mseika
    In 2012 Oracle acquired Xsigo, a developer of network I/O virtualisation solutions. This acquisition complements Oracle’s extensive virtualisation portfolio. With Oracle Virtual Networking products (Xsigo) you can: virtualise connectivity from any server to any storage and any network; reduce datacentre complexity by 70%; cut infrastructure expenses by up to 50%. Benefits to Channel Partners: offer a unique proposition that your competitors can’t match; provide an innovative solution that delivers more performance at less cost; earn high margins that help sell more products and services. This course is aimed at Technical Pre-Sales Consultants, equipping them to provide detailed demos, give RFP feedback, and architect customer solutions. The language of this event is French. WHEN: 24th September 2013. WHERE: Oracle France, 15, boulevard Charles De Gaulle, 92715 COLOMBES. FEES: Free of charge. 09.00: Welcome, Coffee & Introduction 09.30: Value Propositions, Architecture & Use Cases 11.30: Build an OVN Web Quote & TCO 12.30: Lunch 13.30: Competitive Summary 14.00: Design Scenario Workshop 15.45: Questions/Opportunities. REGISTRATION: Register via this link as soon as possible, 14th June at the latest. Note that we have only 20 seats in total for this event. Note that after 14th June we will release free seats for other organizations to register. We look forward to your participation! What we expect from you: You will bring your own laptop. Recommended browser is Firefox 10 ESR. You have checked the material and conducted the assessments. You will be flexible in terms of agenda and progress, as we intend this to be more of a workshop with dialogue rather than sticking tightly to the tentative timeline. What this is not: This PartnerLab does not replace Oracle University training. This PartnerLab does not lead to a certification as such. This PartnerLab does not provide Partners with full and complete implementation skills.

    Read the article

  • What strategy should be employed to access Facebook data offline?

    - by user686021
    I'm working on a project similar to Klout which provides detail about how you influence other people and who influenced you. We'll be fetching data from a few social networking sites (i.e. LinkedIn, Facebook, Twitter, etc.) to analyze how users interact with one another. For that we need to parse the data, store it in a db, and analyze it so that the strength of the relation between two users can be determined. We'll be accessing data offline as well to provide them with accurate results. If we consider Facebook activities, we need access to Facebook users' news feed and wall data, which includes likes, comments, shares, etc. To decide how one user influences another, we'll store all the data and analyze it. I need suggestions on what steps need to be taken for good performance. We'll be using ASP.Net (C#) Web Forms, SQL Server, and jQuery. The main concern is parsing of data, and its storage and retrieval with the least overhead. For that I've summarized a few points below: Should we switch over to a document-oriented database, like MongoDB or RavenDB, for the whole app or part of it, even though none of the team members have experience with them? Should we use SQL Server Analysis Services? Is there any other library than Json.NET for parsing data? Is it advisable to use any C# library over FQL + GET requests? I've tried to provide as much info as possible. Please share your views on the same.

    Read the article

  • 13.04 Temp Save, rt3290, Kernel downgrade

    - by user170534
    It's kind of a multi-part question, but I didn't want to open more than one topic. I have a fresh install of Ubuntu with tlp configured, using acpi_call to keep the 7670M turned off. I was an Arch user for a short time, and with Openbox and Firefox it was about 60 to 70 degrees; I wanted to move to a stable release just for this reason. acpitz-virtual-0 Adapter: Virtual device temp1: +50.0°C radeon-pci-0100 Adapter: PCI adapter temp1: -128.0°C coretemp-isa-0000 Adapter: ISA adapter Physical id 0: +56.0°C (high = +87.0°C, crit = +105.0°C) Core 0: +54.0°C (high = +87.0°C, crit = +105.0°C) Core 1: +55.0°C (high = +87.0°C, crit = +105.0°C) The temperature is not seriously high, but it could be lower. Another issue is my wifi card, rt3290: the rt2800pci module is fine and all that, but the download performance is pretty bad, alongside annoying signal range, and the proprietary driver gives a conf error about some specific rt2860 code and integrales in the pci_dev file. If I blank out the variables causing the error, the module loads but the driver is unused. Since the driver is old, I was thinking of downgrading the kernel on Raring to 3.7, or maybe 3.6. Should I, and can I?

    Read the article

  • Merging similar graphs based solely on the graph structure?

    - by Buttons840
    I am looking for (or attempting to design) a technique for matching nodes from very similar graphs based on the structure of the graph*. In the examples below, the top graph has 5 nodes, and the bottom graph has 6 nodes. I would like to match the nodes from the top graph to the nodes in the bottom graph, such that the "0" nodes match, and the "1" nodes match, etc. This seems logically possible, because I can do it in my head for these simple examples. Now I just need to express my intuition in code. Are there any established algorithms or patterns I might consider? (* When I say based on the structure of the graph, I mean the solution shouldn't depend on the node labels; the numeric labels on the nodes are only for demonstration.) I'm also interested in the performance of any potential solutions. How well will they scale? Could I merge graphs with millions of nodes? In more complex cases, I recognize that the best solution may be subject to interpretation. Still, I'm hoping for a "good" way to merge complex graphs. (These are directed graphs; the thicker portion of an edge represents the head.)
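
    For what it's worth, one family of techniques that fits this description is iterative neighbourhood refinement, in the spirit of Weisfeiler-Lehman colouring: every node starts with a signature built only from its in/out degree and repeatedly absorbs its neighbours' signatures, so structurally equivalent nodes in the two graphs converge to the same signature and can be paired up. A rough, hedged sketch only (the graph representation, class name, and use of plain strings as signatures are illustrative assumptions, not from the question):

      import java.util.*;
      import java.util.stream.Collectors;

      // Hypothetical structural-signature builder for a directed graph given as
      // successor/predecessor adjacency maps keyed by node id. Nodes with equal
      // signatures across two graphs become candidate matches; hashing the
      // strings each round would keep signatures short enough for large graphs.
      public class StructuralSignatures {
          public static Map<Integer, String> compute(Map<Integer, List<Integer>> succ,
                                                     Map<Integer, List<Integer>> pred,
                                                     int rounds) {
              Map<Integer, String> sig = new HashMap<>();
              // Round 0: signature is just out-degree / in-degree.
              for (Integer v : succ.keySet()) {
                  sig.put(v, succ.get(v).size() + "/" + pred.getOrDefault(v, List.of()).size());
              }
              // Each round appends the sorted signatures of successors and predecessors.
              for (int r = 0; r < rounds; r++) {
                  Map<Integer, String> next = new HashMap<>();
                  for (Integer v : succ.keySet()) {
                      String out = succ.get(v).stream().map(sig::get).sorted()
                              .collect(Collectors.joining(","));
                      String in = pred.getOrDefault(v, List.of()).stream().map(sig::get).sorted()
                              .collect(Collectors.joining(","));
                      next.put(v, sig.get(v) + "|out[" + out + "]|in[" + in + "]");
                  }
                  sig = next;
              }
              return sig;
          }
      }

    Matching then amounts to grouping the nodes of both graphs by signature; exact graph matching is hard in general, so at the scale of millions of nodes this kind of heuristic refinement (plus some tie-breaking) is usually the practical route.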

    Read the article

  • Display glitches running ATI proprietary driver under Ubuntu 12.10

    - by crystallero
    I have a lot of problems with the ATI proprietary driver (fglrx). I have an iMac (mid 2011) with a Radeon HD 6900M [1002:6720]. I did not have any problem under Ubuntu 12.04, but since I updated to 12.10, I get some annoying graphic corruption. The worst one is that sometimes the screen does not update with the new information. It happens a lot when I change between tabs in Chrome or Sublime Text. It usually gets updated when I scroll the page. Sometimes, when I type, I have to wait a little bit to see the new characters. And I get trails when I move windows too (like a part of the window). After a while, the trail disappears. I tried to install fglrx, fglrx-updates and the new beta driver downloaded from ATI (12.11 Beta 11/16/2012), with no luck. The same thing happens with all of them. I tried to mess with Compiz config, but it didn't fix anything. The open source driver does not suffer this problem, but I need the performance of the proprietary driver. Do you have any clue? Thanks.

    Read the article

  • Algorithms for Data Redundancy and Failover for distributed storage system?

    - by kennetham
    I'm building a distributed storage system that works with different storage sizes. For instance, my storage devices have sizes of 50GB, 70GB, 150GB, 250GB, 1000GB: five storage devices in one system. My application will store any files to the storage system. Question: How can I build a distributed storage system with data redundancy and fail-over to store documents, videos, and any type of file, while ensuring that, should any one of the storage devices fail, there would be another copy of those files on another storage device? The concern is that 50GB of storage can only hold so many files compared to 70GB, 150GB, etc. Bringing the five storage devices together like a cloud storage pool, is there any logical way to distribute or store the files through my application? How do I ensure data redundancy across different storage sizes? Is there any algorithm to collate multiple blob files into a single file archive? What is the best solution for one cloud storage pool with multiple different storage sizes? I open this topic with the objective of discussing the best way to implement this idea, assuming simplicity: the issues of such an implementation, performance measurements, and a discussion of its limitations.
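
    As a purely illustrative sketch of one common pattern for this problem (capacity-weighted rendezvous hashing with N replicas; the device names, capacities, and replica count below are made-up assumptions, not from the question), each file key is scored against every device and stored on the top N scorers, so bigger devices receive proportionally more files and a failed device only affects the copies it held:

      import java.util.*;

      // Hypothetical capacity-weighted rendezvous (HRW) hashing sketch (Java 16+
      // for records/toList): pick 'replicas' distinct devices per file key, with
      // larger devices winning proportionally more often.
      public class ReplicaPlacer {
          public record Device(String id, long capacityGb) {}

          private final List<Device> devices;

          public ReplicaPlacer(List<Device> devices) {
              this.devices = devices;
          }

          public List<Device> place(String fileKey, int replicas) {
              Map<Device, Double> score = new HashMap<>();
              for (Device d : devices) {
                  // Hash (key, device) into (0, 1), then weight the score by capacity.
                  double h = (((fileKey + "|" + d.id()).hashCode() & 0x7fffffff) + 1.0)
                          / (Integer.MAX_VALUE + 2.0);
                  score.put(d, -d.capacityGb() / Math.log(h));
              }
              return devices.stream()
                      .sorted(Comparator.comparingDouble(score::get).reversed())
                      .limit(replicas)
                      .toList();
          }

          public static void main(String[] args) {
              ReplicaPlacer placer = new ReplicaPlacer(List.of(
                      new Device("disk-050", 50), new Device("disk-070", 70),
                      new Device("disk-150", 150), new Device("disk-250", 250),
                      new Device("disk-1000", 1000)));
              // Two copies of each file; re-running with one device removed
              // re-homes only the keys that device used to hold.
              System.out.println(placer.place("videos/demo.mp4", 2));
          }
      }

    A real system would still need heartbeat-based failure detection and background re-replication when a device drops out, but the placement logic itself can stay roughly this small.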

    Read the article

  • How do I find actors in an area on a poly-precise basis?

    - by Almo
    Ok, I've been asking various questions and getting some good answers, but I think I need to rethink my method, so I'll describe the problem. I have a player who has a big blue box in front of him. This box shows which KActors will be pushed when he pulls the trigger: Currently, the blue box spawns a descendant of Actor which checks collision to see which KActors are touching it: foreach Owner.TouchingActors(class'DynamicSMActor', DynamicActorItt) { // do stuff } The problem is, if you check for touching between Actors and KActors, it looks like it does a plain axis-aligned bounding-box collision. The power will push the box on the lower right, when it's clear it's not touching the blue box. How should I do this properly? I just need a way to find out which KActors are touching that area, on a poly-by-poly level. These collisions are only done with rectangular boxes and simple sphere collision; we are aware of the potential for performance issues with complex objects and poly-collision. I've tried making the collision checker a KActor, but it doesn't report any TouchingActors. This issue is causing us trouble in a lot of other places as well. So solving this problem is a core issue in our game.

    Read the article

  • Has anyone used Game Salad before, and how does it compare with cocos2d in terms of 2D game development?

    - by jih
    First a short intro: I am new to the game development space and want to make some 2D games for iOS. I first came across cocos2d and Kobold, but then wanted something more graphical for rapid prototyping. I then found Game Maker, which doesn't support iOS but is fairly easy to learn, and then found Game Salad, which supports iOS as well as other platforms. I know this question has been asked before, but I want to know, in terms of the types of games I want to develop, what learning investment path would be best. The game genres I am interested in are: side scrollers; simple games like Diamond Dash or Ninja Fruits, Shanghai, etc.; old-fashioned Zelda or Dragon Quest type (Nintendo fan here :-) 2D adventure RPG games (real time or turn based); mystery turn-based games like Carmen Sandiego, Wizardry, Myst, etc. So now the question becomes: which game development environment should I invest my time in learning, Game Salad or cocos2d? It would seem Game Salad would be great for quickies, being graphical, but in terms of 2D platform games etc. would there be speed/performance/feature penalties? Are there certain genres of the four above that Game Salad is better at, while cocos2d would be better at others? Can anyone with experience of both share some pointers? Thanks. inexperienced jih

    Read the article

  • Is sending data to a server via a script tag an outdated paradigm?

    - by KingOfHypocrites
    I inherited some old JavaScript code for a website tracker that submits data to the server using a script URL: var src = "http://domain.zzz/log/method?value1=x&value2=x" var e = document.createElement('script'); e.src = src; I guess the idea was that cross-domain requests didn't have to be enabled, perhaps. Also it was written back in 2005, and I'm not sure how well XmlHttpRequests were supported at the time. Anyone could stick this on their website and send data to our server for logging, and it ideally would work in almost any browser with JavaScript. The main limitation is that all the server can do is send back JavaScript code, and each request has to wait for a response from the server (in the form of a generic acknowledgement JavaScript method call) to know it was received before it sends the next. I can't find anyone doing this online, or any metrics as to whether this is faster or more secure than XmlHttpRequests. I don't know if this is just an old way of doing things or whether it's still the best way to send data to the server when you are mostly trying to send data one way and you need the best performance possible. So, in summary, is sending data via a script tag an outdated paradigm? Should I abandon it in favor of XmlHttpRequests?

    Read the article

  • SPARC SuperCluster Papers

    - by user12616590
    Oracle has been publishing white papers that describe uses and characteristics of the SPARC SuperCluster product. Here are just a few: A Technical Overview of the Oracle SPARC SuperCluster T4-4: SPARC SuperCluster T4-4 is a high performance, multi-purpose engineered system that has been designed, tested and integrated to run a wide array of enterprise applications. It is well suited for multi-tier enterprise applications with Web, database and application components. This 20-page paper discusses the components and technical characteristics of this product. SPARC SuperCluster T4-4 Platform Security Principles and Capabilities: The security capabilities designed into the SPARC SuperCluster, and architectural, deployment, and operational best practices for taking advantage of them. Consolidating Oracle E-Business Suite on Oracle’s SPARC SuperCluster: This Oracle Optimized Solution describes the implementation and use of SPARC SuperCluster as a consolidation platform for E-Business Suite in 30 pages. Oracle Optimized Solution for Oracle PeopleSoft Human Capital Management on SPARC SuperCluster: The Oracle Optimized Solution for PeopleSoft Human Capital Management on SPARC SuperCluster is the industry's only proven, tested, applications-to-disk solution that maintains excellence managing absences, optimizing collaborative activities, streamlining knowledge and honing processes; 31 pages. I hope you find some of those papers useful.

    Read the article

  • Less graphics power all of a sudden (Intel HD 3000)

    - by queueoverflow
    I have an Intel Sandy Bridge i5 with the HD 3000 graphics card. I used to be able to play Urban Terror and Nexuiz comfortably at 85 and 60 frames per second until mid/end of October 2012, the former even on a full HD display with that many frames. Now I get around 30 to 45 on the smaller laptop screen and around 20 to 30 on the external monitor. Did something happen to Kubuntu 12.04 so that it has less graphics performance than previously? Update: I looked at the system monitor and could not detect anything being at its maximum. The four CPU cores were pretty much bored, and only about 2 GB of the 8 GB of RAM was in use. And I ran intel_cpu_top and did not notice anything at its limit. See the output. Update after kernel bisecting: I did a kernel bisect and tried 3.2.0-23, 3.2.0-27, 3.2.0-29 and 3.2.0-30, and all had full graphics power. Interestingly, I then had full power when I just booted back into the regular 3.2.0-32 kernel. This does not make sense to me …

    Read the article

  • Purpose oriented user accounts on a single desktop?

    - by dd_dent
    Starting point: I currently do development for Dynamics AX, Android, and an occasional dabble with WordPress and Python. Soon, I'll start a project involving setting up WP on Google App Engine. Everything is, and should continue to, run from the same PC (running Linux Mint). Issue: I'm afraid of botching/bogging down my setup due to tinkering with and installing multiple runtimes/IDEs/SDKs/services, so I was thinking of using multiple users, each purposed to handle the task at hand (web, Android, etc.) and making each user as inert as possible to the others. What I need to know is the following: Is this a good/feasible practice? The next closest thing to this is using remote desktop connections, either to computers or to VMs, which I'd rather avoid. What about switching users? Can it be made seamless? Anything else I should know? Update and clarification regarding VMs and whatnot: The reason I wish to avoid resorting to VMs is that I dislike the performance impact and sluggishness associated with them. I also suspect they might add a layer of complexity I wish to avoid. This answer by Wyatt is interesting, but I think it's only partly suited to my requirements (web development, for example). Also, in reference to the point made about system-wide installs, there is a level of compromise I should accept, as expressed by this, for example. This option suggested by 9000 is also enticing (more than VMs, actually), and by no means do I intend to "juggle" JVMs and whatnot, partly for the reason mentioned before. Regarding complexity, I agree and will consider what was said, only from my experience I tend to pollute my work environment with SDKs and runtimes I've tried and discarded, which occasionally leave leftovers that cause issues throughout the session. What I really want is a set of well-defined, non-virtualized sessions from which I can choose at my leisure and be mostly (to a reasonable extent) safe from one session affecting another. And what I'm really asking is if, and how, this can be done using user accounts.

    Read the article

  • Writing generic code when your target is a C compiler

    - by enobayram
    I need to write some algorithms for a PIC microcontroller. AFAIK, the official tools support either assembler or a subset of C. My goal is to write the algorithms in a generic and reusable way without losing any runtime or memory performance. And if possible, I would like to do this without increasing development time much or compromising readability and maintainability much either. What I mean by generic and reusable is that I don't want to commit to types, array sizes, the number of bits in a bit field, etc. All these specifications, IMHO, point to C++ templates, but there's no C++ compiler for my target. C macro metaprogramming is another option, but, again in my opinion, that greatly reduces readability and increases development time. I believe what I'm looking for is a decent C++ to C translator, but I'd like to hear anything else that satisfies the above requirements. Maybe a translator from another high-level language to C that produces very efficient code, maybe something else. Please note that I have nothing against C, I just wish templates were available in it.

    Read the article

  • Would this data requirement suit a Document -Oriented database?

    - by codecowboy
    I have a requirement to allow users to fill in journal/diary entries per day. I want to provide a handful of known journal templates with x columns to fill in. An example might be a thought diary; a user has to record a thought in one column, describe the situation, rate how they felt etc. The other requirement is that a user should be able to create their own diary templates. They might have a need for a 10 column diary entry per day and might need to rate some aspect out of 50 instead of 10. In an RDBMS, I can see this getting quite complicated. I could have individual tables for my known templates as the fields will be fixed. But for custom diary templates I imagine I would would need a table storing custom_field_types (the diary columns), a table storing entries referencing their field types (custom_entries) and then a third custom_diary table which would store rows matching custom_entries to diaries. Leaving performance / scaling aside, would it be any simpler or make more sense to use a document oriented database like MongoDB to store this data? This is for a web application which might later need an API for mobile devices.

    Read the article

  • Create Math Game with PHP, Ajax, Jquery

    - by Sambucasun
    I am developing a website where users can create their own games, which can be joined by other users as well. It's a simple maths game which will fire off equations based on a specified time or count. I want that, the moment a user creates a game, it is listed in the "Current Games" section. Other users can check out the list and select a game to join. After a game is created, the creator should have a screen showing his name and display pic. Then, gradually, as others start joining the game, the list should update automatically. Once enough users are there I will start the game. The same list should be displayed to the other users who join the game. Once the game is over, everyone will be shown a summary list. I have gone through a couple of threads but could not get a clear idea. Do I need to use Comet or some other technology to create such a game, or will simple PHP, Ajax, or jQuery suffice? Also, I want my website to be mobile compatible, so I am designing it in HTML5. If I create this game using just Ajax, will there be any performance issues when playing on mobile? I am not very experienced, so I just need guidance on what would be appropriate for my requirements.

    Read the article
