Search Results

Search found 6634 results on 266 pages for 'fast fashion'.


  • How to manage and estimate unstructured requirements received from customers

    - by user20358
    A lot of the time I receive a software system's requirements from our customers in a very unstructured format. It is usually a bunch of "product development" guys on the customer's side who come up with these "proposed solutions" to the business problems they have. While they are the experts in the business domain, a lot of the time they don't have the solutions right. This results in: multiple versions of the same requirement; two requirements getting mixed into one; and, a few versions later, the combined requirements getting separated out again, each taking some of the new additions with it. How do you work with requirements coming in like this and sort them out into proper use cases before development begins? What tools can we use to track a particular requirement's history, from the first time it was conceived until the time it crystallizes into a proper use case? Estimating work against requirements received in this fashion is a nightmare that ends in mistakes in understanding the requirement and in estimating the effort against it. Any tips, tools or tricks to make this activity more manageable? I'm just trying to get some insights from someone more experienced than I am in requirements management and effort estimation.

    Read the article

  • Fastest way to set up a JSON server on my local machine [closed]

    - by Mohsen
    I am a front-end developer. For many experiments I do, I need a server that talks JSON with my client-side app. Normally that server is a simple one that responds to my POSTs and GETs. For example, I need to set up a server that saves, modifies and reads data from a "library" database, like this: POST /books creates a book; GET /book/:id gets a book; and so on... What is the fastest-to-set-up and easiest technology stack for the database and server in this case? I am open to using Ruby, Node.js or anything that does the job fast and easily. Is there any framework (in any language) that does this kind of thing for me?
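
    As an illustration of how little is needed, here is a minimal sketch of the book API described above, assuming Node.js with Express (npm install express) and an in-memory Map standing in for the database:

        // server.js - minimal JSON API sketch (assumes: npm install express)
        const express = require('express');
        const app = express();
        app.use(express.json()); // parse JSON request bodies

        const books = new Map(); // in-memory stand-in for the "library" database
        let nextId = 1;

        // POST /books - create a book
        app.post('/books', (req, res) => {
          const book = { id: nextId++, ...req.body };
          books.set(book.id, book);
          res.status(201).json(book);
        });

        // GET /book/:id - fetch a book by id
        app.get('/book/:id', (req, res) => {
          const book = books.get(Number(req.params.id));
          if (!book) return res.status(404).json({ error: 'not found' });
          res.json(book);
        });

        app.listen(3000, () => console.log('JSON server on http://localhost:3000'));

    Exercise it with, for example, curl -X POST http://localhost:3000/books -H "Content-Type: application/json" -d '{"title":"Dune"}'.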

    Read the article

  • How would I batch rename a lot of files from the command line?

    - by Whisperity
    I have a problem which I am unable to solve: I need to rename a great dump of files using patterns. I tried using this, but I always get an error. I have a folder with a lot of files inside. Running ls -1 | wc -l reports that I have about 160000 files. The problem is that I wish to move these files to a Windows system, but most of them have characters like : and ? in their names, which makes the files inaccessible on said Windows-based systems. (As a "do not solve but deal with" method, I tried booting up a LiveCD on the Windows system and moving the files using the live OS. Under that Ubuntu, the files were readable and writable on the mounted NTFS partition, but when I booted back into Windows, it showed that the file was there but Windows was unable to access it in any fashion: rename, delete or open.) I tried running rename 's/\:/_' * inside the folder, but I got an Argument list too long error. Some searching revealed that this happens because I have so many files, and that is how I arrived here. The problem is that I don't know how to alter the command to suit my needs, as I always end up with various errors. Trying find -name '*:*' | xargs rename : _ gives: xargs: unmatched single quote; by default quotes are special to xargs unless you use the -0 option [\n] syntax error at (eval 1) line 1, near ":" [\n] xargs: rename: exited with status 255; aborting. Adding -0 after xargs turns the error message into: xargs: argument line too long. These files are archive files generated by various PHP scripts. The best solution would be to rename them before they are moved to Windows, but if there is no way to do that, perhaps there is a way to rename the files while they are being moved to Windows. I use Samba and proftpd to move the files. Unfortunately, graphical software is out of the question, as the server containing the files is what it is, a server, with only a command-line interface.
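
    One way around both the quoting trouble and the argument-list limit is to let a script walk the directory itself, instead of having the shell expand a glob. A minimal sketch in Node.js, assuming Node happens to be available on the server (the script name is made up):

        // rename-illegal.js - strip Windows-illegal characters from every
        // file name in the current directory (a sketch, assuming Node.js).
        // Note: it does not guard against two names mapping to the same result.
        const fs = require('fs');

        for (const name of fs.readdirSync('.')) {
          const safe = name.replace(/[:?]/g, '_'); // characters NTFS refuses
          if (safe !== name) {
            fs.renameSync(name, safe); // one rename(2) call per file
          }
        }

    Because each file is renamed through a direct system call rather than a constructed command line, the Argument list too long error cannot occur.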

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I am most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007 while I was still an FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed Oracle Coherence. Beyond this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... which is still not very helpful. In general, when people don't fully understand a technology or a concept, they tend to find some shortcut, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. The shortcut was "Oracle Coherence Cache helps to improve performance". An excellent marketing slogan... but still not very meaningful. By chance I got away from that group quickly, in July 2007 at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career at Oracle, delivered by Brian Oliver.

    The biggest mistake I made was to assume that performance improvement with Coherence was about response time. That seemed legitimate at the time, because after all, caches help to reduce latency on cached data access, and hence reduce response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time you have to partially rewrite your application to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

    The key mistake was my perception of, or obsession with, how performance improvement should be driven, and I strongly believe this is still a common problem for most developers. We all know that the performance of a system is generally presented as Capacity (or Throughput), with the two important dimensions Speed (response time) and Volume (load):

    Capacity (TPS) = Volume (T) / Speed (S)

    (For example, at a constant response time of 0.2 seconds per transaction, one serial execution path sustains at most 5 TPS.) To increase the Capacity, we can either reduce the Speed (in terms of response time) or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management, since there is a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume... In all traditional systems, an increase in Volume (transactions) will also increase the Speed (response time) at some point. The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time of 100 entries.

    If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with faster CPUs and/or a faster network to reduce network latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is designing applications which can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is actually not just about having an application run as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is generally combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from the programming angle (fewer network hops to complete a task) and/or from a hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via its Distributed Cache, because it can guarantee:
    - constant data-object access time, independent of the number of objects and of the Coherence cluster size
    - data-object distribution by affinity, for in-memory data grouping
    - in-place data processing, for parallel execution
    To summarize, Oracle Coherence is indeed useful for improving your application's performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under extreme load while keeping a consistent Speed. In the future I will keep adding new blog entries around this topic, sharing some sample code and experiences that I have captured over the last few years. In the meantime, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, then start playing with the product through our tutorial. Have fun!

    Read the article

  • Very different I/O performance in C++ on Windows

    - by Mr.Gate
    Hi all, I'm a new user and my English is not so good, so I hope to be clear. We're facing a performance problem with large files (1 GB or more), especially, it seems, when you try to grow them in size. Anyway, to verify our impressions, we tried the following (on Windows 7 64-bit, 4 cores, 8 GB RAM, 32-bit code compiled with VC2008):

    a) Open a file that does not yet exist. Write it from the beginning up to 1 GB in 1 MB slots. Now you have a 1 GB file. Now randomize 10000 positions within that file, seek to each position and write 50 bytes there, no matter what you write. Close the file and look at the results. Time to create the file is quite fast (about 0.3"), and the time to do the 10000 writes is fast all the same (about 0.03"). Very good, this is the beginning. Now try something else...

    b) Open a nonexistent file, seek to 1 GB minus 1 byte and write just 1 byte. Now you have another 1 GB file. Follow the next steps exactly the same way as case 'a', close the file and look at the results. Time to create the file is the fastest you can imagine (about 0.00009"), but the write time is something you can't believe... about 90"!!!!!

    b.1) Open a nonexistent file, don't write any bytes. Act as before, randomizing, seeking and writing; close the file and look at the result. The time to write is just as long: about 90"!!!!!

    Ok... this is quite amazing. But there's more!

    c) Open again the file you created in case 'a', don't truncate it... randomize again 10000 positions and act as before. You're as fast as before, about 0.03" to write 10000 times. This sounds OK... try another step.

    d) Now open the file you created in case 'b', don't truncate it... randomize again 10000 positions and act as before. You're slow again and again, but the time is reduced to... 45"!! Maybe, trying again, the time will reduce further. I actually wonder why... Any idea?

    The following is part of the code I used to test what I described in the previous cases (you'll have to change something in order to get a clean compile; I just cut & pasted from some source code, sorry). The sample can read and write in random, ordered or reverse-ordered mode, but writing only, in random order, is the clearest test. We tried using std::fstream but also CreateFile(), WriteFile() and so on directly, and the results are the same (even if std::fstream is actually a little slower).

    Parameters for case 'a' = -f_tempdir_\casea.dat -n10000 -t -p -w
    Parameters for case 'b' = -f_tempdir_\caseb.dat -n10000 -t -v -w
    Parameters for case 'b.1' = -f_tempdir_\caseb.dat -n10000 -t -w
    Parameters for case 'c' = -f_tempdir_\casea.dat -n10000 -w
    Parameters for case 'd' = -f_tempdir_\caseb.dat -n10000 -w

    Run the test (and even others) and see...

    // iotest.cpp : Defines the entry point for the console application.
    //
    #include <windows.h>
    #include <iostream>
    #include <set>
    #include <vector>
    #include <fstream>   // added so the sample compiles cleanly
    #include <string>    // added so the sample compiles cleanly
    #include <algorithm> // for std::min
    #include <cstring>   // for memset
    #include "stdafx.h"

    double RealTime_Microsecs()
    {
        LARGE_INTEGER fr = {0, 0};
        LARGE_INTEGER ti = {0, 0};
        double time = 0.0;
        QueryPerformanceCounter(&ti);
        QueryPerformanceFrequency(&fr);
        time = (double) ti.QuadPart / (double) fr.QuadPart;
        return time;
    }

    int main(int argc, char* argv[])
    {
        std::string sFileName;
        size_t stSize, stTimes, stBytes;
        int retval = 0;
        char *p = NULL;
        char *pPattern = NULL;
        char *pReadBuf = NULL;
        try {
            // Defaults
            stSize = 1 << 30; // 1Gb
            stTimes = 1000;
            stBytes = 50;
            bool bTruncate = false;
            bool bPre = false;
            bool bPreFast = false;
            bool bOrdered = false;
            bool bReverse = false;
            bool bWriteOnly = false;
            // Consume the parameters
            for (int index = 1; index < argc; ++index) {
                if ('-' != argv[index][0]) throw;
                switch (argv[index][1]) {
                    case 'f': sFileName = argv[index] + 2; break;
                    case 's': stSize = xw::str::strtol(argv[index] + 2); break;
                    case 'n': stTimes = xw::str::strtol(argv[index] + 2); break;
                    case 'b': stBytes = xw::str::strtol(argv[index] + 2); break;
                    case 't': bTruncate = true; break;
                    case 'p': bPre = true, bPreFast = false; break;
                    case 'v': bPreFast = true, bPre = false; break;
                    case 'o': bOrdered = true, bReverse = false; break;
                    case 'r': bReverse = true, bOrdered = false; break;
                    case 'w': bWriteOnly = true; break;
                    default: throw; break;
                }
            }
            if (sFileName.empty()) {
                std::cout << "Usage: -f<File Name> -s<File Size> -n<Number of Reads and Writes> -b<Bytes per Read and Write> -t -p -v -o -r -w" << std::endl;
                std::cout << "-t truncates the file, -p pre load the file, -v pre load 'veloce', -o writes in order mode, -r write in reverse order mode, -w Write Only" << std::endl;
                std::cout << "Default: 1Gb, 1000 times, 50 bytes" << std::endl;
                throw;
            }
            if (!stSize || !stTimes || !stBytes) {
                std::cout << "Invalid Parameters" << std::endl;
                return -1;
            }
            size_t stBestSize = 0x00100000;
            std::fstream fFile;
            // openmode(0) keeps the ternary well-typed
            fFile.open(sFileName.c_str(), std::ios_base::binary | std::ios_base::out | std::ios_base::in | (bTruncate ? std::ios_base::trunc : std::ios_base::openmode(0)));
            p = new char[stBestSize];
            pPattern = new char[stBytes];
            pReadBuf = new char[stBytes];
            memset(p, 0, stBestSize);
            memset(pPattern, (int)(stBytes & 0x000000ff), stBytes);
            double dTime = RealTime_Microsecs();
            size_t stCopySize, stSizeToCopy = stSize;
            if (bPre) {
                do {
                    stCopySize = std::min(stSizeToCopy, stBestSize);
                    fFile.write(p, stCopySize);
                    stSizeToCopy -= stCopySize;
                } while (stSizeToCopy);
                std::cout << "Creating time is: " << xw::str::itoa(RealTime_Microsecs() - dTime, 5, 'f') << std::endl;
            } else if (bPreFast) {
                fFile.seekp(stSize - 1);
                fFile.write(p, 1);
                std::cout << "Creating Fast time is: " << xw::str::itoa(RealTime_Microsecs() - dTime, 5, 'f') << std::endl;
            }
            size_t stPos;
            ::srand((unsigned int)dTime);
            double dReadTime, dWriteTime;
            stCopySize = stTimes;
            std::vector<size_t> inVect;
            std::vector<size_t> outVect;
            std::set<size_t> outSet;
            std::set<size_t> inSet;
            // Prepare vector and set
            do {
                stPos = (size_t)(::rand() << 16) % stSize;
                outVect.push_back(stPos);
                outSet.insert(stPos);
                stPos = (size_t)(::rand() << 16) % stSize;
                inVect.push_back(stPos);
                inSet.insert(stPos);
            } while (--stCopySize);
            // Write & read using vectors (random order)
            if (!bReverse && !bOrdered) {
                std::vector<size_t>::iterator outI, inI;
                outI = outVect.begin();
                inI = inVect.begin();
                stCopySize = stTimes;
                dReadTime = 0.0;
                dWriteTime = 0.0;
                do {
                    dTime = RealTime_Microsecs();
                    fFile.seekp(*outI);
                    fFile.write(pPattern, stBytes);
                    dWriteTime += RealTime_Microsecs() - dTime;
                    ++outI;
                    if (!bWriteOnly) {
                        dTime = RealTime_Microsecs();
                        fFile.seekg(*inI);
                        fFile.read(pReadBuf, stBytes);
                        dReadTime += RealTime_Microsecs() - dTime;
                        ++inI;
                    }
                } while (--stCopySize);
                std::cout << "Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " (Ave: " << xw::str::itoa(dWriteTime / stTimes, 10, 'f') << ")" << std::endl;
                if (!bWriteOnly) {
                    std::cout << "Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " (Ave: " << xw::str::itoa(dReadTime / stTimes, 10, 'f') << ")" << std::endl;
                }
            } // End
            // Write in order
            if (bOrdered) {
                std::set<size_t>::iterator i = outSet.begin();
                dWriteTime = 0.0;
                stCopySize = 0;
                for (; i != outSet.end(); ++i) {
                    stPos = *i;
                    dTime = RealTime_Microsecs();
                    fFile.seekp(stPos);
                    fFile.write(pPattern, stBytes);
                    dWriteTime += RealTime_Microsecs() - dTime;
                    ++stCopySize;
                }
                std::cout << "Ordered Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dWriteTime / stCopySize, 10, 'f') << ")" << std::endl;
                if (!bWriteOnly) {
                    i = inSet.begin();
                    dReadTime = 0.0;
                    stCopySize = 0;
                    for (; i != inSet.end(); ++i) {
                        stPos = *i;
                        dTime = RealTime_Microsecs();
                        fFile.seekg(stPos);
                        fFile.read(pReadBuf, stBytes);
                        dReadTime += RealTime_Microsecs() - dTime;
                        ++stCopySize;
                    }
                    std::cout << "Ordered Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dReadTime / stCopySize, 10, 'f') << ")" << std::endl;
                }
            } // End
            // Write in reverse order
            if (bReverse) {
                std::set<size_t>::reverse_iterator i = outSet.rbegin();
                dWriteTime = 0.0;
                stCopySize = 0;
                for (; i != outSet.rend(); ++i) {
                    stPos = *i;
                    dTime = RealTime_Microsecs();
                    fFile.seekp(stPos);
                    fFile.write(pPattern, stBytes);
                    dWriteTime += RealTime_Microsecs() - dTime;
                    ++stCopySize;
                }
                std::cout << "Reverse ordered Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dWriteTime / stCopySize, 10, 'f') << ")" << std::endl;
                if (!bWriteOnly) {
                    i = inSet.rbegin();
                    dReadTime = 0.0;
                    stCopySize = 0;
                    for (; i != inSet.rend(); ++i) {
                        stPos = *i;
                        dTime = RealTime_Microsecs();
                        fFile.seekg(stPos);
                        fFile.read(pReadBuf, stBytes);
                        dReadTime += RealTime_Microsecs() - dTime;
                        ++stCopySize;
                    }
                    std::cout << "Reverse ordered Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dReadTime / stCopySize, 10, 'f') << ")" << std::endl;
                }
            } // End
            dTime = RealTime_Microsecs();
            fFile.close();
            std::cout << "Flush/Close Time is " << xw::str::itoa(RealTime_Microsecs() - dTime, 5, 'f') << std::endl;
            std::cout << "Program Terminated" << std::endl;
        } catch (...) {
            std::cout << "Something wrong or wrong parameters" << std::endl;
            retval = -1;
        }
        if (p) delete[] p;
        if (pPattern) delete[] pPattern;
        if (pReadBuf) delete[] pReadBuf;
        return retval;
    }

    Read the article

  • Sales tracker that allows complex queries?

    - by feklee
    On a site, every click on a product should be registered by a sales tracker: price, type, etc. The sales tracker should provide an API so that complex queries can be performed, such as: which products of type "teapot" had a price below 20 EUR? Requirements: recorded data should be available for querying no later than two hours after it has been recorded (there are reports that Google Analytics may take up to 24 hours to update its data; that is not acceptable). Querying doesn't need to be fast, but recording does (of course). Which sales tracker allows complex queries like this against the collected data?
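
    To make "complex queries" concrete, here is a sketch of the teapot query above, assuming, purely as an illustration, that each click were recorded as a document in a MongoDB collection named clicks (the schema is hypothetical):

        // Hypothetical click document:
        // { product: 'P-17', type: 'teapot', price: 18.5, currency: 'EUR', ts: ISODate('...') }

        // Which products of type "teapot" had a price below 20 EUR?
        db.clicks.find(
          { type: 'teapot', currency: 'EUR', price: { $lt: 20 } },
          { product: 1, price: 1, _id: 0 } // projection: only the fields we need
        )

        // The same question as a list of distinct product ids:
        db.clicks.distinct('product', { type: 'teapot', price: { $lt: 20 } })

    Any tracker or event store whose query API supports this style of ad-hoc filtering would satisfy the requirement.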

    Read the article

  • Excel 2013 Data Explorer and GeoFlow make 3-D maps quick and easy

    - by John Paul Cook
    Excel add-ins Data Explorer and GeoFlow work well together, mainly because they just work. Simple, fast, and powerful. I started Excel 2013, used Data Explorer to search for, examine, and then download latitude-longitude data, and finally used GeoFlow to plot an interactive 3-D visualization. I didn't use any fancy Excel commands and the entire process took less than 3 minutes. You can download the GeoFlow preview from here. It can also be used with Office 365. Start by clicking the DATA EXPLORER...

    Read the article

  • Simple solution now to a problem from 8 years ago. Use SQL windowing function

    - by Kevin Shyr
    Originally posted on: http://geekswithblogs.net/LifeLongTechie/archive/2014/06/10/simple-solution-now-to-a-problem-from-8-years-ago.aspx

    I remember having this problem 8 years ago. We had to find the top 5 donors per month and send out some awards. The SQL we came up with was clunky and had lots of limitations (it could only do one year at a time; then we'd switch the WHERE clause and run it again). Fast forward 8 years: I got a similar problem where we had to find the top 3 combinations of 2 fields for every single day. And the solution is this elegant (note that ROW_NUMBER() has to be computed in a derived table before it can be filtered on):

    SELECT RecordDate, status_cd, nbr, occurance
    FROM (
        SELECT CAST(eff_dt AS DATE) AS RecordDate
             , status_cd
             , nbr
             , COUNT(*) AS occurance
             , ROW_NUMBER() OVER (PARTITION BY CAST(eff_dt AS DATE)
                                  ORDER BY COUNT(*) DESC) AS RowNum
        FROM table1
        GROUP BY CAST(eff_dt AS DATE), status_cd, nbr
    ) ranked
    WHERE RowNum < 4

    If only I had this 8 years ago. :) Life is good now!

    Read the article

  • Last chance to see ... Virtualisation for Developers at NxtGenUG Cambridge, Tuesday 14th December

    - by Liam Westley
    As a farewell to 2010, I'm also saying farewell to presenting my Virtualisation for Developers and Hyper-V for Developers presentations, with a final outing at NxtGenUG in Cambridge (my first visit to a user group in The Fens). I may have some homemade nibbles and party stuff to liven up the evening, and a certain Rachel Hawley has suggested a Santa hat might be appropriate too. It's going to be a fun night. Sign-up details are available here: http://www.nxtgenug.net/ViewEvent.aspx?EventID=353 And for those of you who can't make this last outing, I am planning on converting both presentations into a series of blog posts, so the content will be available to a wider audience. If the posts don't seem to be appearing fast enough, drop me an e-mail to remind me to get on with it!

    Read the article

  • New computer - AMD with which mobo? [closed]

    - by RhZ
    I need to buy new computers for the office, all running Ubuntu 10.04 or 11.10, whichever works. I am looking at an ASUS mobo with the AMD 870 northbridge and SB850 southbridge. Can anyone tell me whether that is buggy or not? And with maybe an Athlon II X4 640 processor. At home I am running an ASUS mobo with AMD 880/SB850, which is good, although the onboard ATI video was buggy; I put an Nvidia card in as well and it's great now. But for the office I want to save cost; I don't need a killer system. Still, I need the machines to be fast and look good; I don't want to skimp on performance. Can anyone provide me with some advice about this? I will buy a custom machine, not from one of the big manufacturers. Thanks! :-)

    Read the article

  • What do I need to know about Data Structures and Algorithms in the "real" world

    - by Ray T Champion
    I just finished the data structures and algorithms course in school. I took it during the summer, so it was a 6-week course vs. a 16-week course during the regular semester. So not only was the course hard, it was really, really fast. My question is: what do I need to know about data structures in the real world? I understand what they do and how they work, for the most part, but I had a really tough time coding them; I wouldn't be able to write the code for a binary tree class or a balanced tree class from scratch... Is that bad? Should I retake it, or is knowledge of how they work sufficient, without being able to write the classes from scratch?

    Read the article

  • Ubuntu 14.04 doesn't detect my discrete GPU

    - by user258887
    I recently purchased a laptop with an Nvidia GeForce 860M, and have installed Ubuntu 14.04. On my old laptop I had 12.04, which automatically filled Additional Drivers with Nvidia drivers. But on this computer, the only thing in Additional Drivers is Qualcomm. So I manually installed the Nvidia driver, but X Server Settings doesn't seem to detect any GPU... lspci | grep VGA reports only my integrated Intel GPU, but lspci -v reports many things, including the Nvidia GPU:

    01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 860M] (rev a2)
            Subsystem: ASUSTeK Computer Inc. Device 157d
            Flags: fast devsel, IRQ 16
            Memory at ec000000 (32-bit, non-prefetchable) [size=16M]
            Memory at c0000000 (64-bit, prefetchable) [size=256M]
            Memory at d0000000 (64-bit, prefetchable) [size=32M]
            I/O ports at e000 [size=128]
            Expansion ROM at ed000000 [disabled] [size=512K]
            Capabilities: <access denied>

    I don't know what any of that means. Not sure if it's supposed to say 'access denied'... I need my GPU to do CUDA and OpenGL programming. What else can I do to figure out why this isn't working?

    Read the article

  • PowerShell One-Liners: Collections, Hashtables, Arrays and Strings

    The way to learn PowerShell is to browse and nibble, rather than to sit down to a formal five-course meal. In his continuing series on PowerShell one-liners, Michael Sorens provides Fast Food for busy professionals who want results quickly and aren't too faddy. Part 3 has as its tasty confections Collections, Hashtables, Arrays and Strings.

    Read the article

  • Experienced programmer, beginner at web design, tools for effective maintainable web design? [closed]

    - by Clinton
    I do quite a bit of programming in my work, which I'm comfortable with, but recently I've been trying to do some web design for non-work-related reasons. I've got a Drupal site up and running, and added some content, but it all looks fairly basic: a header with some content. It doesn't look particularly polished. Anyway, as an example, what I wanted to do was make some "bubbles", each with some text in them. From a programmer's point of view, say: bubble(question_text, answer_text) might expand to a box with some border, with "Question: " + question_text then "Answer: " + answer_text. Of course I'd have lots of these bubbles, but I'd like to change their look and feel in one place, so plain HTML would be a maintenance nightmare. I also want to lay them out on the screen in some fashion. I was thinking of a mixture of JavaScript and CSS, or possibly PHP, which Drupal uses. On the other hand, I fear I might be taking a 1990s approach to this, and that there are actually tools available now that make this process a lot easier. I'm just wondering what the best approach to this sort of task is. Should I be using offline web design software and copying the code to Drupal, and if so, any recommendations? I'm sorry if my question is a bit vague, because I'm not really sure what question I should be asking. I'd appreciate it if you answer and comment, and I'll try my best to be more specific as I understand more.
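
    To illustrate the bubble(question_text, answer_text) idea: a minimal sketch in plain browser JavaScript, where the markup is generated in one function and the look and feel lives in a single CSS class (all names here are hypothetical):

        // bubble.js - one place that defines the markup of a Q&A bubble.
        function makeBubble(questionText, answerText) {
          const box = document.createElement('div');
          box.className = 'bubble'; // border, colors, spacing: defined once in CSS

          const q = document.createElement('p');
          q.textContent = 'Question: ' + questionText;

          const a = document.createElement('p');
          a.textContent = 'Answer: ' + answerText;

          box.append(q, a);
          return box;
        }

        // Usage: lay out any number of bubbles.
        document.body.append(
          makeBubble('What is Drupal?', 'A PHP-based content management system.'),
          makeBubble('What is CSS for?', 'Styling, defined in one central place.')
        );

    Changing the .bubble rule in the stylesheet then restyles every bubble at once, which is exactly the single-place maintainability being asked for.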

    Read the article

  • Possible to draw a select portion of a render target? (in XNA)

    - by TheBroodian
    I'm going to do this in reverse fashion and skip straight to the punch line, then give the back story afterward: is it possible, after drawing a scene to a RenderTarget2D, to draw only a select portion of the RenderTarget2D, if I don't want the entire thing? I'm using xTile to manage world data in my game (it's a great piece of work; colinvella [xTile's author] has made an amazing product), and for the most part it works great. xTile supports parallax effects in its layers to add wonderful depth to 2D scenes, which was great, until I implemented a dynamic split-screen system into my game. I wanted to make a co-op game that wouldn't require players to be in close proximity to each other, so I made it so that if the players separate too far apart, the singular full-screen viewport 'snaps apart' and is replaced by two split-screen viewports, which then smoothly transition to their respective player targets. The effect is pretty smooth, aside from the part where the parallax backgrounds become skewed once the viewports split, because xTile's ratio for handling parallax effects is dependent upon viewport size. This is unfortunate, because the effect would otherwise be really snazzy, but the backgrounds become pretty heavily affected when the game goes from single-viewport to multi-viewport. So colinvella suggests using render targets to record the scene at full viewport size and then drawing only a portion of it. But as far as I can tell, that isn't even possible? That being said, I've never used render targets before, so I'm still learning, hence the question here.

    Read the article

  • Why is Backbone.js a bad option in the Technology Radar 2012 of ThoughtWorks?

    - by Cfontes
    In the latest Technology Radar 2012 they state that Backbone.js has pushed too far on its MVC abstraction, and say that Knockout.js or Angular.js should be used instead. I cannot see why they think Backbone.js's model is bad; for me it's just a way to create a standard, so people can have some kind of roadmap for developing frontend JS without spaghetti code. Also, for me Angular and Knockout solve a different problem. I like both of them, but having to code all the MVC classes is something I think is kind of rework. Backbone is simple, easily extendable and fast to learn, comes with a lot of goodies and is easy to combine with other frameworks (see Knockback.js). Can anybody tell me what made it so bad in their eyes?
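
    For readers who haven't used it, here is a minimal sketch of the structure Backbone imposes, assuming Backbone and its Underscore/jQuery dependencies are already loaded (the model and view names are made up):

        // State lives in the model; DOM updates live in the view.
        var Book = Backbone.Model.extend({
          defaults: { title: '' }
        });

        var BookView = Backbone.View.extend({
          initialize: function () {
            // Re-render whenever the model changes.
            this.listenTo(this.model, 'change', this.render);
          },
          render: function () {
            this.$el.text(this.model.get('title'));
            return this;
          }
        });

        var book = new Book({ title: 'JavaScript: The Good Parts' });
        var view = new BookView({ model: book });
        $('body').append(view.render().el);
        book.set('title', 'Eloquent JavaScript'); // the view re-renders itself

    That separation of model state from view rendering is the "roadmap" the question refers to.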

    Read the article

  • Joomla Backend running slow on localhost

    - by boothe
    I made a local backup of a Joomla site a few months ago to test changes before updating the live site. Everything worked fine. Today I checked the local version after a while, but when I open the backend (/administrator) it takes a long time until the page loads. I tried different things and accidentally disconnected my network connection. After that, everything loaded as fast as before. But when I reconnect the network connection, the problem reappears. I am running Joomla 1.5.14 on XAMPP 1.7.0.

    Read the article

  • How to capture the screen in DirectX 9 to a raw bitmap in memory without using D3DXSaveSurfaceToFile

    - by cloudraven
    I know that in OpenGL I can do something like this:

    glReadBuffer( GL_FRONT );
    glReadPixels( 0, 0, _width, _height, GL_RGB, GL_UNSIGNED_BYTE, _buffer );

    and it's pretty fast; I get the raw bitmap in _buffer. When I try to do this in DirectX, assuming that I have a D3DDevice object, I can do something like this:

    if (SUCCEEDED(D3DDevice->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pBackbuffer)))
    {
        HRESULT hr = D3DXSaveSurfaceToFileA(filename, D3DXIFF_BMP, pBackbuffer, NULL, NULL);
    }

    But D3DXSaveSurfaceToFile is pretty slow, and I don't need to write the capture to disk anyway, so I was wondering if there was a faster way to do this.

    Read the article

  • Finding inspiration / help for making up (weapon) names

    - by Rookie
    I'm really bad with words, especially with English words. Currently I'm struggling to come up with good weapon names for my game; a name also needs to convey the weapon's functionality (weak/strong/fast/ballistic, etc.). For example, the best weapon in a (futuristic) game cannot just be called "Laser"; that's too boring, right? Are there any tools, websites or anything else that help with finding good names for weapons (or anything similar)? I was thinking of using scientific names, but noticed that they are really hard to write and get very long, and I also lack knowledge of the science; I only know I could use the names of subatomic particles in the weapons, for example. How do I get started with becoming good at making up names? (This could apply generally to any naming problem.)

    Read the article

  • How can I make KDE faster in Ubuntu 12.04? It's very slow

    - by Rizwan Rifan
    I installed the kubuntu-desktop package in Ubuntu 12.04 LTS, but the problem is that KDE responds very slowly. If I click on an application's icon to run it, the application appears after 10 seconds, and sometimes does not appear at all. It hangs all the time. The cursor is almost impossible to follow because of the lag. I have read on the Internet that Unity uses more memory and CPU than KDE, but on my PC Unity runs smoothly and KDE does not. So what should I do to make KDE as fast, responsive and smooth as Unity? My specifications are as follows: RAM: 1.5 GB (DDR2); Processor: 3 GHz dual core; Graphics card: Intel HD Graphics with 256 MB memory.

    Read the article

  • Bursts of white noise sound when playing videos

    - by Dave M G
    I've recently reinstalled Ubuntu, and Mythbuntu, on all my computers. On one of them, running Mythbuntu 11.10, when I play a video I get a burst of white noise (static) at the start that stays on. If I stop the video and restart it, the noise goes away. Sometimes fast-forwarding or otherwise manipulating the video will start the noise. It seems to be triggered, and stopped, by starting or interacting with the video. Any ideas as to why this is happening and how I can get rid of it?

    Read the article

  • Time tracker for lxde

    - by deshmukh
    I have only recently started using LXDE, and I am liking it. It is blazing fast, not at all resource-hungry, and just does what I want. The only thing I am missing is a time-tracker tool. I have been using the Hamster time tracker in GNOME for quite some time. In LXDE, I can still launch the application, but there are no reminders when the time limit is up, etc. The time tracker is just another window. Is there any way to get Hamster working in LXDE with notifications for time-up, an icon in the panel, etc.? Alternatively, is there another application that does all that Hamster does and WORKS in LXDE?

    Read the article

  • Oracle Streamlines Tracking of Global Carbon Footprint and Greenhouse Gas Emissions

    - by Evelyn Neumayr
    Oracle has automated its global carbon footprint and greenhouse gas emissions measurement using Oracle Environmental Accounting and Reporting. By using this solution, Oracle was able to increase organizational efficiency and reduce the need for labor-intensive, manual processes in tracking greenhouse gas (GHG) emissions for both voluntary and legislated environmental reporting. The move to Oracle Environmental Accounting and Reporting enables Oracle to more effectively meet both internal and governmental reporting needs, while addressing the associated economic mandates for reporting emissions and sustainability efforts. Organizations across the company can now record environmental data such as energy consumed or energy generated at facilities or locations within the enterprise, and can automatically calculate the corresponding GHG emissions resulting from the use of emission sources. In addition, Oracle Environmental Accounting and Reporting includes data integration from multiple applications to ensure proper representation and calculation of emissions across the globe. The result is access to fast, accurate data and reporting to help the company meet its sustainability goals.

    Read the article

  • Can't login after forced shutdown

    - by user1205935
    I am on Ubuntu 12.04 and forced a shutdown (hardware switch). Now I can't log in anymore. In particular, after entering my password I get sent back to the login screen. The error message disappears too fast for me to read. I am, however, able to log in to the Guest account through the usual login screen, and able to log in to my own account on tty1 after pressing Ctrl+Alt+F1. What should I try to get back into my own account via the login screen and get back to a working desktop? Update: I can log in with gdm, but not with the (default) lightdm. So the forced shutdown seems to have broken lightdm. The question now is, how do I fix a broken lightdm?

    Read the article

  • What's the Quickest and Cheapest Solution to Set Up an Affiliate Program for an Online Product?

    - by szahn
    I have a simple HTML landing page set up for an online product I want to sell. This product is a hardcover book. I want to be able to allow other people to set up their own landing pages and make a percentage of each sale from their site. What are some good payment processors or payment gateways that make setting up an affiliate system easy and fast? Clarification: when someone purchases an item, I want the payment processor (whatever it is) to automatically route a percentage of that payment to the affiliate and the rest to the original author. Are there any payment frameworks that already do this? I've found a few sites that let you do this, but they seem to restrict you to digital purchases only. However, my site is selling a shippable product, and the affiliate system needs to support this.

    Read the article
