Search Results

Search found 23323 results on 933 pages for 'worst is better'.


  • How should I structure a site with content dependent on visitor type (not user)?

    - by Pedr
    I have a website that displays different content depending on two selections made by a visitor: whether they are a teacher or a student, and their learning level (from 4 options). Everything is public and no authentication is needed to access the content. Depending on the selection, different content is displayed across the whole site, other than a contact page and an about page. The tone of the language changes depending on whether the visitor is a student or a teacher, and the materials available on each page also change depending on the learning level; in all cases, however, the structure of the site is identical. Currently I'm using a cookie to store the visitor's selections and render the appropriate content, so I have a single set of URLs which display different content depending on the cookie, with one of the permutations as the default. I appreciate this is far from ideal, but what is the better option? Would I be better off using a distinguishing segment for each selection, for example:

        http://example.com/teacher/lv3/resources/activities
        http://example.com/teacher/lv4/resources/activities
        http://example.com/student/lv4/resources/activities

    and so on? What is the most sensible way to handle this situation?
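
    Not from the original question, but here is a minimal sketch of what the path-segment option implies server-side, assuming a /{audience}/{level}/... URL scheme; the class name, defaults, and parsing rules are invented for illustration. The two selections live in the URL, so every permutation is crawlable and bookmarkable, and a cookie is only needed to pick a default when someone requests a bare path:

        // Sketch only: resolve audience/level from a path such as /teacher/lv3/resources/activities.
        // Names and defaults are assumptions, not taken from the question.
        public class AudienceResolver {

            enum Audience { TEACHER, STUDENT }

            static final Audience DEFAULT_AUDIENCE = Audience.TEACHER;
            static final int DEFAULT_LEVEL = 3;

            static String describe(String path) {
                String[] parts = path.replaceAll("^/+", "").split("/");
                Audience audience = DEFAULT_AUDIENCE;
                int level = DEFAULT_LEVEL;
                int rest = 0;

                if (parts.length > 0 && (parts[0].equals("teacher") || parts[0].equals("student"))) {
                    audience = Audience.valueOf(parts[0].toUpperCase());
                    rest = 1;
                    if (parts.length > 1 && parts[1].matches("lv[1-4]")) {
                        level = Integer.parseInt(parts[1].substring(2));
                        rest = 2;
                    }
                }
                // Everything after the two selection segments is the shared site structure.
                String page = String.join("/", java.util.Arrays.copyOfRange(parts, rest, parts.length));
                return audience + " / level " + level + " / page: " + page;
            }

            public static void main(String[] args) {
                System.out.println(describe("/teacher/lv3/resources/activities"));
                System.out.println(describe("/student/lv4/resources/activities"));
                System.out.println(describe("/resources/activities")); // falls back to the defaults
            }
        }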

    Read the article

  • What do I need to upgrade my ThinkPad X61 Tablet hard drive?

    - by user36118
    My ThinkPad X61 Tablet is running out of space, and I would like to give it a bigger drive by cloning the old drive to a new, bigger one. What do I need to get to accomplish this? The fewer things to get, the better, of course, and the easier, the better. My system: ThinkPad X61 Tablet, running XP with the latest SP (I am OK with XP and don't want to reinstall it), no optical drive, and USB 2.0 ports (bootable, I think). Things I have: a USB 2.0 external drive housing and a 2 GB USB flash stick.

    Read the article

  • Recommendations for SSD for server and database use?

    - by Tony_Henrich
    SSDs are a relatively new technology and they are constantly improving. A lot of the posts here were written in 2009, when SSDs were less mature and not as fast, so what was recommended back then is probably out of date today because better options exist. The SSD is to hold SQL Server databases; the size is probably 128 GB. The database backs a CMS and web server, so web pages need to fetch their data and render as fast as possible. Which modern SSD is recommended for such a use? Is there an SSD better than the Intel X25-E/M in terms of performance/cost? (I am also weighing the cost of RAM + UPS (semi-persistent) versus an SSD for the same number of gigabytes. No RAID is involved.)

    Read the article

  • Change filtering method used by Firefox when zooming

    - by peak
    I often zoom in a step or two when reading long texts in Firefox, but when I do so the images become very blurry. It's not really a big deal, but when reading text in images (mathematical equations, mostly) it's a bit distracting. It seems as if the images are scaled using only bilinear interpolation; if I scale an image by the same amount in, for example, Paint.NET or Photoshop, the result is much better. Is there any way to change the filtering method used by Firefox to bicubic or another better method? I am using Firefox 3.5 on Windows, by the way.

    Read the article

  • What will ECMAScript 6 bring to the table for us?

    - by user697296
    Our company ported moderate chunks of business logic to JavaScript. We compile the code with a minifier, which further improves performance. Since the language is dynamically typed, it lends itself well to obfuscation, which occurs as a byproduct of minification. We went to great efforts to ensure it positively screams, performance-wise. We can now do what we did before faster, better, with less code, on more platforms. In summary, we are very satisfied with the current state of the language, and I personally love it, especially for its cross-platform nature. So naturally I read up a lot on the state of JavaScript compilers, performance and compatibility across as many browsers and platforms as I have time to research. The one theme which has been growing louder and louder these days is the news about ECMAScript 6. So far, what I have been able to gather is that ES6 promises a better development experience, firstly by enabling new ways to do things and secondly by reporting errors early. This sounds great for those who are still waiting for the language to meet their needs before jumping on board, but we have already jumped on board in a big way. Sure, I expect that we will have to do ongoing maintenance and feature revisions on our code through the years, and that we would obviously make use of best practices at the time. But I don't see us refactoring major portions of it to take advantage of language features that are mostly intended to boost developer productivity. I keep wondering: what impact will the language advances ultimately have on our existing, well-written, well-performing code base? Is there something I am missing? Is there something we ought to look out for? Does anyone have tips or guidance on how we should approach the ecmascript.next finalization? Should we care?

    Read the article

  • I can write code...but can't design well. Any suggestions?

    - by user396089
    I feel that I am good at writing code in bits and pieces, but my designs really suck. The question is: how do I improve my designs (and become a better designer)? I think schools and colleges do a good job of teaching people how to become good at mathematical problem solving, but let's admit that most programs taught at school are around 1,000-2,000 lines long, which means it is mostly an academic exercise that in no way reflects the complexity of real-world software (a few hundred thousand to millions of lines of code). This is why I believe that even projects like TopCoder or Project Euler won't be of much help; they might sharpen your mathematical problem-solving ability, but you might become a theoretician programmer: someone who is more interested in the nice, clean stuff and utterly uninterested in the day-to-day mundane and hairy stuff that most application programmers deal with. So my question is: how do I improve my design skills, that is, the ability to design small and medium-scale applications that will run to a few thousand lines of code? How can I learn the design skills that would help me build a better HTML editor kit, or a graphics program like GIMP?

    Read the article

  • Iterative and Incremental Principle Series 5: Conclusion

    - by llowitz
    Thank you for joining me for the final segment of the Iterative and Incremental series. During yesterday's segment I discussed iteration planning, and specifically how I planned my daily exercise (iteration) each morning by assessing multiple factors while following my overall implementation plan. As I mentioned in yesterday's blog, regardless of the type of exercise or how many increment sets I decide to complete each day, I apply the 6-minute interval sets and a timebox approach. When the 6 minutes are up, I stop the interval, even if I have more to give, saving the extra energy to apply to my next interval set. Timeboxes are used to manage iterations. Once the pre-determined iteration duration is reached (whether it is 2 weeks, 6 weeks, or somewhere in between), the iteration is complete. Iteration group items (requirements) not fully addressed, in relation to the iteration goal, are addressed in the next iteration. This approach helps eliminate the "rolling deadline" and allows the project manager to assess project progress earlier and more frequently than in traditional approaches. Not only do smaller, more frequent milestones allow project managers to better assess potential schedule risks and slips, but process improvement is encouraged. Even in my simple example, I learned after a few interval sets not to sprint uphill! Now I plan my route more efficiently to ensure that I sprint on a level surface, reducing the risk of not completing my increment. Project managers have often told me that they used an iterative and incremental approach long before OUM. An effective project manager naturally organizes project work consistent with this principle, but a key benefit of OUM is that it formalizes this approach so it happens by design rather than by chance. I hope this series has encouraged you to think about additional ways you can incorporate the iterative and incremental principle into your daily and project life. I further hope that you will share your thoughts and experiences with the rest of us.

    Read the article

  • How do you find new software to download?

    - by user63411
    It seems most of my time at the computer is lost just searching for suitable software. How do you do it? What is your way of finding software? Do you go to a forum first, or Google? Which torrent site? Which P2P program? Which website or server? My problem, and my current approach, is this: for example, I need a piece of software that offers remote access and also lets you copy and paste (drag-and-drop file management). After a few hours of searching on Google I downloaded LogMeIn Pro2, but then I saw that it is only a trial, so if I need to find an alternative I will spend another whole day. Where should I go? I am not a complete amateur, but I need a better system: a better introduction to how to find the software that suits you and where you can download it.

    Read the article

  • Opinions on choosing a switch

    - by mastercode
    I have to restructure a LAN with (currently) about 60 hosts connected. I have file servers hosted, VoIP phones, wireless APs, printers, scanners, plotters, a biometric device, two QNAP TS-412 units as file server and backup server, and a Mac Mini as the main server for almost all services that need one. For switching I have an HP V1910-24 (L2+) and two other switches that are only L2. Which switch, in your opinion, would best fit this restructuring? It has to allow a VLAN division, support inter-VLAN routing, provide better performance, and also allow future expansion. The budget is low!

    Read the article

  • Reliance on the compiler

    - by koan
    I've been programming in C and C++ for some time, although I would say I'm far from being an expert. For some time I've been using various strategies to develop my code, such as unit tests, test-driven design, code reviews and so on. When I wrote my first programs in BASIC I typed in long listings before finding they would not run, and they were a nightmare to debug. So I learnt to write a small bit and then test it. These days I often find myself repeatedly writing a small bit of code and then using the compiler to find all the mistakes. That's OK if it picks up a typo, but when you start adjusting parameter types etc. just to make it compile, you can screw up the design. It also seems that the compiler is creeping into the design process when it should only be used for checking syntax. There's a danger here of over-reliance on the compiler to make my programs better. Are there better strategies than this? I vaguely remember an article some time ago about a company developing a type of C compiler where an extra header file also specified the prototypes; the idea was that inconsistencies in the API definition would be easier to catch if you had to define it twice in different ways.

    Read the article

  • How come Core i7 (desktop) dominates Xeon (server)?

    - by grant tailor
    I have been using these performance benchmark results (http://www.cpubenchmark.net/high_end_cpus.html) to select which CPUs to use in my web server, and to my surprise it looks like Core i7 CPUs dominate the list, pushing Xeon CPUs into the bush. Why is this? Why is Intel making the Core i7 perform better than the Xeon? Are desktop CPUs supposed to perform better than server-grade Xeon CPUs? I really don't get this and would like to know what you think, or why this is so. Also, I am thinking about getting a new web server and deciding between the i7-2600 and the Xeon E3-1245. The i7-2600 is higher up in the performance benchmark, but I am thinking the Xeon E3-1245 is server grade. What do you guys think? Should I go for the i7-2600, or is the Xeon E3-1245 a server-grade CPU for a reason?

    Read the article

  • Using Subdomains for Newly Regional Company

    - by Taylord22
    The company I work for is expanding its business to new territories. I've got a lot of stabilization to do in the region/state where we're one of the most well-known companies of our kind. Currently we have 3 distinct product lines which are distinguished by 3 separate URLs. This is affecting the user flow of our site, so we'd like to clean it up before launching our products into the various regions. The business has decided to grow into 5 new states (one state consisting of one county only), none of which will feature all 3 products; our home-base state is the only one that will have all 3 products this year. My initial thought was to use subdomains to separate the regions. That way we could use a canonical tag to stabilize the root domain (which would feature home-state content and support content for all regions) and remove us from potential duplicate-content penalization; our product content will be nearly identical across the regions for the first year. I second-guessed myself by thinking that it was perhaps better to use a "[product].root/region" URL instead. And I'm currently stuck wondering whether it would not be better to build out subdomains for both products and regions, using one modifier or the other as a funnel/branding page into the other. For instance, the user lands on "region.root.com" and sees exactly what products we offer in that region (basically, a tailored landing page), while the bulk of the product content would actually live under "product.root.com/region/page". My head is spinning. While searching for similar questions I also bumped into a reference to another tag meant to be used in some cases similar to mine. I feel like there are a lot of risks involved in this subdomain strategy, but I also can't help seeing the benefits in the user flow.

    Read the article

  • PDF to Image Conversion in Java

    - by Geertjan
    In the past, I created a NetBeans plugin for loading images as slides into NetBeans IDE. That means you had to manually create an image from each slide first. So, this time, I took it a step further. You can choose a PDF file, which is then automatically converted to an image for each page, each of which is presented as a node that can be clicked to open the slide in the main window. As you can see, the remaining problem is font rendering. Currently I'm using PDFBox. Any alternatives that render fonts better? This is the createKeys method of the child factory; ideally it would be replaced by code from some other library that handles font rendering better:

        @Override
        protected boolean createKeys(List<ImageObject> list) {
            mylist = new ArrayList<ImageObject>();
            try {
                if (file != null) {
                    ProgressHandle handle = ProgressHandleFactory.createHandle(
                            "Creating images from " + file.getPath());
                    handle.start();
                    PDDocument document = PDDocument.load(file);
                    List<PDPage> pages = document.getDocumentCatalog().getAllPages();
                    for (int i = 0; i < pages.size(); i++) {
                        PDPage pDPage = pages.get(i);
                        mylist.add(new ImageObject(pDPage.convertToImage(), i));
                    }
                    handle.finish();
                }
                list.addAll(mylist);
            } catch (IOException ex) {
                Exceptions.printStackTrace(ex);
            }
            return true;
        }

    The import statements from PDFBox are as follows:

        import org.apache.pdfbox.pdmodel.PDDocument;
        import org.apache.pdfbox.pdmodel.PDPage;
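
    Not from the original post, but one thing that often improves text sharpness regardless of library is rendering at a higher DPI and letting the UI scale the image down. A hedged sketch, assuming the newer PDFBox 2.x API (where page rendering moved to org.apache.pdfbox.rendering.PDFRenderer); the DPI value and the standalone main are placeholders, and the ImageObject wrapping from createKeys is only alluded to in a comment:

        // Sketch only: PDFBox 2.x rendering at a higher DPI (assumes PDFBox 2.x on the classpath).
        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import org.apache.pdfbox.pdmodel.PDDocument;
        import org.apache.pdfbox.rendering.PDFRenderer;

        public class PdfToImages {
            public static void main(String[] args) throws IOException {
                File file = new File(args[0]);
                try (PDDocument document = PDDocument.load(file)) {
                    PDFRenderer renderer = new PDFRenderer(document);
                    for (int i = 0; i < document.getNumberOfPages(); i++) {
                        // Rendering at 144-300 DPI usually produces noticeably crisper text
                        // than the default resolution used by PDPage.convertToImage().
                        BufferedImage image = renderer.renderImageWithDPI(i, 144);
                        // ... wrap in ImageObject / add to the node list as in createKeys() ...
                    }
                }
            }
        }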

    Read the article

  • Creating my own PHP framework

    - by onlineapplab.com
    Disclaimer: I don't want to start any flame war, so no framework names will be mentioned. I've used quite a few of the existing PHP frameworks, and my experience in each case was similar: everything is nice at the beginning, but the moment you require something non-standard you run into a lot of problems trying to fix otherwise simple issues. In the case of frameworks following the MVC design pattern there are issues with the implementation of each layer; for example, there is a lot of coding involved in the model and data-access layers when using an ORM, and the presentation layer is not much more than pure phtml. Some frameworks use their own wrappers around existing PHP functionality, in some cases severely limiting the original functionality. Depending on the framework you can have additional problems like a lack of documentation, a slow or non-existent development cycle and, last but not least, speed. A while ago I made my own framework which, while doing its job and being used for a few different applications, no longer seems a perfect piece of coding now that I have a couple more years of PHP experience. I could write my own framework and use the additional experience I've gathered during these years to make it better; on the other hand, I'm aware that there are many better programmers working on creating and upgrading the existing frameworks. So does it make any sense at all to write my own PHP framework when there are so many possibilities to choose from?

    Read the article

  • How can I justify one technology over another? (Java over .NET)

    - by user674887
    We work at a Java/.NET company, and my team and I are planning a project for a client. One of the requirements is that the project has to be done in .NET. I've asked about this requirement, and the client said that it doesn't matter and that, if I have a good reason, we can use another technology. But I have to justify the decision. As a project manager/analyst I'm interested in doing the project in Java because:

    - The team knows Java much better, in terms of both the language and the frameworks.
    - I don't know anything about .NET technology (and maybe we would make bad decisions by thinking about things in a Java way).
    - There are other people in the company with more .NET skills, but they have other, higher-priority projects.

    From experience, I'm sure that if we use Java the project will have much more quality. But these arguments could look weak from the client's perspective. How can I justify doing the project in Java? EDIT: I'm not asking whether one technology is better than another; this is not a "technology war" question.

    Read the article

  • Hide or Show singleton?

    - by Sinker
    Singleton is a common pattern implemented in the native libraries of both .NET and Java. You will see it as such: C#: MyClass.Instance, Java: MyClass.getInstance(). The question is: when writing APIs, is it better to expose the singleton through a property or getter, or should I hide it as much as possible? Here are the alternatives, for illustrative purposes.

    Exposed (C#):

        private static MyClass instance;

        public static MyClass Instance {
            get {
                if (instance == null)
                    instance = new MyClass();
                return instance;
            }
        }

        public void PerformOperation() { ... }

    Hidden (C#):

        private static MyClass instance;

        public static void PerformOperation() {
            if (instance == null) {
                instance = new MyClass();
            }
            ...
        }

    EDIT: There seem to be a number of detractors of the Singleton design. Great! Please tell me why, and what the better alternative is. Here is my scenario: my whole application utilises one logger (log4net/log4j). Whenever the program has something to log, it utilises the Logger class (e.g. Logger.Instance.Warn(...) or Logger.Instance.Error(...) etc.). Should I use Logger.Warn(...) or Logger.Error(...) instead? If you have an alternative to singletons that addresses my concern, then please write an answer for it. Thank you :)
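
    Not from the original question, but a minimal Java sketch of the "hidden" variant for the logging scenario: a static facade that owns the singleton internally, so call sites write Log.warn(...) rather than Log.getInstance().warn(...). The class and method names are invented, and System.out stands in for the real log4j/log4net delegate:

        // A sketch of the "hidden singleton" option in Java (names are hypothetical).
        public final class Log {

            // Initialization-on-demand holder idiom: the JVM guarantees lazy,
            // thread-safe creation of the single instance.
            private static final class Holder {
                private static final Log INSTANCE = new Log();
            }

            private Log() {
                // e.g. configure the underlying logging framework here
            }

            // Callers never see the instance; they just call Log.warn(...).
            public static void warn(String message) {
                Holder.INSTANCE.write("WARN", message);
            }

            public static void error(String message) {
                Holder.INSTANCE.write("ERROR", message);
            }

            private void write(String level, String message) {
                // A real implementation would delegate to log4j/log4net here;
                // System.out keeps this sketch self-contained and runnable.
                System.out.println(level + ": " + message);
            }

            public static void main(String[] args) {
                Log.warn("disk space low");   // usage: no getInstance() at the call site
                Log.error("disk space gone");
            }
        }

    The usual argument for hiding the instance is exactly the call-site noise it removes; the usual argument against it is that a static facade is harder to swap out or inject in tests, which is why many detractors of the pattern suggest passing the logger (or any dependency) in through the constructor instead.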

    Read the article

  • Some explanation of WLAN adapter antennas and radios

    - by gert_78
    We have a desktop in a room with no wired connection, and it gets bad reception from the wireless AP. Someone (I don't remember who, of course) told me there are high-gain antennas that make it possible to get better reception (and throughput). I believe he said they are adapters with "high gain" antennas. He also told me he uses one in hotels when he has bad WLAN reception; when he connects that card, the reception and speed suddenly get a lot better. Can someone explain to me, in understandable language, what that is and what the business with high gain, dB, dBi and mW is all about? What type of card do we need? One like this or this?

    Read the article

  • How to run webcam software only when I am not home (phone is not on the LAN)?

    - by endolith
    Currently I've got cron starting Motion when I typically leave for work and killing it when I typically get home, so I can watch my cat/burglars/etc. But it would be better if it could detect when I'm actually home and disable the webcam during those times, and enable it at other times. I was thinking my presence could be detected by my Android phone joining the LAN. So something like:

    - a script that checks every few minutes whether my phone's hostname or MAC address is currently on the LAN, or
    - a Tasker script on my phone that contacts the home computer in some way (simple web server?) when it joins a certain SSID, or
    - ...

    Any better ideas or advice about how to implement one of these?
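
    Not from the original question, but a rough sketch of the first option, with every specific assumed for illustration: the phone has a fixed address (192.168.1.23 here), the Motion binary is on the PATH, and stopping it with pkill is acceptable. A shell script plus cron would do the same job with less machinery; this just shows the logic end to end:

        // Sketch: enable the webcam only while the phone is NOT reachable on the LAN.
        // Note: isReachable() may need privileges to send real ICMP pings and otherwise
        // falls back to a TCP probe; shelling out to "ping" is a common alternative.
        import java.net.InetAddress;

        public class PresenceWatcher {
            public static void main(String[] args) throws Exception {
                boolean motionRunning = false;
                while (true) {
                    boolean phoneHome = InetAddress.getByName("192.168.1.23")
                                                   .isReachable(3000); // 3 s timeout
                    if (phoneHome && motionRunning) {
                        new ProcessBuilder("pkill", "motion").start().waitFor();
                        motionRunning = false;
                    } else if (!phoneHome && !motionRunning) {
                        new ProcessBuilder("motion").start();
                        motionRunning = true;
                    }
                    Thread.sleep(5 * 60 * 1000); // check every five minutes
                }
            }
        }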

    Read the article

  • Effectively managing crontab

    - by jakenoble
    My crontab looks something like this:

        1 * * * * /var/www/cron/site1.sh > /dev/null 2>&1
        0 * * * * /var/www/cron/site2.sh > /dev/null 2>&1
        3 * * * * /var/www/cron/site3.sh > /dev/null 2>&1

    This works great and lets me place all the nasty little script calls in one place, rather than making crontab harder to read than it already is. But it fails massively when site2.sh needs one script to run once a day, another to run once a week and another to run every 5 minutes. And of course it gets worse as new scripts are added with different timings. Is there a better way? EDIT: By better I mean more manageable. Having a large crontab is not manageable, but neither is having scripts all over the place. Not necessarily a GUI.

    Read the article

  • Please Help Me Optimize This

    - by Zero
    I'm trying to optimize my .htaccess file to avoid performance issues. In my .htaccess file I have something that looks like this:

        RewriteEngine on
        RewriteCond %{HTTP_USER_AGENT} bigbadbot [NC,OR]
        RewriteCond %{HTTP_USER_AGENT} otherbot1 [NC,OR]
        RewriteCond %{HTTP_USER_AGENT} otherbot2 [NC]
        RewriteRule ^.* - [F,L]

    The first condition (bigbadbot) handles about 100 requests per second, whereas the other two conditions below it only match a few requests per hour. My question is: since the first condition handles about 99% of the traffic, would it be better to split these into two separate rulesets? For example:

        RewriteEngine on
        RewriteCond %{HTTP_USER_AGENT} bigbadbot [NC]
        RewriteRule ^.* - [F,L]

        RewriteCond %{HTTP_USER_AGENT} otherbot1 [NC,OR]
        RewriteCond %{HTTP_USER_AGENT} otherbot2 [NC]
        RewriteRule ^.* - [F,L]

    Can someone tell me what would be better in terms of performance? Has anyone ever benchmarked this? Thanks!

    Read the article

  • How to avoid tons of `instanceof` in collision detection?

    - by Prog
    Consider a simple game with 4 kinds of entities: robots, dogs, missiles and walls. Here's a simple collision-detection mechanism in pseudocode (I know it's O(n^2); that's irrelevant for this question):

        for (Entity entityA in entities) {
            for (Entity entityB in entities) {
                if (collision(entityA, entityB)) {
                    if (entityA instanceof Robot && entityB instanceof Dog)
                        entityB.die();
                    if (entityA instanceof Robot && entityB instanceof Missile) {
                        entityA.die();
                        entityB.die();
                    }
                    if (entityA instanceof Missile && entityB instanceof Wall)
                        entityB.die();
                    // .. and so on
                }
            }
        }

    Obviously this is very ugly, and it will get bigger and harder to maintain the more entities and conditions there are. One option to make this better is to have separate lists for each kind of entity, for example a Robots list, a Dogs list, etc., and then check for collisions of all robots with dogs, all dogs with walls, and so on. This is better, but I still don't think it's good. So my question is: the collision-detection system spotted a collision, now what? What is the common way to react to the collision? Should the system notify the entity itself that it collided with something and have it decide for itself how to react, e.g. entityA.reactToCollision(entityB)? Or is there some other solution?
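
    Not from the original post, but a hedged sketch of one common answer to the instanceof chain: either double dispatch (entityA.collideWith(entityB) with an overload per concrete type) or, as below, a small registry that maps ordered pairs of entity types to handlers, so all the rules live in one table. All class and method names are invented for the example:

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.BiConsumer;

        public class CollisionRules {

            interface Entity { void die(); }

            static class Robot   implements Entity { public void die() { System.out.println("robot died"); } }
            static class Dog     implements Entity { public void die() { System.out.println("dog died"); } }
            static class Missile implements Entity { public void die() { System.out.println("missile died"); } }
            static class Wall    implements Entity { public void die() { System.out.println("wall died"); } }

            // One entry per ordered pair of concrete types; the handler decides who dies.
            private final Map<String, BiConsumer<Entity, Entity>> rules = new HashMap<>();

            private static String key(Class<?> a, Class<?> b) {
                return a.getName() + "|" + b.getName();
            }

            @SuppressWarnings("unchecked") // safe: lookups use the same (a, b) key the handler was registered under
            public <A extends Entity, B extends Entity> void addRule(
                    Class<A> a, Class<B> b, BiConsumer<A, B> handler) {
                rules.put(key(a, b), (BiConsumer<Entity, Entity>) (BiConsumer<?, ?>) handler);
            }

            // Called by the collision loop instead of the instanceof chain.
            // Note: this matches exact classes only; subclasses would need their own entries.
            public void handle(Entity a, Entity b) {
                BiConsumer<Entity, Entity> handler = rules.get(key(a.getClass(), b.getClass()));
                if (handler != null) {
                    handler.accept(a, b);
                }
            }

            public static void main(String[] args) {
                CollisionRules table = new CollisionRules();
                table.addRule(Robot.class, Dog.class, (robot, dog) -> dog.die());
                table.addRule(Robot.class, Missile.class, (robot, missile) -> { robot.die(); missile.die(); });
                table.addRule(Missile.class, Wall.class, (missile, wall) -> wall.die());

                table.handle(new Robot(), new Dog());      // prints "dog died"
                table.handle(new Missile(), new Wall());   // prints "wall died"
                table.handle(new Dog(), new Wall());       // no rule registered, nothing happens
            }
        }

    The entityA.reactToCollision(entityB) idea from the question is the double-dispatch flavour of the same thing: it keeps the reaction logic inside the entities, while the table keeps it with the game rules; which reads better depends on where you think that knowledge belongs.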

    Read the article

  • Should I store my code/projects on my SSD or my secondary drive?

    - by user37467
    I just got a new box. It has an SSD for the primary drive, and a 1TB SATA for the secondary drive. I'm going to run windows and my binaries on the SSD and keep all my downloads/documents/music/etc on the secondary drive. My question is should I also keep my Visual Studio Projects and code on the SSD or keep them on the secondary drive? The faster SSD would presumably be better for compiling and indexed searches, but would it be better to keep it on the 2nd drive for a more parallel disk IO situation?

    Read the article
