Search Results

Search found 26810 results on 1073 pages for 'fixed point'.


  • How to determine keyboard variation when the manufacturer changes it

    - by Maksee
    When I decided to purchase a Toshiba Z830, I specifically noticed in photos that the keyboard was good for me (wide Enter, Left Shift, and Backspace); you can check this at images.google.com, where on most photos they're all wide. When I finally bought it (Z830-A2S), the keyboard was different: the Enter is narrow and the left Shift is "split" into Shift and backslash keys (probably 5% of the photos at images.google.com). Is it normal for manufacturers to change this during the production cycle, or can this be variation between different contractors? But the main point: is it possible to determine this from the full model name, or somewhere else, without visiting a store?

    Read the article

  • 301 redirect: Is this good or bad for 2 domains?

    - by Tim
    Since I couldn't find any appropriate answer to my specific question, I wanted to ask you. I've read a lot about the 301 redirect for moving pages and so on. A customer of mine booked a new domain last year for better search results (he included his main keyword in the domain; before, he only had a domain with his business name, which said nothing about what he does). I told him that he should do a 301 redirect so he doesn't lose his position in Google, and to redirect all new customers coming from the old domain to the new domain. After about one year in which his site had a good amount of traffic, Google's results for his keywords got worse. Since he didn't maintain his website (no new content, bad content on all pages, and so on), I assumed this was the problem. He gave his website to another company which also makes websites. They told him that this 301 redirection is very bad for his website. They removed it, and also updated his content and the template, so now he has the same meta keywords on every page (instead of the specific ones I put there before). He also removed the canonical tag which I placed there to ensure no duplicate content. What I am now afraid of is that without this redirect Google will find duplicate content and therefore kick him out of the index, which would be a nightmare, since most of his customers come via his website. I need verification of the fact that the 301 isn't bad but is in fact the correct way of working with two domains, if possible with good sources I can point him to, since he doesn't want to hear anything about this. If someone also has a few words about the keywords and the canonical tag, I would really appreciate it! Thank you very much!
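
    For reference, a 301 is nothing more than a "moved permanently" response carrying the new location. A minimal sketch of what the old domain should be doing, shown with Python's http.server purely for illustration (in practice this is usually a single Redirect or RewriteRule line in the old site's Apache/nginx config); both domain names below are placeholders:

        from http.server import BaseHTTPRequestHandler, HTTPServer

        NEW_DOMAIN = "http://www.new-keyword-domain.example"   # placeholder for the new domain

        class RedirectHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # "Moved permanently": search engines are told the content has moved
                # for good, so the old URLs' ranking signals can be passed along.
                self.send_response(301)
                self.send_header("Location", NEW_DOMAIN + self.path)
                self.end_headers()

        if __name__ == "__main__":
            # Would answer for the old domain and send every path to the new one.
            HTTPServer(("", 8080), RedirectHandler).serve_forever()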

    Read the article

  • Virtual MS Sql Server not consuming enough CPU

    - by rocketman
    We have a Windows 2008 32-bit server running as a virtual machine under ESX Server. It has 6 CPU cores of 2 GHz each and 4 GB of RAM. It's running MS SQL Server 2008 R2 only. Problem: the server is heavily loaded and responds slowly. From Windows Task Manager's point of view, it really looks overloaded, CPU-wise. However, our external "cloud manager" says it's only using 2.5 GHz worth of CPU cycles in the cluster. I/O times look "good". We have already tried to set the SQL Server number of worker threads from 0 (auto) to 256, to no effect. How do we tune the VM host, guest, or SQL to use all of its allotted resources? Does it sound possible at all?
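
    One quick thing to rule out is whether the SQL Server instance actually sees all six cores and has all of its schedulers online. A rough sketch of that check, assuming Python with pyodbc is available somewhere that can reach the instance; the server name "SQLVM01" is a placeholder:

        import pyodbc

        # Placeholder server name; assumes the classic "SQL Server" ODBC driver and a trusted connection.
        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=SQLVM01;DATABASE=master;Trusted_Connection=yes")
        cur = conn.cursor()

        # How many CPUs and schedulers does the instance itself see?
        cur.execute("SELECT cpu_count, scheduler_count FROM sys.dm_os_sys_info")
        print("cpu_count, scheduler_count:", cur.fetchone())

        # Per-scheduler status and queued work: many runnable tasks on few schedulers
        # points at CPU pressure inside SQL Server rather than at the hypervisor.
        cur.execute("""
            SELECT scheduler_id, status, current_tasks_count, runnable_tasks_count
            FROM sys.dm_os_schedulers
            WHERE scheduler_id < 255
        """)
        for row in cur.fetchall():
            print(row)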

    Read the article

  • Globacom and mCentric Deploy BDA and NoSQL Database to analyze network traffic 40x faster

    - by Jean-Pierre Dijcks
    In a fast-evolving market, speed is of the essence. mCentric and Globacom leveraged Oracle Big Data Appliance and Oracle NoSQL Database to save over 35,000 call-processing minutes daily and analyze network traffic 40x faster. Here are some highlights from the profile:

    Why Oracle

    “Oracle Big Data Appliance works well for very large amounts of structured and unstructured data. It is the most agile events-storage system for our collect-it-now and analyze-it-later set of business requirements. Moreover, choosing a prebuilt solution drastically reduced implementation time. We got the big data benefits without needing to assemble and tune a custom-built system, and without the hidden costs required to maintain a large number of servers in our data center. A single support license covers both the hardware and the integrated software, and we have one central point of contact for support,” said Sanjib Roy, CTO, Globacom.

    Implementation Process

    It took only five days for Oracle partner mCentric to deploy Oracle Big Data Appliance, perform the software install and configuration, certification, and resiliency testing. The entire process, from site planning to phase-I go-live, was executed in just over ten weeks, well ahead of the four months allocated to complete the project. Oracle partner mCentric leveraged Oracle Advanced Customer Support Services’ implementation methodology to ensure configurations are tailored for peak performance, all patches are applied, and software and communications are consistently tested using proven methodologies and best practices. Read the entire profile here.

    Read the article

  • Project Management Software / 1 maybe 2 developers

    - by Ominus
    I am looking for software that I can use to "manage" multiple projects (5-10). Here are the features I would like, but any recommendation is welcome:
    - Bug/feature tracking on a per-project basis.
    - Some way to keep all documents, diagrams, specs, and requirements in one place with the project. Better yet, a tool where all these things, or most of them, could be authored.
    - Task management during the development phase, with milestones and estimates/actuals.
    - Git integration.
    I have been doing contract work and I have been doing really well for myself as far as getting projects, but it's becoming VERY hard to manage everything in an efficient manner. I am trying to learn about best practices when it comes to software programming methodologies, and the more I read the more I realize that I am just managing these projects poorly. I am getting things done, but the more I take on the less "solid" everything is. I am afraid that if I don't get some good solid tools/practices in place I am going to do my customers and myself a disservice. The problem is that there are SO many options that it's hard to weed through them all. I was at a point today where I had decided that I would just code my own (there is some irony here)! Obviously everyone has their likes and dislikes; I would love to hear from some of you lone programmers how you manage everything, since our needs aren't exactly the same as what a large team might need. I also want a solution that can scale to 2, maybe 3, developers if I end up hiring some people to help with my workload. Thanks again for your usual insights!

    Read the article

  • Impact Earth Lets You Simulate Asteroid Impacts

    - by Jason Fitzpatrick
    If you’re looking for a little morbid simulation to cap off your Friday afternoon, this interactive asteroid impact simulator makes it easy to see the results of asteroid impacts big and small. The simulator is the result of a collaboration between Purdue University and Imperial College London. You can adjust the size, density, impact angle, and impact velocity of the asteroid, as well as change the target from water to land. The only feature missing is the ability to select a specific location as the point of impact (if you want to know what a direct strike on Paris would yield, for example, you’ll have to do your own layering). Once you plug all that information in, you’re treated to a little 3D animation as the simulator crunches the numbers. After it finishes you’ll see a breakdown of a variety of effects, including the size of the crater, the energy of the impact, seismic effects, and more. Hit up the link below to take it for a spin. Impact Earth [via Boing Boing]
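
    For the curious, the headline energy figure comes from nothing more exotic than kinetic energy. A back-of-the-envelope sketch in Python, with arbitrary example values rather than the simulator's defaults:

        import math

        def impact_energy_megatons(diameter_m, density_kg_m3, velocity_m_s):
            # Kinetic energy of a spherical impactor, converted to megatons of TNT.
            radius = diameter_m / 2.0
            volume = (4.0 / 3.0) * math.pi * radius ** 3      # m^3
            mass = density_kg_m3 * volume                     # kg
            energy_joules = 0.5 * mass * velocity_m_s ** 2
            return energy_joules / 4.184e15                   # 1 megaton of TNT = 4.184e15 J

        # Example (arbitrary values): a 100 m stony asteroid (~3000 kg/m^3) at 17 km/s
        print(round(impact_energy_megatons(100, 3000, 17_000), 1), "Mt TNT")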

    Read the article

  • Worker processes not starting in IIS 7.5. What should I check?

    - by locster
    I have a Windows 7 machine (Windows version 6.1.7601 SP1, Build 7601) with IIS installed. At some point the installation appears to have become 'corrupted' in some way, as any requests are now met with the message: Service Unavailable. HTTP Error 503. The service is unavailable. In IIS Manager, IIS is started and the app pool I am using reports itself as 'Started', yet there is no w3wp.exe process listed in the process list in Task Manager (I am a local admin and have clicked the 'Show processes from all users' button). I have enabled logging for the web site (at the default location of %SystemDrive%\inetpub\logs\LogFiles), but this folder is empty. I am assuming that this log output is written by w3wp.exe as it handles requests (no w3wp.exe, no log file?). Presumably there is another layer of request handling that is responsible for starting the worker processes; does this layer have log files I can check, and/or can I uninstall/re-install that layer? Thanks.

    Read the article

  • Use System Restore to rescue lost user profile in Win XP?

    - by im_chc
    Hi! My Windows XP account profile has recently been "reset". Many app settings are lost. For example, the "recent projects" list in VS 2005 is empty. There is probably lots of other stuff that is painfully lost without me even knowing! What can I do? Can I retrieve the app settings from System Restore? I don't have much confidence in this utility, even though I think that restoring to a point when the profile still worked, then backing up the C:\Documents and Settings folder (is that where all the app settings files are located?), should work... Is it reliable to restore to a previous restore point and then go back to the latest restore point? I've googled System Restore, and it looks like what the utility does is just back up some physical files and restore them when doing a System Restore. That sounds quite safe, but I am still uncomfortable with this. Thanks in advance for your help!

    Read the article

  • How can I setup a Firewall without NAT?

    - by SRobertJames
    We have 16 IP addresses from our ISP, and are setting up a SonicWall Firewall. I'd like to have the SonicWall do NAT for the LAN, but act as a firewall only (no NAT) for the servers which are using some of the 16 addresses. How do I set this up? If I set the WAN's subnet to include the 16 IPs, the SonicWall won't route the traffic to the LAN interface. Should I set the WAN subnet to only include the ones we are dedicating for NAT, and then keep the others on the LAN? Related point: How can I set multiple IP addresses for a SonicWall LAN interface?

    Read the article

  • What is the best way to back up dedicated web server? (Amanda versus Rsync)

    - by Scott
    Hello everyone, I am trying to establish valid backups for my web server. It is a Linux box running CentOS. I have asked around, and "rsync" was suggested by some of the Server Fault community. However, my coworker says that this really only moves over the physical files and isn't really a usable "snapshot." He suggested using "Amanda", saying that it does full server snapshots, which are more what I am accustomed to. I know at my company we have virtual machines that we take snapshots of, and we can restore everything back to just as it was with little effort and little downtime. Is this possible with rsync? Or would I need to create a new server and then migrate the files back and do various configurations? I think I prefer being able to just reset everything to a point in time. Forgive my ignorance; backups are something that I have never really had to worry about before.
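
    For context on the rsync side of the question: rsync on its own can get surprisingly close to "point in time" snapshots of the filesystem by using its --link-dest option, which hard-links unchanged files against the previous run. A rough sketch of the idea in Python (paths and host are placeholders, and this only covers files, not the bare-metal restore that Amanda-style tools aim for):

        import datetime
        import os
        import subprocess

        SOURCE = "root@webserver:/var/www/"    # placeholder source
        DEST_ROOT = "/backups/webserver"       # placeholder destination

        def snapshot():
            stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M")
            dest = os.path.join(DEST_ROOT, stamp)
            latest = os.path.join(DEST_ROOT, "latest")
            cmd = ["rsync", "-a", "--delete"]
            if os.path.exists(latest):
                # Hard-link unchanged files against the previous snapshot, so each
                # dated directory looks like a full copy but costs little extra space.
                cmd.append("--link-dest=" + latest)
            cmd += [SOURCE, dest]
            subprocess.run(cmd, check=True)
            if os.path.lexists(latest):        # repoint "latest" at the new snapshot
                os.remove(latest)
            os.symlink(dest, latest)

        if __name__ == "__main__":
            snapshot()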

    Read the article

  • Powershell Foreach-Object with if statement not working - help!

    - by Dmart
    I have a batch file that takes a computer name as its argument and will install the Java JRE on it remotely. Now I'm trying to run this PowerShell script to repeatedly call the batch file and install Java on any system that it finds without the latest version. It seems to run error-free, but the statements inside the if block never seem to run, even when the if conditional evaluates to true. Can anyone look at this script and point out what I'm possibly missing? I'm using the Quest AD cmdlets and the BSOnPosh module. Thank you.

        get-qadcomputer -sizelimit 0 -name mypc* -searchroot 'OU=MyComputers,DC=MyDomain,DC=lcl' |
            test-host -property name |
            ForEach-Object -process {
                $targnm = $_.name
                $tststr = reg query "\\$targnm\HKLM\SOFTWARE\JavaSoft\Java Runtime Environment" /v Java6FamilyVersion
                if (-not ($tststr | select-string -SimpleMatch '1.6.0_20')) {
                    $mssg = "Updating to JRE 6u20 on $targnm"
                    Out-Host $mssg
                    Out-File -filepath c:\install_jre_log.txt -inputobject $mssg -Append
                    cmd /c \\server\apps\java\installjreremote.cmd $targnm
                } else {
                    $mssg = "JRE 6u20 found on $targnm"
                    Out-Host $mssg
                    Out-File -filepath c:\install_jre_log.txt -inputobject $mssg -Append
                }
            }

    Read the article

  • What does an ACPI BIOS configure during boot?

    - by RJSmith92
    When a PC boots with an ACPI BIOS, what exactly does it do? I understand that the point of ACPI is to allow the OS to control hardware resources and power management, but before the OS is loaded, does ACPI configure just the devices needed to boot and then let the OS configure the rest? If the OS wants to re-assign hardware resources, does it store this information in the ACPI tables so that the next time the system is booted they are assigned how the OS wants? The ACPI driver asks the PCI bus driver (Pci.sys) to enumerate devices on its bus once the OS is loaded, so how are these devices configured while the PC is booting, when it doesn't have the other bus drivers? Any help with any of the above questions would be greatly appreciated. Thanks.

    Read the article

  • What tools and knowledge do I need to create an application which generates bespoke automated e-mails? [on hold]

    - by Seraphina
    I'd like some suggestions as to how best to go about creating an application which can generate bespoke automated e-mails, i.e. send a personalized reply to a particular individual, interpreting the context of the message as intelligently as possible... (This is perhaps too big a question to be under one title?) What would be a good starting point? What concepts do I need to know? I'd imagine that the program needs to be able to trawl through e-mails as and when they come in, and search for keywords in the e-mail content, in order to write an appropriate reply. So there needs to be some form of automated response embedded in the code. Machine learning and databases come to mind here, as I'm aware that Google already incorporates machine learning in Gmail, etc. It is quite tricky to google the above topic and find the perfect tutorial, but there are some interesting articles and papers out there: "Machine Learning in Automated Text Categorization" (2002) by Fabrizio Sebastiani, Consiglio Nazionale delle Ricerche. However, this is not exactly a quick-start guide. I intend to add to this question, and no doubt other questions will spark off this one. I look forward to suggestions.
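
    Before any machine learning, the plumbing itself is small. A deliberately simple sketch of the polling-and-keyword idea in Python (host names, credentials, and the keyword table are all placeholders; a text classifier such as those surveyed by Sebastiani would later replace the keyword lookup, not this plumbing):

        import email
        import imaplib
        import smtplib
        from email.message import EmailMessage

        IMAP_HOST, SMTP_HOST = "imap.example.com", "smtp.example.com"   # placeholders
        USER, PASSWORD = "me@example.com", "app-password"               # placeholders

        CANNED = {   # keyword -> canned reply; a classifier would replace this lookup
            "invoice": "Thanks, your invoice has been received and is being processed.",
            "support": "Thanks for contacting support; we will reply within one business day.",
        }

        def check_and_reply():
            imap = imaplib.IMAP4_SSL(IMAP_HOST)
            imap.login(USER, PASSWORD)
            imap.select("INBOX")
            _, data = imap.search(None, "UNSEEN")        # only messages not yet seen
            for num in data[0].split():
                _, msg_data = imap.fetch(num, "(RFC822)")
                msg = email.message_from_bytes(msg_data[0][1])
                subject = (msg["Subject"] or "").lower()
                body = next((text for key, text in CANNED.items() if key in subject), None)
                if body is None:
                    continue
                reply = EmailMessage()
                reply["To"], reply["From"] = msg["From"], USER
                reply["Subject"] = "Re: " + (msg["Subject"] or "")
                reply.set_content(body)
                with smtplib.SMTP_SSL(SMTP_HOST) as smtp:
                    smtp.login(USER, PASSWORD)
                    smtp.send_message(reply)
            imap.logout()

        if __name__ == "__main__":
            check_and_reply()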

    Read the article

  • Force Windows 8 to search indexed files

    - by Hrvoje
    When using file search in Windows 8 (Win+F) I don't get the expected results. For example, I installed VLC; it's in the Program Files (x86) folder, and that folder is selected for indexing. Searching for files (Win+F) gives 0 results. If I pin that exe to Start, then it's found, but I don't want to do this; that's not the point. Where does it search for files? Is there any way to specify search locations? It doesn't use the Indexing Options settings, or at least it seems that way. Also, searching from an Explorer window is kind of slow: I tried entering VLC.EXE in the search box (when in the c:\ root), and it takes some time to give correct results. It works, but it looks like it doesn't use indexing and instead scans all files/folders, which is slow.

    Read the article

  • Grub2 attempting to boot hd1 when it should boot hd0

    - by JoBu1324
    I'm attempting to perform a "normal" install on a USB3 SSD (I don't know if it is noteworthy, but I don't have a swap partition). The installation proceeds normally (I'm installing from a USB2 device I created using LiLi Boot, with a copy of Ubuntu 12.10 64-bit that I downloaded directly from the source). The system I'm running Ubuntu on has had a more traditional installation of Ubuntu running on it without issue (also 12.10), so I know that everything works A-OK when booting from a 7200 RPM internal disk. There are a number of oddities that I've noticed so far, including graphics corruption, but the first and most pressing issue is that GRUB 2 refuses to recognize the correct hd. From /boot/grub/grub.cfg:

        if [ x$feature_default_font_path = xy ] ; then
          font=unicode
        else
          insmod part_msdos
          insmod ext2
          set root='hd1,msdos1'
          if [ x$feature_platform_search_hint = xy ]; then
            search --no-floppy --fs-uuid --set=root --hint-bios=hd1,msdos1 --hint-efi=hd1,msdos1 --hint-baremetal=ahci1,msdos1 b58ee4f7-d41d-400a-b7b8-18bd1f0ae9d3
          else
            search --no-floppy --fs-uuid --set=root b58ee4f7-d41d-400a-b7b8-18bd1f0ae9d3
          fi
          font="/usr/share/grub/unicode.pf2"
        fi

    This is from a 100% fresh install of Linux (first boot), which was installed while no hard drives were connected to the system other than the USB2 LiLi drive. The system refuses to boot unless I change hd1,msdos1 to hd0,msdos1 in the GRUB menu at boot, even though it is the only disk device connected to the PC. What options are left for me to troubleshoot this issue? I've been racking my brains and taxing the internet trying to dig up something on this problem, but now I'd like to see if the Ubuntu community can rise to the challenge and help me fix this boot problem. This is the second time I've attempted this particular setup. The first time, after days of wasted time, I managed to get it to boot every other boot - i.e. every even boot it would boot into Ubuntu like it was happy; every odd boot it would boot into the BusyBox or GRUB prompt. At one point it complained that it couldn't find /dev/disk/by-uuid/[the disk], which I found most perplexing, since the disk was there and booted before and after the occurrence (with intervention).

    Read the article

  • LINQ to Twitter Maintenance Feedback

    - by Joe Mayo
    Originally posted on: http://geekswithblogs.net/WinAZ/archive/2013/06/16/linq-to-twitter-maintenance-feedback.aspx

    It’s always fun to receive positive feedback on your work. If you receive a sufficient amount of positive feedback, you know you’re doing something right. Sometimes, people provide negative feedback too. There are a couple of ways to handle it: come back fighting or engage for clarification. The way you handle the negative feedback depends on what your goals are.

    Feedback Approaches

    If you know the feedback is incorrect and you need to promote your idea or product, you might want to come back fighting. The feedback might just be comments by a troll or competitor wanting to spread FUD. However, this could be the totally wrong approach if you misjudge the source and intentions of the feedback. In a lot of cases, feedback is a golden opportunity. Sometimes a problem exists that you either don’t know about or don’t realize the true impact of. If you decide to come back fighting, you might lose the opportunity to learn something new. However, if you engage the person providing the feedback, looking for clarification, you might learn something very important. Negative feedback and its clarification can lead to the collection of useful and actionable data.

    In my case, something that prompted this blog post, I noticed someone who tweeted a negative comment about LINQ to Twitter. Normally, any less-than-stellar comments are from folks that need help, so I help if I can. This was different. It was like “Don’t use LINQ to Twitter”. This is an open source project, the comment didn’t come from a competing project, and it sounded more like an expression of frustration. So I engaged. Not only did the person respond, but I got some decent-quality feedback. What’s also interesting is that a couple of other side conversations sprouted on the subject, which gave me more useful data.

    LINQ to Twitter Thread

    Actions

    Essentially, this particular issue centered around maintenance. There are actually several sub-issues at play here: dependencies, error handling, debugging, and visibility. I’ll describe each one and my interpretation.

    Dependencies

    Dependencies are where a library has references to other libraries. This means that when you build your application, you need DLLs for the entire dependency graph for your application. There are several potential problems with this, including more libraries for configuration management, potential versioning mismatches, and lack of cross-platform support. In the early days of LINQ to Twitter, I allowed developers to contribute and add dependencies, but it became very problematic (for the reasons stated). It was like a ball and chain that kept me from moving forward. So I refactored and pulled other open source into my project to eliminate external dependencies. This lets me fix the code in my project without relying on someone else to upgrade or fix their DLL. The motivation for this came from early negative feedback that I translated into important data and acted on. Today, LINQ to Twitter has zero dependencies. Note: rejecting good code from community members who worked hard to make your project better is a painful experience in itself. I have to point out that no contribution was in vain, because they had a positive influence on my subsequent refactoring, which resulted in a better developer experience.

    Error Handling

    Error handling has been a problem in the past. I have this combination of supporting both synchronous and asynchronous (APM) processing that can be complex at times. Within the last 6 months, I did a fair amount of refactoring to detect errors and process them properly. I also refactored TwitterQueryException so it includes important data from Twitter. During this refactoring, I’ve made breaking changes that I felt would improve the development experience (small things like renaming a callback property to Exception, rather than Error). I think the async error handling is much better than it was a year ago. For all the work I’ve done, there is more to do. I think that a combination of more error handling support, e.g. improving semantics, and education through documentation and samples will improve the error handling story. Because of what I’ve done so far, it isn’t bad, but I see opportunities for improvement.

    Debugging

    Debugging can be painful. Here’s why: you have multiple layers of technology to navigate to figure out where the real problem is: the Twitter API, security, HTTP, LINQ to Twitter, and the application. You can probably add your own nuances to that list, but the point is that debugging in this environment can be complex. I think that my plans for error handling will contribute to making the debugging process easier. However, there’s more I can do in the way of documentation and guidance. Some of the questions to be answered revolve around, when something goes wrong, how the developer figures out that there is a problem, what the problem is, and what to do about it. One example that has gone a long way toward helping LINQ to Twitter developers is the 401 FAQ. A 401 Unauthorized is the error that the Twitter API returns when a user isn’t able to authenticate, and it is one of the most difficult problems faced by LINQ to Twitter developers. What I did was read guidance from Twitter and collect techniques from my own development and from helping other developers, to compile an extensive list of reasons for the 401 and ways to fix the problem. At one time, over half of the questions I answered in the forums were to help solve 401 issues. After publishing the 401 FAQ, I rarely get a 401 question, and when I do it’s because the person didn’t know about the FAQ. If the person is too lazy to read the FAQ, that’s not my issue, but the results in support issues have been dramatic. I think debugging can benefit from the education and documentation approach, but I’m always open to suggestions on whatever else I can do.

    Visibility

    Visibility is a nuance of the error handling/debugging discussion but is deeply rooted in comfort and control. The questions to ask in this area are what is happening as my code runs and how testable the code is. In support of these areas, LINQ to Twitter does have logging and TwitterContext properties that help you see what’s happening on requests. The logging functionality allows any developer to connect a TextWriter to the Log property of TwitterContext to see what’s happening. Further, TwitterContext has a Headers property to see the headers Twitter returns and a RawResults property to show the JSON string Twitter returns. From a testing perspective, I’ve been able to write hundreds of unit tests, over 600 when this post is published, and growing. If you write your own library, you have full control over all of these aspects. The tradeoff here is that while you have access to the LINQ to Twitter source code and can modify it for all the visibility you want, LINQ to Twitter *will* change (which is good) and you will have to figure out how to merge that with your changes (which is hard). The fact is that this is a limitation of any 3rd-party library, not just LINQ to Twitter. So it’s a design decision where the tradeoff is between control and productivity. That said, there are things I can do with LINQ to Twitter to make the visibility story more compelling. I think there are opportunities to improve diagnostics. This would be a ton of work because it would need to provide multi-level logging that can be tuned for production and support any logging provider you want to attach. I’ve considered approaches such as how the new Semantic Logging application block connects to Windows Error Reporting as a potential target. Whatever I do would need to be extensible without creating native external dependencies, e.g. consider how many 3rd-party libraries force a dependency on a logging framework that you don’t use. So this won’t be an easy feat, but I believe it can be part of the roadmap. I think that a lot of developers are unaware of existing visibility features, so the first step would be to provide more documentation and guidance. My thoughts are that this would lead to more feedback that will help improve this area.

    Summary

    Recent feedback highlights some of the items that are important to LINQ to Twitter developers, such as dependencies, error handling, debugging, and visibility. I know that there are maintenance issues that have been problems for LINQ to Twitter developers in the past. I’ve done a lot of work in this area, such as improving error handling, adding visibility features, and providing extensive API documentation. That said, there is more to be done to make LINQ to Twitter the best Twitter API experience available for .NET developers, and I welcome anyone’s thoughts on what I’ve written here or on new improvements. @JoeMayo

    Read the article

  • Does IE have more strict Javascript parsing than Chrome?

    - by Clay Shannon
    This is not meant to start a religio-technical browser war - I still prefer Chrome, at least for now - but: because of a perhaps Chrome-related problem with my web page (see https://code.google.com/p/chromium/issues/detail?can=2&start=0&num=100&q=&colspec=ID%20Pri%20M%20Iteration%20ReleaseBlock%20Cr%20Status%20Owner%20Summary%20OS%20Modified&groupby=&sort=&id=161473), I temporarily switched to IE (10) to see if it would also view the time value as invalid. However, I didn't even get to that point - IE stopped me in my tracks before I could get there; and I found that IE was right - it is more particular/precise in validating my code. For example, I got this from IE: SCRIPT5007: The value of the property '$' is null or undefined, not a Function object ...which was referring to this:

        <script src="/CommonLogin/Scripts/jquery-1.9.1.min.js" type="text/javascript"></script>
        <script type="text/javascript">
            // body sometimes becomes white???? with jquery 1.6.1
            $("body").css("background-color", "#405DA7");

    This line is highlighted as the culprit: $("body").css("background-color", "#405DA7"); jQuery is referenced right above it - so why did it consider '$' to be undefined, especially when Chrome had no problem with it... ah! I looked at that location (/CommonLogin/Scripts/) and saw that, sure enough, the version of jQuery there was actually jquery-1.6.2.min.js. I added the updated jQuery file (1.9.1) and it got past this. So now the question is: why does Chrome ignore this? Does it download the referenced version from its own CDN if it can't find it in the place you specify? IE did flag other errors after that, too; so I'm thinking perhaps IE is better at catching lurking problems than, at least, Chrome is. Haven't tested Firefox yet.

    Read the article

  • Expanding RAID-5

    - by Garry
    I'm new to RAID and am trying to get my head around things. I have owned a Drobo in the past (which I liked), but it failed. Here's a hypothetical scenario: assume I set up a RAID-5 array consisting of four 1TB hot-swappable 2.5" SATA drives. I name this volume 'My Data'. By my calculations, that would give me 2.7TB of usable space and the ability to recover if a single drive fails. I have a few questions:
    1. What happens if I pull out a single 1TB drive and replace it with a 2TB drive? Would the array automatically rebuild itself with no issues? Would the maximum capacity remain 2.7TB?
    2. If number (1) above is true and the array rebuilds itself with three 1TB drives and one 2TB drive, what would happen if I then pulled another 1TB drive out and stuck in a 2TB drive (you can see where I'm going here, can't you)? Would I eventually be able to gain more storage by gradually adding bigger drives?
    3. From a practical point of view, how much input is required from me as the end user whilst these drives are being pulled out and put in? On the Drobo, the storage space just automagically handles itself. Would I have to be actively involved in telling Ubuntu what was going on, or would any of it be automated?
    Thanks in advance,
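
    For the capacity part of the question, the usual RAID-5 rule of thumb is that usable space is (number of drives - 1) times the size of the smallest member, which is why swapping in a single bigger drive normally buys nothing until every member has been upgraded and the array is grown. A tiny sketch:

        # RAID-5 rule of thumb: usable space = (number of drives - 1) * smallest member.
        def raid5_usable_tb(drive_sizes_tb):
            return (len(drive_sizes_tb) - 1) * min(drive_sizes_tb)

        print(raid5_usable_tb([1, 1, 1, 1]))   # 3 TB raw (roughly the 2.7 "TB"/TiB in the question)
        print(raid5_usable_tb([2, 1, 1, 1]))   # still 3: the extra 1 TB sits idle
        print(raid5_usable_tb([2, 2, 2, 2]))   # 6 TB once every drive is replaced and the array is grown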

    Read the article

  • Web application interacting bi-directionally with a server program?

    - by Roelof Berkepeis
    I want to write a web application to play chess against the engine Crafty. I'm not new to PHP and JavaScript, but I must learn how to interact with a server process: how can a web application and/or (jQuery) Ajax interact bi-directionally with a (Linux) program running on the server? At the moment I am developing on an (Apache) localhost. Crafty is installed on my Ubuntu PC. This well-known chess engine has no GUI; it runs in a terminal via the command $ /usr/games/crafty, and so you can play chess against it and even see its calculations. I can make Crafty run from PHP using the functions proc_open() or exec(), and most documentation I found states that the output stream should be a file... But I think I don't want such a setup, because then the web page would have to constantly poll that file (e.g. by Ajax) to see if new data was appended, right? How can Crafty talk to the web page directly, saying "I have calculated another variation" or "I have decided on a move", etc., then display this info on the web page and let the user give some counter-move, just like in the terminal? Isn't it possible to use some session / stream / listener? I have no clue at all; can anybody point me in the right direction?
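
    Whatever the transport to the browser ends up being (WebSockets, Server-Sent Events, or long polling), the server side usually boils down to keeping one long-running engine process per game and streaming its stdout as it thinks. A rough sketch of just that plumbing, shown in Python for brevity (the same idea applies to PHP's proc_open with pipes); it assumes Crafty accepts moves typed on stdin the same way it does in a terminal:

        import subprocess

        # Keep ONE long-running engine process per game; write moves to stdin and
        # read stdout line by line as the engine thinks. A WebSocket / SSE endpoint
        # would push these lines to the browser instead of the browser polling a file.
        engine = subprocess.Popen(
            ["/usr/games/crafty"],
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            text=True,
            bufsize=1,                 # line-buffered, so output arrives as it is produced
        )

        def send(command):
            engine.stdin.write(command + "\n")
            engine.stdin.flush()

        send("e4")                     # assumption: moves are typed to Crafty just like in a terminal
        for _ in range(40):            # only show the first chunk of output in this sketch
            line = engine.stdout.readline()
            if not line:
                break
            print("crafty:", line.rstrip())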

    Read the article

  • Delete temporary files from batch script in xp

    - by Keith Bentrup
    I'm looking for a good batch script that would quickly find and clean all the known-safe temporary folders/files from Windows machines (as many Windows variants as possible), e.g. the Windows temp folder, all users' IE temp folders, etc. I'm fond of UI tools like CCleaner (over Cleanmgr.exe), but when I'm trying to clean several computers quickly and/or with minimal involvement, it would be nice to have a script. Plus, with a script I could chain several scripts together, maybe one to then fire up various antivirus and/or malware detectors. Does anyone have a good one, or can you point me to a good resource?
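
    Not the batch file asked for, but the same idea sketched in Python, in case a cross-machine script is acceptable; the folder list is a conservative guess covering XP and Vista-and-later layouts and should be adjusted per environment, and locked files are simply skipped:

        import glob
        import os
        import shutil

        # Assumed, conservative list of known-safe temp locations (XP and Vista+ layouts).
        TEMP_DIRS = (
            [os.environ.get("TEMP", r"C:\Windows\Temp"), r"C:\Windows\Temp"]
            + glob.glob(r"C:\Documents and Settings\*\Local Settings\Temp")
            + glob.glob(r"C:\Documents and Settings\*\Local Settings\Temporary Internet Files")
            + glob.glob(r"C:\Users\*\AppData\Local\Temp")
        )

        def clean(folder):
            for name in os.listdir(folder):
                path = os.path.join(folder, name)
                try:
                    if os.path.isdir(path):
                        shutil.rmtree(path, ignore_errors=True)
                    else:
                        os.remove(path)
                except OSError:
                    pass   # locked or permission-denied: leave it alone

        for d in TEMP_DIRS:
            if os.path.isdir(d):
                clean(d)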

    Read the article

  • Start building websites using Java

    - by Alex coady
    I'm a web developer and I've used PHP, along with MySQL, JavaScript, jQuery, etc., all of which I'm very confident with and can use at a professional level. What I want to do, however, is start using Java instead of PHP, as I'm creating sites that need to be scalable. I really need a starting point. I can program in Java and understand the language, but have never used it for the web. Links, tutorials, and examples would be a great help. I have also considered using Scala (the language Twitter is written in), but don't really understand the benefits. Please, no jargon; I need clear information that essentially takes me through the process of creating a very simple website using Java; I'm more than capable of building the site once that's done... I think. I've previously used Eclipse, so I assume I can use that. Thanks in advance!

    Read the article

  • embedded tomcat 7 behind iis 7.5 proxy ssl problems

    - by user1058410
    I'm using embedded Tomcat 7 behind an IIS 7.5 proxy server, with requests being forwarded to Tomcat with ARR. Everything works fine unless IIS is set to require SSL. Then things like links that are generated dynamically in .jsp files on Tomcat don't work right. For example, if a link is supposed to point to _https://somewhere.com:443, it will be written as _http://somewhere.com:8080 (8080 is the port Tomcat is running on). The problem seems to be that when Tomcat looks at itself to build the URL, it correctly sees that it is running on _http://somewhere.com:8080, but I need it to think otherwise. Does anybody know how to accomplish this without using SSL between IIS and Tomcat? Sorry for the underscores in front of the imaginary URLs.

    Read the article

  • How to use wget to grab copy of Google Code site documents?

    - by Alex Reynolds
    I have a Google Code project which has a lot of wiki'ed documentation. I would like to create a copy of this documentation for offline browsing, using wget or a similar utility. I have tried the following:

        $ wget --no-parent \
               --recursive \
               --page-requisites \
               --html-extension \
               --base="http://code.google.com/p/myProject/" \
               "http://code.google.com/p/myProject/"

    The problem is that links within the mirrored copy look like: file:///p/myProject/documentName Rewriting links in this way causes 404 (not found) errors, since the links point to nowhere valid on the filesystem. What options should I use instead with wget, so that I can make a local copy of the site's documentation and other pages?
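
    wget's --convert-links (-k) option is usually the first thing to try for exactly this symptom, since it rewrites links in the downloaded pages so they work for local viewing. If stray file:///p/myProject/... links still survive, a blunt post-processing pass can patch them up; a rough Python sketch (it assumes the mirror landed under ./code.google.com/p/myProject/ and that --html-extension renamed the pages to .html):

        import os
        import re

        MIRROR_ROOT = "code.google.com/p/myProject"            # assumed local mirror directory
        PATTERN = re.compile(r"file:///p/myProject/([\w\-./]+)")

        for dirpath, _, filenames in os.walk(MIRROR_ROOT):
            for name in filenames:
                if not name.endswith(".html"):
                    continue
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    html = f.read()
                # Point each absolute file:/// link at the locally saved .html page instead.
                fixed = PATTERN.sub(lambda m: m.group(1) + ".html", html)
                if fixed != html:
                    with open(path, "w", encoding="utf-8") as f:
                        f.write(fixed)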

    Read the article

  • How should a JEE application store credentials for logging in to an external system?

    - by FGreg
    I am in a situation where I have a Web Application (WAR) that is accessing a REST service provided by another application. The REST service uses Basic HTTP Authentication. So that means the application calling the REST service needs to store user credentials somehow. To further complicate things, this is an enterprise, so there are different 'regions' the application moves through which will have different credentials for the same service (think local development, development region, integration region, user test region, production, etc...) My first instinct is that the credentials should be stored by the JEE container and the application should ask the container for the credentials (probably via JNDI?). I'm beginning to read about Java Authentication and Authorization Service (JAAS) but I'm not sure if that is the appropriate solution to this problem. How should a JEE application store credentials for logging in to an external system? A few more details about my WAR. It is a Spring-Integration project that has no front-end. The container I am working with is Websphere. I am using JEE 5 and Spring 4.0.1. To this point I have not needed to consider spring-security... does this situation mean I should re-evaluate that decision?

    Read the article

  • Processor upgrade on a laptop vs. Ram upgrade. Also does ram always matter?

    - by Evan
    I have a Dell Inspiron 14R (N4110) with an Intel Core i3 and 4 GB of RAM. It runs very smoothly; however, gaming on this laptop is very limited. This is mostly because of the integrated graphics, but I have seen a computer with a Core i5, and very similar specs otherwise, run games that the N4110 cannot. That other computer has integrated graphics and 6 GB of RAM. I am wondering whether upgrading the RAM or upgrading the processor makes the most difference in performance. Which setup would get better performance: an i5 with 4 GB of RAM, or an i3 with 8 GB of RAM (both with integrated graphics)? Also, is there a certain point at which you have too much RAM for the computer to ever possibly use? For instance, is there really any difference in performance between 8 GB and 16 GB of RAM?

    Read the article
